id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
73,127,816 | https://en.wikipedia.org/wiki/Exploration-exploitation%20dilemma | The exploration-exploitation dilemma, also known as the explore-exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It is depicted as the balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which may be incomplete or misleading), while exploration involves trying out new options that may lead to better outcomes in the future at the expense of an exploitation opportunity. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making problems whose goal is to maximize long-term benefits.
Application in machine learning
In the context of machine learning, the exploration-exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance.
Multi-armed bandit methods
The multi-armed bandit (MAB) problem is a classic example of the tradeoff, and many methods have been developed for it, such as epsilon-greedy, Thompson sampling, and the upper confidence bound (UCB). See the page on MAB for details.
In more complex RL situations than the MAB problem, the agent can treat each choice as a MAB, where the payoff is the expected future reward. For example, an agent using the epsilon-greedy method would usually "pull the best lever" by picking the action with the best predicted expected reward (exploit), but with probability epsilon it picks a random action instead (explore). Monte Carlo Tree Search, for example, uses a variant of the UCB method.
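As a concrete illustration, here is a minimal Python sketch of the epsilon-greedy rule described above; the incremental value-update helper and the parameter values are illustrative assumptions rather than anything specified in this article:

```python
import random

def epsilon_greedy_action(value_estimates, epsilon=0.1):
    """With probability epsilon, explore a random arm;
    otherwise exploit the arm with the best estimated reward."""
    if random.random() < epsilon:
        return random.randrange(len(value_estimates))  # explore
    return max(range(len(value_estimates)), key=lambda a: value_estimates[a])  # exploit

def update_estimate(value_estimates, counts, action, reward):
    """Incremental sample-average update of the chosen arm's estimate."""
    counts[action] += 1
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]
```

The same pattern carries over to RL when each action's value estimate is the predicted expected future reward.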
Exploration problems
There are some problems that make exploration difficult.
Sparse reward. If rewards occur only once in a long while, then the agent might not persist in exploring. Furthermore, if the space of actions is large, then sparse rewards mean the agent is not guided by the reward toward a good direction for deeper exploration. A standard example is Montezuma's Revenge.
Deceptive reward. If some early actions give immediate small reward, but other actions give later large reward, then the agent might be lured away from exploring the other actions.
Noisy TV problem. If certain observations are irreducibly noisy (such as a television showing random images), then the agent might be trapped exploring those observations (watching the television).
Exploration reward
The exploration reward (also called exploration bonus) methods convert the exploration-exploitation dilemma into a balance of exploitations. That is, instead of trying to get the agent to balance exploration and exploitation, exploration is simply treated as another form of exploitation, and the agent simply attempts to maximize the sum of rewards from exploration and exploitation. The exploration reward can be treated as a form of intrinsic reward.
We write these as $r^{\mathrm{int}}_t$ and $r^{\mathrm{ext}}_t$, meaning the intrinsic and extrinsic rewards at time step $t$.
However, exploration reward is different from exploitation in two regards:
The reward of exploitation is not freely chosen, but given by the environment, while the reward of exploration may be picked freely. Indeed, there are many different ways to design the exploration reward, described below.
The reward of exploitation is usually stationary (i.e. the same action in the same state gives the same reward), but the reward of exploration is non-stationary (i.e. the same action in the same state should give less and less reward).
Count-based exploration uses $N_t(s)$, the number of visits to a state $s$ during the time-steps $1, \dots, t$, to calculate the exploration reward. This is only possible in a small and discrete state space. Density-based exploration extends count-based exploration by using a density model $\rho(s)$. The idea is that, if a state has been visited, then nearby states are also partly visited.
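A minimal sketch of a count-based bonus in Python; the 1/sqrt(N) decay used here is one common choice, assumed for illustration, and the hashable-state representation is a simplification that only works for small discrete state spaces:

```python
from collections import defaultdict
import math

class CountBasedBonus:
    """Exploration reward that shrinks as a state is revisited,
    making the intrinsic reward non-stationary by design."""
    def __init__(self, scale=1.0):
        self.visit_counts = defaultdict(int)  # N_t(s)
        self.scale = scale

    def reward(self, state):
        self.visit_counts[state] += 1
        return self.scale / math.sqrt(self.visit_counts[state])
```

A density-based variant would replace `visit_counts` with pseudo-counts derived from a learned density model, so that visiting one state also raises the effective count of nearby states.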
In maximum entropy exploration, the entropy of the agent's policy is included as a term in the intrinsic reward. That is, the total reward is $r_t = r^{\mathrm{ext}}_t + \beta H(\pi(\cdot \mid s_t))$ for some weight $\beta > 0$.
Prediction-based
The forward dynamics model is a function $f$ for predicting the next state based on the current state and the current action: $\hat{s}_{t+1} = f(s_t, a_t)$. The forward dynamics model is trained as the agent plays, and it becomes better at predicting state transitions for state-action pairs that have been performed many times.
A forward dynamics model can define an exploration reward by $r^{\mathrm{int}}_t = \| f(s_t, a_t) - s_{t+1} \|^2$. That is, the reward is the squared error of the prediction compared to reality. This rewards the agent for performing state-action pairs that have not been tried many times. It is, however, susceptible to the noisy TV problem.
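A sketch of this prediction-error bonus in Python with PyTorch; the network architecture, optimizer, and train-on-every-transition scheme are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ForwardDynamicsBonus(nn.Module):
    """Intrinsic reward = squared error of a learned model f(s, a) -> s'."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def reward_and_train(self, state, action, next_state):
        # action is assumed to be a vector (e.g. one-hot for discrete actions)
        pred = self.model(torch.cat([state, action], dim=-1))
        error = ((pred - next_state) ** 2).sum(dim=-1)  # exploration reward
        self.opt.zero_grad()
        error.mean().backward()  # the model improves on frequently seen pairs
        self.opt.step()
        return error.detach()
```

Because the prediction error never falls on irreducibly random observations, this exact scheme is the one that suffers from the noisy TV problem mentioned above.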
The dynamics model can also be run in latent space. That is, $\hat{\phi}(s_{t+1}) = f(\phi(s_t), a_t)$ for some featurizer $\phi$. The featurizer can be the identity function ($\phi = \mathrm{id}$), randomly generated, the encoder half of a variational autoencoder, etc. A good featurizer improves forward dynamics exploration.
The Intrinsic Curiosity Module (ICM) method trains a forward dynamics model and a featurizer simultaneously. The featurizer is trained by an inverse dynamics model, which is a function for predicting the current action based on the features of the current and the next state: $\hat{a}_t = g(\phi(s_t), \phi(s_{t+1}))$. By optimizing the inverse dynamics, both the inverse dynamics model and the featurizer are improved. Then, the improved featurizer improves the forward dynamics model, which improves the exploration of the agent.
The Random Network Distillation (RND) method attempts to solve the noisy TV problem by teacher-student distillation. Instead of a forward dynamics model, it has two models $f$ and $\hat{f}$. The teacher model $f$ is fixed, and the student model $\hat{f}$ is trained to minimize $\| \hat{f}(s) - f(s) \|^2$ on visited states $s$. As a state is visited more and more, the student network becomes better at predicting the teacher. Meanwhile, the prediction error is also an exploration reward for the agent, and so the agent learns to perform actions that result in higher prediction error. Thus, we have a student network attempting to minimize the prediction error, while the agent attempts to maximize it, resulting in exploration.
The states are normalized by subtracting a running average and dividing by a running variance, which is necessary since the teacher model is frozen. The rewards are normalized by dividing by a running variance.
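A sketch of the RND teacher-student pair in Python with PyTorch; the network shapes are assumptions, and the running state/reward normalization described above is omitted for brevity:

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """A frozen random teacher and a trained student;
    the student's prediction error is the exploration reward."""
    def __init__(self, state_dim, feat_dim=32):
        super().__init__()
        def make_net():
            return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        self.teacher = make_net()
        for p in self.teacher.parameters():
            p.requires_grad_(False)  # the teacher stays fixed
        self.student = make_net()
        self.opt = torch.optim.Adam(self.student.parameters(), lr=1e-3)

    def reward_and_train(self, state):
        error = ((self.student(state) - self.teacher(state)) ** 2).sum(dim=-1)
        self.opt.zero_grad()
        error.mean().backward()  # student minimizes the error...
        self.opt.step()
        return error.detach()    # ...while the agent maximizes it
```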
Exploration by disagreement trains an ensemble of forward dynamics models, each on a random subset of all $(s_t, a_t, s_{t+1})$ tuples. The exploration reward is the variance of the models' predictions.
Noise
For neural-network-based agents, the NoisyNet method replaces some of the network's modules with noisy versions. That is, some network parameters are random variables drawn from a probability distribution whose parameters are themselves learnable. For example, in a linear layer $y = Wx + b$, both $W$ and $b$ are sampled from Gaussian distributions at every step, and the parameters of those Gaussians are learned via the reparameterization trick.
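A sketch of such a noisy linear layer in Python with PyTorch, assuming independent Gaussian noise on every weight and bias (the initialization constants are illustrative; published variants also use factorized noise):

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer y = Wx + b whose W and b are resampled on each forward
    pass from learned Gaussians, via the reparameterization trick."""
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        bound = 1.0 / math.sqrt(in_features)
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.zeros(out_features))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma0 * bound))

    def forward(self, x):
        # Reparameterization: sample = mu + sigma * standard_normal,
        # so gradients flow to the learnable mu and sigma parameters.
        w = self.mu_w + self.sigma_w * torch.randn_like(self.sigma_w)
        b = self.mu_b + self.sigma_b * torch.randn_like(self.sigma_b)
        return nn.functional.linear(x, w, b)
```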
References
Machine learning
Strategy
Cognition | Exploration-exploitation dilemma | Engineering | 1,376 |
1,072,959 | https://en.wikipedia.org/wiki/Viking%20lander%20biological%20experiments | In 1976 two identical Viking program landers each carried four types of biological experiments to the surface of Mars. The first successful Mars landers, Viking 1 and Viking 2, then carried out experiments to look for biosignatures of microbial life on Mars. The landers each used a robotic arm to pick up and place soil samples into sealed test containers on the craft.
The two landers carried out the same tests at two places on Mars' surface, Viking 1 near the equator and Viking 2 further north.
The experiments
The four experiments below are presented in the order in which they were carried out by the two Viking landers. The biology team leader for the Viking program was Harold P. Klein (NASA Ames).
Gas chromatograph — mass spectrometer
A gas chromatograph — mass spectrometer (GCMS) is a device that separates vapor components chemically via a gas chromatograph and then feeds the result into a mass spectrometer, which measures the molecular weight of each chemical. As a result, it can separate, identify, and quantify a large number of different chemicals. The GCMS (PI: Klaus Biemann, MIT) was used to analyze the components of untreated Martian soil, and particularly those components that are released as the soil is heated to different temperatures. It could measure molecules present at a level of a few parts per billion.
The GCMS measured no significant amount of organic molecules in the Martian soil. In fact, Martian soils were found to contain less carbon than lifeless lunar soils returned by the Apollo program. This result was difficult to explain if Martian bacterial metabolism was responsible for the positive results seen by the Labeled Release experiment (see below). A 2011 astrobiology textbook notes that this was the decisive factor due to which "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil."
Experiments conducted in 2008 by the Phoenix lander discovered the presence of perchlorate in Martian soil. The 2011 astrobiology textbook discusses the importance of this finding with respect to the results obtained by Viking as "while perchlorate is too poor an oxidizer to reproduce the LR results (under the conditions of that experiment perchlorate does not oxidize organics), it does oxidize, and thus destroy, organics at the higher temperatures used in the Viking GCMS experiment. NASA astrobiologist Chris McKay has estimated, in fact, that if Phoenix-like levels of perchlorates were present in the Viking samples, the organic content of the Martian soil could have been as high as 0.1% and still would have produced the (false) negative result that the GCMS returned. Thus, while conventional wisdom regarding the Viking biology experiments still points to "no evidence of life", recent years have seen at least a small shift toward "inconclusive evidence"."
According to a 2010 NASA press release: "The only organic chemicals identified when the Viking landers heated samples of Martian soil were chloromethane and dichloromethane -- chlorine compounds interpreted at the time as likely contaminants from cleaning fluids." According to a paper authored by a team led by Rafael Navarro-González of the National Autonomous University of Mexico, "those chemicals are exactly what [their] new study found when a little perchlorate -- the surprise finding from Phoenix -- was added to desert soil from Chile containing organics and analyzed in the manner of the Viking tests." However, the 2010 NASA press release also noted that: "One reason the chlorinated organics found by Viking were interpreted as contaminants from Earth was that the ratio of two isotopes of chlorine in them matched the three-to-one ratio for those isotopes on Earth. The ratio for them on Mars has not been clearly determined yet. If it is found to be much different than Earth's, that would support the 1970s interpretation." Biemann has written a commentary critical of the Navarro-González and McKay paper, to which the latter have replied; the exchange was published in December 2011. In 2021 the chlorine isotope ratio on Mars was measured by the Trace Gas Orbiter and found to be almost indistinguishable from the terrestrial ratio, leaving the interpretation of the GCMS results inconclusive.
Gas exchange
The gas exchange (GEX) experiment (PI: Vance Oyama, NASA Ames) looked for gases given off by an incubated soil sample by first replacing the Martian atmosphere with the inert gas helium. It applied a liquid complex of organic and inorganic nutrients and supplements to a soil sample, first with just nutrients added, then with water added too. Periodically, the instrument sampled the atmosphere of the incubation chamber and used a gas chromatograph to measure the concentrations of several gases, including oxygen, CO2, nitrogen, hydrogen, and methane. The scientists hypothesized that metabolizing organisms would either consume or release at least one of the gases being measured.
In early November 1976, it was reported that "on Viking 2, the gas exchange experiment is producing analogous results to those from Viking 1. Again, oxygen disappeared once the nutrient solution came into contact with the soil. Again, carbon dioxide began to appear and still continues to evolve".
Labeled release
The labeled release (LR) experiment (PI: Gilbert Levin, Biospherics Inc.) gave the most promise for exobiologists. In the LR experiment, a sample of Martian soil was inoculated with a drop of very dilute aqueous nutrient solution. The nutrients (7 molecules that were Miller-Urey products) were tagged with radioactive 14C. The air above the soil was monitored for the evolution of radioactive 14CO2 (or other carbon-based) gas as evidence that microorganisms in the soil had metabolized one or more of the nutrients. Such a result was to be followed with the control part of the experiment as described for the PR below. The result was quite a surprise, considering the negative results of the first two tests, with a steady stream of radioactive gases being given off by the soil immediately following the first injection. The experiment was done by both Viking probes, the first using a sample from the surface exposed to sunlight and the second probe taking the sample from underneath a rock; both initial injections came back positive. Sterilization control tests were subsequently carried out by heating various soil samples. Samples heated for 3 hours at 160 °C gave off no radioactive gas when nutrients were injected, and samples heated for 3 hours at 50 °C exhibited a substantial reduction in radioactive gas released following nutrient injection. A sample stored at 10 °C for several months was later tested showing significantly reduced radioactive gas release.
A CNN article from 2000 noted that "Though most of his peers concluded otherwise, Levin still holds that the robot tests he coordinated on the 1976 Viking lander indicated the presence of living organisms on Mars." A 2006 astrobiology textbook noted that "With unsterilized Terrestrial samples, though, the addition of more nutrients after the initial incubation would then produce still more radioactive gas as the dormant bacteria sprang into action to consume the new dose of food. This was not true of the Martian soil; on Mars, the second and third nutrient injections did not produce any further release of labeled gas." The 2011 edition of the same textbook noted that "Albert Yen of the Jet Propulsion Laboratory has shown that, under extremely cold and dry conditions and in a carbon dioxide atmosphere, ultraviolet light (remember: Mars lacks an ozone layer, so the surface is bathed in ultraviolet) can cause carbon dioxide to react with soils to produce various oxidizers, including highly reactive superoxides (salts containing O2−). When mixed with small organic molecules, superoxidizers readily oxidize them to carbon dioxide, which may account for the LR result. Superoxide chemistry can also account for the puzzling results seen when more nutrients were added to the soil in the LR experiment; because life multiplies, the amount of gas should have increased when a second or third batch of nutrients was added, but if the effect was due to a chemical being consumed in the first reaction, no new gas would be expected. Lastly, many superoxides are relatively unstable and are destroyed at elevated temperatures, also accounting for the "sterilization" seen in the LR experiment."
In a 2002 paper published by Joseph Miller, he speculates that recorded delays in the system's chemical reactions point to biological activity similar to the circadian rhythm previously observed in terrestrial cyanobacteria.
A 2007 paper by Dirk Schulze-Makuch and Joop M. Houtkooper argues that the experiment may have killed potential microbes by supplying them with an excessive amount of water. Schulze-Makuch revisited the idea in an article for Big Think in 2023.
On 12 April 2012, an international team including Levin and Patricia Ann Straat published a peer reviewed paper suggesting the detection of "extant microbial life on Mars", based on mathematical speculation through cluster analysis of the Labeled Release experiments of the 1976 Viking Mission.
Pyrolytic release
The pyrolytic release (PR) experiment (PI: Norman Horowitz, Caltech) consisted of the use of light, water, and a carbon-containing atmosphere of carbon monoxide (CO) and carbon dioxide (CO2), simulating that on Mars. The carbon-bearing gases were made with carbon-14 (14C), a heavy, radioactive isotope of carbon. If there were photosynthetic organisms present, it was believed that they would incorporate some of the carbon as biomass through the process of carbon fixation, just as plants and cyanobacteria on earth do. After several days of incubation, the experiment removed the gases, baked the remaining soil at 650 °C (1200 °F), and collected the products in a device which counted radioactivity. If any of the 14C had been converted to biomass, it would be vaporized during heating and the radioactivity counter would detect it as evidence for life. Should a positive response be obtained, a duplicate sample of the same soil would be heated to "sterilize" it. It would then be tested as a control and should it still show activity similar to the first response, that was evidence that the activity was chemical in nature. However, a nil, or greatly diminished response, was evidence for biology. This same control was to be used for any of the three life detection experiments that showed a positive initial result. The initial assessment of results from the Viking 1 PR experiment was that "analysis of the results shows that a small but significant formation of organic matter occurred" and that the sterilized control showed no evidence of organics, showing that the "findings could be attributed to biological activity." However, given the persistence of organic release at 90 °C, the inhibition of organics after injecting water vapor and, especially, the lack of detection of organics in the Martian soil by the GCMS experiment, the investigators concluded that a nonbiological explanation of the PR results was most likely. However, in subsequent years, as the GCMS results have come increasingly under scrutiny, the pyrolytic release experiment results have again come to be viewed as possibly consistent with biological activity, although "An explanation for the apparent small synthesis of organic matter in the pyrolytic release experiment remains obscure."
Scientific conclusions
Organic compounds seem to be common, for example, on asteroids, meteorites, comets and the icy bodies orbiting the Sun, so detecting no trace of any organic compound on the surface of Mars came as a surprise. The GC-MS was definitely working, because the controls were effective and it was able to detect traces of chlorine, attributed to the cleaning solvents that had been used to sterilize it prior to launch. A reanalysis of the GC-MS data was performed in 2018, suggesting that organic compounds may actually have been detected, corroborating data from the Curiosity rover. At the time, the total absence of organic material on the surface made the results of the biology experiments moot, since metabolism involving organic compounds was what those experiments were designed to detect. The general scientific community surmises that the Viking biological tests remain inconclusive and can be explained by purely chemical processes.
Despite the positive result from the Labeled Release experiment, a general assessment is that the results seen in the four experiments are best explained by oxidative chemical reactions with the Martian soil. One of the current conclusions is that the Martian soil, being continuously exposed to UV light from the Sun (Mars has no protective ozone layer), has built up a thin layer of a very strong oxidant. A sufficiently strong oxidizing molecule would react with the added water to produce oxygen and hydrogen, and with the nutrients to produce carbon dioxide (CO2).
Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life.
In August 2008, the Phoenix lander detected perchlorate, a strong oxidizer when heated above 200 °C. This was initially thought to be the cause of a false positive LR result. However, results of experiments published in December 2010 propose that organic compounds "could have been present" in the soil analyzed by both Viking 1 and 2, since NASA's Phoenix lander in 2008 detected perchlorate, which can break down organic compounds. The study's authors found that perchlorate can destroy organics when heated, producing chloromethane and dichloromethane as byproducts, the identical chlorine compounds discovered by both Viking landers when they performed the same tests on Mars. Because perchlorate would have broken down any Martian organics, the question of whether Viking found organic compounds is still wide open, as alternative chemical and biological interpretations are possible.
In 2013, astrobiologist Richard Quinn at the Ames Center conducted experiments in which amino acids reacting with hypochlorite, which is created when perchlorate is irradiated with gamma rays, seemed to reproduce the findings of the labeled-release experiment. He concluded that neither hydrogen peroxide nor superoxide is required to explain the results of the Viking biology experiments. A more detailed study was conducted in 2017 by a team of researchers including Quinn. While this study was not specifically designed to match the data from the LR experiment, it was found that hypochlorite could partially explain the control results, including the 160 °C sterilization test. The authors stated "Further experiments are planned to characterize the thermal stability of hypochlorite and other oxychlorine species in the context of the LR experiments."
Controversy
Before the discovery of the oxidizer perchlorate on Mars in 2008, some theories remained opposed to the general scientific conclusion. An investigator suggested that the biological explanation of the lack of detected organics by GC-MS could be that the oxidizing inventory of the H2O2-H2O solvent well exceeded the reducing power of the organic compounds of the organisms.
It has also been argued that the Labeled Release (LR) experiment detected so few metabolising organisms in the Martian soil, that it would have been impossible for the gas chromatograph to detect them. This view has been put forward by the designer of the LR experiment, Gilbert Levin, who believes the positive LR results are diagnostic for life on Mars. He and others have conducted ongoing experiments attempting to reproduce the Viking data, either with biological or non-biological materials on Earth. While no experiment has ever precisely duplicated the Mars LR test and control results, experiments with hydrogen peroxide-saturated titanium dioxide have produced similar results.
While the majority of astrobiologists still conclude that the Viking biological experiments were inconclusive or negative, Gilbert Levin is not alone in believing otherwise. The current claim for life on Mars is grounded on old evidence reinterpreted in the light of recent developments. In 2006, scientist Rafael Navarro demonstrated that the Viking biological experiments likely lacked sensitivity to detect trace amounts of organic compounds. In a paper published in December 2010, the scientists suggest that if organics were present, they would not have been detected because when the soil is heated to check for organics, perchlorate destroys them rapidly producing chloromethane and dichloromethane, which is what the Viking landers found. This team also notes that this is not a proof of life but it could make a difference in how scientists look for organic biosignatures in the future. Results from the current Mars Science Laboratory mission and the under-development ExoMars program may help settle this controversy.
In 2006, Mario Crocco went as far as proposing the creation of a new nomenclatural rank that classified some Viking results as 'metabolic' and therefore representative of a new form of life. The taxonomy proposed by Crocco has not been accepted by the scientific community, and the validity of Crocco's interpretation hinged entirely on the absence of an oxidative agent in the Martian soil.
According to Gilbert Levin and Patricia Ann Straat, investigators of the LR experiment, no explanation involving inorganic chemistry as of 2016 is able to give satisfactory explanations of the complete data from the LR experiment, and specifically address the question of what active agent on the soil samples could be adversely affected by heating to approximately 50 °C and destroyed with long-term storage in the dark at 10 °C, as data suggest.
Critiques
Failure to examine the atmosphere
James Lovelock argued that the Viking mission would have done better to examine the Martian atmosphere than look at the soil. He theorised that all life tends to expel waste gases into the atmosphere, and as such it would be possible to theorise the existence of life on a planet by detecting an atmosphere that was not in chemical equilibrium. He concluded that there was enough information about Mars' atmosphere at that time to discount the possibility of life there. Since then, methane has been discovered in Mars' atmosphere at 10 ppb, thus reopening this debate. Although in 2013 the Curiosity rover failed to detect methane at its location in levels exceeding 1.3 ppb, later in 2013 and in 2014 measurements by Curiosity did detect methane, suggesting a time-variable source. The ExoMars Trace Gas Orbiter, launched in March 2016, implements this approach and will focus on detection, characterization of spatial and temporal variation, and localization of sources for a broad suite of atmospheric trace gases on Mars, helping determine whether their formation is of biological or geological origin. The Mars Orbiter Mission has also been attempting, since late 2014, to detect and map methane in Mars' atmosphere.
Studies using the Trace Gas Orbiter and Curiosity have shown that methane is only detectable at night and therefore cannot be detected by the Trace Gas Orbiter which relies on sunlight for its measurements.
Effects of landing rockets
A press commentary argued that, if there was life at the Viking lander sites, it may have been killed by the exhaust from the landing rockets. That is not a problem for missions which land via an airbag-protected capsule, slowed by parachutes and retrorockets and dropped from a height that allows rocket exhaust to avoid the surface. Mars Pathfinder's Sojourner rover and the Mars Exploration Rovers each used this landing technique successfully. The Phoenix Scout lander descended to the surface with retro-rockets; however, its fuel was hydrazine, and the end products of the plume (water, nitrogen, and ammonia) were not found to have affected the soils at the landing site.
See also
Astrobiology
Biological Oxidant and Life Detection mission
Biosignature
ExoMars
Exploration of Mars
Life on Mars
Viking program
Viking 1
Viking 2
Europa Lander (NASA) (The next NASA mission with primary science life detection)
References
Further reading
External links
Astrobiology space missions
Viking program
Extraterrestrial life | Viking lander biological experiments | Astronomy,Biology | 4,171 |
64,325,880 | https://en.wikipedia.org/wiki/Galilei-covariant%20tensor%20formulation | The Galilei-covariant tensor formulation is a method for treating non-relativistic physics using the extended Galilei group as the representation group of the theory. It is constructed in the light cone of a five dimensional manifold.
Takahashi et al., in 1988, began a study of Galilean symmetry, where an explicitly covariant non-relativistic field theory could be developed. The theory is constructed in the light cone of a (4,1) Minkowski space. Previously, in 1985, Duval et al. constructed a similar tensor formulation in the context of Newton–Cartan theory. Some other authors also have developed a similar Galilean tensor formalism.
Galilean manifold
The Galilei transformations are
$$\mathbf{x}' = R\mathbf{x} + \mathbf{v}t + \mathbf{a}, \qquad t' = t + b,$$
where $R$ stands for the three-dimensional Euclidean rotations, $\mathbf{v}$ is the relative velocity determining Galilean boosts, $\mathbf{a}$ stands for spatial translations and $b$, for time translations. Consider a free particle of mass $m$; the mass shell relation is given by $2mE - \mathbf{p}^2 = 0$.
We can then define a 5-vector,
$$p_\mu = (p_1, p_2, p_3, p_4, p_5) = (\mathbf{p}, m, E),$$
with $\mu = 1, \dots, 5$.
Thus, we can define a scalar product of the type
$$A_\mu B^\mu = g^{\mu\nu} A_\mu B_\nu = \mathbf{A} \cdot \mathbf{B} - A_4 B_5 - A_5 B_4,$$
where
$$g^{\mu\nu} = \begin{pmatrix} \mathbf{1}_{3\times 3} & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix}$$
is the metric of the space-time, and $B^\mu = g^{\mu\nu} B_\nu$. In particular, $p_\mu p^\mu = \mathbf{p}^2 - 2mE = 0$ recovers the mass shell relation.
Extended Galilei algebra
A five dimensional Poincaré algebra leaves the metric $g_{\mu\nu}$ invariant.
We can write the generators as
$$P_\mu = -i\partial_\mu, \qquad J_{\mu\nu} = i(x_\mu \partial_\nu - x_\nu \partial_\mu).$$
The non-vanishing commutation relations will then be rewritten as
$$[J_{\mu\nu}, P_\rho] = i(g_{\nu\rho} P_\mu - g_{\mu\rho} P_\nu),$$
$$[J_{\mu\nu}, J_{\rho\sigma}] = i(g_{\nu\rho} J_{\mu\sigma} + g_{\mu\sigma} J_{\nu\rho} - g_{\mu\rho} J_{\nu\sigma} - g_{\nu\sigma} J_{\mu\rho}).$$
An important Lie subalgebra is the one spanned by
$$H = P_5, \qquad P_i, \qquad K_i = J_{4i}, \qquad J_{ij}.$$
$H$ is the generator of time translations (Hamiltonian), $P_i$ is the generator of spatial translations (momentum operator), $K_i$ is the generator of Galilean boosts, and $J_{ij}$ stands for a generator of rotations (angular momentum operator). The generator $M = P_4$ is a Casimir invariant and $P_\mu P^\mu = \mathbf{P}^2 - 2MH$ is an additional Casimir invariant. This algebra is isomorphic to the extended Galilean algebra in (3+1) dimensions with $M$, the central charge, interpreted as mass, and $P_\mu P^\mu$ related to the internal energy.
The third Casimir invariant is given by $W_\mu W^\mu$, where $W_\mu$ is a 5-dimensional analog of the Pauli–Lubanski pseudovector.
Bargmann structures
In 1985 Duval, Burdet and Kunzle showed that four-dimensional Newton–Cartan theory of gravitation can be reformulated as Kaluza–Klein reduction of five-dimensional Einstein gravity along a null-like direction. The metric used is the same as the Galilean metric but with all positive entries,
$$g^{\mu\nu} = \begin{pmatrix} \mathbf{1}_{3\times 3} & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
This lifting is considered to be useful for non-relativistic holographic models. Gravitational models in this framework have been shown to precisely calculate the Mercury precession.
See also
Galilean group
Representation theory of the Galilean group
Lorentz group
Poincaré group
Pauli–Lubanski pseudovector
References
Rotational symmetry
Quantum mechanics
Representation theory of Lie groups | Galilei-covariant tensor formulation | Physics | 549 |
11,897,622 | https://en.wikipedia.org/wiki/Integrated%20design | Integrated design is a comprehensive holistic approach to design which brings together specialisms usually considered separately. It attempts to take into consideration all the factors and modulations necessary to a decision-making process.
A few examples are the following:
Design of a building which considers whole building design including architecture, structural engineering, passive solar building design and HVAC. The approach may also integrate building lifecycle management and a greater consideration of the end users of the building. The aim of integrated building design is often to produce sustainable architecture.
Design of both a product (or family of products) and the assembly system that will produce it.
Design of an electronic product that considers both hardware and software aspects, although this is often called co-design (not to be confused with participatory design, which is also often called co-design).
The requirement for integrated design comes when the different specialisms are dependent on each other or "coupled". An alternative or complementary approach to integrated design is to consciously reduce the dependencies. In computing and systems design, this approach is known as loose coupling.
Dis-integrated design
Three phenomena are associated with a lack of integrated design:
Silent design: design by default, by omission or by people not aware that they are participating in design activity.
Partial design: design is only used to a limited degree, such as in superficial styling, often after the important design decisions have been made.
Disparate design: design activity may be widespread, but is not co-ordinated or brought together to realise its potential. The resulting design may have needless complexity, internal inconsistency, logical flaws and a lack of a unifying vision.
A committee is sometimes a deliberate attempt to address disparate design, but the phrase "design by committee" is associated with exactly this failing: committees can themselves produce disparate design. "Design by committee" can also lead to a kind of silent design, as design decisions are not properly considered, for fear of upsetting a hard-won compromise.
Methods for integrated design
The integrated design approach incorporates collaborative methods and tools to encourage and enable the specialists in the different areas to work together to produce an integrated design.
A charrette provides opportunity for all specialists to collaborate and align early in the design process.
Human-Centered Design provides an integrated approach to problem solving, commonly used in design and management frameworks that develops solutions to problems by involving the human perspective in all steps of the problem-solving process.
References
See also
Holism
Mode 2
Participatory design
System integration
Systems engineering
Design | Integrated design | Engineering | 517 |
575,776 | https://en.wikipedia.org/wiki/Lethal%20dose | In toxicology, the lethal dose (LD) is an indication of the lethal toxicity of a given substance or type of radiation. Because resistance varies from one individual to another, the "lethal dose" represents a dose (usually recorded as dose per kilogram of subject body weight) at which a given percentage of subjects will die. The lethal concentration is a lethal dose measurement used for gases or particulates. The LD may be based on the standard person concept, a theoretical individual that has perfectly "normal" characteristics, and thus not apply to all sub-populations.
Median lethal dose (LD50)
The median lethal dose, LD50 (abbreviation for "lethal dose, 50%"), LC50 (lethal concentration, 50%) or LCt50 (lethal concentration and time) of a toxin, radiation, or pathogen is the dose required to kill half the members of a tested population after a specified test duration. LD50 figures are frequently used as a general indicator of a substance's acute toxicity. A lower LD50 is indicative of increased toxicity.
History
The test was created by J.W. Trevan in 1927. The term "semilethal dose" is occasionally used with the same meaning, in particular in translations from non-English-language texts, but can also refer to a sublethal dose; because of this ambiguity, it is usually avoided. LD50 is usually determined by tests on animals such as laboratory mice. In 2011 the US Food and Drug Administration approved alternative methods to LD50 for testing the cosmetic drug Botox without animal tests.
Units and measurement
The LD50 is usually expressed as the mass of substance administered per unit mass of test subject, typically as milligrams of substance per kilogram of body mass, though it may be stated in nanograms (suitable for botulinum), micrograms, milligrams, or grams (suitable for paracetamol) per kilogram. Stating it this way allows the relative toxicity of different substances to be compared, and normalizes for the variation in the size of the animals exposed, although toxicity does not always scale simply with body mass.
The choice of 50% lethality as a benchmark avoids the potential for ambiguity of making measurements in the extremes and reduces the amount of testing required. However, this also means that LD50 is not the lethal dose for all subjects; some may be killed by much less, while others survive doses far higher than the LD50. Measures such as "LD1" and "LD99" (dosage required to kill 1% or 99%, respectively, of the test population) are occasionally used for specific purposes.
Lethal dosage often varies depending on the method of administration; for instance, many substances are less toxic when administered orally than when intravenously administered. For this reason, LD50 figures are often qualified with the mode of administration, e.g., "LD50 i.v."
The related quantities LD50/30 or LD50/60 are used to refer to a dose that without treatment will be lethal to 50% of the population within (respectively) 30 or 60 days. These measures are used more commonly with radiation, as survival beyond 60 days usually results in recovery.
Estimation using model organisms
LD values for humans are best estimated by extrapolating results from human cell cultures. One form of measuring LD is to use model organisms, particularly animals like mice or rats, converting to dosage per kilogram of biomass, and extrapolating to human norms. The degree of error from animal-extrapolated LD values is large. The biology of test animals differs in important aspects to that of humans. For instance, mouse tissue is approximately fifty times less responsive than human tissue to the venom of the Sydney funnel-web spider. The square–cube law also complicates the scaling relationships involved. Researchers are shifting away from animal-based LD measurements in some instances. The U.S. Food and Drug Administration has begun to approve more non-animal methods in response to animal welfare concerns.
Median infective dose
The median infective dose (ID50) is the number of organisms received by a person or test animal qualified by the route of administration (e.g., 1,200 org/man per oral). Because of the difficulties in counting actual organisms in a dose, infective doses may be expressed in terms of biological assay, such as the number of LD50's to some test animal. In biological warfare, infective dosage is the number of infective doses per minute per cubic meter (e.g., ICt50 is 100 median doses·min/m3).
Lowest lethal dose
The lowest lethal dose (LDLo) is the least amount of drug that can produce death in a given animal species under controlled conditions. The dosage is given per unit of bodyweight (typically stated in milligrams per kilogram) of a substance known to have resulted in fatality in a particular species. When quoting an LDLo, the particular species and method of administration (e.g. ingested, inhaled, intravenous) are typically stated.
Median lethal concentration
For gases and aerosols, lethal concentration (given in mg/m3 or ppm, parts per million) is the analogous concept, although this also depends on the duration of exposure, which has to be included in the definition. The term incipient lethal level is used to describe an LC50 value that is independent of time.
A comparable measurement is LCt50, which relates to lethal dosage from exposure, where C is concentration and t is time. It is often expressed in terms of mg-min/m3. ICt50 is the corresponding dose that will cause incapacitation rather than death. These measures are commonly used to indicate the comparative efficacy of chemical warfare agents, and dosages are typically qualified by rates of breathing (e.g., resting = 10 L/min) for inhalation, or degree of clothing for skin penetration. The concept of Ct was first proposed by Fritz Haber and is sometimes referred to as Haber's law, which assumes that exposure to 1 minute of 100 mg/m3 is equivalent to 10 minutes of 10 mg/m3 (1 × 100 = 100, as does 10 × 10 = 100).
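Written as an equation, Haber's law asserts that a fixed toxic effect corresponds to a fixed concentration-time product:

$$C \times t = k,$$

so 1 minute at 100 mg/m3 and 10 minutes at 10 mg/m3 both give Ct = 100 mg·min/m3 and are treated as equivalent exposures.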
Some chemicals, such as hydrogen cyanide, are rapidly detoxified by the human body, and do not follow Haber's Law. So, in these cases, the lethal concentration may be given simply as LC50 and qualified by a duration of exposure (e.g., 10 minutes). The material safety data sheets for toxic substances frequently use this form of the term even if the substance does follow Haber's Law.
Lowest lethal concentration
The LCLo is the lowest concentration of a chemical, given over a period of time, that results in the fatality of an individual animal. LCLo is typically for an acute (<24 hour) exposure. It is related to the LC50, the median lethal concentration. The LCLo is used for gases and aerosolized material.
Limitations
As a measure of toxicity, lethal dose is somewhat unreliable and results may vary greatly between testing facilities due to factors such as the genetic characteristics of the sample population, animal species tested, environmental factors and mode of administration.
There can be wide variability between species as well; what is relatively safe for rats may very well be extremely toxic for humans (cf. paracetamol toxicity), and vice versa. For example, chocolate, comparatively harmless to humans, is known to be toxic to many animals. When used to test venom from venomous creatures, such as snakes, LD50 results may be misleading due to the physiological differences between mice, rats, and humans. Many venomous snakes are specialized predators of mice, and their venom may be adapted specifically to incapacitate mice; and mongooses may be exceptionally resistant. While most mammals have a very similar physiology, LD50 results may or may not have equal bearing upon every mammal species, including humans.
Animal rights concerns
Animal-rights and animal-welfare groups, such as Animal Rights International, have campaigned against LD50 testing on animals in particular as, in the case of some substances, causing the animals to die slow, painful deaths. Several countries, including the UK, have taken steps to ban the oral LD50, and the Organisation for Economic Co-operation and Development (OECD) abolished the requirement for the oral test in 2001.
See also
References
Causes of death
Concentration indicators
Medical emergencies
Toxicology | Lethal dose | Environmental_science | 1,806 |
31,260,737 | https://en.wikipedia.org/wiki/Digital%20Humanities%20Quarterly | Digital Humanities Quarterly is a peer-reviewed open-access academic journal covering all aspects of digital media in the humanities. The journal is also a community experiment in journal publication.
The journal is funded and published by the Alliance of Digital Humanities Organizations and its editor-in-chief is Julia Flanders.
Editorial policy
Digital Humanities Quarterly has been noted among the "few interesting attempts to peer review born-digital scholarship." Having emerged from a desire to disseminate digital humanities practices to the wider arts and humanities community and beyond, the journal is committed to open access and open standards to deliver journal content, publishing under a Creative Commons license. It develops translation services and multilingual reviews in keeping with the international character of the Alliance of Digital Humanities Organizations.
The journal aims to heighten the visibility and acceptance of digital humanities with reviews that are modeled on traditional book reviews but focus on digital projects, providing assessments of "software tools, sites, other kinds of innovations that need the same kind of critical scrutiny and benefit from the same kind of contextualizing review that a traditional book review offers."
References
External links
Open access journals
Digital humanities
Quarterly journals
Academic journals established in 2007
Multidisciplinary humanities journals
English-language journals | Digital Humanities Quarterly | Technology | 244 |
208,999 | https://en.wikipedia.org/wiki/Non-renewable%20resource | A non-renewable resource (also called a finite resource) is a natural resource that cannot be readily replaced by natural means at a pace quick enough to keep up with consumption. An example is carbon-based fossil fuels. The original organic matter, with the aid of heat and pressure, becomes a fuel such as oil or gas. Earth minerals and metal ores, fossil fuels (coal, petroleum, natural gas) and groundwater in certain aquifers are all considered non-renewable resources, though individual elements are always conserved (except in nuclear reactions, nuclear decay or atmospheric escape).
Conversely, resources such as timber (when harvested sustainably) and wind (used to power energy conversion systems) are considered renewable resources, largely because their localized replenishment can also occur within human lifespans.
Earth minerals and metal ores
Earth minerals and metal ores are examples of non-renewable resources. The metals themselves are present in vast amounts in Earth's crust, and their extraction by humans only occurs where they are concentrated by natural geological processes (such as heat, pressure, organic activity, weathering and other processes) enough to become economically viable to extract. These processes generally take from tens of thousands to millions of years, through plate tectonics, tectonic subsidence and crustal recycling.
The localized deposits of metal ores near the surface which can be extracted economically by humans are non-renewable in human time-frames. There are certain rare earth minerals and elements that are more scarce and exhaustible than others. These are in high demand in manufacturing, particularly for the electronics industry.
Fossil fuels
Natural resources such as coal, petroleum (crude oil) and natural gas take thousands of years to form naturally and cannot be replaced as fast as they are being consumed. It is projected that fossil-based resources will eventually become too costly to harvest and humanity will need to shift its reliance to renewable energy such as solar or wind power.
An alternative hypothesis is that carbon-based fuel is virtually inexhaustible in human terms, if one includes all sources of carbon-based energy such as methane hydrates on the sea floor, which are much greater than all other carbon-based fossil fuel resources combined. These sources of carbon are also considered non-renewable, although their rate of formation/replenishment on the sea floor is not known. However, their extraction at economically viable costs and rates has yet to be determined.
At present, the main energy source used by humans is non-renewable fossil fuels. Since the dawn of internal combustion engine technologies in the 19th century, petroleum and other fossil fuels have remained in continual demand. As a result, conventional infrastructure and transport systems, which are fitted to combustion engines, remain predominant around the globe.
The modern-day fossil fuel economy is widely criticized for its lack of renewability, as well as being a contributor to climate change.
Nuclear fuels
In 1987, the World Commission on Environment and Development (WCED) classified fission reactors that produce more fissile nuclear fuel than they consume (i.e. breeder reactors) among conventional renewable energy sources, such as solar and falling water. The American Petroleum Institute likewise does not consider conventional nuclear fission as renewable, but considers breeder reactor nuclear power fuel renewable and sustainable, noting that radioactive waste from used spent fuel rods remains radioactive and so has to be very carefully stored for several hundred years; careful monitoring of radioactive waste products is also required with the use of other renewable energy sources, such as geothermal energy.
The use of nuclear technology relying on fission requires naturally occurring radioactive material as fuel. Uranium, the most common fission fuel, is present in the ground at relatively low concentrations and mined in 19 countries. This mined uranium is used to fuel energy-generating nuclear reactors with fissionable uranium-235 which generates heat that is ultimately used to power turbines to generate electricity.
As of 2013 only a few kilograms of uranium have been extracted from the ocean in pilot programs, and it is also believed that uranium extracted on an industrial scale from seawater would constantly be replenished from uranium leached from the ocean floor, maintaining the seawater concentration at a stable level. In 2014, with the advances made in the efficiency of seawater uranium extraction, a paper in the Journal of Marine Science & Engineering suggested that, with light water reactors as its target, the process would be economically competitive if implemented on a large scale.
Nuclear power provides about 6% of the world's energy and 13–14% of the world's electricity. Nuclear energy production is associated with potentially dangerous radioactive contamination as it relies upon unstable elements. In particular, nuclear power facilities produce about 200,000 metric tons of low and intermediate level waste (LILW) and 10,000 metric tons of high level waste (HLW) (including spent fuel designated as waste) each year worldwide.
Separate from the question of the sustainability of nuclear fuel use are concerns about the high-level radioactive waste the nuclear industry generates, which if not properly contained is highly hazardous to people and wildlife. The United Nations (UNSCEAR) estimated in 2008 that average annual human radiation exposure includes 0.01 millisievert (mSv) from the legacy of past atmospheric nuclear testing plus the Chernobyl disaster and the nuclear fuel cycle, along with 2.0 mSv from natural radioisotopes and 0.4 mSv from cosmic rays; all exposures vary by location. Natural uranium in some inefficient reactor nuclear fuel cycles becomes part of the nuclear waste "once through" stream, and in a similar manner to the scenario where this uranium had remained naturally in the ground, this uranium emits various forms of radiation in a decay chain that has a half-life of about 4.5 billion years. The storage of this unused uranium and the accompanying fission reaction products has raised public concerns about risks of leaks and containment; however, studies conducted on the natural nuclear fission reactor in Oklo, Gabon, have informed geologists about the proven processes that kept the waste from this 2-billion-year-old natural nuclear reactor in place.
Land surface
Land surface can be considered both a renewable and non-renewable resource depending on the scope of comparison. Land can be reused, but new land cannot be created on demand, making it a fixed resource with perfectly inelastic supply from an economic perspective.
Renewable resources
Natural resources, known as renewable resources, are replaced by natural processes and forces persistent in the natural environment. There are intermittent and recurring renewables, and recyclable materials, which are utilized during a cycle across a certain amount of time, and can be harnessed for any number of cycles.
The production of goods and services by manufacturing products in economic systems creates many types of waste during production and after the consumer has made use of it. The material is then either incinerated, buried in a landfill or recycled for reuse. Recycling turns materials of value that would otherwise become waste into valuable resources again.
In the natural environment water, forests, plants and animals are all renewable resources, as long as they are adequately monitored, protected and conserved. Sustainable agriculture is the cultivation of plant and animal materials in a manner that preserves plant and animal ecosystems and that can improve soil health and soil fertility over the long term. The overfishing of the oceans is one example of where an industry practice or method can threaten an ecosystem, endanger species and possibly even determine whether or not a fishery is sustainable for use by humans. An unregulated industry practice or method can lead to a complete resource depletion.
The renewable energy from the sun, wind, wave, biomass and geothermal energies are based on renewable resources. Renewable resources such as the movement of water (hydropower, tidal power and wave power), wind and radiant energy from geothermal heat (used for geothermal power) and solar energy (used for solar power) are practically infinite and cannot be depleted, unlike their non-renewable counterparts, which are likely to run out if not used sparingly.
The potential wave energy on coastlines can provide 1/5 of world demand. Hydroelectric power can supply 1/3 of our total global energy needs. Geothermal energy can provide 1.5 times the energy we need. There is enough wind to power all of humanity's needs 30 times over. Solar currently supplies only 0.1% of our world energy needs, but could power humanity's needs 4,000 times over, exceeding the entire projected global energy demand by 2050.
Renewable energy and energy efficiency are no longer niche sectors that are promoted only by governments and environmentalists. The increasing levels of investment and capital from conventional financial actors suggest that sustainable energy has become mainstream and the future of energy production, as non-renewable resources decline. This is reinforced by climate change concerns, nuclear dangers and accumulating radioactive waste, high oil prices, peak oil and increasing government support for renewable energy. These factors are commercializing renewable energy, enlarging the market and increasing the adoption of new products to replace obsolete technology and the conversion of existing infrastructure to a renewable standard.
Economic models
In economics, a non-renewable resource is defined as goods whose greater consumption today implies less consumption tomorrow. David Ricardo in his early works analysed the pricing of exhaustible resources, and argued that the price of a mineral resource should increase over time. He argued that the spot price is always determined by the mine with the highest cost of extraction, and mine owners with lower extraction costs benefit from a differential rent. The first model is defined by Hotelling's rule, which is a 1931 economic model of non-renewable resource management by Harold Hotelling. It shows that efficient exploitation of a nonrenewable and nonaugmentable resource would, under otherwise stable conditions, lead to a depletion of the resource. The rule states that this would lead to a net price or "Hotelling rent" for it that rises annually at a rate equal to the rate of interest, reflecting the increasing scarcity of the resources. The Hartwick's rule provides an important result about the sustainability of welfare in an economy that uses non-renewable resources.
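Stated as a worked equation, Hotelling's rule says the net price (rent) $P_t$ of the nonrenewable resource grows at the interest rate $r$:

$$\frac{\dot{P}_t}{P_t} = r \quad\Longrightarrow\quad P_t = P_0 e^{rt},$$

which makes the owner indifferent between extracting today and leaving the resource in the ground, since either asset appreciates at the same rate.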
See also
Clean technology
Energy conservation
Eurosolar
Fossil fuel
Fossil water
Green design
Hartwick's rule
Hermann Scheer
Hotelling's rule
Hubbert's peak
Liebig's law of the minimum
Natural resource management
Overfishing
Peak oil
Reserves-to-production ratio
Sustainability
References
Natural resources
Environmental conservation
Space
71,912,158 | https://en.wikipedia.org/wiki/KIF19 | Kinesin family member 19 (KIF19) is a protein in humans encoded by the KIF19 gene. It is part of the kinesin family of motor proteins.
Function
KIF19 is involved in cancer metastasis, as well as cell-cell and cell-matrix interactions.
References
Motor proteins | KIF19 | Chemistry | 66 |
55,570,663 | https://en.wikipedia.org/wiki/Koningic%20acid | Koningic acid (KA, also known as heptelidic acid) is a potent, selective, irreversible GAPDH inhibitor. It is also a DNA polymerase inhibitor. The koningic acid molecule, produced by fungi that consume sweet potatoes, has been shown to curb the excessive glucose consumption in tumors exhibiting the Warburg effect and leaving healthy cells alone.
References
Carboxylic acids
Sesquiterpene lactones
Epoxides
Benzoxepines
Lactones
Isopropyl compounds
Spiro compounds | Koningic acid | Chemistry | 109 |
64,181,070 | https://en.wikipedia.org/wiki/Propanolamine | In organic chemistry, propanolamine can describe any of the following parent compounds:
2-Amino-1-propanol, the hydrogenated derivative of alanine.
3-Amino-1-propanol, straight-chain and not widely used.
3-Amino-2-propanol (1-Aminopropan-2-ol) (isopropanolamines), prepared by addition of amines to one or two equivalents of propylene oxide.
The propanolamines include many derivatives where the amine is secondary or tertiary.
The parent propanolamines are colorless liquids with the formula C3H9NO.
References
Amino alcohols | Propanolamine | Chemistry | 137 |
186,475 | https://en.wikipedia.org/wiki/Arithmetical%20hierarchy | In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene–Mostowski hierarchy (after mathematicians Stephen Cole Kleene and Andrzej Mostowski) classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called arithmetical. The arithmetical hierarchy was invented independently by Kleene (1943) and Mostowski (1946).
The arithmetical hierarchy is important in computability theory, effective descriptive set theory, and the study of formal theories such as Peano arithmetic.
The Tarski–Kuratowski algorithm provides an easy way to get an upper bound on the classifications assigned to a formula and the set it defines.
The hyperarithmetical hierarchy and the analytical hierarchy extend the arithmetical hierarchy to classify additional formulas and sets.
The arithmetical hierarchy of formulas
The arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The classifications are denoted $\Sigma^0_n$ and $\Pi^0_n$ for natural numbers n (including 0). The Greek letters here are lightface symbols, which indicates that the formulas do not contain set parameters.
If a formula $\phi$ is logically equivalent to a formula having no unbounded quantifiers, i.e. in which all quantifiers are bounded quantifiers, then $\phi$ is assigned the classifications $\Sigma^0_0$ and $\Pi^0_0$.
The classifications $\Sigma^0_n$ and $\Pi^0_n$ are defined inductively for every natural number n using the following rules:
If $\phi$ is logically equivalent to a formula of the form $\exists n_1 \exists n_2 \cdots \exists n_k\, \psi$, where $\psi$ is $\Pi^0_n$, then $\phi$ is assigned the classification $\Sigma^0_{n+1}$.
If $\phi$ is logically equivalent to a formula of the form $\forall n_1 \forall n_2 \cdots \forall n_k\, \psi$, where $\psi$ is $\Sigma^0_n$, then $\phi$ is assigned the classification $\Pi^0_{n+1}$.
A $\Sigma^0_n$ formula is equivalent to a formula that begins with some existential quantifiers and alternates $n-1$ times between series of existential and universal quantifiers; a $\Pi^0_n$ formula is equivalent to a formula that begins with some universal quantifiers and alternates analogously.
Because every first-order formula has a prenex normal form, every formula is assigned at least one classification. Because redundant quantifiers can be added to any formula, once a formula is assigned the classification $\Sigma^0_n$ or $\Pi^0_n$ it will be assigned the classifications $\Sigma^0_m$ and $\Pi^0_m$ for every m > n. The only relevant classification assigned to a formula is thus the one with the least n; all the other classifications can be determined from it.
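As a worked illustration (standard material added here for clarity, writing $T(e,x,s)$ for Kleene's $T$ predicate, "the machine with index $e$ halts on input $x$ within $s$ steps", which is definable with bounded quantifiers only):

```latex
\varphi(e) \;=\; \exists x\, \forall s\; \neg T(e,x,s)
% one existential block over a universal block over a bounded-quantifier
% kernel: by the inductive rules, \varphi is \Sigma^0_2 (and, by padding
% with redundant quantifiers, also \Sigma^0_m and \Pi^0_m for all m > 2).
```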
The arithmetical hierarchy of sets of natural numbers
A set X of natural numbers is defined by a formula φ in the language of Peano arithmetic (the first-order language with symbols "0" for zero, "S" for the successor function, "+" for addition, "×" for multiplication, and "=" for equality), if the elements of X are exactly the numbers that satisfy φ. That is, for all natural numbers n, $n \in X \Leftrightarrow \mathbb{N} \models \varphi(\underline{n})$,
where $\underline{n}$ is the numeral in the language of arithmetic corresponding to $n$. A set is definable in first-order arithmetic if it is defined by some formula in the language of Peano arithmetic.
Each set X of natural numbers that is definable in first-order arithmetic is assigned classifications of the form $\Sigma^0_n$, $\Pi^0_n$, and $\Delta^0_n$, where $n$ is a natural number, as follows. If X is definable by a $\Sigma^0_n$ formula then X is assigned the classification $\Sigma^0_n$. If X is definable by a $\Pi^0_n$ formula then X is assigned the classification $\Pi^0_n$. If X is both $\Sigma^0_n$ and $\Pi^0_n$ then $X$ is assigned the additional classification $\Delta^0_n$.
Note that it rarely makes sense to speak of $\Delta^0_n$ formulas; the first quantifier of a formula is either existential or universal. So a $\Delta^0_n$ set is not necessarily defined by a formula in the sense of a formula that is both $\Sigma^0_n$ and $\Pi^0_n$; rather, there are both $\Sigma^0_n$ and $\Pi^0_n$ formulas that define the set. For example, the set of odd natural numbers $n$ is definable by either $\forall k\, (n \neq 2k)$ or $\exists k\, (n = 2k+1)$.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of the set of natural numbers. Instead of formulas with one free variable, formulas with k free first-order variables are used to define the arithmetical hierarchy on sets of k-tuples of natural numbers. These are in fact related by the use of a pairing function.
Meaning of the notation
The following meanings can be attached to the notation for the arithmetical hierarchy on formulas.
The subscript $n$ in the symbols $\Sigma^0_n$ and $\Pi^0_n$ indicates the number of alternations of blocks of universal and existential first-order quantifiers that are used in a formula. Moreover, the outermost block is existential in $\Sigma^0_n$ formulas and universal in $\Pi^0_n$ formulas.
The superscript $0$ in the symbols $\Sigma^0_n$, $\Pi^0_n$, and $\Delta^0_n$ indicates the type of the objects being quantified over. Type 0 objects are natural numbers, and objects of type $i+1$ are functions that map the set of objects of type $i$ to the natural numbers. Quantification over higher type objects, such as functions from natural numbers to natural numbers, is described by a superscript greater than 0, as in the analytical hierarchy. The superscript 0 indicates quantifiers over numbers, the superscript 1 would indicate quantification over functions from numbers to numbers (type 1 objects), the superscript 2 would correspond to quantification over functions that take a type 1 object and return a number, and so on.
Examples
The $\Sigma^0_1$ sets of numbers are those definable by a formula of the form $\exists n_1 \cdots \exists n_k\, \psi(n_1, \ldots, n_k, m)$ where $\psi$ has only bounded quantifiers. These are exactly the recursively enumerable sets.
The set of natural numbers that are indices for Turing machines that compute total functions is $\Pi^0_2$. Intuitively, an index $e$ falls into this set if and only if for every $m$ "there is an $s$ such that the Turing machine with index $e$ halts on input $m$ after $s$ steps". A complete proof would show that the property displayed in quotes in the previous sentence is definable in the language of Peano arithmetic by a $\Sigma^0_1$ formula.
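In symbols (a sketch using the same step-counting predicate $T(e,m,s)$ as above, a notational assumption rather than a formula from this article):

```latex
\mathrm{Tot} \;=\; \{\, e \;:\; \forall m\, \exists s\; T(e,m,s) \,\}
% a universal block followed by an existential block over a bounded-
% quantifier kernel, hence \Pi^0_2 by the inductive rules.
```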
Every $\Sigma^0_1$ subset of Baire space or Cantor space is an open set in the usual topology on the space. Moreover, for any such set there is a computable enumeration of Gödel numbers of basic open sets whose union is the original set. For this reason, $\Sigma^0_1$ sets are sometimes called effectively open. Similarly, every $\Pi^0_1$ set is closed and the $\Pi^0_1$ sets are sometimes called effectively closed.
Every arithmetical subset of Cantor space or Baire space is a Borel set. The lightface Borel hierarchy extends the arithmetical hierarchy to include additional Borel sets. For example, every $\Pi^0_2$ subset of Cantor or Baire space is a $G_\delta$ set, that is, a set that equals the intersection of countably many open sets. Moreover, each of these open sets is $\Sigma^0_1$ and the list of Gödel numbers of these open sets has a computable enumeration. If $\varphi(X, n)$ is a $\Sigma^0_1$ formula with a free set variable $X$ and free number variable $n$ then the $\Pi^0_2$ set $\{X : \forall n\, \varphi(X, n)\}$ is the intersection of the $\Sigma^0_1$ sets of the form $\{X : \varphi(X, n)\}$ as $n$ ranges over the set of natural numbers.
The $\Sigma^0_0 = \Pi^0_0 = \Delta^0_0$ formulas can be checked by going over all cases one by one, which is possible because all their quantifiers are bounded. The time for this is polynomial in their arguments (e.g. polynomial in $n$ for $\varphi(n)$); thus their corresponding decision problems are included in E (as $n$ is exponential in its number of bits). This no longer holds under alternative definitions of $\Sigma^0_0 = \Pi^0_0 = \Delta^0_0$ that allow the use of primitive recursive functions, as now the quantifiers may be bounded by any primitive recursive function of the arguments.
The $\Sigma^0_1$ formulas under an alternative definition, that allows the use of primitive recursive functions with bounded quantifiers, correspond to sets of natural numbers of the form $\{ n : \exists m\, (f(n, m) = 0) \}$ for a primitive recursive function $f$. This is because allowing bounded quantifiers adds nothing to the definition: applying a bounded quantifier to a primitive recursive predicate again yields a primitive recursive predicate, since with course-of-values recursion each of these can be defined by a single primitive recursive function.
Relativized arithmetical hierarchies
Just as we can define what it means for a set X to be recursive relative to another set Y by allowing the computation defining X to consult Y as an oracle, we can extend this notion to the whole arithmetic hierarchy and define what it means for X to be $\Sigma^0_n$, $\Delta^0_n$ or $\Pi^0_n$ in Y, denoted respectively $\Sigma^{0,Y}_n$, $\Delta^{0,Y}_n$ and $\Pi^{0,Y}_n$. To do so, fix a set of natural numbers Y and add a predicate for membership of Y to the language of Peano arithmetic. We then say that X is in $\Sigma^{0,Y}_n$ if it is defined by a $\Sigma^0_n$ formula in this expanded language. In other words, X is $\Sigma^{0,Y}_n$ if it is defined by a $\Sigma^0_n$ formula allowed to ask questions about membership of Y. Alternatively one can view the $\Sigma^{0,Y}_n$ sets as those sets that can be built starting with sets recursive in Y and alternately taking unions and intersections of these sets up to n times.
For example, let Y be a set of natural numbers. Let X be the set of numbers divisible by an element of Y. Then X is defined by the formula $\exists m\, \exists t\, (Y(m) \wedge m \times t = n)$, so X is in $\Sigma^{0,Y}_1$ (actually it is in $\Delta^{0,Y}_0$ as well, since we could bound both quantifiers by n).
Arithmetic reducibility and degrees
Arithmetical reducibility is an intermediate notion between Turing reducibility and hyperarithmetic reducibility.
A set is arithmetical (also arithmetic and arithmetically definable) if it is defined by some formula in the language of Peano arithmetic. Equivalently X is arithmetical if X is $\Sigma^0_n$ or $\Pi^0_n$ for some natural number n. A set X is arithmetical in a set Y, denoted $X \leq_A Y$, if X is definable by some formula in the language of Peano arithmetic extended by a predicate for membership of Y. Equivalently, X is arithmetical in Y if X is in $\Sigma^{0,Y}_n$ or $\Pi^{0,Y}_n$ for some natural number n. A synonym for $X \leq_A Y$ is: X is arithmetically reducible to Y.
The relation $X \leq_A Y$ is reflexive and transitive, and thus the relation $\equiv_A$ defined by the rule
$X \equiv_A Y \;\Leftrightarrow\; X \leq_A Y \,\wedge\, Y \leq_A X$
is an equivalence relation. The equivalence classes of this relation are called the arithmetic degrees; they are partially ordered under $\leq_A$.
The arithmetical hierarchy of subsets of Cantor and Baire space
The Cantor space, denoted $2^{\omega}$, is the set of all infinite sequences of 0s and 1s; the Baire space, denoted $\omega^{\omega}$ or $\mathbb{N}^{\mathbb{N}}$, is the set of all infinite sequences of natural numbers. Note that elements of the Cantor space can be identified with sets of natural numbers and elements of the Baire space with functions from natural numbers to natural numbers.
The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification $\Sigma^0_n$ if it is definable by a $\Sigma^0_n$ formula. The set is assigned the classification $\Pi^0_n$ if it is definable by a $\Pi^0_n$ formula. If the set is both $\Sigma^0_n$ and $\Pi^0_n$ then it is given the additional classification $\Delta^0_n$. For example, let $X$ be the set of all infinite binary strings that aren't all 0 (or equivalently the set of all non-empty sets of natural numbers). As $X = \{ s \in 2^{\omega} : \exists n\, (s(n) = 1) \}$ we see that $X$ is defined by a $\Sigma^0_1$ formula and hence is a $\Sigma^0_1$ set.
Note that while both the elements of the Cantor space (regarded as sets of natural numbers) and subsets of the Cantor space are classified in arithmetic hierarchies, these are not the same hierarchy. In fact the $\Pi^0_1$ elements of the Cantor space are not (in general) the same as the elements $X$ of the Cantor space such that $\{X\}$ is a $\Pi^0_1$ subset of the Cantor space. However, many interesting results relate the two hierarchies.
There are two ways that a subset of Baire space can be classified in the arithmetical hierarchy.
A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function from $\omega$ to $\omega$ to the characteristic function of its graph. A subset of Baire space is given the classification $\Sigma^0_n$, $\Pi^0_n$, or $\Delta^0_n$ if and only if the corresponding subset of Cantor space has the same classification.
An equivalent definition of the arithmetical hierarchy on Baire space is given by defining the arithmetical hierarchy of formulas using a functional version of second-order arithmetic; then the arithmetical hierarchy on subsets of Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same classifications as the first definition.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of Baire space or Cantor space, using formulas with several free variables. The arithmetical hierarchy can be defined on any effective Polish space; the definition is particularly simple for Cantor space and Baire space because they fit with the language of ordinary second-order arithmetic.
Note that we can also define the arithmetic hierarchy of subsets of the Cantor and Baire spaces relative to some set of natural numbers. In fact boldface $\boldsymbol{\Sigma}^0_n$ is just the union of $\Sigma^{0,Y}_n$ for all sets of natural numbers Y. Note that the boldface hierarchy is just the standard hierarchy of Borel sets.
Extensions and variations
It is possible to define the arithmetical hierarchy of formulas using a language extended with a function symbol for each primitive recursive function. This variation slightly changes the classification of $\Sigma^0_0 = \Pi^0_0 = \Delta^0_0$, since using primitive recursive functions in first-order Peano arithmetic requires, in general, an unbounded existential quantifier, and thus some sets that are in $\Sigma^0_0$ by this definition are strictly in $\Sigma^0_1$ by the definition given in the beginning of this article. The class $\Sigma^0_1$ and thus all higher classes in the hierarchy remain unaffected.
A more semantic variation of the hierarchy can be defined on all finitary relations on the natural numbers; the following definition is used. Every computable relation is defined to be $\Sigma^0_0 = \Pi^0_0$. The classifications $\Sigma^0_n$ and $\Pi^0_n$ are defined inductively with the following rules.
If the relation $R(n_1, \ldots, n_l, m_1, \ldots, m_k)$ is $\Sigma^0_n$ then the relation $S(n_1, \ldots, n_l) = \forall m_1 \cdots \forall m_k\; R(n_1, \ldots, n_l, m_1, \ldots, m_k)$ is defined to be $\Pi^0_{n+1}$
If the relation $R(n_1, \ldots, n_l, m_1, \ldots, m_k)$ is $\Pi^0_n$ then the relation $S(n_1, \ldots, n_l) = \exists m_1 \cdots \exists m_k\; R(n_1, \ldots, n_l, m_1, \ldots, m_k)$ is defined to be $\Sigma^0_{n+1}$
This variation slightly changes the classification of some sets. In particular, $\Sigma^0_0$, as a class of sets (definable by the relations in the class), is identical to $\Delta^0_1$ as the latter was formerly defined. It can be extended to cover finitary relations on the natural numbers, Baire space, and Cantor space.
Properties
The following properties hold for the arithmetical hierarchy of sets of natural numbers and the arithmetical hierarchy of subsets of Cantor or Baire space.
The collections $\Pi^0_n$ and $\Sigma^0_n$ are closed under finite unions and finite intersections of their respective elements.
A set is $\Sigma^0_n$ if and only if its complement is $\Pi^0_n$. A set is $\Delta^0_n$ if and only if the set is both $\Sigma^0_n$ and $\Pi^0_n$, in which case its complement will also be $\Delta^0_n$.
The inclusions $\Sigma^0_n \subsetneq \Sigma^0_{n+1}$ and $\Pi^0_n \subsetneq \Pi^0_{n+1}$ hold for all $n$. Thus the hierarchy does not collapse. This is a direct consequence of Post's theorem.
The inclusions $\Delta^0_n \subsetneq \Pi^0_n$, $\Delta^0_n \subsetneq \Sigma^0_n$ and $\Sigma^0_n \cup \Pi^0_n \subsetneq \Delta^0_{n+1}$ hold for $n \geq 1$.
For example, for a universal Turing machine T, the set of pairs (n,m) such that T halts on n but not on m, is in $\Delta^0_2$ (being computable with an oracle to the halting problem) but not in $\Sigma^0_1 \cup \Pi^0_1$.
$\Sigma^0_0 = \Pi^0_0 = \Delta^0_0 \subsetneq \Delta^0_1$. The inclusion is strict by the definition given in this article, but an identity with $\Delta^0_1$ holds under one of the variations of the definition given above.
Relation to Turing machines
Computable sets
If S is a Turing computable set, then both S and its complement are recursively enumerable (if T is a Turing machine giving 1 for inputs in S and 0 otherwise, we may build a Turing machine halting only on the former, and another halting only on the latter).
By Post's theorem, both S and its complement are in $\Sigma^0_1$. This means that S is both in $\Sigma^0_1$ and in $\Pi^0_1$, and hence it is in $\Delta^0_1$.
Similarly, for every set S in $\Delta^0_1$, both S and its complement are in $\Sigma^0_1$ and are therefore (by Post's theorem) recursively enumerable by some Turing machines T1 and T2, respectively. For every number n, exactly one of these halts. We may therefore construct a Turing machine T that alternates between T1 and T2, halting and returning 1 when the former halts or halting and returning 0 when the latter halts. Thus T halts on every n and returns whether it is in S; so S is computable.
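A minimal sketch of this alternation argument, modeling the two machines as step-bounded semi-deciders (the function names and interface are illustrative, not from the source):

```python
def decide(n, accepts_in_S, accepts_in_complement):
    """Decide membership of n in S by dovetailing two semi-deciders.

    accepts_in_S(n, steps) returns True once T1, which enumerates S, has
    accepted n within `steps` steps; accepts_in_complement does the same
    for T2 and the complement of S. Since exactly one of the two machines
    eventually halts on n, the loop always terminates.
    """
    steps = 1
    while True:
        if accepts_in_S(n, steps):
            return True    # T1 halted first: n is in S
        if accepts_in_complement(n, steps):
            return False   # T2 halted first: n is not in S
        steps += 1
```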
Summary of main results
The Turing computable sets of natural numbers are exactly the sets at level $\Delta^0_1$ of the arithmetical hierarchy. The recursively enumerable sets are exactly the sets at level $\Sigma^0_1$.
No oracle machine is capable of solving its own halting problem (a variation of Turing's proof applies). The halting problem for a $\Delta^0_n$ oracle in fact sits in $\Sigma^0_n$.
Post's theorem establishes a close connection between the arithmetical hierarchy of sets of natural numbers and the Turing degrees. In particular, it establishes the following facts for all n ≥ 1:
The set $\emptyset^{(n)}$ (the nth Turing jump of the empty set) is many-one complete in $\Sigma^0_n$.
The set $\mathbb{N} \setminus \emptyset^{(n)}$ is many-one complete in $\Pi^0_n$.
The set $\emptyset^{(n-1)}$ is Turing complete in $\Delta^0_n$.
The polynomial hierarchy is a "feasible resource-bounded" version of the arithmetical hierarchy in which polynomial length bounds are placed on the numbers involved (or, equivalently, polynomial time bounds are placed on the Turing machines involved). It gives a finer classification of some sets of natural numbers that are at level $\Delta^0_1$ of the arithmetical hierarchy.
Relation to other hierarchies
See also
Analytical hierarchy
Lévy hierarchy
Hierarchy (mathematics)
Interpretability logic
Polynomial hierarchy
References
Mathematical logic hierarchies
Computability theory
Effective descriptive set theory
Hierarchy
Complexity classes | Arithmetical hierarchy | Mathematics | 3,496 |
57,153,278 | https://en.wikipedia.org/wiki/Oncolytic%20AAV | Adeno-associated virus (AAV) has been researched as a viral vector in gene therapy for cancer treatment as an oncolytic virus. Currently there are no FDA-approved AAV cancer treatments; the first FDA-approved AAV treatment of any kind only arrived in December 2017. However, many oncolytic AAV applications are in development and have been researched.
Adeno-Associated Virus Therapy
AAV is a small virus that was found as a contaminant in adenovirus studies. AAV is a non-pathogenic virus, so it is currently being investigated for many gene therapy applications, including oncolytic cancer treatments, due to its relatively safe nature. AAV also carries little risk of insertional mutagenesis, a common problem with viral vectors, as its transgenes are normally expressed episomally. It was found that the AAV genome integrates in fewer than ~10% of the occasions on which AAV infects a cell, and that expression from integrated copies is lower than episomal expression. AAV can only package genomes between 2 and 5.2 kb in size when they are flanked by inverted terminal repeat sequences (ITRs), but optimally holds a genome of 4.1 to 4.9 kb in length. This limits the therapeutic application of AAV as a cancer treatment, as the gene the virus carries must fit in less than 4.9 kb.
Targeting
AAV has many different strains, known as serotypes, each with its own cell-type targeting preferences, also known as tropism. In order to target viral vector gene therapies to disease sites, researchers often exploit the simplicity of the AAV capsid to mutate new targeting behaviors into the virus. For example, AAV has been mutated to target primary glioblastoma cells by combining regions of many AAV serotypes.
In addition to combining serotypes, mutating foreign sequences with known behaviors into the capsid has been used to target cancer sites. For example, matrix metalloproteinases (MMPs) are known to be up-regulated at cancer sites. By inserting an infection-blocking tetra-aspartic acid residue into the capsid, flanked by MMP-cleavable sequences, a lab has developed a protease-activatable virus (PAV) using AAV. In the presence of high concentrations of MMPs, the cleavable sequences are removed and the virus is "un-locked", allowing infection of the neighboring diseased cells. PAV is still being optimized. A study that changed out the tetra-aspartic acid blocking residues revealed that the negative charge of the residues is likely what blocks transduction. PAV is currently under investigation for targeting of ovarian and pancreatic cancer, with the plan of delivering cytotoxic transgenes.
Examples of Oncolytic AAV in Development
AAV-2-TRAIL: The tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) has been studied when delivered by AAV serotype 2 capsids to human cancer cell lines. It was found that the different cell lines had different sensitivities to AAV-2-TRAIL delivery, which was enhanced by pre-treatment of the cells with the chemotherapy drug cisplatin. When mice bearing tumor xenografts were injected intratumorally with AAV-2-TRAIL, with and without cisplatin, tumor sizes were found to decrease significantly, and to a similar extent, with either cisplatin or AAV-2-TRAIL alone. The tumor weight decreased even more significantly when cisplatin and AAV-2-TRAIL were combined.
AAV-2-HSV-TK: Herpes simplex virus type 1 thymidine kinase (HSV-TK) is a common anti-cancer therapy that converts ganciclovir (GCV) into the toxic GCV-triphosphate within cells expressing the enzyme. This kills the cell and induces bystander toxicity in the neighboring cells. When cultured with MCF-7 cancer cells, AAV2 delivery of HSV-TK significantly decreased viability of the cells only in the presence of GCV.
AAV-2-sc39TK: sc39TK is a five-codon substitution variant of HSV-TK in which silent mutations have been introduced into the GCV-resistant splice acceptor and donor sequences. The result is a hyperactive version of HSV-TK. When combined with GCV, these mutants also result in high levels of cancer cell death in vitro. Furthermore, when injected into mouse xenograft models, tumor size has been shown to decrease significantly over a period of 30 days. It has also been investigated in combination with AAV-2-mTOR, which targets the mammalian target of rapamycin (mTOR), an investigated anti-cancer target.
Oncolytic AAV Clinical Trials
AAV-DC-CTL: A clinical trial is currently in Phase I investigation of oncolytic AAV treatments for stage IV gastric cancer in China. In these trials AAV has been used to deliver carcinoembryonic antigen (CEA). CEA is normally produced in the gastric tissue of infants and would be used in these studies to mark cancer cells for CEA-specific cytotoxic T-cell lysis. In this regard, the virus would be used to make cancer cells recognizable by the immune system for destruction. As a result, AAV will target cancer cells and the CEA-specific cytotoxic T-cells will target the cells that AAV infects.
References
Gene therapy | Oncolytic AAV | Engineering,Biology | 1,185 |
53,469 | https://en.wikipedia.org/wiki/Cinnamon | Cinnamon is a spice obtained from the inner bark of several tree species from the genus Cinnamomum. Cinnamon is used mainly as an aromatic condiment and flavouring additive in a wide variety of cuisines, sweet and savoury dishes, breakfast cereals, snack foods, bagels, teas, hot chocolate and traditional foods. The aroma and flavour of cinnamon derive from its essential oil and principal component, cinnamaldehyde, as well as numerous other constituents including eugenol.
Cinnamon is the name for several species of trees and the commercial spice products that some of them produce. All are members of the genus Cinnamomum in the family Lauraceae. Only a few Cinnamomum species are grown commercially for spice. Cinnamomum verum (alternatively C. zeylanicum), known as "Ceylon cinnamon" after its origins in Sri Lanka (formerly Ceylon), is considered to be "true cinnamon", but most cinnamon in international commerce is derived from four other species, usually and more correctly referred to as "cassia": C. burmanni (Indonesian cinnamon or Padang cassia), C. cassia (Chinese cinnamon or Chinese cassia), C. loureiroi (Saigon cinnamon or Vietnamese cassia), and the less common C. citriodorum (Malabar cinnamon).
In 2021, world production of cinnamon was 226,753 tonnes, led by China with 43% of the total.
Etymology
The English word "cinnamon", attested in English since the 15th century, deriving from the Ancient Greek (, later κίνναμον : ), via Latin and medieval French intermediate forms. The Greek was borrowed from a Phoenician word, which was similar to the related Hebrew word ().
The name "cassia", first recorded in late Old English from Latin, ultimately derives from the Hebrew word , a form of the verb , "to strip off bark".
Early Modern English also used the names canel and canella, similar to the current names of cinnamon in several other European languages, which are derived from the Latin word , a diminutive of , "tube", from the way the bark curls up as it dries.
History
Cinnamon has been known from remote antiquity. It was imported to Egypt as early as 2000 BC, but those who reported that it had come from China had confused it with Cinnamomum cassia, a related species. Cinnamon was so highly prized among ancient nations that it was regarded as a gift fit for monarchs and even for a deity; an inscription records the gift of cinnamon and cassia to the temple of Apollo at Miletus. Its source was kept a trade secret in the Mediterranean world for centuries by those in the spice trade, in order to protect their monopoly as suppliers.
Cinnamomum verum, which translates from Latin as "true cinnamon", is native to India, Sri Lanka, Bangladesh and Myanmar. Cinnamomum cassia (cassia) is native to China. Related species, all harvested and sold in the modern era as cinnamon, are native to Vietnam ("Saigon cinnamon"), Indonesia and other southeast Asian countries with warm climates.
In Ancient Egypt, cinnamon was used to embalm mummies. From the Ptolemaic Kingdom onward, Ancient Egyptian recipes for kyphi, an aromatic used for burning, included cinnamon and cassia. The gifts of Hellenistic rulers to temples sometimes included cassia and cinnamon.
The first Greek reference to kasia is found in a poem by Sappho in the 7th century BC. According to Herodotus, both cinnamon and cassia grew in Arabia, together with incense, myrrh and labdanum, and were guarded by winged serpents. Herodotus, Aristotle and other authors named Arabia as the source of cinnamon; they recounted that giant "cinnamon birds" collected the cinnamon sticks from an unknown land where the cinnamon trees grew and used them to construct their nests.
Pliny the Elder wrote that cinnamon was brought around the Arabian Peninsula on "rafts without rudders or sails or oars", taking advantage of the winter trade winds. He also mentioned cassia as a flavouring agent for wine, and that the tales of cinnamon being collected from the nests of cinnamon birds was a traders' fiction made up to charge more. However, the story remained current in Byzantium as late as 1310.
According to Pliny the Elder, a Roman pound (327 grams) of cassia or cinnamon cost up to 1,500 denarii, the wage of fifty months' labour. Diocletian's Edict on Maximum Prices from 301 AD gives a price of 125 denarii for a pound of cassia, while an agricultural labourer earned 25 denarii per day. Cinnamon was too expensive to be commonly used on funeral pyres in Rome, but the Emperor Nero is said to have burned a year's worth of the city's supply at the funeral for his wife Poppaea Sabina in AD 65.
Middle Ages
Through the Middle Ages, the source of cinnamon remained a mystery to the Western world. From reading Latin writers who quoted Herodotus, Europeans had learned that cinnamon came up the Red Sea to the trading ports of Egypt, but where it came from was less than clear. When the Sieur de Joinville accompanied his king, Louis IX of France to Egypt on the Seventh Crusade in 1248, he reported—and believed—what he had been told: that cinnamon was fished up in nets at the source of the Nile out at the edge of the world (i.e., Ethiopia). Marco Polo avoided precision on the topic.
The first mention that the spice grew in the area of India was in Maimonides's Mishneh Torah, about 1180. The first mention that the spice grew specifically in Sri Lanka was in Zakariya al-Qazwini's ("Monument of Places and History of God's Bondsmen") about 1270. This was followed shortly thereafter by John of Montecorvino in a letter of about 1292.
Indonesian rafts transported cinnamon directly from the Moluccas to East Africa (see also Rhapta), where local traders then carried it north to Alexandria in Egypt. Venetian traders from Italy held a monopoly on the spice trade in Europe, distributing cinnamon from Alexandria. The disruption of this trade by the rise of other Mediterranean powers, such as the Mamluk sultans and the Ottoman Empire, was one of many factors that led Europeans to search more widely for other routes to Asia.
Early modern period
During the 1500s, Ferdinand Magellan was searching for spices on behalf of Spain; in the Philippines, he found Cinnamomum mindanaense, which was closely related to C. zeylanicum, the cinnamon found in Sri Lanka. This cinnamon eventually competed with Sri Lankan cinnamon, which was controlled by the Portuguese.
In 1638, Dutch traders established a trading post in Sri Lanka, took control of the manufactories by 1640, and expelled the remaining Portuguese by 1658. "The shores of the island are full of it," a Dutch captain reported, "and it is the best in all the Orient. When one is downwind of the island, one can still smell cinnamon eight leagues out to sea." The Dutch East India Company continued to overhaul the methods of harvesting in the wild and eventually began to cultivate its own trees.
In 1767, Lord Brown of the British East India Company established the Anjarakkandy Cinnamon Estate near Anjarakkandy in the Kannur district of Kerala, India. It later became Asia's largest cinnamon estate. The British took control of Ceylon from the Dutch in 1796.
Cultivation
Cinnamon is an evergreen tree characterized by oval-shaped leaves, thick bark and a berry fruit. When harvesting the spice, the bark and leaves are the primary parts of the plant used. However, in Japan, the more pungent roots are harvested in order to produce nikki (ニッキ), a product distinct from cinnamon (シナモン shinamon). Cinnamon is cultivated by growing the tree for two years, then coppicing it, i.e., cutting the stems at ground level. The following year, about a dozen new shoots form from the roots, replacing those that were cut. A number of pests such as Colletotrichum gloeosporioides, Diplodia species and Phytophthora cinnamomi (stripe canker) can affect the growing plants.
The stems must be processed immediately after harvesting while the inner bark is still wet. The cut stems are processed by scraping off the outer bark, then beating the branch evenly with a hammer to loosen the inner bark, which is then pried off in long rolls. Only 0.5 mm of the inner bark is used; the outer, woody portion is discarded, leaving metre-long cinnamon strips that curl into rolls ("quills") on drying. The processed bark dries completely in four to six hours, provided it is in a well-ventilated and relatively warm environment. Once dry, the bark is cut into lengths for sale.
A less than ideal drying environment encourages the proliferation of pests in the bark, which may then require treatment by fumigation with sulphur dioxide. In 2011, the European Union approved the use of sulphur dioxide for the treatment of C. verum bark harvested in Sri Lanka.
Species
A number of species are often sold as cinnamon:
Cinnamomum cassia (cassia or Chinese cinnamon, the most common commercial type in the USA)
C. burmanni (Korintje, Padang cassia, or Indonesian cinnamon)
C. loureiroi (Saigon cinnamon, Vietnamese cassia, or Vietnamese cinnamon)
C. verum (Sri Lanka cinnamon, Ceylon cinnamon or Cinnamomum zeylanicum)
C. citriodorum (Malabar cinnamon)
Cassia induces a strong, spicy flavour and is often used in baking, especially associated with cinnamon rolls, as it handles baking conditions well. Among cassia, Chinese cinnamon is generally medium to light reddish-brown in colour, hard and woody in texture, and thicker (2–3 mm thick), as all of the layers of bark are used. Ceylon cinnamon, using only the thin inner bark, has a lighter brown colour and a finer, less dense, and more crumbly texture. It is subtle and more aromatic in flavour than cassia and it loses much of its flavour during cooking.
The barks of the species are easily distinguished when whole, both in macroscopic and microscopic characteristics. Ceylon cinnamon sticks (quills) have many thin layers and can easily be made into powder using a coffee or spice grinder, whereas cassia sticks are much harder. Indonesian cinnamon is often sold in neat quills made up of one thick layer, capable of damaging a spice or coffee grinder. Saigon cinnamon (C. loureiroi) and Chinese cinnamon (C. cassia) are always sold as broken pieces of thick bark, as the bark is not supple enough to be rolled into quills.
The powdered bark is harder to distinguish, but if it is treated with tincture of iodine (a test for starch), little effect is visible with pure Ceylon cinnamon; however, when Chinese cinnamon is present, a deep-blue tint is produced.
Grading
The Sri Lankan grading system divides the cinnamon quills into four groups:
Alba, less than 6 mm in diameter
Continental, less than 16 mm in diameter
Mexican, less than 19 mm in diameter
Hamburg, less than 32 mm in diameter
These groups are further divided into specific grades. For example, Mexican is divided into M00000 special, M000000 and M0000, depending on quill diameter and number of quills per kilogram. Any pieces of bark less than 106 mm long are categorized as quillings. Featherings are the inner bark of twigs and twisted shoots. Chips are trimmings of quills, outer and inner bark that cannot be separated, or the bark of small twigs.
Production
In 2021, four countries accounted for 98% of the world's cinnamon production, a total of 226,753 tonnes: China, Indonesia, Vietnam, and Sri Lanka.
Counterfeit
True cinnamon from C. verum bark can be mixed with cassia (C. cassia) as a counterfeit and falsely marketed as authentic cinnamon. In one analysis, authentic Ceylon cinnamon bark contained 12–143 mg/kg of coumarin, a phenolic typically low in content in true cinnamon, but market samples contained coumarin at levels as high as 3462 mg/kg, indicating probable contamination with cassia in the counterfeit cinnamon. ConsumerLab.com found the same problem in a 2020 analysis; "a supplement that contained the highest amount of coumarin was labeled as Ceylon cinnamon".
Food uses
Cinnamon bark is used as a spice. It is principally employed in cookery as a condiment and flavouring material. It is used in the preparation of chocolate, especially in Mexico. Cinnamon is often used in savoury dishes of chicken and lamb. In the United States and Europe, cinnamon and sugar are often used to flavour cereals, bread-based dishes such as toast, and fruits, especially apples; a cinnamon and sugar mixture (cinnamon sugar) is sold separately for such purposes. It is also used in Portuguese and Turkish cuisine for both sweet and savoury dishes. Cinnamon can also be used in pickling, and in Christmas drinks such as eggnog. Cinnamon powder has long been an important spice in enhancing the flavour of Persian cuisine, used in a variety of thick soups, drinks and sweets.
Cinnamon is a common ingredient in Jewish cuisine across various communities. In Sephardic cooking, it is incorporated into vegetable stews and desserts such as tishpishti and travados, both of which are soaked in honey. In Ashkenazi cuisine, cinnamon features in dishes like honey cakes, and kugels. It is also one of "four sibling spices" (rempah empat beradik) essential in Malay cuisine along with clove, star anise and cardamom.
Nutrient composition
Ground cinnamon is 11% water, 81% carbohydrates (including 53% dietary fiber), 4% protein and 1% fat.
Characteristics
Texture
Ceylon cinnamon may be crushed into small pieces by hand while Indonesian cinnamon requires a powerful blender.
Flavour, aroma and taste
The flavour of cinnamon is due to the aromatic essential oil that makes up 0.5 to 1% of its composition.
Cinnamon bark can be macerated, then extracted in 80% ethanol, to a tincture.
Cinnamon essential oil can be prepared by roughly pounding the bark, macerating it in sea water, and then quickly distilling the whole. It is of a golden-yellow colour, with the characteristic odour of cinnamon and a very hot aromatic taste.
Cinnamon oil nanoemulsion can be made with polysorbate 80, cinnamon essential oil, and water, by ultrasonic emulsification.
Cinnamon oil macroemulsion can be made with a dispersing emulsifying homogenizer.
The pungent taste and scent come from cinnamaldehyde, about 90% of the essential oil from cinnamon bark. Cinnamaldehyde decomposes, in high humidity and high temperatures, to styrene, and, by reaction with oxygen as it ages, it darkens in colour and forms resinous compounds.
Cinnamon constituents include some 80 aromatic compounds, including eugenol, found in the oil from leaves or bark of cinnamon trees.
Alcohol flavorant
Cinnamon is used as a flavoring in cinnamon liqueur, such as cinnamon-flavored whiskey in the United States, and rakomelo, a cinnamon brandy, in Greece.
Health-related research
Cinnamon has a long history of use in traditional medicine as a digestive aid. However, contemporary studies are unable to find evidence of any significant medicinal or therapeutic effect.
Reviews of clinical trials reported lowering of fasting plasma glucose and inconsistent effects on hemoglobin A1C (HbA1c, an indicator of chronically elevated plasma glucose). Four of the reviews reported a decrease in fasting plasma glucose, only two reported lower HbA1c, and one reported no change to either measure. The Cochrane review noted that trial durations were limited to 4 to 16 weeks, and that no trials reported on changes to quality of life, morbidity or mortality rate. The Cochrane authors' conclusion was: "There is insufficient evidence to support the use of cinnamon for type 1 or type 2 diabetes mellitus." Citing the Cochrane review, the U.S. National Center for Complementary and Integrative Health stated: "Studies done in people don't support using cinnamon for any health condition." However, the results of the studies are difficult to interpret because it is often unclear what type of cinnamon and what part of the plant were used.
A meta-analysis of cinnamon supplementation trials with lipid measurements reported lower total cholesterol and triglycerides, but no significant changes in LDL-cholesterol or HDL-cholesterol. Another reported no change to body weight or insulin resistance.
Toxicity
A systematic review of adverse events as a result of cinnamon use reported gastrointestinal disorders and allergic reactions as the most frequently reported side effects.
In 2008, the European Food Safety Authority considered the toxicity of coumarin, a component of cinnamon, and confirmed a maximum recommended tolerable daily intake (TDI) of 0.1 mg of coumarin per kg of body weight. Coumarin is known to cause liver and kidney damage in high concentrations, and metabolic effects in humans with CYP2A6 polymorphism. Based on this assessment, the European Union set a guideline for maximum coumarin content in foodstuffs of 50 mg per kg of dough in seasonal foods, and 15 mg per kg in everyday baked foods. The maximum recommended TDI of 0.1 mg of coumarin per kg of body weight equates to 5 mg of coumarin (or 5.6 g of C. verum at 0.9 mg coumarin per gram) for a body weight of 50 kg.
Due to the variable amount of coumarin in C. cassia, usually well over 1.0 mg of coumarin per g of cinnamon and sometimes up to 12 times that, C. cassia has a low safe-intake-level upper limit to adhere to the above TDI. In contrast, C. verum has only trace amounts of coumarin.
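The arithmetic behind these limits is simple enough to sketch; the following is an illustrative calculation (the 4.0 mg/g cassia figure is an assumed value within the range described above, not a number from the source):

```python
TDI_MG_PER_KG = 0.1  # EFSA tolerable daily intake of coumarin, mg per kg body weight

def max_daily_cinnamon_g(body_weight_kg, coumarin_mg_per_g):
    """Grams of cinnamon per day that keep coumarin intake at or below the TDI."""
    tolerable_coumarin_mg = TDI_MG_PER_KG * body_weight_kg
    return tolerable_coumarin_mg / coumarin_mg_per_g

# 50 kg person eating C. verum at 0.9 mg coumarin per gram (figures from the text):
print(round(max_daily_cinnamon_g(50, 0.9), 1))  # 5.6 g, matching the estimate above
# Cassia often runs well over 1.0 mg/g, so its safe upper limit is far lower:
print(round(max_daily_cinnamon_g(50, 4.0), 2))  # 1.25 g at an assumed 4.0 mg/g
```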
In March 2024, the US Food and Drug Administration recommended a voluntary recall on 6 brands of cinnamon due to contamination with lead, after an investigation stemming from 500 reports of child lead poisoning across the US. The FDA determined that cinnamon was adulterated with lead chromate.
Gallery
See also
Canella, a plant known as "wild cinnamon" or "white cinnamon"
Cinnamomea, a Neo-Latin adjective meaning 'cinnamon-coloured'
Cinnamon challenge
List of culinary herbs and spices
Notes
References
Further reading
Wijesekera R. O. B., Ponnuchamy S., Jayewardene A. L., "Cinnamon" (1975) monograph published by CISIR, Colombo, Sri Lanka
External links
"In pictures: Sri Lanka's spice of life". BBC News.
Antifungals
Cinnamomum
Incense material
Medicinal plants
Non-timber forest products
Spices
Indian spices
Sri Lankan spices | Cinnamon | Physics | 3,992 |
35,831,726 | https://en.wikipedia.org/wiki/Bandwidth-sharing%20game | A bandwidth-sharing game is a type of resource allocation game designed to model the real-world allocation of bandwidth to many users in a network. The game is popular in game theory because the conclusions can be applied to real-life networks.
The game
The game involves $n$ players.
Each player $i$ has utility $U_i(x_i)$ for $x_i$ units of bandwidth.
Player $i$ pays $p x_i$ for $x_i$ units of bandwidth and receives net utility of $U_i(x_i) - p x_i$.
The total amount of bandwidth available is $B$.
Regarding $U_i$, we assume
$U_i$ is increasing and concave;
$U_i$ is continuous.
The game arises from trying to find a price $p$ so that every player individually optimizes their own welfare. This implies every player must individually find $\max_{x_i} \left( U_i(x_i) - p x_i \right)$. Solving for the maximum yields $U_i'(x_i) = p$.
Problem
With this maximum condition, the game then becomes a matter of finding a price $p$ at which the individually optimal demands exactly exhaust the available bandwidth, $\sum_i x_i = B$. Such a price is called a market clearing price.
Possible solution
A popular idea to find the price is a method called fair sharing. In this game, every player is asked for the amount they are willing to pay for the given resource, denoted by $w_i$. The resource is then distributed in amounts $x_i$ by the formula $x_i = \frac{w_i}{\sum_j w_j} B$. This method yields an effective price $p = \frac{\sum_j w_j}{B}$.
This price can be proven to be market clearing; thus, the distribution is optimal. The proof is as follows:
Proof
Each player $i$ chooses $w_i$ to maximize $U_i\!\left( \frac{w_i}{\sum_j w_j} B \right) - w_i$. We have
$\frac{\partial}{\partial w_i} \left[ U_i\!\left( \frac{w_i}{\sum_j w_j} B \right) - w_i \right] = 0$.
Hence,
$U_i'(x_i)\, \frac{B \left( \sum_j w_j - w_i \right)}{\left( \sum_j w_j \right)^2} = 1,$
from which we conclude
$U_i'(x_i) \left( 1 - \frac{w_i}{\sum_j w_j} \right) = \frac{\sum_j w_j}{B} = p,$
and thus, since $\frac{w_i}{\sum_j w_j} = \frac{x_i}{B}$,
$U_i'(x_i) \left( 1 - \frac{x_i}{B} \right) = p.$
Comparing this result to the equilibrium condition above, we see that when $x_i / B$ is very small, the two conditions coincide, and thus the fair sharing game is almost optimal.
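A minimal numerical sketch of the fair-sharing mechanism (function and variable names are illustrative only):

```python
def fair_share(bids, capacity):
    """Allocate bandwidth in proportion to the bids w_i; return the
    allocations x_i = B * w_i / sum(w) and the effective price p = sum(w) / B."""
    total = sum(bids)
    price = total / capacity
    allocations = [capacity * w / total for w in bids]
    return allocations, price

# Three players bid 1, 2 and 7 for B = 10 units of bandwidth:
x, p = fair_share([1.0, 2.0, 7.0], capacity=10.0)
print(x)  # [1.0, 2.0, 7.0] -- allocations exhaust the capacity
print(p)  # 1.0 -- every player pays the same effective unit price
```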
References
Game theory game classes | Bandwidth-sharing game | Mathematics | 297 |
1,837,480 | https://en.wikipedia.org/wiki/Knudsen%20gas | A Knudsen gas is a gas in a state of such low density that the average distance travelled by the gas molecules between collisions (mean free path) is greater than the diameter of the receptacle that contains it. If the mean free path is much greater than the diameter, the flow regime is dominated by collisions between the gas molecules and the walls of the receptacle, rather than intermolecular collisions with each other. It is named after Martin Knudsen.
Knudsen number
For a Knudsen gas, the Knudsen number must be greater than 1. The Knudsen number can be defined as:
$\mathrm{Kn} = \frac{\lambda}{d}$
where
$\lambda$ is the mean free path [m]
$d$ is the diameter of the receptacle [m].
When $0.1 < \mathrm{Kn} < 10$, the flow regime of the gas is transitional flow. In this regime the intermolecular collisions between gas particles are not yet negligible compared to collisions with the wall. However, when $\mathrm{Kn} > 10$, the flow regime is free molecular flow, so the intermolecular collisions between the particles are negligible compared to the collisions with the wall.
Example
For example, consider a receptacle of air at room temperature and pressure with a mean free path of 68 nm. If the diameter of the receptacle is less than 68 nm, the Knudsen number would be greater than 1, and this sample of air would be considered a Knudsen gas. It would not be a Knudsen gas if the diameter of the receptacle were greater than 68 nm.
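A short check of this example (the mean free path value is taken from the text; the receptacle diameters are assumed for illustration):

```python
MEAN_FREE_PATH = 68e-9  # mean free path of air at room conditions, ~68 nm

def knudsen_number(mean_free_path, diameter):
    """Kn = lambda / d; Kn > 1 indicates a Knudsen gas."""
    return mean_free_path / diameter

print(knudsen_number(MEAN_FREE_PATH, 50e-9))   # 1.36  -> Knudsen gas
print(knudsen_number(MEAN_FREE_PATH, 500e-9))  # 0.136 -> not a Knudsen gas
```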
See also
Free streaming
Kinetic theory
References
Gases
Phases of matter | Knudsen gas | Physics,Chemistry | 322 |
151,183 | https://en.wikipedia.org/wiki/Turpentine | Turpentine (which is also called spirit of turpentine, oil of turpentine, terebenthine, terebenthene, terebinthine and, colloquially, turps) is a fluid obtained by the distillation of resin harvested from living trees, mainly pines. Principally used as a specialized solvent, it is also a source of material for organic syntheses.
Turpentine is composed of terpenes, primarily the monoterpenes alpha- and beta-pinene, with lesser amounts of carene, camphene, limonene, and terpinolene.
Substitutes include white spirit or other petroleum distillates – although the constituent chemicals are very different.
Etymology
The word turpentine derives (via French and Latin) from the Greek word τερεβινθίνη terebinthine, in turn the feminine form (to conform to the feminine gender of the Greek word, which means 'resin') of an adjective (τερεβίνθινος) derived from the Greek noun (τερέβινθος) for the terebinth tree.
Although the word originally referred to the resinous exudate of terebinth trees (e.g. Chios turpentine, Cyprus turpentine, and Persian turpentine), it now refers to that of coniferous trees, namely crude turpentine (e.g. Venice turpentine is the oleoresin of larch), or the volatile oil part thereof, namely oil (spirit) of turpentine; the latter usage is much more common today.
Source trees
Important pines for turpentine production include: maritime pine (Pinus pinaster), Aleppo pine (Pinus halepensis), Masson's pine (Pinus massoniana), Sumatran pine (Pinus merkusii), longleaf pine (Pinus palustris), loblolly pine (Pinus taeda), slash pine (Pinus elliottii), and ponderosa pine (Pinus ponderosa).
Converting crude turpentine to oil of turpentine
Crude turpentine collected from the trees may be evaporated by steam distillation in a copper still. Molten rosin remains in the still bottoms after turpentine has been distilled out. Such turpentine is called gum turpentine. The term gum turpentine may also refer to crude turpentine, which may cause some confusion.
Turpentine may alternatively be extracted from destructive distillation of pine wood, such as shredded pine stumps, roots, and slash, using the light end of the heavy naphtha fraction from a crude oil refinery. Such turpentine is called wood turpentine. Multi-stage counter-current extraction is commonly used, so fresh naphtha first contacts wood leached in previous stages and naphtha laden with turpentine from previous stages contacts fresh wood before vacuum distillation to recover naphtha from the turpentine. Leached wood is steamed for additional naphtha recovery prior to burning for energy recovery.
Sulfate turpentine
When producing chemical wood pulp from pines or other coniferous trees, sulfate turpentine may be condensed from the gas generated in Kraft process pulp digesters. The average yield of crude sulfate turpentine is 5–10 kg/t pulp. Unless burned at the mill for energy production, sulfate turpentine may require additional treatment measures to remove traces of sulfur compounds.
Industrial and other end uses
Solvent
As a solvent, turpentine is used for thinning oil-based paints, for producing varnishes, and as a raw material for the chemical industry. Its use as a solvent in industrialized nations has largely been replaced by the much cheaper turpentine substitutes obtained from petroleum such as white spirit. A solution of turpentine and beeswax or carnauba wax has long been used as a furniture wax.
Lighting
Spirit of turpentine, called camphine, was burned in lamps with glass chimneys from the 1830s through the 1860s. Turpentine blended with grain alcohol was known as burning fluid. Both were used as domestic lamp fuels, gradually replacing whale oil, until kerosene, gas lighting and electric lights began to predominate.
Source of organic compounds
Turpentine is also used as a source of raw materials in the synthesis of fragrant chemical compounds. Commercially used camphor, linalool, alpha-terpineol, and geraniol are all usually produced from alpha-pinene and beta-pinene, which are two of the chief chemical components of turpentine. These pinenes are separated and purified by distillation. The mixture of diterpenes and triterpenes that is left as residue after turpentine distillation is sold as rosin.
Niche uses
Turpentine is also added to many cleaning and sanitary products due to its antiseptic properties and its "clean scent".
Honda motorcycles, first manufactured in 1946, ran on a blend of gasoline and turpentine, due to the scarcity of gasoline in Japan following World War II. The French Emeraude rocket uses a similar fuel mixture. Turpentine has also been researched as a potential biofuel for mixing into gasoline.
In his book If Only They Could Talk, veterinarian and author James Herriot describes the use of the reaction of turpentine with resublimed iodine to "drive the iodine into the tissue", or perhaps just impress the watching customer with a spectacular treatment (a dense cloud of purple smoke).
Safety and health considerations
Turpentine is highly flammable, so much so that it has been considered as an automotive fuel.
Turpentine was extensively added to gin as an adulterant during the Gin Craze.
Turpentine's vapour can irritate the skin and eyes, damage the lungs and respiratory system, as well as the central nervous system when inhaled, and cause damage to the renal system when ingested, among other things. Ingestion can cause burning sensations, abdominal pain, nausea, vomiting, confusion, convulsions, diarrhea, tachycardia, unconsciousness, respiratory failure, and chemical pneumonia.
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for turpentine exposure in the workplace as 100 ppm (560 mg/m3) over an 8-hour workday. The same threshold was adopted by the National Institute for Occupational Safety and Health (NIOSH) as the recommended exposure limit (REL). At levels of 800 ppm (4480 mg/m3), turpentine is immediately dangerous to life and health.
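The two figures are consistent under the standard ppm-to-mg/m3 conversion at 25 °C (molar volume of about 24.45 L/mol); the sketch below uses the molar mass of alpha-pinene as a stand-in for turpentine vapour, an assumption, since turpentine is a mixture:

```python
MOLAR_VOLUME_L = 24.45      # litres per mole of ideal gas at 25 degC, 1 atm
MOLAR_MASS_PINENE = 136.24  # g/mol for alpha-pinene, used as a proxy

def ppm_to_mg_per_m3(ppm, molar_mass=MOLAR_MASS_PINENE):
    """Convert a vapour concentration in ppm to mg per cubic metre."""
    return ppm * molar_mass / MOLAR_VOLUME_L

print(round(ppm_to_mg_per_m3(100)))  # ~557, consistent with the 560 mg/m3 PEL
print(round(ppm_to_mg_per_m3(800)))  # ~4458, close to the 4480 mg/m3 IDLH figure
```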
Folk medicine
Turpentine and petroleum distillates such as coal oil and kerosene, were used in folk medicine for abrasions and wounds, as a treatment for lice, and when mixed with animal fat, as a chest rub or inhaler for nasal and throat ailments. Vicks chest rubs still contain turpentine in their formulations, although not as an active ingredient.
Turpentine, now understood to be dangerous for consumption, was a common medicine among seamen during the Age of Discovery. It was one of several products carried aboard Ferdinand Magellan's fleet during the first circumnavigation of the globe. Taken internally it was used as a treatment for intestinal parasites. This is dangerous, due to the chemical's toxicity.
Turpentine enemas, a very harsh purgative, had formerly been used for stubborn constipation or impaction. They were also given punitively to political dissenters in post-independence Argentina.
See also
McCranie's Turpentine Still – a historic site in Willacoochee, Georgia
Patent medicine – over-the-counter "proprietary" medications
flavored with Aleppo pine resin
Russia leather – a water-resistant leather curried after tanning with a birch oil distillate similar to turpentine
References
External links
Inchem.org, IPCS INCHEM Turpentine classification, hazard, and property table.
CDC - NIOSH Pocket Guide to Chemical Hazards - Turpentine
Household chemicals
Hydrocarbon solvents
Painting materials
Patent medicines
Resins
Terpenes and terpenoids
Papermaking
Non-timber forest products
Wood extracts | Turpentine | Physics,Chemistry | 1,866 |
762,047 | https://en.wikipedia.org/wiki/Freshwater%20ecosystem | Freshwater ecosystems are a subset of Earth's aquatic ecosystems that include the biological communities inhabiting freshwater waterbodies such as lakes, ponds, rivers, streams, springs, bogs, and wetlands. They can be contrasted with marine ecosystems, which have a much higher salinity. Freshwater habitats can be classified by different factors, including temperature, light penetration, nutrients, and vegetation.
There are three basic types of freshwater ecosystems: lentic (slow-moving water, including pools, ponds, and lakes), lotic (faster-moving water, for example creeks and rivers) and wetlands (semi-aquatic areas where the soil is saturated or inundated for at least part of the time). Freshwater ecosystems contain 41% of the world's known fish species.
Freshwater ecosystems have undergone substantial transformations over time, which has impacted various characteristics of the ecosystems. Original attempts to understand and monitor freshwater ecosystems were spurred on by threats to human health (for example cholera outbreaks due to sewage contamination). Early monitoring focused on chemical indicators, then bacteria, and finally algae, fungi and protozoa. A new type of monitoring involves quantifying differing groups of organisms (macroinvertebrates, macrophytes and fish) and measuring the stream conditions associated with them.
Threats to freshwater biodiversity include overexploitation, water pollution, flow modification, destruction or degradation of habitat, and invasion by exotic species. Climate change is putting further pressure on these ecosystems because water temperatures have already increased by about 1 °C, and there have been significant declines in ice coverage which have caused subsequent ecosystem stresses.
Types
There are three basic types of freshwater ecosystems: Lentic (slow moving water, including pools, ponds, and lakes), lotic (faster moving water, for example streams and rivers) and wetlands (areas where the soil is saturated or inundated for at least part of the time). Limnology (and its branch freshwater biology) is a study about freshwater ecosystems.
Lentic ecosystems
Lotic ecosystems
Wetlands
Threats
Biodiversity
Five broad threats to freshwater biodiversity include overexploitation, water pollution, flow modification, destruction or degradation of habitat, and invasion by exotic species. Recent extinction trends can be attributed largely to sedimentation, stream fragmentation, chemical and organic pollutants, dams, and invasive species. Common chemical stresses on freshwater ecosystem health include acidification, eutrophication and copper and pesticide contamination.
Freshwater biodiversity faces many threats. The World Wide Fund for Nature's Living Planet Index noted an 83% decline in the populations of freshwater vertebrates between 1970 and 2014. These declines continue to outpace contemporaneous declines in marine or terrestrial systems. The causes of these declines are related to:
A rapidly changing climate
Online wildlife trade and invasive species
Infectious disease
Toxic algae blooms
Hydropower damming and fragmenting of half the world's rivers
Emerging contaminants, such as hormones
Engineered nanomaterials
Microplastic pollution
Light and noise interference
Saltier coastal freshwaters due to sea level rise
Calcium concentrations falling below the needs of some freshwater organisms
The additive—and possibly synergistic—effects of these threats
Invasive species
Invasive plants and animals are a major issue to freshwater ecosystems, in many cases outcompeting native species and altering water conditions. Introduced species are especially devastating to ecosystems that are home to endangered species. An example of this being the Asian carp competing with the paddlefish in the Mississippi river. Common causes of invasive species in freshwater ecosystems include aquarium releases, introduction for sport fishing, and introduction for use as a food fish.
Extinction of freshwater fauna
Over 123 freshwater fauna species have gone extinct in North America since 1900. Of North American freshwater species, an estimated 48.5% of mussels, 22.8% of gastropods, 32.7% of crayfishes, 25.9% of amphibians, and 21.2% of fish are either endangered or threatened. Extinction rates of many species may increase severely into the next century because of invasive species, loss of keystone species, and species which are already functionally extinct (e.g., species which are not reproducing). Even using conservative estimates, freshwater fish extinction rates in North America are 877 times higher than background extinction rates (1 in 3,000,000 years). Projected extinction rates for freshwater animals are around five times greater than for land animals, and are comparable to the rates for rainforest communities. Given the dire state of freshwater biodiversity, a team of scientists and practitioners from around the globe recently drafted an Emergency Action plan to try and restore freshwater biodiversity.
Current freshwater biomonitoring techniques focus primarily on community structure, but some programs measure functional indicators like biochemical (or biological) oxygen demand, sediment oxygen demand, and dissolved oxygen. Macroinvertebrate community structure is commonly monitored because of the diverse taxonomy, ease of collection, sensitivity to a range of stressors, and overall value to the ecosystem. Additionally, algal community structure (often using diatoms) is measured in biomonitoring programs. Algae are also taxonomically diverse, easily collected, sensitive to a range of stressors, and overall valuable to the ecosystem. Algae grow very quickly and communities may represent fast changes in environmental conditions.
In addition to community structure, responses to freshwater stressors are investigated by experimental studies that measure organism behavioural changes, altered rates of growth, reproduction or mortality. Experimental results on single species under controlled conditions may not always reflect natural conditions and multi-species communities.
The use of reference sites is common when defining the idealized "health" of a freshwater ecosystem. Reference sites can be selected spatially by choosing sites with minimal impacts from human disturbance and influence. However, reference conditions may also be established temporally: preserved indicators such as diatom valves, macrophyte pollen, insect chitin, and fish scales can be used to determine conditions prior to large-scale human disturbance. These temporal reference conditions are often easier to reconstruct in standing water than moving water because stable sediments can better preserve biological indicator materials.
Climate change
The effects of climate change greatly complicate and frequently exacerbate the impacts of other stressors that threaten many fish, invertebrates, phytoplankton, and other organisms. Climate change is increasing the average temperature of water bodies, and worsening other issues such as changes in substrate composition, oxygen concentration, and other system changes that have ripple effects on the biology of the system. Water temperatures have already increased by around 1 °C, and significant declines in ice coverage have caused subsequent ecosystem stresses.
See also
Ecology
Freshwater
References
Freshwater ecology
Ecosystems
Limnology
Fisheries science
Systems ecology
Water | Freshwater ecosystem | Biology,Environmental_science | 1,345 |
58,513,265 | https://en.wikipedia.org/wiki/Silica%20cycle | The silica cycle is the biogeochemical cycle in which biogenic silica is transported between the Earth's systems. Silicon is considered a bioessential element and is one of the most abundant elements on Earth. The silica cycle has significant overlap with the carbon cycle (see carbonate–silicate cycle) and plays an important role in the sequestration of carbon through continental weathering, biogenic export and burial as oozes on geologic timescales.
Overview
Silicon is the eighth most abundant element in the universe and the second most abundant element in the Earth's crust (the most abundant is oxygen). The weathering of the Earth's crust by rainwater rich in carbon dioxide is a key process in the control of atmospheric carbon dioxide. It results in the generation of silicic acid in aqueous environments. Silicic acid, Si(OH)4, is a hydrated form of silica found only as an unstable solution in water, yet it plays a central role in the silica cycle.
Silicifiers are organisms that use silicic acid to precipitate biogenic silica, SiO2. Biogenic silica, also referred to as opal, is precipitated by silicifiers as internal structures and/or external structures. Silicifiers are among the most important aquatic organisms. They include micro-organisms such as diatoms, rhizarians, silicoflagellates and several species of choanoflagellates, as well as macro-organisms such as siliceous sponges. Phototrophic silicifiers, such as diatoms, globally consume vast amounts of silicon along with nitrogen (N), phosphorus (P), and inorganic carbon (C), connecting the biogeochemistry of these elements and contributing to the sequestration of atmospheric carbon dioxide in the ocean. Heterotrophic organisms like rhizarians, choanoflagellates, and sponges produce biogenic silica independently of the photoautotrophic processing of C and N.
The diatoms dominate the fixation and export of particulate matter in the contemporary marine silica cycle. This includes the export of organic carbon from the euphotic zone to the deep ocean via the biological carbon pump. As a result, diatoms, and other silica-secreting organisms play crucial roles in the global carbon cycle by sequestering carbon in the ocean. The connection between biogenic silica and organic carbon, together with the significantly higher preservation potential of biogenic siliceous compounds compared to organic carbon makes opal accumulation records of interest in paleoceanography and paleoclimatology.
Understanding the silica cycle is important for understanding the functioning of marine food webs, biogeochemical cycles, and the biological pump. Silicic acid is delivered to the ocean through six pathways as illustrated in the diagram above, which all ultimately derive from the weathering of the Earth's crust.
Terrestrial silica cycling
Silica is an important nutrient utilized by plants, trees, and grasses in the terrestrial biosphere. Silicate is transported by rivers and can be deposited in soils in the form of various siliceous polymorphs. Plants can readily uptake silicate in the form of H4SiO4 for the formation of phytoliths. Phytoliths are tiny rigid structures found within plant cells that aid in the structural integrity of the plant. Phytoliths also serve to protect the plants from consumption by herbivores who are unable to consume and digest silica-rich plants efficiently. Silica release from phytolith degradation or dissolution is estimated to occur at a rate double that of global silicate mineral weathering. Considering biogeochemical cycling within ecosystems, the import and export of silica to and from terrestrial ecosystems is small.
Weathering
Silicate minerals are abundant in rock formations all over the planet, comprising approximately 90% of the Earth's crust. The primary source of silicate to the terrestrial biosphere is weathering. The process and rate of weathering is variable, depending on rainfall, runoff, vegetation, lithology, and topography.
Given sufficient time, rainwater can dissolve even a highly resistant silicate-based mineral such as quartz. Water breaks the bonds between atoms in the crystal. The overall reaction for the dissolution of quartz results in silicic acid:
SiO2(s) + 2H2O(l) = H4SiO4(aq)
Another example of a silicate-based mineral is enstatite (MgSiO3). Rainwater weathers this to silicic acid as follows:
MgSiO3(s) + 2CO2(g) + H2O(l) = Mg2+(aq) + 2HCO3−(aq) + SiO2(aq)
Reverse weathering
In recent years, the effect of reverse weathering on biogenic silica has been of interest in quantifying the silica cycle. During weathering, dissolved silica is delivered to oceans through glacial runoff and riverine inputs. This dissolved silica is taken up by a multitude of marine organisms, such as diatoms, and is used to create protective shells. When these organisms die, they sink through the water column. Without active production of biogenic SiO2, the mineral begins diagenesis. Conversion of this dissolved silica into authigenic silicate clays through the process of reverse weathering constitutes a removal of 20-25% of silicon input.
Reverse weathering is often found in river deltas as these systems have high sediment accumulation rates and are observed to undergo rapid diagenesis. The formation of silicate clays removes reactive silica from the pore waters of sediment, increasing the concentration of silica found in the rocks that form in these locations.
Silicate weathering also appears to be a dominant process in deeper methanogenic sediments, whereas reverse weathering is more common in surface sediments, but still occurs at a lower rate.
Sinks
The major sink of the terrestrial silica cycle is export to the ocean by rivers. Silica that is stored in plant matter or dissolved can be exported to the ocean by rivers. The rate of this transport is approximately 6 Tmol Si yr−1. This is the major sink of the terrestrial silica cycle, as well as the largest source of the marine silica cycle. A minor sink for terrestrial silica is silicate that is deposited in terrestrial sediments and eventually exported to the Earth's crust.
Marine inputs
Riverine
As of 2021, the best estimate of the total riverine input of silicic acid is 6.2 (±1.8) Tmol Si yr−1. This is based on data representing 60% of the world river discharge and a discharge-weighted average silicic acid riverine concentration of 158 μM−Si. However, silicic acid is not the only way silicon can be transferred from terrestrial to riverine systems, since particulate silicon can also be mobilised in crystallised or amorphous forms. According to Saccone and others in 2007, the term "amorphous silica" includes biogenic silica (from phytoliths, freshwater diatoms, sponge spicules), altered biogenic silica, and pedogenic silicates, all three of which can have similarly high solubilities and reactivities. Delivery of amorphous silica to the fluvial system has been reviewed by Frings and others in 2016, who suggested a value of 1.9(±1.0) Tmol Si yr−1. Therefore, the total riverine input is 8.1(±2.0) Tmol Si yr−1.
Aeolian
No progress has been made regarding aeolian dust deposition into the ocean and subsequent release of silicic acid via dust dissolution in seawater since 2013, when Tréguer and De La Rocha summed the flux of particulate dissolvable silica and wet deposition of silicic acid through precipitation. Thus, the best estimate for the aeolian flux of silicic acid, FA, remains 0.5(±0.5) Tmol Si yr−1.
Sandy beaches
A 2019 study proposed that, in the surf zone of beaches, wave action disturbs abiotic sand grains and dissolves them over time. To test this, the researchers placed sand samples in closed containers with different kinds of water and rotated the containers to simulate wave action. They discovered that the higher the rock/water ratio within the container, and the faster the container spun, the more silica dissolved into solution. After analyzing and upscaling their results, they estimated that from 3.2(±1.0) to 5.0(±2.0) Tmol Si yr−1 of lithogenic dissolved silicon (DSi) could enter the ocean from sandy beaches, a massive increase over a previous estimate of 0.3 Tmol Si yr−1. If confirmed, this represents a significant input of lithogenic DSi that was previously ignored.
Marine silica cycling
Siliceous organisms in the ocean, such as diatoms and radiolaria, are the primary sink of dissolved silicic acid into opal silica. Only 3% of the Si molecules dissolved in the ocean are exported and permanently deposited in marine sediments on the seafloor each year, demonstrating that silicon recycling is a dominant process in the oceans. This rapid recycling is dependent on the dissolution of silica in organic matter in the water column, followed by biological uptake in the photic zone. The estimated residence time of the silica biological reservoir is about 400 years. Opal silica is predominantly undersaturated in the world's oceans. This undersaturation promotes rapid dissolution as a result of constant recycling and long residence times. The estimated turnover time of Si is 1.5×10^4 years. The total net inputs and outputs of silica in the ocean are 9.4 ± 4.7 Tmol Si yr−1 and 9.9 ± 7.3 Tmol Si yr−1, respectively.
Biogenic silica production in the photic zone is estimated to be 240 ± 40 Tmol Si yr−1. Dissolution in the surface removes roughly 135 Tmol Si yr−1, while the remaining Si is exported to the deep ocean within sinking particles. In the deep ocean, another 26.2 Tmol Si yr−1 is dissolved before being deposited to the sediments as opal rain. Over 90% of the silica here is dissolved, recycled and eventually upwelled for use again in the euphotic zone.
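The flux figures quoted above can be combined with simple arithmetic into a one-box budget. The sketch below (Python; values in Tmol Si per year, taken directly from this section; the derived quantities are plain differences and are illustrative only):

```python
# A minimal box-model sketch of the marine silica fluxes quoted above.
gross_production = 240.0      # biogenic silica produced in the photic zone
surface_dissolution = 135.0   # redissolved in surface waters
deep_dissolution = 26.2       # redissolved in the deep ocean

export_to_deep = gross_production - surface_dissolution    # ~105
opal_rain_to_sediment = export_to_deep - deep_dissolution  # ~78.8

# Fraction of gross production that survives to reach the sediments:
survival = opal_rain_to_sediment / gross_production
print(f"Export to deep ocean: {export_to_deep:.1f} Tmol Si/yr")
print(f"Opal rain to sediments: {opal_rain_to_sediment:.1f} Tmol Si/yr")
print(f"Fraction of production reaching sediments: {survival:.1%}")
```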
Sources
The major sources of marine silica include rivers, groundwater flux, seafloor weathering inputs, hydrothermal vents, and atmospheric deposition (aeolian flux). Rivers are by far the largest source of silica to the marine environment, accounting for up to 90% of all the silica delivered to the ocean. A source of silica to the marine biological silica cycle is silica that has been recycled by upwelling from the deep ocean and seafloor.
The diagram on low-temperature processes shows how these processes can control the dissolution of siliceous minerals (either amorphous or crystallized) in seawater, in the coastal zone, and in the deep ocean, feeding submarine groundwater (FGW) and dissolved silicon in seawater and sediments (FW). These processes correspond to low and medium energy flux dissipated per volume of a given siliceous particle in the coastal zone, the continental margins, and the abysses, and to high-energy flux dissipated in the surf zone.
Sinks
Rapid dissolution in the surface removes roughly 135 Tmol opal Si yr−1, converting it back to soluble silicic acid that can be used again for biomineralization. The remaining opal silica is exported to the deep ocean in sinking particles. In the deep ocean, another 26.2 Tmol Si yr−1 is dissolved before being deposited to the sediments as opal silica. At the sediment-water interface, over 90% of the silica is recycled and upwelled for use again in the photic zone. Biogenic silica production in the photic zone is estimated to be 240 ± 40 Tmol Si yr−1. The residence time on a biological timescale is estimated to be about 400 years, with each molecule of silica recycled 25 times before sediment burial.
Deep seafloor deposition is the largest long-term sink of the marine silica cycle (6.3 ± 3.6 Tmol Si year−1), and is roughly balanced by the sources of silica to the ocean. The silica deposited in the deep ocean is primarily in the form of siliceous ooze. When opal silica accumulates faster than it dissolves, it is buried and can provide a diagenetic environment for marine chert formation. The processes leading to chert formation have been observed in the Southern Ocean, where siliceous ooze accumulation is the fastest. Chert formation however can take tens of millions of years. Skeleton fragments from siliceous organisms are subject to recrystallization and cementation. Chert is the main fate of buried siliceous ooze and permanently removes silica from the oceanic silica cycle.
The siliceous ooze is eventually subducted under the crust and metamorphosed in the upper mantle. In the mantle, silicate minerals are formed from the oozes and are eventually uplifted to the surface. At the surface, silica can enter the cycle again through weathering. This process can take tens of millions of years. The only other major sink of silica in the ocean is burial along continental margins (3.6 ± 3.7 Tmol Si yr−1), primarily in the form of siliceous sponges. Due to the high degrees of uncertainty in source and sink estimations, it is difficult to conclude whether the marine silica cycle is in equilibrium. The residence time of silica in the oceans is estimated to be about 10,000 years. Silica can also be removed from the cycle by becoming chert and being permanently buried.
Anthropogenic influences
The rise in agriculture over the past 400 years has increased the exposure of rocks and soils, which has resulted in increased rates of silicate weathering. In turn, the leaching of amorphous silica stocks from soils has also increased, delivering higher concentrations of dissolved silica in rivers. Conversely, increased damming has led to a reduction in silica supply to the ocean due to uptake by freshwater diatoms behind dams. The dominance of non-siliceous phytoplankton due to anthropogenic nitrogen and phosphorus loading, and enhanced silica dissolution in warmer waters, have the potential to limit silicon ocean sediment export in the future.
In 2019 a group of scientists suggested acidification is reducing diatom silica production in the Southern Ocean.
Role in climate regulation
The silica cycle plays an important role in long term global climate regulation. The global silica cycle also has large effects on the global carbon cycle through the carbonate-silicate cycle. The process of silicate mineral weathering transfers atmospheric CO2 to the hydrologic cycle through the chemical reaction displayed above. Over geologic timescales, the rates of weathering change due to tectonic activity. During a time of high uplift rate, silicate weathering increases which results in high CO2 uptake rates, offsetting increased volcanic CO2 emissions associated with the geologic activity. This balance of weathering and volcanoes is part of what controls the greenhouse effect and ocean pH over geologic time scales.
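This coupling of weathering, carbonate burial, and volcanism is often summarized by the net carbonate–silicate (Urey) reaction; the schematic below is a standard textbook simplification rather than a formula taken from this article:

```latex
% Net carbonate-silicate (Urey) reaction: the forward direction
% (silicate weathering followed by carbonate deposition) consumes CO2;
% metamorphism and volcanism run it in reverse, returning CO2.
\[
  \mathrm{CaSiO_3} + \mathrm{CO_2} \rightleftharpoons \mathrm{CaCO_3} + \mathrm{SiO_2}
\]
```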
Biogenic silica accumulation on the sea floor contains a great deal of information about where in the ocean export production has occurred on time scales ranging from hundreds to millions of years. For this reason, opal deposition records provide valuable information regarding large-scale oceanographic reorganizations in the geological past, as well as paleoproductivity. The mean oceanic residence time for silicate is approximately 10,000–15,000 years. This relatively short residence time makes oceanic silicate concentrations and fluxes sensitive to glacial/interglacial perturbations, and thus an excellent proxy for evaluating climate changes.
Isotope ratios of oxygen (18O:16O) and silicon (30Si:28Si) are analysed from biogenic silica preserved in lake and marine sediments to derive records of past climate change and nutrient cycling (De La Rocha, 2006; Leng and Barker, 2006). This is a particularly valuable approach considering the role of diatoms in global carbon cycling. In addition, isotope analyses from BSi are useful for tracing past climate changes in regions such as the Southern Ocean, where few biogenic carbonates are preserved.
The silicon isotope compositions of fossil sponge spicules (δ30Si) are increasingly used to estimate the level of silicic acid in marine settings throughout geological history, enabling the reconstruction of past silica cycles.
See also
Carbon cycle
Lithogenic silica
Oxygen cycle
Silicification
Silicon isotope biogeochemistry
References
Biogeochemical cycle
Silicon | Silica cycle | Chemistry | 3,564 |
56,385,031 | https://en.wikipedia.org/wiki/Camidanlumab%20tesirine | Camidanlumab tesirine (Cami-T or ADCT-301) is an antibody-drug conjugate (ADC) composed of a human antibody that binds to the protein CD25, conjugated to a pyrrolobenzodiazepine dimer toxin.
The experimental drug, developed by ADC Therapeutics, is being tested in clinical trials for the treatment of B-cell Hodgkin's lymphoma (HL) and non-Hodgkin lymphoma (NHL), and for the treatment of B-cell acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML).
Technology
The human monoclonal antibody is conjugated via a cleavable linker to a cytotoxic (anticancer) pyrrolobenzodiazepine (PBD) dimer. The antibody binds to CD25, which is the alpha chain of the interleukin 2 receptor IL2RA. This molecule is expressed mainly on activated T- and B-cells, both of which are types of white blood cells that play a role in the human immune system. CD25 is over-expressed in a wide range of hematological malignancies, such as leukemias and lymphomas. After binding to a CD25-expressing cell, the antibody is internalized into the cell, where enzymes release the cytotoxic drug. PBD dimers work by crosslinking specific sites of the DNA, blocking the cancer cells' division and causing the cells to die. As a class of DNA-crosslinking agents, they are significantly more potent than systemic chemotherapeutic drugs.
Clinical trials
Two phase I trials are evaluating the drug in patients with relapsed or refractory Hodgkin's and non-Hodgkin's lymphoma and relapsed or refractory CD25-positive acute myeloid leukemia or acute lymphoblastic leukemia. At the 59th American Society of Hematology (ASH) Annual Meeting, interim results were presented from a Phase I, open-label, single-agent, dose-escalating study designed to evaluate camidanlumab tesirine in relapsed or refractory Hodgkin's or non-Hodgkin's lymphoma. Among the patients enrolled at the time of the data cutoff, the overall response rate was 78% (including a 44% complete response rate) in patients with relapsed or refractory Hodgkin's lymphoma.
References
Experimental cancer drugs
Antibody-drug conjugates | Camidanlumab tesirine | Biology | 567 |
10,348,500 | https://en.wikipedia.org/wiki/Material%20flow%20management | Material flow management (MFM) is an economic focused method of analysis and reformation of goods production and subsequent waste through the lens of material flows, incorporating themes of sustainability and the theory of a circular economy. It is used in social, medical, and urban contexts. However, MFM has grown in the field of industrial ecology, combining both technical and economic approaches to minimize waste that impacts economic prosperity and the environment. It has been heavily utilized by the country of Germany, but it has been applied to the industries of various other countries. The material flow management process utilizes the Sankey diagram, and echoes the circular economy model, while being represented in media environments as a business model which may help lower the costs of production and waste.
Context
Material flow management began as a largely academic discourse, eventually becoming an actual tool implemented by both countries and industries.
The first clear suggestion of material flow management was that of Robert A. Frosch and Nicholas E. Gallopoulos. Published in the Scientific American Journal in 1989, Frosch and Gallopoulos introduced and recommended the optimization of waste from industrial processes to then be reused for another. While lacking in detail, the analysis of material flow management continued to develop years later, with Robert Socolow and Valerie Thomas beginning to support findings with data, publishing their work in the Journal of Industrial Ecology in 1997.
Material flow management was established as a policy at the 1992 Rio De Janeiro United Nations Conference on Environment and Development (UNCED), or UN "Earth Summit" Conference. The event was later credited as an advancement towards three UN treaties: Framework Convention on Climate Change, the Convention on Biological Diversity, and the Convention to Combat Desertification.
Material flow management has been credited as a factor in environmental sustainability and environmental management, given its focus on responsible management of ecosystems and ecosystem services for current use, and that of future generations.
Uses and applications
One of the terms used in academic and practical discussions of material flow management is "material flow analysis" (MFA), which is identified as part of the MFM process. MFA is the more target-oriented analysis of substance flow within a system of production, especially within a company.
Material flow analysis is the responsibility of both organized governments and industries. While policies produced by governmental bodies create a framework, the actual design and implementation are done by industries. There are several stakeholders involved in these processes.
Material flow management assessment began to take country- and government-focused approaches following a 1997 publication by the World Resources Institute for the Netherlands and Germany. It displayed the total flow, soon adjusted to divide overall flows into its major constituents. In 2002, the United States Environmental Protection Agency released the report, Beyond RCRA: Waste and Materials Management in the Year 2020, finding that it is time for society to shift from a waste management-focused environmental plan to a material management-focused plan.
Another assessment, conducted by Taylor Searcy in 2017, examined revitalizing Fiji's sustainable sea transportation industry to improve socio-economic and environmental impacts.
A 2019 study of the material flow in Brazil's mortar and concrete supply chain concluded that in terms of material use efficiency, the ratio of product to material consumption results in a low score, with the most outstanding inefficient processes being quarry waste and building waste at extraction and construction sites.
Government policies
The United States began seriously incorporating material flow management in its environmental policies with the Resource Conservation and Recovery Act of 1976. This gave the government the ability to control hazardous waste produced at all steps of production. Eventually, Congress helped strengthen the RCRA with the Hazardous and Solid Waste Amendments of 1984, incorporating more preventative policies.
In 2006, Israel released a Sustainable Solid Waste Management Plan, outlining green goals and priorities for the country's waste system, including economic tools of execution. Policies for household recycling and waste collection separation were then solidified with the 2010 Recycling Action Plan.
Korea has also introduced various policies that have executed MFM, specifically in regard to food waste. A 2005 ban on putting untreated food in landfills was followed by a 2012 ban on ocean dumping. In addition to these environmental initiatives, the country combined the economic and social aspects of MFM using food waste agreements with vital economic sectors, as well as public awareness campaigns.
Sankey diagram
An important tool for MFM is the Sankey diagram. It was developed by Irish engineer Riall Sankey to analyze the efficiency of steam engines and has since become a tool in industrial engineering and science. Sankey diagrams are a visual representation of industrial ecology. While they were mostly used in historical contexts, they are useful for assessing ecological impacts.
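As an illustration, such a diagram can be generated programmatically; the sketch below uses the open-source Plotly library for Python, and all node names and flow values are hypothetical, chosen only to show the structure:

```python
# A minimal sketch of a material-flow Sankey diagram using Plotly
# (pip install plotly). Node names and flow values are hypothetical.
import plotly.graph_objects as go

labels = ["Raw material", "Production", "Product", "Waste", "Recycled input"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[0, 1, 1, 3],     # indices into `labels`: flow origins
        target=[1, 2, 3, 1],     # flow destinations
        value=[100, 70, 30, 12]  # flow magnitudes (e.g., tonnes)
    ),
))
fig.update_layout(title_text="Hypothetical material flow", font_size=12)
fig.show()
```

The link from "Waste" back into "Production" is the circular-economy loop discussed below: material that would otherwise leave the system re-enters it as an input.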
Circular economy
A circular economy is a model of resource production and consumption in any economy that involves sharing, leasing, reusing, repairing, refurbishing, and recycling existing materials and products. The circular economy, an economic system still in development (not yet widely adopted), is intended to model itself after the material and energy flows of biological systems. Focusing on society-wide benefits, it is designed to eliminate waste and pollution and to keep products and materials in the system for as long as possible. Applications of the circular economy in the European Union have produced evidence of practicality, estimating that implementation in agricultural, chemical, and construction sectors could reduce up to 7.5 billion tonnes of CO2e globally.
Media representation
In today's studies on the effectiveness of MFM for improving productivity, its implementation has been debated in relation to government roles in environmental management. With the sharp rise in concern over environmental disaster and the depletion of resources by human activity, MFM could be interpreted both as a promoter of the circular economy and as an analysis of its necessity.
In this light, MFM is being utilized as a business strategy meant to optimize the vertical integration of manufacturing. Companies focusing on the economics behind MFM, rather than on the environmental crisis itself, may take note of how MFM lowers the cost of materials by creating an efficient approach to sustainability.
Material flow management as a business model may appear to some to promote sustainability in the long run. However, critics still find the two terms, MFM and sustainability, to be at odds, despite the vertical integration model of manufacturing showing agreement in their processes.
See also
Environmental ethics, a part of environmental philosophy which argues for the protection of natural entities, informing waste management practices
Environmentalism, a philosophy which centers on the protection of the Earth and can be achieved through material flow management
Environmental protection, a practice involving the protection of natural environments and taking preventative measures that may include material flow management
Material flow accounting, a method of studying the flow of materials and wastes on national or regional scales, with economic factors which lend themselves to material flow management
References
External links
Cadence: Material Flow Analysis in Manufacturing Improves Production and Efficiency
iPoint: Sankey Diagram Software
Materials
Industrial ecology
Sustainability and environmental management | Material flow management | Physics,Chemistry,Engineering | 1,432 |
938,631 | https://en.wikipedia.org/wiki/Antineutron | The antineutron is the antiparticle of the neutron with symbol . It differs from the neutron only in that some of its properties have equal magnitude but opposite sign. It has the same mass as the neutron, and no net electric charge, but has opposite baryon number (+1 for neutron, −1 for the antineutron). This is because the antineutron is composed of antiquarks, while neutrons are composed of quarks. The antineutron consists of one up antiquark and two down antiquarks.
Background
The antineutron was discovered in proton–antiproton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by the team of Bruce Cork, Glen Lambertson, Oreste Piccioni, and William Wenzel in 1956, one year after the antiproton was discovered.
Since the antineutron is electrically neutral, it cannot easily be observed directly. Instead, the products of its annihilation with ordinary matter are observed. In theory, a free antineutron should decay into an antiproton, a positron, and a neutrino in a process analogous to the beta decay of free neutrons. There are theoretical proposals of neutron–antineutron oscillations, a process that implies the violation of the baryon number conservation.
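Written out, neutron beta decay and the mirrored decay expected for a free antineutron are as follows (a sketch: every product is replaced by its antiparticle, so the emitted neutrino is an electron neutrino):

```latex
% Beta decay of the free neutron, and the charge-conjugate decay
% expected for the free antineutron:
\[
  n \to p + e^{-} + \bar{\nu}_{e},
  \qquad
  \bar{n} \to \bar{p} + e^{+} + \nu_{e}
\]
```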
Magnetic moment
The magnetic moment of the antineutron is the opposite of that of the neutron. It is +1.91 μN for the antineutron but −1.91 μN for the neutron (relative to the direction of the spin). Here μN is the nuclear magneton.
See also
Antimatter
Neutron magnetic moment
List of particles
References
External links
LBL Particle Data Group: summary tables
suppression of neutron-antineutron oscillation
Elementary particles: includes information about antineutron discovery (archived link)
"Is Antineutron the Same as Neutron?" explains how the antineutron differs from the regular neutron despite having the same, that is zero, charge.
Antimatter
Baryons
Neutron
Nucleons | Antineutron | Physics | 424 |
22,138,309 | https://en.wikipedia.org/wiki/GJ%203379 | GJ 3379 (Giclas 99-49) is the nearest star in the Orion constellation, located at a distance of 17 light years from the Sun based on parallax. It is a single star with an apparent visual magnitude of +11.31 and an absolute magnitude of +12.71, therefore, the star is not visible with the naked eye. It is positioned in the upper left part of the Orion constellation, to the SSE of Betelgeuse. This star is drifting further away with a radial velocity of +30.0 kilometers per second. In the past, this star had a relatively close encounter with the Solar System. Some ago, it achieved a minimum distance of .
Physical characteristics
This star is a small red dwarf with a stellar classification of M3.5V – an M-type main-sequence star. It is much smaller, cooler, and less massive than the Sun, radiating only 0.6% of the Sun's luminosity. This is a very active star whose brightness varies with a small amplitude, modulated by a rapid rotation period of 1.8 days. Its surface magnetic field strength has been measured, and the star is a source of X-ray emission.
According to the SIMBAD database, the star is classified as an eruptive variable.
References
External links
M-type main-sequence stars
Orion (constellation)
J06000351+0242236
3379 | GJ 3379 | Astronomy | 301 |
1,335,244 | https://en.wikipedia.org/wiki/Electric%20Dreams%20%28film%29 | Electric Dreams is a 1984 science fiction romantic comedy film directed by Steve Barron (in his feature film directorial debut) and written by Rusty Lemorande. The film stars Lenny Von Dohlen, Virginia Madsen, Maxwell Caulfield, and the voice of Bud Cort.
Electric Dreams was released in the United States by MGM/UA Entertainment Co. on July 20, 1984, and in the United Kingdom by 20th Century Fox on August 17, 1984. The film received mixed reviews from critics.
Plot
Miles Harding is an architect who envisions a brick shaped like a jigsaw puzzle piece that could enable buildings to withstand earthquakes. Seeking a way to get organized, he buys a personal computer to help him develop his ideas. Although he is initially unsure that he will even be able to correctly operate the computer, he later buys numerous extra gadgets that were not necessary for his work, such as switches to control household appliances like the blender, a speech synthesizer, and a microphone. The computer addresses Miles as "Moles", because Miles had incorrectly typed his name during the initial set-up. When Miles attempts to download the entire database from a mainframe computer at work, his computer begins to overheat. In a state of panic, Miles uses a nearby bottle of champagne to douse the overheating machine, which then becomes sentient. Miles is initially unaware of the computer's newfound sentience, but discovers it one night when the computer wakes him by mimicking his sleep-talking.
A love triangle soon develops among Miles, his computer (who later identifies himself as Edgar), and Miles's neighbor, an attractive cellist named Madeline Robistat. Upon hearing her practicing Minuet in G major, BWV Anh. 114 from Notebook for Anna Magdalena Bach on her cello through an air vent connecting both apartments, Edgar promptly elaborates a parallel variation of the piece, leading to an improvised duet. Believing it was Miles who had engaged her in the duet, Madeline begins to fall in love with him though she has an ongoing relationship with fellow musician Bill.
At Miles's request, Edgar composes a piece of music for Madeline. When their mutual love becomes evident, however, Edgar responds with jealousy upon not receiving credit for creating these songs, cancelling Miles's credit cards and registering him as an "armed and dangerous" criminal. Upon discovering this humiliation, Miles and Edgar have a confrontation, where Miles shoves the computer and tries to unplug it, getting an electric shock. Then, the computer retaliates by harassing him with an improvised maze of remotely controlled household electronics, in the style of Pac-Man.
Eventually, Edgar accepts Madeline and Miles's love for each other, and appears to commit suicide by sending a large electric current out through his acoustic coupler modem, around the world, and finally reaching back to himself just after he and Miles make amends.
Later, as Madeline and Miles go on vacation together, Edgar's voice is heard on the radio dedicating a song to "the ones I love", titled "Together in Electric Dreams". The credits are interspersed with scenes of the song being heard all over California, including a radio station trying to shut it off, declaring that they do not know where the signal is coming from.
Cast
Lenny Von Dohlen as Miles Harding
Virginia Madsen as Madeline Robistat
Maxwell Caulfield as Bill
Bud Cort as the voice of Edgar
Don Fellows as Mr. Ryley
Giorgio Moroder as Record Producer
Miriam Margolyes as Ticket Girl
Koo Stark as Girl in Soap Opera
Production
Steve Barron had made more than 100 music videos and routinely sent them to his mother for comment. She particularly liked one he did for Haysi Fantayzee. She was doing continuity on Yentl, co-produced by Rusty Lemorande and Larry deWaay, and she showed it to them. Lemorande had finished his own script for Electric Dreams and was looking for a director, so he offered Barron the job.
Barron took the script to Virgin Films, and it agreed to finance within four days. The film was presold to Metro-Goldwyn-Mayer, which held rights for the U.S., Canada, Japan and South East Asia. Two months after Virgin agreed to make the movie, filming began in San Francisco on October 11, 1983.
Virginia Madsen later recalled she "was very spoiled on that movie, because it was such a lovefest that I now believe that every movie should be like that... I had a mad, crazy crush on Lenny Von Dohlen. God, we were so... we were head-over-heels for each other. Nothing happened, and at this point, I admit it: I wanted it to happen... He's still one of my best friends."
Bud Cort provided the voice of the computer. The director did not want Cort to be seen by the other actors during scenes so Cort had to do his lines in a padded box on a sound stage. He said, "It got a little lonely in there, I must admit. I kept waiting to meet the other actors, but nobody came to say hello." Boy George visited the set and, being a fan of Harold and Maude, got Cort's autograph.
The computer hardware company's name in the film is "Pinecone," a play on Apple Computer.
Fans of Electric Dreams have noted the similarities between the film and Spike Jonze's Her. When asked about it, Jonze claimed to have never seen the former film.
In 2009, Barron said that Madsen told him she was planning on being involved in a remake. He said, "She didn't ask me to do it, so I guess I blew my chance on the first one! I wouldn't actually do it, but it would have been nice for the ego to be asked."
Music
The soundtrack features music from prominent popular musicians of the time, being among the movies of this generation that actively explored the commercial link between a movie and its soundtrack. The soundtrack album Electric Dreams was re-issued on CD in 1998. The movie features music from Giorgio Moroder, Culture Club, Jeff Lynne (Electric Light Orchestra), and Heaven 17. During filming, Barron said, "The fact that there's so much music has to do with the success of Flashdance. This film isn't Flashdance 2. Flashdance worked because of the dancing. It didn't have a story. Electric Dreams does."
The film also features the song "Together in Electric Dreams." Barron has said about its creation:
He has also said about the film's music: "Electric Dreams was definitely an attempt to try and weave the early 1980s music video genre into a movie... [The film] isn't that deep. The closest parallel is probably that it's a Cyrano de Bergerac-like exploration of how words and music can help nurture and grow feelings on the path to love. Oops that's too deep." In 2015, he said when he made the film there was a prejudice against video clip directors doing drama, and because Electric Dreams "was a little bit like an extended music video... I didn't help that cause in a lot of ways."
Release
Critical reception
On Rotten Tomatoes the film has an approval rating of 45% based on 20 reviews.
The New York Times said that the film failed to "blend and balance its ingredients properly," and that it lost plot elements and taxed credibility.
The Los Angeles Times called it "inspired and appealing... a romantic comedy of genuine sweetness and originality."
Film critics Gene Siskel and Roger Ebert each gave the film 3.5 out of 4 stars, with Siskel writing that it showed a new director eager to show off his talents and Ebert writing "One of the nicest things about the movie is the way it maintains its note of slightly bewildered innocence."
Home media
Electric Dreams was released on VHS in 1984 and in 1991 in the US by MGM/UA Home Video, who released a LaserDisc in America in 1985. Warner Home Video released a Video CD version for the Singapore market in 2001. The film received a Region 2 DVD release on April 6, 2009, by MGM. UK video label Second Sight released a Blu-ray on August 7, 2017, making its worldwide debut on Blu-ray.
Currently, MGM owns international rights to the film due to the Virgin/M.E.C.G. film catalog it gained in the mid-1990s through its acquisition of the pre-1996 PolyGram Filmed Entertainment library. However, MGM no longer owns the film outright (despite having initially distributed it in the US), as Shout! Studios now owns US rights to the film and put it up on VOD in April 2022, marking its digital debut.
It is also one of the few MGM titles from the pre-May 1986 MGM/UA library that are not owned by Warner Bros. Discovery (through Turner Entertainment Co.), due to Virgin Pictures' co-production of the film.
See also
Cyrano de Bergerac, a play written in 1897 by Edmond Rostand, featuring a brilliant but unattractive swordsman who woos the woman he loves on behalf of a handsome but tongue-tied friend.
EPICAC, a short story written in 1950 by Kurt Vonnegut, featuring a supercomputer that sacrifices itself after composing poems for its owner to woo a colleague, while falling in love with said colleague.
Her, a 2013 film about a man who develops a relationship with a computer operating system.
From Agnes—With Love, an episode of The Twilight Zone (1959 TV series) about a sentient computer sabotaging the attempts of one of her programmers to woo a co-worker by giving him bad dating advice.
References
External links
1984 films
1984 directorial debut films
1984 independent films
1984 romantic comedy films
1980s American films
1980s British films
1980s English-language films
1980s science fiction comedy films
American independent films
American romantic comedy films
American science fiction comedy films
American science fiction romance films
British independent films
British romantic comedy films
British science fiction comedy films
Films about architects
Films about artificial intelligence
Films about classical music and musicians
Films about computing
Films about technological impact
Films directed by Steve Barron
Films scored by Giorgio Moroder
Films set in San Francisco
Films shot at Twickenham Film Studios
Films shot in San Francisco
Metro-Goldwyn-Mayer films
1984 science fiction films
English-language science fiction comedy films
English-language independent films
English-language romantic comedy films | Electric Dreams (film) | Technology | 2,173 |
170,350 | https://en.wikipedia.org/wiki/Desert%20climate | The desert climate or arid climate (in the Köppen climate classification BWh and BWk) is a dry climate sub-type in which there is a severe excess of evaporation over precipitation. The typically bald, rocky, or sandy surfaces in desert climates are dry and hold little moisture, quickly evaporating the already little rainfall they receive. Covering 14.2% of Earth's land area, hot deserts are the second-most common type of climate on Earth after the Polar climate.
There are two variations of a desert climate according to the Köppen climate classification: a hot desert climate (BWh), and a cold desert climate (BWk). To delineate "hot desert climates" from "cold desert climates", a mean annual temperature of 18 °C (64.4 °F) is used as an isotherm, so that a location with a BW type climate and a mean annual temperature above this isotherm is classified as "hot arid subtype" (BWh), and a location with a mean annual temperature below the isotherm is classified as "cold arid subtype" (BWk).
Most desert/arid climates receive between 25 and 200 mm (1 and 8 in) of rainfall annually, although some of the most consistently hot areas of Central Australia, the Sahel and Guajira Peninsula can, due to extreme potential evapotranspiration, be classed as arid even with annual rainfall well above that range.
Precipitation
Although no part of Earth is known for certain to be rainless, in the Atacama Desert of northern Chile the average annual rainfall measured over a 17-year span amounted to only a few millimetres. Some locations in the Sahara Desert, such as Kufra, Libya, record less than 1 mm (0.04 in) of rainfall annually. The official weather station in Death Valley, United States reports roughly 60 mm (2.4 in) annually, but in 40 months between 1931 and 1934 a total of just 16 mm (0.64 in) of rainfall was measured.
To determine whether a location has an arid climate, the precipitation threshold is determined. The precipitation threshold (in millimetres) involves first multiplying the average annual temperature in °C by 20, then adding 280 if 70% or more of the total precipitation is in the high-sun summer half of the year (April through September in the Northern Hemisphere, or October through March in the Southern), or 140 if 30–70% of the total precipitation is received during the applicable period, or 0 if less than 30% of the total precipitation is so received there. If the area's annual precipitation is less than half the threshold (50%), it is classified as a BW (desert climate), while 50–100% of the threshold results in a semi-arid climate.
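A minimal sketch of this threshold rule in Python (function and variable names are my own; temperatures in °C, precipitation in millimetres; the hot/cold suffix uses the 18 °C isotherm described earlier):

```python
def koppen_arid_class(mean_temp_c, annual_precip_mm, summer_frac):
    """Classify a location as desert (BW), semi-arid (BS), or neither.

    summer_frac: share of annual precipitation in the high-sun half year
    (April-September in the Northern Hemisphere, October-March in the
    Southern Hemisphere).
    """
    threshold = mean_temp_c * 20
    if summer_frac >= 0.7:
        threshold += 280
    elif summer_frac >= 0.3:
        threshold += 140
    # less than 30% in the high-sun half: add nothing

    if annual_precip_mm < 0.5 * threshold:
        koppen = "BW"   # desert
    elif annual_precip_mm <= threshold:
        koppen = "BS"   # semi-arid (steppe)
    else:
        return "not arid"
    return koppen + ("h" if mean_temp_c >= 18 else "k")

# Example: a hot location with 100 mm of rain, mostly in winter -> BWh
print(koppen_arid_class(mean_temp_c=25, annual_precip_mm=100, summer_frac=0.2))
```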
Hot desert climates
Hot desert climates (BWh) are typically found under the subtropical ridge in the lower middle latitudes or the subtropics, often between 20° and 33° north and south latitudes. In these locations, stable descending air and high pressure aloft clear clouds and create hot, arid conditions with intense sunshine. Hot desert climates are found across vast areas of North Africa, West Asia, northwestern parts of the Indian Subcontinent, southwestern Africa, interior Australia, the Southwestern United States, northern Mexico, sections of southeastern Spain, the coast of Peru, and Chile. This makes hot deserts present in every continent except Antarctica.
At the time of high sun (summer), scorching, desiccating heat prevails. Hot-month average temperatures are normally between 30 and 40 °C (86 and 104 °F), and midday readings of 43–46 °C (109–115 °F) are common. The world's absolute heat records, over 50 °C (122 °F), are generally in the hot deserts, where the heat potential can be the highest on the planet. This includes the record of 56.7 °C (134 °F) in Death Valley, which is currently considered the highest temperature recorded on Earth. Some deserts in the tropics consistently experience very high temperatures all year long, even during wintertime. These locations feature some of the highest annual average temperatures recorded on Earth, exceeding 30 °C (86 °F), up to nearly 35 °C (95 °F) in Dallol, Ethiopia. This last feature is seen in sections of Africa and Arabia. During colder periods of the year, night-time temperatures can drop to freezing or below due to the exceptional radiation loss under the clear skies. However, temperatures rarely drop far below freezing under the hot subtype.
Hot desert climates can be found in the deserts of North Africa such as the wide Sahara Desert, the Libyan Desert or the Nubian Desert; deserts of the Horn of Africa such as the Danakil Desert or the Grand Bara Desert; deserts of Southern Africa such as the Namib Desert or the Kalahari Desert; deserts of West Asia such as the Arabian Desert, or the Syrian Desert; deserts of South Asia such as Dasht-e Lut and Dasht-e Kavir of Iran or the Thar Desert of India and Pakistan; deserts of the United States and Mexico such as the Mojave Desert, the Sonoran Desert or the Chihuahuan Desert; deserts of Australia such as the Simpson Desert or the Great Victoria Desert and many other regions. In Europe, the hot desert climate can only be found on the southeastern coast of Spain, as well as small inland parts of the southeast, especially parts of the Tabernas Desert.
Hot deserts are lands of extremes: most of them are among the hottest, driest, and sunniest places on Earth because of nearly constant high pressure; the almost permanent absence of low-pressure systems, dynamic fronts, and atmospheric disturbances; sinking air motion; and a dry atmosphere near the surface and aloft. The exacerbated exposure to the sun, where solar angles are always high, makes these deserts inhospitable to most species.
Cold desert climates
Cold desert climates (BWk) usually feature hot (or warm in a few instances), dry summers, though summers are not typically as hot as hot desert climates. Unlike hot desert climates, cold desert climates tend to feature cold, dry winters. Snow tends to be rare in regions with this climate. The Gobi Desert in northern China and Mongolia is one example of a cold desert. Though hot in the summer, it shares the freezing winters of the rest of Inner Asia. Summers in South America's Atacama Desert are mild, with only slight temperature variations between seasons. Cold desert climates are typically found at higher altitudes than hot desert climates and are usually drier than hot desert climates.
Cold desert climates are typically located in temperate zones in the 30s and 40s latitudes, usually in the leeward rain shadow of high mountains, restricting precipitation from the westerly winds. An example of this is the Patagonian Desert in Argentina, bounded by the Andes ranges to its west. In the case of Central Asia, mountains restrict precipitation from the eastern monsoon. The Kyzyl Kum, Taklamakan and Katpana Desert deserts of Central Asia are other significant examples of BWk climates. The Ladakh region and the city of Leh in the Great Himalayas in India also have a cold desert climate. In North America, the cold desert climate occurs in the drier parts of the Great Basin Desert and the Bighorn Basin in Big Horn and Washakie County in Wyoming. The Hautes Plaines, located in the northeastern section of Morocco and in Algeria, is another prominent example of a cold desert climate. In Europe, this climate only occurs in some inland parts of southeastern Spain, such as in Lorca.
Polar climate desert areas in the Arctic and Antarctic regions receive very little precipitation during the year owing to the cold, dry air freezing most precipitation. Polar desert climates have desert-like features that occur in cold desert climates, including intermittent streams, hypersaline lakes, and extremely barren terrain in unglaciated areas such as the McMurdo Dry Valleys of Antarctica. These areas are generally classified as having polar climates because they have average summer temperatures below 10 °C (50 °F), even if they have some characteristics of extreme non-polar deserts.
Climate charts
Hot deserts
Cold deserts
See also
List of deserts
Dry climate
Semi-arid
Desert
References
External links
Desert climate summary
Desert climate explanation
Desert report/essay
Climate of Africa
Climate of Asia
Climate of Australia
Climate of North America
Climate of South America
climate
+Climate
Köppen climate types | Desert climate | Biology | 1,626 |
5,693,122 | https://en.wikipedia.org/wiki/Pseudorandom%20graph | In graph theory, a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability. There is no concrete definition of graph pseudorandomness, but there are many reasonable characterizations of pseudorandomness one can consider.
Pseudorandom properties were first formally considered by Andrew Thomason in 1987. He defined a condition called "jumbledness": a graph G is said to be (p, α)-jumbled for real p and α with 0 < p < 1 ≤ α if
|e(U) − p(|U| choose 2)| ≤ α|U|
for every subset U of the vertex set, where e(U) is the number of edges among U (equivalently, the number of edges in the subgraph induced by the vertex set U). It can be shown that the Erdős–Rényi random graph G(n, p) is almost surely (p, O(√(np)))-jumbled. However, graphs with less uniformly distributed edges, for example a graph on 2n vertices consisting of an n-vertex complete graph and n completely independent vertices, are not (p, α)-jumbled for any small α, making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
Connection to local conditions
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, only depending on the codegree of two vertices and not every subset of the vertex set of the graph. Letting codeg(u, v) be the number of common neighbors of two vertices u and v, Thomason showed that, given a graph G on n vertices with minimum degree np, if codeg(u, v) ≤ np^2 + ℓ for every pair u, v, then G is (p, √((p + ℓ)n))-jumbled. This result shows how to check the jumbledness condition algorithmically in polynomial time in the number of vertices, and can be used to show pseudorandomness of specific graphs.
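As an illustration of such a polynomial-time check, the sketch below (my own names, using numpy) computes all codegrees with a single matrix product and reports the largest excess above n·p^2, the codegree a random graph would exhibit on average:

```python
# Minimal sketch: a codegree-based pseudorandomness check for a graph
# given as a dense adjacency matrix; runs in O(n^3) time.
import numpy as np

def max_codegree_excess(adj, p):
    """Largest value of codeg(u, v) - n*p^2 over all pairs u != v."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    codeg = A @ A                      # entry (u, v) = # common neighbors
    np.fill_diagonal(codeg, -np.inf)   # exclude the diagonal u == v
    return float(codeg.max()) - n * p * p

# Example: a sample of the Erdos-Renyi random graph G(n, p) should show
# only a modest excess compared with n*p^2 itself.
rng = np.random.default_rng(0)
n, p = 400, 0.5
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper | upper.T).astype(float)
print(max_codegree_excess(A, p))   # small relative to n*p^2 = 100
```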
Chung–Graham–Wilson theorem
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989: a graph G on n vertices with edge density p and some ε > 0 can satisfy each of these conditions if
Discrepancy: for any subsets X, Y of the vertex set V = V(G), the number of edges between X and Y is within εn^2 of p|X||Y|.
Discrepancy on individual sets: for any subset X of V, the number of edges among X is within εn^2 of p|X|^2/2.
Subgraph counting: for every graph H, the number of labeled copies of H among the subgraphs of G is within εn^v(H) of p^e(H) n^v(H), where v(H) and e(H) are the numbers of vertices and edges of H.
4-cycle counting: the number of labeled 4-cycles among the subgraphs of G is within εn^4 of p^4 n^4.
Codegree: letting codeg(u, v) be the number of common neighbors of two vertices u and v, the sum over all pairs of |codeg(u, v) − p^2 n| is at most εn^3.
Eigenvalue bounding: If λ1 ≥ λ2 ≥ ⋯ ≥ λn are the eigenvalues of the adjacency matrix of G, then λ1 is within εn of pn and max(|λ2|, |λn|) ≤ εn.
These conditions may all be stated in terms of a sequence of graphs {Gn}, where Gn has n vertices and (p + o(1))n^2/2 edges. For example, the 4-cycle counting condition becomes that the number of labeled copies of C4 in Gn is (p^4 + o(1))n^4 as n → ∞, and the discrepancy condition becomes that |e(X, Y) − p|X||Y|| = o(n^2) for all subsets X, Y of the vertex set, using little-o notation.
A pivotal result about graph pseudorandomness is the Chung–Graham–Wilson theorem, which states that many of the above conditions are equivalent, up to polynomial changes in ε. A sequence of graphs which satisfies those conditions is called quasi-random. It is considered particularly surprising that the weak condition of having the "correct" 4-cycle density implies the other seemingly much stronger pseudorandomness conditions. Graphs such as the 4-cycle, the density of which in a sequence of graphs is sufficient to test the quasi-randomness of the sequence, are known as forcing graphs.
Some implications in the Chung–Graham–Wilson theorem are clear by the definitions of the conditions: the discrepancy on individual sets condition is simply the special case of the discrepancy condition for X = Y, and 4-cycle counting is a special case of subgraph counting (taking H to be the 4-cycle). In addition, the graph counting lemma, a straightforward generalization of the triangle counting lemma, implies that the discrepancy condition implies subgraph counting.
The fact that 4-cycle counting implies the codegree condition can be proven by a technique similar to the second-moment method. Firstly, the sum of codegrees can be bounded from below:
Σ_{u,v} codeg(u, v) = Σ_x deg(x)^2 ≥ (1/n)(Σ_x deg(x))^2 = (1/n)(2e(G))^2 = p^2 n^3 + o(n^3).
Given the number of labeled 4-cycles, the sum of squares of codegrees can be bounded from above:
Σ_{u,v} codeg(u, v)^2 = (number of labeled 4-cycles) + O(n^3) ≤ p^4 n^4 + o(n^4).
Therefore, the Cauchy–Schwarz inequality gives
Σ_{u,v} |codeg(u, v) − p^2 n| ≤ n (Σ_{u,v} (codeg(u, v) − p^2 n)^2)^{1/2},
which can be expanded out using our bounds on the first and second moments of codeg to give the desired bound of o(n^3). A proof that the codegree condition implies the discrepancy condition can be done by a similar, albeit trickier, computation involving the Cauchy–Schwarz inequality.
The eigenvalue condition and the 4-cycle condition can be related by noting that the number of labeled 4-cycles in G is, up to an O(n^3) error stemming from degenerate 4-cycles, tr(A^4) = λ1^4 + λ2^4 + ⋯ + λn^4, where A is the adjacency matrix of G. The two conditions can then be shown to be equivalent by invocation of the Courant–Fischer theorem.
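The trace identity above suggests a direct numerical test of the 4-cycle condition. The sketch below (my own, using numpy) removes the degenerate closed walks from tr(A^4) to count labeled 4-cycles exactly, then compares the count with p^4 n^4:

```python
import numpy as np

def labeled_4cycles(A):
    """Count labeled copies of C4 via closed 4-walks:
    tr(A^4) = (# labeled 4-cycles) + 2*sum(deg^2) - 2*m,
    where the last two terms account for the degenerate closed walks."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    m = deg.sum() / 2
    walks = np.trace(np.linalg.matrix_power(A, 4))
    return walks - 2 * (deg ** 2).sum() + 2 * m

# Quasi-randomness test: for a sample of G(n, p) the ratio tends to 1.
rng = np.random.default_rng(1)
n, p = 300, 0.3
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper | upper.T).astype(float)
print(labeled_4cycles(A) / (p ** 4 * n ** 4))
```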
Connections to graph regularity
The concept of graphs that act like random graphs connects strongly to the concept of graph regularity used in the Szemerédi regularity lemma. For ε > 0, a pair of vertex sets (X, Y) is called ε-regular if for all subsets A ⊆ X, B ⊆ Y satisfying |A| ≥ ε|X| and |B| ≥ ε|Y|, it holds that
|d(A, B) − d(X, Y)| < ε,
where d(U, W) denotes the edge density between U and W: the number of edges between U and W divided by |U||W|. This condition implies a bipartite analogue of the discrepancy condition, and essentially states that the edges between X and Y behave in a "random-like" fashion. In addition, it was shown by Miklós Simonovits and Vera T. Sós in 1991 that a graph satisfies the above weak pseudorandomness conditions used in the Chung–Graham–Wilson theorem if and only if it possesses a Szemerédi partition where nearly all densities are close to the edge density of the whole graph.
Sparse pseudorandomness
Chung–Graham–Wilson theorem analogues
The Chung–Graham–Wilson theorem, specifically the implication of subgraph counting from discrepancy, does not follow for sequences of graphs with edge density approaching 0, or, for example, the common case of d-regular graphs on n vertices as n → ∞. The following sparse analogues of the discrepancy and eigenvalue bounding conditions are commonly considered:
Sparse discrepancy: for any subsets X, Y of the vertex set, the number of edges between X and Y is within εpn^2 of p|X||Y|.
Sparse eigenvalue bounding: If λ1 ≥ λ2 ≥ ⋯ ≥ λn are the eigenvalues of the adjacency matrix of G, then max(|λ2|, |λn|) ≤ εpn.
It is generally true that this eigenvalue condition implies the corresponding discrepancy condition, but the reverse is not true: the disjoint union of a large random d-regular graph and a complete graph on d + 1 vertices has two eigenvalues of exactly d but is likely to satisfy the discrepancy property. However, as proven by David Conlon and Yufei Zhao in 2017, slight variants of the discrepancy and eigenvalue conditions for d-regular Cayley graphs are equivalent up to linear scaling in ε. One direction of this follows from the expander mixing lemma, while the other requires the assumption that the graph is a Cayley graph and uses the Grothendieck inequality.
Consequences of eigenvalue bounding
A d-regular graph G on n vertices is called an (n, d, λ)-graph if, letting the eigenvalues of its adjacency matrix be d = λ1 ≥ λ2 ≥ ⋯ ≥ λn, it satisfies max(|λ2|, |λn|) ≤ λ. The Alon-Boppana bound gives that max(|λ2|, |λn|) ≥ 2√(d − 1) − o(1) (where the o(1) term is as n → ∞), and Joel Friedman proved that a random d-regular graph on n vertices is (n, d, λ) for λ = 2√(d − 1) + ε, for any fixed ε > 0, with high probability. In this sense, how much max(|λ2|, |λn|) exceeds 2√(d − 1) is a general measure of the non-randomness of a graph. There are graphs with max(|λ2|, |λn|) ≤ 2√(d − 1), which are termed Ramanujan graphs. They have been studied extensively and there are a number of open problems relating to their existence and commonness.
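As a sketch of how this spectral quantity is computed in practice (names are my own; the example graph is an approximately d-regular union of random matchings, so its λ is merely illustrative):

```python
import numpy as np

def spectral_lambda(A):
    """max(|lambda_2|, |lambda_n|) for the adjacency matrix of a
    (near-)regular graph; eigvalsh returns eigenvalues in ascending
    order, so eig[-1] is the trivial top eigenvalue (about d)."""
    eig = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    return max(abs(eig[0]), abs(eig[-2]))

# Build an approximately d-regular graph as a union of d random matchings.
rng = np.random.default_rng(2)
n, d = 500, 6
A = np.zeros((n, n))
for _ in range(d):
    perm = rng.permutation(n)
    for i in range(0, n, 2):
        u, v = perm[i], perm[i + 1]
        A[u, v] = A[v, u] = 1.0

print(spectral_lambda(A), "vs Ramanujan threshold", 2 * np.sqrt(d - 1))
```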
Given an $(n, d, \lambda)$-graph for small $\lambda$, many standard graph-theoretic quantities can be bounded to near what one would expect from a random graph. In particular, the size of $\lambda$ has a direct effect on subset edge density discrepancies via the expander mixing lemma (illustrated in the sketch after this list). Other examples are as follows, letting $G$ be an $(n, d, \lambda)$-graph:
If $d \le n/2$, the vertex-connectivity $\kappa(G)$ of $G$ satisfies $\kappa(G) \ge d - \frac{36\lambda^2}{d}$.
If $\lambda \le d - 2$, $G$ is $d$-edge-connected. If $n$ is even, $G$ contains a perfect matching.
The maximum cut of $G$ is at most $\frac{n(d + \lambda)}{4}$.
The largest independent subset of a subset $U \subseteq V(G)$ in $G$ can be bounded from below in terms of $n$, $|U|$, $d$, and $\lambda$; the guaranteed size is proportional to $n/\lambda$ up to a logarithmic factor.
The chromatic number of $G$ is bounded from above in terms of $d$ and $\lambda$, on the order of $d / \log(d/\lambda)$.
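As a concrete illustration of the expander mixing lemma mentioned before this list, the Python sketch below (illustrative only; the circulant Cayley-graph construction and all sizes are my own choices, not from the source) builds a $d$-regular graph, computes $\lambda$ from its spectrum, and verifies the discrepancy bound $\big|e(S,T) - \frac{d}{n}|S||T|\big| \le \lambda\sqrt{|S||T|}$ on random vertex subsets.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 120, 10                        # arbitrary sizes; d must be even here
# d-regular circulant graph: Cayley graph of Z_n with connection set {+-g}
gens = rng.choice(np.arange(1, n // 2), size=d // 2, replace=False)
A = np.zeros((n, n))
for g in gens:
    idx = np.arange(n)
    A[idx, (idx + g) % n] = 1
    A[idx, (idx - g) % n] = 1

w = np.linalg.eigvalsh(A)             # ascending; w[-1] == d when connected
lam = max(abs(w[0]), abs(w[-2]))      # second-largest eigenvalue in magnitude

for _ in range(5):                    # random subsets S, T
    s = rng.random(n) < 0.3
    t = rng.random(n) < 0.5
    e_st = s @ A @ t                  # ordered pairs (u, v), u in S, v in T, uv an edge
    bound = lam * np.sqrt(s.sum() * t.sum())
    assert abs(e_st - d * s.sum() * t.sum() / n) <= bound
print("expander mixing lemma bound holds; lambda =", round(lam, 2))
```

The smaller $\lambda$ is relative to $d$, the tighter the edge counts cluster around the random-graph prediction $\frac{d}{n}|S||T|$, which is the sense in which small-$\lambda$ graphs are pseudorandom.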
Connections to the Green–Tao theorem
Pseudorandom graphs factor prominently in the proof of the Green–Tao theorem. The theorem is proven by transferring Szemerédi's theorem, the statement that a set of positive integers with positive natural density contains arbitrarily long arithmetic progressions, to the sparse setting, as the primes have natural density $0$ in the integers. The transference to sparse sets requires that the sets behave pseudorandomly, in the sense that corresponding graphs and hypergraphs have the correct subgraph densities for some fixed set of small (hyper)subgraphs. It is then shown that a suitable superset of the prime numbers, called pseudoprimes, in which the primes are dense, obeys these pseudorandomness conditions, completing the proof.
Graph theory | Pseudorandom graph | Mathematics | 1,854 |
1,467,374 | https://en.wikipedia.org/wiki/Gestational%20age | In obstetrics, gestational age is a measure of the age of a pregnancy taken from the beginning of the woman's last menstrual period (LMP), or the corresponding age of the gestation as estimated by a more accurate method, if available. Such methods include adding 14 days to a known duration since fertilization (as is possible in in vitro fertilization), or by obstetric ultrasonography. The popularity of using this measure of pregnancy is largely due to convenience: menstruation is usually noticed, while there is generally no convenient way to discern when fertilization or implantation occurred.
Gestational age is contrasted with fertilization age, which takes the date of fertilization as the start date of gestation. There are different approaches to defining the start of a pregnancy; the gestational-age definition is unusual in that it describes women as becoming "pregnant" about two weeks before they have even had sex. The definition of pregnancy and the calculation of gestational age are also relevant in the context of the abortion debate and the philosophical debate over the beginning of human personhood.
Methods
According to American College of Obstetricians and Gynecologists, the main methods to calculate gestational age are:
Directly calculating the days since the beginning of the last menstrual period
Early obstetric ultrasound, comparing the size of an embryo or fetus to that of a reference group of pregnancies of known gestational age (such as calculated from last menstrual periods) and using the mean gestational age of other embryos or fetuses of the same size. If the gestational age as calculated from an early ultrasound is contradictory to the one calculated directly from the last menstrual period, it is still the one from the early ultrasound that is used for the rest of the pregnancy.
In case of in vitro fertilization, calculating days since oocyte retrieval or co-incubation and adding 14 days.
Gestational age can also be estimated by calculating days from ovulation if it was estimated from related signs or ovulation tests, and adding 14 days by convention.
A more complete listing of methods is given in the following table:
As a general rule, the official gestational age should be based on the actual beginning of the last menstrual period, unless any of the above methods gives an estimated date that differs by more than the variability of the method, in which case the difference probably cannot be explained by that variability alone. For example, if there is a gestational age based on the beginning of the last menstrual period of 9.0 weeks, and a first-trimester obstetric ultrasonography gives an estimated gestational age of 10.0 weeks (with a 2 SD variability of ±8% of the estimate, giving a variability of ±0.8 weeks), the difference of 1.0 week between the tests is larger than the 2 SD variability of the ultrasonography estimate, indicating that the gestational age estimated by ultrasonography should be used as the official gestational age.
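The dating rule in the example above is mechanical enough to express in a few lines. The following Python sketch is a hypothetical helper, not a clinical tool; the ±8% 2 SD figure is taken from the first-trimester example above and would differ for other methods.

```python
def official_gestational_age(ga_lmp_weeks, ga_ultrasound_weeks, two_sd_fraction=0.08):
    """Return the gestational age (weeks) to use officially.

    The LMP-based age is kept unless it differs from the ultrasound
    estimate by more than the ultrasound's 2 SD variability
    (here +-8% of the estimate, per the first-trimester example).
    """
    variability = two_sd_fraction * ga_ultrasound_weeks
    if abs(ga_lmp_weeks - ga_ultrasound_weeks) > variability:
        return ga_ultrasound_weeks   # discrepancy not explained by variability
    return ga_lmp_weeks

# The example from the text: 9.0 weeks by LMP vs 10.0 weeks by ultrasound
print(official_gestational_age(9.0, 10.0))  # 10.0 - ultrasound wins (1.0 > 0.8)
```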
Once the estimated due date (EDD) is established, it should rarely be changed, as the determination of gestational age is most accurate earlier in the pregnancy.
Assessment of gestational age can be made based on selected head and trunk parameters. Following are diagrams for estimating gestational age from obstetric ultrasound, by various target parameters:
Comparison to fertilization age
The fertilization or conceptional age (also called embryonic age and later fetal age) is the time from fertilization. Fertilization usually occurs within a day of ovulation, which, in turn, occurs on average 14.6 days after the beginning of the preceding menstruation (LMP). There is considerable variability in this interval, with a 95% prediction interval for ovulation of 9 to 20 days after menstruation, even for an average woman with a mean LMP-to-ovulation time of 14.6 days. In a reference group representing all women, the 95% prediction interval for the LMP-to-ovulation interval is 8.2 to 20.5 days. The actual variability of gestational age as estimated from the beginning of the last menstrual period (without the use of any additional method mentioned in the previous section) is substantially larger because of uncertainty as to which menstrual cycle gave rise to the pregnancy. For example, the menstruation may be scant enough to give the false appearance that an earlier menstruation gave rise to the pregnancy, potentially giving an estimated gestational age that is approximately one month too large. Also, vaginal bleeding occurs during 15–25% of first-trimester pregnancies and may be mistaken for menstruation, potentially giving an estimated gestational age that is too low.
Uses
Gestational age is used for example for:
The events of prenatal development, which usually occur at specific gestational ages. Hence, the gestational timing of a fetal toxin exposure, fetal drug exposure or vertically transmitted infection can be used to predict the potential consequences to the fetus.
Estimated date of delivery
Scheduling prenatal care
Estimation of fetal viability
Calculating the results of various prenatal tests (for example, in the triple test)
Birth classification into, for example, preterm, term, or postterm
Classification of infant deaths and stillbirths
Postnatally (after birth) to estimate various risk factors
Estimation of due date
The mean pregnancy length has been estimated to be 283.4 days of gestational age as timed from the first day of the last menstrual period and 280.6 days when retrospectively estimated by obstetric ultrasound measurement of the fetal biparietal diameter (BPD) in the second trimester. Other algorithms take into account other variables, such as whether this is the first or subsequent child, the mother's race, age, length of menstrual cycle, and menstrual regularity. In order to have a standard reference point, the normal pregnancy duration is assumed by medical professionals to be 280 days (or 40 weeks) of gestational age. Furthermore, actual childbirth has only a certain probability of occurring within the limits of the estimated due date. A study of singleton live births determined that childbirth has a standard deviation of 14 days when gestational age is estimated by first-trimester ultrasound and 16 days when estimated directly by last menstrual period.
The most common system used among healthcare professionals is Naegele's rule, which estimates the expected date of delivery (EDD) by adding a year, subtracting three months, and adding seven days to the first day of a woman's last menstrual period (LMP) or corresponding date as estimated from other means.
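Naegele's rule is simple date arithmetic, as the short Python sketch below shows (a minimal illustration; the helper name, day-clamping behavior, and sample date are my own choices, not from the source). Adding a year and subtracting three months is equivalent to adding nine months.

```python
from datetime import date, timedelta

def naegele_edd(lmp: date) -> date:
    """Estimated due date: LMP + 1 year - 3 months + 7 days."""
    month = lmp.month - 3
    year = lmp.year + 1
    if month < 1:            # subtracting 3 months crossed a year boundary
        month += 12
        year -= 1
    # Clamp the day for short target months (e.g., May 31 -> Feb 28/29).
    while True:
        try:
            anchor = date(year, month, lmp.day)
            break
        except ValueError:
            lmp = lmp.replace(day=lmp.day - 1)
    return anchor + timedelta(days=7)

print(naegele_edd(date(2023, 1, 1)))  # 2023-10-08: 280 days after the LMP
```

The printed result is exactly 280 days (40 weeks) after the sample LMP, matching the standard reference duration described above.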
Medical fetal viability
There is no sharp limit of development, gestational age, or weight at which a human fetus automatically becomes viable. According to studies between 2003 and 2005, 20 to 35 percent of babies born at 23 weeks of gestation survive, while 50 to 70 percent of babies born at 24 to 25 weeks, and more than 90 percent born at 26 to 27 weeks, survive. It is rare for a baby weighing less than 500 g (17.6 ounces) to survive. A baby's chances of survival increase 3–4% per day between 23 and 24 weeks of gestation and about 2–3% per day between 24 and 26 weeks of gestation. After 26 weeks the rate of survival increases at a much slower rate, because survival is already high. Prognosis also depends on medical protocols regarding whether to resuscitate and aggressively treat a very premature newborn or to provide only palliative care, in view of the high risk of severe disability in very preterm babies.
Birth classification
Using gestational age, births can be classified into broad categories:
Using the LMP (last menstrual period) method, a full-term human pregnancy is considered to be 40 weeks (280 days), though pregnancy lengths between 38 and 42 weeks are considered normal. A fetus born prior to the 37th week of gestation is considered to be preterm. A preterm baby is likely to be premature and consequently faces increased risk of morbidity and mortality. An estimated due date is given by Naegele's rule.
According to the WHO, a preterm birth is defined as "babies born alive before 37 weeks of pregnancy are completed." According to this classification, there are three sub-categories of preterm birth, based on gestational age: extremely preterm (less than 28 weeks), very preterm (28 to 32 weeks), and moderate to late preterm (32 to 37 weeks). Various jurisdictions may use different classifications.
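These cut-offs translate directly into a classification function. The sketch below is illustrative only; the WHO preterm bands follow the paragraph above, while the term/postterm boundaries of 37 and 42 completed weeks are an assumption consistent with the "38 and 42 weeks considered normal" range given earlier, not a quoted WHO definition.

```python
def classify_birth(weeks: float) -> str:
    """Classify a birth by completed weeks of gestation (WHO preterm bands)."""
    if weeks < 28:
        return "extremely preterm"
    if weeks < 32:
        return "very preterm"
    if weeks < 37:
        return "moderate to late preterm"
    if weeks < 42:
        return "term"           # assumed boundary, see lead-in
    return "postterm"           # assumed boundary, see lead-in

for w in (25, 30, 35, 40, 43):
    print(w, "->", classify_birth(w))
```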
In classifying perinatal deaths, stillbirths and infant deaths
For most of the 20th century, official definitions of a live birth and infant death in the Soviet Union and Russia differed from common international standards, such as those established by the World Health Organization in the latter part of the century. Babies who were less than 28 weeks of gestational age, weighed less than 1000 grams, or were less than 35 cm in length – even if they showed some sign of life (breathing, heartbeat, voluntary muscle movement) – were classified as "live fetuses" rather than "live births". Only if such newborns survived seven days (168 hours) were they then classified as live births. If, however, they died within that interval, they were classified as stillbirths. If they survived that interval but died within the first 365 days, they were classified as infant deaths.
More recently, thresholds for "fetal death" continue to vary widely internationally, sometimes incorporating weight as well as gestational age. The gestational age for statistical recording of fetal deaths ranges from 16 weeks in Norway, to 20 weeks in the US and Australia, 24 weeks in the UK, and 26 weeks in Italy and Spain.
The WHO defines the perinatal period as "The perinatal period commences at 22 completed weeks (154 days) of gestation and ends seven completed days after birth." Perinatal mortality is the death of fetuses or neonates during the perinatal period. A 2013 study found that "While only a small proportion of births occur before 24 completed weeks of gestation (about 1 per 1000), survival is rare and most of them are either fetal deaths or live births followed by a neonatal death."
Postnatal use
Gestational age (as well as fertilization age) is sometimes used postnatally (after birth) to estimate various risk factors. For example, it is a better predictor than postnatal age for risk of intraventricular hemorrhage in premature babies treated with extracorporeal membrane oxygenation.
Factors affecting pregnancy length
A child's gestational age at birth (pregnancy length) is associated with various likely causal maternal non-genetic factors: stress during pregnancy, age, parity, smoking, infection and inflammation, and BMI, as well as with preexisting maternal medical conditions that have a genetic component, e.g., diabetes mellitus type 1, systemic lupus erythematosus, and anaemia. Parental ancestral background (race) also plays a role in pregnancy duration. Gestational age at birth is on average shortened by various pregnancy aspects: twin pregnancy, prelabor rupture of (fetal) membranes, pre-eclampsia, eclampsia, and intrauterine growth restriction. The ratio between fetal growth rate and uterine size (reflecting uterine distension) is suspected to partially determine the pregnancy length.
Heritability of pregnancy length
Family-based studies showed that gestational age at birth is partially (25–40%) determined by genetic factors.
See also
Pregnancy
Maternity
Prenatal development
Gestation periods in mammals
Abortion law
Reproductive rights
Fetal rights
Obstetrics
Neonatology
Demography
Midwifery | Gestational age | Environmental_science | 2,501 |
33,303 | https://en.wikipedia.org/wiki/Wankel%20engine | The Wankel engine (, ) is a type of internal combustion engine using an eccentric rotary design to convert pressure into rotating motion. The concept was proven by German engineer Felix Wankel, followed by a commercially feasible engine designed by German engineer Hanns-Dieter Paschke. The Wankel engine's rotor, which creates the turning motion, is similar in shape to a Reuleaux triangle, with the sides having less curvature. The rotor spins inside a figure-eight-like epitrochoidal housing around a fixed-toothed gearing. The midpoint of the rotor moves in a circle around the output shaft, rotating the shaft via a cam.
In its basic gasoline-fuelled form, the Wankel engine has lower thermal efficiency and higher exhaust emissions than the four-stroke reciprocating piston engine. This thermal inefficiency has restricted the engine to limited use since its introduction in the 1960s. However, many of its disadvantages were largely overcome over the succeeding decades as production of road-going vehicles progressed. The advantages of compact design, smoothness, lower weight, and fewer parts over reciprocating piston internal combustion engines make the Wankel engine suited for applications such as chainsaws, auxiliary power units (APUs), loitering munitions, aircraft, jet skis, snowmobiles, and range extenders in cars. The Wankel engine was also used to power motorcycles and racing cars.
Concept
The Wankel engine is a type of rotary piston engine and exists in two primary forms, the Drehkolbenmotor (DKM, "rotary piston engine"), designed by Felix Wankel (see Figure 2.) and the Kreiskolbenmotor (KKM, "circuitous piston engine"), designed by Hanns-Dieter Paschke (see Figure 3.), of which only the latter has left the prototype stage. Thus, all production Wankel engines are of the KKM type.
In a DKM engine, there are two rotors: the inner, trochoid-shaped rotor, and the outer rotor, which has an outer circular shape, and an inner figure eight shape. The center shaft is stationary, and torque is taken off the outer rotor, which is geared to the inner rotor.
In a KKM engine, the outer rotor is part of the stationary housing (thus not a moving part). The inner shaft is a moving part with an eccentric lobe for the inner rotor to spin around. The rotor spins around its center and around the axis of the eccentric shaft in a hula hoop fashion, resulting in the rotor making one complete revolution for every three revolutions of the eccentric shaft. In the KKM engine, torque is taken off the eccentric shaft, making it a much simpler design to be adopted to conventional powertrains.
Wankel engine development
Felix Wankel designed a rotary compressor in the 1920s, and received his first patent for a rotary type of engine in 1934. He realized that the triangular rotor of the rotary compressor could have intake and exhaust ports added producing an internal combustion engine. Eventually, in 1951, Wankel began working at German firm NSU Motorenwerke to design a rotary compressor as a supercharger for NSU's motorcycle engines. Wankel conceived the design of a triangular rotor in the compressor. With the assistance of Prof. from Stuttgart University of Applied Sciences, the concept was defined mathematically. The supercharger he designed was used for one of NSU's 50 cm3 one-cylinder two-stroke engines. The engine produced a power output of at 12,000rpm.
In 1954, NSU agreed to develop a rotary internal combustion engine with Felix Wankel, based upon Wankel's supercharger design for their motorcycle engines. Since Wankel was known as a "difficult colleague", the development work for the DKM was carried out at Wankel's private Lindau design bureau. According to John B. Hege, Wankel received help from his friend Ernst Höppner, who was a "brilliant engineer". The first working prototype, DKM 54 (see Figure 2.), first ran on 1 February 1957, at the NSU research and development department Versuchsabteilung TX. It produced . Soon after that, a second prototype of the DKM was built. It had a working chamber volume Vk of 125 cm3 and also produced at 17,000rpm. It could even reach speeds of up to 25,000rpm. However, these engine speeds distorted the outer rotor's shape, thus proving impractical. According to Mazda Motors engineers and historians, four units of the DKM engine were built; the design is described to have a displacement Vh of 250 cm3 (equivalent to a working chamber volume Vk of 125 cm3). The fourth unit built is said to have received several design changes, and eventually produced at 17,000 rpm; it could reach speeds up to 22,000 rpm. One of the four engines built has been on static display at the Deutsches Museum Bonn (see Figure 2.).
Due to its complicated design with a stationary center shaft, the DKM engine was impractical. Wolf-Dieter Bensinger explicitly mentions that proper engine cooling cannot be achieved in a DKM engine, and argues that this is the reason why the DKM design had to be abandoned. NSU development chief engineer Walter Froede solved this problem by using Hanns-Dieter Paschke's design and converting the DKM into what would later be known as the KKM (see figure 5.). The KKM proved to be a much more practical engine, as it has easily accessible spark plugs, a simpler cooling design, and a conventional power take-off shaft. Wankel disliked Froede's KKM engine because of its inner rotor's eccentric motion, which was not a pure circular motion, as Wankel had intended. He remarked that his "race horse" was turned into a "plough horse". Wankel also complained that more stresses would be placed on the KKM's apex seals due to the eccentric hula-hoop motion of the rotor. NSU could not afford to finance developing both the DKM and the KKM, and eventually decided to drop the DKM in favor of the KKM, because the latter seemed to be the more practical design.
Wankel obtained the US patent 2,988,065 on the KKM engine on 13 June 1961. Throughout the design phase of the KKM, Froede's engineering team had to solve problems such as repeated bearing seizures, the oil flow inside the engine, and the engine cooling. The first fully functioning KKM engine, the KKM 125, weighing in at only displaced 125 cm3 and produced at 11,000rpm. Its first run was on 1 July 1958.
In 1963, NSU produced the first series-production Wankel engine for a car, the KKM 502 (see Figure 6.). It was used in the NSU Spider sports car, of which about 2,000 were made. Despite its "teething troubles", the KKM 502 was a powerful engine with decent potential, smooth operation, and low noise emissions at high engine speeds. It was a single-rotor PP engine with a displacement of , a rated power of at 6,000rpm and a BMEP of .
Operation and design
The Wankel engine has a spinning eccentric power take-off shaft, with a rotary piston riding on eccentrics on the shaft in a hula-hoop fashion; the rotor's internal crown gear has one and a half times the number of teeth of the fixed pinion it meshes with. The Wankel is thus a 2:3 type of rotary engine: its housing's inner side resembles a two-lobed, oval-like epitrochoid (equivalent to a peritrochoid), while its rotary piston has a three-vertex trochoid shape (similar to a Reuleaux triangle). The Wankel engine's rotor therefore constantly forms three moving working chambers. Seals at the rotor's apices seal against the housing's periphery. The rotor moves in its rotating motion guided by the gears and the eccentric output shaft, not by the external chamber; the rotor does not make contact with the external engine housing. The force of expanded gas pressure on the rotor exerts pressure on the center of the eccentric part of the output shaft.
All practical Wankel engines are four-cycle (i.e., four-stroke) engines. In theory, two-cycle engines are possible, but they are impractical because the intake gas and the exhaust gas cannot be properly separated. The operating principle is similar to the Otto operating principle; the Diesel operating principle with its compression ignition cannot be used in a practical Wankel engine. Therefore, Wankel engines typically have a high-voltage spark ignition system.
In a Wankel engine, one side of the triangular rotor completes the four-stage Otto cycle of intake, compression, expansion, and exhaust in each revolution of the rotor (equivalent to three shaft revolutions, see Figure 8.). The shape of the rotor between the fixed apexes is chosen to minimize the volume of the geometric combustion chamber and to maximize the compression ratio. As the rotor has three sides, this gives three power pulses per revolution of the rotor.
Wankel engines have a much lower degree of irregularity relative to a reciprocating piston engine, making the Wankel engine run much smoother. This is because the Wankel engine has a lower moment of inertia and less excess torque area due to its more uniform torque delivery. For example, a two-rotor Wankel engine runs more than twice as smoothly as a four-cylinder piston engine. The eccentric output shaft of a Wankel engine also does not have the stress-related contours of a reciprocating piston engine's crankshaft. The maximum revolutions of a Wankel engine are thus mainly limited by tooth load on the synchronizing gears. Hardened steel gears are used for extended operation above 7,000 or 8,000rpm. In practice, automotive Wankel engines are not operated at much higher output shaft speeds than reciprocating piston engines of similar output power. Wankel engines in auto racing are operated at speeds up to 10,000rpm, but so are four-stroke reciprocating piston engines with relatively small displacement per cylinder. In aircraft, they are used conservatively, up to 6500 or 7500rpm.
Chamber volume
In a Wankel rotary engine, the chamber volume $V_k$ is equivalent to the product of the rotor surface and the rotor path. The rotor surface is given by the rotor tips' path across the rotor housing and determined by the generating radius $R$, the rotor width $B$, and the parallel transfer (equidistant) $a$ of the rotor and the inner housing. Since the rotor has a trochoid ("triangular") shape, the sine of 60 degrees describes the interval at which the rotor tips get closest to the rotor housing. Therefore, the rotor surface is
$$A = 2 \sin(60^\circ)\,(R + a)\,B = \sqrt{3}\,(R + a)\,B.$$
The rotor path may be integrated via the eccentricity $e$ as follows:
$$s = 3e.$$
Therefore,
$$V_k = A \cdot s = 3\sqrt{3}\,e\,(R + a)\,B.$$
For convenience, $a$ may be omitted because it is difficult to determine and small:
$$V_k \approx 3\sqrt{3}\,e\,R\,B.$$
A different approach to this is introducing $a_{\max}$ as the farthest, and $a_{\min}$ as the shortest parallel transfer of the rotor and the inner housing, and assuming that $a$ is well approximated by their mean. Then,
$$V_k = 3\sqrt{3}\,e \left(R + \frac{a_{\max} + a_{\min}}{2}\right) B.$$
Including the parallel transfers of the rotor and the inner housing provides sufficient accuracy for determining chamber volume.
Equivalent displacement and power output
Different approaches have been used over time to evaluate the total displacement of a Wankel engine in relation to a reciprocating engine: considering only one, two, or all three chambers.
Part of this dispute arose because European vehicle taxation depended on engine displacement, as reported by Karl Ludvigsen.
If $z$ is the number of chambers considered for each rotor and $n$ the number of rotors, then the total displacement is:
$$V_{\text{tot}} = z \cdot n \cdot V_k$$
If $p_{me}$ is the mean effective pressure, $N$ the shaft rotational speed, and $x$ the number of shaft revolutions needed to complete a cycle ($N/x$ is the frequency of the thermodynamic cycle), then the total power output is:
$$P = p_{me} \cdot V_{\text{tot}} \cdot \frac{N}{x}$$
Considering one chamber
Kenichi Yamamoto and Walter G. Froede placed $z = 1$ and $x = 1$:
$$V = n \cdot V_k, \qquad P = p_{me} \cdot n \cdot V_k \cdot N$$
With these values, a single-rotor Wankel engine produces the same average power as a single-cylinder two-stroke engine, with the same average torque, with the shaft running at the same speed, operating the Otto cycles at triple the frequency.
Considering two chambers
Richard Franz Ansdale, Wolf-Dieter Bensinger, and Felix Wankel based their analogy on the number of cumulative expansion strokes per shaft revolution. In a Wankel rotary engine, the eccentric shaft must make three full rotations (1080°) per combustion chamber to complete all four phases of a four-stroke cycle. Since a Wankel rotary engine has three combustion chambers, all four phases are completed within one full rotation of the eccentric shaft (360°), and one power pulse is produced at each revolution of the shaft. This is different from a four-stroke piston engine, which needs two full rotations per cylinder to complete all four phases. Thus, in a Wankel rotary engine, according to Bensinger, the displacement ($z = 2$) is:
$$V_h = 2 \cdot n \cdot V_k$$
If power is to be derived from BMEP, the four-stroke engine formula applies ($x = 2$):
$$P = p_{me} \cdot V_h \cdot \frac{N}{2}$$
Considering three chambers
Eugen Wilhelm Huber and Karl-Heinz Küttner counted all the chambers, since each one operates its own thermodynamic cycle. So $z = 3$ and $x = 3$:
$$V = 3 \cdot n \cdot V_k$$
With these values, a single-rotor Wankel engine produces the same average power as a three-cylinder four-stroke engine, with 3/2 of the average torque, with the shaft running at 2/3 the speed, operating the Otto cycles at the same frequency:
$$P = p_{me} \cdot 3 n V_k \cdot \frac{N}{3} = p_{me} \cdot n \cdot V_k \cdot N$$
Applying a 2/3 gear set to the output shaft of the three-cylinder (or a 3/2 one to the Wankel), the two are analogous from the thermodynamic and mechanical output point of view, as pointed out by Huber.
Examples (counting two chambers)
KKM 612 (NSU Ro80)
e=14 mm
R=100 mm
a=2 mm
B=67 mm
i=2
Mazda 13B-REW (Mazda RX-7)
e=15 mm
R=103 mm
a=2 mm
B=80 mm
i=2
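Using the chamber-volume formula reconstructed above, $V_k = 3\sqrt{3}\,e\,(R+a)\,B$, the data of both example engines can be checked in a few lines of Python. This is a sketch under the assumption that this reconstruction is the intended formula; the printed figures are computed from the listed dimensions, not quoted from the source.

```python
import math

def chamber_volume_cm3(e, R, a, B):
    """Working chamber volume V_k = 3*sqrt(3)*e*(R + a)*B, inputs in mm."""
    return 3 * math.sqrt(3) * e * (R + a) * B / 1000.0

engines = {
    "KKM 612 (NSU Ro80)": dict(e=14, R=100, a=2, B=67, i=2),
    "Mazda 13B-REW":      dict(e=15, R=103, a=2, B=80, i=2),
}
for name, g in engines.items():
    vk = chamber_volume_cm3(g["e"], g["R"], g["a"], g["B"])
    vh = 2 * g["i"] * vk          # two-chamber (Bensinger) convention
    print(f"{name}: V_k = {vk:.1f} cm3 per chamber, V_h = {vh:.0f} cm3")
# KKM 612: V_k ~ 497 cm3; 13B: V_k ~ 655 cm3, consistent with the
# commonly quoted ratings of these engines.
```

The same formula also reproduces the KKM 871's quoted chamber volume of 746.6 cm3 from the dimensions (e = 17 mm, R = 118.5 mm, a = 4 mm, B = 69 mm) given later in the article, which supports the reconstruction.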
Licenses issued
NSU licensed the Wankel engine design to companies worldwide, in various forms, with many companies implementing continual improvements. In his 1973 book Rotationskolben-Verbrennungsmotoren, German engineer Wolf-Dieter Bensinger describes the following licensees, in chronological order, which is confirmed by John B. Hege:
Curtiss-Wright: All types of engines, both air- and water-cooled, , from 1958; license sold to Deere & Company in 1984
Fichtel & Sachs: Industrial and marine engines, , from 1960
Yanmar Diesel: Marine engines up to , and engines running on diesel fuel up to , from 1961
Toyo Kogyo (Mazda): Motor vehicle engines up to , from 1961
Perkins Engines: All types of engines, up to , from 1961 until <1972
Klöckner-Humboldt-Deutz: Engines running on diesel fuel; development ended by 1972
Daimler Benz: All types of engines from up to , from 1961 until 1976.
MAN: Engines running on diesel fuel; development ended by 1972
Krupp: Engines running on diesel fuel; development ended by 1972
Rheinstahl-Hanomag: Petrol engines, , from 1963; by 1972 merged into Daimler-Benz
Alfa Romeo: Motor vehicle engines, , from 1964
Rolls-Royce: Engines for diesel fuel or multifuel operation, , from 1965
VEB Automobilbau: Automotive engines from and , from 1965; license abandoned by 1972
Porsche: Sportscar engines from , from 1965
Outboard Marine: Marine engines from , from 1966
Comotor (NSU Motorenwerke and Citroën): Petrol engines from , from 1967
Graupner: Model engines from , from 1967
Savkel: Industrial petrol engines from , from 1969
Nissan: Car engines from , from 1970
General Motors: All types of engines, excluding aircraft engines, up to four-rotor engines, from 1970
Suzuki: Motorcycle engines from , from 1970
Toyota: Car engines from , from 1971
Ford Germany: (including Ford Motor Company): Car engines from , from 1971
BSA Company: Petrol engines from , from 1972
Yamaha Motor Company: Petrol engines from , from 1972
Kawasaki Heavy Industries: Petrol engines from , from 1972
Brunswick Corporation: Engines from , from 1972
Ingersoll Rand: Engines from , from 1972
American Motors Company: Petrol engines from , from 1973
In 1961, the Soviet research organizations of NATI, NAMI, and VNIImotoprom began developing a Wankel engine. Eventually, in 1974, development was transferred to a special design bureau at the AvtoVAZ plant. John B. Hege argues that no license was issued to any Soviet car manufacturer.
Engineering
Felix Wankel managed to overcome most of the problems that had made prior attempts at rotary engines fail. He developed a configuration with vane seals having a tip radius equal to the amount of "oversize" of the rotor housing form relative to the theoretical epitrochoid, which minimizes radial apex seal motion, and he introduced a cylindrical, gas-loaded apex pin that abutted all sealing elements to seal around the three planes at each rotor apex.
In the early days, unique, dedicated production machines had to be built for different housing dimensional arrangements. However, patented designs such as , G. J. Watt, 1974, for a "Wankel Engine Cylinder Generating Machine", , "Apparatus for machining and/or treatment of trochoidal surfaces" and , "Device for machining trochoidal inner walls", and others, solved the problem.
Wankel engines have a problem not found in reciprocating piston four-stroke engines in that the block housing has intake, compression, combustion, and exhaust occurring at fixed locations around the housing. This causes a very uneven thermal load on the rotor housing. In contrast, four-stroke reciprocating engines perform these four strokes in one chamber, so that extremes of "freezing" intake and "flaming" exhaust are averaged and shielded by a boundary layer from overheating working parts. The University of Florida proposed the use of heat pipes in an air-cooled Wankel to overcome this uneven heating of the block housing. Pre-heating of certain housing sections with exhaust gas improved performance and fuel economy, also reducing wear and emissions.
The boundary layer shields and the oil film act as thermal insulation, leading to a low temperature of the lubricating film (approximate maximum on a water-cooled Wankel engine). This gives a more constant surface temperature. The temperature around the spark plug is about the same as in the combustion chamber of a reciprocating engine. With circumferential or axial flow cooling, the temperature difference remains tolerable.
Problems arose during research in the 1950s and 1960s. For a while, engineers were faced with what they called "chatter marks" and "devil's scratch" on the inner epitrochoid surface, which resulted in chipping of the chrome coating of the trochoidal surfaces. They discovered that the cause was the apex seals reaching a resonating vibration; the problem was solved by reducing the thickness and weight of the apex seals and by using more suitable materials. The scratches disappeared after more compatible materials for seals and housing coatings were introduced. Yamamoto experimentally lightened apex seals with holes, confirming seal weight as the main cause. Mazda then used aluminum-impregnated carbon apex seals in its early production engines. NSU used antimony-impregnated carbon apex seals running against chrome. NSU developed the ELNISIL coating to production maturity and returned to a metal sealing strip for the Ro80. Mazda continued to use chrome, but provided the aluminum housing with a steel jacket, which was then coated with a thin, dimensionally accurate, electroplated chrome layer. This allowed Mazda to return to 3 mm and later even 2 mm thick metal apex seals. Another early problem was the build-up of cracks in the stator surface near the plug hole; it was eliminated by installing the spark plugs in a separate metal insert or copper sleeve in the housing instead of screwing the plug directly into the block housing.
Toyota found that substituting a glow-plug for the leading-site spark plug improved low-rpm, part-load specific fuel consumption by 7%, as well as emissions and idle. A later alternative solution to spark plug boss cooling was provided by a variable coolant velocity scheme for water-cooled rotaries, which has had widespread use, being patented by Curtiss-Wright, with the last-listed patent aimed at better spark plug boss cooling in air-cooled engines. These approaches did not require a high-conductivity copper insert, but did not preclude its use. Ford tested a Wankel engine with the plugs placed in the side plates, instead of the usual placement in the housing working surface (, 1978).
Torque delivery
Wankel engines are capable of high-speed operation, meaning they do not necessarily need to produce high torque to produce high power. The positioning of the intake port and intake port closing greatly affect the engine's torque production. Early closing of the intake port increases low-end torque, but reduces high-end torque (and thus power). In contrast, late closing of the intake port reduces low-end torque while increasing torque at high engine speeds, thus resulting in more power at higher engine speeds.
A peripheral intake port gives the highest mean effective pressure; however, side intake porting produces a steadier idle, because it helps to prevent blow-back of burned gases into the intake ducts, which causes "misfiring": alternating cycles in which the mixture ignites and fails to ignite. Peripheral porting (PP) gives the best mean effective pressure throughout the rpm range, but PP was also linked to worse idle stability and part-load performance. Early work by Toyota led to the addition of a fresh air supply to the exhaust port. It also proved that a reed valve in the intake port or ducts improved the low-rpm and partial-load performance of Wankel engines, by preventing blow-back of exhaust gas into the intake port and ducts and by reducing the misfire-inducing high EGR, at the cost of a slight loss of power at top rpm. Elasticity is improved with a greater rotor eccentricity, analogous to a longer stroke in a reciprocating engine.
Wankel engines operate better with a low-pressure exhaust system. Higher exhaust back pressure reduces mean effective pressure, more severely in peripheral intake port engines. The Mazda RX-8 Renesis engine improved performance by doubling the exhaust port area relative to earlier designs, and there have been studies of the effect of intake and exhaust piping configuration on the performance of Wankel engines. Side intake ports (as used in Mazda's Renesis engine) were first proposed by Hanns-Dieter Paschke in the late 1950s. Paschke predicted that precisely calculated intake ports and intake manifolds could make a side port engine as powerful as a PP engine.
Materials
As formerly described, the Wankel engine is affected by unequal thermal expansion due to the four cycles taking place at fixed places of the engine. While this puts great demands on the materials used, the simplicity of the Wankel makes it easier to use alternative materials, such as exotic alloys and ceramics. A commonplace method, for engine housings made of aluminum, is to use a sprayed-on molybdenum layer for the combustion chamber area of the housing and a sprayed steel layer elsewhere. Engine housings cast from iron can be induction-brazed to make the material suited to withstanding combustion heat stress.
Among the alloys cited for Wankel housing use are A-132, Inconel 625, and 356 treated to T6 hardness. Several materials have been used for plating the housing working surface, Nikasil being one. Citroën, Daimler-Benz, Ford, A P Grazen, and others applied for patents in this field. For the apex seals, the choice of materials has evolved along with the experience gained, from carbon alloys to steel, ferritic stainless steel, Ferro-TiC, and other materials. The combination of housing plating and apex and side seal materials was determined experimentally to obtain the best service life of both the seals and the housing coating. For the shaft, steel alloys with little deformation under load are preferred; the use of maraging steel has been proposed for this.
Leaded petrol fuel was the predominant type available in the first years of the Wankel engine's development. Lead is a solid lubricant, and leaded petrol is designed to reduce the wearing of seals and housings. The first engines had the oil supply calculated with consideration of petrol's lubricating qualities. As leaded petrol was being phased out, Wankel engines needed an increased mix of oil in the petrol to provide lubrication to critical engine parts. An SAE paper by David Garside extensively described Norton's choices of materials and cooling fins.
Sealing
Early engine designs had a high incidence of sealing loss, both between the rotor and the housing and also between the various pieces making up the housing. Also, in earlier model Wankel engines, carbon particles could become trapped between the seal and the casing, jamming the engine and requiring a partial rebuild. It was common for very early Mazda engines to require rebuilding after . Further sealing problems arose from the uneven thermal distribution within the housings causing distortion and loss of sealing and compression. This thermal distortion also caused uneven wear between the apex seal and the rotor housing, evident on higher mileage engines. The problem was exacerbated when the engine was stressed before reaching operating temperature. However, Mazda Wankel engines solved these initial problems. Current engines have nearly 100 seal-related parts.
The problem of clearance for hot rotor apexes passing between the axially closer side housings in the cooler intake lobe areas was dealt with by using an axial rotor pilot radially inboard of the oils seals, plus improved inertia oil cooling of the rotor interior (C-W , C. Jones, 5/8/63, , M. Bentele, C. Jones. A.H. Raye. 7/2/62), and slightly "crowned" apex seals (different height in the center and in the extremes of seal).
Fuel economy and emissions
As described in the thermodynamic disadvantages section, early Wankel engines had poor fuel economy, caused by the combustion chamber's unfavorable shape and large surface area. The Wankel engine's design is, on the other hand, much less prone to engine knocking, which allows the use of low-octane fuels without reducing compression. NSU tested low-octane gasoline at the suggestion of Felix Wankel.
On a trial basis 40-octane gasoline was produced by BV Aral, which was used in the Wankel DKM54 test engine with a compression ratio of 8:1; it ran without complaint. This upset the petrochemical industry in Europe, which had invested considerable sums of money in new plants for the production of higher quality gasoline.
Direct-injection stratified-charge engines can be operated on fuels with particularly low octane numbers, such as diesel fuel, which has an octane number of only about 25. As a result of the poor efficiency, a Wankel engine with peripheral exhaust porting releases a larger amount of unburnt hydrocarbons (HC) into the exhaust. The exhaust is, however, relatively low in nitrogen oxide (NOx) emissions, because combustion is slow and temperatures are lower than in other engines, and also because of the Wankel engine's good exhaust gas recirculation (EGR) behavior. Carbon monoxide (CO) emissions of Wankel and Otto engines are about the same.
The Wankel engine has a significantly higher (by more than 100 K) exhaust gas temperature than an Otto engine, especially under low and medium load conditions. This is because of the higher combustion frequency and slower combustion. Exhaust gas temperatures can exceed 1300 K under high load at engine speeds of 6,000 rpm. To improve the exhaust gas behavior of the Wankel engine, a thermal reactor or catalytic converter may be used to reduce hydrocarbon and carbon monoxide in the exhaust.
Mazda uses a dual ignition system with two spark plugs per chamber. This increases the power output and at the same time reduces HC emissions. HC emissions can be lowered further by reducing the pre-ignition of the leading (L) plug relative to the trailing (T) plug; this leads to internal afterburning and reduces HC emissions. On the other hand, identical ignition timing for the L and T plugs yields a higher energy conversion. Hydrocarbons adhering to the combustion chamber wall are expelled into the exhaust at the peripheral outlet.
Mazda used three spark plugs per chamber in its R26B engine. The third spark plug ignites the mixture on the trailing side before the squish flow is generated, causing the mixture to burn more completely and speeding up flame propagation, which improves fuel consumption. According to Curtiss-Wright research, the factor that controls the amount of unburnt hydrocarbons in the exhaust is the rotor surface temperature, with higher temperatures resulting in fewer hydrocarbons in the exhaust. Curtiss-Wright widened the rotor, keeping the rest of the engine's architecture unchanged, thus reducing friction losses and increasing displacement and power output. The limiting factor for this widening was mechanical, especially shaft deflection at high rotative speeds. Quenching is the dominant source of hydrocarbons at high speeds, and leakage at low speeds. Using side porting, which enables closing the exhaust port around top dead center and reduces intake and exhaust overlap, helps improve fuel consumption.
Mazda's RX-8 car with the Renesis engine (first presented in 1999) met the United States' low-emission vehicle (LEV-II) standard in 2004. This was mainly achieved by using side porting: the exhaust ports, which in earlier Mazda rotary engines were located in the rotor housings, were moved to the side of the combustion chamber. This approach allowed Mazda to eliminate overlap between intake and exhaust port openings while simultaneously increasing the exhaust port area. The design improved combustion stability in the low-speed and light-load range. HC emissions from the side-exhaust-port rotary engine are 35–50% less than those from the peripheral-exhaust-port Wankel engine. Peripherally ported rotary engines have a better mean effective pressure, especially at high rpm and with a rectangular-shaped intake port. However, the RX-8 was not improved to meet Euro 5 emission regulations, and it was discontinued in 2012. The newer Mazda 8C engine of the Mazda MX-30 R-EV meets the Euro 6d-ISC-FCM emissions standard.
Laser ignition
Laser ignition was first proposed in 2011, but the first studies of laser ignition in Wankel engines were not conducted until 2021. It is assumed that laser ignition of lean fuel mixtures in Wankel engines could improve fuel consumption and exhaust gas behavior. In a 2021 study, a Wankel model engine was tested with laser ignition and various gaseous and liquid fuels. Laser ignition leads to a faster development of the center of combustion, thus improving combustion speed and leading to a reduction in NOx emissions. The laser pulse energy required for proper ignition is "reasonable", in the low single-digit millijoule range. No significant modification of the Wankel engine is required for laser ignition.
Compression-ignition Wankel
Research has occurred into rotary compression-ignition engines. The basic design parameters of the Wankel engine preclude a compression ratio sufficient for Diesel operation in a practical engine. The Rolls-Royce and Yanmar compression-ignition approach was to use a two-stage unit (see figure 16.), with one rotor acting as a compressor while combustion takes place in the other. Neither engine proved functional.
Multifuel Wankel engine
A different approach from a compression ignition (Diesel) Wankel engine is a non-CI, multifuel Wankel engine that is capable of operating on a huge variety of fuels: diesel, petrol, kerosene, methanol, natural gas, and hydrogen. German engineer Dankwart Eiermann designed this engine at Wankel SuperTec (WST) in the early 2000s. It has a chamber volume of 500 cm3 (cc) and an indicated power output of per rotor. Versions with one up to four rotors are possible.
The WST engine has a common-rail direct injection system operating on a stratified charge principle. Similar to a Diesel engine, and unlike a conventional Wankel engine, the WST engine compresses plain air rather than an air–fuel mixture during the compression phase. Fuel is only injected into the compressed air shortly before top dead centre, which results in a stratified charge (i.e., no homogeneous mixture). A spark plug is used to initiate combustion. The pressure at the end of the compression phase and during combustion is lower than in a conventional Diesel engine, and the fuel consumption is equivalent to that of a small indirect-injection compression-ignition engine (i.e., >250 g/(kW·h)).
Diesel-fuel-powered variants of the WST Wankel engine are being used as APUs in 60 Deutsche Bahn diesel locomotives. The WST diesel fuel engines can produce up to .
Hydrogen fuel
As a hydrogen–air fuel mixture ignites more readily and burns faster than gasoline, an important issue for hydrogen internal combustion engines is preventing pre-ignition and backfire. In a rotary engine, each phase of the Otto cycle occurs in a different chamber; importantly, the intake chamber is separated from the combustion chamber, keeping the air–fuel mixture away from localized hot spots. Wankel engines also do not have hot exhaust valves, which eases adapting them to hydrogen operation. Another problem concerns the attack of hydrogen on the lubricating film in reciprocating engines. In a Wankel engine, this problem is circumvented by using ceramic apex seals.
In a prototype Wankel engine fitted to a Mazda RX-8 to research hydrogen operation, Wakayama et al. found that hydrogen operation improved thermal efficiency by 23% over petrol operation. Although lean operation emits little NOx, the total amount of engine-out NOx exceeded the Japanese SULEV standard; supplementary stoichiometric operation combined with a catalyst provided additional NOx reduction. Accordingly, the vehicle satisfied the SULEV standard.
Advantages
Prime advantages of the Wankel engine are:
A far higher power-to-weight ratio than a piston engine
Easier to package in small engine spaces than an equivalent piston engine
Able to reach higher engine speeds than a comparable piston engine
Operating with almost no vibration
Not prone to engine-knock
Cheaper to mass-produce, because the engine contains fewer parts
Supplying torque for about two-thirds of the combustion cycle rather than one-quarter for a piston engine
Easily adapted to, and highly suitable for, hydrogen fuel.
Wankel engines are considerably lighter and simpler, containing far fewer moving parts than piston engines of equivalent power output. Valves or complex valve trains are eliminated by using simple ports cut into the walls of the rotor housing. Since the rotor rides directly on a large bearing on the output shaft, there are no connecting rods and no crankshaft. The elimination of reciprocating mass gives Wankel engines a low non-uniformity coefficient, meaning that they operate much smoother than comparable reciprocating piston engines. For example, a two-rotor Wankel engine is more than twice as smooth in its operation as a four-cylinder reciprocating piston engine.
A four-stroke cylinder produces a power stroke only every other rotation of the crankshaft, with three strokes being pumping losses. The Wankel engine also has higher volumetric efficiency than a reciprocating piston engine. Because of the quasi-overlap of the power strokes, the Wankel engine is very quick to react to power increases, delivering power quickly when demanded, especially at higher engine speeds. This difference is more pronounced relative to four-cylinder reciprocating engines and less pronounced relative to higher cylinder counts.
Due to the absence of hot exhaust valves, the fuel octane requirements of Wankel engines are lower than those of reciprocating piston engines. As a rule of thumb, a Wankel engine with a working chamber volume Vk of 500 cm3 and a compression ratio of ε=9 may be assumed to run well on mediocre-quality petrol with an octane rating of just 91 RON. If, in a reciprocating piston engine, the compression ratio must be reduced by one unit to avoid knock, a comparable Wankel engine may require no reduction at all.
Because of the lower injector count, fuel injection systems in Wankel engines are cheaper than in reciprocating piston engines. An injection system that allows stratified charge operation may help reduce rich mixture areas in undesirable parts of the engine, which improves fuel efficiency.
Disadvantages
Thermodynamic disadvantages
Wankel rotary engines mainly suffer from poor thermodynamics caused by the design's large surface area and unfavorable combustion chamber shape. As an effect of this, the Wankel engine has slow and incomplete combustion, which results in high fuel consumption and poor exhaust gas behavior. Wankel engines can reach a typical maximum efficiency of about 30 percent.
In a Wankel rotary engine, fuel combustion is slow because the combustion chamber is long, thin, and moving. Flame travel occurs almost exclusively in the direction of rotor movement, which adds to the quenching of the fuel–air mixture, the main source of unburnt hydrocarbons at high engine speeds. The trailing side of the combustion chamber naturally produces a "squeeze stream" that prevents the flame from reaching the chamber's trailing edge, worsening the consequences of the poor quenching of the mixture. Direct fuel injection, in which fuel is injected towards the leading edge of the combustion chamber, can minimize the amount of unburnt fuel in the exhaust.
Mechanical disadvantages
Although many of the disadvantages are the subject of ongoing research, the current disadvantages of the Wankel engine in production are the following:
Rotor sealing The engine housing has vastly different temperatures in each separate chamber section. The different expansion coefficients of the materials lead to imperfect sealing. Additionally, both sides of the apex seals are exposed to fuel, and the design does not allow for controlling the lubrication of the rotors accurately and precisely. Rotary engines tend to be overlubricated at all engine speeds and loads, and have relatively high oil consumption and other problems resulting from excess oil in the combustion areas of the engine, such as carbon formation and excessive emissions from burning oil. By comparison, a piston engine has all functions of a cycle in the same chamber giving a more stable temperature for piston rings to act against. Additionally, only one side of the piston in a (four-stroke) piston engine is exposed to fuel, allowing oil to lubricate the cylinders from the other side. Piston engine components can also be designed to increase ring sealing and oil control as cylinder pressures and power levels increase. To overcome the problems in a Wankel engine of differences in temperatures between different regions of housing and side and intermediary plates, and the associated thermal dilatation inequities, a heat pipe has been used to transport heat from the hot to the cold parts of the engine. The "heat pipes" effectively direct hot exhaust gas to the cooler parts of the engine, resulting in decreases in efficiency and performance. In small-displacement, charge-cooled rotor, air-cooled housing Wankel engines, that has been shown to reduce the maximum engine temperature from , and the maximum difference between hotter and colder regions of the engine from .
Apex seal lifting Centrifugal force pushes the apex seal onto the housing surface forming a firm seal. Gaps can develop between the apex seal and trochoid housing in light-load operation when imbalances in centrifugal force and gas pressure occur. At low engine-rpm ranges, or under low-load conditions, the gas pressure in the combustion chamber can cause the seal to lift off the surface, resulting in combustion gas leaking into the next chamber. Mazda developed a solution, changing the shape of the trochoid housing, which meant that the seals remained flush with the housing. Using the Wankel engine at sustained higher revolutions helps eliminate apex seal lift-off, making it viable in applications such as electricity generation. In motor vehicles, the engine is suited to series-hybrid applications. NSU circumvented this problem by adding slots on one side of the apex seals, thus directing the gas pressure into the base of the apex. This effectively prevented the apex seals from lifting off.
Although in two dimensions the seal system of a Wankel looks to be even simpler than that of a corresponding multi-cylinder piston engine, in three dimensions the opposite is true. As well as the rotor apex seals evident in the conceptual diagram, the rotor must also seal against the chamber ends.
Piston rings in reciprocating engines are not perfect seals; each has a gap to allow for expansion. The sealing at the apexes of the Wankel rotor is less critical because leakage is between adjacent chambers on adjacent strokes of the cycle, rather than to the mainshaft case. Although sealing has improved over the years, the less-than-effective sealing of the Wankel, which is mostly due to lack of lubrication, remains a factor reducing its efficiency.
The trailing side of the rotary engine's combustion chamber develops a squeeze stream that pushes back the flame front. With the conventional one or two-spark-plug system and homogenous mixture, this squeeze stream prevents the flame from propagating to the combustion chamber's trailing side in the mid and high-engine speed ranges. Kawasaki dealt with that problem in its US patent ; Toyota obtained a 7% economy improvement by placing a glow-plug in the leading side, and using Reed-Valves in intake ducts. In two-stroke engines, metal reeds last about while carbon fiber, around . This poor combustion in the trailing side of the chamber is one of the reasons why there is more carbon monoxide and unburned hydrocarbons in a Wankel's exhaust stream. A side-port exhaust, as is used in the Mazda Renesis, avoids port overlap, one of the causes of this, because the unburned mixture cannot escape. The Mazda 26B avoided this problem through the use of a three spark-plug ignition system and obtained a complete conversion of the aspirated mixture. In the 26B, the upper late trailing spark plug ignites before the onset of the squeeze flow.
Regulations and taxation
National agencies that tax automobiles according to displacement and regulatory bodies in automobile racing use a variety of equivalency factors to compare Wankel engines to four-stroke piston engines. Greece, for example, taxed cars based on the working chamber volume (the face of one rotor), multiplied by the number of rotors, lowering the cost of ownership. Japan did the same, but applied an equivalency factor of 1.5, making Mazda's 13B engine fit just under the 2-liter tax limit. FIA used an equivalency factor of 1.8 but later increased it to 2.0, using the displacement formula described by Bensinger. However, the DMSB applies an equivalency factor of 1.5 in motorsport.
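The different equivalency factors amount to a one-line computation, sketched below in Python (illustrative only; the 13B chamber volume of about 654 cm3 is taken from the earlier example in this article, and the factor-to-jurisdiction mapping follows the paragraph above).

```python
def taxable_displacement_cm3(v_k, rotors, factor):
    """Displacement used for taxation: factor x rotors x chamber volume."""
    return factor * rotors * v_k

v13b = 654  # cm3 per chamber, Mazda 13B (from the earlier example)
for body, factor in [("Greece", 1.0), ("Japan/DMSB", 1.5), ("FIA", 2.0)]:
    print(body, taxable_displacement_cm3(v13b, rotors=2, factor=factor), "cm3")
# Japan's factor of 1.5 gives 1962 cm3 - just under the 2-litre tax limit,
# consistent with the claim about the 13B above.
```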
Car applications
The first rotary-engined car for sale was the 1964 NSU Rotary Spider. Rotary engines were continuously fitted in cars until 2012 when Mazda discontinued the RX-8. Mazda introduced a rotary-engined hybrid electric car, the MX-30 R-EV in 2023.
NSU and Mazda
Mazda and NSU signed a study contract to develop the Wankel engine in 1961 and competed to bring the first Wankel-powered automobile to the market. Although Mazda produced an experimental rotary that year, NSU was the first with a rotary automobile for sale, the sporty NSU Spider in 1964; Mazda countered with a display of two- and four-rotor rotary engines at that year's Tokyo Motor Show. In 1967, NSU began production of a rotary-engined luxury car, the Ro 80. Unlike Mazda and Curtiss-Wright, however, NSU had not produced reliable apex seals. NSU had problems with apex seal wear, poor shaft lubrication, and poor fuel economy, leading to frequent engine failures that were not solved until 1972; the resulting warranty costs curtailed further NSU rotary engine development. This premature release of the new rotary engine gave a poor reputation to all makes, and even when these issues were solved in the last engines produced by NSU in the second half of the 1970s, sales did not recover.
By early 1978, Audi engineers Richard van Basshuysen and Gottlieb Wilmers had designed a new generation of the Audi NSU Wankel engine, the KKM 871. It was a two-rotor unit with a chamber volume Vk of 746.6 cm3, derived from an eccentricity of 17 mm, a generating radius of 118.5 mm, an equidistance of 4 mm, and a housing width of 69 mm. It had double side intake ports and a peripheral exhaust port, and it was fitted with a continuously injecting Bosch K-Jetronic multipoint manifold injection system. According to the DIN 70020 standard, it produced 121 kW at 6500 rpm and a maximum torque of 210 N·m at 3500 rpm. Van Basshuysen and Wilmers designed the engine with either a thermal reactor or a catalytic converter for emissions control. The engine had a mass of 142 kg, and a BSFC of approximately 315 g/(kW·h) at 3000 rpm and a BMEP of 900 kPa. For testing, two KKM 871 engines were installed in Audi 100 Type 43 test cars, one with a five-speed manual gearbox and one with a three-speed automatic gearbox.
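The quoted chamber volume is consistent with the standard Wankel displacement relation Vk = 3·sqrt(3)·e·R·b, taking the generating radius together with the equidistance; the following check is illustrative only and does not claim to reproduce Audi's own calculation.

    import math

    e, R, a, b = 17.0, 118.5, 4.0, 69.0   # mm: eccentricity, generating radius,
                                          # equidistance, housing width
    # Single-chamber displacement, with the effective radius R + a:
    Vk_mm3 = 3 * math.sqrt(3) * e * (R + a) * b
    print(f"Vk = {Vk_mm3 / 1000:.1f} cm^3")  # Vk = 746.6 cm^3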
Mazda
Mazda claimed to have solved the apex seal problem, operating test engines at high speed for 300 hours without failure. After years of development, Mazda's first rotary engine car was the 1967 Cosmo 110S. The company followed with several Wankel ("rotary" in the company's terminology) vehicles, including a bus and a pickup truck. Customers often cited the cars' smoothness of operation. However, Mazda chose a method to comply with hydrocarbon emission standards which, while less expensive to produce, increased fuel consumption.
Mazda later abandoned the rotary engine in most of its automotive designs, continuing to use it only in its sports car range. The company normally used two-rotor designs; a more advanced twin-turbo three-rotor engine was fitted in the 1990 Eunos Cosmo sports car. In 2003, Mazda introduced the Renesis engine, fitted in the RX-8. The Renesis engine relocated the exhaust ports from the periphery of the rotary housing to the sides, allowing for larger overall ports and better airflow. The Renesis is capable of , with improved fuel economy, reliability, and lower emissions than prior Mazda rotary engines, all from a nominal 2.6 L displacement, but this was not enough to meet more stringent emissions standards. Mazda ended production of its rotary engine in 2012 after the engine failed to meet the more stringent Euro 5 emission standards, leaving no automotive company selling a rotary-powered road vehicle until 2023.
Mazda launched the MX-30 R-EV hybrid, fitted with a Wankel engine range extender, in March 2023. The Wankel engine has no direct connection to the wheels and serves only to charge the battery. It is a single-rotor unit with an engine displacement of and a rated power output of . The engine has petrol direct injection, exhaust gas recirculation, and an exhaust-gas treatment system with a three-way catalyst and a particulate filter. The engine is Euro 6d-ISC-FCM-compliant.
Citroën
Citroën did much research on the engine, producing the M35 and GS Birotor cars and a helicopter, using engines produced by Comotor, a joint venture between Citroën and NSU.
Daimler-Benz
Daimler-Benz fitted a Wankel engine in its C111 concept car. The C 111-II's engine was naturally aspirated, fitted with petrol direct injection, and had four rotors. The total displacement was , and the compression ratio was 9.3:1. It provided a maximum torque of at 5,000 rpm and a power output of at 6,000 rpm.
American Motors
American Motors Corporation (AMC) was so convinced "... that the rotary engine will play an important role as a powerplant for cars and trucks of the future ..." that its chairman, Roy D. Chapin Jr., signed a licensing agreement in February 1973, after a year of negotiations, to build rotary engines for both passenger cars and military vehicles, with the right to sell any rotary engines it produced to other companies. AMC's president, William Luneburg, did not expect dramatic development through to 1980, but Gerald C. Meyers, AMC's vice president of the engineering product group, suggested that AMC should buy the engines from Curtiss-Wright before developing its own rotary engines, and predicted a total transition to rotary power by 1984.
Plans called for the engine to be used in the AMC Pacer, but development was pushed back. American Motors designed the unique Pacer around the engine. By 1974, AMC had decided to purchase the General Motors (GM) rotary instead of building an engine in-house. Both GM and AMC confirmed the relationship would be beneficial in marketing the new engine, with AMC claiming that the GM rotary achieved good fuel economy. GM's engines had not reached production when the Pacer was launched onto the market. The 1973 oil crisis played a part in frustrating the use of the rotary engine. Rising fuel prices and speculation about proposed US emission standards legislation also increased concerns.
General Motors
At its annual meeting in May 1973, General Motors unveiled the Wankel engine it planned to use in the Chevrolet Vega. By 1974, GM R&D had not succeeded in producing a Wankel engine meeting both the emission requirements and good fuel economy, leading to a decision by the company to cancel the project. Because of that decision, the R&D team only partly released the results of its most recent research, which claimed to have solved the fuel-economy problem and built reliable engines with a lifespan above . Those findings were not taken into account when the cancellation order was issued. The ending of GM's rotary project required AMC, which was to purchase the engine, to reconfigure the Pacer to house its AMC straight-6 engine driving the rear wheels.
AvtoVAZ
In 1974, the Soviet Union created a special engine-design bureau, which, in 1978, designed an engine designated as VAZ-311 fitted into a VAZ-2101 car. In 1980, the company began delivering the VAZ-411 twin-rotor Wankel engine in VAZ-2106 cars, with about 200 being manufactured. Most of the production went to the security services.
Ford
Ford conducted research into rotary engines, resulting in several granted patents: a method for fabricating housings (1974); side-plate coatings (1974); a housing coating (1975); housing alignment (1978); and a reed-valve assembly (1979). In 1972, Henry Ford II stated that the rotary probably would not replace the piston in "my lifetime".
Car racing
In 1974, the Sigma MC74, powered by a Mazda 12A engine, became the first rotary-engined car, and the only team from outside Western Europe or the United States, to finish the entire 24 hours of the 24 Hours of Le Mans race; Yojiro Terada was the driver of the MC74. Mazda was the first team from outside Western Europe or the United States to win Le Mans outright, and its car was also the only non-piston-engined car to win Le Mans, which the company accomplished in 1991 with its four-rotor 787B ( displacement, rated by FIA formula at ). In the C2 class, all participants had the same amount of fuel; the only exception was the unregulated C1 Category 1, which allowed only naturally aspirated engines. The Mazdas were classified as naturally aspirated, starting with a weight of 830 kg, 170 kg less than the supercharged competitors. Cars under the Group C1 Category 1 regulations for 1991 were allowed to be another 80 kg lighter than the 787B; in addition, Group C1 Category 1 permitted only 3.5-liter naturally aspirated engines and had no fuel quantity limits.
As a vehicle range extender
Due to the compact size and high power-to-weight ratio of the Wankel engine, it has been proposed for electric vehicles as a range extender to provide supplementary power when battery levels are low. Used as a generator in a passenger car, a Wankel engine has packaging, noise, vibration, and harshness advantages, maximizing interior passenger and luggage space and providing a good noise and vibration emissions profile. However, it is questionable whether the inherent disadvantages of the Wankel engine allow its use as a range extender for passenger cars.
In 2010, Audi unveiled a prototype series-hybrid electric car, the A1 e-tron. It incorporated a Wankel engine with a chamber volume Vk of 254 cm3, capable of producing 18 kW at 5000 rpm. It was mated to an electric generator, which recharged the car's batteries as needed and provided electricity directly to the electric driving motor. The package had a mass of 70 kg and could produce 15 kW of electric power.
In November 2013, Mazda announced to the motoring press a series-hybrid prototype car, the Mazda2 EV, using a Wankel engine as a range extender. The generator engine, located under the rear luggage floor, is a tiny, almost inaudible, single-rotor 330-cc unit, generating at 4,500 rpm and maintaining a continuous electric output of 20 kW.
Mazda introduced the MX-30 R-EV, fitted with a Wankel engine range extender, in March 2023. The car's Wankel engine is a naturally aspirated single-rotor unit with a chamber volume Vk of , a compression ratio of 11.9, and a rated power output of . It has petrol direct injection, exhaust gas recirculation, and an exhaust-gas treatment system with a three-way catalyst (TWC) and a particulate filter. According to auto motor und sport, the engine is Euro 6d-ISC-FCM-compliant.
Motorcycle applications
The first Wankel-engined motorcycle was an MZ-built MZ ES 250, fitted with a water-cooled KKM 175 W Wankel engine. It was followed in 1965 by an air-cooled version, the KKM 175 L. The engine produced at 6,750 rpm, but the motorcycle never went into series production.
Norton
In Britain, Norton Motorcycles developed a Wankel rotary engine for motorcycles, based on the Sachs air-cooled rotary Wankel engine that powered the DKW/Hercules W-2000 motorcycle; this two-rotor engine was included in the Commander and F1. Norton improved on Sachs's air cooling, introducing a plenum chamber. Suzuki also made a production motorcycle powered by a Wankel engine, the RE-5, using ferroTiC alloy apex seals and an NSU rotor in a successful attempt to prolong the engine's life.
In the early 1980s, drawing on earlier work at BSA, Norton produced the air-cooled twin-rotor Classic, followed by the liquid-cooled Commander and the Interpol2 (a police version). Subsequent Norton Wankel bikes included the Norton F1, F1 Sports, RC588, Norton RCW588, and NRS588. Norton proposed a new 588-cc twin-rotor model called the "NRV588" and a 700-cc version called the "NRV700". Brian Crighton, a former mechanic at Norton, went on to develop his own line of rotary-engined motorcycles, named "Roton", which won several Australian races.
Despite successes in racing, no motorcycles powered by Wankel engines have been produced for sale to the general public for road use since 1992.
Yamaha
In 1972, Yamaha introduced the RZ201 at the Tokyo Motor Show, a prototype with a Wankel engine, weighing 220 kg and producing from a twin-rotor 660-cc engine (US patent N3964448). In 1972, Kawasaki presented its two-rotor Kawasaki X99 rotary engine prototype (US patents N3848574 and N3991722). Both Yamaha and Kawasaki claimed to have solved the problems of poor fuel economy, high exhaust emissions, and poor engine longevity of early Wankels, but neither prototype reached production.
Hercules
In 1974, Hercules produced W-2000 Wankel motorcycles, but low production numbers meant the project was unprofitable, and production ceased in 1977.
Suzuki
From 1975 to 1976, Suzuki produced its RE5 single-rotor Wankel motorcycle. It was a complex design, with both liquid cooling and oil cooling, and multiple lubrication and carburetor systems. It worked well and was smooth, but it did not sell well because it was heavy and had a modest power output of .
Suzuki opted for a complicated combination of oil cooling and water cooling. The exhaust pipes became very hot, so Suzuki used a finned exhaust manifold, twin-skinned exhaust pipes with cooling grilles, heatproof pipe wrappings, and silencers with heat shields. Suzuki used three lubrication systems, while Garside's Norton engine had a single total-loss oil-injection system that fed both the main bearings and the intake manifolds. Suzuki chose a single rotor, which ran fairly smoothly but with rough patches at 4,000 rpm, and mounted the massive rotor high in the frame.
Although the machine was described as handling well, the result was that the Suzuki was heavy, overcomplicated, expensive to manufacture, and, at 62 bhp, short on power.
Van Veen
Dutch motorcycle importer and manufacturer Van Veen produced small quantities of a dual-rotor Wankel-engined OCR-1000 motorcycle between 1978 and 1980, using surplus Comotor engines. The OCR 1000 used a modified KKM 624 engine originally intended for the Citroën GS Birotor car, in which an electronic map ignition from Hartig replaced the ignition distributor.
Non-road vehicle applications
Aircraft
In principle, rotary engines are ideal for light aircraft, being light, compact, almost vibrationless, and with a high power-to-weight ratio. Further aviation benefits of a rotary engine include:
The engine is not susceptible to "shock-cooling" during descent;
The engine does not require an enriched mixture for cooling at high power;
Because it has no reciprocating parts, the engine is less vulnerable to damage when it revolves at a rate higher than the designed maximum.
Unlike in cars and motorcycles, a rotary aero-engine will be sufficiently warm before full power is applied, because of the time taken for pre-flight checks. Also, the taxi to the runway involves minimal cooling, which further permits the engine to reach the operating temperature for full power on take-off. A Wankel aero-engine spends most of its operational time at high power outputs, with little idling.
Since rotary engines operate at a relatively high rotational speed with relatively low torque (at 6,000 rpm of the output shaft, the rotor itself spins at only about one-third of that speed), propeller-driven aircraft must use a propeller speed reduction unit to keep the propeller within its designed speed range. Experimental aircraft with Wankel engines use propeller speed reduction units; for example, the MidWest twin-rotor engine has a 2.95:1 reduction gearbox.
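With the figures just quoted, the speed matching works out as follows (simple arithmetic, illustrative only, not data from any particular installation):

    shaft_rpm = 6000
    rotor_rpm = shaft_rpm / 3        # rotor turns at about one-third shaft speed
    reduction = 2.95                 # MidWest twin-rotor gearbox ratio
    prop_rpm = shaft_rpm / reduction
    print(round(rotor_rpm), round(prop_rpm))  # 2000 2034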
The first rotary-engined aircraft flew in the late 1960s: the experimental Lockheed Q-Star, a civilian version of the United States Army's reconnaissance QT-2, essentially a powered Schweizer sailplane. The plane was powered by a Curtiss-Wright RC2-60 Wankel rotary engine. The same engine model was also used in a Cessna Cardinal and a helicopter, as well as other airplanes. The French company Citroën developed a rotary-powered helicopter in the 1970s. In Germany in the mid-1970s, a pusher ducted-fan airplane powered by a modified NSU multi-rotor rotary engine was developed in both civilian and military versions, the Fanliner and Fantrainer.
At roughly the same time as the first experiments with full-scale aircraft powered by rotary engines, model-aircraft-sized versions were pioneered by the well-known Japanese firm O.S. Engines together with the then-extant German aeromodelling products firm Graupner, under license from NSU. The Graupner model Wankel engine, produced by O.S. Engines of Japan, has a chamber volume Vk of 4.9 cm3, produces 460 W at 16,000 rpm, and has a mass of 370 g.
Rotary engines have been fitted in homebuilt experimental aircraft, such as the ARV Super2, a couple of which were powered by the British MidWest aero-engine. Most are Mazda 12A and 13B automobile engines, converted for aviation use. This is a very cost-effective alternative to certified aircraft engines, providing engines ranging from 100 to at a fraction of the cost of traditional piston engines. These conversions first took place in the early 1970s. Peter Garrison, a contributing editor for Flying magazine, wrote that "in my opinion ... the most promising engine for aviation use is the Mazda rotary."
The sailplane manufacturer Schleicher uses an Austro Engine AE50R engine in its self-launching models ASK-21 Mi, ASH-26E, ASH-25 M/Mi, ASH-30 Mi, ASH-31 Mi, ASW-22 BLE, and ASG-32 Mi.
In 2013, e-Go airplanes, based in Cambridge, United Kingdom, announced that a rotary engine from Rotron Power will power its new single-seater canard aircraft.
The DA36 E-Star, an aircraft designed by Siemens, Diamond Aircraft, and EADS, employs a series-hybrid powertrain, with the propeller turned by a Siemens electric motor. The aim is to reduce fuel consumption and emissions by up to 25%. An onboard Austro Engine rotary engine and generator provide the electricity, and a propeller speed reduction unit is eliminated. The electric motor uses electricity stored in batteries, with the generator engine off, to take off and climb, reducing sound emissions. The series-hybrid powertrain using the Wankel engine reduces the plane's weight by 100 kg relative to its predecessor. The DA36 E-Star first flew in June 2013, making this the first-ever flight of a series-hybrid powertrain. Diamond Aircraft claims that rotary engine technology is scalable to a 100-seat aircraft.
Trains
Since 2015, a total of 60 trains in Germany have been equipped with Wankel-engined auxiliary power systems that burn diesel fuel. The locomotives use the WST KKM 351 Wankel diesel fuel engine.
Other uses
The Wankel engine is well-suited for devices in which a human operator is close to the engine, e.g., hand-held devices such as chainsaws. The excellent starting behavior and low mass make the Wankel engine also a good powerplant for portable fire pumps and portable power generators.
Small Wankel engines are found in applications such as go-karts, personal watercraft, and auxiliary power units for aircraft. Kawasaki patented a mixture-cooled rotary engine (US patent 3991722). The Japanese diesel engine manufacturer Yanmar and Dolmar-Sachs of Germany produced a rotary-engined chainsaw (SAE paper 760642) and outboard boat engines, and the French firm Outils Wolf made a lawnmower (Rotondor) powered by a Wankel rotary engine; to save production costs, its rotor ran in a horizontal plane, and there were no seals on the underside.
The simplicity of the rotary engine makes it well-suited for mini, micro, and micro-mini engine designs. The Microelectromechanical Systems (MEMS) Rotary Engine Lab at the University of California, Berkeley, formerly conducted research towards developing rotary engines down to 1 mm in diameter, with displacements of less than 0.1 cc. Materials included silicon, and motive power sources included compressed air. The goal of such research was to eventually develop an internal combustion engine able to deliver 100 milliwatts of electrical power, with the engine itself serving as the rotor of the electric generator and with magnets built into the engine rotor. Development of the miniature rotary engine stopped at UC Berkeley at the end of the DARPA contract.
In 1976, Road & Track reported that Ingersoll-Rand would develop a Wankel engine with a chamber volume Vk of and a rated power of per rotor. Eventually, 13 units of the proposed engine were built, albeit with a larger displacement, and they accumulated over 90,000 operating hours combined. The production engine had a chamber volume Vk of and a power output of per rotor; both single- and twin-rotor engines were made (producing or , respectively). The engines ran on natural gas and had a relatively low engine speed due to their application.
Deere & Company acquired the Curtiss-Wright rotary division in February 1984, making large multi-fuel prototypes, some with an 11-liter rotor for large vehicles. The developers attempted to use a stratified charge concept. The technology was transferred to RPI in 1991.
Yanmar of Japan produced small, charge-cooled rotary engines for chainsaws and outboard engines. One of its products is the LDR (rotor recess in the leading edge of the combustion chamber) engine, which has better exhaust emissions profiles, and reed-valve controlled intake ports, which improve part-load and low rpm performance.
In 1971 and 1972, Arctic Cat produced snowmobiles powered by Sachs KM 914 303-cc and KC-24 294-cc Wankel engines made in Germany.
In the early 1970s, Outboard Marine Corporation sold snowmobiles under the Johnson and other brands, which were powered by OMC engines.
Aixro of Germany produces and sells a go-kart engine with a 294-cc-chamber charge-cooled rotor and liquid-cooled housings. Other makers include Wankel AG, Cubewano, Rotron, and Precision Technology.
Non-internal combustion
In addition to applications as an internal combustion engine, the basic Wankel design has also been used for gas compressors, and superchargers for internal combustion engines, but in these cases, although the design still offers advantages in reliability, the primary advantages of the Wankel in size and weight over the four-stroke internal combustion engine are irrelevant. In a design using a Wankel supercharger on a Wankel engine, the supercharger is twice the size of the engine.
The Wankel design is used in the seat belt pre-tensioner system in some Mercedes-Benz and Volkswagen cars. When the deceleration sensors detect a potential crash, small explosive cartridges are triggered electrically, and the resulting pressurized gas feeds into tiny Wankel engines, which rotate to take up the slack in the seat belt systems, anchoring the driver and passengers firmly in the seat before a collision.
See also
General Motors Rotary Combustion Engine
Gunderson Do-All Machine
Mazda RX-8 Hydrogen RE
Mazda Wankel engine
Mercedes-Benz M 950
Mercedes-Benz C111
O.S. Engines, the only licensed maker of Wankel model engines
Pistonless rotary engine
Quasiturbine
RKM engine
LiquidPiston
Notes
References
F Feller and M I Mech: "The 2-Stage Rotary Engine—A New Concept in Diesel Power" by Rolls-Royce, The Institution of Mechanical Engineers, Proceedings 1970–71, Vol. 185, pp. 139–158, D55-D66. London
P V Lamarque, "The Design of Cooling Fins for Motor-Cycle Engines", The Institution of Automobile Engineers Magazine, London, March 1943 issue, and also in "The Institution of Automobile Engineers Proceedings", XXXVII, Session 1942–1943, pp. 99–134 and 309–312.
Walter G. Froede (1961): 'The NSU-Wankel Rotating Combustion Engine', SAE Technical paper 610017
M. R. Hayes & D. P. Bottrill: 'N.S.U. Spider -Vehicle Analysis', Mira (Motor Industry Research Association, UK), 1965.
C Jones (Curtiss-Wright), "Rotary Combustion Engine is as Neat and Trim as the Aircraft Turbine", SAE Journal, May 1968, Vol 76, nº 5: 67–69. Also in SAE paper 670194.
Jan P Norbye: "Rivals to the Wankel", Popular Science, Jan 1967; 'The Wankel Engine. Design, development, applications'; Chilton, 1972.
T W Rogers et al. (Mobil), "Lubricating Rotary Engines", Automotive Engineering (SAE) May 1972, Vol 80, nº 5: 23–35.
K Yamamoto et al. (Mazda): "Combustion and Emission Properties of Rotary Engines", Automotive Engineering (SAE), July 1972: 26–29. Also in SAE paper 720357.
L W Manley (Mobil): "Low-Octane Fuel is OK for Rotary Engines", Automotive Engineering (SAE), Aug 1972, Vol 80, nº 8: 28–29.
Reiner Nikulski: "The Norton rotor turns in my Hercules W-2000", "Sachs KC-27 engine with a catalyst converter", and other articles in: "Wankel News" (In German, from Hercules Wankel IG)
"A WorldWide Rotary Update", Automotive Engineering (SAE), Feb 1978, Vol 86, nº 2: 31–42.
B Lawton: 'The Turbocharged Diesel Wankel Engine', C68/78, in: 'Institution of Mechanical Engineers Conference Publications, 1978–2, Turbocharging and Turbochargers', pp. 151–160.
T Kohno et al. (Toyota): "Rotary Engine's Light-Load Combustion Improved", Automotive Engineering (SAE), Aug 1979: 33–38. Also in SAE paper 790435.
Kris Perkins: Norton Rotaries, 1991 Osprey Automotive, London.
Karl Ludvigsen: Wankel Engines A to Z, New York 1973.
Len Louthan (AAI corp.): 'Development of a Lightweight Heavy Fuel Rotary Engine', SAE paper 930682
Patents: 1958 (Wankel); 1974 (Kawasaki); 1974 (Ford); 1974; 1976 (Ford); 1978; 1979 (Ford).
Dun-Zen Jeng et al.: 'The Numerical Investigation on the Performance of Rotary Engine with Leakage, Different Fuels and Recess Sizes', SAE paper 2013-32-9160, and same author: 'The intake and Exhaust Pipe Effect on Rotary Engine Performance', SAE paper 2013-32-9161
Wei Wu et al.: 'A Heat Pipe Assisted Air-Cooled Rotary Wankel Engine for Improved Durability, Power and Efficiency', SAE paper 2014-01-2160
Alberto Boretti: 'CAD/CFD/CAE Modelling of Wankel Engines for UAV', SAE Technical Paper 2015-01-2466
External links
Piston ported engines
Pistonless rotary engine
Motorcycle engines
German inventions
20th-century inventions | Wankel engine | Technology | 14,854 |
363,551 | https://en.wikipedia.org/wiki/Universal%20enveloping%20algebra | In mathematics, the universal enveloping algebra of a Lie algebra is the unital associative algebra whose representations correspond precisely to the representations of that Lie algebra.
Universal enveloping algebras are used in the representation theory of Lie groups and Lie algebras. For example, Verma modules can be constructed as quotients of the universal enveloping algebra. In addition, the enveloping algebra gives a precise definition for the Casimir operators. Because Casimir operators commute with all elements of a Lie algebra, they can be used to classify representations. The precise definition also allows the importation of Casimir operators into other areas of mathematics, specifically, those that have a differential algebra. They also play a central role in some recent developments in mathematics. In particular, their dual provides a commutative example of the objects studied in non-commutative geometry, the quantum groups. This dual can be shown, by the Gelfand–Naimark theorem, to contain the C* algebra of the corresponding Lie group. This relationship generalizes to the idea of Tannaka–Krein duality between compact topological groups and their representations.
From an analytic viewpoint, the universal enveloping algebra of the Lie algebra of a Lie group may be identified with the algebra of left-invariant differential operators on the group.
Informal construction
The idea of the universal enveloping algebra is to embed a Lie algebra $\mathfrak{g}$ into a unital associative algebra $U$ in such a way that the abstract bracket operation in $\mathfrak{g}$ corresponds to the commutator $xy - yx$ in $U$ and the algebra $U$ is generated by the elements of $\mathfrak{g}$. There may be many ways to make such an embedding, but there is a unique "largest" such $U$, called the universal enveloping algebra of $\mathfrak{g}$.
Generators and relations
Let $\mathfrak{g}$ be a Lie algebra, assumed finite-dimensional for simplicity, with basis $X_1, \dots, X_n$. Let $c_{ij}{}^{k}$ be the structure constants for this basis, so that
$$[X_i, X_j] = \sum_{k} c_{ij}{}^{k} X_k.$$
Then the universal enveloping algebra is the associative algebra (with identity) generated by elements $x_1, \dots, x_n$ subject to the relations
$$x_i x_j - x_j x_i = \sum_{k} c_{ij}{}^{k} x_k$$
and no other relations. Below we will make this "generators and relations" construction more precise by constructing the universal enveloping algebra as a quotient of the tensor algebra over $\mathfrak{g}$.
Consider, for example, the Lie algebra sl(2,C), spanned by the matrices
$$X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad Y = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$
which satisfy the commutation relations $[H, X] = 2X$, $[H, Y] = -2Y$, and $[X, Y] = H$. The universal enveloping algebra of sl(2,C) is then the algebra generated by three elements $x$, $y$, $h$ subject to the relations
$$h x - x h = 2 x, \qquad h y - y h = -2 y, \qquad x y - y x = h,$$
and no other relations. We emphasize that the universal enveloping algebra is not the same as (or contained in) the algebra of $2 \times 2$ matrices. For example, the $2 \times 2$ matrix $H$ satisfies $H^2 = I$, as is easily verified. But in the universal enveloping algebra, the element $h$ does not satisfy $h^2 = 1$, because we do not impose this relation in the construction of the enveloping algebra. Indeed, it follows from the Poincaré–Birkhoff–Witt theorem (discussed below) that the elements $1, h, h^2, h^3, \dots$ are all linearly independent in the universal enveloping algebra.
Finding a basis
In general, elements of the universal enveloping algebra are linear combinations of products of the generators in all possible orders. Using the defining relations of the universal enveloping algebra, we can always re-order those products in a particular order, say with all the factors of $x_1$ first, then factors of $x_2$, etc. For example, whenever we have a term that contains $x_2 x_1$ (in the "wrong" order), we can use the relations to rewrite this as $x_1 x_2$ plus a linear combination of the $x_j$'s. Doing this sort of thing repeatedly eventually converts any element into a linear combination of terms in ascending order. Thus, elements of the form
$$x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n},$$
with the $k_j$'s being non-negative integers, span the enveloping algebra. (We allow $k_j = 0$, meaning that we allow terms in which no factors of $x_j$ occur.) The Poincaré–Birkhoff–Witt theorem, discussed below, asserts that these elements are linearly independent and thus form a basis for the universal enveloping algebra. In particular, the universal enveloping algebra is always infinite-dimensional.
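As a concrete instance of this reordering, here is a routine computation with the sl(2,C) relations above, included purely for illustration:
\begin{align*}
  h\,y\,x &= h\,(x\,y - h) && \text{using } y\,x = x\,y - h \\
          &= (x\,h + 2\,x)\,y - h^2 && \text{using } h\,x = x\,h + 2\,x \\
          &= x\,(y\,h - 2\,y) + 2\,x\,y - h^2 && \text{using } h\,y = y\,h - 2\,y \\
          &= x\,y\,h - h^2,
\end{align*}
a linear combination of the ordered monomials $x^a y^b h^c$.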
The Poincaré–Birkhoff–Witt theorem implies, in particular, that the elements $x_1, \dots, x_n$ themselves are linearly independent. It is therefore common, if potentially confusing, to identify the $x_i$'s with the generators $X_i$ of the original Lie algebra. That is to say, we identify the original Lie algebra as the subspace of its universal enveloping algebra spanned by the generators. Although $\mathfrak{g}$ may be an algebra of matrices, the universal enveloping algebra of $\mathfrak{g}$ does not consist of (finite-dimensional) matrices. In particular, there is no finite-dimensional algebra that contains the universal enveloping algebra of $\mathfrak{g}$; the universal enveloping algebra is always infinite-dimensional. Thus, in the case of sl(2,C), if we identify our Lie algebra as a subspace of its universal enveloping algebra, we must not interpret $x$, $y$ and $h$ as $2 \times 2$ matrices, but rather as symbols with no further properties (other than the commutation relations).
Formalities
The formal construction of the universal enveloping algebra takes the above ideas, and wraps them in notation and terminology that makes it more convenient to work with. The most important difference is that the free associative algebra used above is narrowed to the tensor algebra, so that the product of symbols is understood to be the tensor product. The commutation relations are imposed by constructing a quotient space of the tensor algebra, quotiented by the smallest two-sided ideal containing elements of the form $a \otimes b - b \otimes a - [a, b]$. The universal enveloping algebra is the "largest" unital associative algebra generated by elements of $\mathfrak{g}$ with a Lie bracket compatible with the original Lie algebra.
Formal definition
Recall that every Lie algebra $\mathfrak{g}$ is in particular a vector space. Thus, one is free to construct the tensor algebra $T(\mathfrak{g})$ from it. The tensor algebra is a free algebra: it simply contains all possible tensor products of all possible vectors in $\mathfrak{g}$, without any restrictions whatsoever on those products.
That is, one constructs the space
$$T(\mathfrak{g}) = K \,\oplus\, \mathfrak{g} \,\oplus\, (\mathfrak{g} \otimes \mathfrak{g}) \,\oplus\, (\mathfrak{g} \otimes \mathfrak{g} \otimes \mathfrak{g}) \,\oplus\, \cdots,$$
where $\otimes$ is the tensor product, and $\oplus$ is the direct sum of vector spaces. Here, $K$ is the field over which the Lie algebra is defined. From here, through to the remainder of this article, the tensor product is always explicitly shown. Many authors omit it, since, with practice, its location can usually be inferred from context. Here, a very explicit approach is adopted, to minimize any possible confusion about the meanings of expressions.
The first step in the construction is to "lift" the Lie bracket from the Lie algebra (where it is defined) to the tensor algebra (where it is not), so that one can coherently work with the Lie bracket of two tensors. The lifting is done as follows. First, recall that the bracket operation on a Lie algebra is a map $\mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ that is bilinear, skew-symmetric and satisfies the Jacobi identity. We wish to define a Lie bracket $[-,-]$ that is a map $T(\mathfrak{g}) \times T(\mathfrak{g}) \to T(\mathfrak{g})$ that is also bilinear, skew-symmetric and obeys the Jacobi identity.
The lifting can be done grade by grade. Begin by defining the bracket on $\mathfrak{g} \times \mathfrak{g} \subset T(\mathfrak{g}) \times T(\mathfrak{g})$ as the Lie algebra bracket $[a, b]$ for $a, b \in \mathfrak{g}$.
This is a consistent, coherent definition, because both sides are bilinear, and both sides are skew-symmetric (the Jacobi identity will follow shortly). The above defines the bracket on $T^1(\mathfrak{g}) \times T^1(\mathfrak{g})$; it must now be lifted to $T^m(\mathfrak{g}) \times T^n(\mathfrak{g})$ for arbitrary $m, n$. This is done recursively, by defining
$$[a \otimes b,\, c] = a \otimes [b, c] + [a, c] \otimes b$$
and likewise
$$[a,\, b \otimes c] = [a, b] \otimes c + b \otimes [a, c].$$
It is straightforward to verify that the above definition is bilinear, and is skew-symmetric; one can also show that it obeys the Jacobi identity. The final result is that one has a Lie bracket that is consistently defined on all of $T(\mathfrak{g})$; one says that it has been "lifted" to all of $T(\mathfrak{g})$ in the conventional sense of a "lift" from a base space (here, the Lie algebra) to a covering space (here, the tensor algebra).
The result of this lifting is explicitly a Poisson algebra. It is a unital associative algebra with a Lie bracket that is compatible with the Lie algebra bracket; it is compatible by construction. It is not the smallest such algebra, however; it contains far more elements than needed. One can get something smaller by projecting back down. The universal enveloping algebra of $\mathfrak{g}$ is defined as the quotient space
$$U(\mathfrak{g}) = T(\mathfrak{g}) / \sim,$$
where the equivalence relation $\sim$ is given by
$$a \otimes b \,\sim\, b \otimes a + [a, b].$$
That is, the Lie bracket defines the equivalence relation used to perform the quotienting. The result is still a unital associative algebra, and one can still take the Lie bracket of any two members. Computing the result is straightforward, if one keeps in mind that each element of $U(\mathfrak{g})$ can be understood as a coset: one just takes the bracket as usual, and searches for the coset that contains the result. It is the smallest such algebra; one cannot find anything smaller that still obeys the axioms of an associative algebra.
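For example, with the sl(2,C) generators from earlier, the coset identification reproduces the Lie bracket as a commutator (an illustrative restatement of the definition, not additional structure):
\[
  x \otimes y \,\sim\, y \otimes x + h
  \qquad\Longrightarrow\qquad
  x\,y - y\,x = h = [x, y] \quad\text{in } U(\mathfrak{sl}(2,\mathbb{C})).
\]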
The universal enveloping algebra is what remains of the tensor algebra after modding out the Poisson algebra structure. (This is a non-trivial statement; the tensor algebra has a rather complicated structure: it is, among other things, a Hopf algebra; the Poisson algebra is likewise rather complicated, with many peculiar properties. It is compatible with the tensor algebra, and so the modding can be performed. The Hopf algebra structure is conserved; this is what leads to its many novel applications, e.g. in string theory. However, for the purposes of the formal definition, none of this particularly matters.)
The construction can be performed in a slightly different (but ultimately equivalent) way. Forget, for a moment, the above lifting, and instead consider the two-sided ideal $I$ generated by elements of the form
$$a \otimes b - b \otimes a - [a, b].$$
This generator is an element of
$$\mathfrak{g} \oplus (\mathfrak{g} \otimes \mathfrak{g}) \subset T(\mathfrak{g}).$$
A general member of the ideal $I$ will have the form
$$c \otimes (a \otimes b - b \otimes a - [a, b]) \otimes d$$
for some $c, d \in T(\mathfrak{g})$. All elements of $I$ are obtained as linear combinations of elements of this form. Clearly, $I \subset T(\mathfrak{g})$ is a subspace. It is an ideal, in that if $x \in I$ and $y \in T(\mathfrak{g})$, then $x \otimes y \in I$ and $y \otimes x \in I$. Establishing that this is an ideal is important, because ideals are precisely those things that one can quotient with; ideals lie in the kernel of the quotienting map. That is, one has the short exact sequence
$$0 \to I \to T(\mathfrak{g}) \to T(\mathfrak{g})/I \to 0,$$
where each arrow is a linear map, and the kernel of that map is given by the image of the previous map. The universal enveloping algebra can then be defined as
$$U(\mathfrak{g}) = T(\mathfrak{g})/I.$$
Superalgebras and other generalizations
The above construction focuses on Lie algebras and on the Lie bracket, and its skewness and antisymmetry. To some degree, these properties are incidental to the construction. Consider instead some (arbitrary) algebra (not a Lie algebra) over a vector space, that is, a vector space $V$ endowed with a multiplication $m: V \times V \to V$ that takes elements $a, b$ to $m(a, b)$. If the multiplication is bilinear, then the same construction and definitions can go through. One starts by lifting $m$ up to $T(V)$, so that the lifted $m$ obeys all of the same properties that the base $m$ does, symmetry or antisymmetry or whatever. The lifting is done exactly as before, starting with the definition of $m$ on $V \times V$.
This is consistent precisely because the tensor product is bilinear, and the multiplication is bilinear. The rest of the lift is performed so as to preserve multiplication as a homomorphism. By definition, one writes
and also that
This extension is consistent by appeal to a lemma on free objects: since the tensor algebra is a free algebra, any homomorphism on its generating set can be extended to the entire algebra. Everything else proceeds as described above: upon completion, one has a unital associative algebra; one can take a quotient in either of the two ways described above.
The above is exactly how the universal enveloping algebra for Lie superalgebras is constructed. One need only carefully keep track of the sign when permuting elements. In this case, the (anti-)commutator of the superalgebra lifts to an (anti-)commuting Poisson bracket.
Another possibility is to use something other than the tensor algebra as the covering algebra. One such possibility is to use the exterior algebra; that is, to replace every occurrence of the tensor product by the exterior product. If the base algebra is a Lie algebra, then the result is the Gerstenhaber algebra; it is the exterior algebra of the corresponding Lie group. As before, it has a grading naturally coming from the grading on the exterior algebra. (The Gerstenhaber algebra should not be confused with the Poisson superalgebra; both invoke anticommutation, but in different ways.)
The construction has also been generalized for Malcev algebras, Bol algebras and left alternative algebras.
Universal property
The universal enveloping algebra, or rather the universal enveloping algebra together with the canonical map $h: \mathfrak{g} \to U(\mathfrak{g})$, possesses a universal property. Suppose we have any Lie algebra map
$$\varphi: \mathfrak{g} \to A$$
to a unital associative algebra $A$ (with Lie bracket in $A$ given by the commutator). More explicitly, this means that we assume
$$\varphi([a, b]) = \varphi(a)\,\varphi(b) - \varphi(b)\,\varphi(a)$$
for all $a, b \in \mathfrak{g}$. Then there exists a unique unital algebra homomorphism
$$\Phi: U(\mathfrak{g}) \to A$$
such that
$$\varphi = \Phi \circ h,$$
where $h$ is the canonical map. (The map $h$ is obtained by embedding $\mathfrak{g}$ into its tensor algebra and then composing with the quotient map to the universal enveloping algebra. This map is an embedding, by the Poincaré–Birkhoff–Witt theorem.)
To put it differently, if $\varphi: \mathfrak{g} \to A$ is a linear map into a unital algebra $A$ satisfying $\varphi([a, b]) = \varphi(a)\varphi(b) - \varphi(b)\varphi(a)$, then $\varphi$ extends to an algebra homomorphism of $U(\mathfrak{g})$. Since $U(\mathfrak{g})$ is generated by elements of $\mathfrak{g}$, the map $\Phi$ must be uniquely determined by the requirement that
$$\Phi(X_{i_1} \cdots X_{i_N}) = \varphi(X_{i_1}) \cdots \varphi(X_{i_N}).$$
The point is that because there are no other relations in the universal enveloping algebra besides those coming from the commutation relations of $\mathfrak{g}$, the map $\Phi$ is well defined, independent of how one writes a given element as a linear combination of products of Lie algebra elements.
The universal property of the enveloping algebra immediately implies that every representation of $\mathfrak{g}$ acting on a vector space $V$ extends uniquely to a representation of $U(\mathfrak{g})$. (Take $A = \operatorname{End}(V)$.) This observation is important because it allows (as discussed below) the Casimir elements to act on $V$. These operators (from the center of $U(\mathfrak{g})$) act as scalars and provide important information about the representations. The quadratic Casimir element is of particular importance in this regard.
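Concretely, if $\pi: \mathfrak{g} \to \operatorname{End}(V)$ is a representation, the extension provided by the universal property can be written as follows (a standard sketch, restating the construction rather than adding to it):
\[
  \Pi\bigl(X_{i_1} X_{i_2} \cdots X_{i_k}\bigr) = \pi(X_{i_1})\,\pi(X_{i_2}) \cdots \pi(X_{i_k}),
\]
which is well defined on the quotient precisely because
\[
  \pi(X)\,\pi(Y) - \pi(Y)\,\pi(X) = \pi([X, Y]).
\]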
Other algebras
Although the canonical construction, given above, can be applied to other algebras, the result, in general, does not have the universal property. Thus, for example, when the construction is applied to Jordan algebras, the resulting enveloping algebra contains the special Jordan algebras, but not the exceptional ones: that is, it does not envelope the Albert algebras. Likewise, the Poincaré–Birkhoff–Witt theorem, below, constructs a basis for an enveloping algebra; it just won't be universal. Similar remarks hold for the Lie superalgebras.
Poincaré–Birkhoff–Witt theorem
The Poincaré–Birkhoff–Witt theorem gives a precise description of $U(\mathfrak{g})$. This can be done in either one of two different ways: either by reference to an explicit vector basis on the Lie algebra, or in a coordinate-free fashion.
Using basis elements
One way is to suppose that the Lie algebra can be given a totally ordered basis, that is, it is the free vector space of a totally ordered set $X$. Recall that a free vector space is defined as the space of all finitely supported functions from a set $X$ to the field $K$ (finitely supported means that only finitely many values are non-zero); it can be given a basis $\{e_a : a \in X\}$ such that $e_a$ is the indicator function for $a \in X$. Let $h: \mathfrak{g} \to T(\mathfrak{g})$ be the injection into the tensor algebra; this is used to give the tensor algebra a basis as well. This is done by lifting: given some arbitrary sequence of $e_a$, one defines the extension of $h$ to be
$$h(e_a \otimes e_b \otimes \cdots \otimes e_c) = h(e_a) \otimes h(e_b) \otimes \cdots \otimes h(e_c).$$
The Poincaré–Birkhoff–Witt theorem then states that one can obtain a basis for $U(\mathfrak{g})$ from the above, by enforcing the total order of $X$ onto the algebra. That is, $U(\mathfrak{g})$ has a basis
$$e_a \otimes e_b \otimes \cdots \otimes e_c, \qquad a \le b \le \cdots \le c,$$
the ordering being that of the total order on the set $X$. The proof of the theorem involves noting that, if one starts with out-of-order basis elements, these can always be swapped by using the commutator (together with the structure constants). The hard part of the proof is establishing that the final result is unique and independent of the order in which the swaps were performed.
This basis should be easily recognized as the basis of a symmetric algebra. That is, the underlying vector spaces of $U(\mathfrak{g})$ and the symmetric algebra $S(\mathfrak{g})$ are isomorphic, and it is the PBW theorem that shows that this is so. See, however, the section on the algebra of symbols, below, for a more precise statement of the nature of the isomorphism.
It is useful, perhaps, to split the process into two steps. In the first step, one constructs the free Lie algebra: this is what one gets, if one mods out by all commutators, without specifying what the values of the commutators are. The second step is to apply the specific commutation relations from $\mathfrak{g}$. The first step is universal, and does not depend on the specific $\mathfrak{g}$. It can also be precisely defined: the basis elements are given by Hall words, a special case of which are the Lyndon words; these are explicitly constructed to behave appropriately as commutators.
Coordinate-free
One can also state the theorem in a coordinate-free fashion, avoiding the use of total orders and basis elements. This is convenient when there are difficulties in defining the basis vectors, as there can be for infinite-dimensional Lie algebras. It also gives a more natural form that is more easily extended to other kinds of algebras. This is accomplished by constructing a filtration whose limit is the universal enveloping algebra
First, a notation is needed for an ascending sequence of subspaces of the tensor algebra. Let
$$T_m = K \oplus \mathfrak{g} \oplus (\mathfrak{g} \otimes \mathfrak{g}) \oplus \cdots \oplus \mathfrak{g}^{\otimes m},$$
where
$$\mathfrak{g}^{\otimes m} = \mathfrak{g} \otimes \cdots \otimes \mathfrak{g}$$
is the $m$-times tensor product of $\mathfrak{g}$. The $T_m$ form a filtration:
$$K \subset T_1 \subset T_2 \subset \cdots \subset T(\mathfrak{g}).$$
More precisely, this is a filtered algebra, since the filtration preserves the algebraic properties of the subspaces. Note that the limit of this filtration is the tensor algebra $T(\mathfrak{g})$.
It was already established, above, that quotienting by the ideal is a natural transformation that takes one from $T(\mathfrak{g})$ to $U(\mathfrak{g})$. This also works naturally on the subspaces, and so one obtains a filtration $U_m$ whose limit is the universal enveloping algebra $U(\mathfrak{g})$.
Next, define the space
$$G_m = U_m / U_{m-1}.$$
This is the space $U_m$ modulo all of the subspaces of strictly smaller filtration degree. Note that $G_m$ is not at all the same as the leading term $\mathfrak{g}^{\otimes m}$ of the filtration, as one might naively surmise. It is not constructed through a set subtraction mechanism associated with the filtration.
Quotienting by $U_{m-1}$ has the effect of setting all Lie commutators defined in $U_m$ to zero. One can see this by observing that the commutator of a pair of elements whose products lie in $U_m$ actually gives an element in $U_{m-1}$. This is perhaps not immediately obvious: to get this result, one must repeatedly apply the commutation relations, and turn the crank. The essence of the Poincaré–Birkhoff–Witt theorem is that it is always possible to do this, and that the result is unique.
Since commutators of elements whose products are defined in $U_m$ lie in $U_{m-1}$, the quotienting that defines $G_m$ has the effect of setting all commutators to zero. What PBW states is that the commutator of elements in $G_m$ is necessarily zero. What is left are the elements that are not expressible as commutators.
In this way, one is led immediately to the symmetric algebra. This is the algebra where all commutators vanish. It can be defined as a filtration of symmetric tensor products $\operatorname{Sym}^m(\mathfrak{g})$. Its limit is the symmetric algebra $S(\mathfrak{g})$. It is constructed by appeal to the same notion of naturality as before. One starts with the same tensor algebra, and just uses a different ideal, the ideal that makes all elements commute:
$$S(\mathfrak{g}) = T(\mathfrak{g}) \big/ \langle\, a \otimes b - b \otimes a \,\rangle.$$
Thus, one can view the Poincaré–Birkhoff–Witt theorem as stating that $G = \bigoplus_m G_m$ is isomorphic to the symmetric algebra $S(\mathfrak{g})$, both as a vector space and as a commutative algebra.
The $G_m$ together also form a graded algebra, $G = \bigoplus_m G_m$; this is the associated graded algebra of the filtration.
The construction above, due to its use of quotienting, implies that $G$ is isomorphic to $S(\mathfrak{g})$. In more general settings, with loosened conditions, one finds that the corresponding map is a projection, and one then gets PBW-type theorems for the associated graded algebra of a filtered algebra. To emphasize this, the notation $\operatorname{gr} U(\mathfrak{g})$ is sometimes used for $G$, serving to remind that it is the graded algebra associated with the filtered algebra $U(\mathfrak{g})$.
Other algebras
The theorem, applied to Jordan algebras, yields the exterior algebra, rather than the symmetric algebra. In essence, the construction zeros out the anti-commutators. The resulting algebra is an enveloping algebra, but is not universal. As mentioned above, it fails to envelop the exceptional Jordan algebras.
Left-invariant differential operators
Suppose is a real Lie group with Lie algebra . Following the modern approach, we may identify with the space of left-invariant vector fields (i.e., first-order left-invariant differential operators). Specifically, if we initially think of as the tangent space to at the identity, then each vector in has a unique left-invariant extension. We then identify the vector in the tangent space with the associated left-invariant vector field. Now, the commutator (as differential operators) of two left-invariant vector fields is again a vector field and again left-invariant. We can then define the bracket operation on as the commutator on the associated left-invariant vector fields. This definition agrees with any other standard definition of the bracket structure on the Lie algebra of a Lie group.
We may then consider left-invariant differential operators of arbitrary order. Every such operator can be expressed (non-uniquely) as a linear combination of products of left-invariant vector fields. The collection of all left-invariant differential operators on $G$ forms an algebra, denoted $D(G)$. It can be shown that $D(G)$ is isomorphic to the universal enveloping algebra $U(\mathfrak{g})$.
In the case that $\mathfrak{g}$ arises as the Lie algebra of a real Lie group, one can use left-invariant differential operators to give an analytic proof of the Poincaré–Birkhoff–Witt theorem. Specifically, the algebra $D(G)$ of left-invariant differential operators is generated by elements (the left-invariant vector fields) that satisfy the commutation relations of $\mathfrak{g}$. Thus, by the universal property of the enveloping algebra, $D(G)$ is a quotient of $U(\mathfrak{g})$. Thus, if the PBW basis elements are linearly independent in $D(G)$, which one can establish analytically, they must certainly be linearly independent in $U(\mathfrak{g})$. (And, at this point, the isomorphism of $U(\mathfrak{g})$ with $D(G)$ is apparent.)
Algebra of symbols
The underlying vector space of $U(\mathfrak{g})$ may be given a new algebra structure so that $U(\mathfrak{g})$ and $S(\mathfrak{g})$ are isomorphic as associative algebras. This leads to the concept of the algebra of symbols: the space of symmetric polynomials, endowed with a product, the star product $\star$, that places the algebraic structure of the Lie algebra onto what is otherwise a standard associative algebra. That is, what the PBW theorem obscures (the commutation relations), the algebra of symbols restores into the spotlight.
The algebra is obtained by taking elements of $U(\mathfrak{g})$ and replacing each generator $X_i$ by an indeterminate, commuting variable $t_i$ to obtain the space of symmetric polynomials over the field $K$. Indeed, the correspondence is trivial: one simply substitutes the symbol $t_i$ for $X_i$. The resulting polynomial is called the symbol of the corresponding element of $U(\mathfrak{g})$. The inverse map is
$$w: S(\mathfrak{g}) \to U(\mathfrak{g}),$$
that replaces each symbol $t_i$ by $X_i$. The algebraic structure is obtained by requiring that the product $\star$ act as an isomorphism, that is, so that
$$w(p \star q) = w(p)\, w(q)$$
for polynomials $p, q$.
The primary issue with this construction is that $w(p)\,w(q)$ is not trivially, inherently a member of the ordered basis, as written, and that one must first perform a tedious reshuffling of the basis elements (applying the structure constants as needed) to obtain an element of $U(\mathfrak{g})$ in the properly ordered basis. An explicit expression for this product can be given: this is the Berezin formula. It follows essentially from the Baker–Campbell–Hausdorff formula for the product of two elements of a Lie group.
A closed form expression is given by
where
and is just in the chosen basis.
The universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit); here, the product is called the Moyal product.
Representation theory
The universal enveloping algebra preserves the representation theory: the representations of $\mathfrak{g}$ correspond in a one-to-one manner to the modules over $U(\mathfrak{g})$. In more abstract terms, the abelian category of all representations of $\mathfrak{g}$ is isomorphic to the abelian category of all left modules over $U(\mathfrak{g})$.
The representation theory of semisimple Lie algebras rests on the observation that there is an isomorphism, known as the Kronecker product:
$$U(\mathfrak{g}_1 \oplus \mathfrak{g}_2) \cong U(\mathfrak{g}_1) \otimes U(\mathfrak{g}_2)$$
for Lie algebras $\mathfrak{g}_1, \mathfrak{g}_2$. The isomorphism follows from a lifting of the embedding
$$i: \mathfrak{g}_1 \oplus \mathfrak{g}_2 \to U(\mathfrak{g}_1) \otimes U(\mathfrak{g}_2),$$
where
$$i(x_1 \oplus x_2) = i_1(x_1) \otimes 1 + 1 \otimes i_2(x_2)$$
and $i_1, i_2$ are just the canonical embeddings (with subscripts, respectively, for algebras one and two). It is straightforward to verify that this embedding lifts, given the prescription above. See, however, the discussion of the bialgebra structure in the article on tensor algebras for a review of some of the finer points of doing so: in particular, the shuffle product employed there corresponds to the Wigner-Racah coefficients, i.e. the 6j and 9j-symbols, etc.
Also important is that the universal enveloping algebra of a free Lie algebra is isomorphic to the free associative algebra.
Construction of representations typically proceeds by building the Verma modules of the highest weights.
In a typical context where $\mathfrak{g}$ is acting by infinitesimal transformations, the elements of $U(\mathfrak{g})$ act like differential operators, of all orders. (See, for example, the realization of the universal enveloping algebra as left-invariant differential operators on the associated group, as discussed above.)
Casimir operators
The center of $U(\mathfrak{g})$ can be identified with the centralizer of $\mathfrak{g}$ in $U(\mathfrak{g})$. Any element of the center must commute with all of $U(\mathfrak{g})$, and in particular with the canonical embedding of $\mathfrak{g}$ into $U(\mathfrak{g})$. Because of this, the center is directly useful for classifying representations of $\mathfrak{g}$. For a finite-dimensional semisimple Lie algebra, the Casimir operators form a distinguished basis of the center $Z(U(\mathfrak{g}))$. These may be constructed as follows.
The center corresponds to linear combinations of all elements $z$ that commute with all elements $x \in \mathfrak{g}$, that is, for which $[z, x] = 0$. That is, they are in the kernel of $\operatorname{ad}_x$. Thus, a technique is needed for computing that kernel. What we have is the action of the adjoint representation on $\mathfrak{g}$; we need it on $U(\mathfrak{g})$. The easiest route is to note that $\operatorname{ad}_x$ is a derivation, and that the space of derivations can be lifted to $T(\mathfrak{g})$ and thus to $U(\mathfrak{g})$. This implies that both of these are differential algebras.
By definition, $\delta$ is a derivation on $\mathfrak{g}$ if it obeys Leibniz's law:
$$\delta([a, b]) = [\delta(a), b] + [a, \delta(b)].$$
(When $\mathfrak{g}$ is the space of left-invariant vector fields on a group $G$, the Lie bracket is that of vector fields.) The lifting is performed by defining
$$\delta(a \otimes b \otimes \cdots \otimes c) = \delta(a) \otimes b \otimes \cdots \otimes c + a \otimes \delta(b) \otimes \cdots \otimes c + \cdots + a \otimes b \otimes \cdots \otimes \delta(c).$$
Since $\operatorname{ad}_x$ is a derivation for any $x \in \mathfrak{g}$, the above defines $\operatorname{ad}_x$ acting on $T(\mathfrak{g})$ and $U(\mathfrak{g})$.
From the PBW theorem, it is clear that all central elements are linear combinations of symmetric homogeneous polynomials in the basis elements of the Lie algebra. The Casimir invariants are the irreducible homogeneous polynomials of a given, fixed degree. That is, given a basis $X_a$, a Casimir operator of order $m$ has the form
$$C_{(m)} = \kappa^{a b \cdots c}\, X_a \otimes X_b \otimes \cdots \otimes X_c,$$
where there are $m$ terms in the tensor product, and $\kappa^{a b \cdots c}$ is a completely symmetric tensor of order $m$ belonging to the adjoint representation. That is, $\kappa$ can be (should be) thought of as an element of $\operatorname{Sym}^m \mathfrak{g}$. Recall that the adjoint representation is given directly by the structure constants, and so an explicit indexed form of the above equations can be given, in terms of the Lie algebra basis; this is originally a theorem of Israel Gel'fand. That is, from $[C_{(m)}, X_d] = 0$, it follows that
$$f_{d e}{}^{a}\, \kappa^{e b \cdots c} + f_{d e}{}^{b}\, \kappa^{a e \cdots c} + \cdots + f_{d e}{}^{c}\, \kappa^{a b \cdots e} = 0,$$
where the structure constants are given by
$$[X_a, X_b] = f_{a b}{}^{c}\, X_c.$$
As an example, the quadratic Casimir operator is
$$C_{(2)} = B^{a b}\, X_a X_b,$$
where $B^{a b}$ is the inverse matrix of the Killing form $B_{a b}$. That the Casimir operator belongs to the center follows from the fact that the Killing form is invariant under the adjoint action.
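The centrality computation for the quadratic Casimir can be sketched in indices (a standard manipulation, using only the relations above):
\begin{align*}
  [\,C_{(2)},\, X_c\,]
    &= B^{ab}\,\bigl([X_a, X_c]\,X_b + X_a\,[X_b, X_c]\bigr) \\
    &= B^{ab} f_{ac}{}^{d}\,\bigl(X_d X_b + X_b X_d\bigr) = 0,
\end{align*}
since ad-invariance of the Killing form makes $B^{ab} f_{ac}{}^{d}$ antisymmetric under the interchange of $b$ and $d$, while $X_d X_b + X_b X_d$ is symmetric in those indices.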
The center of the universal enveloping algebra of a simple Lie algebra is given in detail by the Harish-Chandra isomorphism.
Rank
The number of algebraically independent Casimir operators of a finite-dimensional semisimple Lie algebra is equal to the rank of that algebra, i.e. is equal to the rank of the Cartan–Weyl basis. This may be seen as follows. For an $n$-dimensional vector space $V$, recall that the determinant is the completely antisymmetric tensor on $V^{\otimes n}$. Given a matrix $M$, one may write the characteristic polynomial of $M$ as
$$\det(t I - M) = \sum_{k=0}^{n} p_k(M)\, t^{n-k}.$$
For an $n$-dimensional Lie algebra, that is, an algebra whose adjoint representation is $n$-dimensional, the linear operator
$$\operatorname{ad}_x: \mathfrak{g} \to \mathfrak{g}$$
implies that $\operatorname{ad}_x$ is an $n$-dimensional endomorphism, and so one has the characteristic equation
$$\det(t I - \operatorname{ad}_x) = \sum_{k=0}^{n} p_k(x)\, t^{n-k}$$
for elements $x \in \mathfrak{g}$. The non-zero roots of this characteristic polynomial (that are roots for all $x$) form the root system of the algebra. In general, there are only $n - r$ such roots, where $r$ is the rank of the algebra. This implies that the highest value of $k$ for which $p_k(x)$ is not identically zero is $k = n - r$.
The $p_k$ are homogeneous polynomials of degree $k$. This can be seen in several ways: Given a constant $c$, ad is linear, so that $\operatorname{ad}_{c x} = c \operatorname{ad}_x$. By plugging and chugging in the above, one obtains that
$$p_k(c x) = c^{k}\, p_k(x).$$
By linearity, if one expands in the basis,
$$x = x^a X_a,$$
then the polynomial has the form
$$p_k(x) = \kappa_{a b \cdots c}\, x^a x^b \cdots x^c,$$
that is, $\kappa$ is a tensor of rank $k$. By linearity and the commutativity of addition, i.e. that $x^a x^b = x^b x^a$, one concludes that this tensor must be completely symmetric. This tensor is exactly the Casimir invariant of order $k$.
The center corresponded to those elements $z$ for which $\operatorname{ad}_x(z) = 0$ for all $x$; by the above, these clearly correspond to the roots of the characteristic equation. One concludes that the roots form a space of rank $r$ and that the Casimir invariants span this space. That is, the Casimir invariants generate the center $Z(U(\mathfrak{g}))$.
Example: Rotation group SO(3)
The rotation group SO(3) is of rank one, and thus has one Casimir operator. It is three-dimensional, and thus the Casimir operator must have order (3 − 1) = 2, i.e. be quadratic. Of course, this is the Lie algebra of $\mathfrak{so}(3) \cong \mathfrak{su}(2)$. As an elementary exercise, one can compute this directly. Changing notation to $L_a$, with $L_a$ belonging to the adjoint rep, a general algebra element is $x = x^1 L_1 + x^2 L_2 + x^3 L_3$ and direct computation gives
$$\det(t I - \operatorname{ad}_x) = t^3 + t\,\bigl((x^1)^2 + (x^2)^2 + (x^3)^2\bigr).$$
The quadratic term can be read off as $\kappa_{ab} = \delta_{ab}$, and so the squared angular momentum operator for the rotation group is that Casimir operator. That is,
$$L^2 = L_1^2 + L_2^2 + L_3^2,$$
and explicit computation shows that
$$[L^2, L_a] = 0,$$
after making use of the structure constants
$$[L_a, L_b] = \epsilon_{a b}{}^{c}\, L_c.$$
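This can be checked numerically in the adjoint representation, where $(L_a)_{bc} = -\epsilon_{abc}$; the following short script is illustrative only, with names and normalization chosen for this sketch rather than taken from the source:

    import numpy as np

    # so(3) in the adjoint representation: (L_a)_{bc} = -eps_{abc}.
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0

    L = [-eps[a] for a in range(3)]

    # Verify the structure constants: [L_a, L_b] = eps_{abc} L_c.
    for a in range(3):
        for b in range(3):
            comm = L[a] @ L[b] - L[b] @ L[a]
            assert np.allclose(comm, sum(eps[a, b, c] * L[c] for c in range(3)))

    # The quadratic Casimir commutes with every generator; in this
    # (adjoint) representation it is in fact a multiple of the identity.
    C = sum(Li @ Li for Li in L)
    for a in range(3):
        assert np.allclose(C @ L[a] - L[a] @ C, 0.0)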
Example: Pseudo-differential operators
A key observation during the construction of $U(\mathfrak{g})$ above was that it is a differential algebra, by dint of the fact that any derivation on the Lie algebra can be lifted to $U(\mathfrak{g})$. Thus, one is led to a ring of pseudo-differential operators, from which one can construct Casimir invariants.
If the Lie algebra acts on a space of linear operators, such as in Fredholm theory, then one can construct Casimir invariants on the corresponding space of operators. The quadratic Casimir operator corresponds to an elliptic operator.
If the Lie algebra acts on a differentiable manifold, then each Casimir operator corresponds to a higher-order differential on the cotangent manifold, the second-order differential being the most common and most important.
If the action of the algebra is isometric, as would be the case for Riemannian or pseudo-Riemannian manifolds endowed with a metric and the symmetry groups SO(N) and SO(P, Q), respectively, one can then contract upper and lower indices (with the metric tensor) to obtain more interesting structures. For the quadratic Casimir invariant, this is the Laplacian. Quartic Casimir operators allow one to square the stress–energy tensor, giving rise to the Yang–Mills action. The Coleman–Mandula theorem restricts the form that these can take, when one considers ordinary Lie algebras. However, the Lie superalgebras are able to evade the premises of the Coleman–Mandula theorem, and can be used to mix together space and internal symmetries.
Examples in particular cases
If $\mathfrak{g} = \mathfrak{sl}_2(k)$, then it has a basis of matrices
$$e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$
which satisfy the following identities under the standard bracket: $[h, e] = 2e$, $[h, f] = -2f$, and $[e, f] = h$; this shows us that the universal enveloping algebra has the presentation
$$U(\mathfrak{sl}_2) = k\langle e, f, h \rangle \big/ \bigl( h e - e h - 2 e,\; h f - f h + 2 f,\; e f - f e - h \bigr)$$
as a non-commutative ring.
If is abelian (that is, the bracket is always ), then is commutative; and if a basis of the vector space has been chosen, then can be identified with the polynomial algebra over , with one variable per basis element.
If is the Lie algebra corresponding to the Lie group , then can be identified with the algebra of left-invariant differential operators (of all orders) on ; with lying inside it as the left-invariant vector fields as first-order differential operators.
To relate the above two cases: if is a vector space as abelian Lie algebra, the left-invariant differential operators are the constant coefficient operators, which are indeed a polynomial algebra in the partial derivatives of first order.
The center consists of the left- and right-invariant differential operators; in the case where is not commutative, the center is often not generated by first-order operators (see for example the Casimir operator of a semi-simple Lie algebra).
Another characterization in Lie group theory is of as the convolution algebra of distributions supported only at the identity element of .
The algebra of differential operators in variables with polynomial coefficients may be obtained starting with the Lie algebra of the Heisenberg group. See Weyl algebra for this; one must take a quotient, so that the central elements of the Lie algebra act as prescribed scalars.
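A brief sketch of this quotient, using standard facts about the three-dimensional Heisenberg algebra (the generator names \(p, q, z\) are chosen here for illustration): with \([p, q] = z\) and \(z\) central, imposing \(z = 1\) yields the first Weyl algebra,

\[
U(\mathfrak{h}_3)\big/(z - 1) \;\cong\; A_1 = k\langle p, q \rangle \big/ (pq - qp - 1),
\]

realized on the polynomial space \(k[x]\) by \(q \mapsto x\) (multiplication by \(x\)) and \(p \mapsto d/dx\), since \(\frac{d}{dx}(x f) - x \frac{df}{dx} = f\).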
The universal enveloping algebra of a finite-dimensional Lie algebra is a filtered quadratic algebra.
Hopf algebras and quantum groups
The construction of the group algebra for a given group is in many ways analogous to constructing the universal enveloping algebra for a given Lie algebra. Both constructions are universal and translate representation theory into module theory. Furthermore, both group algebras and universal enveloping algebras carry natural comultiplications that turn them into Hopf algebras. This is made precise in the article on the tensor algebra: the tensor algebra has a Hopf algebra structure on it, and because the Lie bracket is consistent with (obeys the consistency conditions for) that Hopf structure, it is inherited by the universal enveloping algebra.
Given a Lie group , one can construct the vector space of continuous complex-valued functions on , and turn it into a C*-algebra. This algebra has a natural Hopf algebra structure: given two functions \(f_1, f_2\), one defines multiplication as
\[ (f_1 f_2)(x) = f_1(x)\, f_2(x), \]
and comultiplication as
\[ (\Delta f)(x, y) = f(xy), \]
the counit as
\[ \varepsilon(f) = f(e) \]
(where \(e\) is the identity element of the group), and the antipode as
\[ (S f)(x) = f(x^{-1}). \]
Now, the Gelfand–Naimark theorem essentially states that every commutative Hopf algebra is isomorphic to the Hopf algebra of continuous functions on some compact topological group; the theory of compact topological groups and the theory of commutative Hopf algebras are the same. For Lie groups, this implies that is isomorphically dual to ; more precisely, it is isomorphic to a subspace of the dual space .
These ideas can then be extended to the non-commutative case. One starts by defining the quasi-triangular Hopf algebras, and then performing what is called a quantum deformation to obtain the quantum universal enveloping algebra, or quantum group, for short.
See also
Milnor–Moore theorem
Harish-Chandra homomorphism
References
Shlomo Sternberg (2004), Lie algebras, Harvard University.
Ring theory
Hopf algebras
Representation theory of Lie algebras | Universal enveloping algebra | Mathematics | 7,310 |
8,125,462 | https://en.wikipedia.org/wiki/Zoster%20vaccine | A zoster vaccine is a vaccine that reduces the incidence of herpes zoster (shingles), a disease caused by reactivation of the varicella zoster virus, which is also responsible for chickenpox. Shingles provokes a painful rash with blisters, and can be followed by chronic pain (postherpetic neuralgia), as well as other complications. Older people are more often affected, as are people with weakened immune systems (immunosuppression). Both shingles and postherpetic neuralgia can be prevented by vaccination.
Two zoster vaccines have been approved for use in people over 50 years old. Shingrix (GSK) is a recombinant subunit vaccine which has been used in many countries since 2017. Zostavax (Merck), in use since 2006, is an attenuated vaccine which consists of a larger-than-normal dose of chickenpox vaccine. Unlike Shingrix, Zostavax is not suitable for people with immunosuppression or diseases that affect the immune system. Zostavax was discontinued in the United States in November 2020.
Shingrix appears to prevent more cases of shingles than Zostavax, although side effects seem to be more frequent.
Another vaccine, known as varicella vaccine, is used to prevent diseases caused by the same virus.
Medical uses
Zoster vaccination is used to prevent shingles and its complications, including postherpetic neuralgia. It can be considered a therapeutic vaccine, given that it is used to treat a latent virus that has remained dormant in cells since chickenpox infection earlier in life. The available zoster vaccine is intended for use in people over the age of 50. It was not confirmed whether a booster dose was required, but the Advisory Committee on Immunization Practices (ACIP) in the United States recommends Shingrix for adults over the age of 50, including those who have already received Zostavax.
Shingrix
The ACIP voted that Shingrix is preferred over Zostavax for the prevention of zoster and related complications because data showed vaccine efficacy of more than 90% against shingles across all age groups. Unlike Zostavax, which is given as a single shot, Shingrix is given as two identical intramuscular doses, two to six months apart. Shingrix provides high levels of immunity for at least 7 years after vaccination, but it is possible the vaccine may provide protection for much longer.
A large randomized clinical trial showed Shingrix reduced the incidence of shingles 96.6% (relative risk reduction, RRR) in the 50–59 age group, and 91.3% (relative risk reduction, RRR) in those over age 70. The absolute decrease in risk (absolute risk reduction, ARR) of herpes zoster following immunization over three and a half years is 3.3% (3.54% down to 0.28%) while the decrease in the risk of postherpetic neuralgia is 0.3% (0.34% down to 0.06%).
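For readers comparing the two kinds of risk figures quoted above, they are related by simple arithmetic; the pooled relative reduction below is re-derived here from the stated event rates as an approximation, not an additional trial endpoint:

\[
\mathrm{ARR} = 3.54\% - 0.28\% = 3.26\% \approx 3.3\%, \qquad
\mathrm{RRR} \approx \frac{3.54\% - 0.28\%}{3.54\%} \approx 92\%,
\]

which is consistent in magnitude with the age-stratified relative reductions of 96.6% and 91.3% reported by the trial.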
Zostavax
The Zostavax vaccine (both single dose and two-dose regime) is likely effective at protecting people from herpes zoster disease for a duration of up to three years. The degree of longer term protection (beyond 4 years from the initial vaccination) is not clear. The need for re-vaccination after the first full vaccine schedule is complete remains to be confirmed.
Zostavax was shown to reduce the incidence of shingles by 51% in a study of 38,000 adults aged 60 and older who received the vaccine. The vaccine also reduced the number of cases of postherpetic neuralgia (PHN) by 67%, and reduced the severity and duration of pain and discomfort associated with shingles by 61%. The FDA originally recommended it for individuals 60 years of age or older who are not severely allergic to any of its components and who meet the following requirements:
do not have a weakened immune system due to HIV/AIDS or another disease or medications (such as steroids, radiation and chemotherapy) that affect the immune system;
do not have a history of cancer affecting the bone marrow or lymphatic system, such as leukemia or lymphoma; and
do not have active, untreated tuberculosis.
In 2006, the US Advisory Committee on Immunization Practices (ACIP) recommended that the live vaccine be given to all adults age 60 and over, including those who have had a previous episode of shingles, and those who do not recall having had chickenpox, since more than 99% of Americans ages 40 and older have had chickenpox.
Side effects
Shingrix
Temporary side effects from the Shingrix shots are likely and can be severe enough in one out of six people to affect normal daily activities for up to three days. Mild to moderate pain at the injection site is common, and some may have redness or swelling. Side effects include fatigue, muscle pain, headache, shivering, fever, and nausea. Symptoms usually resolve in two to three days. Side effects with Shingrix are greater than those with Zostavax and occur more frequently in individuals aged 50 to 69 years compared with those 70 years and older.
Zostavax
The live vaccine (Zostavax) is very safe; one to a few percent of people develop a mild form of chickenpox, often with about five or six blisters around the injection site, and without fever. The blisters are harmless and temporary. In one study 64% of the Zostavax group and 14% of the controls had some adverse reaction. However, the rates of serious adverse events were comparable between the Zostavax group (0.6%) and those receiving the placebo (0.5%). A study including children with leukaemia found that the risk of getting shingles after vaccination is much lower than the risk of getting shingles for children with natural chicken pox in their history. Data from healthy children and adults point in the same direction.
Zostavax is not used in people with compromised immune function.
Composition
Shingrix
Shingrix is a suspension for intramuscular injection consisting of a lyophilized recombinant varicella zoster virus glycoprotein E antigen that is reconstituted at the time of use with AS01B suspension as an immunological adjuvant. The antigen is a purified truncated form of the glycoprotein, expressed in Chinese hamster ovary cells. The AS01B adjuvant suspension is composed of 3-O-desacyl-4'-monophosphoryl lipid A (MPL) from Salmonella (Minnesota strain) and a saponin molecule (QS-21) purified from Quillaja saponaria (soap bark tree) extract, combined in a liposomal formulation consisting of dioleoyl phosphatidylcholine (DOPC) and cholesterol in phosphate-buffered saline solution.
Zostavax
Zostavax contains live attenuated varicella-zoster virus. It is injected subcutaneously (under the skin) in the upper arm. The live vaccine is produced using the MRC-5 line of fetal cells. This has raised religious and ethical concerns for some potential users, since that cell line was derived from an aborted fetus.
Cost effectiveness
A 2007 study found that the live vaccine is likely to be cost-effective in the US, projecting an annual savings of to million in healthcare costs with cost-effectiveness ratios ranging from to per quality-adjusted life year gained. In 2007, the live vaccine was officially recommended in the US for healthy adults aged 60 and over, but is no longer given out in the United States, given the superiority of Shingrix.
In Canada the cost of Shingrix is about for the two doses. This likely represents a more cost effective intervention than the live vaccine given its lower cost and increased effectiveness.
History
European Union
In 2006, the European Medicines Agency (EMA) issued a marketing authorization for the zoster vaccine to Sanofi Pasteur for routine vaccination in individuals aged 60 and over. In 2007, the EMA updated the marketing authorization for routine vaccination in individuals aged 50 and over.
Shingrix was approved for medical use in the European Union in March 2018, with an indication for the prevention of herpes zoster (HZ) and post-herpetic neuralgia (PHN) in adults 50 years of age or older.
United Kingdom
From 2013, the UK National Health Service (NHS) started offering shingles vaccination to elderly people. People aged either 70 or 79 on 1 September 2013, were offered the vaccine. People aged 71 to 78 on that date would only have an opportunity to have the shingles vaccine after reaching the age of 79. The original intention was for people aged between 70 and 79 to be vaccinated, but the NHS later said that the vaccination program was being staggered as it would be impractical to vaccinate everyone in their 70s in a single year.
As of 2021, vaccination against shingles is available on the NHS to people aged 70 to 79. Vaccination is with single-dose Zostavax, except for people for whom Zostavax is deemed unsuitable, for example those with a condition that affects the immune system, for whom the two-dose Shingrix vaccine is recommended. The NHS stated "The shingles vaccine is not available on the NHS to anyone aged 80 or over because it seems to be less effective in this age group". Since 2023, the shingles vaccine has been offered to healthy people turning 65.
United States
Zostavax was developed by Merck & Co. and approved and licensed by the US Food and Drug Administration (FDA) in May 2006. In 2011, the FDA approved the live vaccine for use in individuals 50 to 59 years of age. Shingrix is a zoster vaccine developed by GlaxoSmithKline that was approved in the United States in October 2017. Shingrix, which provides strong protection against shingles and PHN, was preferred over Zostavax before Zostavax was discontinued.
In June 2020, Merck discontinued the sale of Zostavax in the US. Vaccine doses already held by practitioners could still be administered up to the expiration date (none expired later than November 2020).
The US Centers for Disease Control and Prevention (CDC) recommends that healthy adults 50 years and older get two doses of Shingrix, at least two months apart. Initial clinical trials only tested a gap of less than six months between doses, but unexpected popularity and resulting shortages caused further testing to validate wider spacing of the two doses. Adults 19 years and older who are immunocompromised because of disease or therapy are also recommended to receive two doses of Shingrix.
The zoster vaccine is covered by Medicare Part D. In 2019, more than 90% of Medicare Part D vaccine spending was for the zoster vaccine. 5.8 million vaccine doses were administered to Part D beneficiaries that year at a cost of $857 million.
Research
There is emerging evidence that the shingles vaccine may protect against dementia. A 2024 study of over 200,000 older US adults found that the recombinant shingles vaccine was linked to a larger reduction in dementia compared to the live shingles vaccine. Over a six-year follow-up, those who received the recombinant vaccine had a 17% increase in time without a dementia diagnosis compared to those who received the live vaccine.
References
Further reading
External links
Zostavax Product Page U.S. Food and Drug Administration (FDA)
Shingrix Product Page U.S. Food and Drug Administration (FDA)
Chickenpox
Live vaccines
Drugs developed by Merck & Co.
Withdrawn drugs | Zoster vaccine | Chemistry | 2,504 |
2,108,721 | https://en.wikipedia.org/wiki/Gamma%20Cancri | Gamma Cancri, or γ Cancri, is a star in the northern constellation of Cancer. It is formally named Asellus Borealis, the traditional name of the system. Based on parallax measurements, it is located at a distance of approximately 181 light years from the Sun. The star is drifting further away with a radial velocity of 29 km/s. In 1910 this star was reported to be a spectroscopic binary by O. J. Lee, but is now considered a single star. Since it is near the ecliptic, it can be occulted by the Moon and, very rarely, by planets.
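As background, the quoted distance follows from the standard parallax relation; the parallax value below is inferred from the stated distance for illustration and is not taken from the article:

\[
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}, \qquad 181\ \mathrm{ly} \approx 55.5\ \mathrm{pc} \;\Longrightarrow\; p \approx 0.018'' \approx 18\ \mathrm{mas}.
\]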
Nomenclature
γ Cancri (Latinised to Gamma Cancri) is the star's Bayer designation. It bore the traditional name Asellus Borealis (Latin for "northern donkey"). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Asellus Borealis for the star on 6 November 2016 and it is now so included in the List of IAU-approved Star Names. Together with Delta Cancri, it formed the Aselli, flanking Praesepe.
In Chinese astronomy, Ghost refers to an asterism consisting of Theta Cancri, Eta Cancri, Gamma Cancri and Delta Cancri. Gamma Cancri itself is known as the third star of Ghost.
Properties
Gamma Cancri presents as a white A-type subgiant with an apparent magnitude of +4.67. The star is an estimated 171 million years old and is spinning with a projected rotational velocity of 86 km/s. It has 2.18 times the mass of the Sun and shines with a luminosity approximately 36 times greater at an effective temperature of 9108 K.
It has been included as a member of the Hyades Stream based on its distance, space motion, and likely age.
References
A-type subgiants
Hyades Stream
Cancer (constellation)
Cancri, Gamma
Durchmusterung objects
Cancri, 43
074198
042806
3449
Asellus Borealis | Gamma Cancri | Astronomy | 473 |
1,004,399 | https://en.wikipedia.org/wiki/Human%20decontamination | Human decontamination is the process of removing hazardous materials from the human body, including chemicals, radioactive substances, and infectious material.
General principle
People suspected of being contaminated are usually separated by sex, and led into a decontamination tent, trailer, or pod, where they shed their potentially contaminated clothes in a strip-down room. They then enter a wash-down room where they are showered. Finally, they enter a drying and re-robing room to be issued clean clothing, a jumpsuit, or other attire. Some more structured facilities include six rooms (strip-down, wash-down and examination rooms for each of the men's and women's sides). Some facilities, such as MODEC, and many others, are remotely operable, and function like "human car washes". Lathering with soap removes external dust that may contain radioisotopes. When lathering, care should be taken not to spread dust deposited on exposed, unclothed areas of skin to areas that were likely clean.
Mass decontamination is the decontamination of large numbers of people. The ACI World Aviation Security Standing Committee describes a decontamination process thus, specifically referring to plans for Los Angeles authorities:
Hospital decontamination
Most hospitals in the United States are prepared to handle a large influx of patients from a terrorist attack. Volunteer hospital decontamination teams are common and trained to set up showers or washing equipment, to wear personal protective equipment, and to ensure the safety of both the victims and the community during the response. From a planning perspective it must be remembered that first responders in Level A or B personal protective equipment (PPE) will have a limited working duration, typically 20 minutes to 2 hours.
Typically these teams use decontamination showers built into the hospital or tents which are set up outside in order to decontaminate individuals. Beyond terrorism incidents, common exposures may be related to factory spills, agricultural incidents, and vehicle accidents. Incidents are common in both urban and rural communities. Hospital decontamination is a component of the Hospital Incident Command System and is required in the standards set forth by the Joint Commission.
Decontamination exercises
Decontamination exercises are frequently used to test the preparedness of emergency plans and personnel.
Exercises are of three types:
Tabletop - An exercise held with responsible personnel in which a facilitator relays information about a scenario to the group. The group then discusses the actions they each would take in the given situation. There is no "live response" or use of assets. The table top is a low impact, low stress method to review emergency plans.
Functional - A functional exercise involves the agencies involved in an Emergency Operations Center, a scenario is presented and the players go through the actions they would if it were a real incident. The exercise tests the technical resources and plans of the Emergency Operations Center. There is no "live response" outside of the Emergency Operations Center.
Full Scale - A full-scale exercise is the most involved type of exercise and the most difficult to plan and execute. Full-scale exercises can vary in size from one agency or municipality to multinational exercises such as the US Government-led annual TOPOFF exercise. In a full-scale exercise, a scenario is created and acted out in a real-world manner. Responders are expected to act in accordance with established plans, just as they would in a real incident. At times, certain parts of the exercise have to be simulated due to equipment, financial, or safety reasons, which can make the scenario confusing. Full-scale exercises are often used as an opportunity to test and assess an agency's true level of preparedness.
Unified command
Collaboration among various levels of authority, and among various countries, is required to address bioterror threats, because contamination knows no boundaries. Disease and contamination do not stop at the border from one country to another. Thus organizations such as NATO, bring together member countries to practice how to contain an outbreak, setup quarantine facilities, and care for displaced persons.
Collection of personal belongings for evidence
"Dofficers" (Decontamination officers in the "doffing" or disrobing area) are often police or military personnel, ready to handle potentially unruly persons who refuse to cooperate with first responders.
For example, the U.S. Army Soldier and Biological Chemical Command suggests that:
"The entire incident is a crime scene requiring the collection of criminal evidence and suspicious victim belongings. The preservation of a proper chain of custody must be maintained for all evidence. ... patients could be suspects and their belongings may be evidence. ... Direct patients through a detailed decontamination process and deal with potentially unruly patients. ... Enforce order when persons become uncooperative when asked to remove clothing and relinquish personal items.".
Paul Rega, M.D., FACEP, and Kelly Burkholder-Allen also note, in "The ABCs of Bioterrorism" an additional advantage in decontaminating everyone found at the scene of an incident, because this will help the authorities in searching through everyone's clothes to find suspicious items:
"Removal of clothing in the decon procedure has the additional advantage of detecting weapons or a secondary device on a victim or "pseudo-victim".
Chris Seiple, in "Another Perspective on the Domestic Role of the Military in Consequence Management" suggests that the evidence gathering process of identifying contaminated people and their belongings should also include the process of video surveillance:
The identification of contaminated victims and their personal effects... Victims are also videotaped as they proceed through the decontamination line.
Video Surveillance... Videotaped documentation could later be used in the evidence processes;
Although there are the obvious privacy concerns in surveillance, one can also argue that due to the high risk nature of terrorism, such surveillance is warranted, as it is in other high risk areas like bathing complexes where surveillance is often used because of the risk of drowning. In these cases the importance of safety may often be thought to outweigh privacy concerns.
Handling uncooperative victims
One of the elements that separates a drill from a real-life situation is dealing with panicked or uncooperative victims. Security personnel should be assigned to the area for crowd control and to ensure appropriate flow of individuals in and out of the decontamination area.
In a real attack, the perpetrators may be among the victims, or some of the victims may be in possession of contraband, or of evidence that might help law enforcement in solving the crime.
Another consideration is that some of the perpetrator victims might refuse to go through decon because this would result in discovery of the contraband they may be hiding.
For example, a person with explosives strapped to his or her body, under their clothing, would likely not be so willing to take it off. Such a victim might try to escape, and need to be restrained for decontamination.
Separate male and female officers (decontamination officers) deal with potentially unruly patients, by restraining the hands using flex cuffs, and cutting off the shirt, then removing shoes and pants normally. This usually requires a couple of officers.
The Belfast Telegraph describes such a situation:
"...holds back hundreds of extras playing traumatised bomb victims. Coated in ash and wrapped up in bandages, these people are staggering around, dazed and confused, like so many shell-shocked World War I soldiers. While troops in riot gear charge forward to reinforce the cordon and use their shields and batons to beat back ... desperately appeal for calm. They ask people to file in an orderly fashion towards the decontamination units being rapidly assembled by fire fighters in inflated orange Chemical Biological Radiation Nuclear (CBRN) suits."
See also Battalion Chief Michael Farri:
They bring a law enforcement agency group with them and they have no problem if somebody needs to be restrained with handcuffs or flex cuffs or whatever to keep them from going from the hot zone to a cool zone; whereas the fire department, we are not geared to do that.... Kwame Holman: Colonel Hammes says his Marines are trained to handle uncooperative people. ... If they're really hysterical, there's some simple techniques from this program called Marine Martial Arts, that teaches various martial arts skills; there are common techniques that police also use to provide pain compliance-- no permanent damage, just enough to get your attention, and allows us to control you. If you still won't, then we can control in flex cuffs, and then we'll flex cuff decontaminate you. And if you're calm at that point, we turn you loose. If you're still not calm, then the police will be asked to give us a hand.
Internal human contamination
Radioactive contamination can enter the body through ingestion, inhalation, absorption, or injection. This will result in a committed dose of radiation.
For this reason, it is important to use personal protective equipment when working with radioactive materials. Radioactive contamination may also be ingested as the result of eating contaminated plants and animals or drinking contaminated water or milk from exposed animals. Following a major contamination incident, all potential pathways of internal exposure should be considered.
Successfully used on Harold McCluskey, chelation therapy and other treatments exist for internal radionuclide contamination.
See also
Contamination control
DeconGel
Decontamination foam
Disease surveillance
Fukushima disaster cleanup
Incident Support Unit
Mass decontamination
Radioactive contamination
References
External links
Cleanroom Technology - The International Journal of Contamination Control
Airport shows off human carwash
Annual decontamination (de)conference
RODS Laboratory
TVI Corporation (makes tents that have separate decon corridors for men and women on each side, with a central corridor for nonambulatory patients)
First Line Technology - Mass Decontamination, Personal Decontamination Systems and Equipment
Airshelter - ACD, manufacturer of mobile, rapid deployment shelters and (mass) decon systems
qüb9 Environmental - Manufacturer of container-based mobile, deployable human decontamination, decon platforms
Hygiene
Security
Safety
Civil defense
Cleanroom technology | Human decontamination | Chemistry | 2,109 |
78,749,948 | https://en.wikipedia.org/wiki/Alfv%C3%A9n%20Mach%20number | The Alfvén Mach number (also known as the Alfvén number, Alfvénic Mach number, and magnetic Mach number; or ) is a dimensionless quantity representing the ratio of the relative velocity of a fluid to the local Alfvén speed. It is used in plasma physics, where it is analogous to the Mach number but based on Alfvén waves rather than sound waves. The quantity is named for the physicists Hannes Alfvén and Ernst Mach.
Along with the sonic Mach number, the Alfvén Mach number is frequently used to characterize shock fronts and turbulence in magnetized plasmas.
\[ M_A = \frac{v}{v_A}, \]
where
\(M_A\) is the Alfvén Mach number,
\(v\) is the flow velocity, and
\(v_A\) is the Alfvén speed.
When \(M_A < 1\), the flow is referred to as sub-Alfvénic; and when \(M_A > 1\), the flow is referred to as super-Alfvénic.
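A minimal numeric sketch of this classification, assuming a pure proton plasma and the standard MHD expression \(v_A = B/\sqrt{\mu_0 \rho}\); the field, density, and flow-speed values are illustrative solar-wind-like numbers, not taken from the article:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability (H/m)
M_PROTON = 1.6726e-27       # proton mass (kg)

def alfven_speed(B, n):
    """Alfven speed v_A = B / sqrt(mu_0 * rho) for a pure proton plasma.
    B: magnetic field strength (T); n: proton number density (m^-3)."""
    rho = n * M_PROTON                  # mass density (kg/m^3)
    return B / math.sqrt(MU_0 * rho)

def alfven_mach_number(v, B, n):
    """Alfven Mach number M_A = v / v_A for flow speed v (m/s)."""
    return v / alfven_speed(B, n)

# Illustrative (assumed) values: B = 5 nT, n = 5 protons/cm^3, v = 400 km/s.
m_a = alfven_mach_number(v=400e3, B=5e-9, n=5e6)
regime = "super-Alfvenic" if m_a > 1 else "sub-Alfvenic"
print(f"M_A = {m_a:.1f} ({regime})")    # roughly M_A ~ 8: super-Alfvenic
```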
See also
Plasma parameters
Magnetic Reynolds number
Lundquist number
References
Dimensionless numbers of fluid mechanics
Plasma parameters | Alfvén Mach number | Physics | 185 |
23,991,325 | https://en.wikipedia.org/wiki/Jacobi%20form | In mathematics, a Jacobi form is an automorphic form on the Jacobi group, which is the semidirect product of the symplectic group Sp(n;R) and the Heisenberg group. The theory was first systematically studied by Martin Eichler and Don Zagier.
Definition
A Jacobi form of level 1, weight k and index m is a function \(\varphi(\tau, z)\) of two complex variables (with τ in the upper half plane) such that

\[
\varphi\!\left(\frac{a\tau + b}{c\tau + d},\, \frac{z}{c\tau + d}\right) = (c\tau + d)^k\, e^{2\pi i m c z^2/(c\tau + d)}\, \varphi(\tau, z)
\quad \text{for all } \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}),
\]

\[
\varphi(\tau,\, z + \lambda\tau + \mu) = e^{-2\pi i m (\lambda^2 \tau + 2\lambda z)}\, \varphi(\tau, z)
\quad \text{for all integers } \lambda,\ \mu,
\]

and \(\varphi\) has a Fourier expansion

\[
\varphi(\tau, z) = \sum_{n \ge 0} \sum_{r^2 \le 4nm} c(n, r)\, e^{2\pi i (n\tau + r z)}.
\]
Examples
Examples in two variables include Jacobi theta functions, the Weierstrass ℘ function, and Fourier–Jacobi coefficients of Siegel modular forms of genus 2. Examples with more than two variables include characters of some irreducible highest-weight representations of affine Kac–Moody algebras. Meromorphic Jacobi forms appear in the theory of Mock modular forms.
References
Modular forms
Theta functions | Jacobi form | Mathematics | 177 |
34,071,247 | https://en.wikipedia.org/wiki/Social%20media%20and%20psychology | Social media began in the form of generalized online communities. These online communities formed on websites like Geocities.com in 1994, Theglobe.com in 1995, and Tripod.com in 1995. Many of these early communities focused on social interaction by bringing people together through the use of chat rooms. The chat rooms encouraged users to share personal information, ideas, or even personal web pages. Later the social networking community Classmates took a different approach by simply having people link to each other by using their personal email addresses. By the late 1990s, social networking websites began to develop more advanced features to help users find and manage friends. This newer generation of social networking websites began to flourish with the emergence of SixDegrees.com in 1997, Makeoutclub in 2000, Hub Culture in 2002, and Friendster in 2002. However, the first profitable mass social networking website was the South Korean service, Cyworld. Cyworld initially launched as a blog-based website in 1999 and social networking features were added to the website in 2001. Other social networking websites emerged, like Myspace in 2002, LinkedIn in 2003, and Bebo in 2005. In 2009, the social networking website Facebook (launched in 2004) became the largest social networking website in the world. Both Instagram and Kik were launched in October 2010. Active users of Facebook increased from just a million in 2004 to over 750 million by the year 2011, making internet-based social networking both a cultural and financial phenomenon. In September 2011, Snapchat was launched and has had over 300 million monthly subscribers as of 2023.
Psychology of social networking
A social network is a social structure made up of individuals or organizations who communicate and interact with each other. Social networking sites – such as Facebook, Twitter, Instagram, Pinterest and LinkedIn – are defined as technology-enabled tools that assist users with creating and maintaining their relationships. A study found that middle schoolers reported using social media to see what their friends are doing, to post pictures, and to connect with friends. Human behavior related to social networking is influenced by major individual differences, meaning that people differ quite systematically in the quantity and quality of their social relationships. Two of the main personality traits responsible for this variability are extraversion and introversion. Extraversion refers to the tendency to be socially dominant, exert leadership, and influence others. Contrastingly, introversion refers to a person's tendency toward shyness or social phobia, or even to avoid social situations altogether, which could lead to a reduction in the number of potential contacts that person may have. These individual differences may result in different social networking outcomes. Other psychological factors related to social media and media psychology are depression, anxiety, attachment, self-identity, well-being, and the need to belong.
Neuroscience
Social media use engages and strengthens neural systems in three domains: social cognition, self-referential cognition, and social reward.
When someone posts something on social media, they think of how their audience will react, while the audience thinks of the motivations behind posting the information. Both parties are analyzing the other's thoughts and feelings, a process that relies on multiple brain networks, including the dorsomedial prefrontal cortex, bilateral temporoparietal junction, anterior temporal lobes, inferior frontal gyri, and posterior cingulate cortex. All of these systems work to help us process the social behaviors and thoughts played out on social media.
Social media requires a great deal of self-referential thought. People use social media as a platform to express their opinions and show off their past and present selves. In other words, as Bailey Parnell said in her TED Talk, we're showing off our "highlight reel". When one receives feedback from others, the individual obtains more reflected self-appraisal, which leads to comparisons of their social behaviors or "highlights" with those of other users. Self-referential thought involves activity in the medial prefrontal cortex and the posterior cingulate cortex. The brain uses these systems when thinking of oneself.
Social media also provides a constant supply of rewards that keeps users coming back for more. Whenever users receive a like or a new follower, it activates the brain's social reward system which includes the ventromedial prefrontal cortex, ventral striatum, and ventral tegmental area.
While these areas of the brain become strengthened, other parts of the brain start to weaken. Technology is encouraging multitasking, especially because of how easy it is to switch from one task to another by opening another tab or using two devices at once. The brain's hippocampus is mainly associated with long-term memory. In a study done by Russell Poldrack, a professor at UCLA, they found that "for the task learned without distraction, the hippocampus was involved. However, for the task learned with the distraction of the beeps, the hippocampus was not involved; but the striatum was, which is the brain system that underlies our ability to learn new skills." The study concludes that multitasking can cause reliance on the striatum more than the hippocampus, which can change the way we learn. The striatum is known to be connected mainly to the brain's reward system. The brain will strengthen the neurons to the striatum while it weakens the neurons to the hippocampus to make the brain more efficient. Because our brain starts to rely on the striatum more than the hippocampus, it becomes harder for us to process new information. Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, agrees: "What psychologists and brain scientists tell us about interruptions is that they have a fairly profound effect on the way we think. It becomes much harder to sustain attention, to think about one thing for a long period of time, and to think deeply when new stimuli are pouring at you all day long. I argue that the price we pay for being constantly inundated with information is a loss of our ability to be contemplative and to engage in the kind of deep thinking that requires you to concentrate on one thing."
Well-being
An important aspect that is sometimes overlooked is well-being and how it relates to social media. In an article titled Social Impact of Psychological Research on Well-Being Shared in Social Media, Pulido et al. found a 15.7% social impact in their results. They compared these results to a previous study by Pulido et al., which had found a high of 4.98%, versus 27.5% in the new study. These results show evidence of social impact shared in social media (ESISM). Across the two studies, the measured social impact rose by 22.52 percentage points in a two-year span. With such a large increase in a short time, research regarding social media's impact on well-being and psychology is needed now more than ever.
Depression
Social media has taken on a particular significance for younger generations: it is what they are born into and grow up using, and it runs much of today's society. Social media has its downfalls regarding depression and mental health, as many users compare their lives with what they see on these platforms. In the article Does Social Media Cause Depression? by the Child Mind Institute, Miller states that "in several studies, teenage and young adult users who spend the most time on Instagram, Facebook and other platforms were shown to have a substantially (from 13 to 66 percent) higher rate of reported depression than those who spent the least time". These findings suggest that heavy use of Facebook and Instagram, platforms that showcase daily lives and lifestyles, leaves users feeling less fulfilled and less satisfied, and encourages superficial self-presentation. Instead of social community, there is a perception of individuals striving for a life that is not real, whether by editing photos or making life seem perfect when it is not. This fuels depression through the weight of a constant comparison game. In "How Social Media Affects Your Teen's Mental Health: A Parent's Guide," Kathy Katella states, "According to a research study of American teens ages 12-15, those who used social media over three hours each day faced twice the risk of having negative mental health outcomes, including depression and anxiety symptoms." Teenagers are facing more and more complications every day because of the overuse of social media.
Teenagers and young adults see these idealized lifestyles and make assumptions about their own lives, questioning their values and sense of belonging, which contributes to depression. For example, Facebook and Instagram allow comments on posts and stories, which can expose users to hateful, nasty comments and bullying that can cause mental health issues.
As the internet first began to grow in popularity, researchers noted an association between increases in internet usage and decreases in offline social involvement and psychological well-being. Investigators explained these findings through the hypothesis that the internet supports poor quality relationships. In light of the recent emergence of online social networking, there has been growing concern of a possible relationship between individuals’ activities on these forums and symptoms of psychopathology, particularly depression.
Research has shown a positive correlation between time spent on social networking sites and depressive symptoms. One possible explanation for this relationship is that people use social networking sites as a method of social comparison, which leads to social comparison bias. Adolescents who used Facebook and Instagram to compare themselves with and seek reassurance from other users experienced more depressive symptoms. It is likely, though, that the effects of social comparison on social networking sites is influenced by who people are interacting with on those sites. Specifically, Instagram users who followed a higher percentage of strangers were more likely to show an association between Instagram use and depressive symptoms than were users who followed a lower percentage of strangers.
Other studies have found that social media use can potentially increase symptoms of depression in adolescents. Kleppang et al. (2021) found that adolescents who used social media or played video games for more than three hours a day experienced a higher proportion of symptoms of depression. The goal of Kleppang's study was to examine the relationship between electronic media use and symptoms of depression and to observe whether gender or platonic relationships affect said relationship. They used surveys and web-based questionnaires to gather data. The subjects, sourced from all over Norway, were adolescents in tenth grade. The questions that were presented to the participants asked them to identify any symptoms of depression they had experienced, the frequency with which they used social media, and their gender.
Research support for a relationship between online social networking and depression remains mixed. For example, some studies have found that people experiencing feelings of inferiority may share these spontaneously on social media rather than seeking face-to-face help from medical professionals. Similarly, Banjanin and colleagues (2015) found a relationship between increased internet use and depressive symptoms, but no relationship between time spent on social networking sites and depressive symptoms. Several other studies have similarly found no relationship between online social networking and depression. In fact, studies that find no particular relationship between social media use and mental health still suggest that there should be ongoing support for young people to prevent mental health damage, even though the direction of any relationship between depression and social media use is still unclear. Current research on this issue has focused on ages 13 to 18, with the outcomes of depression, anxiety, or psychological distress assessed by validated instruments.
Suicide
As found in a journal article from the American Academy of Pediatrics, cyberbullying can lead to "profound psychosocial outcomes including depression, anxiety, severe isolation, and, tragically, suicide." This points to a relationship between social networking and suicide. Cyberbullying on social media is strongly correlated with suicide among adolescents and young adults. A study by Hinduja and Patchin examining a large sample of middle school-aged adolescents found that those who experienced cyberbullying were twice as likely to attempt or die by suicide. The 2019 School Crime Supplement to the National Crime Victimization Survey (National Center for Education Statistics and Bureau of Justice) indicates that, nationwide, about 16 percent of students in grades 9–12 experienced cyberbullying by the time they reach high school or while in high school. Additionally, "social media sites that allow greater anonymity (e.g., Yik Yak, Whisper) have higher rates of cyberbullying perpetration than sites on which users are more identifiable." Many people can easily bully and harass others when their name is not attached to the statement; anonymity shields them from real-world consequences, so they feel less responsible for their actions.
Attachment
In psychology, attachment theory is a model that attempts to describe the interpersonal relationships people have throughout their lives. The four most commonly recognized styles of attachment in adults are: secure, anxious-preoccupied, dismissive-avoidant, and fearful-avoidant. With the rapid increase in social networking sites, scientists have become interested in the phenomenon of people relying on these sites for their attachment needs. Attachment style has been significantly related to the level of social media use and social orientation on Facebook. Additionally, attachment anxiety has been found to be predictive of greater feedback seeking and Facebook usage, whereas attachment avoidance was found to be predictive of less feedback seeking and usage. The study found that anxiously attached individuals more frequently comment, "like," and post. Furthermore, the authors suggest that anxious people behave more actively on social media sites because they are motivated to seek positive feedback from others. Despite their attempts to fulfill their needs, data suggests that individuals who use social media to fulfill these voids are typically disappointed and further isolate themselves by reducing their face-to-face interaction time with others.
Self-identity
One's self-identity, also commonly known as self-concept, can be defined as a collection of beliefs an individual has about himself or herself. It can also be defined as an individual's answer to "Who am I?". Social media offers a means of exploring and forming self-identity, especially for adolescents and young adults. Early adolescence has been found to be the period in which most online identity experimentation occurs, compared to other periods of development. Researchers have identified that some of the most common ways early adolescents explore identity are through self-exploration (e.g. to investigate how others react), social compensation (e.g. to overcome shyness), and social facilitation (e.g. to facilitate relationship formation). Additionally, early adolescents use the Internet more to talk to strangers and form new relationships, whereas older adolescents tend to socialize with current friends. Individuals have a high need for social affiliation but may find it hard to form social connections in the offline world, and social media may afford a sense of connection that satisfies their needs for belonging, social feedback, and social validation.
Of the various concepts comprising self-identity, self-esteem, and self-image, specifically body image, have been given much attention in regard to its relationship with social media usage. Despite the popularity of social media, the direct relationship between Internet exposure and body image has been examined in only a few studies. Individuals are known for having a tendency to compare themselves to others for their own self-evaluation, most prominently through adolescence. Social media makes it even easier for adolescents to engage in these behaviors of social comparison, allowing them to view others all over the world at any given moment. In one study looking at over 150 high school students, survey data regarding online social networking use and body image was collected. With students reporting an average of two to three hours per day online, online social media usage has been significantly related to an internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. In a more recent study that focused more specifically on Facebook usage in over 1,000 high school girls, the same association between the amount of use and body dissatisfaction was found, with Facebook users reporting significantly higher levels of body dissatisfaction than non-users. Current research findings suggest a negative relationship between self-image and social media usage for adolescents. In other words, the more an adolescent uses social media, the more likely he or she is to feel bad about themselves, more specifically regarding how they look.
Different types of social media engagement may affect self-esteem in youth differently. There are unspoken social understandings on social media that can make people come across as 'uncool' or 'desperate'; one study points out that liking and commenting on others' posts is predicted to reduce appearance self-esteem. Social media use decreases future appearance confidence, especially in young women. It has amplified the negative effects of the beauty standard that many women and young girls struggle to live up to, leaving them more negatively affected by social media and more likely to lash out using the device. According to a study done in Italy with students who were 11, 13, and 15 years old, "Girls reported higher cyber-victimization and problematic social media usage than boys (9.1% vs 6.0% and 10.2% vs 6.1%, respectively)." On TikTok, social comparison and the "fear of missing out are related to negative affect and might have detrimental effects on the usage experience and/or TikTok users' lives in general." Apps like TikTok can create an addictive social media environment that correlates negatively with self-esteem.
Narcissism and social media use
There are several personality disorders, one being narcissistic personality disorder, which has been connected with an inflated sense of self-worth and a need for excessive attention. Like many disorders, narcissism comes in varying forms: (1) grandiose: typically arrogant, with a heightened sense of entitlement and a belief that they are better than everyone and that everyone knows it; (2) malignant: similar to grandiose, but with no concern about destroying others in the process of lifting themselves up; (3) covert: arrogance mixed with highly self-absorbed tendencies, an inability to accept responsibility, and a chronic sense of being a victim of the world; and finally (4) communal: self-absorbed and needing acknowledgement for the good they do, which is typically all for show and not genuine.
There is a direct connection between narcissistic personality disorder and social media. Studies are showing a connection between narcissism and motives for social media, such as seeking admiration for content and increase following. It has been implicated that narcissists find their content to be of higher quality and therefore share more information on their social media platforms due to a feeling of superiority.
There have been many studies to date, typically using predictive analysis and surveys that require participants to self-report social media usage. It should be noted that this self-reporting directly impacts the results and relies on participants to answer truthfully.
In 2016, McCain and Campbell found that narcissism was related to greater number of posts, more time spent on social media platforms and having more friends/followers on their platforms. In 2017 Andreassen, Pallesen, and Griffiths found that narcissism may be associated with addictive use of social media.
Most of the studies find positive relationships between grandiose forms of narcissism and self-reported social media activities. However, there is still variance in the results, and continued studies investigating how narcissism relates to use of social media are needed.
Child psychology and social media
To clarify the impact even more, it is crucial to acknowledge the complex correlation between mental health issues and social media use. Primack et al. (2017) found that there is a correlation between heavy social media use and an increase in depressive symptoms in children, based on their longitudinal research. This emphasizes the need to understand the complex dynamics at work as well as the possible negative effects. The complex web of influences on mental health is influenced by several factors, including the type of content consumed, how long it is used for, and the caliber of online interactions. Understanding these intricacies emphasizes the necessity of an all-encompassing approach to awareness and research.
The multibillion-dollar advertising industry targeting youth, particularly through digital channels, raises concerns. Research links advertising exposure to unhealthy behaviors in children—consumption of low-nutrient foods, tobacco, alcohol, and indoor tanning. Children's vulnerability arises from immature critical thinking. The policy urges pediatricians to promote digital literacy, emphasizing the need for policymakers and tech companies to adopt practices fostering healthier outcomes in the digital environment and expressing concern about tracking children's digital behavior for targeted marketing.
In conclusion, the impact of social media on child psychology is a multifaceted and evolving field of study. As technology continues to advance, research and awareness must adapt to encompass the complexities of digital interactions.
The lens social media creates
A 2017 Washington Post study found that 55% of people who got plastic surgery did so to appear better in selfie pictures. Social media has created an environment in which people look at themselves through a unique lens. This lens may showcase whether the person is deemed worthy and whether they meet the requirements to fit into modern-day society. At its core, social media is a place where people compare themselves and constantly attempt to better their online appearance, as evidenced by the aforementioned Washington Post study. There is not one set lens that people use to compare themselves; rather, people can view themselves in any manner that is applicable to their lives. This is the reason that people who come from poorer backgrounds and broken families are more likely to abuse social media. According to the study Sociodemographic factors and social media use in 9-year-old children: the Generation R Study, children from poorer backgrounds or broken homes are significantly more likely to use and abuse social media. In the study, they were found to experience more negative impacts on their lives when compared to children coming from wealthier and more stable families. This is because they are using it as an escape, or because they are viewing social media through their lens and developing mental health problems when they see people who have perceived better lives than them.
Living with everyday evaluations
Because social media plays such a significant role within society, our everyday lives are filled with constant evaluations based on the feedback we receive on social media. Blake Hallinan and Jed R. Brubaker (2021) discuss the significance of the "like" button on social media platforms, such as Facebook, Instagram, and Twitter, as an online form of evaluation. They explain that the like button is more than just a positive or good status update but is now interpreted as a "currency for self-esteem and belonging" (p. 1). To understand how social media users interpret likes they receive on their accounts, the researchers conducted in-depth interviews with twenty-five self-identified artists who actively use Instagram to share their artwork. Hallinan and Brubaker explain that they chose to interview artists because artists take a lot of pride in their work and can be significantly influenced by the feedback they receive. The interview consisted of questions regarding the participants' artwork, their experience and knowledge of Instagram, and their interpretation of the network's like button. Based on the responses from the interviewees, the researchers found that some participants were unaffected by the number of likes they received on posts of their artwork. However, they did find that the artists who were deeply affected by the feedback on their Instagram posts experienced doubt about their artwork and about themselves. As a result, their personal self-esteem decreased. Overall, the researchers emphasized the impact of likes among social media users and concluded that the like button is more than just a good rating, but a form of personal approval.
The need to belong
Belongingness
Belongingness is the personal experience of being involved in a system or group. There are two major components of belongingness which are the feeling of being valued or needed in the group and fitting into the group. The sense of belongingness is said to stem from attachment theories. Neubaum and Kramer (2015) state that individuals with a greater desire to form attachments, have a stronger need for belonging in a group.
Roy Baumeister and Mark Leary discussed the need to belong theory in a paper in 1995. They discuss the strong effects of belongingness and stated that humans have a "basic desire to form social attachments." Without social interactions, we are deprived of emotions and are prone to more illness, physical and psychological, in the future. In 2010, Judith Gere and Geoff MacDonald found inconsistencies in the research done on this topic and reported updated findings. Research still supported that lack of social interactions lead to negative outcomes in the future. When these needs were not met, an individual's daily life seemed to be negatively affected. However, questions about an individual's interpersonal problems, such as sensitivity and self-regulation, still seem to be unknown. In today's world, social media may be the outlet in which the need to belong theory is fulfilled for individuals.
Perceived social closeness
Social media platforms such as Facebook, Twitter, etc. are updated daily to include details of people's personal lives and what they are doing. This in turn gives the perception of being close to people without actually speaking with them. Individuals contribute to social media by 'liking' posts, commenting, updating statuses, tweeting, posting photos, videos and more.
Sixty Facebook users were recruited in a study by Neubaum and Kramer (2015) to take part in a series of questionnaires, spend ten minutes on Facebook, and then complete post-Facebook perception and emotional status questionnaires. These individuals perceived more social closeness on Facebook, which led to maintaining relationships. Individuals with a higher need to belong also relied on Facebook, but through more private messages. This allowed these individuals to belong in a one-on-one setting or in a more personal way with a group of members who are more significant to them. Active Facebook users, individuals who posted and contributed to their newsfeed, had a greater sense of social closeness, whereas passive Facebook users, who only viewed posts and did not contribute to the newsfeed, had a lesser sense of social closeness. These findings indicate that social closeness and belonging on social media depend on the individual's own interactions and usage style.
Group membership
In a study conducted by Cohen & Lancaster (2014), 451 individuals completed an online survey. The results suggested that using social media while watching television made individuals feel as though they were watching the shows in a group setting. Emotional reactions to a show, prompted by particular scenes, by its characters, and by commentary on the show as a whole, were found across social media platforms under the show's hashtags. In this way, social media enhanced people's social interactions much as face-to-face co-viewing of television would. Individuals with a high need to belong can use social media to participate in social interactions regularly and on a broader scale (Cohen & Lancaster, 2014). Social media also has a tendency to make people go viral overnight, and not always in their best interest: in one case, a woman from Pakistan who became a meme overnight was abandoned by her community. The virtual and the real are closely intertwined, and this has direct, and in some cases devastating, effects on people's relationships and their sense of belonging to their groups.
See also
Instagram's impact on people
References
External links
Digital media use and mental health
Technology in society
Social media | Social media and psychology | Technology | 5,718 |
35,804,887 | https://en.wikipedia.org/wiki/International%20X-ray%20Observatory | The International X-ray Observatory (IXO) is a cancelled X-ray telescope that was to be launched in 2021 as a joint effort by NASA, the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA). In May 2008, ESA and NASA established a coordination group involving all three agencies, with the intent of exploring a joint mission merging the ongoing XEUS and Constellation-X Observatory (Con-X) projects, and this led to the start of a joint study for IXO. NASA was forced to cancel the observatory due to budget constraints in fiscal year 2012. ESA, however, decided to continue the mission concept on its own, developing the Advanced Telescope for High Energy Astrophysics as part of its Cosmic Vision program.
Science with IXO
X-ray observations are crucial for understanding the structure and evolution of stars, galaxies, and the Universe as a whole. X-ray images reveal hot spots in the Universe – regions where particles have been energized or raised to very high temperatures by strong magnetic fields, violent explosions, and intense gravitational forces. X-ray sources in the sky are also associated with different phases of stellar evolution, such as supernova remnants, neutron stars, and black holes.
IXO would have explored the X-ray Universe and addressed the following fundamental and timely questions in astrophysics:
What happens close to a black hole?
How did supermassive black holes grow?
How do large scale structures form?
What is the connection between these processes?
To address these science questions, IXO would have traced orbits close to the event horizon of black holes, measured black hole spin for several hundred active galactic nuclei (AGN), used spectroscopy to characterize outflows and the environment of AGN during their peak activity, searched for supermassive black holes out to redshift z = 10, mapped bulk motions and turbulence in galaxy clusters, found the missing baryons in the cosmic web using background quasars, and observed the process of cosmic feedback whereby black holes inject energy on galactic and intergalactic scales.
This would have allowed astronomers to better understand the history and evolution of matter and energy, visible and dark, as well as their interplay during the formation of the largest structures.
Closer to home, IXO observations would have constrained the equation of state of neutron stars, measured the spin demographics of black holes, traced when and how the chemical elements were created and dispersed into outer space, and much more.
To achieve these science goals, IXO required an extremely large collecting area combined with good angular resolution, in order to offer unmatched sensitivity for the study of the high-redshift Universe and for high-precision spectroscopy of bright X-ray sources.
The large collecting area is required because, in astronomy, telescopes gather light and produce images by collecting and counting photons. The number of photons collected sets the limit on what we can learn about the size, energy, or mass of a detected object. More photons mean better images and better spectra, and therefore better prospects for understanding cosmic processes.
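To make the photon-counting argument concrete, here is a minimal sketch in Python of how collecting area translates into counts and statistical quality. It assumes Poisson statistics (signal-to-noise ratio ≈ √N) and uses an illustrative source flux and exposure time; none of the numbers below are IXO requirements except the ~3 m² mirror area mentioned in this article.

import math

def expected_counts(flux_ph_cm2_s, area_m2, exposure_s):
    # Expected photons = flux (photons/cm^2/s) x effective area (cm^2) x time (s).
    return flux_ph_cm2_s * (area_m2 * 1e4) * exposure_s

def poisson_snr(counts):
    # For a Poisson-limited measurement, SNR = N / sqrt(N) = sqrt(N).
    return math.sqrt(counts)

# Hypothetical faint source: 1e-6 photons/cm^2/s, 3 m^2 of collecting area,
# observed for 100 kiloseconds.
n = expected_counts(1e-6, 3.0, 1e5)
print(f"{n:.0f} photons, SNR ~ {poisson_snr(n):.0f}")  # ~3000 photons, SNR ~ 55

Doubling the collecting area doubles the counts but improves the signal-to-noise ratio only by √2, which helps explain why the mission aimed for order-of-magnitude gains in effective area rather than incremental ones.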
IXO configuration
The heart of the IXO mission was a single large X-ray mirror with up to 3 square meters of collecting area and 5 arcsec angular resolution, achieved with an extendable optical bench giving a 20 m focal length.
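As a quick consistency check on these figures, the angular resolution and the focal length together set the physical size of the image blur in the focal plane:

s = f × θ = 20 m × (5 / 206265) rad ≈ 0.48 mm,

where 206265 is the number of arcseconds in a radian. A 5 arcsec blur at a 20 m focal length therefore spans roughly half a millimetre on the detector, a scale that pixelated silicon detectors can sample comfortably.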
Optics
Key features of the IXO mirror design are a single mirror assembly (the Flight Mirror Assembly, FMA), optimized to minimize mass while maximizing the collecting area, and an extendible optical bench.
Unlike visible light, X-rays cannot be focused at normal incidence, since the X-ray beam would be absorbed by the mirror. Instead, IXO's mirrors, like those of all prior X-ray telescopes, would have used grazing incidence, reflecting photons at a very shallow angle. As a result, X-ray telescopes consist of nested cylindrical shells, with the inner surface acting as the reflecting surface. Because the goal is to collect as many photons as possible, IXO's mirror would have been more than 3 m in diameter.
As the grazing angle is inversely proportional to photon energy, higher-energy X-rays require smaller grazing angles (less than 2°) to be focused. This implies longer focal lengths as the photon energy increases, making X-ray telescopes difficult to build when focusing of photons with energies above a few keV is desired. For that reason IXO featured an extendible optical bench offering a focal length of 20 m, selected as a reasonable balance between the scientific need for large photon-collecting capability at the higher energy ranges and engineering constraints. Since no payload fairing is large enough to hold a 20-meter-long observatory, IXO had a deployable metering structure between the spacecraft bus and the instrument module.
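The geometry behind this trade-off can be sketched in a few lines of Python. The sketch assumes the standard Wolter-I approximation that rays are deflected by about four times the grazing angle, so f ≈ r / tan(4α) for a shell of radius r, and it models the usable grazing angle as inversely proportional to photon energy with an illustrative constant of 1° at 1 keV; neither constant is an IXO design value.

import math

def focal_length_m(shell_radius_m, grazing_angle_deg):
    # Wolter-I approximation: two reflections deflect the ray by ~4x the grazing angle.
    return shell_radius_m / math.tan(math.radians(4.0 * grazing_angle_deg))

def usable_grazing_angle_deg(energy_keV, k_deg_keV=1.0):
    # Toy model: reflectivity cuts off at an angle that scales as 1/E.
    return k_deg_keV / energy_keV

# Outermost shell of a ~3.3 m diameter mirror assembly.
r = 1.65
for energy in (1.0, 7.0):
    alpha = usable_grazing_angle_deg(energy)
    print(f"{energy} keV: grazing angle {alpha:.2f} deg, "
          f"focal length {focal_length_m(r, alpha):.0f} m")
# 1 keV: ~24 m focal length; 7 keV: ~165 m for the same shell radius.

In a real design the hard X-rays are therefore focused by the small-radius inner shells, while a long focal length (here, 20 m) keeps the higher-energy response usable without requiring an impossibly long spacecraft.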
Instruments
IXO's scientific goals required gathering many kinds of information using different techniques, such as spectroscopy, timing, imaging, and polarimetry. IXO would therefore have carried a range of detectors providing complementary spectroscopy, imaging, timing, and polarimetry data on cosmic X-ray sources, to help disentangle the physical processes occurring in them.
Two high-resolution spectrometers, a microcalorimeter (the X-ray Microcalorimeter Spectrometer, XMS, also called a cryogenic imaging spectrograph, CIS) and a set of dispersive gratings (XGS), would have provided high-quality spectra over the 0.1–10 keV bandpass, where most astrophysically abundant ions have X-ray lines.
The detailed spectroscopy from these instruments would have enabled high-energy astronomers to learn about the temperature, composition, and velocity of plasmas in the Universe. Moreover, the study of specific X-ray spectral features probes the conditions of matter in extreme gravitational fields, such as around supermassive black holes. Flux variability adds a further dimension by linking the emission to the size of the emitting region and its evolution over time; the high timing resolution spectrometer (HTRS) on IXO would have allowed these types of studies over a broad energy range and with high sensitivity.
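To see what this spectroscopy enables, consider the smallest Doppler shift a microcalorimeter could resolve. Assuming, for illustration, an energy resolution of about 2.5 eV at the 6 keV iron line (a commonly quoted goal for instruments of this class, not a figure stated in this article):

v ≈ c × ΔE/E = 300,000 km/s × (2.5 eV / 6000 eV) ≈ 125 km/s,

which is precise enough to measure the bulk motions and turbulence in galaxy cluster plasmas mentioned above.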
To extend our view of the high-energy Universe into the hard X-rays and find the most obscured black holes, the wide field imaging and hard X-ray imaging detectors (WFI/HXI) together would have imaged the sky over an 18 arcmin field of view (FOV) with moderate energy resolution (<150 eV up to 6 keV and <1 keV FWHM at 40 keV).
IXO's imaging X-ray polarimeter would have been a powerful tool to explore sources such as neutron stars and black holes, measuring their properties and how they impact their surroundings.
The detectors would have been located on two instrument platforms, the Moveable Instrument Platform (MIP) and the Fixed Instrument Platform (FIP). The Moveable Instrument Platform was needed because an X-ray beam cannot be folded and redirected to different instruments the way visible light can; instead, the instruments themselves must be moved into the beam. IXO's MIP would have held a wide field imaging and hard X-ray imaging detector, a high-spectral-resolution imaging spectrometer, a high timing resolution spectrometer, and a polarimeter, and rotated each of them into the focus in turn.
The X-ray Grating Spectrometer would have been located on the Fixed Instrument Platform. This wavelength-dispersive spectrometer would have provided high spectral resolution in the soft X-ray band, and could have been used to determine the properties of the warm-hot intergalactic medium, of outflows from active galactic nuclei, and of plasma emission from stellar coronae.
A fraction of the beam from the mirror would have been dispersed onto a charge-coupled device (CCD) camera, which would have operated simultaneously with the observing MIP instrument, collecting the instrumental background that an instrument records when it is not in the focal position. To keep radiation from the telescope itself from swamping the very faint astronomical signals, the telescope and all its instruments must be kept cold. The IXO instrument platform would therefore have featured a large shield to block light from the Sun, Earth, and Moon, which would otherwise heat the telescope and interfere with the observations.
IXO's optics and instrumentation would have provided up to a 100-fold increase in effective area for high-resolution spectroscopy, deep spectral imaging, and microsecond spectroscopic timing with high count-rate capability. The improvement of IXO over current X-ray missions is equivalent to the transition from the 200-inch Palomar telescope to a 22 m telescope, combined with a shift from spectral-band imaging to an integral field spectrograph.
Launch
The planned launch date for IXO was 2021, going into an L2 orbit on either an Ariane 5 or an Atlas V.
Science operations
IXO was designed to operate for a minimum of 5 years, with a goal of 10 years, so IXO science operations were anticipated to last from 2021 to 2030.
See also
Advanced Telescope for High Energy Astrophysics
Arcus
Beyond Einstein program
Chandra X-ray Observatory
Constellation-X Observatory
Great Observatories program
Laser Interferometer Space Antenna
Lynx X-ray Observatory
XEUS
References
External links
NASA International X-Ray Observatory Mission Site
ESA International X-Ray Observatory Mission Site
Catherine Cesarsky, International Year of Astronomy 2009 The Universe: Yours to Discover
ESA – XEUS overview
ESA – Observations: Seeing in X-ray wavelengths
Astro2010 Decadal Survey
2000-2010 Decadal Survey
Space telescopes
X-ray telescopes
Cancelled spacecraft | International X-ray Observatory | Astronomy | 1,946 |
68,754,922 | https://en.wikipedia.org/wiki/List%20of%20fungi%20of%20South%20Africa%20%E2%80%93%20C | This is an alphabetical list of the fungal taxa as recorded from South Africa. Currently accepted names have been appended.
Ca
Genus: Caeoma Link 1809, accepted as Puccinia Pers., (1794) (Rusts)
Caeoma clematidis Thüm. 1876
Caeoma heteromorphae Doidge 1927
Caeoma lichtensteiniae Doidge 1941,
Caeoma nervisequum Thüm. 1877 accepted as Milesina nervisequa (Thüm.) P. Syd. & Syd., (1915)
Caeoma (Aecidium) resinaecola Rud. accepted as Caeoma resinicola F. Rudolphi, (1829)
Caeoma ricini Schltdl. 1826
Family: Caliciaceae Chevall. 1826
Genus: Calicium Pers. 1794 (Lichens)
Calicium turbinatum Pers. 1797 accepted as Sphinctrina turbinata Fr., (1825)
Genus: Callopisma De Not. 1847 accepted as Caloplaca Th. Fr., (1860)
Callopisma capense A. Massal. 1861 as capensis
Callopisma cinnabarinum (Ach.) Müll. Arg. 1881 accepted as Brownliella cinnabarina (Ach.) S.Y. Kondr., Kärnefelt, A. Thell, Elix, Jung Kim, A.S. Kondr. & Hur, (2013)
Callopisma cinnabarinum var. opacum Müll. Arg. 1881 accepted as Brownliella cinnabarina (Ach.) S.Y. Kondr., Kärnefelt, A. Thell, Elix, Jung Kim, A.S. Kondr. & Hur, (2013)
Callopisma crocodes A. Massal. 1861
Callopisma flavum Müll. Arg. 1893
Callopisma haematodes A. Massal. 1861;
Callopisma teicophilum A. Massal. 1861
Callopisma zambesicum Müll. Arg. 1893
Genus: Calloriopsis Syd. & P. Syd. 1917
Calloriopsis gelatinosa (Sacc.) Syd. & P. Syd. 1917, accepted as Calloriopsis herpotricha (Berk.) R. Sant., (1951)
Genus: Calocera (Fr.) Fr. 1828
Calocera cornea (Batsch) Fr. 1827
Genus: Calonectria De Not. 1867
Calonectria capensis Doidge 1924, accepted as Byssocallis capensis (Doidge) Rossman, (1979)
Calonectria cephalosporii Hansf. 1946, accepted as Dimerosporiella cephalosporii (Hansf.) Rossman & Samuels, in Rossman, Samuels, Rogerson & Lowen, (1999)
Calonectria decora (Wallr.) Sacc. 1878, accepted as Flammocladiella decora (Wallr.) Lechat & J. Fourn., (2018)
Calonectria leucorrhodina (Mont.) Speg. 1881 accepted as Dimerosporiella leucorrhodina (Mont.) Rossman & Samuels, (1999)
Calonectria meliolae Hansf. 1941
Calonectria rigidiuscula (Berk. & Broome) Sacc., (1878), accepted as Albonectria rigidiuscula (Berk. & Broome) Rossman & Samuels (1999)
Calonectria ugandae Hansf. 1941
Genus: Calopeltis Syd. 1925, accepted as Cyclotheca Theiss., (1914)
Calopeltis jasmini Doidge 1942, accepted as Cyclotheca jasmini (Doidge) Arx, (1962)
Genus: Caloplaca Th. Fr. 1860, (Lichens)
Caloplaca amphidoxa (Stizenb.) Zahlbr. 1930
Caloplaca aurantiaca (Lightf.) Th. Fr. 1861, accepted as Blastenia ferruginea (Huds.) A. Massal.,(1852)
Caloplaca aurantiaca f. fulva Zahlbr.
Caloplaca benguellensis (Nyl.) Zahlbr. 1930
Caloplaca calviniana Zahlbr. 1932
Caloplaca capensis (A. Massal.) Zahlbr. 1930
Caloplaca cardinalis Zahlbr. 1932
Caloplaca carphinea var. scoriophila (A. Massal.) J. Steiner 1911 accepted as Usnochroma scoriophilum (A. Massal.) Søchting, Arup & Frödén, (2013)
Caloplaca cataschista Zahlbr. 1936 [as catachista]
Caloplaca cerina (Hedw.) Th. Fr. 1861
Caloplaca cinnabarina (Ach.) Zahlbr. 1908, accepted as Brownliella cinnabarina (Ach.) S.Y. Kondr., Kärnefelt, A. Thell, Elix, Jung Kim, A.S. Kondr. & Hur, (2013)
Caloplaca cinnabarina var. opaca (Müll. Arg.) Zahlbr. 1930, accepted as Brownliella cinnabarina (Ach.) S.Y. Kondr., Kärnefelt, A. Thell, Elix, Jung Kim, A.S. Kondr. & Hur, (2013)
Caloplaca cinnabarina var. pallidior (Stizenb.) Jatta 1910
Caloplaca cinnabarina var. subfulgescens (Nyl.) Zahlbr. 1930
Caloplaca cinnabariza (Nyl.) Zahlbr. 1930
Caloplaca coccinella (Stizenb.) Zahlbr. 1930
Caloplaca conchiliata Zahlbr. 1932
Caloplaca crocodes (A. Massal.) Zahlbr. 1930
Caloplaca delectans Zahlbr. 1932
Caloplaca diploplaca Zahlbr. 1932
Caloplaca diploplaca var. gracilior Zahlbr. 1932
Caloplaca discolorella Zahlbr. 1932
Caloplaca ecklonii (A. Massal.) Zahlbr. 1931
Caloplaca effusa G. Merr. ex Van der Byl 1931
Caloplaca elegans (Link) Th.Fr. (1871), accepted as Xanthoria elegans (Link) Th.Fr. (1860)
Caloplaca elegantissima (Nyl.) Zahlbr. 1931 accepted as Stellarangia elegantissima (Nyl.) Frödén, Arup & Søchting, (2013)
Caloplaca eudoxa (Müll. Arg.) Zahlbr. 1931 accepted as Teloschistopsis eudoxa (Müll. Arg.) Frödén, Arup & Søchting, (2013)
Caloplaca euelpis (Stizenb.) Zahlbr. 1930
Caloplaca fecunda Zahlbr. 1932
Caloplaca ferruginea (Huds.) Th. Fr. 1861, accepted as Blastenia ferruginea (Huds.) A. Massal., (1852)
Caloplaca ferruginea f. erysibe Jatta 1900
Caloplaca ferrugineovirens (Vain.) Zahlbr. 1932
Caloplaca flava (Müll. Arg.) Zahlbr. 1930
Caloplaca flavorubens (Nyl.) Zahlbr. 1931;
Caloplaca flavovirescens (Wulfen) Dalla Torre & Sarnth. 1902, accepted as Gyalolechia flavovirescens (Wulfen) Søchting, Frödén & Arup, (2013)
Caloplaca gracilescens Zahlbr. 1932
Caloplaca granulosa Jatta. possibly (Müll. Arg.) J. Steiner 1894, accepted as Flavoplaca granulosa (Müll. Arg.) Arup, Frödén & Søchting, (2013)
Caloplaca haematodes (A. Massal.) Zahlbr. 1930
Caloplaca hampeana (A. Massal.) Zahlbr. 1930
Caloplaca lamprocheila Flagey.
Caloplaca leptopisma (Nyl.) Zahlbr. 1931
Caloplaca leucoxantha (Müll. Arg.) Zahlbr. 1931
Caloplaca massula (Stizenb.) Zahlbr. 1930
Caloplaca mastophora (Vain.) Zahlbr. 1932
Caloplaca mastophora var. flavorubescens Vain
Caloplaca murorum Th. Fr. 1871, accepted as Calogaya saxicola (Hoffm.) Vondrák,(2016)
Caloplaca neethlingii Zahlbr. 1936
Caloplaca nideri var. pruinosula Zahlbr. 1926
Caloplaca nolens Zahlbr. 1932
Caloplaca odoardii (Bagl.) Zahlbr. 1930
Caloplaca orichalcea (Stizenb.) Zahlbr. 1931
Caloplaca pallidior (Müll. Arg.) Zahlbr. 1931
Caloplaca pallidior f. opaca (Müll. Arg.) Zahlbr. 1932 accepted as Brownliella cinnabarina (Ach.) S.Y. Kondr., Kärnefelt, A. Thell, Elix, Jung Kim, A.S. Kondr. & Hur, (2013)
Caloplaca perexigua Zahlbr. 1932
Caloplaca phlogina (Ach.) Flagey 1886, accepted as Scythioria phlogina (Ach.) S.Y. Kondr., Kärnefelt, Elix, A. Thell & Hur, (2014)
Caloplaca placidia (A. Massal.) J. Steiner 1916
Caloplaca platyna Zahlbr. 1936
Caloplaca poliotera (Nyl.) J. Steiner 1897
Caloplaca punicea (Müll. Arg.) Jatta 1910
Caloplaca pyracea Th.Fr. , possibly (Ach.) Zwackh 1862, accepted as Athallia pyracea (Ach.) Arup, Frödén & Søchting, (2013)
Caloplaca pyracea f. pyrithromoides (Nyl.) H. Olivier 1909
Caloplaca pyracea f. subpicta Zahlbr. 1931
Caloplaca pyracea var. pyrithroma (Ach.) Flagey 1888, accepted as Athallia pyracea (Ach.) Arup, Frödén & Søchting, (2013)
Caloplaca pyropoecila (Nyl.) Zahlbr. 1931
Caloplaca pyropoeciloides Zahlbr. 1932
Caloplaca regalis (Vain.) Zahlbr. 1931 accepted as Polycauliona regalis (Vain.) Hue, (1908)
Caloplaca regalis f. prostrata (Hue) Zahlbr. 1931 accepted as Polycauliona prostrata (Hue) C.W. Dodge, (1973)
Caloplaca sophodes (Vain.) Zahlbr. 1932
Caloplaca subcerina (Nyl.) Zahlbr. 1924
Caloplaca subseptata Zahlbr. 1932
Caloplaca subsoluta (Nyl.) Zahlbr. 1931 accepted as Squamulea subsoluta (Nyl.) Arup, Søchting & Frödén, (2013)
Caloplaca subpunicolor Zahlbr.*
Caloplaca sympageella (Vain.) Zahlbr. 1932
Caloplaca tegularis (Ehrh.) Sandst. 1912
Caloplaca teicophila (A. Massal.) Zahlbr. 1931
Caloplaca theloschistoides Zahlbr. 1921 accepted as Polycauliona theloschistoides (Zahlbr.) C.W. Dodge, (1971)
Caloplaca zambesica (Müll. Arg.) Zahlbr. 1931
Family: Caloplacaceae Zahlbr. 1908
Genus: Calosphaeria Tul. & C. Tul. 1863
Calosphaeria cylindrica (Kalchbr. & Cooke) Sacc. 1882, accepted as Peroneutypa cylindrica (Kalchbr. & Cooke.) Berl., (1902)
Calosphaeria princeps Tul. & C. Tul. 1863,
Genus: Calospora
Calospora arausiaca (Fabre) Sacc. 1883, accepted as Coryneum arausiacum (Fabre) Senan., Maharachch. & K.D. Hyde [as 'arausiaca'], (2017)
Calospora bottomleyae Doidge 1941,
Genus: Calothyrium Theiss. 1912, accepted as Asterinella Theiss., (1912)
Calothyrium psychotriae Doidge 1922, accepted as Schiffnerula psychotriae (Doidge) S. Hughes, (1987)
Genus: Calvatia Fr. 1849
Calvatia caelata (Bull.) Morgan (1890), accepted as Bovistella utriformis (Bull.) Demoulin & Rebriev, (2017)
Calvatia candida (Rostk.) Hollós 1902
Calvatia fontanesii Lloyd*
Calvatia gigantea (Batsch) Lloyd 1904
Calvatia incerta Bottomley 1948
Calvatia lepidophora Lloyd possibly (Ellis & Everh.) Coker & Couch 1928;
Calvatia lilacina (Mont. & Berk.) Henn. 1904
Calvatia macrogemmae Lloyd 1923
Calvatia olivacea (Cooke & Massee) Lloyd 1905
Calvatia pachyderma (Peck) Morgan 1890, accepted as Langermannia pachyderma (Peck) Kreisel, (1962)
Calvatia saccata (Vahl) Morgan 1890, accepted as Lycoperdon excipuliforme (Scop.) Pers., (1801)
Genus: Campanella Henn. 1895,
Campanella buettneri Henn. [as büttneri], (1895)
Campanella cucullata (Fr.) Lloyd 1919, accepted as Campanella junghuhnii (Mont.) Singer, (1945)
Genus: Campbellia Cooke & Massee 1890, accepted as Gyrodon Opat., (1836)
Campbellia africana Cooke & Massee 1890, accepted as Gyrodon africanus (Cooke & Massee) Singer, (1951)
Genus: Candelaria A. Massal. 1852 (Lichens)
Candelaria concolor (Dicks.) Arnold 1879
Candelaria fibrosa (Fr.) Müll. Arg. 1887
Candelaria stellata (Tuck.) Müll. Arg. 1887, accepted as Coccocarpia stellata Tuck., (1862)
Genus: Candelariella Müll. Arg. 1894(Lichens)
Candelariella elaeophaea (Nyl.) Zahlbr. 1928
Candelariella glaucolivescens (Nyl.) Zahlbr. 1928
Candelariella vitellina (Hoffm.) Müll. Arg. 1894
Candelariella vitellina f. athallina (Wedd.) Zahlbr. 1928
Genus: Candida Berkhout 1923(Yeasts)
Candida albicans (C.P. Robin) Berkhout, (1923) recorded as Candida bethaliensis (Pijper) C.W. Dodge, 1935 and Candida triadis (Langeron & Talice) Langeron & Guerra 1938,
Candida bethaliensis (Pijper) C.W. Dodge 1935 accepted as Candida albicans (C.P. Robin) Berkhout, (1923)
Candida krusei Basgal. possibly (Castell.) Berkhout 1923, accepted as Issatchenkia orientalis Kudryavtsev, (1960)
Candida triadis (Langeron & Talice) Langeron & Guerra 1938, accepted as Candida albicans (C.P. Robin) Berkhout, (1923)
Genus: Cantharellus Adans. ex Fr. 1821
Cantharellus capensis Berk. 1844, accepted as Campanella capensis (Berk.) D.A. Reid, (1975)
Cantharellus cinnabarinus (Schwein.) Schwein. 1832
Cantharellus foliolum Kalchbr. 1881
Cantharellus leucophaeus (Pers.) Nouel 1831, accepted as Faerberia carbonaria (Alb. & Schwein.) Pouzar, (1981)
Family: Capnodiaceae Höhn. ex Theiss. 1916
Genus: Capnodium Mont. 1849,
Capnodium australe Mont. 1849
Capnodium citricola McAlpine [as citricolum], (1896)
Capnodium citri Berk. & Desm., in Berkeley, (1849)
Capnodium fuligo Berk. & Desm. 1849, accepted as Microxyphiella fuligo (Berk. & Desm.) Speg., (1918)
Capnodium salicinum Mont. 1849, accepted as Capnodium citri Berk. & Desm., (1849)
Genus: Castellania C.W. Dodge 1935, accepted as Candida Berkhout, (1923)
Castellania balcanica (Castell. & Chalm.) C.W. Dodge 1935, accepted as Issatchenkia orientalis Kudryavtsev, (1960)
Castellania linguae-pilosae (Lucet) C.W. Dodge 1935, accepted as Candida tropicalis (Castell.) Berkhout, (1923)
Castellania pseudolondinensis (Castell. & Chalm.) C.W. Dodge 1935, accepted as Candida albicans (C.P. Robin) Berkhout, (1923)
Castellania pseudotropicalis (Castell.) C.W. Dodge 1935, accepted as Kluyveromyces marxianus (E.C. Hansen) Van der Walt, (1971)
Castellania sp.
Genus: Catacauma Theiss. & Syd. 1914,accepted as Phyllachora Nitschke ex Fuckel, (1870)
Catacauma goyazense (Henn.) Theiss. & Syd. 1915, accepted as Phyllachora goyazensis Henn., (1895)
Catacauma grammicum (Henn.) Theiss. & Syd. 1915, accepted as Phyllachora grammica Henn., (1907)
Catacauma peglerae Doidge 1921, accepted as Phyllachora peglerae (Doidge) Doidge, (1942)
Catacauma pterocarpi (Syd. & P. Syd.) Syd. 1915, accepted as Phyllachora pterocarpi Syd. & P. Syd., (1912)
Catacauma punctum (Cooke) Theiss. & Syd. 1917 [as puncta] accepted as Phyllachora puncta (Cooke) Cooke, (1885)
Catacauma schotiae Doidge 1922, [as schotii], accepted as Phyllachora schotiae (Doidge) Doidge, (1942)
Genus: Catalechia *
Catalechia africana Müll.Arg.*
Genus: Catastoma Morgan 1892, accepted as Disciseda Czern. (1845)
Catastoma anomalum (Cooke & Massee) Lloyd 1905, accepted as Disciseda anomala (Cooke & Massee) G. Cunn., (1927)
Catastoma castaneum Lloyd*
Catastoma circumscissum Morgan possibly Berk. & M.A. Curtis 1892
Catastoma duthiei Lloyd*
Catastoma juglandiforme Lloyd , [as juglandiformis], possibly (Berk. ex Massee) Lohwag 1930, accepted as Disciseda juglandiformis (Berk. ex Massee) Hollós, (1902)
Catastoma magnum Lloyd*
Catastoma pedicellatum Morgan 1892, accepted as Disciseda pedicellata (Morgan) Hollós, (1902)
Catastoma zeyheri Lloyd*
Genus: Catillaria A. Massal. 1852(Lichens)
Catillaria chalybeia (Borrer) A. Massal. 1852
Catillaria finckei Zahlbr. 1921
Catillaria intermixta (Nyl.) Arnold 1870, accepted as Megalaria intermixta (Nyl.) Kalb, (2007)
Catillaria intermixta f. cyanocentra Zahlbr.
Catillaria lenticularis (Ach.) Th. Fr. 1874,
Catillaria lenticularis f. chloropoliza (Nyl.) Boistel 1903, accepted as Catillaria chalybeia (Borrer) A. Massal., (1852)
Catillaria lutea*
Catillaria melampepla (Tuck.) Zahlbr. 1926
Catillaria mortualis (Stizenb.) Zahlbr. 1926
Catillaria nigroclavata Schuler., possibly (Nyl.) J. Steiner 1898
Catillaria opacata (Stizenb.) Zahlbr. 1926
Catillaria rhyparoleuca A. Massal. 1861
Catillaria stellenboschiana Vain. 1926
Catillaria stictella (Stirt.) Zahlbr. 1926
Catillaria subfuscata (Nyl.) Zahlbr. 1926
Ce
Genus: Celidium Tul. 1852, accepted as Arthonia (1806)
Celidium stictarum (De Not.) Tul. 1852, accepted as Plectocarpon lichenum (Sommerf.) D. Hawksw.,(1984)
Genus: Cenangium Fr. 1818
Cenangium pelidnum (Kalchbr. & Cooke) Sacc. 1889
Genus: Cephalosporiopsis Peyronel 1916,
Cephalosporiopsis parasitica Hansf. 1943
Genus: Cephalosporium accepted as Acremonium Link (1809)
Cephalosporium sacchari E.J. Butler & Hafiz Khan, (1913), accepted as Fusarium sacchari (E.J. Butler & Hafiz Khan) W. Gams, (1971)
Cephalosporium sp.
Genus: Cephalothecium Corda 1838, accepted as Trichothecium Link, (1809)
Cephalothecium roseum (Pers.) Corda (1838), accepted as Trichothecium roseum (Pers.) Link (1809)
Genus: Cephatelium *
Cephatelium macowanianum Syd.*
Genus: Ceratium Alb. & Schwein. 1805, accepted as Ceratiomyxa J. Schröt., (1897) (Protozoa/slime moulds)
Ceratium hydnoides (Jacq.) Alb. & Schwein. 1805
Ceratium arbuscula Berk. & Broome 1873, accepted as Ceratiomyxa fruticulosa T. Macbr., (1899)
Ceratium sphaeroideum Kalchbr. & Cooke (1880), accepted as Beniowskia sphaeroidea (Kalchbr. & Cooke) E.W. Mason, (1928)
Genus: Ceratosphaeria Niessl 1876
Ceratosphaeria crinigera (Cooke) Sacc. 1883, accepted as Lentomitella crinigera (Cooke) Réblová, (2006)
Genus: Ceratostoma
Ceratostoma cylindrica Kalchbr. & Cooke 1880, accepted as Peroneutypa cylindrica (Kalchbr. & Cooke.) Berl., (1902)
Genus: Ceratostomella Sacc. 1878,
Ceratostomella paradoxa Dade 1928, accepted as Ceratocystis paradoxa (Dade) C. Moreau, (1952)
Ceratostomella pilifera (Fr.) G. Winter 1885, accepted as Ceratocystis pilifera (Fr.) C. Moreau, (1952)
Genus: Cercoseptoria Petr. 1925, accepted as Pseudocercospora Speg., (1910)
Cercoseptoria egenula Syd. 1935, accepted as Pseudocercospora egenula (Syd.) U. Braun & Crous, (2003)
Genus: Cercospora Fresen. ex Fuckel 1863,
Cercospora apii Fresen. 1863,
Cercospora apii var. pastinacae Sacc. 1886, accepted as Passalora pastinacae (Sacc.) U. Braun, (1992)
Cercospora arachidicola Hori 1917,
Cercospora argyrolobii Chupp & Doidge 1948, accepted as Pseudocercospora argyrolobii (Chupp & Doidge) Deighton, (1976)
Cercospora bauhiniae Syd. & P. Syd. 1914, accepted as Pseudocercospora bauhiniae (Syd. & P. Syd.) Deighton, (1976)
Cercospora beticola Sacc. 1876,
Cercospora bolleana (Thüm.) Speg., (1879), accepted as Mycosphaerella bolleana B.B.Higgins, (1920)(?)
Cercospora byliana Syd. 1924, accepted as Pseudocercospora byliana (Syd.) J.M. Yen, (1980)
Cercospora caffra Syd. & P. Syd. 1914, accepted as Stigmina knoxdaviesii Crous & U. Braun, (1996)
Cercospora canescens Ellis & G. Martin 1882
Cercospora capensis (Thüm.) Sacc. 1886, accepted as Pleurophragmium capense (Thüm.) S. Hughes, (1958)
Cercospora carotae Solh. possibly (Pass.) Kazn. & Siemaszko 1929
Cercospora caryae Chupp & Doidge 1948
Cercospora cassinopsidis G. Winter 1885
Cercospora cichorii Davis 1919
Cercospora circumscissa Sacc. (1878), accepted as Pruniphilomyces circumscissus (Sacc.) Crous & Bulgakov, (2020)
Cercospora clerodendri I. Miyake 1913, accepted as Pseudocercospora clerodendri (I. Miyake) Deighton, (1976)
Cercospora clutiae Kalchbr. & Cooke [as cluytiae],(1880) accepted as Pseudocercospora clutiae (Kalchbr. & Cooke) Deighton [as cluytiae], (1976)
Cercospora coffeicola Berk. & Cooke 1881
Cercospora columnaris Ellis & Everh. 1894, accepted as Pseudocercospora griseola (Sacc.) Crous & U. Braun, (2006)
Cercospora commelinae Kalchbr. & Cooke [as commelynae], (1880)
Cercospora corchori Sawada 1919
Cercospora cruenta Sacc., (1880), accepted as Mycosphaerella cruenta (Sacc.) Latham, (1934)
Cercospora curtisiae Chupp & Doidge 1948, accepted as Pseudocercospora curtisiae (Chupp & Doidge) Crous & U. Braun, (1996)
Cercospora delicatissima Kalchbr. & Cooke 1880 accepted as Passalora delicatissima (Kalchbr. & Cooke) U. Braun & Crous,(2003)
Cercospora demetrioniana G. Winter 1884 [as demetrionana]
Cercospora dissotidis Chupp & Doidge 1948, accepted as Pseudocercospora dissotidis (Chupp & Doidge) Crous & U. Braun, (1996)
Cercospora dovyalidis Chupp & Doidge 1948, accepted as Pseudocercospora dovyalidis (Chupp & Doidge) Deighton, (1976)
Cercospora egenula (Syd.) Chupp & Doidge 1948, accepted as Pseudocercospora egenula (Syd.) U. Braun & Crous, (2003)
Cercospora faureae Chupp & Doidge 1948, accepted as Podosporiella faureae (Chupp & Doidge) M.B. Ellis, (1976)
Cercospora fici Heald & F.A. Wolf 1911 accepted as Pseudocercospora fici (Heald & F.A. Wolf) X.J. Liu & Y.L. Guo, (1991)
Cercospora fukushiana (Matsuura) W. Yamam. 1934
Cercospora fusca F.V. Rand 1914
Cercospora fusimaculans G.F. Atk. 1892, accepted as Catenulocercospora fusimaculans (G.F. Atk.) C. Nakash., Videira & Crous, (2017)
Cercospora gossypina Cooke (1883), accepted as Mycosphaerella gossypina (Cooke) Everh.
Cercospora haemanthi Kalchbr. & Cooke 1880
Cercospora grandissima Rangel 1915
Cercospora guliana Sacc. 1913
Cercospora halleriae Chupp & Doidge 1948, accepted as Pseudocercospora halleriae (Chupp & Doidge) Deighton, (1976)
Cercospora henningsii Allesch. (1895) accepted as Clarohilum henningsii (Allesch.) Videira & Crous, (2017)
Cercospora heteromalla Syd. 1924, accepted as Pseudocercospora heteromalla (Syd.) Deighton, (1987)
Cercospora insulana Chupp possibly Sacc. 1915,
Cercospora juglandis Kellerm. & Swingle 1889, accepted as Pseudocercospora juglandis (Kellerm. & Swingle) U. Braun & Crous, (2003)
Cercospora jussiaeae G.F. Atk. 1892, [as jussieuae], accepted as Pseudocercospora jussiaeae (G.F. Atk.) Deighton, (1976)
Cercospora kiggelariae Syd. 1924, accepted as Pseudocercospora kiggelariae (Syd.) Crous & U. Braun, (1994)
Cercospora koepkei W. Krüger 1890, [as kopkei] accepted as Passalora koepkei (W. Krüger) U. Braun & Crous, (2003)
Cercospora latimaculans Wakef. 1918
Cercospora leoni Săvul. & Rayss 1935
Cercospora leonotidis Cooke 1879, accepted as Passalora leonotidis (Cooke) U. Braun & Crous, (2003)
Cercospora liebenbergii Syd. 1935, accepted as Cercostigmina liebenbergii (Syd.) Crous & U. Braun, (1996)
Cercospora longipes E.J. Butler 1906,
Cercospora malayensis F. Stevens & Solheim 1931, as mayalensis
Cercospora melaena Syd. 1924, accepted as Pseudocercospora melaena (Syd.) Deighton, (1976)
Cercospora melanochaeta Ellis & Everh. 1894, accepted as Passalora melanochaeta (Ellis & Everh.) U. Braun, (1999)
Cercospora momordicae McRae 1929
Cercospora musae Massee 1914
Cercospora musae Zimm. 1902 accepted as Pseudocercospora musae (Zimm.) Deighton, (1976)
Cercospora myrti Erikss. 1885,
Cercospora myrticola Speg. 1886, accepted as Pseudocercospora myrticola (Speg.) Deighton, (1976)
Cercospora nicotianae Ellis & Everh. 1893,
Cercospora oblecta Syd. 1935, accepted as Pseudocercospora oblecta (Syd.) Crous & U. Braun, (2008)
Cercospora occidentalis Cooke 1878, accepted as Passalora occidentalis (Cooke) U. Braun, (2000)
Cercospora oliniae Verwoerd & Dippen. 1930, accepted as Pseudocercospora oliniae (Verwoerd & Dippen.) Crous & U. Braun, (1996)
Cercospora omphacodes Ellis & Holw. 1885, accepted as Passalora omphacodes (Ellis & Holw.) Crous & U. Braun, (1996)
Cercospora pachycarpi Chupp & Doidge 1948, accepted as Passalora pachycarpi (Chupp & Doidge) Crous & U. Braun, (1996)
Cercospora pareirae Speg. 1910, accepted as Pseudocercospora pareirae (Speg.) Crous & U. Braun, (1996)
Cercospora pastinacae (Sacc.) Peck 1912, accepted as Passalora pastinacae (Sacc.) U. Braun, (1992)
Cercospora persicariae W. Yamam. 1934, accepted as Pseudocercospora persicariae (W. Yamam.) Deighton, (1976)
Cercospora personata (Berk. & M.A. Curtis) Ellis, (1885), accepted as Nothopassalora personata (Berk. & M.A. Curtis) U. Braun, C. Nakash., Videira & Crous, (2017)
Cercospora phaeocarpa Mitter 1937, accepted as Scolecostigmina phaeocarpa (Mitter) U. Braun, (1999)
Cercospora pouzolziae Syd. 1935, accepted as Pseudocercospora pouzolziae (Syd.) Y.L. Guo & X.J. Liu, (1992)
Cercospora pretoriensis Chupp & Doidge 1948
Cercospora protearum Cooke 1883, accepted as Pseudocercospora protearum (Cooke) U. Braun & Crous, (2002)
Cercospora protearum var. leucadendri Cooke 1883, accepted as Pseudocercospora leucadendri (Cooke) U. Braun & Crous, (2012)
Cercospora protearum var. leucospermi Cooke 1883
Cercospora punctiformis Sacc. & Roum. 1881
Cercospora purpurea-cincta Nel.*
Cercospora resedae Fuckel 1866,
Cercospora rhoicissi Syd. & P. Syd. 1912, accepted as Pseudocercospora rhoicissi (Syd. & P. Syd.) Deighton, (1976)
Cercospora riachueli Speg. 1880, accepted as Pseudocercospora riachueli (Speg.) Deighton, (1976)
Cercospora richardiicola G.F. Atk. [as richardiaecola], (1892)
Cercospora ricinella Sacc. & Berl. 1885,
Cercospora rubrotincta Ellis & Everh. 1887,
Cercospora scitula Syd. 1935, accepted as Pseudocercospora scitula (Syd.) Deighton, (1976)
Cercospora sesami Zimm. 1904,
Cercospora solani-melongenae Chupp 1948,
Cercospora solani-melongenae Hori.*
Cercospora sorghi Ellis & Everh. 1887,
Cercospora sphaeroidea Speg. 1880, accepted as Phaeoisariopsis sphaeroidea (Speg.) L.G. Br. & Morgan-Jones, (1976)
Cercospora stizolobii Syd. & P. Syd. 1913, accepted as Pseudocercospora stizolobii (Syd. & P. Syd.) Deighton, (1976)
Cercospora transvaalensis Syd. 1935, accepted as Pseudocercospora transvaalensis (Syd.) Deighton, (1976)
Cercospora tremae Chupp. possibly Cercospora trematis (F. Stevens & Solheim) Chupp, in Chardón & Toro, (1934), accepted as Passalora trematis (F. Stevens & Solheim) U. Braun & Crous, (2003)
Cercospora vaginae W. Krüger, (1896),accepted as Passalora vaginae (W. Krüger) U. Braun & Crous, (2003)
Cercospora violae Sacc. 1876,
Cercospora viticola (Ces.) Sacc. [as viticolum], (1886), accepted as Pseudocercospora vitis (Lév.) Speg., (1910)
Cercospora vitis Sacc. (1881), accepted as Pseudocercospora vitis (Lév.) Speg., (1910)
Cercospora withaniae Syd. & P. Syd. 1912, accepted as Pseudocercospora withaniae (Syd. & P. Syd.) Deighton, (1976)
Cercospora ziziphi Petch 1909, [as zizphyi], accepted as Pseudocercospora ziziphi (Petch) Crous & U. Braun [as zizyphi], (1996)
Cercospora sp.
Genus: Cercosporella Sacc. 1880
Cercosporella brassicae (Fautrey & Roum.) Höhn. 1924, accepted as Neopseudocercosporella capsellae (Ellis & Everh.) Videira & Crous, (2016)
Cercosporella delicatissima (Kalchbr. & Cooke) Chupp 1948, accepted as Passalora delicatissima (Kalchbr. & Cooke) U. Braun & Crous, (2003)
Cercosporella ekebergiae Syd. & P. Syd. 1914, accepted as Phaeophloeosporella ekebergiae (Syd. & P. Syd.) Crous & B. Sutton, (1997)
Cercosporella gossypii Speg. 1886, accepted as Ramularia gossypii (Speg.) Cif., (1962)
Cercosporella herpotrichoides Fron 1912, [as herpotrichioides], accepted as Oculimacula yallundae (Wallwork & Spooner) Crous & W. Gams, (2003)
Genus: Cercosporina Speg. 1910, accepted as Cercospora Fresen. ex Fuckel, (1863)
Cercosporina ricinella (Sacc. & Berl.) Speg. 1910, accepted as Cercospora ricinella Sacc. & Berl., (1885)
Genus: Cerebella Ces. 1851, accepted as Epicoccum Link, (1816)
Cerebella cynodontis Syd. & P. Syd. 1912
Cerebella sp.
Genus: Cerotelium Arthur 1906
Cerotelium fici (Castagne) Arthur 1917
Cerotelium gossypii (Lagerh.) Arthur, (1917), accepted as Phakopsora desmium (Berk. & Broome) Cummins, (1945)
Genus: Cetraria Ach. 1803
Cetraria aculeata (Schreb.) Fr. 1826
Genus: Ceuthospora Fr. 1825, accepted as Phacidium Fr., (1815)
Ceuthospora foliicola (Lib.) Cooke 1879, accepted as Phacidium foliicola (Lib.) W.J. Li & K.D. Hyde, (2020)
Ceuthospora oleae Kalchbr. & Cooke 1880
Ch
Genus: Chaetodimerina Hansf. 1946 accepted as Rizalia Syd. & P. Syd., (1914)
Chaetodimerina schiffnerulae Hansf. 1946, accepted as Rizalia schiffnerulae (Hansf.) E. Müll., (1962)
Genus: Chaetomella Fuckel 1870
Chaetomella artemisiae Cooke 1882
Chaetomella tritici Tehon & E.Y. Daniels 1925
Genus: Chaetominum, probably Chaetomium Kunze 1817
Chaetomium chartarum Ehrenb. 1818, [as Chaetominum chartarum] accepted as Chaetomium globosum Kunze, (1817)
Chaetomium elatum Kunze 1818, as Chaetominum elatum
Chaetomium funicola Cooke 1873, as Chaetominum funicolum accepted as Dichotomopilus funicola (Cooke) X.Wei Wang & Samson, (2016)
Chaetomium globosum Kunze 1817, [as Chaetominum globosum]
Chaetomium indicum Corda 1840, [as Chaetominum indicum] accepted as Dichotomopilus indicus (Corda) X.Wei Wang & Samson, (2016)
Genus: Chaetopeltopsis Theiss. 1913, accepted as Chaetothyrina Theiss., (1913)
Chaetopeltopsis sp.
Genus: Chaetosphaeria Tul. & C. Tul. 1863
Chaetosphaeria insectivora Hansf. 1946, accepted as Koordersiella insectivora (Hansf.) D. Hawksw. & O.E. Erikss., (1987)
Genus: Chaetostigmella Syd. & P. Syd. 1917, accepted as Dimerium (Sacc. & P. Syd.) McAlpine, (1903)
Chaetostigmella asterinicola Doidge*
Chaetostigmella capensis (Doidge) Toro 1934, accepted as Chaetothyrium capense (Doidge) Hansf., (1950)
Family: Chaetothyriaceae Hansf. ex M.E. Barr 1979
Genus: Chaetothyrium Speg. 1888
Chaetothyrium capense (Doidge) Hansf. 1950,
Chaetothyrium syzygii Hansf. 1946,
Chaetothyrium transvaalensis v.d.Byl.*
Genus: Cheilymenia Boud. 1885,
Cheilymenia coprinaria (Cooke) Boud. 1907
Cheilymenia pulcherrima (P. Crouan & H. Crouan) Boud. 1907
Genus: Chiodecton Ach. 1814
Chiodecton capense (A. Massal.) Zahlbr. 1923, accepted as Chiodecton colensoi (A. Massal.) Müll. Arg., (1894)
Chiodecton direnium Nyl.*
Chiodecton galactinum Zahlbr. 1932
Chiodecton natalense Nyl. 1869
Chiodecton sanguineum f. roseocinctum (Fr.) Vain. 1890, [as f. rosaceocinctum], accepted as Herpothallon roseocinctum (Fr.) Aptroot, Lücking & G. Thor, (2009)
Chiodecton subnanum Vain. 1930
Chiodecton vanderbylii Zahlbr. 1932
Chiodecton venosum (Pers.) Zahlbr. 1905, accepted as Enterographa crassa (DC.) Fée, (1825)
Family: Chiodectonaceae Zahlbr. 1905
Genus: Chlamydopus Speg. 1898,
Chlamydopus meyenianus (Klotzsch) Lloyd 1903,
Genus: Chloridium Link 1809, accepted as Chaetosphaeria Tul. & C. Tul.,(1863)
Chloridium meliolae Hansf. 1946, accepted as Ramichloridium meliolae (Hansf.) de Hoog, (1977)
Genus: Chlorociboria Seaver 1936,
Chlorociboria aeruginosa (Oeder) Seaver (1958)
Genus: Chlorodothis Clem. 1909, accepted as Tomasellia A. Massal., (1856)
Chlorodothis lahmii (Müll. Arg.) Clem. 1909
Genus: Chlorosplenium Fr. 1849
Chlorosplenium aeruginosum (Oeder) De Not. 1863, accepted as Chlorociboria aeruginosa (Oeder) Seaver, (1936)
Family: Choanephoraceae J. Schröt. 1897
Genus: Chondrioderma Rostaf. 1873, accepted as Diderma Pers., (1794) Protozoa
Chondrioderma difforme (Pers.) Rostaf. 1873, accepted as Didymium difforme (Pers.) Gray, (1821)
Chondrioderma subdictyospermum Rostaf. 1876 [as subdictyosperum]; Protozoa
Genus: Chondromyces Berk. & M.A. Curtis 1874
Chondromyces aurantiacus Thaxter.*
Genus: Chroolepus C. Agardh 1824, accepted as Cystocoleus Thwaites, (1849)
Chroolepus afrum Massal. possibly Müll. Arg. 1861
Genus: Chrysomyces Theiss. & Syd. 1917, accepted as Perisporiopsis Henn., (1904)
Chrysomyces brachystegiae (Henn.) Theiss. & Syd. 1917, [as brachtystegiae], accepted as Perisporiopsis brachystegiae (Henn.) Arx, (1962)
Family: Chrysothricaceae*
Order Chytridiales Cohn 1879
Ci
Genus: Cicinnobella Henn. 1904, accepted as Perisporiopsis Henn., (1904)
Cicinnobella sp.
Genus: Cicinnobolus Ehrenb. 1853, accepted as Ampelomyces Ces. ex Schltdl., (1852)
Cicinnobolus cesatii de Bary, 1870, accepted as Ampelomyces quisqualis Ces., 1852
Genus: Cienkowskia Rostaf. 1873, accepted as Willkommlangea Kuntze, (1891)
Cienkowskia reticulata (Alb. & Schwein.) Rostaf. 1875, accepted as Willkommlangea reticulata (Alb. & Schwein.) Kuntze, (1891)
Genus: Ciliciopodium Corda 1831
Ciliciopodium caespitosum (Welw. & Curr.) Sacc. 1886
Genus: Cintractia Cornu 1883
Cintractia axicola (Berk.) Cornu 1883
Cintractia capensis (Reess) Cif. 1931, accepted as Bauerago capensis (Reess) Vánky, (1999)
Cintractia caricicola Henn. 1895
Cintractia crus-galli (Tracy & Earle) Magnus 1896, accepted as Ustilago crus-galli Tracy & Earle, (1895)
Cintractia leucoderma (Berk.) Henn. 1895, accepted as Leucocintractia leucoderma (Berk.) M. Piepenbr., (2000)
Cintractia melinidis Zundel [as melinis], (1938)
Cintractia piluliformis (Berk.) Henn. 1898, accepted as Heterotolyposporium piluliforme (Berk.) Vánky, (1997)
Cintractia sorghi-vulgaris (Tul. & C.Tul.) G.P.Clinton (1897), accepted as Sporisorium sorghi Ehrenb. ex Link (1825)
Cintractia togoensis Henn. 1905, accepted as Cintractia limitata G.P. Clinton, (1904)
Genus: Circinella Tiegh. & G. Le Monn. 1873
Circinella sydowii Lendn. 1913, accepted as Circinella muscae (Sorokīn) Berl. & De Toni, (1888)
Cl
Genus: Cladia Nyl. 1870,
Cladia aggregata (Sw.) Nyl. 1870
Family: Cladochytriaceae J. Schröt. 1897
Genus: Cladoderris Pers. ex Berk. 1842, accepted as Cymatoderma Jungh., (1840)
Cladoderris australica Berk. 1888, accepted as Cymatoderma elegans Jungh., (1840)
Cladoderris elegans (Jungh.) Fr., (1849) accepted as Cymatoderma elegans Jungh. 1840
Cladoderris funalis Henn. 1905, accepted as Pterygellus funalis (Henn.) D.A. Reid, (1976)
Cladoderris infundibuliformis (Klotzsch) Fr. 1845, accepted as Cymatoderma infundibuliforme (Klotzsch) Boidin, (1959)
Cladoderris spongiosa Fr. 1845, accepted as Cymatoderma elegans Jungh., (1840)
Cladoderris spongiosa var. subsessilis Fr. 1849, accepted as Cymatoderma elegans Jungh., (1840)
Cladoderris thwaitesii Berk. & Broome 1873, accepted as Stereopsis radicans (Berk.) D.A. Reid, (1965)
Family: Cladoniaceae Zenker 1827,
Genus: Cladonia P. Browne 1756,
Cladonia aggregata Ach. possibly (Sw.) Spreng. 1827, accepted as Cladia aggregata (Sw.) Nyl., (1870)
Cladonia bacillaris Nyl. possibly (Ach.) Genth 1835
Cladonia bacillaris f. pityropoda Nyl. ex Cromb. 1894, accepted as Cladonia macilenta Hoffm., [1795]
Cladonia caespiticia Floerke. possibly (Pers.) P. Gaertn., B. Mey & Scherb. 1802
Cladonia centrophora Müll. Arg. 1887
Cladonia chlorophaea (Flörke ex Sommerf.) Spreng. 1827
Cladonia chordalis Ach. possibly (Flörke) Nyl.
Cladonia didyma var. muscigena (Eschw.) Vain. 1887
Cladonia didyma var. muscigena f. subulata Sandst.*
Cladonia fimbriata (L.) Fr. 1831
Cladonia fimbriata f. abortiva (Flörke) Harm. 1896
Cladonia fimbriata var. balfourii (Cromb.) Vain. 1894, accepted as Cladonia subradiata (Vain.) Sandst., (1922)
Cladonia fimbriata var. chlorophaeoides (Vain.) C.W. Dodge 1950;
Cladonia fimbriata var. chondroidea Vain. 1894
Cladonia fimbriata var. chondroidea f. balfourii*
Cladonia fimbriata var. chondroidea f. chlorophaeoides Wain.*
Cladonia fimbriata var. chondroidea f. subradiata Wain.*
Cladonia fimbriata var. coniocraea Wain. possibly (Flörke) Nyl. 1858, accepted as Cladonia coniocraea (Flörke) Spreng., (1827)
Cladonia fimbriata var. fibula Stizenb. possibly (Hoffm.) Nyl. 1861
Cladonia fimbriata var. nemoxyna (Ach.) Coem. ex Vain. 1894, accepted as Cladonia rei Schaer., (1823)
Cladonia fimbriata var. nemoxyna f. fibula Wain.*
Cladonia fimbriata var. ochrochlora Wain. possibly (Flörke) Schaer. 1833, accepted as Cladonia ochrochlora Flörke, (1827)
Cladonia fimbriata var. radiata Coem. possibly (Schreb.) Cromb. 1831, accepted as Cladonia subulata (L.) Weber ex F.H. Wigg., (1780)
Cladonia fimbriata var. radiata f. nemoxyna Flotow.*
Cladonia fimbriata var. simplex (Weiss) Flot. ex Vain. 1894
Cladonia fimbriata var. subcornuta Nyl. ex Arnold 1875, accepted as Cladonia subulata (L.) Weber ex F.H. Wigg., (1780)
Cladonia fimbriata var. subradiata Vain. 1894,
Cladonia fimbriata var. subulata (L.) Vain. 1894, accepted as Cladonia subulata (L.) Weber ex F.H. Wigg., (1780)
Cladonia fimbriata var. subulata f. abortiva Harm.*
Cladonia fimbriata var. subulata f. chordalis Ach.*
Cladonia fimbriata var. subulata f. subcornuta Zahlbr.*
Cladonia fimbriata var. tubaeformis Ach. possibly (Hoffm.) Fr. 1831
Cladonia flabelliformis f. tenella (Müll. Arg.) Zahlbr. 1926
Cladonia flabelliformis var. tenella Müll. Arg. 1891
Cladonia floerkeana Sommerf. possibly (Fr.) Flörke 1828
Cladonia furcata Schrad. possibly (Huds.) Baumg. 1790
Cladonia gorgonina var. subrangiferina (Nyl.) Vain. 1887, accepted as Cladia aggregata (Sw.) Nyl., (1870)
Cladonia macilenta Hoffm. 1796
Cladonia macilenta var. corticata (Vain.) Doidge 1950
Cladonia multiformis Merrill f. subascypha Evans.*
Cladonia neglecta (Flörke) Spreng. 1827, accepted as Cladonia pyxidata (L.) Hoffm., (1796) [1795]
Cladonia ochrochlora Flörke 1827
Cladonia pityrea (Flörke) Fr. 1826, accepted as Cladonia ramulosa (With.) J.R. Laundon, (1984)
Cladonia pityrea f. scyphifera (Delise) Vain. 1894, accepted as Cladonia ramulosa (With.) J.R. Laundon, (1984)
Cladonia pityrea var. subareolata Vain. 1894
Cladonia pityrea var. zwackii Vain. 1894 f. scyphifera Vain.*
Cladonia pungens Floerke possibly (Ach.) Gray 1821, accepted as Cladonia pertricosa Kremp., (1881) [1880]
Cladonia pungens f. foliolosa Nyl. possibly (Flörke) Leight. 1879
Cladonia pycnoclada (Pers.) Nyl. 1867,
Cladonia pycnoclada f. exalbescens (Vain.) Petr. 1948, accepted as Cladonia confusa R. Sant., (1942)
Cladonia pyxidata Fr. possibly (L.) Hoffm. 1796
Cladonia pyxidata f. staphylea Nyl. possibly (Ach.) Harm. 1896
Cladonia pyxidata var. chlorophaea (Flörke ex Sommerf.) Flörke 1894, accepted as Cladonia chlorophaea (Flörke ex Sommerf.) Spreng., (1827)
Cladonia pyxidata var. chlorophaea f. staphylea Harm.*
Cladonia pyxidata var. pocillum (Ach.) Flot. ex Vain. 1894, accepted as Cladonia pocillum (Ach.) O.J. Rich., (1877)
Cladonia rangiferina (L.) Weber 1780,
Cladonia rangiformis var. foliosa Flörke ex Vain. 1887
Cladonia rangiformis var. pungens (Ach.) Vain. 1887, accepted as Cladonia pertricosa Kremp., (1881) [1880]
Cladonia squamosa Hoffm. 1796
Cladonia subcornuta Stizenb. possibly (Nyl. ex Arnold) Cromb. 1880, accepted as Cladonia subulata (L.) Weber ex F.H. Wigg., (1780)
Cladonia sylvatica (L.) Hoffm. 1796, accepted as Cladonia portentosa (Dufour) Coem., (1865)
Cladonia verticillata (Hoffm.) Ach. 1799, accepted as Cladonia cervicornis (Ach.) Flot., (1849)
Cladonia sp.
Genus: Cladosporium Link 1816
Cladosporium aphidis Thüm. 1877
Cladosporium asteromatoides Sacc. 1885
Cladosporium baccae Verwoerd & Dippen. 1930
Cladosporium berkheyae Syd. & P. Syd. 1914, accepted as Passalora berkheyae (Syd. & P. Syd.) U. Braun & Crous, (2003)
Cladosporium carpophilum Thüm. (1877), accepted as Venturia carpophila E.E.Fisher (1961)
Cladosporium citri possibly Massee 1899
Cladosporium cucumerinum Ellis & Arthur 1889
Cladosporium elatum (Harz) Nannf. 1934
Cladosporium fulvum Cooke 1883 accepted as Fulvia fulva (Cooke) Cif., (1954)
Cladosporium herbarum (Pers.) Link 1816
Cladosporium laxum Kalchbr. & Cooke 1880, accepted as Passalora laxa (Kalchbr. & Cooke) U. Braun & Crous, (2003)
Cladosporium macrocarpum Preuss 1848
Cladosporium melanophloei Thüm. 1877
Cladosporium pisicola W.C. Snyder [as pisicolum], (1934)
Cladosporium tenuissimum Cooke 1878
Cladosporium vignae Rac. possibly M.W. Gardner 1925,
Cladosporium zeae Peck 1894
Cladosporium sp.
Genus: Clasterosporium Schwein. 1832
Clasterosporium carpophilum (Lév.) Aderh. (1901), accepted as Stigmina carpophila (Lév.) M.B. Ellis, (1959)
Clasterosporium celastri (Thüm.) Sacc. 1886
Clasterosporium clavatum (Lév.) Sacc. 1886
Clasterosporium densum Syd. & P. Syd. 1912 accepted as Annellophorella densa (Syd. & P. Syd.) Subram., (1962)
Family: Clathraceae Chevall. 1826
Genus: Clathroporina Müll. Arg. 1882
Clathroporina locuples (Stizenb.) Zahlbr. 1922
Genus: Clathrus P. Micheli ex L. 1753, (formerly Clathrella)
Clathrus angolensis (Welw. & Curr.) E. Fisch. 1886, accepted as Blumenavia angolensis (Welw. & Curr.) Dring, (1980)
Clathrus baumii Henn. 1903
Clathrus camerunensis Henn. 1890
Clathrus cancellatus Tourn. ex Fr. 1823, accepted as Clathrus ruber P. Micheli ex Pers., (1801)
Clathrus cibarius (Tul. & C. Tul.) E. Fisch. 1886, accepted as Ileodictyon cibarium Tul. & C. Tul. [as 'cibaricum'], (1844)
Clathrus gracilis (Berk.) Schltdl. 1862, accepted as Ileodictyon gracile Berk., (1845)
Clathrus pseudocancellatus (E. Fisch.) Lloyd 1909, [as pseudocancellata]
Clathrus sp.
Genus: Claudopus Gillet 1876, accepted as Entoloma (Fr.) P. Kumm., (1871)
Claudopus proteus Kalchbr. possibly Sacc. 1887, accepted as Melanotus proteus (Sacc.) Singer, (1946)
Claudopus variabilis (Pers.) Gillet 1876, accepted as Crepidotus variabilis (Pers.) P. Kumm., (1871)
Family: Clavariaceae Chevall. 1826
Genus: Clavaria P. Micheli 1729
Clavaria abietina Pers. 1794, accepted as Phaeoclavulina abietina (Pers.) Giachini, (2011)
Clavaria byssiseda Pers. 1796
Clavaria capensis Thunb. 1800
Clavaria cinerea Bull. 1788, accepted as Clavulina cinerea (Bull.) J. Schröt., (1888)
Clavaria cladoniae Kalchbr. 1882, accepted as Ramaria cladoniae (Kalchbr.) D.A. Reid, (1974)
Clavaria contorta Holmsk. 1790, accepted as Typhula contorta (Holmsk.) Olariaga, (2013)
Clavaria corniculata Schaeff. 1774, accepted as Clavulinopsis corniculata (Schaeff.) Corner, (1950)
Clavaria corniculata var. pratensis (Pers.) Cotton & Wakefield 1919, accepted as Clavulinopsis corniculata (Schaeff.) Corner, (1950)
Clavaria cristata (Holmsk.) Pers. 1801, accepted as Clavulina coralloides (L.) J. Schröt., (1888)
Clavaria cyanocephala Berk. & M.A. Curtis 1868, accepted as Phaeoclavulina cyanocephala (Berk. & M.A. Curtis) Giachini, (2011)
Clavaria dealbata Berk. 1856, accepted as Ramariopsis dealbata (Berk.) R.H. Petersen, (1984)
Clavaria dichotoma Kalchbr. 1882, accepted as Ramaria saccardoi (P. Syd.) Corner, (1950)
Clavaria durbana Van der Byl 1932, accepted as Scytinopogon pallescens (Bres.) Singer, (1945)
Clavaria fastigiata L. 1753, accepted as Clavulinopsis corniculata (Schaeff.) Corner, (1950)
Clavaria flaccida Fr. 1821, accepted as Phaeoclavulina flaccida (Fr.) Giachini, (2011)
Clavaria furcellata Fr. 1830
Clavaria inaequalis O.F. Müll. 1780
Clavaria kalchbrenneri Sacc. 1888
Clavaria kunzei Fr. (1821), accepted as Ramariopsis kunzei (Fr.) Corner (1950)
Clavaria laeticolor Berk. & M.A. Curtis 1868, accepted as Clavulinopsis laeticolor (Berk. & M.A. Curtis) R.H. Petersen, (1965)
Clavaria ligula Schaeff. 1774, accepted as Clavariadelphus ligula (Schaeff.) Donk, (1933)
Clavaria lorithamnus Berk. 1872, accepted as Ramaria lorithamnus (Berk.) R.H. Petersen,
Clavaria miniata Berk. (1843), accepted as Clavulinopsis sulcata Overeem (1923)
Clavaria muscoides L. 1753
Clavaria persimilis Cotton 1910, accepted as Clavulinopsis laeticolor (Berk. & M.A. Curtis) R.H. Petersen, (1965)
Clavaria pistillaris L. 1753, accepted as Clavariadelphus pistillaris (L.) Donk, (1933)
Clavaria pulchra Peck 1876, accepted as Clavulinopsis laeticolor (Berk. & M.A. Curtis) R.H. Petersen, (1965)
Clavaria semivestita Berk. & Broome 1873, accepted as Clavulinopsis semivestita (Berk. & Broome) Corner, (1950)
Clavaria setacea Kalchbr. 1882, accepted as Pterula setacea (Kalchbr.) Corner, (1950)
Clavaria stricta Pers. 1795, accepted as Ramaria stricta (Pers.) Quél., (1888)
Clavaria vermicularis Fr., possibly Sw. (1811), accepted as Clavaria fragilis Holmsk. (1790)
Clavaria zippelii Lév. 1844, accepted as Phaeoclavulina zippelii (Lév.) Overeem, (1923)
Genus: Claviceps Tul. 1853
Claviceps digitariae Hansf. 1941
Claviceps microcephala (Wallr.) Tul. 1853, accepted as Claviceps purpurea (Fr.) Tul., (1853)
Claviceps paspali F. Stevens & J.G. Hall 1910
Claviceps purpurea (Fr.) Tul. 1853
Claviceps sp.
Genus: Clitocybe (Fr.) Staude 1857,
Clitocybe amara Quel. (Alb. & Schwein.) P. Kumm. 1871, accepted as Lepista amara (Alb. & Schwein.) Maire, (1930)
Clitocybe expallens Quel. (Pers.) P. Kumm. 1871, accepted as Pseudoclitocybe expallens (Pers.) M.M. Moser, (1967)
Clitocybe fragrans Quel. (With.) P. Kumm. 1871
Clitocybe gentianea Quél. 1873, accepted as Leucopaxillus gentianeus (Quél.) Kotl. (1966)
Clitocybe infundibuliformis var. membranacea Gill. (Vahl) Massee 1893, accepted as Infundibulicybe gibba (Pers.) Harmaja, (2003)
Clitocybe membranacea Fr. (Vahl) Sacc. 1887, accepted as Infundibulicybe gibba (Pers.) Harmaja, (2003)
Clitocybe sinopica Gill. (Fr.) P. Kumm. 1871, accepted as Bonomyces sinopicus (Fr.) Vizzini, (2014)
Clitocybe splendens (Pers.) Gillet 1874, accepted as Paralepista splendens (Pers.) Vizzini, (2012)
Clitocybe trulliformis (Fr.) P. Karst. [as trullaeformis] accepted as Infundibulicybe trulliformis (Fr.) Gminder 2016
Clitocybe ziziphina (Viv.) Sacc. (1887), [as zizyphina].
Genus: Clitopilus (Fr. ex Rabenh.) P. Kumm. 1871
Clitopilus prunulus Quel. (Scop.) P. Kumm. 1871
Genus: Clypeolella Höhn. 1910, accepted as Sarcinella Sacc. (1880)
Clypeolella psychotriae (Doidge) Doidge 1942, accepted as Schiffnerula psychotriae (Doidge) S. Hughes, (1987)
Clypeolella rhamnicola (Doidge) Doidge 1942, accepted as Schiffnerula rhamnicola (Doidge) S. Hughes, (1987)
Genus: Clypeosphaeria Fuckel 1870
Clypeosphaeria natalensis Doidge 1922,
Co
Genus: Cochliobolus Drechsler 1934, accepted as Bipolaris Shoemaker, (1959)
Cochliobolus stenospilus T. Matsumoto & W. Yamam. 1936, [as Cochliobilus stenospilus] accepted as Bipolaris stenospila (Drechsler ex Faris) Shoemaker, (2018)
Family: Coccidioidaceae Cif. 1932
Genus: Coccocarpia Pers. 1827
Coccocarpia pellita var. parmelioides (Hook.) Müll. Arg. 1887, accepted as Coccocarpia erythroxyli (Spreng.) Swinscow & Krog (1976)
Genus: Coccochora Höhn. 1909,
Coccochora lebeckiae Verwoerd & Dippen. 1930, accepted as Coleroa lebeckiae (Verwoerd & Dippen.) Arx, (1962)
Genus: Cocconia Sacc. 1889
Cocconia capensis Doidge 1921, accepted as Cycloschizon capense (Doidge) Arx, (1962)
Cocconia concentrica (Syd. & P. Syd.) Syd. 1915
Cocconia porrigo (Cooke) Sacc. 1889, accepted as Cycloschizon porrigo (Cooke) Arx, (1962)
Family: Coenogoniaceae Stizenb. 1862
Genus: Coenogonium
Coenogonium afrum Massal.
Coenogonium interplexum Nyl.
Genus: Coleophoma
Coleophoma oleae Petr. & Syd.
Genus: Coleosporium
Coleosporium clematidis Bare.
Coleosporium detergibile Thuem.
Coleosporium hedyotidis Kalchbr. & Cooke
Coleosporium ipomoeae Burr.
Coleosporium ochraceum Bon.
Family: Collemaceae
Genus: Collema
Collema aggregatum Rohl.
Collema bullatum Ach.
Collema byrsinum Ach.
Collema caespitosum Tayl.
Collema crispum Wigg.
Collema fuliginellum Nyl.
Collema furvum DC.
Collema lanatum Pers.
Collema nigrescens DC.
Collema pulposum Ach. var. tenax Nyl.
Collema redundans Nyl.
Collema saturninum DC.
Collema tenax Ach.
Collema thysaneum Ach.
Collema tremelloides Ach.
Genus: Colletotrichum
Colletotrichum agaves Cav.
Colletotrichum anonicola Speg.
Colletotrichum antirrhini Stew.
Colletotrichum atramentarium Taubenh.
Colletotrichum brachytrichum Del.
Colletotrichum camelliae Mass.
Colletotrichum carica Stev. & Hall.
Colletotrichum cereale Manns.
Colletotrichum circinans Vogl.
Colletotrichum coffeanum Noack.
Colletotrichum dematium Grove.
Colletotrichum falcatum Went.
Colletotrichum gloeosporioides Penz.
Colletotrichum glycines Hori.
Colletotrichum gossypii Edgert.
Colletotrichum graminicolum Wilson.
Colletotrichum kickxiae Verw. & du Pless.
Colletotrichum lagenarium Ell. & Halst.
Colletotrichum lindemuthianum Shear
Colletotrichum malvarum Braun & Casp.
Colletotrichum nigrum Ell. & Halst.
Colletotrichum omnivorum Halst.
Colletotrichum orchidearum Allesch.
Colletotrichum papayae Syd.
Colletotrichum phomoides Chester.
Colletotrichum pterocelastri Wakef.
Colletotrichum rhodocyclum Petrak.
Colletotrichum trifolii Bain & Essary.
Colletotrichum sp.
Genus: Colonnaria
Colonnaria columnata Ed.Fisch.
Genus: Collybia
Collybia acervata Gill.
Collybia albuminosa Petch.
Collybia alveolata Sacc.
Collybia butyracea Quel.
Collybia chortophila Sacc.
Collybia confluens Quel.
Collybia dryophila Quel.
Collybia dryophila var. oedipus Quel.
Collybia extuberans Quel.
Collybia fusipes Quel. var. contorta Gill. & Lucand.
Collybia homotricha Sacc.
Collybia macilenta Gill.
Collybia melinosarca Sacc.
Collybia radicata Quel.
Collybia radicata var. brachypus Sacc.
Collybia ratticauda Fayod.
Collybia stipulincola Kalchbr.
Collybia stridula Quel.
Collybia velutipes Quel.
Collybia sp.
Genus: Comatricha
Comatricha irregularis Peck.
Comatricha longa Peck.
Comatricha nigra Schroet.
Comatricha nigra var. alta Lister.
Comatricha tenerrima G. Lister.
Comatricha typhoides Rost.
Genus: Combea
Combea mollusca Ach.
Combea pruinosa de Not.
Genus: Coniocybe
Coniocybe owanii Korb.
Genus: Coniodictyum
Coniodictyum evansii Höhn.
Genus: Coniophora
Coniophora atrocinerea Karst.
Coniophora betulae Karst.
Coniophora cerebella Pers.
Coniophora papillosa Talbot.
Coniophora pulverulenta Mass.
Coniophora puteana Karst.
Coniophora olivacea Karst.
Genus: Coniosporium
Coniosporium schiraianum Bubak.
Genus: Coniothecium
Coniothecium chomatosporum Corda.
Coniothecium citri McAlp.
Coniothecium macowanii Sacc.
Coniothecium punctiforme Wint.
Coniothecium scabrum McAlp.
Coniothecium sp.
Genus: Coniothyrina
Coniothyrina agaves Petrak & Syd.
Genus: Coniothyrium
Coniothyrium fuckelii Sacc.
Coniothyrium insigne Syd.
Coniothyrium occultum Syd.
Coniothyrium palmicolum Starb.
Coniothyrium rafniicola Petrak.
Genus: Conotrema
Conotrema volvarioides Müll.Arg.
Genus: Cookeina
Cookeina colensoi Seaver
Genus: Coprinus
Coprinus atramentarius Fr. (= Coprinopsis atramentaria)
Coprinus cheesmani Th.Gibbs
Coprinus cinereus S.F.Gray (= Coprinopsis cinerea)
Coprinus comatus S.F.Gray
Coprinus comatus var. ovatus Quel.
Coprinus curtus Kalchbr.
Coprinus digitalis Fr.
Coprinus ephemerus Fr.
Coprinus flocculosus Fr.
Coprinus macrorhizus Rea.
Coprinus micaceus Fr. (= Coprinellus micaceus)
Coprinus micaceus var. truncorum Quel.
Coprinus niveus Fr. Schaeff. ex Fr. (= Coprinellus niveus)
Coprinus plicatilis Fr.
Coprinus punctatus Kalchbr. & Cooke
Coprinus radiatus S.F.Gray (= Coprinopsis radiata)
Coprinus stercorarius Fr.
Coprinus truncorum Fr.
Coprinus sp.
Genus: Cordana
Cordana musae Höhn.
Cordana pauciseptata Preuss.
Cordana sp.
Genus: Cordyceps
Cordyceps flabella Berk. & Curt.
Cordyceps velutipes Mass.
Cordyceps sp.
Genus: Cornicularia
Cornicularia tenuissima Zahlbr.
Genus: Corollospora
Corollospora maritima Werd.
Genus: Corticium
Corticium abeuns Burt.
Corticium argillaceum Höhn. & Litsch
Corticium armeniacum Sacc. (= Cerocorticium molle)
Corticium atrocinereum Kalchbr.
Corticium bombycinum Bres.
Corticium caeruleum Fr.
Corticium calceum Fr. emend. Romell & Burt
Corticium calceum Fr.
Corticium calceum var. lacteum Fr.
Corticium ceraceum Berk. & Rav. (= Cerocorticium molle)
Corticium cinereum Fr. (= Peniophora cinerea)
Corticium confluens Fr.
Corticium dregeanum Berk.
Corticium gloeosporum Talbot
Corticium lacteum Fr. (= Phanerochaete tuberculata)
Corticium laetum Bres.
Corticium luteocystidiatum Talbot
Corticium nudum Fr.
Corticium pelliculare Karst.
Corticium portentosum Berk. & Curt.
Corticium pulverulentum Cooke
Corticium salmonicolor Berk. & Br. (= Phanerochaete salmonicolor)
Corticium scutellare Berk. & Curt.
Corticium solani Bourd. & Galz
Corticium tumulosum Talbot
Corticium vagum Berk. & Curt.
Corticium vagum var. solani Burt.
Corticium sp.
Genus: Cortinarius
Cortinarius argutus Fr.
Cortinarius camurus Fr.
Cortinarius castaneus Fr.
Cortinarius fusco-tinctus Rea.
Cortinarius lepidopus Cooke.
Cortinarius multiformis Fr.
Family: Coryneliaceae
Genus: Corynelia
Corynelia carpophila Syd.
Corynelia clavata Sacc.
Corynelia clavata f. fructicola Rehm.
Corynelia fructicola Höhn.
Corynelia tripos Cooke
Corynelia uberata Fr. ex Ach.
Genus: Coryneliospora
Coryneliospora fructicola Fitz.
Genus: Coryneum
Coryneum beyerinckii Oud. (Stigmina carpophila)
Coryneum cocoes P.Henn.
Coryneum disciforme Kunze & Schum.
Coryneum dovyalidis v.d.Byl.
Coryneum kunzei Corda.
Cr
Genus: Craterellus
Craterellus cornucopioides Pers.
Genus: Craterium
Craterium aureum Rost.
Craterium confusum Mass.
Craterium leucocephalum Ditm. var. scyphoides Lister.
Craterium minutum Fr.
Genus: Crepidotus
Crepidotus applanatus Karst.
Crepidotus episphaeria Sacc.
Crepidotus inandae Sacc.
Crepidotus mollis Quel.
Crepidotus pogonatus Sacc.
Crepidotus scalaris Karst. var. lobulatus Sacc.
Genus: Cribraria
Cribraria argillacea Pers.
Cribraria intricata Schrad.
Cribraria tenella Schrad.
Genus: Crocynia
Crocynia membranacea Zahlbr.
Genus: Cronartium
Cronartium bresadoleanum P.Henn.
Cronartium bresadoleanum var. eucleae P.Henn.
Cronartium gilgianum P.Henn.
Cronartium zizyphi Syd. & Butl.
Genus: Crossopsora
Crossopsora gilgiana Syd.
Genus: Crucibulum
Crucibulum vulgare Tul.
Genus: Cryptococcus
Cryptococcus histolycus Stoddard & Cutler.
Cryptococcus linguae-pilosa Castellani & Chalmers.
Cryptococcus sp.
Genus: Cryptodidymosphaeria
Cryptodidymosphaeria clandestina Syd.
Genus: Cryptogene
Cryptogene parodiellae Syd.
Genus: Cryptogenella
Cryptogenella parodiellae Syd.
Genus: Cryptomyces
Cryptomyces eugeniacearum Sacc.
Cryptomyces melianthi Sacc.
Cryptomyces myricae Sacc.
Genus: Cryptosporella
Cryptosporella umbrina Wehm.
Genus: Cryptosporium
Cryptosporium fatale Kalchbr.
Genus: Cryptostictus
Cryptostictis dryopteris Verw. & du Pless.
Genus: Cryptothecia
Cryptothecia subnidulans Stirt.
Family: Cryptotheciaceae
Cu
Genus: Cunninghamella
Cunninghamella sp.
Genus: Curvularia
Curvularia intermedia Boedijn.
Curvularia lunata Boedijn. (= Cochliobolus lunatus)
Curvularia maculans Boedijn.
Curvularia spicifera Boedijn. (= Cochliobolus spicifer)
Curvularia sp.
Cy
Genus: Cyanisticta
Cyanisticta crocata Gyeln.
Cyanisticta crocata var. isidiata Gyeln.
Cyanisticta gilva Gyeln.
Cyanisticta gilva var. angustilobata Gyeln.
Cyanisticta gilva var. lanata Gyeln.
Cyanisticta gilva var. pseudogilva Gyeln.
Cyanisticta subcrocata Gyeln.
Cyanisticta thouarsii Gyeln.
Genus: Cyathus
Cyathus berkeleyanus Lloyd
Cyathus dasypus Nees
Cyathus hookeri Berk.
Cyathus laevis Thunb.
Cyathus microsporus Tul.
Cyathus minutosporus Lloyd
Cyathus montagnei Tul.
Cyathus olla Pers.
Cyathus pallidus Berk. & Curt.
Cyathus plicatulus Poepp.
Cyathus poeppigii Tul.
Cyathus rufipes Ellis.
Cyathus stercoreus de Toni.
Cyathus stercoreus f. leseurii Tul.
Cyathus sulcatus Kalchbr.
Cyathus verrucosus DC.
Genus: Cycloconium
Cycloconium oleaginum Cast.
Genus: Cycloschizon
Cycloschizon brachylaenae P.Henn.
Cycloschizon fimbriatum Doidge
Genus: Cyclotheca
Cyclotheca bosciae Doidge
Genus: Cylindrocarpon
Cylindrocarpon album Wollenw. var. crassum Wollenw.
Genus: Cylindrocladium
Cylindrocladium scoparium Morgan
Genus: Cylindrosporium
Cylindrosporium castanicolum Berl.
Cylindrosporium chrysanthemi Ell. & Dearn.
Cylindrosporium juglandis Wolf
Cylindrosporium kilimandscharicum Allesch.
Cylindrosporium padi Karst.
Cylindrosporium ribis Davis.
Genus: Cymatoderma
Cymatoderma elegans Jungh.
Family: Cyphellaceae
Genus: Cyphella
Cyphella alboviolascens Karst.
Cyphella cheesmanni Mass.
Cyphella farinacea Kalchbr. & Cooke
Cyphella friesii Crouan.
Cyphella fulvodisca Cooke & Mass.
Cyphella pelargonii Kalchbr.
Cyphella punctiformis Karst.
Cyphella tabacina Cooke & Phil.
Cyphella variolosa Kalchbr.
Genus: Cystingophora
Cystingophora deformans Syd.
Genus: Cystopus
Cystopus amaranthi Berk.
Cystopus austro-africanus Sacc. & Trott.
Cystopus austro-africanus Wakef.
Cystopus bliti de Bary.
Cystopus candidus de Bary.
Cystopus cubicus de Bary.
Cystopus evansii Sacc. & Trott.
Cystopus evansii Wakef.
Cystopus ipomoeae-panduranae Stev. & Swing.
Cystopus portulacae de Bary.
Cystopus quadratus Kalchbr. & Cooke
Cystopus schlechteri Syd.
Cystopus tragopogonis Schroet.
Genus: Cystotelium
Cystotelium inornatum Syd.
Genus: Cytidea
Cytidea cornea Lloyd
Cytidea flocculenta Höhn. & Litsch
Cytidea habgallae Martin
Cytidea simulans Lloyd
Genus: Cytoplea
Cytoplea adeniae du Pless.
Genus: Cytospora
Cytospora australiae Speg.
Cytospora chrysosperma Fr.
Cytospora foliicola Libert
Cytospora leucostoma Sacc.
Cytospora sacchari Butl.
Cytospora verrucula Sacc. & Berl.
Cytospora xanthosperma Fr.
Genus: Cytosporella
Cytosporella aloes du Pless.
References
Sources
See also
List of fungi of South Africa
List of fungi of South Africa – A
List of fungi of South Africa – B
List of fungi of South Africa – D
List of fungi of South Africa – E
List of fungi of South Africa – F
List of fungi of South Africa – G
List of fungi of South Africa – H
List of fungi of South Africa – I
List of fungi of South Africa – L
List of fungi of South Africa – M
List of fungi of South Africa – P
List of fungi of South Africa – R
List of fungi of South Africa – S
List of fungi of South Africa – T
List of fungi of South Africa – U
Further reading
Fungi C
South Africa | List of fungi of South Africa – C | Biology | 19,481 |
78,058,995 | https://en.wikipedia.org/wiki/Explanatory%20indispensability%20argument | The explanatory indispensability argument is an argument in the philosophy of mathematics for the existence of mathematical objects. It claims that rationally we should believe in mathematical objects such as numbers because they are indispensable to scientific explanations of empirical phenomena. An altered form of the Quine–Putnam indispensability argument, it differs from that argument in its increased focus on specific explanations instead of whole theories and in its shift towards inference to the best explanation as a justification for belief in mathematical objects rather than confirmational holism.
Specific explanations proposed as examples of mathematical explanations in science include why periodical cicadas have prime-numbered life cycles, why bee honeycomb has a hexagonal structure, and the solution to the Seven Bridges of Königsberg problem. Objections to the argument include the idea that mathematics is only used as a representational device, even when it features in scientific explanations; that mathematics does not need to be true to be explanatory because it could be a useful fiction; and that the argument is circular and so begs the question in favour of mathematical objects.
Background
The explanatory indispensability argument is an altered form of the Quine–Putnam indispensability argument first raised by W. V. Quine and Hilary Putnam in the 1960s and 1970s. The Quine–Putnam indispensability argument supports the conclusion that mathematical objects exist with the idea that mathematics is indispensable to the best scientific theories. It relies on the view, called confirmational holism, that scientific theories are confirmed as wholes, and that the confirmations of science extend to the mathematics it makes use of.
The reliance of the Quine–Putnam argument on confirmational holism is controversial, and it has faced influential challenges from Penelope Maddy and Elliott Sober. The argument has also been criticized for failing to specify the way in which mathematics is indispensable to science; according to Joseph Melia, one would only need to believe in mathematics if it is indispensable in the right way. Specifically, it needs to be indispensable to scientific explanations for it to be as strongly justified as theoretical entities such as electrons. This claim by Melia arose through a debate with Mark Colyvan in the early 2000s over the argument, with Colyvan claiming that mathematics enhances the explanatory power of science. Inspired by this debate, Alan Baker developed an explicitly explanatory form of the indispensability argument, which he termed the enhanced indispensability argument. He was also motivated by the objections against confirmational holism; his formulation aimed to replace confirmational holism with an inference to the best explanation. As such, it is more focused on individual scientific explanations than whole theories.
Among Baker's influences was Hartry Field, who has been credited with being the first person to draw a connection between indispensability arguments and explanation. Baker cited Field as originating an explanatory form of the argument, although Sorin Bangu states that Field merely alluded to such an argument without fully developing it, and Russell Marcus argues he was discussing explanation within the context of the original Quine–Putnam indispensability argument rather than suggesting a new explanatory indispensability argument. According to Marcus, Colyvan's discussion of explanatory power was also initially restricted to its role within the Quine–Putnam indispensability argument. He credits Baker with originating the explanatory indispensability argument. Others, such as Christopher Pincock, place the beginning of the argument's development with Colyvan while noting that Baker sharpened its explanatory focus.
Overview of the argument
A standard formulation of the explanatory indispensability argument is given as follows:
1. We ought rationally to believe in the existence of any entity which plays an indispensable explanatory role in our best scientific theories.
2. Mathematical objects play an indispensable explanatory role in science.
3. Therefore, we ought rationally to believe in the existence of mathematical objects.
The argument is premised on the idea that inference to the best explanation, which is often used to justify theoretical entities such as electrons, can provide a similar kind of support for mathematical objects. It also requires that there are genuinely mathematical explanations in science. For explanations to be genuinely mathematical, it is not enough that they are expressed with the help of mathematics. Instead, mathematics must play an essential part in the explanatory work. Given the argument's reliance on the existence of such explanations, much of the discussion on it has focused on evaluating specific case studies to assess if they are genuine mathematical explanations or not.
Case studies
Periodical cicadas
The most influential case study is the example of periodical cicadas provided by Baker. Periodical cicadas are a type of insect that usually have life cycles of 13 or 17 years. It is hypothesized that this is an evolutionary advantage because 13 and 17 are prime numbers. Because prime numbers have no non-trivial factors, it is less likely that periodic predators and other competing species of cicada can synchronize with the periodical cicadas' life cycles. Baker argues that this is an explanation in which mathematics, specifically number theory, plays a key role in explaining an empirical phenomenon.
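A brief arithmetic illustration of the synchronization point (the 4-year predator period is hypothetical, chosen only for the sake of the example): a life cycle of length $n$ coincides with a predator of period $p$ once every $\operatorname{lcm}(n, p)$ years, and prime cycle lengths maximize this interval:

$$\operatorname{lcm}(17, 4) = 68 \qquad \text{versus} \qquad \operatorname{lcm}(16, 4) = 16,$$

so a 17-year cicada would meet such a predator once every 68 years, whereas a hypothetical 16-year cicada would meet it on every emergence.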
A number of non-mathematical explanations have been proposed for the length of periodical cicadas' life cycles. For example, a prominent alternative explanation claims that prime-numbered life cycles could have emerged from non-prime life cycles due to developmental delays. This hypothesis is supported by the fact that there are many other species of cicada that have non-prime life cycles, and that developmental changes with 4-year periods have often been observed in periodical cicadas. Some philosophers have also argued that the concept of primeness in the case study by Baker can be replaced with a non-numeric concept of "intersection-minimizing periods", although Baker has argued that this would reduce the generality and depth of the explanation. Others, such as Chris Daly and Simon Langford, argue that using years as a unit of measurement rather than months or seasons is arbitrary; Baker and Colyvan argue that years are an appropriate unit of time for biological development and are the unit used by biologists.
The case study has also been criticized for assuming that periodical cicadas have had predators with periodic life cycles in their evolutionary history. Baker has responded to this worry by arguing that it would be impossible to provide direct evidence that periodical cicadas have had periodic predators because "periodicity is not something that can be gleaned from the fossil record". However, he has attempted to make the claim more plausible by arguing that ecological constraints could have restricted the range of the cicadas' possible life cycles, lessening the requirements on periodic predators for the case study to remain mathematically sound. This problem can also be avoided by focusing on other ways in which the prime life cycles could be explanatorily relevant, such as avoidance of competing species of cicada or periodic migration of predators.
Bee honeycomb
Another prominent case study suggested by Aidan Lyon and Colyvan concerns the hexagonal structure of bee honeycomb. Lyon and Colyvan contend that the hexagonal structure of bee honeycomb can be explained by the mathematical proof of the honeycomb conjecture, which states that hexagons are the most efficient regular tiling of the plane. The explanation goes that there is an evolutionary pressure for honeybees to conserve wax in the construction of their combs, so the efficiency of the hexagonal grid explains why it is selected for.
The explanation based on the honeycomb conjecture is potentially incomplete because the proof is a solution to a tiling problem in two dimensions, and disregards the 3D structure of comb cells. Furthermore, many mathematicians do not see the proof of the honeycomb conjecture as an explanatory proof as it employs concepts outside of geometry to establish a geometrical result, although Baker argues that the proof need not be explanatory for the theorem to feature in genuine explanations in science. It is also controversial amongst philosophers whether the subject matter of geometry is purely mathematical, or whether it concerns physical space and structures, leading them to question if the explanation is truly mathematical.
There are also non-mathematical explanations for the honeycomb case study. Darwin believed that the hexagonal shape of bee combs was the result of tightly packed spherical cells being pushed together and pressed into hexagons, with bees fixing breakages with flat surfaces of wax further contributing to a hexagonal shape. More modern presentations hold that the shape of honeycomb is due to the flow of molten wax during the construction process.
Others
Another key example is the Seven Bridges of Königsberg, which concerns the impossibility of crossing each of the historical seven bridges in the Prussian city of Königsberg a single time in a continuous walk around the city. The explanation was found by Leonhard Euler in 1735 when he considered whether such a journey was possible. Euler's solution involved abstracting away from the concrete details of the problem to a mathematical representation in the form of a graph, with nodes representing landmasses and lines representing bridges. He reasoned that for each landmass, unless it is a starting or ending point, there must be a path to both enter and exit it. Therefore, there must be at most two nodes in the graph with an odd number of lines connected to them for such a journey to be possible. But this is not the case for the graph representing the seven bridges in Königsberg, so it is mathematically impossible to cross all seven without crossing over one of the bridges multiple times.
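Euler's parity condition can be checked directly. A minimal sketch, in which A denotes the Kneiphof island, B and C the two river banks, and D the eastern landmass (the labels are conventional, not Euler's own):

```python
from collections import Counter

# The historical seven bridges as edges of a multigraph.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [node for node, d in degree.items() if d % 2 == 1]
print(dict(degree))  # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd))      # 4 odd-degree nodes, more than 2, so no such walk exists
```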
The existence at any particular time of antipodal points on the Earth's surface with equal temperature and pressure has been cited as another example. According to Colyvan, this is explained by the Borsuk–Ulam theorem, which entails that for any physical property that varies continuously across the surface of a sphere, there are antipodal points on that sphere with equal values of that property. In response to this example, Baker has argued that it is a prediction rather than an explanation because antipodal points with equal pressure and temperature have not already been measured. Mary Leng also questions whether it is appropriate to model temperature or pressure as continuous functions across individual points on the Earth's surface. Other examples proposed by Colyvan include geometrical explanations for Lorentz contraction and gravitational lensing. Baker and Melia have objected to the geometrical aspects of these explanations, which could be interpreted physically instead of mathematically.
A key class of mathematical explanations is solutions to optimization problems, which includes the cicada and bee honeycomb case studies. In these cases, a certain feature is explained by showing that it is mathematically optimal. Such explanations are important in evolutionary biology, as mathematical demonstrations of optimality may help to explain why a given trait has been selected for, but also appear in other areas of science such as physics, engineering and economics. Some examples from evolutionary biology are sunflowers' seeds being arranged in a spiral pattern because it produces the densest packing of seeds, and marine predators engaging in Lévy walks because they minimize the average energy consumption required to find prey.
A number of case studies draw from dynamical systems. Marc Lange, for example, argues that the fact that double pendulums always have four or more equilibrium configurations can be explained by the configuration space of the system forming the surface of a torus, which must have at least four stationary points. Lyon and Colyvan point to the use of phase spaces and the Poincaré map to explain the behaviour of a Hénon–Heiles system, such as the stability of a star's orbit through a galaxy.
Some examples are drawn from outside science. For example, widely discussed cases include the explanation for why 23 strawberries cannot be divided equally amongst three people, why it is impossible to square the circle, and why it is impossible to untie a trefoil knot. However, it is unclear to what extent each of these cases are mathematical explanations of physical facts rather than either purely physical or purely mathematical explanations.
Objections
The main response to the explanatory indispensability argument, adopted by philosophers such as Melia, Daly, Langford, and Saatsi, is to deny there are genuinely mathematical explanations of empirical phenomena, instead framing the role of mathematics as representational or indexical. According to this response, if mathematics features in scientific explanations, its role is just to help pick out physical facts instead of contributing to the explanatory power of the explanation. Saatsi, and others including Jonathan Tallant and Davide Rizza, have rephrased case studies such as the periodic cicada example to remove reference to mathematical entities in an attempt to provide the true non-mathematical versions of these explanations. Defenders of the explanatory indispensability argument typically argue that the non-mathematical explanations provided are less general and modally weaker than mathematical explanations. They also argue that such explanations contradict scientific practice because scientists often accept the mathematical explanations as genuine scientific explanations.
Others, particularly mathematical fictionalists like Mary Leng and Stephen Yablo, have accepted that mathematics plays a genuinely explanatory role in science but argue it can play this role even if mathematical objects do not exist. They point to the use of idealizations like point masses that are used in scientific explanations but are not viewed as literally real. Leng argues that the explanatory power of mathematics can be explained by structural similarities between mathematical theory (viewed fictionally) and features of the real world. Yablo appeals to the expressive power of figurative language, claiming it shows that literally untrue statements can often convey more than literally true statements. Colyvan has challenged these types of responses by arguing that fictional or metaphorical language cannot play a role in genuine explanations: "when some piece of language is delivering an explanation, either that piece of language must be interpreted literally or the non-literal reading of the language in question stands proxy for the real explanation."
An objection advanced by Bangu states that the explanatory indispensability argument begs the question because it is circular. Bangu argues that examples like the periodic cicada case aim to explain statements that already contain mathematical content, namely the primeness of the cicadas' life cycles. But an inference to the best explanation assumes that the statement being explained is true, so the inclusion of mathematical concepts such as primeness assumes the truth of the mathematics in question. Baker has responded to this objection by arguing that the statements being explained in such case studies can be reformulated to remove reference to mathematical entities, leaving mathematics indispensable only to the explanation itself and not the thing being explained.
Notes
References
Citations
Sources
Further reading
Indispensability Arguments in Mathematics, Explanation in Mathematics, and Mathematical Explanation at PhilPapers
Philosophy of mathematics
Philosophical arguments | Explanatory indispensability argument | Mathematics | 3,076 |
75,226,153 | https://en.wikipedia.org/wiki/Govorestat | Govorestat (AT-007) is an aldose reductase inhibitor and experimental drug to treat galactosemia and sorbitol dehydrogenase deficiency.
After a report circulating on the internet accused the developer Applied Therapeutics of cutting corners in its studies of the drug, the FDA put a hold on it in 2020. Applied Therapeutics said that the report was a fraudulent attempt to manipulate its stock price.
References
Aldose reductase inhibitors
Trifluoromethyl compounds
Benzothiazoles
Carboxylic acids
Pyridazines
Thienopyridines | Govorestat | Chemistry | 124 |
66,256,557 | https://en.wikipedia.org/wiki/Nitrogen%20solubility%20index | The nitrogen solubility index (NSI) is a measure of the solubility of the protein in a substance. It is typically used as a quick measure of the functionality of a protein, for example to predict the ability of the protein to stabilise foams, emulsions or gels. To determine the NSI, the sample is dried, dispersed in a 0.1 M salt solution, centrifuged and filtered. The NSI is the amount of Nitrogen in this filtered solution divided by the nitrogen in the initial sample, as measured by the Kjeldahl method.
The relevance of the NSI is based on the fact that proteins are the major biological source of nitrogen: for various types of protein, there are empirical formulas which correlate the nitrogen content to the protein content. Other related measures of protein solubility are the protein solubility index (PSI) and the protein dispersibility index (PDI). These are based on a specific protein assay rather than a nitrogen assay, and the dispersibility index differs from the solubility index in that the sample is dispersed with a high-shear mixer and then strained through a screen instead of being centrifuged and filtered.
References
Protein methods
Food analysis | Nitrogen solubility index | Chemistry,Biology | 261 |
5,220,510 | https://en.wikipedia.org/wiki/List%20of%20centroids | The following is a list of centroids of various two-dimensional and three-dimensional objects. The centroid of an object in -dimensional space is the intersection of all hyperplanes that divide into two parts of equal moment about the hyperplane. Informally, it is the "average" of all points of . For an object of uniform composition, or in other words, has the same density at all points, the centroid of a body is also its center of mass. In the case of two-dimensional objects shown below, the hyperplanes are simply lines.
2-D Centroids
For each two-dimensional shape below, the area and the centroid coordinates are given:
Where the centroid coordinates are marked as zero, the centroid lies at the origin; in these cases, halving the lengths of the included axes to reach the center leads back to the origin, so the coordinates are zero.
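Tables like those described here are most often used to locate the centroid of a composite figure as the area-weighted average of its parts' centroids. A minimal sketch (the rectangle and triangle centroid formulas are standard; the particular figure is invented for illustration):

```python
# Centroid of a composite 2-D figure as the area-weighted average of the
# centroids of its parts. Hypothetical figure: a 4 x 2 rectangle with a
# right triangle (legs 4 and 3) sitting on its top edge.
parts = [
    # (area, centroid_x, centroid_y)
    (4 * 2,       2.0,   1.0),        # rectangle: centroid at (b/2, h/2)
    (0.5 * 4 * 3, 4 / 3, 2 + 3 / 3),  # triangle: centroid one third of each
                                      # leg from the right angle at (0, 2)
]

area = sum(a for a, _, _ in parts)
cx = sum(a * x for a, x, _ in parts) / area
cy = sum(a * y for a, _, y in parts) / area
print(f"composite centroid: ({cx:.3f}, {cy:.3f})")  # (1.714, 1.857)
```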
3-D Centroids
For each three-dimensional body below, the volume and the centroid coordinates are given:
See also
List of moments of inertia
List of second moments of area
References
External links
http://www.engineering.com/Library/ArticlesPage/tabid/85/articleType/ArticleView/articleId/109/Centroids-of-Common-Shapes.aspx
Mechanics
Physics-related lists
Geometric centers | List of centroids | Physics,Mathematics,Engineering | 281 |
78,642,145 | https://en.wikipedia.org/wiki/Jumart | A jumart or jumar is a cryptozoological or folkloric hybrid between cattle (Bos taurus) and a species of equine (horse or donkey). Jumarts were once widely believed by Europeans to actually exist, and many people claimed to own or have encountered the animals. While they were discussed extensively in early scientific writings, such hybridization is now known to be biologically impossible, and many "jumarts" examined by later researchers were found to be hinnies (a horse/donkey hybrid) instead. The term is of French origin and most historical writings about supposed jumarts were in the French language.
References
Hybrid animals
French folklore | Jumart | Biology | 140 |
11,177,823 | https://en.wikipedia.org/wiki/Multi-link%20trunking | Multi-link trunking (MLT) is a link aggregation technology developed at Nortel in 1999. It allows grouping several physical Ethernet links into one logical Ethernet link to provide fault-tolerance and high-speed links between routers, switches, and servers.
MLT allows the use of several links (from 2 up to 8) and combines them to create a single fault-tolerant link with increased bandwidth. This produces server-to-switch or switch-to-switch connections that are up to 8 times faster. Prior to MLT and other aggregation techniques, parallel links were underutilized due to Spanning Tree Protocol’s loop protection.
Fault-tolerant design is an important aspect of Multi-Link Trunking technology. Should one or more links fail, the MLT technology will automatically redistribute traffic across the remaining links. This automatic redistribution is accomplished in less than half a second (typically less than 100 milliseconds), so no outage is noticed by end users. This high-speed recovery is required by many critical networks, where outages can cause loss of life or very large monetary losses. Combining MLT technology with Distributed Split Multi-Link Trunking (DSMLT), Split Multi-Link Trunking (SMLT), and R-SMLT technologies creates networks that support the most critical applications.
A general limitation of standard MLT is that all the physical ports in the link aggregation group must reside on the same switch. The SMLT, DSMLT and R-SMLT technologies remove this limitation by allowing the physical ports to be split between two switches.
Split multi-link trunking
Split multi-link trunking (SMLT) is a Layer-2 link aggregation technology in computer networking originally developed by Nortel as an enhancement to standard multi-link trunking (MLT) as defined in IEEE 802.3ad.
Link aggregation or MLT allows multiple physical network links between a network switch and another device (which could be another switch or a network device such as a server) to be treated as a single logical link, with traffic load-balanced across all available links. For each packet that needs to be transmitted, one of the physical links is selected based on a load-balancing algorithm (usually involving a hash function operating on the source and destination MAC address information). For real-world network traffic this generally results in an effective bandwidth for the logical link equal to the sum of the bandwidth of the individual physical links. Redundant links that were once unused due to Spanning Tree's loop protection can now be used to their full potential.
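A minimal sketch of the kind of hash-based link selection described above (the actual hash any given switch applies is vendor- and hardware-specific; the XOR-and-modulo rule here is illustrative only):

```python
def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Map a (source, destination) MAC pair to one physical link, so a
    given flow always takes the same link while flows as a whole spread
    across the aggregation group."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

# Example: pick one of 8 links in an MLT group for a frame
print(select_link("00:1b:63:84:45:e6", "00:1b:63:84:45:e7", 8))  # -> 1
```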
A general limitation of standard link aggregation, MLT or EtherChannel is that all the physical ports in the link aggregation group must reside on the same switch. The SMLT, DSMLT and RSMLT protocols remove this limitation by allowing the physical ports to be split between two switches, allowing for the creation of active load-sharing, high-availability network designs that meet five-nines availability requirements.
SMLT topologies
The two switches between which the SMLT is split are known as aggregation switches and form a logical cluster which appears to the other end of the SMLT link as a single switch.
The split may be at one or at both ends of the MLT. If both ends of the link are split, the resulting topology is referred to as an "SMLT square" when there is no cross-connect between diagonally opposite aggregation switches, or an "SMLT mesh" when each aggregation switch has a SMLT connection with both aggregation switches in the other pair. If only one end is split, the topology is referred to as an SMLT triangle.
In an SMLT triangle, the end of the link which is not split does not need to support SMLT. This allows non-Avaya devices including third-party switches and servers to benefit from SMLT. The only requirement is that IEEE 802.3ad static mode must be supported.
Operation
The key to the operation of SMLT is the Inter-Switch Trunk (IST). The IST is a (standard) MLT connection between the aggregation switches which allows the exchange of information regarding traffic forwarding and the status of individual SMLT links.
For each SMLT connection, the aggregation switches have a standard MLT or individual port with which an SMLT identifier is associated. For a given SMLT connection, the same SMLT ID must be configured on each of the peer aggregation switches.
For example, when one switch receives a response to an ARP request from an end station on a port that is part of an SMLT, it will inform its peer switch across the IST and request the peer to update its own ARP table with a record pointing to its own connection with the corresponding SMLT ID.
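A toy sketch of that table synchronization (the message fields and function names are invented for illustration; the real IST exchange is proprietary to the implementation):

```python
from dataclasses import dataclass

@dataclass
class ArpSyncMessage:
    """Hypothetical record one aggregation switch sends its IST peer
    after resolving a host on an SMLT-attached port."""
    host_ip: str
    host_mac: str
    smlt_id: int  # identifier shared by both legs of the split trunk

def on_ist_receive(arp_table: dict, local_smlt_ports: dict,
                   msg: ArpSyncMessage) -> None:
    # Point the peer's copy of the entry at its *own* leg of the same
    # SMLT, so traffic toward the host never has to cross the IST.
    arp_table[msg.host_ip] = (msg.host_mac, local_smlt_ports[msg.smlt_id])
```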
In general, normal network traffic does not traverse the IST unless this is the only path to reach a host which is connected only to the peer switch. By ensuring all devices have SMLT connections to the aggregation switches, traffic never needs to traverse the IST and the total forwarding capacity of the switches in the cluster is also aggregated.
The communication between peer switches across the IST allows both unicast and multicast routing information to be exchanged allowing protocols such as Open Shortest Path First (OSPF) and Protocol Independent Multicast-Sparse Mode (PIM-SM) to operate correctly.
Failure scenarios
The use of SMLT not only allows traffic to be load-balanced across all the links in an aggregation group but also allows traffic to be redistributed very quickly in the event of link or switch failure. In general the failure of any one component results in a traffic disruption lasting less than half a second (typically less than 100 milliseconds), making SMLT appropriate in environments running time- and loss-sensitive applications such as voice and video.
In a network using SMLT, it is often no longer necessary to run a spanning tree protocol of any kind, since there are no logical bridging loops introduced by the presence of the IST. This eliminates the need for spanning tree reconvergence or root-bridge failovers in failure scenarios, which cause interruptions in network traffic longer than time-sensitive applications can tolerate.
Product support
SMLT is supported within the following Avaya Ethernet Routing Switch (ERS) and Virtual Services Platform (VSP) Product Families: ERS 1600, ERS 5500, ERS 5600, ERS 7000, ERS 8300, ERS 8800, ERS 8600, MERS 8600, VSP 9000
SMLT is fully interoperable with devices supporting standard MLT (IEEE 802.3ad static mode).
R-SMLT
Routed-SMLT (R-SMLT) is a computer networking protocol developed at Nortel as an enhancement to split multi-link trunking (SMLT) enabling the exchange of Layer 3 information between peer nodes in a switch cluster for resiliency and simplicity for both L3 and L2.
In many cases, core network convergence time after a failure is dependent on the length of time a routing protocol requires to successfully converge (change or re-route traffic around the fault). Depending on the specific routing protocol, this convergence time can cause network interruptions ranging from seconds to minutes. The R-SMLT protocol works with SMLT and Distributed Split Multi-Link Trunking (DSMLT) technologies to provide sub-second failover (normally less than 100 milliseconds), so no outage is noticed by end users. This high-speed recovery is required by many critical networks, where outages can cause loss of life or very large monetary losses.
R-SMLT routing topologies provide an active-active router concept to core SMLT networks. The protocol supports networks designed with SMLT or DSMLT triangles, squares, and SMLT or DSMLT full-mesh topologies, with routing enabled on the core VLANs. R-SMLT takes care of packet forwarding during core router failures and works with any of the following protocol types: IP unicast static routes, RIP1, RIP2, OSPF, BGP and IPX RIP.
Product support
R-SMLT is supported on Avaya's Ethernet Routing Switch ERS 8600, ERS 8800, VSP9000, ERS 8300 and MERS 8600 products.
Distributed multi-link trunking
Distributed multi-link trunking (DMLT) or distributed MLT is a proprietary computer networking protocol designed by Nortel Networks, and now owned by Extreme Networks, used to load balance the network traffic across connections and also across multiple switches or modules in a chassis. The protocol is an enhancement to the Multi-Link Trunking (MLT) protocol.
DMLT allows the ports in a trunk (MLT) to span multiple units of a stack of switches or to span multiple cards in a chassis, preventing network outages when one switch in a stack fails or a card in a chassis fails.
DMLT is described in an expired United States Patent.
Distributed split multi-link trunking
Distributed split multi-link trunking (DSMLT) or Distributed SMLT is a computer networking technology developed at Nortel to enhance the Split Multi-Link Trunking (SMLT) protocol. DSMLT allows the ports in a trunk to span multiple units of a stack of switches or to span multiple cards in a chassis, preventing network outages when one switch in a stack fails or one card in a chassis fails.
Fault tolerance is a very important aspect of Distributed Split Multi-Link Trunking (DSMLT) technology. Should any switch, port, or link fail, the DSMLT technology will automatically redistribute traffic across the remaining links. Automatic redistribution is accomplished in less than half a second (typically less than 100 milliseconds), so no outage is noticed by end users. This high-speed recovery is required by many critical networks, where outages can cause loss of life or very large monetary losses. Combining Multi-Link Trunking (MLT), DMLT, SMLT, DSMLT and R-SMLT technologies creates networks that support the most critical applications.
Product support
DSMLT is supported on Avaya's Ethernet Routing Switch 1600, 5500, 8300, ERS 8600, MERS 8600, VSP-7000 and VSP-9000 products.
References
US 7173934 Lapuh, Roger & Yili Zhao "System, device, and method for improving communication network reliability using trunk splitting"; (SMLT) issued 2007-02-06
Further reading
Technical Brief Split Multi-Link Trunking Ethernet Routing Switch 8600
Desktop Connectivity
Using Distributed Multi-Link Trunking
Distributed multi-link trunking method and apparatus Google Patents
Distributed multi-link trunking method and apparatus Patent Genius
Distributed multi-link trunking method and apparatus Patent Storm
External links
Tolly Benchmarks -Retrieved 29 July 2011
See IEEE.org for info on 802.3ad standard -Retrieved 29 July 2011
Avaya
Communication circuits
Ethernet
Link protocols
Network topology
Nortel protocols
Bonding protocols
Nortel
Network architecture
Reliability engineering
Articles containing video clips | Multi-link trunking | Mathematics,Engineering | 2,276 |
22,775,177 | https://en.wikipedia.org/wiki/Graham%20Richards | (William) Graham Richards (born 1 October 1939) is a chemist and Emeritus Fellow of Brasenose College, Oxford. He served as head of the department of chemistry at the University of Oxford from 1997 to 2006.
Richards is a pioneer in the field of computer-aided molecular design, in particular its application to the pharmaceuticals industry. He was the founding scientist of Oxford Molecular Ltd. and introduced a novel model for the funding of research at Oxford University, which has been copied elsewhere. Richards has published more than 300 scientific papers, including 15 books.
Education
Graham Richards was born on 1 October 1939 in Hoylake, England, to Percy Richards and Gwendoline Julia Richards (née Evans). Both parents were of Welsh extraction. Richards was educated at Birkenhead School. He won a scholarship to Brasenose College, Oxford, starting his studies there in 1958.
He received his bachelor's degree in Chemistry with first class honours from the University of Oxford in 1961. He then studied the electronic spectroscopy of diatomic molecules with Richard F. Barrow, earning his Master of Arts and Doctor of Philosophy degrees from the University of Oxford in 1964.
Career and research
After his DPhil, he continued his spectroscopic work with fellowships in Oxford (ICI Research Fellowship, Junior Research Fellowship at Balliol College) and Paris, France (Centre de Mécanique Ondulatoire Appliquée).
Graham Richards soon returned to Oxford as a research fellow at Balliol College, Oxford (1964–1966). He was promoted to a lecturer at Oxford University (1966–1994), to reader (1994–1996), and to professor (1996–2007). He served as chairman of the chemistry department from 1997 to 2006. Richards celebrated his formal retirement from the University of Oxford on 18 May 2007. He is now an Emeritus fellow of Brasenose College.
Industry involvement
In the fourth year of his degree course, Richards's research project led him to use Oxford's Ferranti Mercury computer to solve integrals.
During a fellowship year in France at Centre de Mécanique Ondulatoire Appliquée, he was able to use more powerful computers.
Returning to Oxford, he worked on ab initio computations and applied computational techniques to solving quantum mechanical problems in theoretical chemistry, in particular studying spin-orbit coupling.
His influential paper "Third age of quantum chemistry" (1979) marked the development of computational techniques for theoretical analysis whose precision equaled or surpassed experimental results.
Richards saw the potential to apply computer techniques for examining the structure and properties of compounds in the area of pharmaceutical applications. He became a pioneer in the field of computer-aided molecular design. He was the first to produce coloured images modelling molecular structure graphically, and introduced many of the techniques now widely used in academia and industry.
In 1982, Richards became a founding member of the Molecular Graphics Society (now the Molecular Graphics and Modelling Society, MGMS). The society started the Journal of Molecular Graphics in 1983. Richards served as the editor-in-chief of the journal from 1984 to 1996. The journal's name changed to Journal of Molecular Graphics and Modelling in 1997.
In 1989 Richards was the scientific co-founder (with Tony Marchington, David Ricketts, James Hiddleston, and Anthony Rees) of Oxford Molecular Limited. The company developed software for modelling of small molecules and proteins, and drug design.
The company was possible in part because of economic and legal changes under the government of Margaret Thatcher that enabled British universities to become involved with venture capital and technology transfer. As Oxford Molecular Group, Ltd. (OMG) the company was floated on the London Stock Exchange in 1992, making the university £10 million. The company was worth £450 million at its peak but was eventually sold for £70 million. It was one of several companies that combined to form Accelrys in 2001.
Richards was instrumental in raising £64 million to fund a new laboratory for Oxford University through an innovative funding approach. £20 million worth of funding began with an "unusual collaboration" between Richards and David Norwood. Norwood then arranged for Beeson-Gregory to provide £20 million in exchange for half the University's equity share of any spin-out companies emanating from the Chemistry Department for 15 years. In 2003, Beeson-Gregory and Evolution Group merged, later creating a subsidiary, IP2IPO ("intellectual property to initial public offering").
Graham Richards became a non-executive director of IP2IPO in 2001, and non-executive chairman of IP2IPO in 2004.
Through this arrangement the Chemistry Department has contributed over £100 million to the University of Oxford.
Richards served as a director of ISIS Innovation Ltd., the University of Oxford's technology transfer company. It became Oxford University Innovation as of June 2016.
It has brought around 60 spin-out companies into existence.
The Financial Times has described the approach as "the way universities should be financed in the future".
Richards also introduced the use of distributed computing in pharmaceutical design. Started in 2000, his Screensaver Lifesaver project exploited idle time on more than 3.5 million personal computers in over 200 countries, whose owners agreed to be involved and downloaded the project's screensaver. Using idle time from these computers, the project's software created a virtual supercomputer that screened billions of compounds against protein targets, searching for possible drug treatments for cancer, anthrax and smallpox.
The project involved collaboration between Intel, United Devices, and the Centre for Computational Drug Discovery at the University of Oxford, headed by Richards and funded by the National Foundation for Cancer Research (NFCR).
Richards formed the spin-out company InhibOx Ltd. in 2001.
InhibOx applied cloud computing techniques to computational chemistry and drug discovery, and developed a searchable database of small-molecules called Scopius.
In 2002, Richards donated his shares, twenty-five per cent of the company, to the National Foundation for Cancer Research.
In 2017, InhibOx relaunched as Oxford Drug Design Ltd., with a new focus on antibiotic discovery.
Richards joined the Science Advisory Panel of Oxford Medical Diagnostics.
Richards is a non-executive director of IP Group plc, having also served as its chairman.
Selected books
Awards and honours
Richards is a council member of the Royal Society of Chemistry and of The Royal Institution,
a Fellow of the Royal Society, and was appointed Commander of the Order of the British Empire (CBE). The Times Higher Education Supplement (2006) considered Richards to be one of twelve academic "super-earners" in the United Kingdom.
The first Eureka issue of The Times (2010) included Richards in its list of the top 100 British scientists. Richards' work has been acknowledged through a number of more formal awards and honours, including the following:
2018, elected a Fellow of the Royal Society (FRS)
2018, Richard J. Bolte Sr. Award by the Science History Institute
2011, Fellow of the Learned Society of Wales
2010, Co-Vice-President of the Royal Society of Chemistry
2004, Award of the American Chemical Society for Computers in Chemical and Pharmaceutical Research
2001, Commander of the Order of the British Empire, Queen's Birthday Honours
1998, Mullard Award, Royal Society
1996, The Lord Lloyd of Kilgerran Award, Foundation for Science and Technology
1972, Marlow Medal, Royal Society of Chemistry
Personal
Richards married his first wife, Jessamy Kershaw, on 12 December 1970. She died of cancer in November 1988. On 5 October 1996, Richards married Mary Elizabeth Phillips, director of research planning at University College London. He has two sons and three stepchildren.
References
1939 births
Living people
People from Hoylake
People educated at Birkenhead School
Alumni of Brasenose College, Oxford
Fellows of Brasenose College, Oxford
English chemists
Theoretical chemists
Biotechnologists
Computational chemists
Fellows of the Royal Society
Fellows of the Learned Society of Wales | Graham Richards | Chemistry,Biology | 1,697 |
31,386,944 | https://en.wikipedia.org/wiki/Social%20trading | Social trading is a form of investing that allows investors to observe the trading behavior of their peers and expert traders. The primary objective is to follow their investment strategies using copy trading or mirror trading. Social trading requires little or no knowledge about financial markets.
History
One of the first social trading platforms was Collective2, which began offering a social trading functionality to retail traders as early as 2003 (preceding ZuluTrade by four years). In 2010, social trading started to achieve a greater degree of mainstream appeal with eToro, followed by Wikifolio in 2012. Europe-based NAGA, listed on the Frankfurt Stock Exchange since 2017, claims more than EUR 27 billion was traded on its platform in the second half of 2019. Some of the contemporary social trading platforms other than the ones mentioned already are Trading Motion, iSystems, and FX Junction, among others.
Research
MIT Computer Scientist and researcher Yaniv Altshuler described social trading networks as complex adaptive systems, and in his 2014 research on eToro's OpenBook, wrote that "Having the inherent ability to share ideas and information between each others, OpenBook's users are given a new source of information they can use in order to enhance their trading performance. As the users are not playing against each other but rather – against the market, this situation becomes a non zero-sum game, hence incentivizing the users to share as much information as possible." His paper concludes that "social trading provides much better opportunities for profiting compared with individual trading," but that users make "excellent but sometimes not optimal decisions in selecting experts when they can see others' choices."
A 2015 World Economic Forum report described social trading networks as disruptors, which "have emerged to provide low-cost, sophisticated alternatives to traditional wealth managers. These solutions cater to a broader customer base and empower customers to have more control of their wealth management," and "pose a tangible threat to the traditional practices of the wealth management industry".
Economist Nouriel Roubini's thinktank predicted in 2016 that "newer forms of investment, such as socially responsible investments and social trading will bring some of the largest industry growth in the coming years."
A 2017 St. John's University study found that 'leader' traders, or those with followers, are more susceptible to the disposition effect than investors that are not being followed by any other traders, with the authors suggesting the observation may be explained by "leaders feeling responsible towards their followers and an urge to not let them down, by fear of losing followers when admitting a bad investment decision and signaling confidence in their initial investment choice, or by an attempt of newly appointed leaders to manage their self-image."
Social trading may potentially also change how much risk investors take. A recent experimental study argues that merely providing information on the success of others may lead to a significant increase in risk taking. This increase in risk taking may even be larger when subjects are provided with the option to directly copy others.
Characteristics
Social trading is an alternative way of analyzing financial data by looking at what other traders are doing and comparing and copying their techniques and strategies. Prior to the advent of social trading, investors and traders were relying on fundamental or technical analysis to form their investment decisions. Using social trading investors and traders could integrate into their investment decision-process social indicators from trading data-feeds of other traders. Social trading platforms or networks can be considered a subcategory of social networking services.
Social trading allows traders to trade online with the help of others, and some have claimed it shortens the learning curve from novice to experienced trader. Traders can interact with others, watch others take trades, then duplicate their trades and learn what prompted the top performer to take a trade in the first place. By copying trades, traders can learn which strategies work and which do not. Social trading is used for speculation; in a moral context, speculative practices are viewed negatively and as something each individual should avoid, the view being that one should instead maintain a long-term horizon and avoid any type of short-term speculation.
Social media has permeated the trading world such that two main types of trading have evolved:
Traditional Trades
Single (or non-social) trade: Trader A places a normal trade by himself or herself; this can be manual or automated.
Social Trading
There are two main types of social trading:
Copy trade: Trader A places exactly the same trade as trader B's one single trade;
Mirror trade: Trader A automatically executes trader B's every single trade, i.e., trader A follows exactly trader B's trading activities.
Other variations offered on some platforms allow users to copy another trader's portfolio (copy portfolio), and follow a trader's dividends (copy dividends), where whenever a followed trader withdraws money from his or her account, a proportional amount of money will be withdrawn from the balance of their follower, in real time.
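A small sketch of the proportional bookkeeping such copying implies (the amounts and the simple balance-ratio rule are illustrative assumptions, not any specific platform's implementation):

```python
def copied_amount(leader_action: float, leader_balance: float,
                  follower_allocated: float) -> float:
    """Scale a leader's trade (or withdrawal) to a follower's allocation."""
    return leader_action * follower_allocated / leader_balance

# A follower who allocated 1,000 to copy a leader managing 10,000
# mirrors the leader's 500 trade or withdrawal with 50 of their own funds.
print(copied_amount(500.0, 10_000.0, 1_000.0))  # -> 50.0
```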
Key features
Information flow: Unencumbered access to information is important in financial markets and that makes the free exchange of information of interest to small scale as well as individual investors.
Cooperative trading: Social trading offers traders the opportunity to work together in trading teams which can trade the markets collaboratively, whether by pooling funds, dividing research or through sharing information.
Monetization: As with social networks in the broader sense, monetization strategies are not always clear. It is possible, however, that the long-term worth of such websites may come from the variety and depth of data about their users which their active communities are likely to generate.
Transparency: Social trading platforms reveal traders' performance stats, open and past positions, and market sentiment, giving members complete information to assess the credibility of the contributors they follow on the platform.
See also
Algorithmic trading
Trading strategy
References
Financial services
Social media | Social trading | Technology | 1,180 |
18,347,577 | https://en.wikipedia.org/wiki/Ghost%20imaging | Ghost imaging (also called "coincidence imaging", "two-photon imaging" or "correlated-photon imaging") is a technique that produces an image of an object by combining information from two light detectors: a conventional, multi-pixel detector that does not view the object, and a single-pixel (bucket) detector that does view the object. Two techniques have been demonstrated. A quantum method uses a source of pairs of entangled photons, each pair shared between the two detectors, while a classical method uses a pair of correlated coherent beams without exploiting entanglement. Both approaches may be understood within the framework of a single theory.
History
The first demonstration of ghost imaging, performed by T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko in 1995, was based on quantum correlations between entangled photon pairs. One of the photons of the pair strikes the object and then the bucket detector while the other follows a different path to a (multi-pixel) camera. The camera is constructed to only record pixels from entangled photon pairs that hit both the bucket detector and the camera's image plane (as opposed to entangled photon pairs where one hits the image plane but the other does not reach the bucket detector, which are not registered). Then a large number of registered entangled pairs gradually forms a full image.
Later experiments indicated that the correlations between the light beam that hits the camera and the beam that hits the object may be explained by purely classical physics. If quantum correlations are present, the signal-to-noise ratio of the reconstructed image can be improved. In 2009 'pseudothermal ghost imaging' and 'ghost diffraction' were demonstrated by implementing the 'computational ghost-imaging' scheme, which relaxed the need to invoke quantum-correlation arguments for the pseudothermal source case.
Recently, it was shown that the principles of compressed sensing can be directly utilized to reduce the number of measurements required for image reconstruction in ghost imaging. This technique allows an N-pixel image to be produced with far fewer than N measurements and may have applications in LIDAR and microscopy.
Advances in military research
The U.S. Army Research Laboratory (ARL) developed remote ghost imaging in 2007 with the goal of applying advanced technology to the ground, satellites and unmanned aerial vehicles. Ronald E. Meyers and Keith S. Deacon of ARL received a patent in 2013 for their quantum imaging technology called "System and Method for Image Enhancement and Improvement". The researchers received the Army Research and Development Achievement Award for outstanding research in 2009 with the first ghost image of a remote object.
Mechanism
A simple example clarifies the basic principle of ghost imaging. Imagine two transparent boxes: one that is empty and one that has an object within it. The back wall of the empty box contains a grid of many pixels (i.e. a camera), while the back wall of the box with the object is a large single-pixel (a bucket detector). Next, shine laser light into a beamsplitter and reflect the two resulting beams such that each passes through the same part of its respective box at the same time. For example, while the first beam passes through the empty box to hit the pixel in the top-left corner at the back of the box, the second beam passes through the filled box to hit the top-left corner of the bucket detector.
Now imagine moving the laser beam around in order to hit each of the pixels at the back of the empty box, meanwhile moving the corresponding beam around the box with the object. While the first light beam will always hit a pixel at the back of the empty box, the second light beam will sometimes be blocked by the object and will not reach the bucket detector. A processor receiving a signal from both light detectors only records a pixel of an image when light hits both detectors at the same time. In this way, a silhouette image can be constructed, even though the light going towards the multi-pixel camera did not touch the object.
In this simple example, the two boxes are illuminated one pixel at a time. However, using quantum correlation between photons from the two beams, the correct image can also be recorded using complex light distributions. Also, the correct image can be recorded using only the single beam passing through a computer-controlled light modulator to a single-pixel detector.
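The correlation reconstruction at the heart of computational ghost imaging can be sketched numerically. The following minimal simulation is illustrative only (idealized, noise-free, with arbitrary sizes and names): random binary patterns stand in for the structured illumination, a single number per pattern plays the role of the bucket detector, and the image is recovered by correlating bucket fluctuations with the patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                          # reconstructed image is n x n pixels
obj = np.zeros((n, n))          # opaque "T" inside an otherwise transparent box
obj[4, 4:12] = 1.0
obj[4:12, 7] = 1.0
transmission = 1.0 - obj        # light passes only where the object is absent

m = 20000                       # number of illumination patterns
patterns = rng.random((m, n, n)) < 0.5   # random binary "speckle" patterns

# Bucket detector: one number per pattern, the total light passing the object.
bucket = (patterns * transmission).reshape(m, -1).sum(axis=1)

# Ghost image: correlate bucket fluctuations with the patterns themselves.
ghost = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)

# Transparent pixels correlate strongly; the object appears as a silhouette.
print((ghost < ghost.mean()).astype(int))
```

The reconstruction works because a pixel blocked by the object contributes nothing to the bucket signal, so its pattern values are uncorrelated with the bucket fluctuations, while transparent pixels correlate positively.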
Applications
Bessel beam illumination
ARL scientists developed a diffraction-free light beam, also called Bessel beam illumination. In a paper published February 10, 2012, the team outlined their feasibility study of virtual ghost imaging using the Bessel beam, to address adverse conditions with limited visibility, such as cloudy water, jungle foliage, or around corners. Bessel beams produce concentric-circle patterns. When the beam is blocked or obscured along its trajectory, the original pattern eventually reforms to create a clear picture.
Imaging with very low light levels
The spontaneous parametric down-conversion (SPDC) process provides a convenient source of entangled-photon pairs with strong spatial correlations. Such heralded single photons can be used to achieve a high signal-to-noise ratio, virtually eliminating background counts from the recorded images. By applying principles of image compression and associated image reconstruction, high-quality images of objects can be formed from raw data with an average of fewer than one detected photon per image pixel.
Photon-sparse microscopy with infrared light
Infrared cameras that combine low-noise with single-photon sensitivity are not readily available. Infrared illumination of a vulnerable target with sparse photons can be combined with a camera counting visible photons through the use of ghost imaging with correlated photons that have significantly different wavelengths, generated by a highly non-degenerate SPDC process. Infrared photons with a wavelength of 1550 nm illuminate the target and are detected by an InGaAs/InP single-photon avalanche diode. The image data are recorded from the coincidently detected, position-correlated, visible photons with a wavelength of 460 nm using a highly efficient, low-noise, photon-counting camera. Light-sensitive biological samples can thereby be imaged.
Remote sensing
Ghost imaging is being considered for application in remote-sensing systems as a possible competitor with imaging laser radars (LIDAR). A theoretical performance comparison between a pulsed, computational ghost imager and a pulsed, floodlight-illumination imaging laser radar identified scenarios in which a reflective ghost-imaging system has advantages.
X-ray and electron ghost imaging
Ghost-imaging has been demonstrated for a variety of photon science applications. A ghost-imaging experiment for hard x-rays was recently achieved using data obtained at the European Synchrotron. Here, speckled pulses of x-rays from individual electron synchrotron bunches were used to generate a ghost-image basis, enabling proof-of-concept for experimental x-ray ghost imaging. At the same time that this experiment was reported, a Fourier-space variant of x-ray ghost imaging was published. Ghost imaging has also been proposed for X-ray FEL applications. Classical ghost imaging with compressive sensing has also been demonstrated with ultra-relativistic electrons.
References
External links
Quantum camera snaps objects it cannot 'see' by Belle Dume, New Scientist, 2 May 2008. Accessed July 2008
Air Force Demonstrates 'Ghost Imaging' By Sharon Weinberger, Wired, 3 June 2008. Accessed July 2008
Army scientists' 19 patents lead to quantum imaging advances Army Research Laboratory News DECEMBER 19, 2013. Accessed Feb 2014
Quantum mechanics
Photographic techniques | Ghost imaging | Physics | 1,542 |
55,127,327 | https://en.wikipedia.org/wiki/Diplomitoporus%20microsporus | Diplomitoporus microsporus is a species of poroid fungus in the family Polyporaceae. Found in the Andes region of Venezuela, it was described as a new species in 2010 by mycologists Teresa Iturriaga and Leif Ryvarden.
References
Fungi described in 2010
Fungi of Venezuela
Polyporaceae
Taxa named by Leif Ryvarden
Fungus species | Diplomitoporus microsporus | Biology | 79 |
2,964,744 | https://en.wikipedia.org/wiki/Slype | The term slype is a variant of slip in the sense of a narrow passage; in architecture, the name for the covered passage usually found in monasteries or cathedrals between the transept and the chapter house, as at St Andrews, Winchester, Gloucester, Exeter, Durham, St. Albans, Sherborne and Christ Church Cathedral, Oxford. At St. Mary's Abbey, Dublin, it is, with the chapter house, one of only two remaining rooms.
References
Rooms
Church architecture | Slype | Engineering | 100 |
972,005 | https://en.wikipedia.org/wiki/Autofrettage | Autofrettage is a work-hardening process in which a pressure vessel (thick walled) is subjected to enormous pressure, causing internal portions of the part to yield plastically, resulting in internal compressive residual stresses once the pressure is released. The goal of autofrettage is to increase the pressure-carrying capacity of the final product. Inducing residual compressive stresses into materials can also increase their resistance to stress corrosion cracking; that is, non-mechanically assisted cracking that occurs when a material is placed in a corrosive environment in the presence of tensile stress. The technique is commonly used in manufacture of high-pressure pump cylinders, warship and gun barrels, and fuel injection systems for diesel engines. Due to work-hardening process it also enhances wear life of the barrel marginally. While autofrettage will induce some work hardening, that is not the primary mechanism of strengthening.
The start point is a single steel tube of internal diameter slightly less than the desired calibre. The tube is subjected to internal pressure of sufficient magnitude to enlarge the bore, and in the process the inner layers of the metal are stretched in tension beyond their elastic limit. This means that the inner layers have been stretched to a point where the steel is no longer able to return to its original shape once the internal pressure has been removed. Although the outer layers of the tube are also stretched, the degree of internal pressure applied during the process is such that they are not stretched beyond their elastic limit. This is possible because the stress distribution through the walls of the tube is non-uniform: its maximum value occurs in the metal adjacent to the source of pressure, decreasing markedly towards the outer layers of the tube. The strain is proportional to the stress applied within the elastic limit; therefore the expansion at the outer layers is less than at the bore.

Because the outer layers remain elastic, they attempt to return to their original shape; however, they are prevented from doing so completely by the new permanently stretched inner layers. The effect is that the inner layers of the metal are put under compression by the outer layers in much the same way as though an outer layer of metal had been shrunk on, as with a built-up gun. This can be better understood by treating the thick-walled tube as a multilayer tube.

The next step is to subject the compressively strained inner layers to a low-temperature treatment (LTT), which results in the elastic limit being raised to at least the autofrettage pressure employed in the first stage of the process. Finally, the elasticity of the barrel can be tested by applying internal pressure once more, this time taking care to ensure that the inner layers are not stretched beyond their new elastic limit. The end result is an inner surface of the gun barrel with a residual compressive stress able to counterbalance the tensile stress that would be induced when the gun is discharged. In addition, the material has a higher tensile strength due to work hardening.
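The non-uniform stress distribution described above follows from the Lamé equations for a thick-walled cylinder under internal pressure. The sketch below is a simplified elastic calculation with made-up dimensions and pressure (not taken from any particular design); it shows the hoop stress peaking at the bore and falling toward the outer wall, which is why only the inner layers are driven past yield.

```python
import numpy as np

def lame_hoop_stress(r, a, b, p):
    """Elastic hoop stress (Pa) at radius r in a thick-walled cylinder of
    inner radius a and outer radius b under internal pressure p (Lame)."""
    return p * a**2 / (b**2 - a**2) * (1.0 + b**2 / r**2)

a, b = 0.05, 0.10     # inner/outer radius in metres (illustrative)
p = 400e6             # autofrettage pressure in Pa (illustrative)

for r in np.linspace(a, b, 6):
    sigma = lame_hoop_stress(r, a, b, p)
    print(f"r = {r * 1000:5.1f} mm   hoop stress = {sigma / 1e6:6.1f} MPa")
```

In a full autofrettage calculation, this elastic solution would be combined with a yield criterion to locate the elastic–plastic boundary and then to compute the residual stresses left after unloading.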
Early in the history of artillery, people observed that, after firing a small number of rounds, the bore of a new gun slightly enlarges and hardens. Historically, the first type of autofrettage avant la lettre was the mandrelling of bronze gun barrels, invented and patented in 1869 by Samuel B. Dean of the South Boston Iron Company. It found no use on the American continent and was copied without a license by Franz von Uchatius in the mid-1870s. It found some use in several European countries lacking a steel industry, but was quickly displaced by cast steel everywhere except Austro-Hungary, which stuck to the obsolete technology until WWI and therefore had its artillery handicapped.
The problem of strengthening steel gun barrels using the same principle was tackled by French colonial artillery colonel Louis Frédéric Gustave Jacob, who suggested in 1907 to pressurize them hydraulically and coined the term "autofrettage". In 1913, Schneider-Creusot made a 14 cm L/50 naval gun by such a method and applied for a patent. However, implementing such a technique on an industrial scale required numerical methods to approximate the solutions of transcendental equations of plastic deformation, which were developed in France during WWI by math professor Maurice d'Ocagne and Schneider engineer Louis Potin.
In modern practice, a slightly oversized die is pushed slowly through the barrel by a hydraulically driven ram. The amount of initial underbore and oversize of the die are calculated to strain the material around the bore past its elastic limit into plastic deformation. A residual compressive stress remains on the barrel's inner surface, even after final honing and rifling.
The technique has been applied to the expansion of tubular components down hole in oil and gas wells. The method has been patented by the Norwegian oil service company, Meta, which uses it to connect concentric tubular components with sealing and strength properties outlined above.
The term autofrettage is also used to describe a step in manufacturing of composite overwrapped pressure vessel (COPV) where the liner is expanded (by plastic deformation), inside the composite overwrap.
See also
Shot peening, which also induces compressive residual stresses
Built-up gun, an older method for strengthening gun barrels
References
External links
White Paper Autofrettage
Metalworking
Firearm construction | Autofrettage | Engineering | 1,078 |
16,258,350 | https://en.wikipedia.org/wiki/FRAS1 | Extracellular matrix protein FRAS1 is a protein that in humans is encoded by the FRAS1 (Fraser syndrome 1) gene. This gene encodes an extracellular matrix protein that appears to function in the regulation of epidermal-basement membrane adhesion and organogenesis during development.
Metastatic prostate cancer
A single-nucleotide polymorphism in the FRAS1 promoter region is associated with metastatic prostate cancer. The promoter region is directly related to the NF-κB pathway and has been shown to be associated with lethal prostate cancer.
FRAS1-related extracellular matrix protein 1 (FREM1) is directly related to congenital diaphragmatic hernia in developing fetuses. Decreased expression of FREM1 may be linked with disruptions in the growth of diaphragm cells. FRAS1 and FREM1 are among the proteins that primarily interact during embryonic development, and it has been shown that a decrease in these two proteins leads to an increased incidence of congenital diaphragmatic hernia in both humans and mice.
Clinical significance
Mutations in this gene have been observed to cause Fraser syndrome.
See also
Fraser syndrome
References
Further reading
Extracellular matrix proteins | FRAS1 | Chemistry,Biology | 237 |
73,722,146 | https://en.wikipedia.org/wiki/Exidia%20saccharina | Exidia saccharina is a species of fungus in the family Auriculariaceae. Basidiocarps (fruit bodies) are gelatinous, reddish brown, button-shaped at first then often coalescing and becoming irregularly effused. In the UK, it has the recommended English name of pine jelly. It grows on dead branches of conifers and is known from Europe, North America, and northern Asia.
Taxonomy
The species was first described in 1805 from Germany as Tremella spiculosa var. saccharina by mycologists Johannes Baptista von Albertini and Lewis David de Schweinitz and raised to species level in Exidia by Elias Magnus Fries in 1822. Recent molecular research, based on cladistic analysis of DNA sequences, has shown that the species is distinct. Exidia subrepanda, originally described from Finland on spruce (Picea), is considered a synonym.
Description
The basidiocarps of E. saccharina are orange-brown, gelatinous, button-shaped at first but sometimes coalescing to form effused, irregular, often ridged masses up to 10 cm across. They become leathery, dark, and shriveled when dry.
Microscopic characters
The translucent hyphae are 0.5–2.5 μm in diameter, monomitic, branched, thin-walled, and form clamp connections. Hyphae frequently form anastomoses. Basidia are typically 13 to 15.5 μm long, elliptical, and consist of four longitudinally septate cells. Basidiospores are allantoid (sausage shaped), 10 to 14 by 3 to 4.5 μm, with thin, smooth walls.
Similar species
Fruit bodies of Exidia subsaccharina (known from France and England) also occur on conifers and are not distinguishable in the field, but have larger basidia and spores (12.5 to 17.5 by 4 to 5.5 μm).
Distribution and habitat
Exidia saccharina is most common in Scandinavia, but can also be found elsewhere in Europe, in northern parts of Asia, and in North America.
Exidia saccharina grows only on dead conifers, including species of Abies, Larix, Picea, and most commonly Pinus. It appears to grow preferentially on Pinus strobus. In its anamorphic (asexual) state, it has been found in association with bark beetles.
Conservation status
Exidia saccharina is currently listed on the register of protected and endangered fungi of Poland.
References
Fungi described in 1805
Fungi of Europe
Auriculariales
Fungus species
Taxa named by Lewis David de Schweinitz | Exidia saccharina | Biology | 565 |
53,883,295 | https://en.wikipedia.org/wiki/Merisa | Merisa, merissa or marissa () is a traditional fermented beverage popular in South Sudan.
It is made by Sudanese women as a source of income. Merisa is made by brewing dates, millet and sorghum. The brewing process has been described as complex by Western beer making standards with over a dozen steps. Merisa has an 8-10 hour fermentation process and has an alcohol content of up to 6%.
It is illegal to drink or sell merisa in northern Sudan under Sharia law, under penalty of 40 lashes, fines, and imprisonment.
Baboons in Sudan are known to drink merisa when offered.
References
South Sudanese cuisine
Sudanese cuisine
Fermented drinks
Alcoholic drinks | Merisa | Biology | 145 |
23,754,958 | https://en.wikipedia.org/wiki/Lucio%20Lombardo-Radice | Lucio Lombardo-Radice (Catania, 10 July 1916; Brussels, 21 November 1982) was an Italian mathematician. A student of Gaetano Scorza, Lombardo-Radice contributed to finite geometry and geometric combinatorics together with Guido Zappa and Beniamino Segre, and wrote important works concerning the Non-Desarguesian plane. He was also a leading member of the Italian Communist Party and a member of its central committee. He had a long professional and political friendship with German natural scientist and dissident Robert Havemann.
Lombardo-Radice's parents were Giuseppe Lombardo Radice and Gemma Harasim. His children included the writer and actor Giovanni Lombardo Radice.
The Istituto Tecnico Statale Commerciale "Lucio Lombardo Radice" per Programmatori, a school in Rome, Italy, founded in 1982 as the XXV Istituto Tecnico Commerciale per Programmatori, was renamed after Lombardo-Radice in 1992. It is now named the Istituto di Istruzione Superiore Lombardo Radice.
References
External links
1916 births
1982 deaths
20th-century Italian mathematicians
Italian communists
Combinatorialists
People from Catania
Mathematicians from Sicily | Lucio Lombardo-Radice | Mathematics | 255 |
1,113,185 | https://en.wikipedia.org/wiki/Wolstenholme%27s%20theorem | In mathematics, Wolstenholme's theorem states that for a prime number p ≥ 5, the congruence
holds, where the parentheses denote a binomial coefficient. For example, with p = 7, this says that 1716 is one more than a multiple of 343. The theorem was first proved by Joseph Wolstenholme in 1862. In 1819, Charles Babbage showed the same congruence modulo p2, which holds for p ≥ 3. An equivalent formulation is the congruence
for p ≥ 5, which is due to Wilhelm Ljunggren (and, in the special case b = 1, to J. W. L. Glaisher) and is inspired by Lucas's theorem.
No known composite numbers satisfy Wolstenholme's theorem and it is conjectured that there are none (see below). A prime that satisfies the congruence modulo p^4 is called a Wolstenholme prime (see below).
As Wolstenholme himself established, his theorem can also be expressed as a pair of congruences for (generalized) harmonic numbers:

\[ 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{p-1} \equiv 0 \pmod{p^2}, \qquad 1 + \frac{1}{2^2} + \frac{1}{3^2} + \dots + \frac{1}{(p-1)^2} \equiv 0 \pmod{p}, \]

since

\[ \binom{2p-1}{p-1} = \prod_{k=1}^{p-1} \frac{p+k}{k} \equiv 1 + p \sum_{k=1}^{p-1} \frac{1}{k} + p^2 \sum_{1 \le k < l \le p-1} \frac{1}{kl} \pmod{p^3}. \]

(Congruences with fractions make sense, provided that the denominators are coprime to the modulus.)
For example, with p = 7, the first of these says that the numerator of 49/20 is a multiple of 49, while the second says the numerator of 5369/3600 is a multiple of 7.
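Both numerator claims are easy to check with exact rational arithmetic. The following sketch (illustrative, not part of the original presentation) verifies the pair of congruences for the first few primes p ≥ 5:

```python
from fractions import Fraction

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in [q for q in range(5, 40) if is_prime(q)]:
    h1 = sum(Fraction(1, k) for k in range(1, p))      # H_{p-1}
    h2 = sum(Fraction(1, k**2) for k in range(1, p))   # generalized harmonic number
    assert h1.numerator % p**2 == 0   # numerator divisible by p^2
    assert h2.numerator % p == 0      # numerator divisible by p
    print(p, "ok")
```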
Wolstenholme primes
A prime p is called a Wolstenholme prime iff the following condition holds:

\[ \binom{2p-1}{p-1} \equiv 1 \pmod{p^4}. \]

If p is a Wolstenholme prime, then Glaisher's theorem holds modulo p^4. The only known Wolstenholme primes so far are 16843 and 2124679; any other Wolstenholme prime must be greater than 10^11. This result is consistent with the heuristic argument that the residue modulo p^4 is a pseudo-random multiple of p^3. This heuristic predicts that the number of Wolstenholme primes between K and N is roughly ln ln N − ln ln K. The Wolstenholme condition has been checked up to 10^11, and the heuristic says that there should be roughly one Wolstenholme prime between 10^11 and 10^24. A similar heuristic predicts that there are no "doubly Wolstenholme" primes, for which the congruence would hold modulo p^5.
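The defining condition can be tested directly, although brute force is far too slow to reach the search bounds quoted above. This sketch (illustrative) confirms that an ordinary prime such as 13 satisfies the congruence only modulo p^3, while 16843 also satisfies it modulo p^4:

```python
from math import comb

def wolstenholme_residue(p, power):
    """Residue of C(2p-1, p-1) - 1 modulo p**power."""
    return (comb(2 * p - 1, p - 1) - 1) % p**power

assert wolstenholme_residue(13, 3) == 0     # Wolstenholme's theorem
assert wolstenholme_residue(13, 4) != 0     # 13 is not a Wolstenholme prime
assert wolstenholme_residue(16843, 4) == 0  # smallest known Wolstenholme prime
```

The last check involves an integer of roughly ten thousand digits, which Python's arbitrary-precision arithmetic handles without difficulty.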
A proof of the theorem
There is more than one way to prove Wolstenholme's theorem. Here is a proof that directly establishes Glaisher's version using both combinatorics and algebra.
For the moment let p be any prime, and let a and b be any non-negative integers. Then a set A with ap elements can be divided into a rings of length p, and the rings can be rotated separately. Thus, the a-fold direct sum of the cyclic group of order p acts on the set A, and by extension it acts on the set of subsets of size bp. Every orbit of this group action has p^k elements, where k is the number of incomplete rings, i.e., the number of rings that only partly intersect a subset B in the orbit. There are orbits of size 1 and there are no orbits of size p. Thus we first obtain Babbage's theorem

\[ \binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^2}. \]
Examining the orbits of size p^2, we also obtain

\[ \binom{ap}{bp} \equiv \binom{a}{b} + \binom{a}{2}\binom{a-2}{b-1}\left( \binom{2p}{p} - 2 \right) \pmod{p^3}. \]
Among other consequences, this equation tells us that the case a = 2 and b = 1 implies the general case of the second form of Wolstenholme's theorem.
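Ljunggren's general form of the congruence can likewise be spot-checked numerically for small parameters, as in this illustrative sketch:

```python
from math import comb

for p in (5, 7, 11, 13):
    for a in range(1, 6):
        for b in range(a + 1):
            # C(ap, bp) and C(a, b) agree modulo p^3 (Ljunggren's congruence)
            assert (comb(a * p, b * p) - comb(a, b)) % p**3 == 0
print("congruence holds for all small a, b, p tested")
```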
Switching from combinatorics to algebra, both sides of this congruence are polynomials in a for each fixed value of b. The congruence therefore holds when a is any integer, positive or negative, provided that b is a fixed positive integer. In particular, if a = −1 and b = 1, the congruence becomes

\[ \binom{-p}{p} \equiv -1 + \left( \binom{2p}{p} - 2 \right) \pmod{p^3}. \]
This congruence becomes an equation for \(\binom{2p}{p}\) using the relation

\[ \binom{-p}{p} = \frac{(-1)^p}{2} \binom{2p}{p}. \]
When p is odd, the relation is \(\binom{-p}{p} = -\tfrac{1}{2}\binom{2p}{p}\), and substituting it into the previous congruence gives

\[ 3 \binom{2p}{p} \equiv 6 \pmod{p^3}. \]
When p ≠ 3, we can divide both sides by 3 to complete the argument.
A similar derivation modulo p^4 establishes that

\[ \binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^4} \]

for all positive a and b if and only if it holds when a = 2 and b = 1, i.e., if and only if p is a Wolstenholme prime.
The converse as a conjecture
It is conjectured that if

\[ \binom{2n}{n} \equiv 2 \pmod{n^k} \]

when k = 3, then n is prime. The conjecture can be understood by considering k = 1 and 2 as well as 3. When k = 1, Babbage's theorem implies that it holds for n = p^2 for p an odd prime, while Wolstenholme's theorem implies that it holds for n = p^3 for p > 3, and it holds for n = p^4 if p is a Wolstenholme prime. When k = 2, it holds for n = p^2 if p is a Wolstenholme prime. The three numbers 4 = 2^2, 8 = 2^3, and 27 = 3^3 do not satisfy the congruence with k = 1, but every other prime square and prime cube does. Only 5 other composite values (neither prime squares nor prime cubes) of n are known to satisfy the congruence with k = 1; they are called Wolstenholme pseudoprimes:
27173, 2001341, 16024189487, 80478114820849201, 20378551049298456998947681, ...
The first three are not prime powers; the last two are 16843^4 and 2124679^4, where 16843 and 2124679 are Wolstenholme primes. Moreover, with the exception of 16843^2 and 2124679^2, no composites are known to satisfy the congruence with k = 2, much less k = 3. Thus the conjecture is considered likely because Wolstenholme's congruence seems over-constrained and artificial for composite numbers. Moreover, if the congruence does hold for some particular n other than a prime or prime power, and some particular k, it does not imply that it also holds modulo higher powers of n.
The number of Wolstenholme pseudoprimes up to x is , so the sum of the reciprocals of those numbers converges. The constant 499712 follows from the existence of only three Wolstenholme pseudoprimes up to 10^12. The number of Wolstenholme pseudoprimes up to 10^12 would have to be at least 7 if the sum of their reciprocals diverged, and since there are only 3 of them in this range, the counting function of these pseudoprimes is at most for some efficiently computable constant C; we can take C as 499712. The constant in the big O notation is also effectively computable in .
Generalizations
Leudesdorf has proved that for a positive integer n coprime to 6, the following congruence holds:

\[ \sum_{\substack{i=1 \\ \gcd(i, n) = 1}}^{n-1} \frac{1}{i} \equiv 0 \pmod{n^2}. \]
In 1900, Glaisher showed further that, for prime p > 3,

\[ \binom{2p-1}{p-1} \equiv 1 - \frac{2}{3} p^3 B_{p-3} \pmod{p^4}, \]

where B_n is the nth Bernoulli number.
See also
Fermat's little theorem
Wilson's theorem
Wieferich prime
Wilson prime
Wall–Sun–Sun prime
List of special classes of prime numbers
Table of congruences
Notes
References
R. Mestrovic, Wolstenholme's theorem: Its Generalizations and Extensions in the last hundred and fifty years (1862—2012).
External links
The Prime Glossary: Wolstenholme prime
Status of the search for Wolstenholme primes
Classes of prime numbers
Factorial and binomial topics
Articles containing proofs
Theorems about prime numbers | Wolstenholme's theorem | Mathematics | 1,657 |
22,201,177 | https://en.wikipedia.org/wiki/Chlorinated%20polycyclic%20aromatic%20hydrocarbon | Chlorinated polycyclic aromatic hydrocarbons (Cl-PAHs) are a group of compounds comprising polycyclic aromatic hydrocarbons with two or more aromatic rings and one or more chlorine atoms attached to the ring system. Cl-PAHs can be divided into two groups: chloro-substituted PAHs, which have one or more hydrogen atoms substituted by a chlorine atom, and chloro-added Cl-PAHs, which have two or more chlorine atoms added to the molecule. They are products of incomplete combustion of organic materials. They have many congeners, and the occurrences and toxicities of the congeners differ. Cl-PAHs are hydrophobic compounds and their persistence within ecosystems is due to their low water solubility. They are structurally similar to other halogenated hydrocarbons such as polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and polychlorinated biphenyls (PCBs). Cl-PAHs in the environment are strongly susceptible to the effects of gas/particle partitioning, seasonal sources, and climatic conditions.
Sources
Chlorinated polycyclic aromatic hydrocarbons are generated by combustion of organic compounds. Cl-PAHs enter the environment from a multiplicity of sources and tend to persist in soil and in particulate matter in air. Environmental data and emission sources analysis for Cl-PAHs reveal that the dominant process of generation is by reaction of PAHs with chlorine in pyrosynthesis. Cl-PAHs have commonly been detected in tap water, fly ash from an incineration plant for radioactive waste, emissions from coal combustion and municipal waste incineration, automobile exhaust, snow, and urban air. They have also been detected in electronic wastes, workshop-floor dust, vegetation, and surface soil collected from the vicinity of an electronic waste (e-waste) recycling facility and in surface soil from a chemical industrial complex (comprising a coke-oven plant, a coal-fired power plant, and a chlor-alkali plant), and agricultural areas in central and eastern China. In addition, the combustion of polyvinylchloride and plastic wrap made from polyvinylidene chloride result in the production of Cl-PAHs, suggesting that combustion of organic materials including chlorine is a possible source of environmental pollution.
A specific class of Cl-PAHs, polychlorinated naphthalenes (PCNs), are persistent, bioaccumulative, and toxic contaminants that have been reported to occur in a wide variety of environmental and biological matrixes. Cl-PAHs with three to five rings have been reported to occur in air from road tunnels, sediment, snow, and kraft pulp mills.
Recently, the occurrence of particulate Cl-PAHs has been investigated. Results have shown that particulate Cl-PAH concentrations detected in urban air tended to be high in colder seasons and low in warmer seasons. This study also determined through compositional analysis that relatively low-molecular-weight Cl-PAHs dominated in warmer seasons and high-molecular-weight Cl-PAHs dominated in colder seasons.
Toxicity
Because some Cl-PAHs have structural similarities to dioxins, they are suspected of having similar toxicities. These types of compounds are known to be carcinogenic, mutagenic, and teratogenic. Toxicological studies have shown that some Cl-PAHs possess greater mutagenicity, aryl-hydrocarbon receptor activity, and dioxin-like toxicity than the corresponding parent PAHs.
The relative potency of three-ring Cl-PAHs was found to increase with increasing degree of chlorination. However, the relative potencies of the most toxic Cl-PAHs assessed up to now have been found to be 100,000-fold lower than the relative potency of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Even though Cl-PAHs are not as toxic as TCDD, it has been determined using recombinant bacterial cells that the overall toxicity of exposure to Cl-PAHs, based on AhR activity, was approximately 30-50 times higher than that of dioxins. Cl-PAHs demonstrate a high enough toxicity to be a potential health risk to human populations that come into contact with them.
DNA interaction
One of the well-established mechanisms by which chlorinated polycyclic aromatic hydrocarbons can exert their toxic effects is via the function of the aryl hydrocarbon receptor (AhR). The AhR-mediated activities of Cl-PAHs have been determined by using yeast assay systems. The AhR is a cytosolic, ligand-activated transcription factor. Cl-PAHs have the ability to bind to and activate the AhR. The biological pathway involves translocation of the activated AhR to the nucleus. In the nucleus, the AhR binds with the AhR nuclear translocator (ARNT) protein to form a heterodimer. This process leads to transcriptional modulation of genes, causing adverse changes in cellular processes and function.
Several Cl-PAHs have been determined to be AhR-active. One such Cl-PAH, 6-chlorochrysene, has been shown to have a high affinity for the Ah receptor and to be a potent AHH inducer. Therefore, Cl-PAHs may be toxic to humans, and it is important to better understand their behavior in the environment.
Several Cl-PAHs have also been found to exhibit mutagenic activity toward Salmonella typhimurium in the Ames assay.
References
Chloroarenes
Incineration | Chlorinated polycyclic aromatic hydrocarbon | Chemistry,Engineering | 1,202 |
258,986 | https://en.wikipedia.org/wiki/Ergodic%20theory | Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behavior of time averages of various functions along trajectories of dynamical systems. The notion of deterministic dynamical systems assumes that the equations determining the dynamics do not contain any random perturbations, noise, etc. Thus, the statistics with which we are concerned are properties of the dynamics.
Ergodic theory, like probability theory, is based on general notions of measure theory. Its initial development was motivated by problems of statistical physics.
A central concern of ergodic theory is the behavior of a dynamical system when it is allowed to run for a long time. The first result in this direction is the Poincaré recurrence theorem, which claims that almost all points in any subset of the phase space eventually revisit the set. Systems for which the Poincaré recurrence theorem holds are conservative systems; thus all ergodic systems are conservative.
More precise information is provided by various ergodic theorems which assert that, under certain conditions, the time average of a function along the trajectories exists almost everywhere and is related to the space average. Two of the most important theorems are those of Birkhoff (1931) and von Neumann which assert the existence of a time average along each trajectory. For the special class of ergodic systems, this time average is the same for almost all initial points: statistically speaking, the system that evolves for a long time "forgets" its initial state. Stronger properties, such as mixing and equidistribution, have also been extensively studied.
The problem of metric classification of systems is another important part of the abstract ergodic theory. An outstanding role in ergodic theory and its applications to stochastic processes is played by the various notions of entropy for dynamical systems.
The concepts of ergodicity and the ergodic hypothesis are central to applications of ergodic theory. The underlying idea is that for certain systems the time average of their properties is equal to the average over the entire space. Applications of ergodic theory to other parts of mathematics usually involve establishing ergodicity properties for systems of special kind. In geometry, methods of ergodic theory have been used to study the geodesic flow on Riemannian manifolds, starting with the results of Eberhard Hopf for Riemann surfaces of negative curvature. Markov chains form a common context for applications in probability theory. Ergodic theory has fruitful connections with harmonic analysis, Lie theory (representation theory, lattices in algebraic groups), and number theory (the theory of diophantine approximations, L-functions).
Ergodic transformations
Ergodic theory is often concerned with ergodic transformations. The intuition behind such transformations, which act on a given set, is that they do a thorough job "stirring" the elements of that set. E.g. if the set is a quantity of hot oatmeal in a bowl, and if a spoonful of syrup is dropped into the bowl, then iterations of the inverse of an ergodic transformation of the oatmeal will not allow the syrup to remain in a local subregion of the oatmeal, but will distribute the syrup evenly throughout. At the same time, these iterations will not compress or dilate any portion of the oatmeal: they preserve the measure that is density.
The formal definition is as follows:
Let T : X → X be a measure-preserving transformation on a measure space (X, Σ, μ), with μ(X) = 1. Then T is ergodic if for every E in Σ with μ(T⁻¹(E) Δ E) = 0 (that is, E is invariant), either μ(E) = 0 or μ(E) = 1.
The operator Δ here is the symmetric difference of sets, equivalent to the exclusive-or operation with respect to set membership. The condition that the symmetric difference be measure zero is called being essentially invariant.
Examples
An irrational rotation of the circle R/Z, T: x → x + θ, where θ is irrational, is ergodic. This transformation has even stronger properties of unique ergodicity, minimality, and equidistribution. By contrast, if θ = p/q is rational (in lowest terms) then T is periodic, with period q, and thus cannot be ergodic: for any interval I of length a, 0 < a < 1/q, its orbit under T (that is, the union of I, T(I), ..., Tq−1(I), which contains the image of I under any number of applications of T) is a T-invariant mod 0 set that is a union of q intervals of length a, hence it has measure qa strictly between 0 and 1.
Let G be a compact abelian group, μ the normalized Haar measure, and T a group automorphism of G. Let G* be the Pontryagin dual group, consisting of the continuous characters of G, and T* be the corresponding adjoint automorphism of G*. The automorphism T is ergodic if and only if the equality (T*)n(χ) = χ is possible only when n = 0 or χ is the trivial character of G. In particular, if G is the n-dimensional torus and the automorphism T is represented by a unimodular matrix A then T is ergodic if and only if no eigenvalue of A is a root of unity.
A Bernoulli shift is ergodic. More generally, ergodicity of the shift transformation associated with a sequence of i.i.d. random variables and some more general stationary processes follows from Kolmogorov's zero–one law.
Ergodicity of a continuous dynamical system means that its trajectories "spread around" the phase space. A system with a compact phase space which has a non-constant first integral cannot be ergodic. This applies, in particular, to Hamiltonian systems with a first integral I functionally independent from the Hamilton function H and a compact level set X = {(p,q): H(p,q) = E} of constant energy. Liouville's theorem implies the existence of a finite invariant measure on X, but the dynamics of the system is constrained to the level sets of I on X, hence the system possesses invariant sets of positive but less than full measure. A property of continuous dynamical systems that is the opposite of ergodicity is complete integrability.
Ergodic theorems
Let T: X → X be a measure-preserving transformation on a measure space (X, Σ, μ) and suppose ƒ is a μ-integrable function, i.e. ƒ ∈ L1(μ). Then we define the following averages:
Time average: This is defined as the average (if it exists) over iterations of T starting from some initial point x:

\[ \hat f(x) = \lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x). \]

Space average: If μ(X) is finite and nonzero, we can consider the space or phase average of ƒ:

\[ \bar f = \frac{1}{\mu(X)} \int_X f \, d\mu. \]
In general the time average and space average may be different. But if the transformation is ergodic, and the measure is invariant, then the time average is equal to the space average almost everywhere. This is the celebrated ergodic theorem, in an abstract form due to George David Birkhoff. (Actually, Birkhoff's paper considers not the abstract general case but only the case of dynamical systems arising from differential equations on a smooth manifold.) The equidistribution theorem is a special case of the ergodic theorem, dealing specifically with the distribution of probabilities on the unit interval.
More precisely, the pointwise or strong ergodic theorem states that the limit in the definition of the time average of ƒ exists for almost every x and that the (almost everywhere defined) limit function ƒ̂ is integrable:

\[ \hat f \in L^1(\mu). \]

Furthermore, ƒ̂ is T-invariant, that is to say

\[ \hat f \circ T = \hat f \]

holds almost everywhere, and if μ(X) is finite, then the normalization is the same:

\[ \int_X \hat f \, d\mu = \int_X f \, d\mu. \]

In particular, if T is ergodic, then ƒ̂ must be a constant (almost everywhere), and so one has that

\[ \hat f = \frac{1}{\mu(X)} \int_X f \, d\mu \]

almost everywhere. Joining the first to the last claim and assuming that μ(X) is finite and nonzero, one has that

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) = \frac{1}{\mu(X)} \int_X f \, d\mu \]
for almost all x, i.e., for all x except for a set of measure zero.
For an ergodic transformation, the time average equals the space average almost surely.
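This equality is easy to observe numerically for the irrational rotation discussed earlier. In the sketch below (illustrative, with an arbitrary test observable), the time average along an orbit of T: x → x + θ (mod 1) approaches the space average over [0, 1):

```python
import math

theta = math.sqrt(2) - 1                        # irrational rotation angle
f = lambda x: math.cos(2 * math.pi * x) ** 2    # test observable

n = 1_000_000
x, total = 0.0, 0.0
for _ in range(n):
    total += f(x)
    x = (x + theta) % 1.0

time_avg = total / n
space_avg = 0.5             # integral of cos^2(2*pi*x) over [0, 1)
print(time_avg, space_avg)  # the two averages agree to several decimals
```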
As an example, assume that the measure space (X, Σ, μ) models the particles of a gas as above, and let ƒ(x) denote the velocity of the particle at position x. Then the pointwise ergodic theorems says that the average velocity of all particles at some given time is equal to the average velocity of one particle over time.
A generalization of Birkhoff's theorem is Kingman's subadditive ergodic theorem.
Probabilistic formulation: Birkhoff–Khinchin theorem
Birkhoff–Khinchin theorem. Let ƒ be measurable, E(|ƒ|) < ∞, and T be a measure-preserving map. Then with probability 1:

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) = E(f \mid \mathcal{C})(x), \]

where E(ƒ | C) is the conditional expectation given the σ-algebra C of invariant sets of T.

Corollary (Pointwise Ergodic Theorem): In particular, if T is also ergodic, then C is the trivial σ-algebra, and thus with probability 1:

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) = E(f). \]
Mean ergodic theorem
Von Neumann's mean ergodic theorem holds in Hilbert spaces.
Let U be a unitary operator on a Hilbert space H; more generally, an isometric linear operator (that is, a not necessarily surjective linear operator satisfying ‖Ux‖ = ‖x‖ for all x in H, or equivalently, satisfying U*U = I, but not necessarily UU* = I). Let P be the orthogonal projection onto {ψ ∈ H | Uψ = ψ} = ker(I − U).
Then, for any x in H, we have:

\[ \lim_{N\to\infty} \frac{1}{N} \sum_{n=0}^{N-1} U^n x = P x, \]

where the limit is with respect to the norm on H. In other words, the sequence of averages

\[ \frac{1}{N} \sum_{n=0}^{N-1} U^n \]

converges to P in the strong operator topology.

Indeed, it is not difficult to see that in this case any x ∈ H admits an orthogonal decomposition into parts from ker(I − U) and the closure of ran(I − U) respectively. The former part is invariant in all the partial sums as N grows, while for the latter part, from the telescoping series one would have:

\[ \frac{1}{N} \sum_{n=0}^{N-1} U^n (I - U) y = \frac{1}{N} \left( y - U^N y \right) \longrightarrow 0. \]
This theorem specializes to the case in which the Hilbert space H consists of L2 functions on a measure space and U is an operator of the form

\[ (U f)(x) = f(T x), \]
where T is a measure-preserving endomorphism of X, thought of in applications as representing a time-step of a discrete dynamical system. The ergodic theorem then asserts that the average behavior of a function ƒ over sufficiently large time-scales is approximated by the orthogonal component of ƒ which is time-invariant.
In another form of the mean ergodic theorem, let Ut be a strongly continuous one-parameter group of unitary operators on H. Then the operator

\[ \frac{1}{T} \int_0^T U_t \, dt \]

converges in the strong operator topology as T → ∞. In fact, this result also extends to the case of strongly continuous one-parameter semigroup of contractive operators on a reflexive space.
Remark: Some intuition for the mean ergodic theorem can be developed by considering the case where complex numbers of unit length are regarded as unitary transformations on the complex plane (by left multiplication). If we pick a single complex number of unit length (which we think of as U), it is intuitive that its powers will fill up the circle. Since the circle is symmetric around 0, it makes sense that the averages of the powers of U will converge to 0. Also, 0 is the only fixed point of U, and so the projection onto the space of fixed points must be the zero operator (which agrees with the limit just described).
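The remark is straightforward to check numerically: taking U to be multiplication by a fixed unit-modulus complex number (not a root of unity), the averages of its powers shrink toward 0, the projection onto the trivial fixed-point space. An illustrative sketch:

```python
import cmath

u = cmath.exp(2j * cmath.pi * 0.123)  # unit complex number, not a root of unity
z = 1.0 + 0.0j                        # starting vector in the plane

for n in (10, 100, 1000, 10000):
    avg = sum(u**k * z for k in range(n)) / n
    print(n, abs(avg))                # magnitudes decay roughly like 1/n
```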
Convergence of the ergodic means in the Lp norms
Let (X, Σ, μ) be as above a probability space with a measure preserving transformation T, and let 1 ≤ p ≤ ∞. The conditional expectation with respect to the sub-σ-algebra ΣT of the T-invariant sets is a linear projector ET of norm 1 of the Banach space Lp(X, Σ, μ) onto its closed subspace Lp(X, ΣT, μ). The latter may also be characterized as the space of all T-invariant Lp-functions on X. The ergodic means, as linear operators on Lp(X, Σ, μ), also have unit operator norm; and, as a simple consequence of the Birkhoff–Khinchin theorem, converge to the projector ET in the strong operator topology of Lp if 1 ≤ p < ∞, and in the weak operator topology if p = ∞. More is true: if 1 < p ≤ ∞, then the Wiener–Yoshida–Kakutani ergodic dominated convergence theorem states that the ergodic means of ƒ ∈ Lp are dominated in Lp; however, if ƒ ∈ L1, the ergodic means may fail to be equidominated in Lp. Finally, if ƒ is assumed to be in the Zygmund class, that is, |ƒ| log+(|ƒ|) is integrable, then the ergodic means are even dominated in L1.
Sojourn time
Let (X, Σ, μ) be a measure space such that μ(X) is finite and nonzero. The time spent in a measurable set A is called the sojourn time. An immediate consequence of the ergodic theorem is that, in an ergodic system, the relative measure of A is equal to the mean sojourn time:

\[ \frac{\mu(A)}{\mu(X)} = \lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} \chi_A(T^k x) \]
for all x except for a set of measure zero, where χA is the indicator function of A.
The occurrence times of a measurable set A is defined as the set k1, k2, k3, ..., of times k such that T^k(x) is in A, sorted in increasing order. The differences between consecutive occurrence times Ri = ki − ki−1 are called the recurrence times of A. Another consequence of the ergodic theorem is that the average recurrence time of A is inversely proportional to the measure of A, assuming that the initial point x is in A, so that k0 = 0:

\[ \lim_{n\to\infty} \frac{R_1 + \dots + R_n}{n} = \frac{\mu(X)}{\mu(A)} \quad \text{(almost surely)}. \]
(See almost surely.) That is, the smaller A is, the longer it takes to return to it.
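Both consequences show up clearly in simulation. The sketch below (illustrative) iterates an irrational rotation of the circle and measures the sojourn fraction and the mean recurrence time of a small arc A; the former approaches μ(A) and the latter 1/μ(A):

```python
import math

theta = math.sqrt(2) - 1
a = 0.1                      # A = [0, 0.1), so mu(A) = 0.1
x = 0.05                     # start inside A, so k0 = 0
steps = 1_000_000
visits, last_visit, gaps = 0, 0, []

for k in range(1, steps):
    x = (x + theta) % 1.0
    if x < a:
        visits += 1
        gaps.append(k - last_visit)   # recurrence time R_i
        last_visit = k

print("sojourn fraction:", visits / steps)        # about mu(A) = 0.1
print("mean recurrence:", sum(gaps) / len(gaps))  # about 1/mu(A) = 10
```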
Ergodic flows on manifolds
The ergodicity of the geodesic flow on compact Riemann surfaces of variable negative curvature and on compact manifolds of constant negative curvature of any dimension was proved by Eberhard Hopf in 1939, although special cases had been studied earlier: see for example, Hadamard's billiards (1898) and Artin billiard (1924). The relation between geodesic flows on Riemann surfaces and one-parameter subgroups on SL(2, R) was described in 1952 by S. V. Fomin and I. M. Gelfand. The article on Anosov flows provides an example of ergodic flows on SL(2, R) and on Riemann surfaces of negative curvature. Much of the development described there generalizes to hyperbolic manifolds, since they can be viewed as quotients of the hyperbolic space by the action of a lattice in the semisimple Lie group SO(n,1). Ergodicity of the geodesic flow on Riemannian symmetric spaces was demonstrated by F. I. Mautner in 1957. In 1967 D. V. Anosov and Ya. G. Sinai proved ergodicity of the geodesic flow on compact manifolds of variable negative sectional curvature. A simple criterion for the ergodicity of a homogeneous flow on a homogeneous space of a semisimple Lie group was given by Calvin C. Moore in 1966. Many of the theorems and results from this area of study are typical of rigidity theory.
In the 1930s G. A. Hedlund proved that the horocycle flow on a compact hyperbolic surface is minimal and ergodic. Unique ergodicity of the flow was established by Hillel Furstenberg in 1972. Ratner's theorems provide a major generalization of ergodicity for unipotent flows on the homogeneous spaces of the form Γ \ G, where G is a Lie group and Γ is a lattice in G.
In the last 20 years, there have been many works trying to find a measure-classification theorem similar to Ratner's theorems but for diagonalizable actions, motivated by conjectures of Furstenberg and Margulis. An important partial result (solving those conjectures with an extra assumption of positive entropy) was proved by Elon Lindenstrauss, and he was awarded the Fields medal in 2010 for this result.
See also
Chaos theory
Ergodic hypothesis
Ergodic process
Kruskal principle
Lindy effect
Lyapunov time – the time limit to the predictability of the system
Maximal ergodic theorem
Ornstein isomorphism theorem
Statistical mechanics
Symbolic dynamics
References
Modern references
Vladimir Igorevich Arnol'd and André Avez, Ergodic Problems of Classical Mechanics. New York: W.A. Benjamin. 1968.
Leo Breiman, Probability. Original edition published by Addison–Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. . (See Chapter 6.)
(A survey of topics in ergodic theory; with exercises.)
Karl Petersen. Ergodic Theory (Cambridge Studies in Advanced Mathematics). Cambridge: Cambridge University Press. 1990.
Françoise Pène, Stochastic properties of dynamical systems, Cours spécialisés de la SMF, Volume 30, 2022
Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis, (1993) appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, (1995) Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge, . (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.)
A. N. Shiryaev, Probability, 2nd ed., Springer 1996, Sec. V.3. .
(A detailed discussion about the priority of the discovery and publication of the ergodic theorems by Birkhoff and von Neumann, based on a letter of the latter to his friend Howard Percy Robertson.)
Andrzej Lasota, Michael C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Second Edition, Springer, 1994.
Manfred Einsiedler and Thomas Ward, Ergodic Theory with a view towards Number Theory. Springer, 2011.
Jane Hawkins, Ergodic Dynamics: From Basic Theory to Applications, Springer, 2021.
External links
Ergodic Theory (16 June 2015) Notes by Cosma Rohilla Shalizi
Ergodic theorem passes the test From Physics World | Ergodic theory | Mathematics | 3,995 |
48,467,716 | https://en.wikipedia.org/wiki/Military%20Scenario%20Definition%20Language | Military Scenario Definition Language (MSDL) is a standard developed by Simulation Interoperability Standards Organization (SISO).
The MSDL is now incorporated within the C2SIM development group within SISO.
Version 1. SISO-STD-007-2008: Military Scenario Definition Language (MSDL)
Links
References
Military simulation
Specification languages | Military Scenario Definition Language | Engineering | 70 |
36,436,244 | https://en.wikipedia.org/wiki/Chemical%20genetics | Chemical genetics is the investigation of the function of proteins and signal transduction pathways in cells by the screening of chemical libraries of small molecules. Chemical genetics is analogous to classical genetic screen where random mutations are introduced in organisms, the phenotype of these mutants is observed, and finally the specific gene mutation (genotype) that produced that phenotype is identified. In chemical genetics, the phenotype is disturbed not by introduction of mutations, but by exposure to small molecule tool compounds. Phenotypic screening of chemical libraries is used to identify drug targets (forward genetics or chemoproteomics) or to validate those targets in experimental models of disease (reverse genetics). Recent applications of this topic have been implicated in signal transduction, which may play a role in discovering new cancer treatments. Chemical genetics can serve as a unifying study between chemistry and biology. The approach was first proposed by Tim Mitchison in 1994 in an opinion piece in the journal Chemistry & Biology entitled "Towards a pharmacological genetics".
Method
Chemical genetic screens are performed using libraries of small molecules that have known activities or simply diverse chemical structures. These screens can be done in a high-throughput mode, using 96-well plates, where each well contains cells treated with a unique compound. In addition to cells, Xenopus or zebrafish embryos can also be screened in 96-well format, where compounds are dissolved in the media in which the embryos grow. Embryos are developed until the stage of interest and then the phenotype can be analyzed. Several concentrations can be tested in order to determine the toxic and the optimal concentrations.
Applications
Adding compounds to developing embryos allow comprehension of mechanism of action of drugs, their toxicity and developmental processes involving their targets. Chemical screens have been mostly performed on either wild type or transgenic Xenopus and zebrafish organisms as they produce a large amount of synchronized, fast-to-develop and transparent eggs easy to visually score. The use of chemicals in developmental biology offers two main advantages. Firstly, it is easy to perform high-throughput screen using wide spectrum or specific target compounds and reveal important genes or pathways involved in developmental processes. Secondly, it allows narrowing the time of action of a particular gene. It can also be used as a tool in drug development to test toxicity in whole organism. Procedures such as FETAX (Frog Embryo Teratogenesis Assay – Xenopus) are being developed to implement chemical screenings to test toxicity. Zebrafish and Xenopus embryos have also been used to identify new drugs targeting a particular gene of interest.
See also
Chemoproteomics
Chemical biology
Chemogenomics
References
Genetics | Chemical genetics | Biology | 537 |
65,192 | https://en.wikipedia.org/wiki/Three%20Gorges%20Dam | The Three Gorges Dam () is a hydroelectric gravity dam that spans the Yangtze River near Sandouping in Yiling District, Yichang, Hubei province, central China, downstream of the Three Gorges. The world's largest power station by installed capacity (22,500 MW), the Three Gorges Dam generates 95±20 TWh of electricity per year on average, depending on the amount of precipitation in the river basin. After the extensive monsoon rainfalls of 2020, the dam's produced nearly 112 TWh in a year, breaking the previous world record of ~103 TWh set by Itaipu Dam in 2016.
The dam's body was completed in 2006; the power plant became fully operational in 2012, when the last of the main water turbines in the underground plant began production. The last major component of the project, the ship lift, was completed in 2015.
Each of the main water turbines, state-of-the-art at their installation, has a capacity of 700 MW. Combining the capacity of the dam's 32 main turbines with the two smaller generators (50 MW each) that provide power to the plant itself, the total electric generating capacity of the Three Gorges Dam is 22,500 MW with minimal greenhouse gas emissions.
The dam increases the Yangtze River's shipping capacity and reduces the likelihood of the sort of downstream flooding that has killed millions of people on the Yangtze Plain. As a result, the Chinese government regards the project as a monumental social and economic success.
However, it is controversial domestically and abroad. The dam's construction displaced more than 1.3 million people and inundated ancient and culturally significant sites. In operation, the dam has caused some ecological changes, including an increased risk of landslides.
History
Sun Yat-sen envisioned a large dam across the Yangtze River in The International Development of China (1919). He wrote that a dam capable of generating 30 million horsepower (22 GW) was possible downstream of the Three Gorges. In 1932, the Nationalist government, led by Chiang Kai-shek, began preliminary work on plans in the Three Gorges. In 1939, during the Second Sino-Japanese War, Japanese military forces occupied Yichang and surveyed the area.
In 1944, the United States Bureau of Reclamation's head design engineer, John L. Savage, surveyed the area and drew up a dam proposal for a "Yangtze River Project". Some 54 Chinese engineers went to the US for training. The original plans called for the dam to employ a unique method for moving ships: the ships would enter locks at the dam's lower and upper ends and then cranes would move them from each lock to the next. Groups of craft would be lifted together for efficiency. It is not known whether this solution was considered for its water-saving performance or because the engineers thought the difference in height between the river above and below the dam too great for alternative methods. No construction work was performed because of the Nationalists' worsening situation in the Chinese Civil War.
After the 1949 Communist Revolution, Mao Zedong supported the project, but began the Gezhouba Dam project nearby first, and economic problems including the Great Leap Forward and the Cultural Revolution slowed progress. After the 1954 Yangtze River Floods, in 1956, Mao wrote "Swimming", a poem about his fascination with a dam on the Yangtze River. In 1958, after the Hundred Flowers Campaign, some engineers who spoke out against the project were imprisoned.
During the early period of Reform and Opening Up, with its emphasis on the Four Modernizations, the Communist Party revived plans for the dam and proposed to start construction in 1986. It emphasized the need to develop hydroelectric power.
The Chinese People's Political Consultative Conference became a center of opposition to the proposed dam. It convened panels of experts who recommended delaying the project.
The National People's Congress approved the dam in 1992: of 2,633 delegates, 1,767 voted in favour, 177 voted against, 664 abstained, and 25 members did not vote, giving the legislation an unusually low 67.75% approval rate. Construction started on December 14, 1994. The dam was expected to be fully operational in 2009, but additional projects, such as the underground power plant with six additional generators, delayed full operation until 2012. The ship lift was completed in 2015. The dam raised the water level in the reservoir to above sea level by 2008 and to the designed maximum level of by 2010.
Composition and dimensions
Made of concrete and steel, the dam is long and above sea level at its top. The project used of concrete (mainly for the dam wall), used 463,000 tonnes of steel (enough to build 63 Eiffel Towers), and moved about of earth. The concrete dam wall is high above the rock basis.
When the water level is at its maximum of above sea level, higher than the river level downstream, the dam reservoir is on average about in length and in width. It contains of water and has a total surface area of . On completion, the reservoir flooded a total area of of land, compared to the of reservoir created by the Itaipu Dam.
Economics
The Chinese government estimated that the Three Gorges Dam project would cost 180 billion yuan (US$22.5 billion). By the end of 2008, spending had reached 148.365 billion yuan, of which 64.613 billion yuan was spent on construction, 68.557 billion yuan on relocating affected residents, and 15.195 billion yuan on financing. It was estimated in 2009 that the cost of construction would be fully recouped when the dam had generated of electricity, yielding 250 billion yuan; total cost recovery was thus expected to be completed ten years after the dam became fully operational. In fact, the entire cost of the Three Gorges Dam was recovered by December 20, 2013.
Funding sources include the Three Gorges Dam Construction Fund, profits from the Gezhouba Dam, loans from the China Development Bank, loans from domestic and foreign commercial banks, corporate bonds, and revenue from both before and after the dam had become fully operational. Additional charges were assessed as follows: every province receiving power from the Three Gorges Dam had to pay an extra ¥7.00 per MWh, and the other provinces had to pay an additional charge of ¥4.00 per MWh. No surcharge was imposed on the Tibet Autonomous Region.
Power generation and distribution
Generating capacity
Power generation is managed by China Yangtze Power, a listed subsidiary of China Three Gorges Corporation (CTGC), a Central Enterprise administered by SASAC. The Three Gorges Dam is the world's largest-capacity hydroelectric power station, with 34 generators: 32 main generators, each with a capacity of 700 MW, and two plant power generators, each with a capacity of 50 MW, for a total of 22,500 MW. Among the 32 main generators, 14 are installed on the dam's north side, 12 on the south side, and the remaining six in the underground power plant in the mountain south of the dam. Annual electricity generation in 2018 was 101.6 TWh, about 20 times that of the Hoover Dam.
Generators
The main generators weigh about 6,000 tonnes each and are designed to produce more than 700 MW of power. The designed hydraulic head of the generators is . The flow rate varies between depending on the head available; the greater the head, the less water needed to reach full power. Three Gorges uses Francis turbines with a diameter of 9.7/10.4 m (VGS design/Alstom's design) and a rotation speed of 75 revolutions per minute. This means that in order to generate power at 50 Hz, the generator rotors have 80 poles. Rated power is 778 MVA, with a maximum of 840 MVA and a power factor of 0.9. The generator produces electrical power at 20 kV. The electricity generated is then stepped up to 500 kV for transmission at 50 Hz. The generator's stator, the biggest of its kind, is 3.1/3 m in height; the outer diameter of the stator is 21.4/20.9 m, the inner diameter is 18.5/18.8 m, and the bearing load is 5,050/5,500 tonnes. Average efficiency is over 94%, and the maximum efficiency reached is 96.5%.
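The pole count follows directly from the synchronous-machine relation between grid frequency and rotor speed; the sketch below checks it and also shows the order of magnitude of the hydraulic power (the head and flow used are illustrative assumptions, since the exact rated figures are not given here):

```python
# Consistency checks on the generator figures.
f_grid = 50           # grid frequency, Hz
rpm = 75              # rotor speed, rev/min

# Synchronous machine: f = poles * rpm / 120
poles = 120 * f_grid / rpm
print(poles)          # 80.0 poles, matching the figure quoted above

# Hydraulic power P = eta * rho * g * Q * H.
# Q and H below are assumed illustrative values, not from this article.
eta, rho, g = 0.94, 1000.0, 9.81   # efficiency, water density kg/m3, m/s2
Q, H = 950.0, 80.6                 # flow in m3/s and head in m (assumed)
P = eta * rho * g * Q * H
print(round(P / 1e6))              # ~706, i.e. about one 700 MW unit
```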
The generators were manufactured by two joint ventures: Alstom, ABB, Kvaerner, and the Chinese company Harbin Motor; and Voith, General Electric, Siemens (abbreviated as VGS), and the Chinese company Oriental Motor. The technology transfer agreement was signed together with the contract. Most of the generators are water-cooled. Some of the newer ones are air-cooled, making them simpler in design and easier to manufacture and maintain.
Generator installation progress
The first north-side main generator (No. 2) started up on July 10, 2003. The north side became completely operational on September 7, 2005, with the commissioning of generator No. 9. Full power (9,800 MW) was eventually achieved on October 18, 2006, after the water level reached 156 meters.
On the south side, main generator No. 22 started up on June 11, 2007, and No. 15 became operational on October 30, 2008. The sixth south-side main generator (No. 17) began operation on December 18, 2007, raising capacity to 14.1 GW, exceeding that of the Itaipu Dam (14.0 GW) and making the plant the world's largest hydroelectric power plant by capacity.
When the last main generator (No. 27) finished its final test on May 23, 2012, the six underground main generators were all operational, raising the capacity to 22.5 GW. After nine years of construction, installation and testing, the power plant was fully operational by July 2012.
Output milestones
In July 2008, the plant generated 10.3 TWh of electricity, its first month over 10 TWh. On June 30, 2009, after the river flow rate increased to over 24,000 m3/s, all 28 generators were switched on, yet the plant produced only 16,100 MW because the head available during flood season is insufficient. During an August 2009 flood, the plant first reached its maximum output for a short period. By August 16, 2011, the plant had generated 500 TWh of electricity.
During the November to May dry season, power output is limited by the river's flow rate, as seen in the diagrams on the right. When there is enough flow, power output is limited by plant generating capacity. The maximum power-output curves were calculated based on the average flow rate at the dam site, assuming the water level is 175 m and the plant gross efficiency is 90.15%. The actual power output in 2008 was obtained based on the monthly electricity sent to the grid.
The Three Gorges Dam reached its design-maximum reservoir water level of for the first time on October 26, 2010; at that level, the intended annual power-generation capacity of 84.7 TWh was realized. It has a combined generating capacity of 22.5 gigawatts and a designed annual generation capacity of 88.2 TWh. In 2012, the dam's 32 generating units generated a record 98.1 TWh of electricity, which accounted for 14% of China's total hydro generation. Between 2012 (the first year with all 32 generating units operating) and 2021, the dam generated an average of 97.22 TWh of electricity per year, higher than the Itaipu Dam's average of 89.22 TWh per year over the same period. Due to the extensive 2020 monsoon season rainfall, annual production reached ~112 TWh that year, breaking the previous world record for annual production of ~103 TWh, set by the Itaipu Dam.
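These figures imply a capacity factor of roughly one half, as a short check using only numbers quoted above shows:

```python
# Capacity factor implied by the quoted output figures.
capacity_gw = 22.5
avg_annual_twh = 97.22                 # 2012-2021 average

max_possible_twh = capacity_gw * 8760 / 1000   # GW x hours/year -> TWh
print(round(max_possible_twh, 1))              # 197.1 TWh if run flat out
print(round(avg_annual_twh / max_possible_twh, 3))   # ~0.493
```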
Distribution
The State Grid Corporation and China Southern Power Grid paid a flat rate of ¥250 per MWh (US$35.7) until July 2, 2008. Since then, the price has varied by province, from ¥228.7 to ¥401.8 per MWh. Higher-paying customers, such as Shanghai, receive priority. Nine provinces and two cities consume power from the dam.
Power distribution and transmission infrastructure cost about 34.387 billion yuan. Construction was completed in December 2007, one year ahead of schedule.
Power is distributed over multiple 500 kV transmission lines. Three direct current (DC) lines to the East China Grid carry 7,200 MW: Three Gorges – Shanghai (3,000 MW), HVDC Three Gorges – Changzhou (3,000 MW), and HVDC Gezhouba – Shanghai (1,200 MW). The alternating current (AC) lines to the Central China Grid have a total capacity of 12,000 MW. The DC transmission line HVDC Three Gorges – Guangdong to the South China Grid has a capacity of 3,000 MW.
The dam was expected to provide 10% of China's power. However, electricity demand has increased more quickly than previously projected. Even fully operational and despite its size, the dam on average supported only about 1.7% of China's electricity demand in 2011, when Chinese electricity demand reached 4,692.8 TWh.
Environmental impact
Emissions
According to the National Development and Reform Commission, 366 grams of coal would produce 1 kWh of electricity during 2006. From 2003 to 2007, power production equaled that of 84 million tonnes of standard coal.
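Taken together, those two figures imply cumulative generation on the order of 230 TWh over 2003 to 2007; a minimal sketch of the arithmetic:

```python
# Coal-equivalent arithmetic from the NDRC figures quoted above.
coal_g_per_kwh = 366            # grams of coal per kWh (2006 figure)
coal_saved_tonnes = 84e6        # 2003-2007 coal-equivalent production

kwh = coal_saved_tonnes * 1e6 / coal_g_per_kwh   # tonnes -> grams -> kWh
print(round(kwh / 1e9, 1), "TWh")                # ~229.5 TWh
```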
Erosion and sedimentation
Two hazards are uniquely identified with the dam: that sedimentation projections are not agreed upon, and that the dam sits on a seismic fault. At current levels, 80% of the land in the area is eroding, depositing about 40 million tons of sediment into the Yangtze annually. Because the flow is slower above the dam, much of this sediment settles there instead of flowing downstream, and there is less sediment downstream.
The absence of silt downstream has three effects:
Some hydrologists expect downstream riverbanks to become more vulnerable to flooding.
Shanghai, more than away, rests on a massive sedimentary plain. The "arriving silt, so long as it does arrive, strengthens the bed on which Shanghai is built ... the less the tonnage of arriving sediment the more vulnerable is this biggest of Chinese cities to inundation".
Benthic sediment buildup causes biological damage and reduces aquatic biodiversity.
Landslides
Erosion in the reservoir, induced by rising water, causes frequent major landslides that have led to noticeable disturbance in the reservoir surface, including two incidents in May 2009 when somewhere between of material plunged into the flooded Wuxia Gorge of the Wu River. In the first four months of 2010, there were 97 significant landslides.
Waste management
The dam catalyzed improved upstream wastewater treatment around Chongqing and its suburban areas. According to the Ministry of Environmental Protection, as of April 2007, more than 50 new plants could treat 1.84 million tonnes of wastewater per day, 65% of the total need. About 32 landfills were added, which could handle 7,664.5 tonnes of solid waste every day. Over one billion tons of wastewater are released annually into the river; before the reservoir was created, this waste was more likely to be swept downstream. The reservoir has left the water stagnant, polluted and murky.
Forest cover
In 1997, the Three Gorges area had 10% forestation, down from 20% in the 1950s.
Research by the United Nations Food and Agriculture Organization suggested that the Asia-Pacific region would gain about of forest by 2008. That is a significant change from the net loss of forest each year in the 1990s. This is largely due to China's large reforestation effort. This accelerated after the 1998 Yangtze River floods convinced the government that it should restore tree cover, especially in the Yangtze's basin upstream of the Three Gorges Dam.
Wildlife
Concerns about the dam's impact on wildlife predate the National People's Congress's approval in 1992. This region has long been known for its rich biodiversity. It is home to 6,388 plant species, which belong to 238 families and 1,508 genera. Of these species, 57 are endangered. These rare species are also used as ingredients in traditional Chinese medicines. The proportion of forested area in the region surrounding the Three Gorges Dam dropped from 20% in 1950 to less than 10% as of 2002, adversely affecting all plant species there. The region also provides habitats to hundreds of freshwater and terrestrial animal species. Freshwater fish are especially affected by dams due to changes in the water temperature and flow regime, and many fish are also injured by the turbine blades of the hydroelectric plants. This is particularly detrimental to the region's ecosystem because the Yangtze River basin is home to 361 different fish species and accounts for 27% of China's endangered freshwater fish species. Other aquatic species have been endangered by the dam, particularly the baiji, or Chinese river dolphin, now extinct. Chinese government scholars even claim that the Three Gorges Dam directly caused the extinction of the baiji.
Of the 3,000 to 4,000 remaining critically endangered Siberian cranes, many spend the winter in wetlands that the Three Gorges Dam will destroy. Populations of the Yangtze sturgeon are guaranteed to be "negatively affected" by the dam. In 2022, the Chinese paddlefish was declared extinct, with the last confirmed sighting in 2003.
Terrestrial impact
In 2005, NASA scientists calculated that the shift of water mass stored by the dams would increase the total length of the Earth's day by 0.06 microseconds and make the Earth slightly more round in the middle and flat on the poles. A study published in 2022 in the journal Open Geosciences suggests that the change of reservoir water level affects the gravity field in western Sichuan, which in turn affects the seismicity in that area.
Floods, agriculture, industry
An important function of the dam is to control flooding, which is a major problem for the seasonal Yangtze River. Millions of people live downstream of the dam, with many large, important cities like Wuhan, Nanjing, and Shanghai located adjacent to the river. Large areas of farmland and China's most important industrial area are situated beside the river.
The reservoir's flood storage capacity is . This capacity will reduce the frequency of major downstream flooding from once every 10 years to once every 100 years. The dam is expected to minimize the effect of even a "super" flood. The river flooded in 1954 over an area of , killing 33,169 people and forcing almost 18.9 million people to move. The flood waters covered Wuhan, a city of eight million people, for over three months, and the Jingguang Railway was out of service for more than 100 days. The 1954 flood carried of water. The dam could only divert the water above Chenglingji, leaving to be diverted. The dam cannot protect against some of the large tributaries downstream, including the Xiang, Zishui, Yuanshui, Lishui, Hanshui, and Gan.
In 1998, a flood in the same area caused billions of dollars worth of damage, when of farmland were flooded. The flood affected more than 2.3 million people, killing 1,526. In early August 2009, the largest flood in five years passed through the dam site. During this flood, the dam limited the water flow to less than per second, raising the upstream water level from on August 1, to on August 8. A full of flood water was captured and the river flow was cut by as much as per second.
The dam discharges its reservoir during the dry season every year, between December and March. This increases the flow rate of the river downstream, providing fresh water for agricultural and industrial usage, and improving shipping conditions. The water level upstream drops from , in preparation for the rainy season. The water also powers the Gezhouba Dam downstream.
Since the filling of the reservoir in 2003, the Three Gorges Dam has supplied an extra of fresh water to downstream cities and farms over the course of the dry season.
During the South China floods in July 2010, inflows at the Three Gorges Dam reached a peak of , exceeding the peak inflow during the 1998 Yangtze River floods. The dam's reservoir rose nearly in 24 hours and reduced the outflow to in discharges downstream, preventing any significant impact on the middle and lower river.
Navigating the dam
Locks
The installation of ship locks is intended to increase river shipping from ten million to 100 million tonnes annually; as a result, transportation costs will be cut by between 30 and 37%. Shipping will also become safer, since the gorges are notoriously dangerous to navigate.
There are two series of ship locks installed near the dam. Each of them is made up of five stages, with a transit time of around four hours. Maximum vessel size is 10,000 tons. The locks are 280 m long, 35 m wide, and 5 m deep (918 × 114 × 16.4 ft). That is longer than those on the St Lawrence Seaway, but half as deep. Before the dam was constructed, the maximum freight capacity at the Three Gorges site was 18.0 million tonnes per year. From 2004 to 2007, a total of 198 million tonnes of freight passed through the locks. The freight capacity of the river increased six times and the cost of shipping was reduced by 25%. Originally, the total capacity of the ship locks was expected to reach 100 million tonnes per year. In 2022, their cargo turnover reached 159.65 million tons, with an average annual increase of 6% over the previous few years.
These locks are staircase locks, whereby inner lock gate pairs serve as both the upper gate of the chamber below and the lower gate of the chamber above. The gates are the vulnerable hinged type, which, if damaged, could temporarily render the entire flight unusable. As there are separate sets of locks for upstream and downstream traffic, this system is more water efficient than bi-directional staircase locks.
Ship lift
In addition to the canal locks, there is a ship lift, a kind of elevator for vessels. The ship lift can lift ships of up to 3,000 tons. The vertical distance traveled is , and the size of the ship lift's basin is . The ship lift takes 30 to 40 minutes to transit, as opposed to the three to four hours for stepping through the locks. One complicating factor is that the water level can vary dramatically. The ship lift must work even if water levels vary by on the lower side, and on the upper side.
The ship lift's design uses a helical gear system to climb or descend a toothed rack.
The ship lift was not yet complete when the rest of the project was officially opened on May 20, 2006.
In November 2007, it was reported in the local media that construction of the ship lift started in October 2007.
In February 2012, Xinhua reported that the four towers intended to support the ship lift were nearly complete. The report said that by that time, the towers had reached of the anticipated .
As of May 2014, the ship lift was expected to be completed by July 2015. It was tested in December 2015 and announced complete in January 2016. Lahmeyer, the German firm that designed the ship lift, said it will take a vessel less than an hour to transit the lift. An article in Steel Construction says the actual time of the lift will be 21 minutes. It says that the expected dimensions of the passenger vessels the ship lift's basin was designed to carry will be . The moving mass (including counterweights) is 34,000 tonnes.
Trials of the ship lift finished in July 2016, and the first cargo ship was lifted on July 15; the lift took 8 minutes. Shanghai Daily reported that the first operational use of the lift was on September 18, 2016, when limited "operational testing" of the lift began.
Portage railways
Plans also exist for the construction of short portage railways bypassing the dam area altogether. Two short rail lines, one on each side of the river, are to be constructed. The northern portage railway will run from the Taipingxi port facility on the northern side of the Yangtze, just upstream from the dam, via Yichang East Railway Station to the Baiyang Tianjiahe port facility in Baiyang Town (白洋镇), below Yichang. The southern portage railway will run from Maoping (upstream of the dam) via Yichang South Railway Station to Zhicheng (on the Jiaozuo–Liuzhou Railway).
In late 2012, preliminary work started along both future railway routes.
Displacement of residents
During planning, it was estimated that 13 cities, 140 towns and 1,350 villages would be partially or completely flooded by the reservoir, amounting to roughly 1.5% of Hubei's 60.3 million people and Chongqing Municipality's 31.44 million people. These people were moved to new homes by the Chinese government, which considered the displacement justified by the flood protection provided for the communities downstream of the dam.
Between 2002 and 2005, Canadian photographer Edward Burtynsky documented the impact of the project on the surrounding areas, including the town of Wanzhou. Other photographers who recorded the change include Chengdu-based Muge, Paris-based Zeng Nian (originally from Jiangsu), and Israeli Nadav Kander. Living conditions deteriorated for many, and hundreds of thousands of people could not find work. The older generation was particularly affected, but younger generations benefited from the educational and career opportunities afforded by moving to large cities with new, modern companies and schools.
Some 2007 reports claimed that Chongqing Municipality would encourage four million more people to move away from the dam to Chongqing's main urban area by 2020. The municipal government asserted that the relocation was driven by urbanization rather than being a direct result of the dam project, and that the people involved included residents of other areas of the municipality.
By June 2008, China had moved 1.24 million residents as far as Gaoyang in Hebei Province, and the moves concluded the following month.
Other effects
Culture and history
The area which would fill with water behind the dam included locations with significant cultural history. The State Council authorized a ¥505 million archaeology salvage effort. Over the course of several years, archaeologists excavated 723 sites and conducted surface archaeology recovery missions at an additional 346 sites. Archaeologists recovered 200,000 artifacts of which 13,000 were considered as particularly historically or culturally notable. As part of this effort, the old Chongqing City Museum was replaced by the Chongqing China Sanxia Museum to house many of the recovered artifacts.
Recovered structures that were too large for museums were moved upland to reconstruction districts (fu jian qu), which are outdoor museum parks. Recovered structures placed in such parks include temples, pavilions, houses, and bridges, among others.
Some sites could not be moved because of their location, size, or design, such as the hanging coffins site high in the cliffs of the Shen Nong Gorge.
National security
The United States Department of Defense reported that in Taiwan, "proponents of strikes against the mainland apparently hope that merely presenting credible threats to China's urban population or high-value targets, such as the Three Gorges Dam, will deter Chinese military coercion". Destroying the Three Gorges Dam has been a tactic discussed and debated in Taiwan since the early 1990s, when the dam was still in the planning phase. The notion that the military in Taiwan would seek to destroy the dam provoked an angry response from the mainland Chinese media. People's Liberation Army General Liu Yuan was quoted in the China Youth Daily saying that the People's Republic of China would be "seriously on guard against threats from Taiwan independence terrorists". Former Taiwanese Ministry of Defense advisor Sung Chao-wen called the notion of using cruise missiles to destroy the Three Gorges Dam "ridiculous", saying missiles would deliver minimal damage to the reinforced concrete, and any attack attempt would have to pass through multiple layers of ground and air defenses.
The Three Gorges Dam is a steel-concrete gravity dam. The water is held back by the innate mass of the individual dam sections. As a result, damage to an individual section should not affect other parts of the dam. Zhang Boting, deputy secretary-general of China Society for Hydropower Engineering, suggested that concrete gravity dams such as the Three Gorges Dam are resistant to nuclear strikes.
Debate among Chinese scholars and analysts about the basic principles of China's no-first-use nuclear weapons policy includes whether to allow narrow exceptions, such as for acts that produce catastrophic consequences equivalent to those of a nuclear attack, including attacks intended to destroy the Three Gorges Dam.
Structural integrity
Immediately after the reservoir was first filled, around 80 hairline cracks were observed in the dam's structure. Still, an expert group gave the project an overall good-quality rating. The 163,000 concrete units all passed quality testing, with deformation within design limits.
Upstream dams
In order to maximize the utility of the Three Gorges Dam and cut down on sedimentation from the Jinsha River, the upper course of the Yangtze River, authorities are building a series of dams on the Jinsha, including the now completed Wudongde, Baihetan, Xiluodu, and Xiangjiaba dams. The total capacity of those four dams is 38,500 MW, almost double the capacity of the Three Gorges.
Baihetan became fully operational in 2022. Wudongde was opened in June 2021. Another eight dams are in the midstream of the Jinsha and eight more upstream of it.
See also
Baiheliang Underwater Museum
South–North Water Transfer Project
Energy policy of China
List of largest power stations
List of largest hydroelectric power stations
List of power stations in China
List of dams and reservoirs in China
Three Gorges Museum
Liang Weiyan, one of the leading engineers who designed the water turbines for the dam
References
Locks of China
Gravity dams
2008 establishments in China
Dam controversies
Dams completed in 2008
Dams on the Yangtze River
Energy infrastructure completed in 2012
Hydroelectric power stations in Hubei
Underground power stations
Yichang
Megaprojects | Three Gorges Dam | Engineering | 6,174 |
48,827,817 | https://en.wikipedia.org/wiki/Industry%20Documents%20Library | The UCSF Industry Documents Library (IDL) is a digital archive of internal tobacco, drug, food, chemical and fossil fuel corporate documents, acquired largely through litigation, which illustrate industry efforts to influence policies and regulations meant to protect public health. Created and maintained by the UCSF Library, the mission of the UCSF Industry Documents Library is to "identify, collect, curate, preserve, and make freely accessible internal documents created by industries and their partners which have an impact on public health, for the benefit and use of researchers, clinicians, educators, students, policymakers, media, and the general public at UCSF and internationally".
Collections
The IDL includes the following archives:
the Truth Tobacco Industry Documents
the Drug Industry Documents Archive
the Food Industry Documents Archive
the Chemical Industry Documents Archive
the Fossil Fuel Industry Document Archive
References
External links
Official site
Tobacco industry
Pharmaceutical industry
Chemical industry
Business and industry archives
American digital libraries
University of California, San Francisco
Libraries in San Francisco | Industry Documents Library | Chemistry,Biology | 194 |
4,333,700 | https://en.wikipedia.org/wiki/Chiral%20knot | In the mathematical field of knot theory, a chiral knot is a knot that is not equivalent to its mirror image (when identical while reversed). An oriented knot that is equivalent to its mirror image is an amphicheiral knot, also called an achiral knot. The chirality of a knot is a knot invariant. A knot's chirality can be further classified depending on whether or not it is invertible.
There are only five knot symmetry types, indicated by chirality and invertibility: fully chiral, invertible, positively amphicheiral noninvertible, negatively amphicheiral noninvertible, and fully amphicheiral invertible.
Background
The possible chirality of certain knots was suspected since 1847 when Johann Listing asserted that the trefoil was chiral, and this was proven by Max Dehn in 1914. P. G. Tait found all amphicheiral knots up to 10 crossings and conjectured that all amphicheiral knots had even crossing number. Mary Gertrude Haseman found all 12-crossing and many 14-crossing amphicheiral knots in the late 1910s. But a counterexample to Tait's conjecture, a 15-crossing amphicheiral knot, was found by Jim Hoste, Morwen Thistlethwaite, and Jeff Weeks in 1998. However, Tait's conjecture was proven true for prime, alternating knots.
The simplest chiral knot is the trefoil knot, which was shown to be chiral by Max Dehn. All nontrivial torus knots are chiral. The Alexander polynomial cannot distinguish a knot from its mirror image, but the Jones polynomial can in some cases; if Vk(q) ≠ Vk(q⁻¹), then the knot is chiral; however, the converse is not true. The HOMFLY polynomial is even better at detecting chirality, but no known polynomial knot invariant can fully detect chirality.
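A quick symbolic check of this criterion on two small knots, using Jones polynomials from standard knot tables (conventions for which mirror of the trefoil is listed vary between tables, but either choice works for the chirality test):

```python
import sympy as sp

t = sp.symbols('t')

# Jones polynomials from standard tables:
V_trefoil = -t**4 + t**3 + t          # one mirror of the trefoil
V_fig8 = t**-2 - 1/t + 1 - t + t**2   # figure-eight knot

def mirror(V):
    """Jones polynomial of the mirror image: substitute t -> 1/t."""
    return sp.expand(V.subs(t, 1/t))

# Trefoil: V(t) != V(1/t), so the trefoil is chiral.
print(sp.simplify(V_trefoil - mirror(V_trefoil)) == 0)   # False
# Figure-eight: V(t) == V(1/t); the test fails to detect chirality,
# consistent with the figure-eight knot being amphicheiral.
print(sp.simplify(V_fig8 - mirror(V_fig8)) == 0)         # True
```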
Invertible knot
A chiral knot that can be smoothly deformed to itself with the opposite orientation is classified as an invertible knot. Examples include the trefoil knot.
Fully chiral knot
If a knot is not equivalent to its inverse or its mirror image, it is a fully chiral knot, for example the 9₃₂ knot.
Amphicheiral knot
An amphicheiral knot is one which has an orientation-reversing self-homeomorphism of the 3-sphere, α, fixing the knot set-wise.
All amphicheiral alternating knots have even crossing number. The first amphicheiral knot with odd crossing number is a 15-crossing knot discovered by Hoste et al.
Fully amphicheiral
If a knot is isotopic to both its reverse and its mirror image, it is fully amphicheiral. The simplest knot with this property is the figure-eight knot.
Positive amphicheiral
If the self-homeomorphism, α, preserves the orientation of the knot, it is said to be positive amphicheiral. This is equivalent to the knot being isotopic to its mirror. No knots with crossing number smaller than twelve are positive amphicheiral and noninvertible.
Negative amphicheiral
If the self-homeomorphism, α, reverses the orientation of the knot, it is said to be negative amphicheiral. This is equivalent to the knot being isotopic to the reverse of its mirror image. The noninvertible knot with this property that has the fewest crossings is the knot 8₁₇.
References | Chiral knot | Chemistry | 744 |
5,327,631 | https://en.wikipedia.org/wiki/Pitavastatin | Pitavastatin (usually as a calcium salt) is a member of the blood cholesterol lowering medication class of statins.
Pitavastatin is an inhibitor of HMG-CoA reductase, the enzyme that catalyses the first step of cholesterol synthesis.
It was patented in 1987 and approved for medical use in 2003. It is available in Japan, South Korea and in India. In the US, it received FDA approval in 2009.
Kowa Pharmaceuticals, a subsidiary of Kowa Company, is the owner of the American patent to pitavastatin.
Medical uses
Pitavastatin is indicated for hypercholesterolaemia (elevated cholesterol) and for the prevention of cardiovascular disease.
A 2009 study of the 104-week LIVES trial found pitavastatin increased HDL cholesterol, especially in patients with HDL lower than 40 mg/dL, who had a 24.6% rise, in addition to reducing LDL cholesterol 31.3%. HDL improved in patients who switched from other statins and rose over time. In the 70-month CIRCLE observational study, pitavastatin increased HDL more than atorvastatin.
It has neutral or possibly beneficial effects on glucose control. As a consequence, pitavastatin is likely to be appropriate for patients with metabolic syndrome plus high LDL, low HDL and diabetes mellitus.
Side effects
Common statin-related side effects (headaches, stomach upset, abnormal liver function tests and muscle cramps) were similar to other statins. Pitavastatin is a lipophilic statin. Reports indicate that this statin may lead to fewer muscle side effects than other statins. One study found that coenzyme Q10 was not reduced as much as with certain other statins (though this is unlikely given the inherent chemistry of the HMG-CoA reductase pathway that all statin drugs inhibit).
There is evidence that pitavastatin does not increase insulin resistance in humans, with insulin resistance assessed by the homeostatic model assessment (HOMA-IR) method.
Hyperuricemia or increased levels of serum uric acid have been reported with pitavastatin.
Metabolism and interactions
Statins are metabolised in part by one or more liver cytochrome P450 enzymes, leading to an increased potential for drug interactions and problems with certain foods (such as grapefruit juice). The primary metabolism pathway of pitavastatin is glucuronidation. It is minimally metabolized by the CYP450 enzymes CYP2C9 and CYP2C8, but not by CYP3A4 (which is a common source of interactions in other statins). As a result, it is less likely to interact with drugs that are metabolized via CYP3A4, which might be important for elderly patients who need to take multiple medicines.
History
Pitavastatin (previously known as itavastatin, itabavastin, nisvastatin, NK-104, or NKS-104) was discovered in Japan by Nissan Chemical Industries and developed further by Kowa Pharmaceuticals, Tokyo. Pitavastatin was approved for use in the United States by the FDA in August 2009, under the brand name Livalo. Pitavastatin has been also approved by the Medicines and Healthcare products Regulatory Agency (MHRA) in UK in August 2010. Zypitamag (pitavastatin magnesium), a pharmaceutical alternative to Livalo, was approved for use in the United States by the FDA in 2017.
Society and culture
Brand names
Pitavastatin is marketed in the United States under the brand names Livalo and Zypitamag, and in the European Union and Russia under the brand name Livazo.
References
Statins
Quinolines
Diols
Carboxylic acids
4-Fluorophenyl compounds
Cyclopropyl compounds
Beta hydroxy acids
Drugs developed by Eli Lilly and Company | Pitavastatin | Chemistry | 826 |
37,110,580 | https://en.wikipedia.org/wiki/PlanGrid | PlanGrid is a construction productivity software designed for onsite construction workers. PlanGrid digitises blueprints. It features version control and collaboration tools such as field markups, progress photos and issues tracking.
History
PlanGrid is headquartered in San Francisco, California, and was founded in 2011. It is available on mobile and Windows devices. The software can store, view, and communicate construction blueprints.
The company's board of directors included former Salesforce COO George Hu, Sequoia Capital Partner Doug Leone, and former Autodesk CEO Carol Bartz.
Autodesk announced plans in November 2018 to acquire PlanGrid for US$875 million. The acquisition was completed on December 20, 2018.
PlanGrid's CEO, Tracy Young, was named to Fast Company's "Most Creative People in Business" in 2015.
Fundraising history
The company's seed-stage investors include Y-Combinator, Sam Altman, Paul Buchheit and 500 Startups.
In May 2015, PlanGrid raised an $18 million Series A from Sequoia Capital.
In November 2015, PlanGrid raised a $40 million Series B led by Tenaya Capital.
References
External links
Technology companies of the United States
Software companies based in California
Construction software
Construction documents
Autodesk acquisitions
2018 mergers and acquisitions
Defunct software companies of the United States
2012 establishments in California
Software companies established in 2012
American companies established in 2012 | PlanGrid | Engineering | 292 |
3,183,108 | https://en.wikipedia.org/wiki/Test%20light | A test light, test lamp, voltage tester, or mains tester is a piece of electronic test equipment used to determine the presence of electricity in a piece of equipment under test. A test light is simpler and less costly than a measuring instrument such as a multimeter, and often suffices for checking for the presence of voltage on a conductor. Properly designed test lights include features to protect the user from accidental electric shock. Non-contact test lights can detect voltage on insulated conductors.
Two-contact test lights
The test light is an electric lamp connected with one or two insulated wire leads. Often, it takes the form of a screwdriver with the lamp connected between the tip of the screwdriver and a single lead that projects out the back of the screwdriver. By connecting the flying lead to an earth (ground) reference and touching the screwdriver tip to various points in the circuit, the presence or absence of voltage at each point can be determined, allowing simple faults to be detected and traced to their root cause. For higher voltages, a statiscope consisting of a neon glow tube mounted on a long insulating handle can be used to detect AC voltages of 2000 volts or more.
For low voltage work (for example, in automobiles), the lamp used is usually a small, low-voltage incandescent light bulb. These lamps usually are designed to operate on approximately 12 V; application of an automotive test lamp on mains voltage will destroy the lamp and may cause a short-circuit fault in the tester.
For line voltage (mains) work, the lamp is usually a small neon lamp connected in series with an appropriate ballast resistor. These lamps often can operate across a wide range of voltages from 90V up to several hundred volts. In some cases, several separate lamps are used with resistive voltage dividers arranged to allow additional lamps to strike as the applied voltage rises higher. The lamps are mounted in order from lowest voltage to highest, this minimal bar graph providing a crude indication of voltage.
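As a rough illustration of how such a ballast resistor is sized, the sketch below uses typical neon-lamp figures (the maintaining voltage and target current are assumptions, not values from this article):

```python
# Sizing a series ballast resistor for a neon indicator on 230 V mains.
v_supply = 230.0     # RMS mains voltage, V
v_maintain = 90.0    # neon maintaining voltage, V (typical, assumed)
i_lamp = 0.5e-3      # target lamp current, A (typical, assumed)

r_ballast = (v_supply - v_maintain) / i_lamp
p_dissipated = (v_supply - v_maintain) * i_lamp
print(round(r_ballast / 1e3), "kOhm")    # 280 kOhm
print(round(p_dissipated, 2), "W")       # 0.07 W, a small resistor suffices
```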
Incandescent bulbs may also be used in some electronic equipment repair, and a trained technician can usually tell the approximate voltage by using the brightness as a crude indicator.
Safety
A hand-held test lamp necessarily puts the user in proximity to live circuits. Accidental contact with live wiring can result in a short circuit or electric shock. Inexpensive or home-made test lamps may not include sufficient protection against high-energy faults. It is customary to connect a test lamp to a known live circuit both before and after testing an unknown circuit, to check for failure of the test lamp itself.
In the UK, guidelines established by the Health and Safety Executive (HSE) provide recommendations for the construction and use of test lamps. Probes must be well-insulated, with minimal exposure of live terminals, with finger guards to prevent accidental contact, and must not expose live wires if the test lamp glass bulb is broken. To limit the energy delivered in case of a short circuit, test lights must have a current-limiting fuse or current-limiting resistor and fuse. The HSE guidelines also recommend procedures to validate operation of the test light. When a known live circuit is not available, a separate proving unit that provides a known test voltage and sufficient power to illuminate the lamp is used to confirm operation of the lamp before and after testing a circuit.
Since energy to operate the test lamp is drawn from the circuit under test, some high-impedance leakage voltages may not be detectable using this type of non-amplified test equipment.
One-contact neon test lights
A low-cost type of test lamp contacts only one side of the circuit under test, and relies on stray capacitance and current passing through the user's body to complete the circuit. The device may have the form of a screwdriver. The tip of the tester is touched to the conductor being tested (for instance, it can be used on a wire in a switch, or inserted into a hole of an electric socket). A neon lamp takes very little current to light, and thus can use the user's body capacitance to earth ground to complete the circuit.
Screwdriver-type test lamps are very inexpensive, but cannot meet the construction requirements of UK GS 38. If the shaft is exposed, a shock hazard to the user exists, and the internal construction of the tester provides no protection against short-circuit faults. Failure of the resistor and lamp series network can put the user in direct metallic contact with the circuit under test. For example, water trapped inside the screwdriver may allow enough leakage current to shock the user. Even if an internal short circuit does not electrocute the user, the resulting electric shock may result in a fall or other injury. The lamp provides no indication below the strike voltage of the neon lamp, and so cannot detect certain hazardous leakage conditions. Since it relies on capacitance to complete the circuit, direct-current potential cannot be reliably indicated. If the user of the screwdriver is isolated from ground and capacitively coupled to other nearby live wires, a false negative may occur when testing a live circuit, and a false positive when testing a dead circuit. False negatives may also occur in brightly lit areas which make the neon glow hard to see.
Non-contact voltage detectors
Amplified electronic testers (informally called electrical tester pens, test pens, or voltage detectors) rely on capacitive current only, and essentially detect the changing electric field around AC energized objects. This means that no direct metallic contact with the circuit is required. The user must touch the top of the handle to provide a ground reference (through stray capacitance to ground), at which point the indicator LED will light up or a speaker will buzz, if the conductor being tested is live. Additional energy to light the lamp and power the amplifier is supplied by a small internal battery, and does not flow through the user's body.
When the device is placed near a live conductor, a capacitive voltage divider is established, comprising the parasitic capacitance between the conductor and the sensor, and between the sensor to ground (through the user's body). When the tester detects current flowing through this divider, it indicates the presence of voltage.
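An order-of-magnitude estimate shows why this capacitive path is harmless yet detectable; all capacitance values below are rough assumptions:

```python
import math

# Current through the capacitive divider of a non-contact tester.
v = 230.0         # conductor voltage, V RMS
f = 50.0          # mains frequency, Hz
c_sense = 1e-12   # wire-to-sensor coupling, ~1 pF (assumed)
c_body = 100e-12  # user's body to ground, ~100 pF (assumed)

c_series = 1 / (1 / c_sense + 1 / c_body)   # series combination
i = v * 2 * math.pi * f * c_series
print(round(i * 1e9), "nA")   # ~72 nA: far too small to feel,
                              # but easy for an amplifier to detect
```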
Some amplified testers will give a stronger indication (brighter light or louder buzz) to gauge relative strength of the detected field, thus giving some clues about the location of an energized object. Other testers give only a simple on/off indication of a detected electric field. Professional-grade testers will also have a feature to reassure the user that the battery and lamp are working.
Voltage detector pens are made for either line-voltage or lower-voltage (around 50 volt) ranges. A tester intended for mains-voltage detection may not provide any indication on lower-voltage control circuits such as those used for doorbells or HVAC control.
Unlike tong ammeters which sense changing magnetic fields, these detectors can be used even if no current is flowing through the wire in question, because they sense the alternating electric field radiating from the AC voltage on the conductor.
A non-contact tester which senses electric fields cannot detect voltage inside shielded or armored cables (a fundamental limitation due to the Faraday cage effect). Another limitation is that DC voltage cannot be detected by this method, since DC current does not pass through capacitors (in the steady state), so the tester is not activated.
These types of testers can be used on series-connected strings of mini Christmas lights to detect which bulb has failed and broken the circuit, causing the set (or a section of it) not to light. By pointing the end of the detector at the tip of each bulb, it can be determined whether it is still connected at least on one side. The first bulb which does not register is likely the one just past the problem bulb. (Burnt-out bulbs will still show as good, if there is a bypass shunt which completes the circuit.) Flipping the set's plug over and reinserting it in the outlet will cause the opposite end of the set or circuit to register instead.
Receptacle tester
A receptacle tester (outlet tester or socket tester) plugs into an outlet, and can detect some types of wiring errors. The particular error in wiring is shown by various combinations of three lights. Detectable errors include reversed hot/neutral, missing electrical ground or neutral, and others. However, leakage current through surge protective metal-oxide varistors connected between neutral and ground of a power strip can give a false indication that a ground connection exists.
Continuity tester lights
A lamp and battery can be used to test for contact closure or wire continuity. Care must be taken to ensure that all circuits are completely de-energized before use of a continuity tester lamp, or the lamp will be destroyed. Sometimes a flashlight (torch) is field-modified or factory-manufactured with test leads, to allow the flashlight to be used as a continuity tester.
See also
Solenoid voltmeter
References
External links
Recall of one model of voltage detector pen
Electrical test equipment
Electronic test equipment | Test light | Technology,Engineering | 1,896 |
52,218,795 | https://en.wikipedia.org/wiki/Transcription%20factor%20binding%20site%20databases | Transcription factors are proteins that bind genomic regulatory sites. Identification of genomic regulatory elements is essential for understanding the dynamics of developmental, physiological and pathological processes. Recent advances in chromatin immunoprecipitation followed by sequencing (ChIP-seq) have provided powerful ways to identify genome-wide profiling of DNA-binding proteins and histone modifications. The application of ChIP-seq methods has reliably discovered transcription factor binding sites and histone modification sites.
Transcription factor binding site databases
A comprehensive list of transcription factor binding site (TFBS) databases based on ChIP-seq data follows:
References
Bioinformatics
Genetics databases | Transcription factor binding site databases | Engineering,Biology | 135 |
1,088,035 | https://en.wikipedia.org/wiki/Sentence%20%28music%29 | In Western music theory, the term sentence is analogous to the way the term is used in linguistics, in that it usually refers to a complete, somewhat self-contained statement. Usually a sentence refers to musical spans towards the lower end of the durational scale; i.e. melodic or thematic entities well below the level of movement or section, but above the level of motif or measure. The term is usually encountered in discussions of thematic construction. In the last fifty years, an increasing number of theorists such as William Caplin have used the term to refer to a specific theme-type involving repetition and development.
Sentence as a metaphor for musical statement
Since the word "sentence" is borrowed from the study of (verbal) grammar—where its accepted meaning is one that does not admit of straightforward application to musical structures—its use in music has frequently been metaphorical. Especially before the latter half of the twentieth century, different musicians and theorists employ and define the term in different ways. For example, Stewart Macpherson defines a musical sentence as "the smallest period in a musical composition that can give in any sense the impression of a complete statement." It "may be defined as a period containing two or more phrases, and most frequently ending with some form of perfect cadence." Among the simplest examples he gives are what he calls "duple sentences": themes (from Mozart's D major Piano Sonata and Beethoven's Third Piano Concerto) in which we find pairs of "balanced" phrases (a four-bar "announcing phrase" ending in a half-cadence, followed by a four-bar "responsive phrase" ending with a perfect cadence); many modern theorists would call this kind of structure a period. Similarly, the Grove Dictionary of Music states that the term "sentence" "has much the same meaning as 'period', though it lacks the flexibility of the latter term."
Sentence as a specific form-type: Schoenberg tradition
Arnold Schoenberg applied the term "sentence" to a very specific structural type distinct from the antecedent-consequent period. In a sentence's first part, a statement of a "basic motive" is followed by a "complementary repetition" (e.g. the first, "tonic version", of the shape reappears in a "dominant version"); in its second part this material is subjected to "reduction" or "condensation" with the intention of bringing the statement to a properly "liquidated" state and cadential conclusion. The sentence was one of a number of basic form-types Schoenberg described through analysis; another was the period. In Schoenberg's view, "the sentence is a higher form of construction than the period. It not only makes a statement of an idea, but at once starts a kind of development".
Schoenberg's conception of the sentence has been widely adopted in music theory, and appears in many introductory music theory textbooks. While Schoenberg's conception of the sentence is traditionally used in analysis of music from the Classical period, it has also been applied to the Classical music of the 19th and 20th centuries, and to American popular songs from the early twentieth century.
See also
Period (music)
Phrase (music)
References
Sources
Nattiez, Jean-Jacques (1990). Music and Discourse: Toward a Semiology of Music (Musicologie générale et sémiologie, 1987). Translated by Carolyn Abbate.
Macpherson, Stewart (1930). Form in Music. London: Joseph Williams Ltd.
Caplin, William E. (1998). Classical Form: A Theory of Formal Functions. New York: Oxford University Press.
Schoenberg, Arnold (1967). Fundamentals of Musical Composition. London: Faber & Faber.
External links
Musical Phrases Unleashed: Basic Ideas and Sentences on Art of Composing
Formal sections in music analysis | Sentence (music) | Technology | 770 |
26,675,250 | https://en.wikipedia.org/wiki/Aegean%20numerals | Aegean numbers was an additive sign-value numeral system used by the Minoan and Mycenaean civilizations. They are attested in the Linear A and Linear B scripts. They may have survived in the Cypro-Minoan script, where a single sign with "100" value is attested so far on a large clay tablet from Enkomi.
Unicode
Aegean numbers are encoded in the Unicode block Aegean Numbers, which spans code points U+10100 to U+1013F.
See also
Linear A
Linear B
Greek numerals
References
External links
Open source font for rendering aegean numerals correctly - Google Noto Fonts
Aegean languages in the Bronze Age
Numeral systems
Linear B
Linear A | Aegean numerals | Mathematics | 118 |
6,452,667 | https://en.wikipedia.org/wiki/Expanded%20granular%20sludge%20bed%20digestion | An expanded granular sludge bed (EGSB) reactor is a variant of the upflow anaerobic sludge blanket digestion (UASB) concept for anaerobic wastewater treatment. The distinguishing feature is that a faster rate of upward-flow velocity is designed for the wastewater passing through the sludge bed. The increased flux permits partial expansion (fluidisation) of the granular sludge bed, improving wastewater-sludge contact as well as enhancing segregation of small inactive suspended particle from the sludge bed. The increased flow velocity is either accomplished by utilizing tall reactors, or by incorporating an effluent recycle (or both). A scheme depicting the EGSB design concept is shown in this EGSB diagram.
The EGSB design is appropriate for low strength soluble wastewaters (less than 1 to 2 g soluble COD/l) or for wastewaters that contain inert or poorly biodegradable suspended particles which should not be allowed to accumulate in the sludge bed.
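A back-of-envelope sketch of the recycle sizing implied by this design choice; the geometry, flows, and target velocity below are illustrative assumptions:

```python
import math

# Recycle flow needed to hold a target superficial upflow velocity.
d = 2.0          # reactor diameter, m (assumed)
q_in = 10.0      # influent flow, m3/h (assumed)
v_target = 6.0   # target upflow velocity, m/h (assumed; EGSB designs
                 # run at several m/h, well above typical UASB values)

area = math.pi * d**2 / 4            # cross-sectional area, m2
q_total = v_target * area            # total upward flow required, m3/h
q_recycle = max(0.0, q_total - q_in) # effluent recycle makes up the rest
print(round(area, 2), "m2")                  # 3.14 m2
print(round(q_recycle, 1), "m3/h recycle")   # ~8.8 m3/h
```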
See also
Anaerobic digester types
Biogas
List of wastewater treatment technologies
Water pollution
References
Anaerobic digester types
Environmental engineering
Water treatment | Expanded granular sludge bed digestion | Chemistry,Engineering,Environmental_science | 235 |
10,266,438 | https://en.wikipedia.org/wiki/MERODE | MERODE is an Object Oriented Enterprise Modeling method developed at KU Leuven (Belgium). Its name is the abbreviation of Model driven, Existence dependency Relation, Object oriented DEvelopment. MERODE is a method for creating domain models (also called conceptual models) as basis for building information systems making use of two prominent UML diagramming techniques - class diagram and state diagrams. Starting from a high-level PIM (close to a Computational Independent Model (CIM)) allows removing or hiding details irrelevant for a conceptual modelling view which makes the approach easier to understand.
The method is grounded in process algebra, which enables mathematical reasoning on models. Thanks to this, models can be checked for internal consistency and mutual completeness, i.e. inter/intra model consistency and syntactical quality. The automated reasoning ("consistency by construction") also caters for autocomplete functionality, which allows creating correct models faster.
A typical MERODE analysis or conceptualisation consists of three views or diagrams: an existence dependency graph (EDG), similar to a UML class diagram; a proprietary concept, namely the object event table (OET); and a group of finite state machines.
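As a purely illustrative sketch of what cross-view consistency checking can look like (the object types, event names, and the exact rules below are assumptions for the example, not MERODE's actual rule set):

```python
# Toy object-event table (OET) check in the spirit of MERODE.
EDG = {"Order": None, "OrderLine": "Order"}   # object type -> existence master

OET = {  # object type -> event types it participates in
    "Order": {"create_order", "modify_order", "end_order"},
    "OrderLine": {"create_line", "end_line", "end_order"},
}

def check(edg, oet):
    """Each object type needs a creating and an ending event, and an
    existence-dependent type must share its master's ending event
    (otherwise a dependent object could outlive its master)."""
    problems = []
    for obj, master in edg.items():
        events = oet.get(obj, set())
        if not any(e.startswith("create") for e in events):
            problems.append(f"{obj}: no creating event")
        if not any(e.startswith("end") for e in events):
            problems.append(f"{obj}: no ending event")
        if master and not (events & {e for e in oet[master]
                                     if e.startswith("end")}):
            problems.append(f"{obj}: does not end with master {master}")
    return problems or ["consistent"]

print(check(EDG, OET))   # ['consistent']
```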
MERODE fosters a model-driven engineering approach to software development. It targets platform independent domain models that are sufficiently complete for execution, i.e. transformation to platform-specific models and to code. In order to achieve automated transformation of models, MERODE limits the use of UML to a number of well-defined constructs with clear semantics and complements this with the notion of "existence dependency" and a proprietary approach to object interaction modelling.
MERODE-models can be created with the opensource case tool JMermaid. The tool also allows checking the models for consistency and readiness for transformation.
A companion code generator allows to generate a fully working prototype. One-click prototype production lowers the required skill-set for its useful application. By embedding the models into the application, the behaviour of the prototype can be traced back to the models, i.e. making it possible to validate the semantic quality of models. MERODE prototypes are augmented with feedback (textual and graphical) that links the test results to their causes in the model.
References
External links
More information on Merode
sourceforge website for Mermaid
Enterprise modelling | MERODE | Engineering | 472 |
75,478,671 | https://en.wikipedia.org/wiki/Cendakimab | Cendakimab (RPC4046; ABT 308; CC-93538) is a monoclonal antibody against interleukin 13. It is developed by Bristol Myers Squibb for eosinophilic esophagitis.
References
Drugs developed by Bristol Myers Squibb
Anti-interleukin monoclonal antibodies | Cendakimab | Chemistry | 75 |
57,800,229 | https://en.wikipedia.org/wiki/Cram%C3%A9r%27s%20theorem%20%28large%20deviations%29 | Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function of a series of iid random variables.
A weak version of this result was first shown by Harald Cramér in 1938.
Statement
The logarithmic moment generating function (which is the cumulant-generating function) of a random variable is defined as:

\Lambda(t) = \log \operatorname{E}\left[ \exp(t X_1) \right].

Let X_1, X_2, \dots be a sequence of iid real random variables with finite logarithmic moment generating function, i.e. \Lambda(t) < \infty for all t \in \mathbb{R}.
Then the Legendre transform of \Lambda:

\Lambda^*(x) := \sup_{t \in \mathbb{R}} \left( t x - \Lambda(t) \right)

satisfies

\lim_{n \to \infty} \frac{1}{n} \log P\left( \sum_{i=1}^{n} X_i \geq n x \right) = -\Lambda^*(x)

for all x > \operatorname{E}[X_1].
In the terminology of the theory of large deviations the result can be reformulated as follows:
If X_1, X_2, \dots is a sequence of iid random variables, then the distributions \mathcal{L}\left( \tfrac{1}{n} \sum_{i=1}^{n} X_i \right) satisfy a large deviation principle with rate function \Lambda^*.
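A worked example: for a standard normal variable, \Lambda(t) = t^2/2, and the Legendre transform can be computed symbolically; a minimal sympy sketch:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)

# Logarithmic MGF of a standard normal variable: Lambda(t) = t**2 / 2.
Lam = t**2 / 2

# Legendre transform: Lambda*(x) = sup_t (t*x - Lambda(t)).
# The concave objective is maximized where its t-derivative vanishes.
t_star = sp.solve(sp.Eq(sp.diff(t*x - Lam, t), 0), t)[0]   # t* = x
rate = sp.simplify(t_star * x - Lam.subs(t, t_star))

print(rate)   # x**2/2, the large-deviation rate function of N(0, 1)
```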
References
Large deviations theory
Probability theorems | Cramér's theorem (large deviations) | Mathematics | 174 |
31,387,682 | https://en.wikipedia.org/wiki/Dynamic%20insulation | Dynamic insulation is a form of insulation where cool outside air flowing through the thermal insulation in the envelope of a building will pick up heat from the insulation fibres. Buildings can be designed to exploit this to reduce the transmission heat loss (U-value) and to provide pre-warmed, draft free air to interior spaces. This is known as dynamic insulation since the U-value is no longer constant for a given wall or roof construction but varies with the speed of the air flowing through the insulation (climate adaptive building shell). Dynamic insulation is different from breathing walls. The positive aspects of dynamic insulation need to be weighed against the more conventional approach to building design which is to create an airtight envelope and provide appropriate ventilation using either natural ventilation or mechanical ventilation with heat recovery. The air-tight approach to building envelope design, unlike dynamic insulation, results in a building envelope that provides a consistent performance in terms of heat loss and risk of interstitial condensation that is independent of wind speed and direction. Under certain wind conditions a dynamically insulated building can have a higher heat transmission loss than an air-tight building with the same thickness of insulation. Often the air enters at about 15 °C.
Introduction
The primary function of the walls and roof of a building is to be wind and watertight. Depending on the function of the building there will be also a requirement to maintain the inside within a suitable temperature range in a way that minimises both the use of energy and the associated carbon dioxide emissions.
Dynamic insulation is normally implemented in timber frame walls and in ceilings. It turns on its head the long accepted wisdom of building designers and building services engineers to "build tight and ventilate right". It requires air permeable walls and/or roof/ceiling so that when the building is depressurised air can flow from outside to inside through the insulation in the wall or roof or ceiling (Figs 1 and 2). The following explanation of dynamic insulation will, for simplicity, be set in the context of temperate or cold climates where the main energy use is for heating rather than cooling the building. In hot climates it may have application in increasing the heat loss from the building.
As air flows inwards through the insulation it picks up, via the insulation fibres, the heat that is being conducted to the outside. Dynamic insulation is thus able to achieve the dual function of reducing the heat loss through the walls and/or roof whilst at the same time supplying pre-warmed air to the indoor spaces. Dynamic insulation would appear, therefore, to overcome the major disadvantage of airtight envelopes which is that the quality of the indoor air will deteriorate unless there is natural or mechanical ventilation. However, dynamic insulation also requires mechanical ventilation with heat recovery (MVHR) in order to recover the heat in the exhaust air.
For the air to be continually drawn through the walls and/or roof/ceiling, a fan is needed to hold the building at a pressure of 5 to 10 Pascals below the ambient pressure. The air that is being continuously drawn through the wall or roof needs to be continuously vented to outside. This represents a heat loss which must be recovered. An air-to-air heat exchanger (Fig 2) is the simplest way to do this.
[Figure annotations: air-tight timber frame construction; air-permeable wall construction.]
Science of dynamic insulation
All the main features of dynamic insulation can be understood by considering the ideal case of one-dimensional steady state heat conduction and air flow through a uniform sample of air permeable insulation. Equation (1), which determines the temperature T at a distance x measured from the cold side of the insulation, is derived from the requirement that the total net flow of conduction and convective heat across a small element of insulation is constant:

\frac{d^2 T}{dx^2} - \frac{u \rho_a c_a}{\lambda_a} \frac{dT}{dx} = 0      (1)
where
u = air speed through the insulation (m/s)
ca = specific heat of air (J/kg K)
ρa = density of air (kg/m3)
λa = thermal conductivity of the insulation (W/m K)
For two- and three-dimensional geometries, computational fluid dynamics (CFD) tools are required to solve simultaneously the fluid flow and heat transfer equations through porous media. The idealised 1D model of dynamic insulation provides a great deal of physical insight into the conductive and convective heat transfer processes and provides a means of testing the validity of the results of CFD calculations. Furthermore, just as simple 1D steady state heat flow is assumed in the calculation of the heat transmission coefficients (U-values) that are used in the design, approval and building energy performance rating of buildings, so the simple 1D steady state model of dynamic insulation is adequate for designing and assessing the performance of a dynamically insulated building or element of the building.
Insulations such as polyurethane (PUR) boards, which due to their micro-structure are not air permeable, are not suitable for dynamic insulation. Insulations such as rock wool, glass wool, sheep's wool and cellulose are all air permeable and so can be used in a dynamically insulated envelope. In equation () the air speed through the insulation, u, is taken as positive when the air flow is in the opposite direction to the conductive heat flow (contra-flux). Equation () also applies to steady state heat flow in multi-layered walls.
Equation () has an analytical solution
For the boundary conditions:
T(x) = To at x = 0
T(x) = TL at x = L
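A plausible reconstruction of the missing solution, which satisfies both the governing equation and these boundary conditions:

\[ T(x) = T_o + \left(T_L - T_o\right)\frac{e^{x/A} - 1}{e^{L/A} - 1} \]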
where the parameter A, with dimensions of length, is defined by:
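Consistent with the stated dimensions of length and the variables defined earlier, A is presumably:

\[ A = \frac{\lambda}{\rho_a c_a u} \]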
The temperature profile calculated using equation () for air flowing through a slab of cellulose insulation 0.2 m thick, with one side at a temperature of 20 °C and the other at 0 °C, is shown in Fig 3. The thermal conductivity of the cellulose insulation was taken to be 0.04 W/m K.
Contra-flux
Fig 3 shows the typical behaviour of the temperature profile through dynamic insulation where the air flows in the opposite direction to the heat flux. As the air flow increases from zero, the temperature profile becomes increasingly curved. On the cold side of the insulation (x/L = 0) the temperature gradient becomes increasingly horizontal. As the conduction heat flow is proportional to the temperature gradient, the slope of the temperature profile on the cold side is a direct indication of the conduction heat loss through a wall or roof. On the cold side of the insulation the temperature gradient is close to zero, which is the basis for the claim often made that dynamic insulation can achieve a U-value of zero W/m2K.
On the warm side of the insulation the temperature gradient gets steeper with increasing air flow. This implies heat is flowing into the wall at a greater rate than for conventional insulation (air speed = 0 mm/s). For the case shown of air flowing through the insulation at 1 mm/s, the temperature gradient on the warm side of the insulation (x/L = 1) is 621 °C/m, which compares with only 100 °C/m for conventional insulation. This implies that with an air flow of 1 mm/s the inner surface is absorbing about six times as much heat as with conventional insulation.
A consequence of this is that considerably more heat has to be put into the wall if there is air flowing through it from outside; specifically, a space heating system some six times larger than that for a conventionally insulated house would be needed. It is frequently stated that in dynamic insulation the outside air is being warmed by heat that would be lost in any case, the implication being that the outside air is warmed by "free" heat. In fact, the heat flow into the wall increases with air speed, as evidenced by the decreasing temperature of the inner surface (Table 2 and Fig 4 below). A dynamically insulated house also requires an air-to-air heat exchanger, as does an airtight house; the latter has the further advantage that, if it is well insulated, it will require only a minimal space heating system.
The temperature gradient at any point in the dynamic insulation can be obtained by differentiating equation ()
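Differentiating the reconstructed solution above gives, plausibly:

\[ \frac{dT}{dx} = \frac{T_L - T_o}{A}\,\frac{e^{x/A}}{e^{L/A} - 1} \]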
From this the temperature gradient on the cold side of the insulation (x = 0) is given by
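\[ \left.\frac{dT}{dx}\right|_{x=0} = \frac{T_L - T_o}{A\left(e^{L/A} - 1\right)} \]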
and the temperature gradient on the warm side of the insulation (x = L) is given by
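\[ \left.\frac{dT}{dx}\right|_{x=L} = \frac{\left(T_L - T_o\right)e^{L/A}}{A\left(e^{L/A} - 1\right)} \]

(With an air speed of 1 mm/s these reconstructed expressions reproduce the warm-side gradient of roughly 620 °C/m quoted above.)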
From the temperature gradient on the cold side of the insulation (equation ()) a transmission heat loss coefficient, or U-value, for a dynamically insulated wall, U_dyn, can be calculated (Table 1)
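Taking the cold-side conduction flux per unit overall temperature difference, this is presumably:

\[ U_{dyn} = \frac{\lambda\,(dT/dx)|_{x=0}}{T_L - T_o} = \frac{\lambda}{A\left(e^{L/A} - 1\right)} \]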
This definition of dynamic U-value would appear to be consistent with Wallenten's definition.
The ratio of the dynamic U-value to the static U-value (u=0 m/s) is
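With U_static = λ/L, this ratio is presumably:

\[ \frac{U_{dyn}}{U_{static}} = \frac{L/A}{e^{L/A} - 1} \]

which tends to 1 as the air speed tends to zero and falls roughly exponentially at higher air speeds, matching the behaviour described below.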
Table 1 Dynamic U-value
With this definition, the U-value of the dynamic wall decreases exponentially with increasing air speed.
As stated above, the conductive heat flow into the insulation on the warm side is very much greater than that leaving the cold side: in this case the warm-side flux is 621 × 0.04 = 24.84 W/m2 against 0.0504 W/m2 leaving the cold side, a factor of 493, for an air speed of 1 mm/s (Table 1). This imbalance in conductive heat flow is what raises the temperature of the incoming air.
This large heat flow into the wall has a further consequence. At the surface of a wall, floor or ceiling there is a thermal resistance which accounts for the convective and radiant heat transfer at these surfaces. For a vertical internal surface this thermal resistance has a value of 0.13 m2 K/W. In a dynamically insulated wall, as the conduction heat flow into the wall increases, so does the temperature drop across this internal thermal resistance: the wall surface temperature becomes increasingly cold (Table 2). The temperature profiles through dynamic insulation, taking into account the decrease in surface temperature with increasing air flow, are shown in Fig 4.
Table 2 Temperature drop across air film thermal resistance
As the operative temperature of a room is a combination of the air temperature and the mean temperature of all the surfaces in the room, this implies that people will feel increasingly cool as the air flow through the wall increases. Occupants may be tempted to turn up the room thermostat to compensate, thereby increasing the heat loss.
Pro-flux
Fig 5 shows the typical behaviour of the dynamic insulation temperature profile when the air flows in the same direction as the conductive heat flow (pro-flux). As air at room temperature flows outwards with increasing speed, the temperature profile becomes increasingly curved. On the warm side of the insulation the temperature gradient becomes increasingly horizontal, as the warm air prevents the insulation cooling down in the linear way that would occur with no air flow. The conductive heat flow into the wall on the warm side is thus very much less than that for conventional insulation; this does not, however, mean that the transmission heat loss for the insulation is low.
On the cold side of the insulation the temperature gradient gets steeper with increasing air outward flow. This is because the air, having now cooled, is no longer able to transfer heat to the insulation fibres. In pro-flux mode heat is flowing out of the wall at a greater rate than the case for conventional insulation. Warm moist air flowing out through the insulation and cooling rapidly increases the risk of condensation occurring within the insulation which will degrade the thermal performance of the wall and could, if prolonged, lead to mould growth and timber decay.
How the heat loss coefficient (W/m2K) at the outer or cold surface of the insulation varies with air flow through the insulation is shown in Fig 6. When the air, which is also cold, flows inwards (positive air speed), the heat loss decreases from that of conventional insulation towards zero. However, when warm air flows outwards through the insulation (negative air speed), the heat losses increase dramatically. This is why in a conventionally insulated building it is desirable to make the envelope airtight. In a dynamically insulated wall it is necessary to ensure the air flow is inward at all points of the building under all wind speeds and directions.
Influence of the wind
In general, when the wind blows on a building, the air pressure, P_w, varies over the whole building surface (Fig 7).
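The pressure relation is presumably the standard wind-pressure expression, with ρ_a the air density and v the wind speed at a reference height (both assumptions here, since only P_o and C_p are defined below):

\[ P_w = P_o + \tfrac{1}{2}\,C_p\,\rho_a\,v^{2} \]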
where
P_o – a reference pressure (Pa)
C_p – wind pressure coefficient (dimensionless)
Liddament and CIBSE provide approximate wind pressure coefficient data for low rise buildings (up to 3 storeys). For a square plan building on an exposed site, with the wind blowing directly on to the face of the building, the wind pressure coefficients are as shown in Fig 8. For a wind speed of 5.7 m/s at ridge height (taken as 8 m) there is zero pressure difference across the side walls when the building is depressurised to -10 Pa, so these walls behave as conventional (static) insulation with a U-value of 0.2 W/(m2K). The insulation in the windward and leeward walls behaves dynamically in the contra-flux mode, with U-values of 0.0008 W/(m2K) and 0.1 W/(m2K) respectively. Since the building has a square footprint, the average U-value for the four walls is (0.0008 + 0.1 + 0.2 + 0.2)/4 = 0.1252 W/(m2K). For other wind speeds and directions, the U-values will be different.
For wind speeds greater than 5.7 m/s at ridge height the side walls are in pro-flux mode, with a U-value that increases dramatically with wind speed (Fig 6). At wind speeds greater than 9.0 m/s at ridge height the leeward wall also switches from contra-flux to pro-flux mode. The average U-value for the four walls is now 0.36 W/(m2K), significantly greater than the 0.2 W/(m2K) of an air-tight construction. These changes from contra-flux to pro-flux mode could be delayed by depressurising the building below -10 Pa.
By locating this building at a particular geographical site, wind speed data for that site may be used to estimate the proportion of the year during which one or more of the walls will operate in the risky, high-heat-loss pro-flux mode. From the Rayleigh distribution of wind speed at the site, it is possible to estimate the number of hours in a year during which the wind speed at a height of 10.0 m exceeds 7.83 m/s (the equivalent of 5.7 m/s at the ridge height of 8.0 m). This is the total time during an average year for which a building with dynamically insulated walls has significant heat losses.
If, by way of example, the building in Fig 8 were located in Footdee, Aberdeen, the Ordnance Survey Landranger grid reference is NJ955065. Entering NJ9506 into the UK wind speed database returns an average annual wind speed of 5.8 m/s at a height of 10 m for this site. The Rayleigh distribution for this mean wind speed indicates that wind speeds in excess of 8 m/s are likely to occur for 2348 hours in the year, or about 27% of the year. The wind pressure coefficients for the walls of the building also vary with wind direction, which changes throughout the year. Nevertheless, the above calculations indicate that a square plan building of 2 storeys located in Footdee, Aberdeen could have one or more of its walls operating in the risky, high-heat-loss pro-flux mode for about a quarter of the year.
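As a rough cross-check, the exceedance time can be estimated from the standard Rayleigh wind-speed model. A minimal Python sketch, using the mean speed and threshold quoted above; this standard formulation gives roughly 2000 hours (about 22%), the same order as the 2348 hours quoted, with the difference presumably down to the particular database and formulation used:

import math

def rayleigh_exceedance_hours(mean_speed, threshold, hours_per_year=8760):
    # For a Rayleigh wind-speed distribution with annual mean V_m,
    # the probability of exceeding v is exp(-(pi/4) * (v / V_m)**2).
    p_exceed = math.exp(-(math.pi / 4.0) * (threshold / mean_speed) ** 2)
    return p_exceed * hours_per_year

hours = rayleigh_exceedance_hours(5.8, 8.0)  # Footdee: mean 5.8 m/s at 10 m
print(f"{hours:.0f} h/year, {100.0 * hours / 8760:.0f}% of the year")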
A more robust way of introducing dynamic insulation to a building, which avoids the pressure variation around the building envelope, is to make use of the fact that in a ventilated roof space the pressure is relatively uniform over the ceiling (Fig 9). Thus a building with a dynamically insulated ceiling would offer consistent performance independent of varying wind speed and direction.
Air control layer
The maximum depressurisation for a dynamically insulated building is normally limited to 10 Pa in order to avoid doors slamming shut or becoming difficult to open. Dalehaug also recommended that the pressure difference through the construction at the design minimum air flow (> 0.5 m3/m2h) should be about 5 Pa. The function of the air control layer (Fig 1) in a dynamically insulated wall or ceiling is to provide sufficient resistance to the air flow to achieve the required pressure drop at the design air flow rate. The air control layer must therefore have a suitable air permeability; this is the key to making dynamic insulation work.
The permeability of a material to air flow, Φ (m2/hPa), is defined as the volume of air that flows through a cube of material 1 m × 1 m × 1 m in one hour under a pressure difference of 1 Pa.
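From this definition and the stated units, the defining relation is presumably:

\[ \Phi = \frac{V'\,L}{A\,\Delta P} \]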
where
A – area of material through which air flows (m2)
L – thickness of material through which air flows (m)
V' – volume flow rate of air (m3/h)
ΔP – pressure difference across the thickness L of material (Pa)
Equation () is a simplified form of Darcy's Law: in building applications the air is at ambient pressure and temperature, and small changes in the viscosity of air are not significant. Darcy's Law can be used to calculate the air permeability of a porous medium if the intrinsic permeability of the medium (m2) is known.
The air permeability of some materials that could be used in dynamically insulated walls or ceiling are listed in Table 3. Air permeability data is crucial to the selection of the correct material for the air control layer. Further sources of air permeability data include ASHRAE and Kumaran.
Table 3: Measured Air Permeability of Building Materials
(1) Pressure drop calculated at flow rate of 1 m3/m2h
Design of a dynamically insulated building
The application of the theory of dynamic insulation is best explained by way of an example. Assume a house of 100 m2 floor area with a dynamically insulated ceiling. Putting dynamic insulation in the ceiling effectively limits the house to a single storey.
The first step is to decide on an appropriate air change rate for good air quality. As this air flow will be supplied partly through the dynamically insulated ceiling and partly through a mechanical ventilation and heat recovery (MVHR) system, energy loss is not a major concern, so 1 air change per hour (ach) will be assumed. If the floor to ceiling height is 2.4 m, this implies an air flow rate of 240 m3/h.
Next, the material for the air control layer is chosen to provide a suitable air flow rate at the chosen depressurisation, taken as 10 Pa in this case. (Equally, the air flow rate could be determined from the desired U-value at the depressurisation of 10 Pa.) From Table 3, fibreboard has an appropriate air permeability of 1.34×10−3 m2/hPa.
For a 12 mm thick sheet of fibreboard this gives, at the maximum pressure difference of 10 Pa, an air flow rate of 1.12 m3/h per m2 of ceiling, equivalent to an air speed through the ceiling of 1.12 m/h or 0.31 mm/s. The 100 m2 ceiling will thus provide 112 m3/h, and the air-to-air heat exchanger will therefore provide the balance of 128 m3/h.
Dynamic insulation works best with a good thickness of insulation, so taking 200 mm of cellulose insulation (λ = 0.04 W/m K), the dynamic U-value for an air flow of 0.31 mm/s is calculated using equation () above to be 0.066 W/m2K. If a lower dynamic U-value is required, a material with a higher air permeability than fibreboard would need to be selected for the air control layer, so that a higher air speed through the insulation is achieved at 10 Pa.
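A minimal sketch of the two design calculations just described, assuming standard air properties (density 1.2 kg/m3 and specific heat 1005 J/kg K, neither given in the text). It reproduces the quoted flow rate and air speed, and gives a dynamic U-value of about 0.068 W/m2K, close to the quoted 0.066; the small difference reflects the air properties assumed:

import math

# Air control layer: 12 mm fibreboard, permeability 1.34e-3 m2/(h Pa), 10 Pa drop
phi, thickness, dP = 1.34e-3, 0.012, 10.0
flow = phi * dP / thickness      # air flow per m2 of ceiling (m3/(m2 h)) -> 1.12
u = flow / 3600.0                # air speed through the insulation (m/s) -> 0.31 mm/s

# Dynamic U-value of 200 mm cellulose at this air speed
lam, L = 0.04, 0.2               # thermal conductivity (W/m K), thickness (m)
rho_a, c_a = 1.2, 1005.0         # assumed air density (kg/m3) and specific heat (J/kg K)
A = lam / (rho_a * c_a * u)      # characteristic length A = lambda/(rho c u) (m)
U_dyn = lam / (A * math.expm1(L / A))   # expm1(x) = exp(x) - 1
print(f"flow = {flow:.2f} m3/(m2 h), u = {u * 1000:.2f} mm/s, U_dyn = {U_dyn:.3f} W/m2K")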
The final step would be to select an air-to-air heat exchanger that had a good heat recovery efficiency with a supply air flow rate of 128 m3/h and an extract air flow rate of 240 m3/h.
See also
List of insulation material
Wind loads on buildings
Laminar flow
Porous medium
References
External links
“OpenAir@RGU” Further resources on the theory and applications of dynamic insulation can be found on OpenAIR@RGU, the open access institutional repository of Robert Gordon University.
Insulators
Engineering thermodynamics | Dynamic insulation | Physics,Chemistry,Engineering | 4,177 |
61,532,875 | https://en.wikipedia.org/wiki/C14H19N3O |
The molecular formula C14H19N3O (molar mass: 245.320 g/mol, exact mass: 245.1528 u) may refer to:
Methafurylene, an antihistamine
Oxolamine, a cough suppressant
Molecular formulas | C14H19N3O | Physics,Chemistry | 74 |
740,872 | https://en.wikipedia.org/wiki/Calcium%20aluminosilicate | Calcium aluminosilicate, an aluminosilicate compound with calcium cations, most typically has formula CaAl2Si2O8.
In minerals, as a feldspar, it can be found as anorthite, an end-member of the plagioclase series.
Uses
As a food additive, it is sometimes designated E556. The FDA permits it, at not more than 2% by weight, as an anti-caking agent for table salt, and as an ingredient in vanilla powder.
References
Aluminosilicates
Calcium compounds
E-number additives | Calcium aluminosilicate | Chemistry | 124 |
1,416,046 | https://en.wikipedia.org/wiki/History%20of%20chemistry | The history of chemistry represents a time span from ancient history to the present. By 1000 BC, civilizations used technologies that would eventually form the basis of the various branches of chemistry. Examples include the discovery of fire, extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.
The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and its transformations. However, by performing experiments and recording the results, alchemists set the stage for modern chemistry.
The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.
Ancient history
Early humans
Fire
Arguably the first chemical reaction used in a controlled manner was fire. However, for millennia fire was seen simply as a mystical force that could transform one substance into another (burning wood, or boiling water) while producing heat and light. Fire affected many aspects of early societies. These ranged from the simplest facets of everyday life, such as cooking and habitat heating and lighting, to more advanced uses, such as making pottery and bricks and melting of metals to make tools. It was fire that led to the discovery of glass and the purification of metals; this was followed by the rise of metallurgy.
Paint
A 100,000-year-old ochre-processing workshop was found at Blombos Cave in South Africa, indicating that early humans had an elementary knowledge of mineral processing. Cave wall paintings made by early humans from mixtures of animal blood and other liquids likewise indicate a rudimentary knowledge of chemistry.
Early metallurgy
The earliest recorded metal employed by humans seems to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, around 40,000 BC. The earliest gold metallurgy is known from the Varna culture in Bulgaria, dating from c. 4600 BC.
Silver, copper, tin and meteoric iron can also be found native, allowing a limited amount of metalworking in ancient cultures. Egyptian weapons made from meteoric iron in about 3000 BC were highly prized as "daggers from Heaven".
During the early stages of metallurgy, methods of purification of metals were sought, and gold, known in ancient Egypt as early as 2900 BC, became a precious metal.
Bronze Age
Tin, lead, and copper smelting
Certain metals can be recovered from their ores by simply heating the rocks in a fire: notably tin, lead and (at a higher temperature) copper. This process is known as smelting. The first evidence of this extractive metallurgy dates from the 6th and 5th millennia BC, and was found in the archaeological sites of the Vinča culture, Majdanpek, Jarmovac and Pločnik in Serbia. The earliest copper smelting is found at the Belovode site; these examples include a copper axe from 5500 BC. Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). However, as often happens in the study of prehistoric times, the ultimate beginnings cannot be clearly defined and new discoveries are ongoing.
Bronze
These first metals were single elements, or else combinations as naturally occurred. By combining copper and tin, a superior metal could be made, an alloy called bronze. This was a major technological shift that began the Bronze Age about 3500 BC. The Bronze Age was a period in human cultural development when the most advanced metalworking (at least in systematic and widespread use) included techniques for smelting copper and tin from naturally occurring outcroppings of copper ores, and then smelting those ores to cast bronze. These naturally occurring ores typically included arsenic as a common impurity. Copper/tin ores are rare, as reflected in the absence of tin bronzes in western Asia before 3000 BC.
After the Bronze Age, the history of metallurgy was marked by armies seeking better weaponry. States in Eurasia prospered when they made the superior alloys, which, in turn, made better armor and better weapons.
The Chinese are credited with the first use of chromium to prevent rusting. Modern archaeologists discovered that bronze-tipped crossbow bolts at the tomb of Qin Shi Huang showed no sign of corrosion after more than 2,000 years, because they had been coated in chromium. Chromium was not used anywhere else until the experiments of French pharmacist and chemist Louis Nicolas Vauquelin (1763–1829) in the late 1790s. In multiple Warring States period tombs, sharp swords and other weapons were also found to be coated with 10 to 15 micrometers of chromium oxide, which has left them in pristine condition to this day.
Significant progress in metallurgy and alchemy was also made in ancient India.
Iron Age
Ferrous metallurgy
The extraction of iron from its ore into a workable metal is much more difficult than copper or tin. While iron is not better suited for tools than bronze (until steel was discovered), iron ore is much more abundant and common than either copper or tin, and therefore more often available locally, with no need to trade for it.
Iron working appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines.
Cast iron smelting, as well as the innovation of the blast furnace and the cupola furnace, was developed in ancient China during the Warring States period, when armies sought better weaponry and armor from state armories. Many other applications, practices, and devices associated with or involved in metallurgy were also established in ancient China, including the innovations of hydraulic-powered trip hammers and double-acting piston bellows.
The Iron Age is named after the advent of iron working (ferrous metallurgy). Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. These include the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia (Turkey), Ancient Nok, Carthage, the Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, ancient and medieval Japan, amongst others.
Classical antiquity and atomism
Philosophical attempts to rationalize why different substances have different properties (color, density, smell), exist in different states (gaseous, liquid, and solid), and react in a different manner when exposed to environments, for example to water or fire or temperature changes, led ancient philosophers to postulate the first theories on nature and chemistry. The history of such philosophical theories that relate to chemistry can probably be traced back to every single ancient civilization. The common aspect in all these theories was the attempt to identify a small number of primary classical elements that make up all the various substances in nature. Substances like air, water, and soil/earth, energy forms, such as fire and light, and more abstract concepts such as thoughts, aether, and heaven, were common in ancient civilizations even in the absence of any cross-fertilization: for example ancient Greek, Indian, Mayan, and Chinese philosophies all considered air, water, earth and fire as primary elements.
Ancient world
Around 420 BC, Empedocles stated that all matter is made up of four elemental substances: earth, fire, air and water. The early theory of atomism can be traced back to ancient Greece. Greek atomism was made popular by the Greek philosopher Democritus, who declared that matter is composed of indivisible and indestructible particles called "atomos" around 380 BC. Earlier, Leucippus also declared that atoms were the most indivisible part of matter. This coincided with a similar declaration by the Indian philosopher Kanada in his Vaisheshika sutras around the same time period. Aristotle opposed the existence of atoms in 330 BC. A Greek text attributed to Polybus the physician (ca. 380 BC) argued that the human body is composed of four humours instead. Epicurus (fl. 300 BC) postulated a universe of indestructible atoms in which man himself is responsible for achieving a balanced life.
With the goal of explaining Epicurean philosophy to a Roman audience, the Roman poet and philosopher Lucretius wrote De rerum natura (On the Nature of Things) in the middle of the first century BC. In the work, Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena.
The earliest alchemists in the Western tradition seemed to have come from Greco-Roman Egypt in the first centuries AD. In addition to technical work, many of them invented chemical apparatuses. The bain-marie, or water bath, is named for Mary the Jewess. Her work also gives the first descriptions of the tribikos and kerotakis. Cleopatra the Alchemist described furnaces and has been credited with the invention of the alembic. Later, Zosimos of Panopolis wrote books on alchemy, which he called cheirokmeta, the Greek word for "things made by hand." These works include many references to recipes and procedures, as well as descriptions of instruments. Much of the early development of purification methods were described earlier by Pliny the Elder in his Naturalis Historia. He tried to explain those methods, as well as making acute observations of the state of many minerals.
Medieval alchemy
The elemental system used in medieval alchemy was developed primarily by the Persian or Arab alchemist Jābir ibn Hayyān and was rooted in the classical elements of Greek tradition. His system consisted of the four Aristotelian elements of air, earth, fire, and water in addition to two philosophical elements: sulphur, characterizing the principle of combustibility, "the stone which burns"; and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe and are of larger consideration within philosophical alchemy.
The three metallic principles (sulphur to flammability or combustion, mercury to volatility and stability, and salt to solidity) became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four-element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left the wood (in smoke) the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).
The philosopher's stone
Alchemy is defined by the Hermetic quest for the philosopher's stone, the study of which is steeped in symbolic mysticism, and differs greatly from modern science. Alchemists toiled to make transformations on an esoteric (spiritual) and/or exoteric (practical) level. It was the protoscientific, exoteric aspects of alchemy that contributed heavily to the evolution of chemistry in Greco-Roman Egypt, in the Islamic Golden Age, and then in Europe. Alchemy and chemistry share an interest in the composition and properties of matter, and until the 18th century they were not separate disciplines. The term chymistry has been used to describe the blend of alchemy and chemistry that existed before that time.
During the Renaissance, exoteric alchemy remained popular in the form of Paracelsian iatrochemistry, while spiritual alchemy flourished, realigned to its Platonic, Hermetic, and Gnostic roots. Consequently, the symbolic quest for the philosopher's stone was not superseded by scientific advances, and was still the domain of respected scientists and doctors until the early 18th century. Early modern alchemists who are renowned for their scientific contributions include Jan Baptist van Helmont, Robert Boyle, and Isaac Newton.
Alchemy in the Islamic world
In the Islamic World, the Muslims were translating the works of ancient Greek and Hellenistic philosophers into Arabic and were experimenting with scientific ideas. The Arabic works attributed to the 8th-century alchemist Jābir ibn Hayyān introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna disputed the theories of alchemy, particularly the theory of the transmutation of metals.
Problems encountered with alchemy
There were several problems with alchemy, as seen from today's standpoint. There was no systematic naming scheme for new compounds, and the language was esoteric and vague to the point that the terminologies meant different things to different people. In fact, according to The Fontana History of Chemistry (Brock, 1992):
The language of alchemy soon developed an arcane and secretive technical vocabulary designed to conceal information from the uninitiated. To a large degree, this language is incomprehensible to us today, though it is apparent that readers of Geoffrey Chaucer's Canon's Yeoman's Tale or audiences of Ben Jonson's The Alchemist were able to construe it sufficiently to laugh at it.
Chaucer's tale exposed the more fraudulent side of alchemy, especially the manufacture of counterfeit gold from cheap substances. Less than a century earlier, Dante Alighieri also demonstrated an awareness of this fraudulence, causing him to consign all alchemists to the Inferno in his writings. Soon afterwards, in 1317, the Avignon Pope John XXII ordered all alchemists to leave France for making counterfeit money. A law was passed in England in 1403 which made the "multiplication of metals" punishable by death. Despite these and other apparently extreme measures, alchemy did not die. Royalty and privileged classes still sought to discover the philosopher's stone and the elixir of life for themselves.
There was also no agreed-upon scientific method for making experiments reproducible. Indeed, many alchemists included in their methods irrelevant information such as the timing of the tides or the phases of the moon. The esoteric nature and codified vocabulary of alchemy appeared to be more useful in concealing the fact that they could not be sure of very much at all. As early as the 14th century, cracks seemed to grow in the facade of alchemy; and people became sceptical. Clearly, there needed to be a scientific method in which experiments could be repeated by other people, and results needed to be reported in a clear language that laid out both what was known and what was unknown.
17th and 18th centuries: Early chemistry
Practical attempts to improve the refining of ores and their extraction to smelt metals was an important source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his great work De re metallica in 1556. His work describes the highly developed and complex processes of mining metal ores, metal extraction and metallurgy of the time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. It is no coincidence that he gives numerous references to the earlier author, Pliny the Elder and his Naturalis Historia. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline.
In 1605, Sir Francis Bacon published The Proficience and Advancement of Learning, which contains a description of what would later be known as the scientific method. In the same year, Michal Sedziwój published the alchemical treatise A New Light of Alchemy, which proposed the existence of the "food of life" within air, much later recognized as oxygen. In 1615 Jean Beguin published the Tyrocinium Chymicum, an early chemistry textbook, in which he drew the first-ever chemical equation. In 1637 René Descartes published Discours de la méthode, which contains an outline of the scientific method.
The Flemish chemist Jan Baptist van Helmont's work Ortus medicinae was published posthumously in 1648; the book is cited by some as a major transitional work between alchemy and chemistry, and as an important influence on Robert Boyle. The book contains the results of numerous experiments and establishes an early version of the law of conservation of mass. Working during the time just after Paracelsus and iatrochemistry, van Helmont suggested that there are insubstantial substances other than air and coined a name for them – "gas", from the Greek word chaos. In addition to introducing the word "gas" into the vocabulary of scientists, van Helmont conducted several experiments involving gases. He is also remembered today largely for his ideas on spontaneous generation and his 5-year tree experiment, as well as being considered the founder of pneumatic chemistry.
Robert Boyle
Anglo-Irish chemist Robert Boyle (1627–1691) is considered to have initiated the gradual separation of chemistry from alchemy. Although still convinced of alchemy and skeptical of the classical elements, Boyle played a key part in elevating the "sacred art" into an independent, fundamental and philosophical discipline. He is best known for Boyle's law, which he presented in 1662, though he was not the first to discover it. The law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.
Boyle is also credited for his landmark publication The Sceptical Chymist (1661), which advocated for a rigorous approach to experimentation among chemists. In the work, Boyle questioned some commonly held alchemical theories and argued for practitioners to be more "philosophical" and less commercially focused. He rejected the classical four elements of earth, fire, air, and water, and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment.
Boyle also tried to purify chemicals to obtain reproducible reactions. He was a vocal proponent of the mechanical philosophy proposed by René Descartes to explain and quantify the physical properties and interactions of material substances. Boyle was an atomist, but favoured the word corpuscle over atoms. He commented that the finest division of matter where the properties are retained is at the level of corpuscles.
Boyle repeated the tree experiment of van Helmont, and was the first to use indicators which changed colors with acidity. He also performed numerous investigations with an air pump, and noted that the mercury fell as air was pumped out. He also observed that pumping the air out of a container would extinguish a flame and kill small animals placed inside. Through his works, Boyle helped to lay the foundations for the chemical revolution two centuries later.
Development and dismantling of phlogiston
In 1702, German chemist Georg Stahl coined the name "phlogiston" for the substance believed to be released in the process of burning. Around 1735, Swedish chemist Georg Brandt analyzed a dark blue pigment found in copper ore. Brandt demonstrated that the pigment contained a new element, later named cobalt. In 1751, a Swedish chemist and pupil of Stahl's named Axel Fredrik Cronstedt, identified an impurity in copper ore as a separate metallic element, which he named nickel. Cronstedt is one of the founders of modern mineralogy. Cronstedt also discovered the mineral scheelite in 1751, which he named tungsten, meaning "heavy stone" in Swedish.
In 1754, Scottish chemist Joseph Black isolated carbon dioxide, which he called "fixed air". In 1757, Louis Claude Cadet de Gassicourt, while investigating arsenic compounds, created Cadet's fuming liquid, later found to be cacodyl oxide and considered to be the first synthetic organometallic compound. In 1758, Joseph Black formulated the concept of latent heat to explain the thermochemistry of phase changes. In 1766, English chemist Henry Cavendish isolated hydrogen, which he called "inflammable air". Cavendish discovered hydrogen as a colorless, odourless gas that burns and can form an explosive mixture with air, and published a paper on the production of water by burning inflammable air (that is, hydrogen) in dephlogisticated air (now known to be oxygen), the latter a constituent of atmospheric air (phlogiston theory).
In 1773, Swedish German chemist Carl Wilhelm Scheele discovered oxygen, which he called "fire air", but did not immediately publish his achievement. In 1774, English chemist Joseph Priestley independently isolated oxygen in its gaseous state, calling it "dephlogisticated air", and published his work before Scheele. During his lifetime, Priestley's considerable scientific reputation rested on his invention of soda water, his writings on electricity, and his discovery of several "airs" (gases), the most famous being what Priestley dubbed "dephlogisticated air" (oxygen). However, Priestley's determination to defend phlogiston theory and to reject what would become the chemical revolution eventually left him isolated within the scientific community.
In 1781, Carl Wilhelm Scheele discovered that a new acid, tungstic acid, could be made from Cronstedt's scheelite (at the time named tungsten). Scheele and Torbern Bergman suggested that it might be possible to obtain a new metal by reducing this acid. In 1783, José and Fausto Elhuyar found an acid made from wolframite that was identical to tungstic acid. Later that year, in Spain, the brothers succeeded in isolating the metal now known as tungsten by reduction of this acid with charcoal, and they are credited with the discovery of the element.
Volta and the Voltaic pile
Italian physicist Alessandro Volta constructed a device for accumulating a large charge by a series of inductions and groundings. He investigated Luigi Galvani's 1780s discovery of "animal electricity", and found that the electric current was generated by the contact of dissimilar metals, the frog leg acting only as a detector. Volta demonstrated in 1794 that when two metals and a brine-soaked cloth or cardboard are arranged in a circuit they produce an electric current.
In 1800, Volta stacked several pairs of alternating copper (or silver) and zinc discs (electrodes) separated by cloth or cardboard soaked in brine (electrolyte) to increase the electrolyte conductivity. When the top and bottom contacts were connected by a wire, an electric current flowed through this voltaic pile and the connecting wire. Thus, Volta is credited with constructing the first electrical battery to produce electricity.
Thus, Volta is considered to be the founder of the discipline of electrochemistry. A Galvanic cell (or voltaic cell) is an electrochemical cell that derives electrical energy from a spontaneous redox reaction taking place within the cell. It generally consists of two different metals connected by a salt bridge, or individual half-cells separated by a porous membrane.
Antoine-Laurent de Lavoisier
Antoine-Laurent de Lavoisier demonstrated with careful measurements that transmutation of water to earth was not possible, but that the sediment observed from boiling water came from the container. He burnt phosphorus and sulfur in air, and proved that the products weighed more than the original samples, with the mass gained being lost from the air. Thus, in 1789, he established the Law of Conservation of Mass, which is also called "Lavoisier's Law."
Repeating the experiments of Priestley, he demonstrated that air is composed of two parts, one of which combines with metals to form calxes. In 1778, he demonstrated that the "air" responsible for combustion was also the source of acidity. The next year, he named this portion oxygen (Greek for acid-former), and the other azote (Greek for no life). Because of his more thorough characterization of it as an element, Lavoisier thus has a claim to the discovery of oxygen along with Priestley and Scheele. He also discovered that the "inflammable air" discovered by Cavendish – which he termed hydrogen (Greek for water-former) – combined with oxygen to produce a dew, as Priestley had reported, which appeared to be water. In 1783, Lavoisier showed the phlogiston theory of combustion to be inconsistent. Mikhail Lomonosov independently established a tradition of chemistry in Russia in the 18th century; he also rejected the phlogiston theory, and anticipated the kinetic theory of gases. Lomonosov regarded heat as a form of motion, and stated the idea of conservation of matter.
Lavoisier worked with Claude Louis Berthollet and others to devise a system of chemical nomenclature, which serves as the basis of the modern system of naming chemical compounds. In his Methods of Chemical Nomenclature (1787), Lavoisier invented the system of naming and classification still largely in use today, including names such as sulfuric acid, sulfates, and sulfites. In 1785, Berthollet was the first to introduce the use of chlorine gas as a commercial bleach. In the same year he first determined the elemental composition of the gas ammonia. Berthollet first produced a modern bleaching liquid in 1789 by passing chlorine gas through a solution of sodium carbonate – the result was a weak solution of sodium hypochlorite. Another strong chlorine oxidant and bleach which he investigated and was the first to produce, potassium chlorate (KClO3), is known as Berthollet's Salt. Berthollet is also known for his scientific contributions to the theory of chemical equilibrium via the mechanism of reversible reactions.
Lavoisier's Traité élémentaire de chimie (Elementary Treatise of Chemistry, 1789) was the first modern chemical textbook. It presented a unified view of new theories of chemistry, contained a clear statement of the Law of Conservation of Mass, and denied the existence of phlogiston. In addition, it contained a list of elements, or substances that could not be broken down further, which included oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc, and sulfur. His list, however, also included light and caloric, which he believed to be material substances. In the work, Lavoisier underscored the observational basis of his chemistry, stating "I have tried...to arrive at the truth by linking up facts; to suppress as much as possible the use of reasoning, which is often an unreliable instrument which deceives us, in order to follow as much as possible the torch of observation and of experiment." Nevertheless, he believed that the real existence of atoms was philosophically impossible. Lavoisier demonstrated that organisms disassemble and reconstitute atmospheric air in the same manner as a burning body.
With Pierre-Simon Laplace, Lavoisier used a calorimeter to estimate the heat evolved per unit of carbon dioxide produced. They found the same ratio for a flame and animals, indicating that animals produced energy by a type of combustion. Lavoisier believed in the radical theory, which stated that radicals, which function as a single group in a chemical reaction, would combine with oxygen in reactions. He believed all acids contained oxygen. He also discovered that diamond is a crystalline form of carbon.
Although many of Lavoisier's partners were influential for the advancement of chemistry as a scientific discipline, his wife Marie-Anne Lavoisier was arguably the most influential of them all. Upon their marriage, Mme. Lavoisier began to study chemistry, English, and drawing in order to help her husband in his work either by translating papers into English, a language which Lavoisier did not know, or by keeping records and drawing the various apparatuses that Lavoisier used in his labs. Through her ability to read and translate articles from Britain for her husband, Lavoisier had access to knowledge of many of the chemical advances happening outside of his lab. Furthermore, Mme. Lavoisier kept records of her husband's work and ensured that his works were published. The first sign of Marie-Anne's true potential as a chemist in Lavoisier's lab came when she was translating a book by the scientist Richard Kirwan. While translating, she stumbled upon and corrected multiple errors. When she presented her translation, along with her notes, to Lavoisier, her contributions led to Lavoisier's refutation of the theory of phlogiston.
Lavoisier made many fundamental contributions to the science of chemistry. Following his work, chemistry acquired a strict, quantitative nature, allowing reliable predictions to be made. The revolution in chemistry which he brought about was a result of a conscious effort to fit all experiments into the framework of a single theory. He established the consistent use of chemical balance, used oxygen to overthrow the phlogiston theory, and developed a new system of chemical nomenclature. Further potential contributions were cut short when Lavoisier was beheaded during the French Revolution.
19th century
Throughout the 19th century, chemistry was divided between those who followed the atomic theory of John Dalton and the energeticists, such as Wilhelm Ostwald and Ernst Mach. Although such proponents of the atomic theory as Amedeo Avogadro and Ludwig Boltzmann made great advances in explaining the behavior of gases, this dispute was not finally settled until Jean Perrin's experimental investigation of Einstein's atomic explanation of Brownian motion in the first decade of the 20th century.
Well before the dispute had been settled, many had already applied the concept of atomism to chemistry. A major example was the ion theory of Svante Arrhenius which anticipated ideas about atomic substructure that did not fully develop until the 20th century. Michael Faraday was another early worker, whose major contribution to chemistry was electrochemistry, in which (among other things) a certain quantity of electricity during electrolysis or electrodeposition of metals was shown to be associated with certain quantities of chemical elements, and fixed quantities of the elements therefore with each other, in specific ratios. These findings, like those of Dalton's combining ratios, were early clues to the atomic nature of matter.
John Dalton
In 1803, English meteorologist and chemist John Dalton proposed Dalton's law, which describes the relationship between the components in a mixture of gases and the relative pressure each contributes to that of the overall mixture. Discovered in 1801, this concept is also known as Dalton's law of partial pressures.
Dalton also proposed a modern atomic theory in 1803 which stated that all matter was composed of small indivisible particles termed atoms, atoms of a given element possess unique characteristics and weight, and three types of atoms exist: simple (elements), compound (simple molecules), and complex (complex molecules). In 1808, Dalton first published New System of Chemical Philosophy (1808–1827), in which he outlined the first modern scientific description of the atomic theory. This work identified chemical elements as a specific type of atom, therefore rejecting Newton's theory of chemical affinities.
Instead, Dalton inferred proportions of elements in compounds by taking ratios of the weights of reactants, setting the atomic weight of hydrogen to be identically one. Following Jeremias Benjamin Richter (known for introducing the term stoichiometry), he proposed that chemical elements combine in integral ratios. This is known as the law of multiple proportions or Dalton's law, and Dalton included a clear description of the law in his New System of Chemical Philosophy. The law of multiple proportions is one of the basic laws of stoichiometry used to establish the atomic theory. Despite the importance of the work as the first view of atoms as physically real entities and the introduction of a system of chemical symbols, New System of Chemical Philosophy devoted almost as much space to the caloric theory as to atomism.
French chemist Joseph Proust proposed the law of definite proportions, which states that a given compound always contains its constituent elements in fixed proportions, based on several experiments conducted between 1797 and 1804. Along with the law of multiple proportions, the law of definite proportions forms the basis of stoichiometry. The laws of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.
Jöns Jacob Berzelius
A Swedish chemist and disciple of Dalton, Jöns Jacob Berzelius embarked on a systematic program to try to make accurate and precise quantitative measurements and to ensure the purity of chemicals. Along with Lavoisier, Boyle, and Dalton, Berzelius is known as the father of modern chemistry. In 1828 he compiled a table of relative atomic weights, where oxygen was used as a standard, with its weight set at 100, and which included all of the elements known at the time. This work provided evidence in favor of Dalton's atomic theory – that inorganic chemical compounds are composed of atoms combined in whole number amounts. He determined the exact elementary constituents of a large number of compounds; the results strongly supported Proust's Law of Definite Proportions. In discovering that atomic weights are not integer multiples of the weight of hydrogen, Berzelius also disproved Prout's hypothesis that elements are built up from atoms of hydrogen.
Motivated by his extensive atomic weight determinations and by a desire to aid his experiments, he introduced the classical system of chemical symbols and notation with his 1808 publication Lärbok i Kemien, in which elements are abbreviated to one or two letters to make a distinct symbol from their Latin name. This system of chemical notation—in which the elements were given simple written labels, such as O for oxygen, or Fe for iron, with proportions denoted by numbers—is the same basic system used today. The only difference is that instead of the subscript number used today (e.g., H2O), Berzelius used a superscript (H²O). Berzelius is credited with identifying the chemical elements silicon, selenium, thorium, and cerium. Students working in Berzelius's laboratory also discovered lithium and vanadium.
Berzelius developed the radical theory of chemical combination, which holds that reactions occur as stable groups of atoms called radicals are exchanged between molecules. He believed that salts are compounds formed of acids and bases, and discovered that the anions in acids were attracted to a positive electrode (the anode), whereas the cations in a base were attracted to a negative electrode (the cathode). Berzelius did not believe in the Vitalism Theory, but instead in a regulative force which produced organization of tissues in an organism. Berzelius is also credited with originating the chemical terms "catalysis", "polymer", "isomer", and "allotrope", although his original definitions differ dramatically from modern usage. For example, he coined the term "polymer" in 1833 to describe organic compounds which shared identical empirical formulas but which differed in overall molecular weight, the larger of the compounds being described as "polymers" of the smallest. By this long-superseded, pre-structural definition, glucose (C6H12O6) was viewed as a polymer of formaldehyde (CH2O).
New elements and gas laws
English chemist Humphry Davy was a pioneer in the field of electrolysis, using Alessandro Volta's voltaic pile to split up common compounds and thus isolate a series of new elements. He went on to electrolyse molten salts and discovered several new metals, especially sodium and potassium, highly reactive elements known as the alkali metals. Potassium, the first metal that was isolated by electrolysis, was discovered in 1807 by Davy, who derived it from caustic potash (KOH). Before the 19th century, no distinction was made between potassium and sodium. Sodium was first isolated by Davy in the same year by passing an electric current through molten sodium hydroxide (NaOH). When Davy heard that Berzelius and Pontin prepared calcium amalgam by electrolyzing lime in mercury, he tried it himself. Davy was successful, and discovered calcium in 1808 by electrolyzing a mixture of lime and mercuric oxide. He worked with electrolysis throughout his life and, in 1808, he isolated magnesium, strontium and barium.
Davy also experimented with gases by inhaling them. This experimental procedure nearly proved fatal on several occasions, but led to the discovery of the unusual effects of nitrous oxide, which came to be known as laughing gas. Chlorine was discovered in 1774 by Swedish chemist Carl Wilhelm Scheele, who called it "dephlogisticated marine acid" (see phlogiston theory) and mistakenly thought it contained oxygen. Scheele observed several properties of chlorine gas, such as its bleaching effect on litmus, its deadly effect on insects, its yellow-green colour, and the similarity of its smell to that of aqua regia. However, Scheele was unable to publish his findings at the time. In 1810, chlorine was given its current name by Humphry Davy (derived from the Greek word for green), who insisted that chlorine was in fact an element. He also showed that oxygen could not be obtained from the substance known as oxymuriatic acid (HCl solution). This discovery overturned Lavoisier's definition of acids as compounds of oxygen. Davy was a popular lecturer and able experimenter.
French chemist Joseph Louis Gay-Lussac shared the interest of Lavoisier and others in the quantitative study of the properties of gases. From his first major program of research in 1801–1802, he concluded that equal volumes of all gases expand equally with the same increase in temperature: this conclusion is usually called "Charles's law", as Gay-Lussac gave credit to Jacques Charles, who had arrived at nearly the same conclusion in the 1780s but had not published it. The law was independently discovered by British natural philosopher John Dalton by 1801, although Dalton's description was less thorough than Gay-Lussac's. In 1804 Gay-Lussac made several daring ascents of over 7,000 meters above sea level in hydrogen-filled balloons—a feat not equaled for another 50 years—that allowed him to investigate other aspects of gases. Not only did he gather magnetic measurements at various altitudes, but he also took pressure, temperature, and humidity measurements and samples of air, which he later analyzed chemically.
In 1808 Gay-Lussac announced what was probably his single greatest achievement: from his own and others' experiments he deduced that gases at constant temperature and pressure combine in simple numerical proportions by volume, and the resulting product or products—if gases—also bear a simple proportion by volume to the volumes of the reactants. In other words, gases under equal conditions of temperature and pressure react with one another in volume ratios of small whole numbers. This conclusion subsequently became known as "Gay-Lussac's law" or the "Law of Combining Volumes". With his fellow professor at the École Polytechnique, Louis Jacques Thénard, Gay-Lussac also participated in early electrochemical research, investigating the elements discovered by its means. Among other achievements, they decomposed boric acid by using fused potassium, thus discovering the element boron. The two also took part in contemporary debates that modified Lavoisier's definition of acids and furthered his program of analyzing organic compounds for their oxygen and hydrogen content.
The element iodine was discovered by French chemist Bernard Courtois in 1811. Courtois gave samples to his friends, Charles Bernard Desormes (1777–1862) and Nicolas Clément (1779–1841), to continue research. He also gave some of the substance to Gay-Lussac and to physicist André-Marie Ampère. On December 6, 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen. It was Gay-Lussac who suggested the name "iode", from the Greek word ιώδες (iodes) for violet (because of the color of iodine vapor). Ampère had given some of his sample to Humphry Davy. Davy did some experiments on the substance and noted its similarity to chlorine. Davy sent a letter dated December 10 to the Royal Society of London stating that he had identified a new element. Arguments erupted between Davy and Gay-Lussac over who identified iodine first, but both scientists acknowledged Courtois as the first to isolate the element.
In 1815, Humphry Davy invented the Davy lamp, which allowed miners within coal mines to work safely in the presence of flammable gases. There had been many mining explosions caused by firedamp or methane often ignited by open flames of the lamps then used by miners. Davy conceived of using an iron gauze to enclose a lamp's flame, and so prevent the methane burning inside the lamp from passing out to the general atmosphere. Although the idea of the safety lamp had already been demonstrated by William Reid Clanny and by the then unknown (but later very famous) engineer George Stephenson, Davy's use of wire gauze to prevent the spread of flame was used by many other inventors in their later designs. There was some discussion as to whether Davy had discovered the principles behind his lamp without the help of the work of Smithson Tennant, but it was generally agreed that the work of both men had been independent. Davy refused to patent the lamp, and its invention led to him being awarded the Rumford medal in 1816.
After Dalton published his atomic theory in 1808, certain of his central ideas were soon adopted by most chemists. However, uncertainty persisted for half a century about how atomic theory was to be configured and applied to concrete situations; chemists in different countries developed several different incompatible atomistic systems. A paper that suggested a way out of this difficult situation was published as early as 1811 by the Italian physicist Amedeo Avogadro (1776–1856), who hypothesized that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules, from which it followed that relative molecular weights of any two gases are the same as the ratio of the densities of the two gases under the same conditions of temperature and pressure. Avogadro also reasoned that simple gases were not formed of solitary atoms but were instead compound molecules of two or more atoms. Thus Avogadro was able to overcome the difficulty that Dalton and others had encountered when Gay-Lussac reported that above 100 °C the volume of water vapor was twice the volume of the oxygen used to form it. According to Avogadro, the molecule of oxygen had split into two atoms in the course of forming water vapor.
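In modern symbols (added here for illustration), Avogadro's hypothesis implies that at equal temperature and pressure the ratio of two gases' molecular weights M equals the ratio of their densities \rho:
\frac{M_1}{M_2} = \frac{\rho_1}{\rho_2} \qquad (\text{equal } T \text{ and } P)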
Avogadro's hypothesis was neglected for half a century after it was first published. Many reasons for this neglect have been cited, including some theoretical problems, such as Jöns Jacob Berzelius's "dualism", which asserted that compounds are held together by the attraction of positive and negative electrical charges, making it inconceivable that a molecule composed of two electrically similar atoms—as in oxygen—could exist. An additional barrier to acceptance was the fact that many chemists were reluctant to adopt physical methods (such as vapour-density determinations) to solve their problems. By mid-century, however, some leading figures had begun to view the chaotic multiplicity of competing systems of atomic weights and molecular formulas as intolerable. Moreover, purely chemical evidence began to mount that suggested Avogadro's approach might be right after all. During the 1850s, younger chemists, such as Alexander Williamson in England, Charles Gerhardt and Charles-Adolphe Wurtz in France, and August Kekulé in Germany, began to advocate reforming theoretical chemistry to make it consistent with Avogadrian theory.
Wöhler, von Liebig, organic chemistry and the vitalism debate
In 1825, Friedrich Wöhler and Justus von Liebig made the first confirmed discovery and explanation of isomers, a phenomenon that Berzelius would shortly afterwards name isomerism. Working with cyanic acid and fulminic acid, they correctly deduced that isomerism was caused by differing arrangements of atoms within a molecular structure. In 1827, William Prout classified biomolecules into their modern groupings: carbohydrates, proteins and lipids. After the nature of combustion was settled, a dispute about vitalism and the essential distinction between organic and inorganic substances began. The vitalism question was revolutionized in 1828 when Friedrich Wöhler synthesized urea, thereby establishing that organic compounds could be produced from inorganic starting materials and disproving the theory of vitalism.
This opened a new research field in chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The most important among them are mauve, magenta, and other synthetic dyes, as well as the widely used drug aspirin. The discovery of the artificial synthesis of urea contributed greatly to the theory of isomerism, as the empirical chemical formulas for urea and ammonium cyanate are identical (see Wöhler synthesis). In 1832, Friedrich Wöhler and Justus von Liebig discovered and explained functional groups and radicals in relation to organic chemistry, as well as first synthesizing benzaldehyde. Liebig, a German chemist, made major contributions to agricultural and biological chemistry, and worked on the organization of organic chemistry, being considered one of its principal founders. Liebig is also considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient, and his formulation of the Law of the Minimum which described the effect of individual nutrients on crops.
Vladimir Markovnikov
Vladimir Markovnikov, born in 1838, was a Russian scientist who did most of his work at Kazan University in Russia. At Kazan, he studied under Butlerov in a laboratory known as "the cradle of Russian organic chemistry", after which he also studied chemistry in Germany for two years. Markovnikov's contributions to organic chemistry included the development of the eponymous Markovnikov's rule, which states that when hydrogen halides add to alkenes and alkynes, the hydrogen bonds to the carbon bearing the greater number of hydrogen substituents. Products that follow this rule are called Markovnikov products, and those that do not are called anti-Markovnikov products. Markovnikov's rule was an early example of regioselectivity in organic synthesis, and the modern understanding of it continues to be important in the chemical industry, where catalysts have been developed to produce anti-Markovnikov products. A significant aspect of Markovnikov's rule is that it explains reactivity based on the structural arrangement of atoms, at a time when many chemists did not consider chemical formulas as representing the physical arrangement of atoms (see also radical theory).
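As a modern illustrative example (not taken from Markovnikov's own papers), the addition of HBr to propene places the hydrogen on the terminal carbon, which already carries more hydrogens:
\mathrm{CH_3{-}CH{=}CH_2 + HBr \rightarrow CH_3{-}CHBr{-}CH_3} \quad (\text{Markovnikov product}); \quad \mathrm{CH_3{-}CH_2{-}CH_2Br} \quad (\text{anti-Markovnikov product})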
Mid-1800s
In 1840, Germain Hess proposed Hess's law, an early statement of the law of conservation of energy, which establishes that energy changes in a chemical process depend only on the states of the starting and product materials and not on the specific pathway taken between the two states. In 1847, Hermann Kolbe obtained acetic acid from completely inorganic sources, further disproving vitalism. In 1848, William Thomson, 1st Baron Kelvin (commonly known as Lord Kelvin) established the concept of absolute zero, the temperature at which all molecular motion ceases. In 1849, Louis Pasteur discovered that the racemic form of tartaric acid is a mixture of the levorotatory and dextrorotatory forms, thus clarifying the nature of optical rotation and advancing the field of stereochemistry. In 1852, August Beer proposed Beer's law, which explains the relationship between the composition of a mixture and the amount of light it will absorb. Based partly on earlier work by Pierre Bouguer and Johann Heinrich Lambert, it established the analytical technique known as spectrophotometry. In 1855, Benjamin Silliman, Jr. pioneered methods of petroleum cracking, which made the entire modern petrochemical industry possible.
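In their now-standard forms (added here for illustration), Hess's law and Beer's law are written with \Delta H the enthalpy change, A the absorbance, \varepsilon the molar absorptivity, \ell the path length, and c the concentration:
\Delta H_{\text{overall}} = \sum_i \Delta H_i, \qquad A = \varepsilon \ell c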
Avogadro's hypothesis began to gain broad appeal among chemists only after his compatriot and fellow scientist Stanislao Cannizzaro demonstrated its value in 1858, two years after Avogadro's death. Cannizzaro's chemical interests had originally centered on natural products and on reactions of aromatic compounds; in 1853 he discovered that when benzaldehyde is treated with concentrated base, both benzoic acid and benzyl alcohol are produced—a phenomenon known today as the Cannizzaro reaction. In his 1858 pamphlet, Cannizzaro showed that a complete return to the ideas of Avogadro could be used to construct a consistent and robust theoretical structure that fit nearly all of the available empirical evidence. For instance, he pointed to evidence that suggested that not all elementary gases consist of two atoms per molecule—some were monatomic, most were diatomic, and a few were even more complex.
Another point of contention had been the formulas for compounds of the alkali metals (such as sodium) and the alkaline earth metals (such as calcium), which, in view of their striking chemical analogies, most chemists had wanted to assign to the same formula type. Cannizzaro argued that placing these metals in different categories had the beneficial result of eliminating certain anomalies when using their physical properties to deduce atomic weights. Unfortunately, Cannizzaro's pamphlet was published initially only in Italian and had little immediate impact. The real breakthrough came with an international chemical congress held in the German town of Karlsruhe in September 1860, at which most of the leading European chemists were present. The Karlsruhe Congress had been arranged by Kekulé, Wurtz, and a few others who shared Cannizzaro's sense of the direction chemistry should go. Speaking in French (as everyone there did), Cannizzaro's eloquence and logic made an indelible impression on the assembled body. Moreover, his friend Angelo Pavesi distributed Cannizzaro's pamphlet to attendees at the end of the meeting; more than one chemist later wrote of the decisive impression the reading of this document provided. For instance, Lothar Meyer later wrote that on reading Cannizzaro's paper, "The scales seemed to fall from my eyes." Cannizzaro thus played a crucial role in winning the battle for reform. The system advocated by him, and soon thereafter adopted by most leading chemists, is substantially identical to what is still used today.
Perkin, Crookes, and Nobel
In 1856, the 18-year-old Sir William Henry Perkin, challenged by his professor August Wilhelm von Hofmann, sought to synthesize quinine, the anti-malarial drug, from coal tar. In one attempt, Perkin oxidized aniline using potassium dichromate, whose toluidine impurities reacted with the aniline and yielded a black solid—suggesting a "failed" organic synthesis. Cleaning the flask with alcohol, Perkin noticed purple portions of the solution: a byproduct of the attempt was the first synthetic dye, known as mauveine or Perkin's mauve. Perkin's discovery is the foundation of the dye synthesis industry, one of the earliest successful chemical industries.
German chemist August Kekulé von Stradonitz's most important single contribution was his structural theory of organic composition, outlined in two articles published in 1857 and 1858 and treated in great detail in the pages of his extraordinarily popular Lehrbuch der organischen Chemie ("Textbook of Organic Chemistry"), the first installment of which appeared in 1859 and gradually extended to four volumes. Kekulé argued that tetravalent carbon atoms – that is, carbon forming exactly four chemical bonds – could link together to form what he called a "carbon chain" or a "carbon skeleton," to which other atoms with other valences (such as hydrogen, oxygen, nitrogen, and chlorine) could join. He was convinced that it was possible for the chemist to specify this detailed molecular architecture for at least the simpler organic compounds known in his day. Kekulé was not the only chemist to make such claims in this era. The Scottish chemist Archibald Scott Couper published a substantially similar theory nearly simultaneously, and the Russian chemist Aleksandr Butlerov did much to clarify and expand structure theory. However, it was predominantly Kekulé's ideas that prevailed in the chemical community.
British chemist and physicist William Crookes is noted for his cathode ray studies, fundamental in the development of atomic physics. His researches on electrical discharges through a rarefied gas led him to observe the dark space around the cathode, now called the Crookes dark space. He demonstrated that cathode rays travel in straight lines and produce phosphorescence and heat when they strike certain materials. A pioneer of vacuum tubes, Crookes invented the Crookes tube – an early experimental discharge tube, with partial vacuum with which he studied the behavior of cathode rays. With the introduction of spectrum analysis by Robert Bunsen and Gustav Kirchhoff (1859–1860), Crookes applied the new technique to the study of selenium compounds. Bunsen and Kirchhoff had previously used spectroscopy as a means of chemical analysis to discover caesium and rubidium. In 1861, Crookes used this process to discover thallium in some seleniferous deposits. He continued work on that new element, isolated it, studied its properties, and in 1873 determined its atomic weight. During his studies of thallium, Crookes discovered the principle of the Crookes radiometer, a device that converts light radiation into rotary motion. The principle of this radiometer has found numerous applications in the development of sensitive measuring instruments.
In 1862, Alexander Parkes exhibited Parkesine, one of the earliest synthetic polymers, at the International Exhibition in London. This discovery formed the foundation of the modern plastics industry. In 1864, Cato Maximilian Guldberg and Peter Waage, building on Claude Louis Berthollet's ideas, proposed the law of mass action. In 1865, Johann Josef Loschmidt determined the number of molecules in a mole, later named Avogadro's number.
In 1865, August Kekulé, based partially on the work of Loschmidt and others, established the structure of benzene as a six carbon ring with alternating single and double bonds. Kekulé's novel proposal for benzene's cyclic structure was much contested but was never replaced by a superior theory. This theory provided the scientific basis for the dramatic expansion of the German chemical industry in the last third of the 19th century. Kekulé is also famous for having clarified the nature of aromatic compounds, which are compounds based on the benzene molecule. In 1865, Adolf von Baeyer began work on indigo dye, a milestone in modern industrial organic chemistry which revolutionized the dye industry.
Swedish chemist and inventor Alfred Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and he patented this mixture in 1867 as dynamite. Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite. Gelignite, or blasting gelatin, as it was named, was patented in 1876, and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances.
Mendeleev's periodic table
An important breakthrough in making sense of the list of known chemical elements (as well as in understanding the internal structure of atoms) was Dmitri Mendeleev's development of the first modern periodic table, or the periodic classification of the elements. Mendeleev, a Russian chemist, felt that there was some type of order to the elements and he spent more than thirteen years of his life collecting data and assembling the concept, initially with the idea of resolving some of the disorder in the field for his students. Mendeleev found that, when all the known chemical elements were arranged in order of increasing atomic weight, the resulting table displayed a recurring pattern, or periodicity, of properties within groups of elements. Mendeleev's law allowed him to build up a systematic periodic table of all the 66 elements then known based on atomic mass, which he published in Principles of Chemistry in 1869. His first Periodic Table was compiled on the basis of arranging the elements in ascending order of atomic weight and grouping them by similarity of properties.
Mendeleev had such faith in the validity of the periodic law that he proposed changes to the generally accepted values for the atomic weights of a few elements and, in his version of the periodic table of 1871, predicted the locations within the table of unknown elements together with their properties. He even predicted the likely properties of three yet-to-be-discovered elements, which he called ekaboron (Eb), ekaaluminium (Ea), and ekasilicon (Es); these predictions proved to be good descriptions of the properties of scandium, gallium, and germanium, respectively, each of which fills the spot in the periodic table that Mendeleev assigned to it.
At first the periodic system did not raise interest among chemists. However, with the discovery of the predicted elements, notably gallium in 1875, scandium in 1879, and germanium in 1886, it began to win wide acceptance. The subsequent proof of many of his predictions within his lifetime brought fame to Mendeleev as the founder of the periodic law. This organization surpassed earlier attempts at classification by Alexandre-Émile Béguyer de Chancourtois, who published the telluric helix, an early, three-dimensional version of the periodic table of the elements in 1862, John Newlands, who proposed the law of octaves (a precursor to the periodic law) in 1864, and Lothar Meyer, who developed an early version of the periodic table with 28 elements organized by valence in 1864. Mendeleev's table did not include any of the noble gases, however, which had not yet been discovered. Gradually the periodic law and table became the framework for a great part of chemical theory. By the time Mendeleev died in 1907, he enjoyed international recognition and had received distinctions and awards from many countries.
In 1873, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel, working independently, developed a model of chemical bonding that explained the chirality experiments of Pasteur and provided a physical cause for optical activity in chiral compounds. van 't Hoff's publication, whose title translates as Proposal for the Development of 3-Dimensional Chemical Structural Formulae and which consisted of twelve pages of text and one page of diagrams, gave the impetus to the development of stereochemistry. The concept of the "asymmetrical carbon atom", dealt with in this publication, supplied an explanation of the occurrence of numerous isomers that were inexplicable by means of the then-current structural formulae. At the same time he pointed out the existence of a relationship between optical activity and the presence of an asymmetrical carbon atom.
Josiah Willard Gibbs
American mathematical physicist J. Willard Gibbs's work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. During the years from 1876 to 1878, Gibbs worked on the principles of thermodynamics, applying them to the complex processes involved in chemical reactions. He discovered the concept of chemical potential, or the "fuel" that makes chemical reactions work. In 1876 he published his most famous contribution, "On the Equilibrium of Heterogeneous Substances", a compilation of his work on thermodynamics and physical chemistry which laid out the concept of free energy to explain the physical basis of chemical equilibria. In these essays were the beginnings of Gibbs' theories of phases of matter: he considered each state of matter a phase, and each substance a component. Gibbs took all of the variables involved in a chemical reaction – temperature, pressure, energy, volume, and entropy – and included them in one simple equation known as Gibbs' phase rule.
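The phase rule is conventionally stated (a modern form, added here for illustration) with F the degrees of freedom, C the number of components, and P the number of phases in equilibrium:
F = C - P + 2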
Within this paper was perhaps his most outstanding contribution, the introduction of the concept of free energy, now universally called Gibbs free energy in his honor. The Gibbs free energy relates the tendency of a physical or chemical system to simultaneously lower its energy and increase its disorder, or entropy, in a spontaneous natural process. Gibbs's approach allows a researcher to calculate the change in free energy for a process, such as a chemical reaction, and thus to predict whether it will occur spontaneously. Since virtually all chemical processes and many physical ones involve such changes, his work has significantly impacted both the theoretical and experimental aspects of these sciences. In 1877, Ludwig Boltzmann established statistical derivations of many important physical and chemical concepts, including entropy and the distribution of molecular velocities in the gas phase. Together with Boltzmann and James Clerk Maxwell, Gibbs created a new branch of theoretical physics called statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of large ensembles of particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. Gibbs's derivation of the phenomenological laws of thermodynamics from the statistical properties of systems with many particles was presented in his highly influential textbook Elementary Principles in Statistical Mechanics, published in 1902, a year before his death. In that work, Gibbs reviewed the relationship between the laws of thermodynamics and the statistical theory of molecular motions. The overshooting of the original function by partial sums of a Fourier series at points of discontinuity is known as the Gibbs phenomenon.
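In today's notation (an illustration, not Gibbs's original formulation), the free energy change at constant temperature and pressure combines enthalpy and entropy, with a negative value indicating a spontaneous process:
\Delta G = \Delta H - T\,\Delta S, \qquad \Delta G < 0 \;\Rightarrow\; \text{spontaneous}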
Late 19th century
Carl von Linde and the modern chemical process
German engineer Carl von Linde's invention of a continuous process of liquefying gases in large quantities formed a basis for the modern technology of refrigeration and provided both impetus and means for conducting scientific research at low temperatures and very high vacuums. He developed a dimethyl ether refrigerator (1874) and an ammonia refrigerator (1876). Though other refrigeration units had been developed earlier, Linde's were the first to be designed with the aim of precise calculations of efficiency. In 1895 he set up a large-scale plant for the production of liquid air. Six years later he developed a method for separating pure liquid oxygen from liquid air that resulted in widespread industrial conversion to processes utilizing oxygen (e.g., in steel manufacture). He founded what is now Linde plc, the world's largest industrial gas company by market share and revenue.
In 1883, Svante Arrhenius developed an ion theory to explain conductivity in electrolytes. In 1884, Jacobus Henricus van 't Hoff published Études de dynamique chimique (Studies in Dynamic Chemistry), a seminal study on chemical kinetics; with this work, van 't Hoff entered for the first time the field of physical chemistry. Of great importance was his development of the general thermodynamic relationship between the heat of conversion and the displacement of the equilibrium as a result of temperature variation. At constant volume, the equilibrium in a system will tend to shift in such a direction as to oppose the temperature change which is imposed upon the system. Thus, lowering the temperature results in heat development while increasing the temperature results in heat absorption. This principle of mobile equilibrium was subsequently (1885) put in a general form by Henry Louis Le Chatelier, who extended the principle to include compensation, by change of volume, for imposed pressure changes. The van 't Hoff–Le Chatelier principle, or simply Le Chatelier's principle, explains the response of dynamic chemical equilibria to external stresses.
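The temperature dependence of equilibrium described here is captured by what is now called the van 't Hoff equation (modern form, added for illustration), with K the equilibrium constant and \Delta H^{\circ} the standard heat of reaction:
\frac{d \ln K}{dT} = \frac{\Delta H^{\circ}}{R T^{2}}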
In 1884, Hermann Emil Fischer proposed the structure of purine, a key structure in many biomolecules, which he later synthesized in 1898. He also began work on the chemistry of glucose and related sugars. In 1885, Eugen Goldstein named the cathode ray, later discovered to be composed of electrons, and the canal ray, later discovered to be positive hydrogen ions that had been stripped of their electrons in a cathode-ray tube; these would later be named protons. The year 1885 also saw the publication of J. H. van 't Hoff's paper on chemical equilibria in gaseous systems or strongly diluted solutions, which dealt with his theory of dilute solutions. Here he demonstrated that the "osmotic pressure" in solutions which are sufficiently dilute is proportional to the concentration and the absolute temperature, so that this pressure can be represented by a formula that deviates from the formula for gas pressure only by a coefficient i. He also determined the value of i by various methods, for example by means of the vapor pressure and François-Marie Raoult's results on the lowering of the freezing point. Thus van 't Hoff was able to prove that thermodynamic laws are valid not only for gases but also for dilute solutions. His pressure laws, given general validity by the electrolytic dissociation theory of Arrhenius (1884–1887) – the first foreigner who came to work with him in Amsterdam (1888) – are considered among the most comprehensive and important in the realm of the natural sciences. In 1893, Alfred Werner discovered the octahedral structure of cobalt complexes, thus establishing the field of coordination chemistry.
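van 't Hoff's pressure law is usually written today (an illustrative modern form) with the osmotic pressure \Pi differing from the ideal gas relation only by the coefficient i, where c is the molar concentration:
\Pi = i\,c\,R\,T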
Ramsay's discovery of the noble gases
The most celebrated discoveries of Scottish chemist William Ramsay were made in inorganic chemistry. Ramsay was intrigued by the British physicist John Strutt, 3rd Baron Rayleigh's 1892 discovery that the atomic weight of nitrogen found in chemical compounds was lower than that of nitrogen found in the atmosphere. Rayleigh ascribed this discrepancy to a light gas included in chemical compounds of nitrogen, while Ramsay suspected a hitherto undiscovered heavy gas in atmospheric nitrogen. Using two different methods to remove all known gases from air, Ramsay and Lord Rayleigh were able to announce in 1894 that they had found a monatomic, chemically inert gaseous element that constituted nearly 1 percent of the atmosphere; they named it argon.
The following year, Ramsay liberated another inert gas from a mineral called cleveite; this proved to be helium, previously known only in the solar spectrum. In his book The Gases of the Atmosphere (1896), Ramsay showed that the positions of helium and argon in the periodic table of elements indicated that at least three more noble gases might exist. In 1898 Ramsay and the British chemist Morris W. Travers isolated these elements—called neon, krypton, and xenon—from air and brought them to a liquid state at low temperature and high pressure. Sir William Ramsay worked with Frederick Soddy to demonstrate, in 1903, that alpha particles (helium nuclei) were continually produced during the radioactive decay of a sample of radium. Ramsay was awarded the 1904 Nobel Prize for Chemistry in recognition of "services in the discovery of the inert gaseous elements in the air, and his determination of their place in the periodic system."
In 1897, J. J. Thomson discovered the electron using the cathode-ray tube. In 1898, Wilhelm Wien demonstrated that canal rays (streams of positive ions) can be deflected by magnetic fields and that the amount of deflection is proportional to the mass-to-charge ratio. This discovery would lead to the analytical technique known as mass spectrometry in 1912.
Marie and Pierre Curie
Marie Skłodowska-Curie was a Polish-born French physicist and chemist who is famous for her pioneering research on radioactivity. She and her husband are considered to have laid the cornerstone of the nuclear age with their research on radioactivity. Marie was fascinated with the work of Henri Becquerel, a French physicist who discovered in 1896 that uranium casts off rays similar to the X-rays discovered by Wilhelm Röntgen. Marie Curie began studying uranium in late 1897 and theorized, according to a 1904 article she wrote for Century magazine, "that the emission of rays by the compounds of uranium is a property of the metal itself—that it is an atomic property of the element uranium independent of its chemical or physical state." Curie took Becquerel's work a few steps further, conducting her own experiments on uranium rays. She discovered that the rays remained constant, no matter the condition or form of the uranium. The rays, she theorized, came from the element's atomic structure. This revolutionary idea created the field of atomic physics and the Curies coined the word radioactivity to describe the phenomenon.
Pierre and Marie further explored radioactivity by working to separate the substances in uranium ores and then using the electrometer to make radiation measurements to 'trace' the minute amount of unknown radioactive element among the fractions that resulted. Working with the mineral pitchblende, the pair discovered a new radioactive element in 1898. They named the element polonium, after Marie's native country of Poland. On December 21, 1898, the Curies detected the presence of another radioactive material in the pitchblende. They presented this finding to the French Academy of Sciences on December 26, proposing that the new element be called radium. The Curies then went to work isolating polonium and radium from naturally occurring compounds to prove that they were new elements. In 1902, the Curies announced that they had produced a decigram of pure radium, demonstrating its existence as a unique chemical element. While it took three years for them to isolate radium, they were never able to isolate polonium. Along with the discovery of two new elements and finding techniques for isolating radioactive isotopes, Curie oversaw the world's first studies into the treatment of neoplasms, using radioactive isotopes. With Henri Becquerel and her husband, Pierre Curie, she was awarded the 1903 Nobel Prize for Physics. She was the sole winner of the 1911 Nobel Prize for Chemistry. She was the first woman to win a Nobel Prize, and she is the only woman to win the award in two different fields.
While working with Marie to extract pure substances from ores, an undertaking that really required industrial resources but that they achieved in relatively primitive conditions, Pierre himself concentrated on the physical study (including luminous and chemical effects) of the new radiations. Through the action of magnetic fields on the rays given out by the radium, he proved the existence of particles that were electrically positive, negative, and neutral; these Ernest Rutherford was afterward to call alpha, beta, and gamma rays. Pierre then studied these radiations by calorimetry and also observed the physiological effects of radium, thus opening the way to radium therapy. Among Pierre Curie's discoveries were that ferromagnetic substances exhibited a critical temperature transition, above which the substances lost their ferromagnetic behavior – this is known as the "Curie point." He was elected to the Academy of Sciences (1905), having in 1903 jointly with Marie received the Royal Society's prestigious Davy Medal and jointly with her and Becquerel the Nobel Prize for Physics. He was run over by a carriage in the rue Dauphine in Paris in 1906 and died instantly. His complete works were published in 1908.
Ernest Rutherford
New Zealand-born chemist and physicist Ernest Rutherford is considered to be "the father of nuclear physics." Rutherford is best known for devising the names alpha, beta, and gamma to classify various forms of radioactive "rays" which were poorly understood at his time (alpha and beta rays are particle beams, while gamma rays are a form of high-energy electromagnetic radiation). Rutherford deflected alpha rays with both electric and magnetic fields in 1903. Working with Frederick Soddy, Rutherford explained that radioactivity is due to the transmutation of elements, now known to involve nuclear reactions.
He also observed that the intensity of radioactivity of a radioactive element decreases over a unique and regular amount of time until a point of stability, and he named the halving time the "half-life". In 1901 and 1902 he worked with Frederick Soddy to prove that atoms of one radioactive element would spontaneously turn into another, by expelling a piece of the atom at high velocity. In 1909 at the University of Manchester, Rutherford oversaw an experiment conducted by his students Hans Geiger (known for the Geiger counter) and Ernest Marsden. In the Geiger–Marsden experiment, a beam of alpha particles, generated by the radioactive decay of radon, was directed normally onto a sheet of very thin gold foil in an evacuated chamber. Under the prevailing plum pudding model, the alpha particles should all have passed through the foil and hit the detector screen, or have been deflected by, at most, a few degrees.
However, the actual results surprised Rutherford. Although many of the alpha particles did pass through as expected, many others were deflected at small angles, and a very small percentage were deflected through angles much larger than 90 degrees, some even being reflected back toward the alpha source. Rutherford realized that, because some of the alpha particles were deflected or reflected, the atom had a concentrated centre of positive charge and of relatively large mass – Rutherford later termed this positive center the "atomic nucleus". The alpha particles had either hit the positive centre directly or passed by it close enough to be affected by its positive charge. Since many other particles passed through the gold foil, the positive centre would have to be relatively small compared to the rest of the atom – meaning that the atom is mostly open space. From his results, Rutherford developed a model of the atom that was similar to the solar system, known as the Rutherford model: like planets, electrons orbited a central, sun-like nucleus. Rutherford had already received the 1908 Nobel Prize in Chemistry for his investigations into the disintegration of the elements and the chemistry of radioactive substances.
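The "half-life" Rutherford named above is described in modern notation (added here for illustration) by the exponential decay law, with \lambda the decay constant:
N(t) = N_0\, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}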
20th century
In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, where electrons orbit a dense massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences in agriculture. The Haber process, or Haber–Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for the production of fertilizer and munitions. The food production for half the world's current population depends on this method for producing fertilizer. Haber, along with Max Born, proposed the Born–Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I.
In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. Leo Baekeland invented bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan – who had studied in Europe under Walther Nernst and Max Planck – measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value—the electron's charge—and thus a confirmation that all electrons have the same charge and mass. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, and providing the first direct photoelectric support for the Planck constant. In 1923 Millikan was awarded the Nobel Prize for Physics.
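Millikan's two central results are commonly summarized (a modern restatement, for illustration) as the quantization of charge and the photoelectric relation he later confirmed, with e the elementary charge, h the Planck constant, \nu the light frequency, and \phi the work function:
q = n e \;(n = 1, 2, 3, \ldots), \qquad E_{\max} = h\nu - \phi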
In 1909, S. P. L. Sørensen invented the pH concept and developed methods for measuring acidity. In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge rather than atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together most of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye used the concept of a molecular dipole to describe asymmetric charge distribution in some molecules.
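In their now-standard forms (added here for illustration), Sørensen's pH and Bragg's law are written, with d the spacing of crystal planes and \theta the angle of incidence:
\mathrm{pH} = -\log_{10}[\mathrm{H^{+}}], \qquad n\lambda = 2 d \sin\theta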
Otto Hahn
Otto Hahn was a German chemist and a pioneer in the fields of radioactivity and radiochemistry. He played a leading role in the discovery of nuclear fission and established nuclear chemistry as a scientific field. Hahn, Lise Meitner and Fritz Strassmann discovered radioactive isotopes of radium, thorium, protactinium and uranium. He also discovered the phenomena of atomic recoil and nuclear isomerism, and pioneered rubidium–strontium dating. In 1938, Hahn, Meitner and Strassmann discovered nuclear fission. In their second publication on nuclear fission in February 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction. Hahn received the 1944 Nobel Prize for Chemistry for these discoveries. Nuclear fission was the basis for nuclear reactors and nuclear weapons.
Niels Bohr
In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, where electrons exist only in strictly defined circular orbits around the nucleus similar to rungs on a ladder. The Bohr Model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus similar to the planets orbiting the Sun (except that the orbits are not planar) – the gravitational force of the solar system is mathematically akin to the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons.
In the Bohr model, however, electrons orbit the nucleus in orbits that have a set size and energy – the energy levels are said to be quantized, which means that only certain orbits with certain radii are allowed; orbits in between simply do not exist. The energy of the orbit is related to its size – that is, the lowest energy is found in the smallest orbit. Bohr also postulated that electromagnetic radiation is absorbed or emitted when an electron moves from one orbit to another. Because only certain electron orbits are permitted, the emission of light accompanying a jump of an electron from an excited energy state to ground state produces a unique emission spectrum for each element. Bohr later received the Nobel Prize in physics for this work.
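For hydrogen, the quantized orbit energies of the Bohr model and the light emitted in a jump between levels take the modern form (an illustration, not Bohr's original notation):
E_n = -\frac{13.6\ \mathrm{eV}}{n^{2}}, \qquad h\nu = E_{n_i} - E_{n_f}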
Niels Bohr also worked on the principle of complementarity, which states that an electron can be interpreted in two mutually exclusive and valid ways: electrons can be described as waves or as particles. In nuclear physics, Bohr further hypothesized that an incoming particle striking the nucleus would create an excited compound nucleus. This formed the basis of his liquid drop model and later provided a theoretical foundation for nuclear fission after its discovery by chemists Otto Hahn and Fritz Strassmann, and its explanation and naming by physicists Lise Meitner and Otto Frisch.
In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduced the concept of atomic number to fix some inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry was in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms which have different atomic weights but which are indistinguishable chemically. He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry.
Gilbert N. Lewis
American physical chemist Gilbert N. Lewis laid the foundation of valence bond theory; he was instrumental in developing a bonding theory based on the number of electrons in the outermost "valence" shell of the atom. In 1902, while Lewis was trying to explain valence to his students, he depicted atoms as constructed of a concentric series of cubes with electrons at each corner. This "cubic atom" explained the eight groups in the periodic table and represented his idea that chemical bonds are formed by electron transference to give each atom a complete set of eight outer electrons (an "octet").
Lewis's theory of chemical bonding continued to evolve and, in 1916, he published his seminal article "The Atom of the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. Lewis introduced the "electron dot diagrams" in this paper to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book.
Shortly after the publication of his 1916 paper, Lewis became involved with military research. He did not return to the subject of chemical bonding until 1923, when he masterfully summarized his model in a short monograph entitled Valence and the Structure of Atoms and Molecules. His renewal of interest in this subject was largely stimulated by the activities of the American chemist and General Electric researcher Irving Langmuir, who between 1919 and 1921 popularized and elaborated Lewis's model. Langmuir subsequently introduced the term covalent bond. In 1921, Otto Stern and Walther Gerlach established the concept of quantum mechanical spin in subatomic particles.
For cases where no sharing was involved, Lewis in 1923 developed the electron pair theory of acids and bases: Lewis redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom; bases were, of course, electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, the first modern treatise on chemical thermodynamics.
The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold; while in coordination chemistry, Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick.
Quantum mechanics
In 1924, French quantum physicist Louis de Broglie published his thesis, in which he introduced a revolutionary theory of electron waves based on wave–particle duality. In his time, the wave and particle interpretations of light and matter were seen as being at odds with one another, but de Broglie suggested that these seemingly different characteristics were instead the same behavior observed from different perspectives—that particles can behave like waves, and waves (radiation) can behave like particles. de Broglie's proposal offered an explanation of the restricted motion of electrons within the atom. The first publications of de Broglie's idea of "matter waves" had drawn little attention from other physicists, but a copy of his doctoral thesis chanced to reach Einstein, whose response was enthusiastic. Einstein stressed the importance of de Broglie's work both explicitly and by building further on it.
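The matter-wave relation is written today (an illustrative modern form) as a wavelength fixed by the Planck constant h and the particle's momentum p:
\lambda = \frac{h}{p}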
In 1925, Austrian-born physicist Wolfgang Pauli developed the Pauli exclusion principle, which states that no two electrons around a single nucleus in an atom can occupy the same quantum state simultaneously, as described by four quantum numbers. Pauli made major contributions to quantum mechanics and quantum field theory – he was awarded the 1945 Nobel Prize for Physics for his discovery of the Pauli exclusion principle – as well as solid-state physics, and he successfully hypothesized the existence of the neutrino. In addition to his original work, he wrote masterful syntheses of several areas of physical theory that are considered classics of scientific literature.
In 1926 at the age of 39, Austrian theoretical physicist Erwin Schrödinger produced the papers that gave the foundations of quantum wave mechanics. In those papers he described his partial differential equation that is the basic equation of quantum mechanics and bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger's equation, unlike the solutions to Newton's equations, are wave functions that can only be related to the probable occurrence of physical events. The readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. (This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create.)
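The equation is usually quoted in its modern time-dependent form (added here for illustration), with \hat{H} the Hamiltonian operator and \Psi the wave function:
i\hbar \frac{\partial \Psi}{\partial t} = \hat{H}\,\Psi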
German theoretical physicist Werner Heisenberg was one of the key creators of quantum mechanics. In 1925, Heisenberg discovered a way to formulate quantum mechanics in terms of matrices. For that discovery, he was awarded the Nobel Prize for Physics for 1932. In 1927 he published his uncertainty principle, upon which he built his philosophy and for which he is best known. Heisenberg was able to demonstrate that if you were studying an electron in an atom you could say where it was (the electron's location) or where it was going (the electron's velocity), but it was impossible to express both at the same time. He also made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, cosmic rays, and subatomic particles, and he was instrumental in planning the first West German nuclear reactor at Karlsruhe, together with a research reactor in Munich, in 1957. Considerable controversy surrounds his work on atomic research during World War II.
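The uncertainty principle is stated today (an illustrative modern form) as a lower bound on the product of the standard deviations of position and momentum:
\sigma_x\, \sigma_p \ge \frac{\hbar}{2}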
Quantum chemistry
Some date the birth of quantum chemistry to the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler and Fritz London is often recognised as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree and Vladimir Aleksandrovich Fock, to cite a few.
Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems. The situation around 1930 was described by Paul Dirac, who wrote that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble."
Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics, to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions. A milestone in quantum chemistry came in 1951 with the seminal paper of Clemens C. J. Roothaan on the Roothaan equations, which opened the avenue to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals computed on the most advanced computers of the time.
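The Roothaan equations are conventionally written in matrix form (a modern statement, added for illustration), with \mathbf{F} the Fock matrix, \mathbf{S} the overlap matrix, \mathbf{C} the orbital coefficient matrix, and \boldsymbol{\varepsilon} the diagonal matrix of orbital energies:
\mathbf{F}\,\mathbf{C} = \mathbf{S}\,\mathbf{C}\,\boldsymbol{\varepsilon}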
In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer or Edward Teller). Glenn T. Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him, along with Albert Einstein and Yuri Oganessian, one of the few people for whom a chemical element was named during their lifetime.
Molecular biology and biochemistry
By the mid-20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features of biologically relevant molecules, they remained, until the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods.
This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double helical structure of DNA by constructing models constrained by and informed by the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin. This discovery led to an explosion of research into the biochemistry of life.
In the same year, the Miller–Urey experiment demonstrated that basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. This first attempt by chemists to study hypothetical prebiotic processes in the laboratory under controlled conditions helped spur extensive research within the natural sciences into the origins of life.
In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used in the laboratory to manipulate DNA. PCR could be used to synthesize specific pieces of DNA, and it made possible the sequencing of the DNA of organisms, which culminated in the huge Human Genome Project.
An important piece in the double helix puzzle was solved by one of Pauling's students, Matthew Meselson, together with Frank Stahl; the result of their collaboration, the Meselson–Stahl experiment, has been called "the most beautiful experiment in biology".
They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen atoms are a component of DNA, the DNA could be labelled with a heavy isotope of nitrogen and its distribution tracked through successive rounds of replication in bacteria.
Late 20th century
In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations. In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions. In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions including Sharpless epoxidation, Sharpless asymmetric dihydroxylation, and Sharpless oxyamination.
In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R. Buckminster Fuller. In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology. In 1994, K. C. Nicolaou with his group and Robert A. Holton and his group, achieved the first total synthesis of Taxol. In 1995, Eric Cornell and Carl Wieman produced the first Bose–Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale.
Mathematics and chemistry
Before the 20th century, chemistry was defined as the science of the nature of matter and its transformations. It was therefore distinct from physics, which was not concerned with such dramatic transformation of matter. Moreover, in contrast to physics, chemistry remained predominantly a descriptive and empirical science until the end of the 19th century. Though they developed a consistent quantitative foundation based on notions of atomic and molecular weights, combining proportions, and thermodynamic quantities, chemists made less use of advanced mathematics, and some even expressed reluctance about the use of mathematics within chemistry. For example, the philosopher Auguste Comte wrote in 1830 that every attempt to employ mathematical methods in the study of chemical questions must be considered "profoundly irrational and contrary to the spirit of chemistry", and that if mathematical analysis were ever to hold a prominent place in chemistry it would occasion a rapid and widespread degeneration of that science.
However, in the second part of the 19th century, the situation began to change; August Kekulé wrote in 1867 that he rather expected chemistry would some day find a "mathematico-mechanical explanation for what we now call atoms which will render an account of their properties".
Scope of chemistry
As understanding of the nature of matter has evolved, so too has the self-understanding of the science of chemistry by its practitioners. This continuing historical process of evaluation includes the categories, terms, aims and scope of chemistry. Additionally, the development of the social institutions and networks which support chemical enquiry are highly significant factors that enable the production, dissemination and application of chemical knowledge. (See Philosophy of chemistry)
Chemical industry
The later part of the nineteenth century saw a huge increase in the exploitation of petroleum extracted from the earth for the production of a host of chemicals; petroleum largely replaced the whale oil, coal tar, and naval stores used previously. Large-scale production and refinement of petroleum provided feedstocks for liquid fuels such as gasoline and diesel, for solvents, lubricants, asphalt, and waxes, and for the production of many of the common materials of the modern world, such as synthetic fibers, plastics, paints, detergents, pharmaceuticals, adhesives, and ammonia for use as fertilizer and for other purposes. Many of these required new catalysts and the utilization of chemical engineering for their cost-effective production.
In the mid-twentieth century, control of the electronic structure of semiconductor materials was made precise by the creation of large ingots of extremely pure single crystals of silicon and germanium. Accurate control of their chemical composition by doping with other elements made possible the production of the solid-state transistor in 1951 and, later, of tiny integrated circuits for use in electronic devices, especially computers.
See also
Histories and timelines
Atomic theory
Cupellation
History of chromatography
History of electrochemistry
History of energy
History of materials science
History of molecular biology
History of molecular theory
History of physics
History of science and technology
History of the molecule
History of the periodic table
History of thermodynamics
List of years in science
Nobel Prize in chemistry
Timeline of scientific discoveries
Timeline of atomic and subatomic physics
Timeline of chemical elements discoveries
Timeline of chemistry
Timeline of crystallography
Timeline of historic inventions
Timeline of materials technology
Timeline of thermodynamics, statistical mechanics, and random processes
The Chemical History of a Candle
The Mystery of Matter: Search for the Elements (PBS film)
Notable chemists
listed chronologically:
List of chemists
Robert Boyle, 1627–1691
Joseph Black, 1728–1799
Joseph Priestley, 1733–1804
Carl Wilhelm Scheele, 1742–1786
Antoine Lavoisier, 1743–1794
Alessandro Volta, 1745–1827
Jacques Charles, 1746–1823
Claude Louis Berthollet, 1748–1822
Amedeo Avogadro, 1776–1856
Joseph-Louis Gay-Lussac, 1778–1850
Humphry Davy, 1778–1829
Jöns Jacob Berzelius, inventor of modern chemical notation, 1779–1848
Justus von Liebig, 1803–1873
Louis Pasteur, 1822–1895
Stanislao Cannizzaro, 1826–1910
Friedrich August Kekulé von Stradonitz, 1829–1896
Dmitri Mendeleev, 1834–1907
Josiah Willard Gibbs, 1839–1903
Carl von Linde, 1842–1934
J. H. van 't Hoff, 1852–1911
William Ramsay, 1852–1916
Svante Arrhenius, 1859–1927
Walther Nernst, 1864–1941
Marie Curie, 1867–1934
Fritz Haber, 1868–1934
Gilbert N. Lewis, 1875–1946
Otto Hahn, 1879–1968
Irving Langmuir, 1881–1957
Linus Pauling, 1901–1994
Glenn T. Seaborg, 1912–1999
Robert Burns Woodward, 1917–1979
Frederick Sanger, 1918–2013
Geoffrey Wilkinson, 1921–1996
Rudolph A. Marcus, 1923–
George Andrew Olah, 1926–2017
Elias James Corey, 1928–
Akira Suzuki, 1930–
Richard F. Heck, 1931–2015
Harold Kroto, 1939–2016
Jean-Marie Lehn, 1939–
Peter Atkins, 1940–
Barry Sharpless, 1941–
Richard Smalley, 1943–2005
Jean-Pierre Sauvage, 1944–
Notes
References
Documentaries
BBC (2010). Chemistry: A Volatile History.
External links
ChemisLab – Chemists of the Past
SHAC: Society for the History of Alchemy and Chemistry
Chemistry
Chemistry | History of chemistry | Technology | 20,374 |
23,201,284 | https://en.wikipedia.org/wiki/List%20of%20unrecovered%20and%20unusable%20flight%20recorders | Flight data recorders (FDRs) and cockpit voice recorders (CVRs) in commercial aircraft continuously record information and can provide key evidence in determining the causes of an aircraft loss. The record for the greatest depth from which a flight recorder has been recovered is held by the CVR of South African Airways Flight 295. Most flight recorders are equipped with underwater locator beacons to assist searchers in recovering them from offshore crash sites; however, these beacons are battery-powered and eventually stop transmitting. For various reasons, a flight recorder cannot always be recovered, and many recorders that are recovered are too damaged to provide any data.
See also
List of missing ships
References
Recorders
Aircraft recorders
Aviation-related lists | List of unrecovered and unusable flight recorders | Technology | 145 |
287,615 | https://en.wikipedia.org/wiki/William%20Henry%20Hudson | William Henry Hudson (4 August 1841 – 18 August 1922), known in Argentina as Guillermo Enrique Hudson, was an Anglo-Argentine author, naturalist and ornithologist. Born in the Argentine pampas, where he roamed free in his youth, he observed bird life and collected specimens for the Smithsonian Institution. The Patagonian birds Knipolegus hudsoni and Asthenes hudsoni are named after him. His later writings about life in Patagonia drew special admiration for their style. His most popular work, Green Mansions (1904), a romance set in the Venezuelan forest, inspired a Hollywood movie and several other works.
Life
Hudson was the fourth child of Daniel Hudson (1804–1868) and his wife Caroline Augusta (1804–1859), United States settlers of English and Irish origin. His paternal grandfather was from Clyst Hydon in Devon. He was born and lived his first years in a small estancia called "Los Veinte-cinco Ombues", on the banks of the Arroyo Conchitas, a stream which flows into the Plata river in what is now Ingeniero Allan, Florencio Varela, Argentina.
In 1846, the family settled at a pulpería further south, Las Acacias, in the surroundings of Chascomús, not far from the lake of the same name. In this natural environment, Hudson spent his youth studying the local flora and fauna and observing both natural and human dramas on what was then a lawless frontier. He was taught by three tutors who lived on the ranch. He became keenly interested in the life of the pampas, and grew up with the gaucho herders, native Indians and settlers with whom he explored the pampas, developing a special love for Patagonia. At the age of fifteen he suffered a serious bout of typhus, and later rheumatic fever. At sixteen he read Gilbert White's The Natural History of Selborne and was deeply influenced by it to study natural history. In 1859, his mother, a devout Christian, died, and in the same year he read Darwin's Origin of Species. From 1866, he collected bird skins for S. F. Baird at the Smithsonian Institution, though he would later note the glory of birds in life and the ugliness of taxidermy. In 1866, he also served in the Argentine army during the war with Paraguay. He later collected insect specimens for Hermann Burmeister in Buenos Aires and sent bird specimens to the Zoological Society of London from 1870. In 1870, he wrote a series of nine letters on the ornithology of Buenos Ayres that were published by Philip Sclater in the Proceedings of the Zoological Society of London. In his third letter of 1870, Hudson took on some statements made by Darwin on Patagonian birds. Darwin had noted that the woodpecker Colaptes campestris occurred on the Pampas where not a tree grew; Hudson argued that there were indeed trees on the La Plata and that in much vaster grassland areas the woodpecker was never found. Darwin responded, accepting that he may have been mistaken in some of his observations but that there was no wilful error, and clarified the location where he had made his observations. In 1872 Hudson sent specimens of birds from Patagonia, including a species Sclater would describe and name after Hudson as Cnipolegus hudsoni (the spelling used in the paper). Hudson was initially skeptical about evolution but later became a grudging evolutionist.
Hudson saw the pampas being destroyed by European immigrants, and in April 1874 he boarded the steamer Ebro to England. He slept in Hyde Park after arrival and struggled to find employment. He met John Gould in the hope of finding work, but received a cold response from Gould, who was ill, and the sight of dead hummingbirds all around sickened Hudson. He then sought work as a genealogy researcher for a Chester Waters, who turned out to be deeply indebted and unable to pay. In 1876, he married the singer Emily (1829–1921), daughter of John Hanmer Wingrave, and lived in her home at Southwick Crescent, where she ran boarding houses. They later moved to rented rooms and she tried to make a living by giving music lessons. They then moved to a larger three-story house in Bayswater that Emily inherited; they lived in one flat and rented out the others, which paid off their debts. They had no children. He struggled to make a living through writing, and among the few pieces he managed to publish was an article in a women's magazine in 1876 that he wrote under the pseudonym Maud Merryweather. In 1880, he met Morley Roberts, and through his connections he was able to contribute stories to magazines. He wrote several books including a two-volume work on Argentine Ornithology (1888), Idle Days in Patagonia (1893), and The Naturalist in La Plata (1892). He began to travel in England and wrote Nature in Downland (1900). His books on the English countryside, among them Hampshire Days (1903), Afoot in England (1909), and A Shepherd's Life (1910), set in Wiltshire, helped foster the back-to-nature movement of the 1920s and 1930s. He was a supporter of the Society for the Protection of Birds from its early days and was often the only man to sit in the meetings organized by Eliza Phillips. He later wrote some pamphlets for the organization in 1898 against the trade in plumes. Hudson became a British citizen in 1900, and in 1901 he received a Civil List pension of £150 per year for his writings on natural history. This was made possible by the influence of Sir Edward Grey and his wife, Lady Dorothy.
Hudson was more than six feet tall, and he loved to talk to people from the rural working classes and would live among them during his travels in the countryside. Hudson was a friend of the late-19th century English author George Gissing, whom he met in 1889. They corresponded until the latter's death in 1903, occasionally exchanging their publications, discussing literary and scientific matters, and commenting on their respective access to books and newspapers, a matter of supreme importance to Gissing. In September 1890, Morley Roberts, Gissing and Hudson went to Shoreham and were involved in rescuing three drowning girls, even though Hudson could not swim. Other close friends included Cunninghame Graham. He campaigned in 1900 against the building of the National Physical Laboratory in the grounds of Kew Gardens. He began to write fiction, his most popular work being Green Mansions (1904), which was set in a Venezuelan forest; in 1959 it was made into a movie. Other works of fiction included The Purple Land (1904), A Crystal Age (1906), Tales of the Pampas (1916), and A Little Boy Lost (1905). He wrote an autobiographical book, Far Away and Long Ago (1918). In 1911, his wife became an invalid, and she was taken care of by a nurse in Worthing, Sussex, until her death early in 1921. Hudson lived in London with a weak heart and died on 18 August 1922, at 40 St Luke's Road, Westbourne Park, Bayswater; he was buried in Broadwater and Worthing Cemetery, Worthing, on 22 August 1922, next to his wife. He left some bequests, but nearly his entire estate of £8225, including earnings from his works, was left to the Royal Society for the Protection of Birds, of which he was an early member. His executors were the publisher Ernest Bell and Wynnard Hooper, a journalist. He wanted his notebooks and papers to be destroyed and did not want his life to be written about.
Personal views
Hudson was an advocate of Lamarckian evolution. Early in his life he was a critic of Darwinism and defended vitalism. He was influenced by the non-Darwinian evolutionary writings of Samuel Butler (Haymaker, Richard E., From Pampas to Hedgerows and Downs: A Study of W. H. Hudson, Bookman Associates, 1954, p. 197). Hudson considered himself an animist and, although he was familiar with Christian tradition from his mother, he did not belong to any denomination.
Recognition and awards
In 1925, a memorial to him was inaugurated in Hyde Park by Stanley Baldwin. It features a stone panel by Jacob Epstein depicting Rima from Green Mansions, with engravings by the designer Eric Gill. It stands in the Hudson Memorial Bird Sanctuary in Hyde Park, not far from where he slept on arrival in England.
At the headquarters of the RSPB in Sandy, Bedfordshire, a portrait of Hudson painted by Frank Brooks hangs over the fireplace, commemorating his role in the early days of the Society and his bequest.
Ernest Hemingway referred to Hudson's The Purple Land (1885) in his novel The Sun Also Rises, and to Far Away and Long Ago in his posthumous novel The Garden of Eden (1986). He listed Far Away and Long Ago in a suggested reading list for a young writer. Joseph Conrad stated that Hudson's writing "was like the grass that the good God made to grow and when it was there you could not tell how it came."
James Rebanks' 2015 book The Shepherd's Life about a Lake District farmer was inspired by Hudson's work of the same name: "But even more than Orwell or Hemingway, W.H. Hudson turned me into a book obsessive ..." (p. 115), and: "One day, I pulled A Shepherd's Life by W.H. Hudson from the bookcase ...and the sudden life-changing realization it gave me that we could be in books – great books." (p. 114)
In Argentina, Hudson is considered to belong to the national literature as Guillermo Enrique Hudson, the Spanish version of his name. The town of Hudson in Berazategui Partido, Buenos Aires Province, and several other public places and institutions are named after him.
Works
The complete collected works of Hudson, produced in 1922–23, ran to 24 volumes. Many of his works were translated into other languages. Hudson's best-known novel is Green Mansions (1904), which was adapted into a film starring Audrey Hepburn and Anthony Perkins, and his best-known non-fiction is Far Away and Long Ago (1918), which was also made into a film.
The Purple Land that England Lost: Travels and Adventures in the Banda Oriental, South America (1885)
A Crystal Age (1887)
Argentine Ornithology (1888)
Ralph Herne (1888)
Fan – The Story of a Young Girl's Life (1892), as Henry Harford
The Naturalist in La Plata (1892)
Idle Days in Patagonia (1893)
Birds in a Village (1893)
Lost British Birds (1894), pamphlet
British Birds (1895), with a chapter by Frank Evers Beddard
Osprey; or, Egrets and Aigrettes (1896)
Birds in London (1898)
Nature in Downland (1900)
Birds and Man (1901)
El Ombú (1902), stories; later South American Sketches
Hampshire Days (1903)
Green Mansions: A Romance of the Tropical Forest (1904)
A Little Boy Lost (1905)
Land's End. A Naturalist's Impressions in West Cornwall (1908)
Afoot in England (1909)
A Shepherd's Life: Impressions of the South Wiltshire Downs (1910)
Adventures Among Birds (1913)
Tales of the Pampas (1916)
Far Away and Long Ago – A History of My Early Life (1918; new edition by Eland, 2005)
The Book of a Naturalist (1919)
Birds in Town and Village (1919)
Birds of La Plata (1920), two volumes
Dead Man's Plack and An Old Thorn (1920) – see Dead Man's Plack
A Traveller in Little Things (1921)
Tired Traveller (1921), essay
Seagulls in London. Why They Took To Coming To Town (1922), essay
A Hind in Richmond Park (1922)
The Collected Works (1922–23), 24 volumes
153 Letters from W.H. Hudson (1923), edited by Edward Garnett
Rare Vanishing & Lost British Birds (1923)
Men, Books and Birds (1925)
The Disappointed Squirrel (1925), from The Book of a Naturalist
Mary's Little Lamb (1929)
South American Romances (1930): The Purple Land; Green Mansions; El Ombú
W.H. Hudson's Letters to R. B. Cunninghame Graham (Golden Cockerel Press, 1941)
Tales of the Gauchos (1946)
Letters on the Ornithology of Buenos Ayres (1951), edited by David W. Dewar
Diary Concerning his Voyage from Buenos Aires to Southampton on the Ebro (1958)
Gauchos of the Pampas and Their Horses (1963), stories, with R.B. Cunninghame Graham
English Birds and Green Places: Selected Writings (1964)
Birds of A Feather: Unpublished Letters of W.H. Hudson (1981), edited by D. Shrubsall
Landscapes and Literati: Unpublished letters of W.H. Hudson and George Gissing (1985), edited by Dennis Shrubsall and Pierre Coustillas
Bibliographies
G. F. Wilson (1922, 1968) Bibliography of the Writings of W.H. Hudson
John R. Payne (1977) W.H. Hudson: A Bibliography
Biographies
Morley Roberts (1924) W.H. Hudson
Ford Madox Ford (1937) Portraits from Life
Robert Hamilton (1946) W.H. Hudson: The Vision of Earth
Richard E. Haymaker (1954) From Pampas to Hedgerows and Downs: A Study of W. H. Hudson
Alicia Jurado (1971) Vida y obra de W.H. Hudson
John T. Frederick (1972) William Henry Hudson
D. Shrubsall (1978) W.H. Hudson, Writer and Naturalist
Ruth Tomalin (1982) W.H. Hudson – a biography
Amy D. Ronner (1986) W.H. Hudson: The Man, The Novelist, The Naturalist
David Miller (1990) W.H. Hudson and the Elusive Paradise
Felipe Arocena (2003) William Henry Hudson: Life, Literature and Science
Jason Wilson (2016) Living in the Sound of the Wind: A Personal Quest for W. H. Hudson, Naturalist and Writer from the River Plate. London: Constable
Notes
References
External links
Tales of the Pampas (El Ombú and Other Stories), illustrated 1939.
Reserva Hudson
Archival material: The Papers of William Henry Hudson at Dartmouth College Library
The naturalist who inspired Ernest Hemingway and many others to love the wilderness
1841 births
1922 deaths
19th-century Argentine writers
19th-century English novelists
20th-century Argentine male writers
20th-century English novelists
Argentine male novelists
Argentine ornithologists
Argentine emigrants to England
Argentine naturalists
Argentine people of American descent
Argentine people of English descent
Argentine people of Irish descent
English ornithologists
English people of American descent
English people of Irish descent
Lamarckism
People from Quilmes
Victorian novelists
Vitalists | William Henry Hudson | Biology | 3,206 |
3,045,894 | https://en.wikipedia.org/wiki/Nortel%20Meridian | Nortel Meridian is a private branch exchange telephone switching system. It provides advanced voice features, data connectivity, LAN communications, computer telephony integration (CTI), and information services for communication applications ranging from 60 to 80,000 lines.
History
Exploratory development on digital technology, common for the SL-1 (PBX) and the DMS (public switch) product lines, began in 1969 at Northern Telecom, while R&D activities related to the SL-1 started in 1971. SL stands for Stored Logic.
The original products were developed with a proprietary Bell-Northern Research toolset and a Pascal-like language called Protel, and ran without a specific operating system. In the 1990s the platform moved to VxWorks, a commercial real-time embedded operating system, at which time the model numbers were changed to add the letter C to the end of the option numbers.
It was introduced by Northern Telecom in December 1974 at the USITA convention in San Francisco, with an original capacity of 100 to 7,600 lines, and became the first fully digital PBX announced on the global market, aimed at the smaller PBX segment. In the early 1970s, most PBXs were either electromechanical (e.g. cross-bar) or based on a hybrid technology (e.g. a switching matrix made from a two-dimensional array of contacts, with control performed by electronic logic). For this reason, the SL-1 enjoyed great success in the enterprise market, both in North America and globally.
Its success went on to power the company into a leadership position in the telephony world, and led to expanded designs "up and down" to provide products of all sizes, including the DMS series of high-end machines and the Meridian Norstar for smaller installations of up to 200 users. The SL-1 was gradually enhanced (peripheral hardware, packaging, etc.) and renamed Meridian-1 in the late 1980s. The Meridian-1 has evolved to support IP telephony and other next-generation IP services.
Impact
The Meridian has 43 million installed users worldwide, making it the most widely used PBX.
The Meridian was one of the few PBXs still available from a major communications supplier that could be configured as a non-VoIP PBX and upgraded to a hybrid system with VoIP added.
Models
The Meridian 1 range currently consists of several models:
Meridian 1 Option 11C (60-800 lines)
Meridian 1 Option 11C Mini (60-128 lines)
Meridian 1 Option 61C (600-2000 lines)
Meridian 1 Option 81C (200-16,000 lines)
Additionally, other products have been sold using the Meridian brand:
The Norstar key telephone system was sold as the Meridian Norstar in some markets, until the mid-1990s
A large-scale switch based on the DMS-100 is sold as the Meridian SL-100
Resellers and accessory manufacturers frequently but erroneously use the phrase "Meridian Option" to refer to the Meridian 1 range, to distinguish it from the smaller Norstar and the larger SL-100
Digital Line Card
A digital line card is an intelligent peripheral equipment (IPE) device which can be installed in the IPE module. It provides 16 voice and 16 data communication links between a Meridian 1 switch and modular digital telephones.
The digital line card supports voice only or simultaneous voice and data service over a single twisted pair of standard telephone wiring. When a Meridian digital telephone is equipped with the data option, an asynchronous ASCII terminal, or a PC acting as an asynchronous ASCII terminal, can be connected to the system through the digital telephone.
Physical description
Digital line cards are housed in the Intelligent Peripheral Equipment (IPE) Modules. Up to 16 cards are supported.
The digital line card circuitry is mounted on a double-sided printed circuit board. The card connects to the backplane through a 160-pin edge connector. The faceplate of the digital line card is equipped with a red LED that lights when the card is disabled.
When the card is installed, the LED remains lit for two to five seconds as a self-test runs. If the self-test completes successfully, the LED flashes three times and remains lit until the card is configured and enabled in software, then the LED goes out. If the LED continually flashes or remains weakly lit, there is a fault detected in the card.
Functional description
The digital line card is equipped with 16 identical digital line interfaces. Each interface provides a multiplexed voice, data, and signaling path to and from a digital terminal (telephone) over a 2-wire full-duplex 512 kHz time-division multiplexing digital link. Each digital telephone and associated data terminal is assigned a separate Terminal Number (TN) in the system database, giving a total of 32 addressable units per card.
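As a rough illustration of the per-card bookkeeping described above, the following sketch enumerates 32 addressable units for one card: each of the 16 interfaces carries one voice terminal and one data terminal. The unit-numbering convention used here (voice units 0–15, data units 16–31) and the function name are assumptions for illustration only, not the actual Meridian TN format.

```python
# Hypothetical model of one digital line card: 16 interfaces, each with one
# voice terminal and one data terminal, giving 32 addressable units.
# The numbering convention below is illustrative, not Nortel's TN format.

def card_units(card_slot):
    """Yield (card_slot, unit_number, kind) for every addressable unit."""
    for interface in range(16):
        yield (card_slot, interface, "voice")       # assumed voice units 0-15
        yield (card_slot, interface + 16, "data")   # assumed data units 16-31

units = list(card_units(card_slot=0))
print(len(units))   # 32 addressable units per card
print(units[:2])    # [(0, 0, 'voice'), (0, 16, 'data')]
```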
References
Extracts from Nortel IP Telephony
TCM Loop
Time Compression Multiplexing (TCM) is the standard communication protocol used by Nortel digital telephones on a 2-wire tip and ring circuit. Each digital phone line terminates at a station port on a Digital Voice Card (DVC) in the PBX. The circuit interface operates through transformer coupling, which provides foreign-voltage protection between the TCM loop and the digital line. The maximum length of a normal TCM loop is 300 m (1000 ft). The minimum voltage at the telephone is 10 V DC, and a TCM port should have between 15 and 20 V DC across the Tip and Ring with the phone disconnected. The port connection may be through a punch-down block or a modular connector called a TELADAPT plug (or socket), Nortel parlance for a phone connector.
Evolution Path
The Nortel Meridian 1 can be upgraded to support VoIP in two forms:
VoIP Trunking, the Meridian 1 can have ITG-Trunk cards added to it to support PBX-to-PBX voice trunking using H.323
VoIP Line (VoIP Sets), the Meridian 1 can have IP Line cards added to it to support VoIP sets.
The introduction of Release 3.0 for the Meridian 1, otherwise known as CS1000 Release 3.0, also provides an upgrade path for the existing customer base to upgrade a Meridian 1 to an IP-PBX
The introduction of the E-MetroTel UCX4.5 MDSE package, leveraging the MGC card, line cards and cabinets, provides an upgrade path for existing customers
See also
List of telephone switches
Nortel business phones
Portico TVA
References
External links
Network World. 19 November. 1990
Meridian product group home page
Meridian Product Image Library
Nortel Meridian / Avaya CS1K
Meridian (CS1000) Documentation
Telephone exchanges
Meridian
Computer telephony integration | Nortel Meridian | Technology | 1,383 |
73,169,134 | https://en.wikipedia.org/wiki/HD%20189080 | HD 189080, also known as HR 7621 or rarely 74 G. Telescopii, is a solitary orange-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.18, placing it near the limit for naked eye visibility. Gaia DR3 parallax measurements place it at a distance of 357 light years, and it is currently receding from the Sun, as indicated by its heliocentric radial velocity. At its current distance, HD 189080's brightness is diminished by 0.17 magnitudes due to extinction from interstellar dust. It has an absolute magnitude of +1.1.
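The quoted absolute magnitude follows from the apparent magnitude, distance and extinction via the distance modulus, M = m − 5·log10(d / 10 pc) − A. Below is a minimal sketch of that relation; the function name is illustrative, and the result differs slightly from the quoted +1.1 because of rounding in the published inputs and the treatment of extinction.

```python
import math

def absolute_magnitude(m, d_pc, extinction=0.0):
    """Distance modulus: M = m - 5*log10(d/10 pc) - A."""
    return m - 5 * math.log10(d_pc / 10.0) - extinction

d_pc = 357 / 3.2616                      # 357 light years ~ 109.5 parsecs
print(round(absolute_magnitude(6.18, d_pc), 2))                   # ~0.98
print(round(absolute_magnitude(6.18, d_pc, extinction=0.17), 2))  # ~0.81
```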
This is an evolved red giant with a stellar classification of K0 III. It is currently on the red giant branch, fusing hydrogen in a shell around an inert helium core. It has 119% of the mass of the Sun, but at the age of 4.83 billion years it has expanded to 9.9 times the radius of the Sun. It radiates 43.6 times the luminosity of the Sun from its enlarged photosphere. HD 189080 is slightly metal-deficient, with [Fe/H] = −0.11, and it spins too slowly for its rotation to be measured accurately.
References
K-type giants
Telescopium
Telescopii, 74
CD-49 12949
189080
098482
7621 | HD 189080 | Astronomy | 288 |
15,315,411 | https://en.wikipedia.org/wiki/Chung%20Tao%20Yang | Chung Tao Yang, or Chung-Tao Yang, Yang Zhongdao () (May 4, 1923 – 2005), was a notable Chinese American topologist. He was an academician of the Academia Sinica and served as the chair of the Department of Mathematics, University of Pennsylvania.
Life
Born in Pingyang County, Wenzhou, Zhejiang Province, he graduated from Wenzhou Middle School in 1942. He graduated from Zhejiang University in 1946 and his main academic advisor was Su Buqing. From 1946 to 1948 he was an assistant in the Department of Mathematics, Zhejiang University. From 1949 to 1950 he was a lecturer at National Taiwan University. During this time he was an assistant and later a researcher in the Institute of Mathematics, Academia Sinica.
Yang went to the United States and obtained his Ph.D. from Tulane University in 1952. From 1952 to 1954, he taught at the University of Illinois, and from 1954 to 1956, he was a visiting member at the Institute for Advanced Study, where he began a lifelong collaboration with Deane Montgomery. In 1956 he became an assistant professor at the University of Pennsylvania. He was promoted to associate professor two years later and professor in 1961. He served as chair of the department from 1978 through 1983. He retired and became professor emeritus in 1991.
Yang was elected in 1968 to the Academia Sinica. From 1992 to 2004 he was an advisor for the Institute of Mathematics, Academia Sinica.
Research
Yang's earliest research focused on finite projective geometry.
Yang worked mainly in differential topology (especially group actions on manifolds) and published numerous papers in this field, many in collaboration with Deane Montgomery.
His best known work was on the Blaschke conjecture. His theorem, combined with the results of others, established the conjecture for spheres.
References
External links
The Mathematics Genealogy Project - Chung-Tao Yang (English)
Our Mathematical Ancestors (October 1999) (English)
Institute of Mathematics, Academia Sinica, Yang's contributions to topology (Chinese & English)
Obituary from Department of Mathematics University of Pennsylvania
1923 births
2005 deaths
20th-century American mathematicians
21st-century American mathematicians
Taiwanese emigrants to the United States
Zhejiang University alumni
Members of Academia Sinica
Topologists
Chinese emigrants to Taiwan | Chung Tao Yang | Mathematics | 449 |
58,830,921 | https://en.wikipedia.org/wiki/Salt%20crust | A salt crust is a method of cooking by completely covering an ingredient such as fish, chicken or vegetables in salt (sometimes bound together by water or egg white) before baking. The salt layer acts as insulation and helps cook the food in an even and gentle manner. After baking, the salt crust is cracked and discarded, revealing the moist and evenly cooked food.
Technique and foods
Typically delicate foods such as fish, chicken/poultry or vegetables are cooked using this method. In each case the aim is to lock in moisture, protect the food from drying, ensure even cooking, and maximise the flavour.
The salt crust can be created simply by adding a sufficient quantity of coarse or fine salt to cover the food item. Water may be sprayed on top, to help the salt form a hard crust. Alternatively, the salt may be mixed with egg white to form a pliable paste.
Baking typically occurs in a moderately hot oven, with the salt crust acting as a cooking vessel. The crust slows heat transfer to the food, creating gentle, even, dry heat that is beneficial to most proteins.
To serve, the crust is broken and carefully removed, to avoid leaving excess salt residues in the food.
Steak can also be cooked using a similar method, rather than being broiled at the risk of losing its juices. Coating a pan with salt and cooking the steak, ideally about an inch thick or less, on top of it yields a moister and more flavourful cut. The pan is heated until the salt crackles, and steaks cooked with this method usually develop a crispy crust formed from the salt. This is a key difference from other salt crusts: usually the crust is discarded, but in this case it is used to add texture and flavour to the steak.
History
The earliest known recipe for salt-baking dates from the fourth century BCE, in Archestratus' Life of Luxury. The recipe calls for a whole, round white fish, such as sea bass, snapper or sea bream, to be cleaned and gutted, seasoned with thyme inserted into the cavity, and then encased in two pounds of salt bound together with water and egg whites.
In a Muslim cookbook originating from the thirteenth century, a layer of salt is placed on a new terracotta tile as a base, the fish is placed on top, and another layer of salt is added over it before the whole is placed in an oven.
The first recorded reference in China to a technique resembling baking in a salt crust is salt-baked chicken from Dong Jing in the province of Guangdong during the Qing Dynasty (1644–1911). The chicken was cooked and preserved in the salt fields of the area, which gave it added flavour. A more recent traditional recipe builds a cocoon of salt around the chicken, protecting it from direct heat, ensuring even cooking and preventing the meat from drying out. The chicken is wrapped in a tight parcel, concentrating all its natural flavours and resulting in a succulent and tender product. This can be seen as a combination of the salt-crust method with the French en papillote style.
Around the world
Salt-crusted fish appears in many countries, including France, Italy, Spain and China.
In southern Italy, fish native to the region such as bass, bream, trout or snapper is traditionally baked in a salt crust, using a combination of coarse salt for the base and fine salt for the top layer.
In Thailand, salt crusted fish is cooked, stuffed with lemongrass.
In Turkey, chicken baked in salt crust is a traditional dish from the province of Hatay, with records dating back to the Ottoman period. It is believed that the technique may have been adopted from China.
In Colombia, beef tenderloin is baked in a salt crust, then wrapped in a kitchen towel and baked on top of hot coals.
See also
References
Cooking techniques
Culinary terminology
Edible salt | Salt crust | Chemistry | 841 |
10,376 | https://en.wikipedia.org/wiki/Euclidean%20domain | In mathematics, more specifically in ring theory, a Euclidean domain (also called a Euclidean ring) is an integral domain that can be endowed with a Euclidean function which allows a suitable generalization of Euclidean division of integers. This generalized Euclidean algorithm can be put to many of the same uses as Euclid's original algorithm in the ring of integers: in any Euclidean domain, one can apply the Euclidean algorithm to compute the greatest common divisor of any two elements. In particular, the greatest common divisor of any two elements exists and can be written as a linear combination
of them (Bézout's identity). Also, the existence of efficient algorithms for Euclidean division of integers and of polynomials in one variable over a field is of basic importance in computer algebra.
It is important to compare the class of Euclidean domains with the larger class of principal ideal domains (PIDs). An arbitrary PID has much the same "structural properties" as a Euclidean domain (or, indeed, even as the ring of integers), but lacks an analogue of the Euclidean algorithm and extended Euclidean algorithm to compute greatest common divisors. So, given an integral domain, it is often very useful to know that it has a Euclidean function: in particular, this implies that it is a PID. However, if there is no "obvious" Euclidean function, then determining whether a ring is a PID is generally a much easier problem than determining whether it is a Euclidean domain.
Every ideal in a Euclidean domain is principal, which implies a suitable generalization of the fundamental theorem of arithmetic: every Euclidean domain is also a unique factorization domain. Euclidean domains appear in the following chain of class inclusions:
Definition
Let R be an integral domain. A Euclidean function on R is a function f from R \ {0} to the non-negative integers satisfying the following fundamental division-with-remainder property:
(EF1) If a and b are in R and b is nonzero, then there exist q and r in R such that a = bq + r and either r = 0 or f(r) < f(b).
A Euclidean domain is an integral domain which can be endowed with at least one Euclidean function. A particular Euclidean function is not part of the definition of a Euclidean domain, as, in general, a Euclidean domain may admit many different Euclidean functions.
In this context, and are called respectively a quotient and a remainder of the division (or Euclidean division) of by . In contrast with the case of integers and polynomials, the quotient is generally not uniquely defined, but when a quotient has been chosen, the remainder is uniquely defined.
Most algebra texts require a Euclidean function to have the following additional property:
(EF2) For all nonzero a and b in R, f(a) ≤ f(ab).
However, one can show that (EF1) alone suffices to define a Euclidean domain; if an integral domain R is endowed with a function g satisfying (EF1), then R can also be endowed with a function satisfying both (EF1) and (EF2) simultaneously. Indeed, for nonzero a in R, one can define f(a) as follows:
f(a) = min { g(x) : x a nonzero element of the principal ideal aR }
In words, one may define f(a) to be the minimum value attained by g on the set of all non-zero elements of the principal ideal generated by a.
A Euclidean function f is multiplicative if f(ab) = f(a) f(b) and f is never zero. It follows that f(1) = 1. More generally, f(a) = 1 if and only if a is a unit.
Notes on the definition
Many authors use other terms in place of "Euclidean function", such as "degree function", "valuation function", "gauge function" or "norm function". Some authors also require the domain of the Euclidean function to be the entire ring R; however, this does not essentially affect the definition, since (EF1) does not involve the value of f(0). The definition is sometimes generalized by allowing the Euclidean function to take its values in any well-ordered set; this weakening does not affect the most important implications of the Euclidean property.
The property (EF1) can be restated as follows: for any principal ideal I of R with nonzero generator b, all nonzero classes of the quotient ring R/I have a representative r with f(r) < f(b). Since the possible values of f are well-ordered, this property can be established by showing that f(r) < f(b) for any r not in I with minimal value of f(r) in its class. Note that, for a Euclidean function that is so established, there need not exist an effective method to determine q and r in (EF1).
Examples
Examples of Euclidean domains include:
Any field. Define f(x) = 1 for all nonzero x.
Z, the ring of integers. Define f(n) = |n|, the absolute value of n.
Z[i], the ring of Gaussian integers. Define f(a + bi) = a^2 + b^2, the norm of the Gaussian integer a + bi. (A division sketch in this ring is given just after this list.)
Z[ω] (where ω is a primitive (non-real) cube root of unity), the ring of Eisenstein integers. Define f(a + bω) = a^2 − ab + b^2, the norm of the Eisenstein integer a + bω.
K[X], the ring of polynomials over a field K. For each nonzero polynomial P, define f(P) to be the degree of P.
K[[X]], the ring of formal power series over the field K. For each nonzero power series P, define f(P) as the order of P, that is the degree of the smallest power of X occurring in P. In particular, for two nonzero power series P and Q, f(P) ≤ f(Q) if and only if P divides Q.
Any discrete valuation ring. Define f(x) to be the highest power of the maximal ideal M containing x. Equivalently, let g be a generator of M, and let v be the unique integer such that g^v is an associate of x; then define f(x) = v. The previous example is a special case of this.
A Dedekind domain with finitely many nonzero prime ideals P1, ..., Pn. Define f(x) = v1(x) + ... + vn(x), where vi is the discrete valuation corresponding to the ideal Pi.
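To make the Gaussian-integer example concrete (see the forward reference in the list above), here is a minimal sketch of Euclidean division and the resulting gcd computation in Z[i], with a Gaussian integer x + yi represented as a pair (x, y). Rounding the exact quotient to the nearest Gaussian integer gives a remainder with N(r) ≤ N(b)/2 < N(b), which is exactly property (EF1); the function names are illustrative.

```python
def norm(a):
    """Norm N(x + yi) = x^2 + y^2 of a Gaussian integer given as (x, y)."""
    x, y = a
    return x * x + y * y

def divmod_gauss(a, b):
    """Euclidean division in Z[i]: return (q, r) with a = q*b + r, N(r) < N(b)."""
    (ax, ay), (bx, by) = a, b
    n = norm(b)
    # Exact quotient a/b = a * conj(b) / N(b); round each coordinate.
    qx = round((ax * bx + ay * by) / n)
    qy = round((ay * bx - ax * by) / n)
    # Remainder r = a - q*b, where q*b = (qx*bx - qy*by) + (qx*by + qy*bx)i.
    rx = ax - (qx * bx - qy * by)
    ry = ay - (qx * by + qy * bx)
    return (qx, qy), (rx, ry)

def gcd_gauss(a, b):
    """gcd in Z[i] via the Euclidean algorithm (determined up to a unit)."""
    while b != (0, 0):
        _, r = divmod_gauss(a, b)
        a, b = b, r
    return a

print(gcd_gauss((4, 2), (2, 4)))  # (-2, 0): an associate of 2, dividing both
```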
Examples of domains that are not Euclidean domains include:
Every domain that is not a principal ideal domain, such as the ring of polynomials in at least two indeterminates over a field, or the ring of univariate polynomials with integer coefficients, or the number ring Z[√−5].
The ring of integers of Q(√−19), consisting of the numbers (a + b√−19)/2 where a and b are integers and both even or both odd. It is a principal ideal domain that is not Euclidean. This was proved by Theodore Motzkin and was the first case known.
The coordinate ring R[X, Y]/(X^2 + Y^2 + 1) over the real numbers is also a principal ideal domain that is not Euclidean. To see that it is not a Euclidean domain, it suffices to show that, for every non-zero prime p, the map from the units of the ring together with 0 to the quotient ring modulo p, induced by the quotient map, is not surjective.
Properties
Let R be a domain and f a Euclidean function on R. Then:
R is a principal ideal domain (PID). In fact, if I is a nonzero ideal of R then any element a of I \ {0} with minimal value (on that set) of f(a) is a generator of I. As a consequence R is also a unique factorization domain and a Noetherian ring. With respect to general principal ideal domains, the existence of factorizations (i.e., that R is an atomic domain) is particularly easy to prove in Euclidean domains: choosing a Euclidean function f satisfying (EF2), x cannot have any decomposition into more than f(x) nonunit factors, so starting with x and repeatedly decomposing reducible factors is bound to produce a factorization into irreducible elements.
Any element of R at which f takes its globally minimal value is invertible in R. If an f satisfying (EF2) is chosen, then the converse also holds, and f takes its minimal value exactly at the invertible elements of R.
If Euclidean division is algorithmic, that is, if there is an algorithm for computing the quotient and the remainder, then an extended Euclidean algorithm can be defined exactly as in the case of integers; a sketch is given after this list of properties.
If a Euclidean domain is not a field then it has an element a with the following property: any element x not divisible by a can be written as x = ay + u for some unit u and some element y. This follows by taking a to be a non-unit with f(a) as small as possible. This strange property can be used to show that some principal ideal domains are not Euclidean domains, as not all PIDs have this property. For example, for d = −19, −43, −67, −163, the ring of integers of Q(√d) is a PID which is not Euclidean, but the cases d = −1, −2, −3, −7, −11 are Euclidean.
However, in many finite extensions of Q with trivial class group, the ring of integers is Euclidean (not necessarily with respect to the absolute value of the field norm; see below).
Assuming the extended Riemann hypothesis, if K is a finite extension of Q and the ring of integers of K is a PID with an infinite number of units, then the ring of integers is Euclidean.
In particular this applies to the case of totally real quadratic number fields with trivial class group.
In addition (and without assuming ERH), if the field K is a Galois extension of Q, has trivial class group and unit rank strictly greater than three, then the ring of integers is Euclidean.
An immediate corollary of this is that if the number field is Galois over Q, its class group is trivial and the extension has degree greater than 8 then the ring of integers is necessarily Euclidean.
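As promised above, here is a minimal sketch of the extended Euclidean algorithm over the integers; the same remainder-sequence loop works in any Euclidean domain with computable division, with // replaced by that domain's quotient operation (for instance, polynomial division over a field).

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g (Bezout)."""
    old_r, r = a, b      # remainder sequence
    old_x, x = 1, 0      # coefficients of a
    old_y, y = 0, 1      # coefficients of b
    while r != 0:
        q = old_r // r   # quotient from Euclidean division
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

print(extended_gcd(240, 46))  # (2, -9, 47): 240*(-9) + 46*47 == 2
```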
Norm-Euclidean fields
Algebraic number fields K come with a canonical norm function on them: the absolute value of the field norm N that takes an algebraic element α to the product of all the conjugates of α. This norm maps the ring of integers of a number field K, say O_K, to the nonnegative rational integers, so it is a candidate to be a Euclidean norm on this ring. If this norm satisfies the axioms of a Euclidean function then the number field K is called norm-Euclidean or simply Euclidean. Strictly speaking it is the ring of integers that is Euclidean since fields are trivially Euclidean domains, but the terminology is standard.
If a field is not norm-Euclidean then that does not mean the ring of integers is not Euclidean, just that the field norm does not satisfy the axioms of a Euclidean function. In fact, the rings of integers of number fields may be divided in several classes:
Those that are not principal and therefore not Euclidean, such as the integers of Q(√−5)
Those that are principal and not Euclidean, such as the integers of Q(√−19)
Those that are Euclidean and not norm-Euclidean, such as the integers of Q(√69)
Those that are norm-Euclidean, such as the Gaussian integers (integers of Q(i))
The norm-Euclidean quadratic fields have been fully classified; they are Q(√d) where d takes the values
−11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, 73.
Every Euclidean imaginary quadratic field is norm-Euclidean and is one of the first five fields in the preceding list.
See also
Valuation (algebra)
Notes
References
Ring theory
Commutative algebra
Domain | Euclidean domain | Mathematics | 2,221 |
11,665,456 | https://en.wikipedia.org/wiki/Slip%20%28vehicle%20dynamics%29 | In (automotive) vehicle dynamics, slip is the relative motion between a tire and the road surface it is moving on. This slip can be generated either by the tire's rotational speed being greater or less than the free-rolling speed (usually described as percent slip), or by the tire's plane of rotation being at an angle to its direction of motion (referred to as slip angle).
In rail vehicle dynamics, this overall slip of the wheel relative to the rail is called creepage. It is distinguished from the local sliding velocity of surface particles of wheel and rail, which is called micro-slip.
Longitudinal slip
The longitudinal slip is generally expressed as the percentage difference between the surface speed of the wheel and the speed of the axle relative to the road surface, as:
slip = ((Ω × r_e) − v) / v × 100%
where Ω is the rotational speed of the wheel, r_e is the wheel radius at the point of contact and v is the vehicle speed in the plane of the tire. A positive slip indicates that the wheels are spinning; negative slip indicates that they are skidding. Locked brakes, Ω = 0, mean that slip = −100%: the tire is sliding without rotating. Rotation with no forward velocity, Ω ≠ 0 and v = 0, means that the slip is infinite.
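A minimal numeric sketch of the formula above, assuming SI units and a nonzero vehicle speed; the function name and example numbers are illustrative.

```python
def longitudinal_slip(omega, r_e, v):
    """Longitudinal slip in percent: 100 * (omega * r_e - v) / v.

    omega: rotational speed of the wheel (rad/s)
    r_e:   wheel radius at the point of contact (m)
    v:     vehicle speed in the plane of the tire (m/s), assumed nonzero
    """
    return 100.0 * (omega * r_e - v) / v

print(longitudinal_slip(0.0, 0.3, 20.0))   # -100.0: locked brakes, pure sliding
print(longitudinal_slip(70.0, 0.3, 20.0))  # 5.0: mild wheel spin
```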
Lateral slip
The lateral slip of a tire is the angle between the direction it is moving and the direction it is pointing. This can occur, for instance, in cornering, and is enabled by deformation in the tire carcass and tread. Despite the name, no actual sliding is necessary for small slip angles. Sliding may occur, starting at the rear of the contact patch, as slip angle increases.
The slip angle can be defined as:
α = arctan(v_y / v_x)
where v_x and v_y are, respectively, the longitudinal and lateral components of the wheel's velocity.
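Similarly, a minimal sketch of the slip-angle definition; atan2 is used so the sign of the angle follows the sign of the lateral velocity. The function name is illustrative.

```python
import math

def slip_angle_deg(v_x, v_y):
    """Slip angle alpha = arctan(v_y / v_x), returned in degrees."""
    return math.degrees(math.atan2(v_y, v_x))

print(round(slip_angle_deg(20.0, 1.0), 2))  # 2.86 degrees of lateral slip
```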
References
See also
Contact patch
Frictional contact mechanics
Aristotle's wheel paradox
Explanation of elastic slip, with animation, at tec-science.com
Tires
Motorcycle dynamics | Slip (vehicle dynamics) | Physics | 351 |
209,441 | https://en.wikipedia.org/wiki/Perseus%20%28constellation%29 | Perseus is a constellation in the northern sky, named after the Greek mythological hero Perseus. It is one of the 48 ancient constellations listed by the 2nd-century astronomer Ptolemy, and among the 88 modern constellations defined by the International Astronomical Union (IAU). It is located near several other constellations named after ancient Greek legends surrounding Perseus, including Andromeda to the west and Cassiopeia to the north. Perseus is also bordered by Aries and Taurus to the south, Auriga to the east, Camelopardalis to the north, and Triangulum to the west. Some star atlases during the early 19th century also depicted Perseus holding the disembodied head of Medusa, whose asterism was named together as Perseus et Caput Medusae; however, this never came into popular usage.
The galactic plane of the Milky Way passes through Perseus, whose brightest star is the yellow-white supergiant Alpha Persei (also called Mirfak), which shines at magnitude 1.79. It and many of the surrounding stars are members of an open cluster known as the Alpha Persei Cluster. The best-known star, however, is Algol (Beta Persei), linked with ominous legends because of its variability, which is noticeable to the naked eye. Rather than being an intrinsically variable star, it is an eclipsing binary. Other notable star systems in Perseus include X Persei, a binary system containing a neutron star, and GK Persei, a nova that peaked at magnitude 0.2 in 1901. The Double Cluster, comprising two open clusters quite near each other in the sky, was known to the ancient Chinese. The constellation gives its name to the Perseus cluster (Abell 426), a massive galaxy cluster located 250 million light-years from Earth. It hosts the radiant of the annual Perseids meteor shower—one of the most prominent meteor showers in the sky.
History and mythology
In Greek mythology, Perseus was the son of Danaë, who was sent by King Polydectes to bring the head of Medusa the Gorgon—whose visage caused all who gazed upon her to turn to stone. Perseus slew Medusa in her sleep, and Pegasus and Chrysaor appeared from her body. Perseus continued to the realm of Cepheus whose daughter Andromeda was to be sacrificed to Cetus the sea monster.
Perseus rescued Andromeda from the monster by killing it with his sword. He turned Polydectes and his followers to stone with Medusa's head and appointed Dictys the fisherman king. Perseus and Andromeda married and had six children. In the sky, Perseus lies near the constellations Andromeda, Cepheus, Cassiopeia (Andromeda's mother), Cetus, and Pegasus.
In Neo-Assyrian Babylonia (911–605 BC), the constellation of Perseus was known as the Old Man constellation (SU.GI), then associated with East in the MUL.APIN, an astronomical text from the 7th century.
In non-Western astronomy
Four Chinese constellations are contained in the area of the sky identified with Perseus in the West. Tiānchuán (天船), the Celestial Boat, was the third paranatellon (a star or constellation which rises at the same time as another star or object) of the third house of the White Tiger of the West, representing the boats that Chinese people were reminded to build in case of a catastrophic flood season. Incorporating stars from the northern part of the constellation, it contained Mu, Delta, Psi, Alpha, Gamma and Eta Persei. Jīshuǐ (積水), the Swollen Waters, was the fourth paranatellon of the aforementioned house, representing the potential of unusually high floods during the end of August and beginning of September at the beginning of the flood season. Lambda and possibly Mu Persei lay within it. Dàlíng (大陵), the Great Trench, was the fifth paranatellon of that house, representing the trenches where criminals executed en masse in August were interred. It was formed by Kappa, Omega, Rho, 24, 17 and 15 Persei. The pile of corpses prior to their interment was represented by Jīshī (積屍, Pi Persei), the sixth paranatellon of the house. The Double Cluster, h and Chi Persei, had special significance in Chinese astronomy.
In Polynesia, Perseus was not commonly recognized as a separate constellation; the only people that named it were those of the Society Islands, who called it Faa-iti, meaning "Little Valley". Algol may have been named Matohi by the Māori people, but the evidence for this identification is disputed. Matohi ("Split") occasionally came into conflict with Tangaroa-whakapau over which of them should appear in the sky, the outcome affecting the tides. It matches the Maori description of a blue-white star near Aldebaran but does not disappear as the myth would indicate.
Characteristics
Perseus is bordered by Aries and Taurus to the south, Auriga to the east, Camelopardalis and Cassiopeia to the north, and Andromeda and Triangulum to the west. Covering 615 square degrees, it ranks twenty-fourth of the 88 constellations in size. It appears prominently in the northern sky during the Northern Hemisphere's winter. Its main asterism consists of 19 stars. The constellation's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a 26-sided polygon, specified in the equatorial coordinate system by ranges of right ascension and declination. The International Astronomical Union (IAU) adopted the three-letter abbreviation "Per" for the constellation in 1922.
Features
Stars
Algol (from the Arabic رأس الغول Ra's al-Ghul, which means The Demon's Head), also known by its Bayer designation Beta Persei, is the best-known star in Perseus. Representing the head of the Gorgon Medusa in Greek mythology, it was called Horus in Egyptian mythology and Rosh ha Satan ("Satan's Head") in Hebrew. Located 92.8 light-years from Earth, it varies in apparent magnitude from a minimum of 3.5 to a maximum of 2.3 over a period of 2.867 days.
The star system is the prototype of a group of eclipsing binary stars named Algol variables, though it has a third member to make up what is actually a triple star system. The brightest component is a blue-white main-sequence star of spectral type B8V, which is 3.5 times as massive and 180 times as luminous as the Sun. The secondary component is an orange subgiant star of type K0IV that has begun cooling and expanding to 3.5 times the radius of the Sun, and has 4.5 times the luminosity and 80% of its mass. These two are separated by only 0.05 astronomical units (AU)—five percent of the distance between the Earth and Sun; the main dip in brightness arises when the larger fainter companion passes in front of the hotter brighter primary. The tertiary component is a main sequence star of type A7, which is located on average 2.69 AU from the other two stars. AG Persei is another Algol variable in Perseus, whose primary component is a B-type main sequence star with an apparent magnitude of 6.69. Phi Persei is a double star, although the two components do not eclipse each other. The primary star is a Be star of spectral type B0.5, possibly a giant star, and the secondary companion is likely a stellar remnant. The secondary has a similar spectral type to O-type subdwarfs.
With the historical name Mirfak (Arabic for elbow) or Algenib, Alpha Persei is the brightest star of this constellation with an apparent magnitude of 1.79. A supergiant of spectral type F5Ib located around 590 light-years away from Earth, Mirfak has 5,000 times the luminosity and 42 times the diameter of the Sun. It is the brightest member of the Alpha Persei Cluster (also known as Melotte 20 and Collinder 39), which is an open cluster containing many luminous stars. Neighboring bright stars that are members include the Be stars Delta (magnitude 3.0), Psi (4.3), and 48 Persei (4.0); the Beta Cephei variable Epsilon Persei (2.9); and the stars 29 (5.2), 30 (5.5), 31 (5.0), and 34 Persei (4.7). Of magnitude 4.05, nearby Iota Persei has been considered a member of the group, but is actually located a mere 34 light-years distant. This star is very similar to the Sun, shining with 2.2 times its luminosity. It is a yellow main sequence star of spectral type G0V. Extensive searches have failed to find evidence of it having a planetary system.
Zeta Persei is the third-brightest star in the constellation at magnitude 2.86. Around 750 light-years from Earth, it is a blue-white supergiant 26–27 times the radius of the Sun and 47,000 times its luminosity. It is the brightest star (as seen from Earth) of a moving group of bright blue-white giant and supergiant stars, the Perseus OB2 Association or Zeta Persei Association. Zeta is a triple star system, with a companion blue-white main sequence star of spectral type B8 and apparent magnitude 9.16 around 3,900 AU distant from the primary, and a white main sequence star of magnitude 9.90 and spectral type A2, some 50,000 AU away, that may or may not be gravitationally bound to the other two. X Persei is a double system in this association; one component is a hot, bright star and the other is a neutron star. With an apparent magnitude of 6.72, it is too dim to be seen with the naked eye even in perfectly dark conditions. The system is an X-ray source and the primary star appears to be undergoing substantial mass loss. Once thought to be a member of the Perseus OB2 Association, Omicron Persei (Atik) is a multiple star system with a combined visual magnitude of 3.85. It is composed of two blue-white stars—a giant of spectral class B1.5 and main sequence star of B3—which orbit each other every 4.5 days and are distorted into ovoids due to their small separation. The system has a third star about which little is known. At an estimated distance of 1,475 light-years from Earth, the system is now thought to lie too far from the center of the Zeta Persei group to belong to it.
GRO J0422+32 (V518 Persei) is another X-ray binary in Perseus. One component is a red dwarf star of spectral type M4.5V, which orbits a mysterious dense and heavy object—possibly a black hole—every 5.1 hours. The system is an X-ray nova, meaning that it experiences periodic outbursts in the X-ray band of the electromagnetic spectrum. If the system does indeed contain a black hole, it would be the smallest one ever recorded. Further analysis in 2012 calculated a mass of 2.1 solar masses, which raises questions as to what the object actually is as it appears to be too small to be a black hole.
GK Persei, also known as Nova Persei 1901, is a bright nova that appeared halfway between Algol and Delta Persei. Discovered on 21 February 1901 by the Scottish amateur astronomer Thomas David Anderson, it peaked at magnitude 0.2, almost as bright as Capella and Vega. It faded to magnitude 13 around 30 years after its peak brightness. Xi Persei, traditionally known as Menkhib, is a blue giant of spectral type O7III and one of the hottest bright stars in the sky, with a surface temperature of 37,500 K. It is one of the more massive stars, at between 26 and 32 solar masses, and is 330,000 times as luminous as the Sun.
Named Gorgonea Tertia, Rho Persei varies in brightness like Algol, but is a pulsating rather than eclipsing star. At an advanced stage of stellar evolution, it is a red giant that has expanded for the second time to have a radius around 150 times that of the Sun. Its helium has been fused into heavier elements and its core is composed of carbon and oxygen. It is a semiregular variable star of the Mu Cephei type, whose apparent magnitude varies between 3.3 and 4.0 with periods of 50, 120 and 250 days. The Double Cluster contains three even larger stars, each over 700 solar radii: S, RS, and SU Persei are all semiregular pulsating M-type supergiants. The stars are not visible to the naked eye; SU Persei, the brightest of the three, has an apparent magnitude of 7.9 and thus is visible through binoculars. AX Persei is another binary star; the primary component is a red giant in an advanced phase of stellar evolution, which is transferring material onto an accretion disc around a smaller star. The star system is one of the few eclipsing symbiotic binaries, but is unusual because the secondary star is not a white dwarf but an A-type star. DY Persei is a variable star that is the prototype of DY Persei variables, which are carbon-rich R Coronae Borealis variables that exhibit the variability of asymptotic giant branch stars. DY Persei itself is a carbon star that is too dim to see through binoculars, with an apparent magnitude of 10.6.
Seven stars in Perseus have been found to have planetary systems. V718 Persei is a star in the young open cluster IC 348 that appears to be periodically eclipsed by a giant planet every 4.7 years. This has been inferred to be an object with a maximum mass of 6 times that of Jupiter and an orbital radius of 3.3 AU.
Deep-sky objects
The galactic plane of the Milky Way passes through Perseus, but is much less obvious than elsewhere in the sky as it is mostly obscured by molecular clouds. The Perseus Arm is a spiral arm of the Milky Way galaxy and stretches across the sky from the constellation Cassiopeia through Perseus and Auriga to Gemini and Monoceros. This segment is towards the rim of the galaxy.
Within the Perseus Arm lie two open clusters (NGC 869 and NGC 884) known as the Double Cluster. Sometimes known as h and Chi (χ) Persei, respectively, they are easily visible through binoculars and small telescopes. Both lie more than 7,000 light-years from Earth and are several hundred light-years apart. Both clusters are of approximately magnitude 4 and 0.5 degrees in diameter. The two are Trumpler class I 3 r clusters, though NGC 869 is a Shapley class f and NGC 884 is a Shapley class e cluster. These classifications indicate that they are both quite rich (dense); NGC 869 is the richer of the pair. The clusters are both distinct from the surrounding star field and are clearly concentrated at their centers. The constituent stars, numbering over 100 in each cluster, range widely in brightness.
M34 is an open cluster that appears at magnitude 5.5 and is approximately 1,500 light-years from Earth. It contains about 100 stars scattered over a field of view larger than that of the full moon. M34 can be resolved with good eyesight but is best viewed using a telescope at low magnification. IC 348 is a relatively young open cluster that is still contained within the nebula from which its stars formed. It is located about 1,027 light-years from Earth, is about 2 million years old, and contains many stars with circumstellar disks. Many brown dwarfs have been discovered in this cluster because of its youth; since brown dwarfs cool as they age, they are easier to find in younger clusters.
There are many nebulae in Perseus. M76, also called the Little Dumbbell Nebula, is a planetary nebula. It measures about two by one arc-minutes and has an apparent magnitude of 10.1. NGC 1499, also known as the California Nebula, is an emission nebula that was discovered in 1884–85 by the American astronomer Edward E. Barnard. It is very difficult to observe visually because its low surface brightness makes it appear dimmer than most other emission nebulae. NGC 1333 is a reflection nebula and a star-forming region. Perseus also contains a giant molecular cloud, the Perseus molecular cloud; it belongs to the Orion Spur and is known for its low rate of star formation compared with similar clouds.
Perseus contains some notable galaxies. NGC 1023 is a barred spiral galaxy of magnitude 10.35. It is the principal member of the NGC 1023 group of galaxies and is possibly interacting with another galaxy. NGC 1260 is either a lenticular or a tightly wound spiral galaxy. It was the host galaxy of the supernova SN 2006gy, one of the brightest ever recorded, and is a member of the Perseus Cluster (Abell 426), a massive galaxy cluster; with a redshift of 0.0179, Abell 426 is the closest major cluster to the Earth. NGC 1275, a component of the cluster, is a Seyfert galaxy containing an active nucleus that produces jets of material, surrounding the galaxy with massive bubbles. These bubbles create sound waves that travel through the Perseus Cluster, sounding a B flat 57 octaves below middle C. NGC 1275 is a cD galaxy that has undergone many galactic mergers throughout its existence, as evidenced by the "high velocity system", the remnants of a smaller galaxy, surrounding it. Its active nucleus is a strong source of radio waves. 3C 31 is an active galaxy and radio source in Perseus 237 million light-years from Earth (redshift 0.0173). Its jets, caused by the supermassive black hole at its center, extend several million light-years in opposing directions, making them some of the largest objects in the universe.
Meteor showers
The Perseids are a prominent annual meteor shower that appears to radiate from Perseus, visible from mid-July and peaking in activity between 9 and 14 August each year. Associated with Comet Swift–Tuttle, they have been observed for about 2,000 years. The September Epsilon Perseids, discovered in 2012, are a meteor shower with an unknown parent body in the Oort cloud.
See also
Perseus (Chinese astronomy)
References
Cited texts
External links
The Deep Photographic Guide to the Constellations: Perseus
The clickable Perseus
Warburg Institute Iconographic Database (medieval and early modern images of Perseus)
Constellations
Northern constellations
Constellations listed by Ptolemy | Perseus (constellation) | Astronomy | 4,051 |
2,200,848 | https://en.wikipedia.org/wiki/The%20PracTeX%20Journal | The PracTeX Journal, or simply PracTeX, also known as TPJ, was an online journal focussing on practical use of the TeX typesetting system. The first issue appeared in March 2005. It was published by the TeX Users Group and intended to be a complement to their primary print journal, TUGboat. The PracTeX Journal was last published in October 2012.
Topics covered in PracTeX included:
publishing projects or activities accomplished through the use of TeX
problems that were resolved through the use of TeX, or problems with TeX itself and how they were resolved
how to use certain LaTeX packages
questions & answers
introductions for beginners
The editorial board included many long-time and well-known TeX developers, including Lance Carnes, Arthur Ogawa, and Hans Hagen.
References
External links
Journal home page
Online magazines published in the United States
Defunct computer magazines published in the United States
Magazines established in 2005
Magazines disestablished in 2012
TeX
Typesetting | The PracTeX Journal | Mathematics,Technology | 192 |
2,761,044 | https://en.wikipedia.org/wiki/Data%20theft | Data theft is the unauthorized duplication or deletion of an organization's electronic information.
Data theft is a growing phenomenon, primarily caused by system administrators and office workers with access to technology such as database servers, desktop computers, and a growing list of hand-held devices capable of storing digital information, such as USB flash drives, iPods, and even digital cameras. Since employees often spend a considerable amount of time developing contacts and compiling confidential and copyrighted information for the company they work for, they may feel they have some right to that information and be inclined to copy or delete part of it when they leave the company, or to misuse it while they are still employed. Information can be sold and bought and then used by criminals and criminal organizations. Alternatively, an employee may choose to deliberately abuse trusted access to information for the purpose of exposing misconduct by the employer. From the perspective of society, such an act of whistleblowing can be seen as positive, and it is protected by law in certain situations in some jurisdictions, such as the United States.
A common scenario is where a sales person makes a copy of the contact database for use in their next job. Typically, this is a clear violation of their terms of employment.
Notable acts of data theft include those by leaker Chelsea Manning and self-proclaimed whistleblowers Edward Snowden and Hervé Falciani.
Data theft methods
Thumbsucking
Thumbsucking, similar to podslurping, is the intentional use of a portable USB mass storage device, such as a USB flash drive (or "thumbdrive"), to illicitly download confidential data from a network endpoint.
A USB flash drive was allegedly used to remove highly classified documents about the design of U.S. nuclear weapons from a vault at Los Alamos without authorization.
The threat of thumbsucking has been amplified for a number of reasons, including the following:
The storage capacity of portable USB storage devices has increased.
The cost of high-capacity portable USB storage devices has decreased.
Networks have grown more dispersed, the number of remote network access points has increased and methods of network connection have expanded, increasing the number of vectors for network infiltration.
Investigating data theft
Techniques to investigate data theft include stochastic forensics, digital artifact analysis (especially of USB drive artifacts), and other computer forensics techniques.
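As a minimal illustration of the USB-artifact analysis mentioned above, the sketch below enumerates the Windows registry's record of USB mass-storage devices that have been attached to a machine. The registry path is the standard USBSTOR key, but the script is illustrative rather than a forensic tool, and it assumes a Windows host and sufficient privileges.

```python
# Illustrative sketch: enumerate the USB mass-storage device history recorded
# in the Windows registry, a common digital-artifact source in data-theft
# inquiries. Windows-only.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

def list_usb_devices():
    devices = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):     # device-class subkeys
            dev_class = winreg.EnumKey(root, i)           # e.g. "Disk&Ven_..."
            with winreg.OpenKey(root, dev_class) as cls:
                for j in range(winreg.QueryInfoKey(cls)[0]):
                    serial = winreg.EnumKey(cls, j)       # per-device instance ID
                    devices.append((dev_class, serial))
    return devices

if __name__ == "__main__":
    for dev_class, serial in list_usb_devices():
        print(dev_class, serial)
```

In practice, investigators correlate these instance identifiers with setup logs and file-system timestamps to establish when a particular drive was connected.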
See also
Pod slurping
Bluesnarfing
Sneakernet
Data breach
References
External links
USBs' Giant Sucking Sound
Readers Weigh In: Is the IPod a Threat or a Scapegoat?
Online Behaviours that can Lead to Data Theft
Data security
Theft
Data laws | Data theft | Engineering | 527 |
3,063,191 | https://en.wikipedia.org/wiki/Nielsen%20realization%20problem | The Nielsen realization problem is a question, posed by Jakob Nielsen in 1932, about whether finite subgroups of mapping class groups can act on surfaces; it was answered positively by Steven Kerckhoff in 1983.
Statement
Given an oriented surface S, we can divide the group Diff(S), the group of diffeomorphisms of the surface to itself, into isotopy classes to get the mapping class group π0(Diff(S)). The problem asks whether a finite subgroup of the mapping class group of a surface can be realized as the isometry group of a hyperbolic metric on the surface.
The mapping class group acts on Teichmüller space. An equivalent way of stating the question asks whether every finite subgroup of the mapping class group fixes some point of Teichmüller space.
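In symbols, the fixed-point form of the question is the following routine restatement of the sentence above, with MCG(S) the mapping class group and 𝒯(S) the Teichmüller space of the surface S:

```latex
\text{For every finite subgroup } G \le \mathrm{MCG}(S), \text{ is there a point } x \in \mathcal{T}(S) \text{ with } g \cdot x = x \text{ for all } g \in G\,?
```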
History
Nielsen asked in 1932 whether finite subgroups of mapping class groups can act on surfaces.
Kravetz claimed to solve the Nielsen realization problem in 1959, but his proof depended on trying to show that Teichmüller space (with the Teichmüller metric) is negatively curved. A gap was later pointed out in the argument, and Masur showed in 1975 that Teichmüller space is in fact not negatively curved. Kerckhoff gave a correct proof in 1983 that finite subgroups of mapping class groups can act on surfaces, using left earthquakes.
References
Geometric topology
Homeomorphisms
Theorems in topology | Nielsen realization problem | Mathematics | 258 |
22,343,572 | https://en.wikipedia.org/wiki/Flora%20of%20the%20South%20Sandwich%20Islands | This is a list of the flora of the South Sandwich Islands, a group of islands in the subantarctic Atlantic Ocean, part of the British overseas territory of South Georgia and the South Sandwich Islands. The list contains flora in the strict sense; that is, plants only. It comprises a single species of vascular plant, 38 mosses, and 11 liverworts. Unusually, not a single species is known to have naturalised on the islands; all are presumed native.
The islands also contain 5 species of fungus, 41 lichens and 16 diatoms.
Vascular plants
Deschampsia antarctica
Mosses
Andreaea gainii
Andreaea regularis
Anisothecium hookeri
Bartramia patens
Brachythecium austrosalebrosum
Brachythecium fuegianum
Brachythecium glaciale
Bryum argenteum
Bryum dichotomum
Bryum orbiculatifolium
Bryum pseudomicron
Bryum pseudotriquetrum
Campylopus introflexus
Campylopus spiralis
Ceratodon purpureus
Dicranella hilariana
Dicranoweisia brevipes
Dicranoweisia grimmiacea
Ditrichum gemmiferum
Ditrichum heteromallum
Ditrichum hyalinum
Ditrichum lewis-smithii
Hennediella antarctica
Kiaeria pumila
Notoligotrichum trichodon
Pohlia drummondii
Pohlia nutans
Polytrichastrum alpinum
Polytrichum juniperinum
Polytrichum piliferum
Polytrichum strictum
Racomitrium orthotrichaceum
Racomitrium sudeticum
Sanionia georgico-uncinata
Sanionia uncinata
Schizymenium austrogeorgicum
Syntrichia filaris
Syntrichia princeps
Liverworts
Cephalozia badia
Cephaloziella varians
Clasmatocolea rigens
Cryptochila grandiflora
Lepidozia chordulifera
Lophocolea lenta
Lophozia chordulifera
Marchantia berteroana
Pachyglossa dissitifolia
Riccardia georgiensis
Triandrophyllum subtrifidum
References
South Sandwich Islands | Flora of the South Sandwich Islands | Biology | 461 |
18,871,460 | https://en.wikipedia.org/wiki/Van%20Kampen%20diagram | In the mathematical area of geometric group theory, a Van Kampen diagram (sometimes also called a Lyndon–Van Kampen diagram) is a planar diagram used to represent the fact that a particular word in the generators of a group given by a group presentation represents the identity element in that group.
History
The notion of a Van Kampen diagram was introduced by Egbert van Kampen in 1933. This paper appeared in the same issue of American Journal of Mathematics as another paper of Van Kampen, where he proved what is now known as the Seifert–Van Kampen theorem. The main result of the paper on Van Kampen diagrams, now known as the van Kampen lemma, can be deduced from the Seifert–Van Kampen theorem by applying the latter to the presentation complex of a group. However, Van Kampen did not notice this at the time, and the fact was only made explicit much later. Van Kampen diagrams remained an underutilized tool in group theory for about thirty years, until the advent of small cancellation theory in the 1960s, in which Van Kampen diagrams play a central role. Currently Van Kampen diagrams are a standard tool in geometric group theory. They are used, in particular, for the study of isoperimetric functions in groups, and their various generalizations such as isodiametric functions, filling length functions, and so on.
Formal definition
The definitions and notations below largely follow Lyndon and Schupp.
Let

G = ⟨ A ∣ R ⟩     (†)

be a group presentation where all r ∈ R are cyclically reduced words in the free group F(A). The alphabet A and the set of defining relations R are often assumed to be finite, which corresponds to a finite group presentation, but this assumption is not necessary for the general definition of a Van Kampen diagram. Let R∗ be the symmetrized closure of R, that is, let R∗ be obtained from R by adding all cyclic permutations of elements of R and of their inverses.
A Van Kampen diagram over the presentation (†) is a planar finite cell complex 𝒟, given with a specific embedding 𝒟 ⊆ ℝ², with the following additional data and satisfying the following additional properties:
The complex 𝒟 is connected and simply connected.
Each edge (one-cell) of 𝒟 is labelled by an arrow and a letter a ∈ A.
Some vertex (zero-cell) which belongs to the topological boundary of 𝒟 is specified as a base-vertex.
For each region (two-cell) of 𝒟, for every vertex on the boundary cycle of that region, and for each of the two choices of direction (clockwise or counter-clockwise), the label of the boundary cycle of the region read from that vertex and in that direction is a freely reduced word in F(A) that belongs to R∗.
Thus the 1-skeleton of 𝒟 is a finite connected planar graph Γ embedded in ℝ², and the two-cells of 𝒟 are precisely the bounded complementary regions for this graph.
By the choice of R∗, Condition 4 is equivalent to requiring that for each region of 𝒟 there is some boundary vertex of that region and some choice of direction (clockwise or counter-clockwise) such that the boundary label of the region read from that vertex and in that direction is freely reduced and belongs to R.
A Van Kampen diagram 𝒟 also has the boundary cycle, denoted ∂𝒟, which is an edge-path in the graph Γ corresponding to going around 𝒟 once in the clockwise direction along the boundary of the unbounded complementary region of Γ, starting and ending at the base-vertex of 𝒟. The label of that boundary cycle is a word w in the alphabet A ∪ A−1 (which is not necessarily freely reduced) that is called the boundary label of 𝒟.
Further terminology
A Van Kampen diagram 𝒟 is called a disk diagram if 𝒟 is a topological disk, that is, when every edge of 𝒟 is a boundary edge of some region of 𝒟 and when 𝒟 has no cut-vertices.
A Van Kampen diagram 𝒟 is called non-reduced if there exists a reduction pair in 𝒟, that is, a pair of distinct regions of 𝒟 such that their boundary cycles share a common edge and such that their boundary cycles, read starting from that edge, clockwise for one of the regions and counter-clockwise for the other, are equal as words in A ∪ A−1. If no such pair of regions exists, 𝒟 is called reduced.
The number of regions (two-cells) of 𝒟 is called the area of 𝒟, denoted Area(𝒟).
In general, a Van Kampen diagram has a "cactus-like" structure, consisting of one or more disk components joined by (possibly degenerate) arcs.
Example
The following figure shows an example of a Van Kampen diagram for the free abelian group of rank two, with presentation ⟨ a, b ∣ aba−1b−1 ⟩.
The boundary label of this diagram is a word w in a and b that represents the identity in this group.
The area of this diagram is equal to 8.
Van Kampen lemma
A key basic result in the theory is the so-called Van Kampen lemma which states the following:
Let be a Van Kampen diagram over the presentation (†) with boundary label w which is a word (not necessarily freely reduced) in the alphabet A ∪ A−1. Then w=1 in G.
Let w be a freely reduced word in the alphabet A ∪ A−1 such that w=1 in G. Then there exists a reduced Van Kampen diagram over the presentation (†) whose boundary label is freely reduced and is equal to w.
Sketch of the proof
First observe that for an element w ∈ F(A) we have w = 1 in G if and only if w belongs to the normal closure of R in F(A), that is, if and only if w can be represented as

w = u1 s1 u1−1 u2 s2 u2−1 ⋯ un sn un−1 in F(A),     (♠)

where n ≥ 0 and where si ∈ R∗ and ui ∈ F(A) for i = 1, ..., n.
Part 1 of Van Kampen's lemma is proved by induction on the area of 𝒟. The inductive step consists in "peeling" off one of the boundary regions of 𝒟 to get a Van Kampen diagram 𝒟′ with boundary label w′ and observing that in F(A) we have

w = u s u−1 w′,

where s ∈ R∗ is the boundary label of the region that was removed to get 𝒟′ from 𝒟, and u ∈ F(A).
The proof of part two of Van Kampen's lemma is more involved. First, it is easy to see that if w is freely reduced and w = 1 in G, then there exists some Van Kampen diagram 𝒟0 with boundary label w0 such that w = w0 in F(A) (after possibly freely reducing w0). Namely, consider a representation of w of the form (♠) above. Then take 𝒟0 to be a wedge of n "lollipops", with "stems" labeled by the words ui and with the "candies" (2-cells) labelled by the words si. Then the boundary label of 𝒟0 is a word w0 such that w = w0 in F(A). However, it is possible that the word w0 is not freely reduced. One then starts performing "folding" moves to get a sequence of Van Kampen diagrams whose boundary labels become more and more freely reduced, making sure that at each step the boundary label of each diagram in the sequence is equal to w in F(A). The sequence terminates in a finite number of steps with a Van Kampen diagram whose boundary label is freely reduced and thus equal to w as a word. The diagram may not be reduced. If that happens, we can remove the reduction pairs from this diagram by a simple surgery operation without affecting the boundary label. Eventually this produces a reduced Van Kampen diagram 𝒟 whose boundary label is freely reduced and equal to w.
Strengthened version of Van Kampen's lemma
Moreover, the above proof shows that the conclusion of Van Kampen's lemma can be strengthened as follows. Part 1 can be strengthened to say that if 𝒟 is a Van Kampen diagram of area n with boundary label w, then there exists a representation (♠) for w as a product in F(A) of exactly n conjugates of elements of R∗. Part 2 can be strengthened to say that if w is freely reduced and admits a representation (♠) as a product in F(A) of n conjugates of elements of R∗, then there exists a reduced Van Kampen diagram with boundary label w and of area at most n.
Dehn functions and isoperimetric functions
Area of a word representing the identity
Let w ∈ F(A) be such that w = 1 in G. Then the area of w, denoted Area(w), is defined as the minimum of the areas of all Van Kampen diagrams with boundary label w (Van Kampen's lemma says that at least one such diagram exists).
One can show that the area of w can be equivalently defined as the smallest n≥0 such that there exists a representation (♠) expressing w as a product in F(A) of n conjugates of the defining relators.
Isoperimetric functions and Dehn functions
A nonnegative monotone nondecreasing function f(n) is said to be an isoperimetric function for presentation (†) if for every freely reduced word w such that w = 1 in G we have

Area(w) ≤ f(|w|),

where |w| is the length of the word w.
Suppose now that the alphabet A in (†) is finite.
Then the Dehn function of (†) is defined as

Dehn(n) = max { Area(w) : w = 1 in G, |w| ≤ n, w freely reduced }.

It is easy to see that Dehn(n) is an isoperimetric function for (†) and, moreover, if f(n) is any other isoperimetric function for (†) then Dehn(n) ≤ f(n) for every n ≥ 0.
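As an illustration (standard, though not part of the original text), consider the presentation ⟨ a, b ∣ aba−1b−1 ⟩ of the free abelian group of rank two from the example above. The commutator word [am, bm] has length 4m and is filled by the m × m grid diagram, and one can show that no smaller filling exists:

```latex
\mathrm{Area}\big([a^m, b^m]\big) = m^2, \qquad \text{so} \qquad \mathrm{Dehn}(4m) \ge m^2,
```

so the Dehn function of this presentation grows quadratically.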
Let w ∈ F(A) be a freely reduced word such that w = 1 in G. A Van Kampen diagram 𝒟 with boundary label w is called minimal if Area(𝒟) = Area(w). Minimal Van Kampen diagrams are discrete analogues of minimal surfaces in Riemannian geometry.
Generalizations and other applications
There are several generalizations of Van Kampen diagrams in which, instead of being planar, connected, and simply connected (which means being homotopically equivalent to a disk), the diagram is drawn on, or is homotopically equivalent to, some other surface. It turns out that there is a close connection between the geometry of the surface and certain group-theoretical notions. A particularly important one is the notion of an annular Van Kampen diagram, which is homotopically equivalent to an annulus. Annular diagrams, also known as conjugacy diagrams, can be used to represent conjugacy in groups given by group presentations. Spherical Van Kampen diagrams are related to several versions of group-theoretic asphericity and to Whitehead's asphericity conjecture; Van Kampen diagrams on the torus are related to commuting elements; diagrams on the real projective plane are related to involutions in the group; and diagrams on the Klein bottle are related to elements that are conjugate to their own inverses.
Van Kampen diagrams are central objects in the small cancellation theory developed by Greendlinger, Lyndon and Schupp in the 1960s-1970s. Small cancellation theory deals with group presentations where the defining relations have "small overlaps" with each other. This condition is reflected in the geometry of reduced Van Kampen diagrams over small cancellation presentations, forcing certain kinds of non-positively curved or negatively curved behavior. This behavior yields useful information about algebraic and algorithmic properties of small cancellation groups, in particular regarding the word and the conjugacy problems. Small cancellation theory was one of the key precursors of geometric group theory, which emerged as a distinct mathematical area in the late 1980s, and it remains an important part of geometric group theory.
Van Kampen diagrams play a key role in the theory of word-hyperbolic groups introduced by Gromov in 1987. In particular, it turns out that a finitely presented group is word-hyperbolic if and only if it satisfies a linear isoperimetric inequality. Moreover, there is an isoperimetric gap in the possible spectrum of isoperimetric functions for finitely presented groups: for any finitely presented group either it is hyperbolic and satisfies a linear isoperimetric inequality or else the Dehn function is at least quadratic.
The study of isoperimetric functions for finitely presented groups has become an important general theme in geometric group theory where substantial progress has occurred. Much work has gone into constructing groups with "fractional" Dehn functions (that is, with Dehn functions being polynomials of non-integer degree). The work of Rips, Ol'shanskii, Birget and Sapir explored the connections between Dehn functions and time complexity functions of Turing machines and showed that an arbitrary "reasonable" time function can be realized (up to appropriate equivalence) as the Dehn function of some finitely presented group.
Various stratified and relativized versions of Van Kampen diagrams have been explored in the subject as well. In particular, a stratified version of small cancellation theory, developed by Ol'shanskii, resulted in constructions of various group-theoretic "monsters", such as the Tarski Monster, and in geometric solutions of the Burnside problem for periodic groups of large exponent. Relative versions of Van Kampen diagrams (with respect to a collection of subgroups) were used by Osin to develop an isoperimetric function approach to the theory of relatively hyperbolic groups.
See also
Geometric group theory
Presentation of a group
Seifert–Van Kampen theorem
Basic references
Alexander Yu. Ol'shanskii. Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991.
Roger C. Lyndon and Paul E. Schupp. Combinatorial Group Theory. Springer-Verlag, New York, 2001. "Classics in Mathematics" series, reprint of the 1977 edition. ; Ch. V. Small Cancellation Theory. pp. 235–294.
Footnotes
External links
Van Kampen diagrams from the files of David A. Jackson
Geometric group theory
Group theory
Combinatorics on words
Diagrams | Van Kampen diagram | Physics,Mathematics | 2,905 |
13,793,754 | https://en.wikipedia.org/wiki/Geophysical%20MASINT | Geophysical MASINT is a branch of Measurement and Signature Intelligence (MASINT) that involves phenomena transmitted through the earth (ground, water, atmosphere) and manmade structures including emitted or reflected sounds, pressure waves, vibrations, and magnetic field or ionosphere disturbances.
According to the United States Department of Defense, MASINT is technically derived intelligence (excluding traditional imagery IMINT and signals intelligence SIGINT) that, when collected, processed, and analyzed by dedicated MASINT systems, results in intelligence that detects, tracks, identifies, or describes the signatures (distinctive characteristics) of fixed or dynamic target sources. MASINT was recognized as a formal intelligence discipline in 1986. Another way to describe MASINT is as a "non-literal" discipline. It feeds on a target's unintended emissive by-products: the "trails", such as spectral, chemical, or radio-frequency emissions, that an object leaves behind. These trails form distinct signatures, which can be exploited as reliable discriminators to characterize specific events or disclose hidden targets.
As with many branches of MASINT, specific techniques may overlap with the six major conceptual disciplines of MASINT defined by the Center for MASINT Studies and Research, which divides MASINT into Electro-optical, Nuclear, Geophysical, Radar, Materials, and Radiofrequency disciplines.
Military requirements
Geophysical sensors have a long history in conventional military and commercial applications, from weather prediction for sailing, to fish finding for commercial fisheries, to nuclear test ban verification. New challenges, however, keep emerging.
For first-world military forces opposing other conventional militaries, there is an assumption that if a target can be located, it can be destroyed. As a result, concealment and deception have taken on new criticality. "Stealth" low-observability aircraft have received much attention, and new surface ship designs also feature observability reduction. Operating in a confusing littoral environment produces a great deal of concealing interference.
Of course, submariners feel they invented low observability, and others are simply learning from them. They know that going deep or at least ultraquiet, and hiding among natural features, makes them very hard to detect.
Two families of military applications, among many, represent new challenges against which geophysical MASINT can be tried. Also, see Unattended Ground Sensors.
Deeply buried structures
One of the easiest ways for nations to protect weapons of mass destruction, command posts, and other critical structures is to bury them deeply, perhaps by enlarging natural caves or disused mines. Deep burial is not only a means of protection against physical attack, as even without the use of nuclear weapons, there are deeply penetrating precision-guided bombs that can attack them. Deep burial, with appropriate concealment during construction, is a way to avoid the opponent's knowing the buried facility's position well enough to direct precision-guided weapons against it.
Finding deeply buried structures, therefore, is a critical military requirement. The usual first step in finding a deep structure is IMINT, especially using hyperspectral IMINT sensors to help eliminate concealment. "Hyperspectral images can help reveal information not obtainable through other forms of imagery intelligence such as the moisture content of soil. This data can also help distinguish camouflage netting from natural foliage." Still, a facility dug under a busy city would be extremely hard to find during construction. When the opponent knows that it is suspected that a deeply buried facility exists, there can be a variety of decoys and lures, such as buried heat sources to confuse infrared sensors, or simply digging holes and covering them, with nothing inside.
MASINT using acoustic, seismic, and magnetic sensors would appear to have promise, but these sensors must be fairly close to the target. Magnetic Anomaly Detection (MAD) is used in antisubmarine warfare, for final localization before an attack. The existence of the submarine is usually established through passive listening and refined with directional passive sensors and active sonar.
Once these sensors (as well as HUMINT and other sources) have failed, there is promise in surveying large areas for deeply concealed facilities using gravimetric sensors. Gravimetric sensing is a relatively new field, but military requirements are making it important as the enabling technology matures.
Naval operations in shallow water
Especially in today's "green water" and "brown water" naval applications, navies are looking at MASINT solutions to meet new challenges of operating in littoral areas of operations. This symposium found it useful to look at five technology areas, which are interesting to contrast to the generally accepted categories of MASINT: acoustics and geology and geodesy/sediments/transport, nonacoustical detection (biology/optics/chemistry), physical oceanography, coastal meteorology, and electromagnetic detection.
Although it is unlikely there will ever be another World War II-style opposed landing on a fortified beach, another aspect of the littoral is being able to react to opportunities for amphibious warfare. Detecting shallow-water and beach mines remain a challenge since mine warfare is a deadly "poor man's weapon."
While initial landings from an offshore force would be from helicopters or tiltrotor aircraft, with air cushion vehicles bringing ashore larger equipment, traditional landing craft, portable causeways, or other equipment will eventually be needed to bring heavy equipment across a beach. The shallow depth and natural underwater obstacles can block beach access to these crafts and equipment, as can shallow-water mines. Synthetic Aperture Radar (SAR), airborne laser detection and ranging (LIDAR) and the use of bioluminescence to detect wake trails around underwater obstacles all may help solve this challenge.
Moving onto and across the beach has its own challenges. Remotely operated vehicles may be able to map landing routes, and they, as well as LIDAR and multispectral imaging, may be able to detect shallow water. Once on the beach, the soil has to support heavy equipment. Techniques here include estimating soil type from multispectral imaging, or from an airdropped penetrometer that actually measures the loadbearing capacity of the surface.
Weather and sea intelligence MASINT
The science and art of weather prediction used the ideas of measurement and signatures to predict phenomena, long before there were any electronic sensors. Masters of sailing ships might have no more sophisticated instrument than a wetted finger raised to the wind, and the flapping of sails.
Weather information, in the normal course of military operations, has a major effect on tactics. High winds and low pressures can change artillery trajectories. High and low temperatures cause both people and equipment to require special protection. Aspects of weather, however, also can be measured and compared with signatures, to confirm or reject the findings of other sensors.
The state of art is to fuse meteorological, oceanographic, and acoustic data in a variety of display modes. Temperature, salinity and sound speed can be displayed horizontally, vertically, or in a three-dimensional perspective.
Predicting weather based on measurements and signatures
While early sailors had no sensors beyond their five senses, the modern meteorologist has a wide range of geophysical and electro-optical measuring devices, operating on platforms from the bottom of the sea to deep space. Predictions based on these measurements are based on signatures of past weather events, a deep understanding of theory, and computational models.
Weather predictions can give significant negative intelligence when the signature of some combat systems is such that they can operate only under certain weather conditions. The weather has long been an extremely critical part of modern military operations, as when the decision to land at Normandy on June 6, rather than June 5, 1944, depended on Dwight D. Eisenhower's trust in his staff weather advisor, Group Captain James Martin Stagg. It is rarely understood that something as fast as a ballistic missile reentry vehicle, or as "smart" as a precision guided munition, can still be affected by winds in the target area.
Part of the unattended ground sensor family, the Remote Miniature Weather Station (RMWS) from System Innovations is an air-droppable, lightweight, expendable, and modular system with two components: a meteorological (MET) sensor and a ceilometer (cloud ceiling height) sensor with limited MET capability. The basic MET system is surface-based and measures wind speed and direction, horizontal visibility, surface atmospheric pressure, air temperature, and relative humidity. The ceilometer sensor determines cloud height and discrete cloud layers. The system provides near-real-time data and is capable of 24-hour operation for 60 days. The RMWS can also go in with US Air Force Special Operations combat weathermen.
The man-portable version, brought in by combat weathermen, has an additional function as a remote miniature ceilometer. Designed to measure multiple-layer cloud ceiling heights and send that data via a satellite communications link to an operator display, the system uses a 4-megawatt neodymium-doped YAG (Nd:YAG) laser that is not eye-safe. According to one weatherman, "We have to watch that one," he said. "Leaving it out there basically we're worried about civilian populace going out there and playing with it—firing the laser and there goes somebody's eye. There are two different units [to RMWS]. One has the laser and one doesn't. The basic difference is the one with the laser is going to give you cloud height."
Hydrographic sensors
Hydrographic MASINT is subtly different from weather MASINT, in that it considers factors such as water temperature and salinity, biological activity, and other conditions that have a major effect on sensors and weapons used in shallow water. The performance of ASW equipment, especially acoustic equipment, depends on the season at the specific coastal site. Water column conditions, such as temperature, salinity, and turbidity, are more variable in shallow than in deep water. Water depth influences bottom-bounce conditions, as does the bottom material. Seasonal water column conditions (particularly summer versus winter) are inherently more variable in shallow water than in deep water.
While much attention is given to shallow waters of the littoral, other areas have unique hydrographic characteristics.
Regional areas with freshwater eddies
Open ocean salinity fronts
Near ice floes
Under ice
A submarine tactical development activity observed, "Freshwater eddies exist in many areas of the world. As we have experienced recently in the Gulf of Mexico using the Tactical Oceanographic Monitoring System (TOMS), there exist very distinct surface ducts that causes the Submarine Fleet Mission Program Library (SFMPL) sonar prediction to be unreliable. Accurate bathythermic information is paramount and a precursor for accurate sonar predictions.”
Temperature and salinity
Critical to the prediction of sound propagation, needed by active and passive acoustic systems operating in water, is knowledge of the temperature and salinity at specific depths. Antisubmarine aircraft, ships, and submarines can release expendable sensors that measure the water temperature at various depths. Water temperature is critically important to acoustic detection, because changes in temperature at thermoclines can act as a "barrier" or "layer" to acoustic propagation. To hunt a submarine whose crew is aware of the water temperature structure, the hunter must drop acoustic sensors below the thermocline.
Water conductivity is used as a surrogate marker for salinity. The current and most recently developed software, however, does not give information on suspended material in the water or bottom characteristics, both considered critical in shallow-water operations.
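As an illustration of how these quantities combine, the sketch below evaluates Medwin's widely used 1975 approximation for the speed of sound in seawater. The formula is standard; the example temperatures, salinity, and depths are invented.

```python
def sound_speed(T, S, z):
    """Medwin (1975) approximation. T: temperature (deg C), S: salinity
    (parts per thousand), z: depth (m). Returns sound speed in m/s; valid
    roughly for 0 <= T <= 35, 0 <= S <= 45, 0 <= z <= 1000."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# A warm surface layer over colder water produces the sound-speed step that
# makes a thermocline act as an acoustic "layer".
print(sound_speed(20.0, 35.0, 10.0))   # warm, near the surface
print(sound_speed(8.0, 35.0, 150.0))   # cold, below the thermocline
```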
The US Navy does this by dropping expendable probes that transmit to a shipboard recorder of 1978–1980 vintage: the AN/BQH-7 for submarines and the AN/BQH-71 for surface ships. While the redesign of the late seventies did introduce digital logic, the devices kept hard-to-maintain analog recorders, and maintainability became critical by the mid-1990s. A project was then begun to extend the system with commercial off-the-shelf (COTS) components, resulting in the AN/BQH-7/7A EC-3.
Variables in selecting the appropriate probe include:
Maximum depth sounded
Speed of launching vessel
Resolution of the vertical distance between data points (ft)
Depth accuracy
Biomass
Large schools of fish contain enough entrapped air to conceal the sea floor, or manmade underwater vehicles and structures. Fishfinders, developed for commercial and recreational fishing, are specialized sonars that can identify acoustic reflections between the surface and the bottom. Variations on commercial equipment are apt to be needed, especially in littoral areas rich in marine life.
Sea bottom measurement
A variety of sensors can be used to characterise the sea bottom as, for example, mud, sand, or gravel. Active acoustic sensors are the most obvious, but there is also potential information from gravimetric sensors, and from electro-optical and radar sensors making inferences from the water surface.
Relatively simple sonars such as echo sounders can be promoted to seafloor classification systems via add-on modules, converting echo parameters into sediment type. Different algorithms exist, but they are all based on changes in the energy or shape of the reflected sounder pings.
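A toy sketch of such an add-on module follows. It reduces an echo envelope to a roughness index (the tail of the first bottom return) and a hardness index (the energy of the second, surface-reflected return) and applies thresholds, in the spirit of the E1/E2 scheme used by commercial classifiers; the window lengths and thresholds here are invented for illustration.

```python
import numpy as np

def classify_bottom(envelope, first, second, rate):
    """envelope: rectified echo amplitude (numpy array); first, second:
    sample indices of the first and second bottom returns; rate: sample
    rate in Hz. Returns a coarse sediment label."""
    tail = slice(first + int(0.002 * rate), first + int(0.010 * rate))
    e1 = float(np.sum(envelope[tail] ** 2))                        # roughness
    e2 = float(np.sum(envelope[second:second + int(0.010 * rate)] ** 2))
    if e2 > 0.5 and e1 > 0.2:
        return "gravel"   # hard and rough
    if e2 > 0.5:
        return "sand"     # hard but smooth
    return "mud"          # soft bottom
```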
Side-scan sonars can be used to derive maps of the topography of an area by moving the sonar across it just above the bottom. Multibeam hull-mounted sonars are not as precise as a sensor near the bottom, but both can give reasonable three-dimensional visualization.
Another approach comes from greater signal processing of existing military sensors. The US Naval Research Laboratory demonstrated both seafloor characterization, as well as subsurface characteristics of the seafloor. Sensors used, in different demonstrations, included normal incidence beams from the AM/UQN-4 surface ship depth finder, and AN/BQN-17 submarine fathometer; backscatter from the Kongsberg EM-121 commercial multibeam sonar; AN/UQN-4 fathometers on mine countermeasures (MCM) ships, and the AN/AQS-20 mine-hunting system. These produced the "Bottom and Subsurface Characterization" graphic.
Weather effects on chemical, biological, and radiological weapon propagation
One of the improvements in the Fuchs 2 reconnaissance vehicle is adding onboard weather instrumentations, including data such as wind direction and speed; air and ground temperature; barometric pressure and humidity.
Acoustic MASINT
This includes the collection of passive or active emitted or reflected sounds, pressure waves, or vibrations in the atmosphere (ACOUSTINT), in the water (ACINT), or conducted through the ground. As far back as the Middle Ages, military engineers would listen to the ground for the telltale sounds of digging under fortifications.
In modern times, acoustic sensors were first used in the air, as with artillery ranging in World War I. Passive hydrophones were used by the World War I Allies against German submarines; the UC-3 was sunk with the aid of a hydrophone on 23 April 1916. Since submerged submarines cannot use radar, passive and active acoustic systems are their primary sensors. Especially for the passive sensors, the submarine acoustic sensor operators must have extensive libraries of acoustic signatures, to identify sources of sound.
In shallow water, there are sufficient challenges to conventional acoustic sensors that additional MASINT sensors may be required. Two major confounding factors are:
Boundary interactions. The effects of the seafloor and the sea surface on acoustic systems in shallow water are highly complex, making range predictions difficult. Multipath degradation affects the overall figure of merit (a standard form of which is sketched after this list) and active classification. As a result, false target identifications are frequent.
Practical limitations. Another key issue is the range dependence of shallow water propagation and reverberation. For example, shallow water limits the depth of towed sound detection arrays, thus increasing the possibility of the system's detecting its own noise. In addition, closer ship spacing increases the potential for mutual interference effects. It is believed that non-acoustic sensors, of magnetic, optical, bioluminescent, chemical, and hydrodynamic disturbances will be necessary for shallow-water naval operations.
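The figure of merit referred to in this list has a standard definition in the passive sonar equation: it is the maximum one-way transmission loss at which detection is still expected. In conventional decibel notation (a textbook relation, not given in the source):

```latex
\mathrm{FOM} = \mathrm{SL} - (\mathrm{NL} - \mathrm{DI}) - \mathrm{DT}
```

where SL is the target's source level, NL the ambient noise level, DI the receiving array's directivity index, and DT the detection threshold. Shallow-water multipath effectively raises transmission loss and so erodes the margin that the figure of merit describes.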
Counterbattery and countersniper location and ranging
While now primarily of historical interest, one of the first applications of acoustic and optical MASINT was locating enemy artillery during World War I by the sound of firing and the muzzle flash, respectively. Effective sound ranging was pioneered by the British Army under the leadership of the Nobel laureate William Bragg. Flash spotting developed in parallel in the British, French, and German armies. The combination of sound ranging (i.e., acoustic MASINT) and flash ranging (i.e., before modern optoelectronics) gave information unprecedented for the time, in both accuracy and timeliness. Enemy gun positions were located within 25 to 100 yards, with the information coming in three minutes or less.
Initial WWI counterbattery acoustic systems
In the "Sound Ranging" graphic, the manned Listening (or Advanced) Post, is sited a few 'sound seconds (or about 2000 yards) forward of the line of the unattended microphones, it sends an electrical signal to the recording station to switch on the recording apparatus. The positions of the microphones are precisely known. The differences in sound time of arrival, taken from the recordings, were then used to plot the source of the sound by one of several techniques. See http://nigelef.tripod.com/p_artyint-cb.htm#SoundRanging
Where sound ranging is a time-of-arrival technique not dissimilar to that of modern multistatic sensors, flash spotting used optical instruments to take bearings on the flash from accurately surveyed observation posts. The location of the gun was determined by plotting the bearings reported to the same gun flashes. See http://nigelef.tripod.com/p_artyint-cb.htm#FieldSurveyCoy Flash ranging, today, would be called electro-optical MASINT.
Artillery sound and flash ranging remained in use through World War II and in its latest forms until the present day, although flash spotting generally ceased in the 1950s due to the widespread adoption of flashless propellants and the increasing range of artillery. Mobile counterbattery radars able to detect guns, itself a MASINT radar sensor, became available in the late 1970s, although counter-mortar radars appeared in World War II. These techniques paralleled radio direction finding in SIGINT that started in World War I, using graphical bearing plotting and now, with the precision time synchronization from GPS, is often time-of-arrival.
Modern acoustic artillery locators
Artillery positions now are located primarily with Unmanned Air Systems and IMINT or counterartillery radar, such as the widely used Swedish ArtHuR. SIGINT also may give clues to positions, both with COMINT for firing orders, and ELINT for such things as weather radar. Still, there is renewed interest in both acoustic and electro-optical systems to complement counter-artillery radar.
Acoustic sensors have come a long way since World War I. Typically, the acoustic sensor is part of a combined system, in which it cues radar or electro-optical sensors of greater precision, but a narrower field of view.
HALO
The UK's hostile artillery locating system (HALO) has been in service with the British Army since the 1990s. HALO is not as precise as radar, but especially complements directional radars. It passively detects artillery cannons, mortars and tank guns, with 360-degree coverage and can monitor over 2,000 square kilometers. HALO has worked in urban areas, the mountains of the Balkans, and the deserts of Iraq.
The system consists of three or more unmanned sensor positions, each with four microphones and local processing that deduce the bearing to a gun, mortar, or similar source. These bearings are automatically communicated to a central processor that combines them to triangulate the source of the sound. It can compute location data on up to 8 rounds per second and display the data to the system operator. HALO may be used in conjunction with COBRA and ARTHUR counter-battery radars, which are not omnidirectional, to focus them on the correct sector.
UTAMS
Another acoustic system is the Unattended Transient Acoustic MASINT Sensor (UTAMS), developed by the U.S. Army Research Laboratory (ARL), which detects mortar and rocket launches and impacts. UTAMS remains the primary cueing sensor for the Persistent Threat Detection System (PTDS). ARL fitted aerostats with UTAMS, developing the system in a little over two months. After receiving a direct request from Iraq, ARL merged components from several programs to enable the rapid fielding of this capability.
UTAMS has three to five acoustic arrays, each with four microphones, a processor, a radio link, a power source, and a laptop control computer. UTAMS was first tested in November 2004 at a Special Forces operating base (SFOB) in Iraq, where it was used in conjunction with AN/TPQ-36 and AN/TPQ-37 counter-artillery radars. While UTAMS was intended principally for detecting indirect artillery fire, Special Forces and their fire support officer learned it could also pinpoint improvised explosive device (IED) explosions and small-arms/rocket-propelled grenade (RPG) fire. It detected points of origin (POO) up to 10 kilometers from the sensor.
Analyzing the UTAMS and radar logs revealed several patterns. The opposing force was firing 60 mm mortars during observed dining hours, presumably since that gave the largest groupings of personnel and the best chance of producing heavy casualties. That would have been obvious from the impact history alone, but these MASINT sensors established a pattern of the enemy firing locations.
This allowed the US forces to move mortars into range of the firing positions, give coordinates to cannons when the mortars were otherwise committed, and use attack helicopters as a backup to both. The opponents changed to night fires, which, again, were countered with mortar, artillery, and helicopter fires. They then moved into an urban area where US artillery was not allowed to fire, but a combination of PSYOPS leaflet drops and deliberate near misses convinced the locals not to give sanctuary to the mortar crews.
Originally for a Marine requirement in Afghanistan, UTAMS was combined with electro-optical MASINT to produce the Rocket Launch Spotter (RLS) system useful against both rockets and mortars.
In the Rocket Launch Spotter (RLS) application, each array consists of four microphones and processing equipment. By analyzing the time delays between an acoustic wavefront's arrival at each microphone in the array, UTAMS provides an azimuth of origin. The azimuth from each tower is reported to the UTAMS processor at the control station, and a POO is triangulated and displayed, as sketched below. The UTAMS subsystem can also detect and locate the point of impact (POI), but, due to the difference between the speeds of sound and light, it may take UTAMS as long as 30 seconds to determine the POO for a rocket launch 13 km away. In this application, the electro-optical component of RLS will detect the rocket POO earlier, while UTAMS may do better against mortars.
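A minimal sketch of the two-tower triangulation step is shown below. Positions and azimuths are in a local flat grid with azimuths measured clockwise from grid north; the function and its names are illustrative, not taken from the UTAMS software.

```python
import math

def triangulate(p1, az1, p2, az2):
    """p1, p2: (x, y) tower positions in metres; az1, az2: azimuths in
    degrees, clockwise from grid north. Returns the estimated source (x, y)."""
    # Unit direction vectors: east component = sin(az), north = cos(az).
    d1 = (math.sin(math.radians(az1)), math.cos(math.radians(az1)))
    d2 = (math.sin(math.radians(az2)), math.cos(math.radians(az2)))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]   # zero when the bearings are parallel
    if abs(det) < 1e-9:
        raise ValueError("bearings nearly parallel; no stable fix")
    t = (d2[0] * dy - d2[1] * dx) / det   # distance along the first bearing
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Example: towers at (0, 0) and (100, 0) reporting 45 and 315 degrees
# intersect at (50.0, 50.0).
print(triangulate((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))
```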
Passive sea-based acoustic sensors (hydrophones)
Modern hydrophones convert sound to electrical energy, which then can undergo additional signal processing, or can be transmitted immediately to a receiving station. They may be directional or omnidirectional.
Navies use a variety of acoustic systems, especially passive, in antisubmarine warfare, both tactical and strategic. For tactical use, passive hydrophones, both on ships and airdropped sonobuoys, are used extensively in antisubmarine warfare. They can detect targets far further away than with active sonar, but generally will not have the precision location of active sonar, approximating it with a technique called Target Motion Analysis (TMA). Passive sonar has the advantage of not revealing the position of the sensor.
The Integrated Undersea Surveillance System (IUSS) consists of multiple subsystems: SOSUS, the Fixed Distributed System (FDS), and the Advanced Deployable System (ADS or SURTASS). Reduced emphasis on Cold War blue-water operations lowered the priority of SOSUS, with the more flexible "tuna boat"-style SURTASS sensing vessels becoming the primary blue-water long-range sensors.
SURTASS used longer, more sensitive towed passive acoustic arrays than could be deployed from maneuvering vessels, such as submarines and destroyers.
SURTASS is now being complemented by Low-Frequency Active (LFA) sonar; see the sonar section.
Air-dropped passive acoustic sensors
Passive sonobuoys, such as the AN/SSQ-53F, can be directional or omnidirectional and can be set to sink to a specific depth. These would be dropped from helicopters and maritime patrol aircraft such as the P-3.
Fixed underwater passive acoustic sensors
The US installed massive Fixed Surveillance System (FSS, also known as SOSUS) hydrophone arrays on the ocean floor, to track Soviet and other submarines.
Surface ship passive acoustic sensors
Purely from the standpoint of detection, towed hydrophone arrays offer a long baseline and exceptional measurement capability. Towed arrays, however, are not always feasible: when deployed, their performance can suffer, or they can be outright damaged, at high speeds or in radical turns.
Steerable sonar arrays on the hull or bow usually have a passive as well as an active mode, as do variable-depth sonars.
Surface ships may have warning receivers to detect hostile sonar.
Submarine passive acoustic sensors
Modern submarines have multiple passive hydrophone systems, such as a steerable array in a bow dome, fixed sensors along the sides of the submarines, and towed arrays. They also have specialized acoustic receivers, analogous to radar warning receivers, to alert the crew to the use of active sonar against their submarine.
US submarines made extensive clandestine patrols to measure the signatures of Soviet submarines and surface vessels. This acoustic MASINT mission included both routine patrols of attack submarines, and submarines sent to capture the signature of a specific vessel. US antisubmarine technicians on air, surface, and subsurface platforms had extensive libraries of vessel acoustic signatures.
Passive acoustic sensors can detect aircraft flying low over the sea.
Land-based passive acoustic sensors (geophones)
Vietnam-era acoustic MASINT sensors included "Acoubuoy (36 inches long, 26 pounds) floated down by camouflaged parachute and caught in the trees, where it hung to listen. The Spikebuoy (66 inches long, 40 pounds) planted itself in the ground like a lawn dart. Only the antenna, which looked like the stalks of weeds, was left showing above ground."
This was part of Operation Igloo White.
Part of the AN/GSQ-187 Improved Remote Battlefield Sensor System (I-REMBASS) is a passive acoustic sensor, which, with other MASINT sensors, detects vehicles and personnel on a battlefield. Passive acoustic sensors provide additional measurements that can be compared with signatures, and used to complement other sensors. I-REMBASS control will integrate, in approximately 2008, with the Prophet SIGINT/EW ground system.
For example, a ground search radar may not be able to differentiate between a tank and a truck moving at the same speed. Adding acoustic information, however, may quickly distinguish between them.
Active acoustic sensors and supporting measurements
Combatant vessels, of course, made extensive use of active sonar, which is yet another acoustic MASINT sensor. Besides the obvious application in antisubmarine warfare, specialized active acoustic systems have roles in:
Mapping the seafloor for navigation and collision avoidance. These include basic depth gauges, but quickly get into devices that do 3-dimensional underwater mapping
Determining seafloor characteristics, for applications varying from understanding its sound-reflecting properties, to predicting the type of marine life that may be found there, to knowing when a surface is appropriate for anchoring or for using various equipment that will contact the seafloor
Various synthetic aperture sonars have been built in the laboratory and some have entered use in mine-hunting and search systems. An explanation of their operation is given in synthetic aperture sonar.
Water surface, fish interference and bottom characterization
The water surface and bottom are reflecting and scattering boundaries. Large schools of fish, with air in their swim bladders, can also have a significant effect on acoustic propagation.
For many purposes, but not all naval tactical applications, the sea-air surface can be thought of as a perfect reflector. "The effects of the seafloor and the sea surface on acoustic systems in shallow water are highly complex, making range predictions difficult. Multi-path degradation affects the overall figure of merit and active classification. As a result, false target identifications are frequent."
The acoustic impedance mismatch between water and the bottom is generally much less than at the surface and is more complex. It depends on the bottom material types and the depth of the layers. Theories have been developed for predicting the sound propagation in the bottom in this case, for example by Biot and by Buckingham.
Water surface
For high-frequency sonars (above about 1 kHz) or when the sea is rough, some of the incident sound is scattered, and this is taken into account by assigning a reflection coefficient whose magnitude is less than one.
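The normal-incidence (Rayleigh) reflection coefficient quantifies this, with Z = ρc the acoustic impedance on each side of the boundary. The worked numbers below, for sound in water striking the air interface, are illustrative:

```latex
R = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad
Z_{\text{air}} \approx 4.1 \times 10^{2}\ \text{rayl}, \quad
Z_{\text{water}} \approx 1.5 \times 10^{6}\ \text{rayl}, \quad
R \approx -0.999
```

which is why a calm sea surface behaves almost as a perfect (phase-inverting) reflector, while roughness and bubbles pull the effective magnitude of R below one.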
Rather than measuring surface effects directly from a ship, radar MASINT, in aircraft or satellites, may give better measurements. These measurements would then be transmitted to the vessel's acoustic signal processor.
Under ice
A surface covered with ice is, of course, tremendously different from even storm-driven open water. Purely from the standpoints of collision avoidance and acoustic propagation, a submarine needs to know how close it is to the bottom of the ice. Less obvious is the need to know the three-dimensional structure of the ice, because submarines may need to break through it to launch missiles, raise electronic masts, or surface the boat. Three-dimensional ice information can also tell the submarine captain whether antisubmarine warfare aircraft can detect or attack the boat.
The state of the art is providing the submarine with a three-dimensional visualization of the ice above: the lowest part (ice keel) and the ice canopy. While sound will propagate differently in ice than in liquid water, the ice still needs to be considered as a volume, to understand the nature of reverberations within it.
Bottom
A typical basic depth-measuring device is the US AN/UQN-4A. As noted above, the water surface and bottom are both reflecting and scattering boundaries, and the complex interactions of surface activity, seafloor characteristics, water temperature, and salinity make range predictions difficult and false target identifications frequent.
This device, however, does not give information on the characteristics of the bottom. In many respects, the commercial fishing industry and marine scientists already have equipment of the kind perceived as needed for shallow-water operations.
Biologic effects on sonar reflection
A further complication is the presence of wind-generated bubbles or fish close to the sea surface. The bubbles can form plumes that absorb some of the incident and scattered sound, and scatter some of the sound themselves.
This problem is distinct from biologic interference caused by acoustic energy generated by marine life, such as the squeaks of porpoises and other cetaceans, and measured by acoustic receivers. The signatures of biological sound generators need to be differentiated from more deadly denizens of the depths. Classifying biologics is a very good example of an acoustic MASINT process.
Surface combatants
Modern surface combatants with an ASW mission will have a variety of active systems, with a hull- or bow-mounted array, protected from water by a rubber dome; a "variable-depth" dipping sonar on a cable, and, especially on smaller vessels, a fixed acoustic generator and receiver.
Some, but not all, vessels carry passive towed arrays or combined active-passive arrays. These depend on target noise, which may be very weak in the littoral environment, where ultraquiet submarines operate amid much ambient noise. Vessels that have deployed towed arrays cannot make radical course maneuvers. Especially when active capabilities are included, the array can be treated as a bistatic or multistatic sensor, and act as a synthetic aperture sonar (SAS).
Ships that cooperate with aircraft will need a data link to sonobuoys and a sonobuoy signal processor, unless the aircraft has extensive processing capability and can send information that can be accepted directly by tactical computers and displays.
Signal processors not only analyze the signals but constantly track propagation conditions. The former is usually considered part of a particular sonar, but the US Navy has a separate propagation predictor called the AN/UYQ-25B(V) Sonar in situ Mode Assessment System (SIMAS).
Echo Tracker Classifiers (ETC) are adjuncts, with a clear MASINT flavor, to existing surface ship sonars.
ETC is an application of synthetic aperture sonar (SAS). SAS is already used for minehunting but could help existing surface combatants, as well as future vessels and unmanned surface vehicles (USV), detect threats, such as very silent air-independent propulsion non-nuclear submarines, outside torpedo range. Torpedo range, especially in shallow water, is considered anything greater than 10 nmi.
Conventional active sonar may be more effective than towed arrays, but the small size of modern littoral submarines makes them difficult targets. Highly variable bottom paths, biologics, and other factors complicate sonar detection. If the target is slow-moving or waiting on the bottom, it presents little or no Doppler shift, which current sonars use to recognize threats.
Continual active tracking measurement of all acoustically detected objects, with recognition of signatures as deviations from ambient noise, still gives a high false alarm rate (FAR) with conventional sonar. SAS processing, however, improves the resolution, especially of azimuth measurements, by assembling the data from multiple pings into a synthetic beam that gives the effect of a far larger receiver.
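A minimal Python sketch of the synthetic-beam idea: coherently summing returns from N ping positions produces the classic array factor, whose main lobe is far narrower than that of a single receive element. The frequency, ping count, and spacing below are illustrative assumptions, not parameters of any fielded sonar.

```python
import numpy as np

c = 1500.0          # nominal speed of sound in seawater, m/s
f = 10e3            # assumed ping frequency, Hz
lam = c / f         # wavelength, 0.15 m
N = 32              # number of pings combined into the synthetic aperture
d = lam / 2         # assumed along-track spacing between ping positions

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)   # azimuth angles, rad
psi = 2 * np.pi * d / lam * np.sin(theta)          # phase step between pings

# Coherent sum over the N ping positions (normalized array factor).
array_factor = np.abs(
    np.sum(np.exp(1j * psi[None, :] * np.arange(N)[:, None]), axis=0)) / N

# Half-power (-3 dB) width of the synthetic main lobe, in degrees.
main_lobe = theta[array_factor >= 1 / np.sqrt(2)]
print(f"synthetic beamwidth ~ {np.degrees(main_lobe.max() - main_lobe.min()):.1f} deg")
# A single omnidirectional element has essentially no azimuth resolution;
# the N-ping sum behaves like a receiver roughly N*d long (~3 deg here).
```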
MASINT-oriented SAS measures shape characteristics and eliminates acoustically detected objects that do not conform to the signature of threats. Shape recognition is only one part of the signature, which also includes course and Doppler when available.
Air-dropped active sonobuoys
Active sonobuoys, containing a sonar transmitter and receiver, can be dropped from fixed-wing maritime patrol aircraft (e.g., P-3, Nimrod, Chinese Y-8, Russian and Indian Bear ASW variants), antisubmarine helicopters, and carrier-based antisubmarine aircraft (e.g., S-3). While there have been some efforts to use other aircraft simply as carriers of sonobuoys, the general assumption is that the sonobuoy-carrying aircraft can issue commands to the sonobuoys and receive, and to some extent process, their signals.
The Directional Hydrophone Command Activated Sonobuoy System (DICASS) both generates sound and listens for it. A typical modern active sonobuoy, such as the AN/SSQ 963D, generates multiple acoustic frequencies. Other active sonobuoys, such as the AN/SSQ 110B, generate small explosions as acoustic energy sources.
Airborne dipping sonar
Antisubmarine helicopters can carry a "dipping" sonar head at the end of a cable, which the helicopter can raise from or lower into the water. The helicopter would typically dip the sonar when trying to localize a target submarine, usually in cooperation with other ASW platforms or with sonobuoys. Typically, the helicopter would raise its head after dropping an ASW weapon, to avoid damaging the sensitive receiver. Not all variants of the same basic helicopter, even assigned to ASW, carry dipping sonar; some may trade the weight of the sonar for more sonobuoy or weapon capacity.
The EH101 helicopter, used by a number of nations, has a variety of dipping sonars. The (British) Royal Navy version has a Ferranti/Thomson-CSF (now Thales) sonar, while the Italian version uses the HELRAS. Russian Ka-25 helicopters carry dipping sonar, as do US LAMPS helicopters; the US MH-60R carries the Thales AQS-22 dipping sonar, while the older SH-60F carries the AQS-13F dipping sonar.
Surveillance vessel low-frequency active
Newer Low-Frequency Active (LFA) systems are controversial, as their very high sound pressures may be hazardous to whales and other marine life.
A decision has been made to employ LFA on SURTASS vessels, after an environmental impact statement indicated that, if LFA is used with decreased power levels in certain high-risk areas for marine life, it would be safe when employed from a moving ship. The ship's motion, and the variability of the LFA signal, would limit the exposure of individual sea animals. LFA operates in the low-frequency (LF) acoustic band of 100–500 Hz. It has an active component, the LFA proper, and the passive SURTASS hydrophone array. "The active component of the system, LFA, is a set of 18 LF acoustic transmitting source elements (called projectors) suspended by cable from underneath an oceanographic surveillance vessel, such as the Research Vessel (R/V) Cory Chouest, USNS Impeccable (T-AGOS 23), and the Victorious class (T-AGOS 19 class)."
"The source level of an individual projector is 215 dB. These projectors produce the active sonar signal or “ping.” A "ping," or transmission, can last between 6 and 100 seconds. The time between transmissions is typically 6 to 15 minutes with an average transmission of 60 seconds. Average duty cycle (ratio of sound “on” time to total time) is less than 20 percent. The typical duty cycle, based on historical LFA operational parameters (2003 to 2007), is normally 7.5 to 10 percent."
This signal "...is not a continuous tone, but rather a transmission of waveforms that vary in frequency and duration. The duration of each continuous frequency sound transmission is normally 10 seconds or less. The signals are loud at the source, but levels diminish rapidly over the first kilometer."
Submarine active acoustic sensors
The primary tactical active sonar of a submarine is usually in the bow, covered with a protective dome. For blue-water operations, active systems such as the AN/SQS-26 and AN/SQS-53 were developed, but these were generally designed for convergence-zone and single bottom-bounce environments.
Submarines that operate in the Arctic also have specialized sonar for under-ice operation; think of an upside-down fathometer.
Submarines also may have minehunting sonar. Using measurements to differentiate between biologic signatures and signatures of objects that will permanently sink the submarine is as critical a MASINT application as could be imagined.
Active acoustic sensors for minehunting
Sonars optimized to detect objects of the size and shapes of mines can be carried by submarines, remotely operated vehicles, surface vessels (often on a boom or cable) and specialized helicopters.
The classic emphasis on minesweeping, and detonating the mine released from its tether using gunfire, has been replaced with the AN/SLQ-48(V)2 Mine Neutralization System (MNS), a remotely operated mine neutralization vehicle. This works well for rendering safe mines in deep water, by placing explosive charges on the mine and/or its tether. The AN/SLQ-48 is not well suited to the neutralization of shallow-water mines. The vehicle tends to be underpowered and may leave on the bottom both an object that still looks like a mine to any subsequent sonar search and an explosive charge subject to later detonation under the proper impact conditions.
The ROV carries mine-hunting sonar and an (electro-optical) television camera, and the ship carries the AN/SQQ-32 minehunting sonar.
Acoustic sensing of large explosions
An assortment of time-synchronized sensors can characterize conventional or nuclear explosions. One pilot study is the Active Radio Interferometer for Explosion Surveillance (ARIES), a technique that implements an operational system for monitoring ionospheric pressure waves resulting from surface or atmospheric nuclear or chemical explosions. Explosions produce pressure waves that can be detected by measuring phase variations between signals generated by ground stations along two different paths to a satellite. This is a much-modernized version, on a larger scale, of World War I sound ranging.
As with many sensors, ARIES can be used for additional purposes. Collaborations are being pursued with the Space Forecast Center to use ARIES data for total electron content measurements on a global scale, with the meteorology/global environment community to monitor global climate change (via tropospheric water vapor content measurements), and with the general ionospheric physics community to study travelling ionospheric disturbances.
Sensors relatively close to a nuclear event, or a high-explosive test simulating a nuclear event, can detect, using acoustic methods, the pressure produced by the blast. These include infrasound microbarographs (acoustic pressure sensors) that detect very low-frequency sound waves in the atmosphere produced by natural and man-made events.
Closely related to the microbarographs, but detecting pressure waves in water, are hydro-acoustic sensors, both underwater microphones and specialized seismic sensors that detect the motion of islands.
Seismic MASINT
US Army Field Manual 2-0 defines seismic intelligence as "The passive collection and measurement of seismic waves or vibrations in the earth surface." One strategic application of seismic intelligence makes use of the science of seismology to locate and characterize nuclear testing, especially underground testing. Seismic sensors also can characterize large conventional explosions that are used in testing the high-explosive components of nuclear weapons. Seismic intelligence also can help locate such things as large underground construction projects.
Since many areas of the world have a great deal of natural seismic activity, seismic MASINT is one of the emphatic arguments that there must be a long-term commitment to measuring, even during peacetime, so that the signatures of natural behavior are known before it is necessary to search for variations from those signatures.
Strategic seismic MASINT
For nuclear test detection, seismic intelligence is limited by the "threshold principle" coined in 1960 by George Kistiakowsky, which recognized that while detection technology would continue to improve, there would be a threshold below which small explosions could not be detected.
Tactical seismic MASINT
The most common sensor in the Vietnam-era "McNamara Line" of remote sensors was the ADSID (Air-Delivered Seismic Intrusion Detector), which sensed earth motion to detect people and vehicles. It resembled the Spikebuoy, except it was smaller and lighter (31 inches long, 25 pounds).
The challenge for the seismic sensors (and for the analysts) was not so much in detecting the people and the trucks as it was in separating out the false alarms generated by wind, thunder, rain, earth tremors, and animals—especially frogs.
Vibration MASINT
This subdiscipline is also called piezoelectric MASINT after the sensor most often used to sense vibration, but vibration detectors need not be piezoelectric; other possible detectors include moving-coil or surface-acoustic-wave devices. Note that some discussions treat seismic and vibration sensors as a subset of acoustic MASINT. Vibration, as a form of geophysical energy to be sensed, has similarities to acoustic and seismic MASINT, but also has distinct differences that make it useful, especially in unattended ground sensors (UGS). In the UGS application, one advantage of a piezoelectric sensor is that it generates electricity when triggered, rather than consuming electricity, an important consideration for remote sensors whose lifetime may be determined by their battery capacity.
While acoustic signals at sea travel through water, on land they can be assumed to come through the air. Vibration, however, is conducted through a solid medium on land. It has a higher frequency than is typical of seismically conducted signals.
A typical detector, the Thales MA2772, is a piezoelectric vibration-sensing cable, shallowly buried below the ground surface and extended for 750 meters. Two variants are available: a high-sensitivity version for personnel detection, and a lower-sensitivity version to detect vehicles. Using two or more sensors will determine the direction of travel, from the sequence in which the sensors trigger, as the sketch below illustrates.
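A minimal sketch of that direction-of-travel logic, assuming each sensor delivers a (sensor_id, trigger_time) report; the event format and sensor labels are illustrative assumptions, not features of any fielded system.

```python
def direction_of_travel(events):
    """events: iterable of (sensor_id, trigger_time) reports for one crossing."""
    first_hit = {}
    for sensor, t in events:
        # Keep only the earliest trigger seen from each sensor.
        first_hit[sensor] = min(t, first_hit.get(sensor, t))
    if len(first_hit) < 2:
        return "insufficient data"       # both sensors must trigger
    earlier, later = sorted(first_hit, key=first_hit.get)[:2]
    return f"{earlier} -> {later}"

# Sensor B fires 0.3 s after sensor A: the target moved from A toward B.
print(direction_of_travel([("B", 1000.4), ("A", 1000.1)]))   # A -> B
```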
In addition to being buried, piezoelectric vibration detectors, in a cable form factor, also are used as part of high-security fencing. They can be embedded in walls or other structures that need protection.
Magnetic MASINT
A magnetometer is a scientific instrument used to measure the strength and/or direction of the magnetic field in the vicinity of the instrument. The measurements it makes can be compared to signatures of vehicles on land, submarines underwater, and atmospheric radio propagation conditions. Magnetometers come in two basic types (the difference is illustrated in the sketch after this list):
Scalar magnetometers measure the total strength of the magnetic field to which they are subjected, and
Vector magnetometers have the capability to measure the component of the magnetic field in a particular direction.
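The distinction can be made concrete in a few lines; the field sample and the sensor axis below are illustrative assumptions.

```python
import math

B = (21000.0, 1500.0, 43000.0)   # assumed field sample (Bx, By, Bz), nT

# A scalar magnetometer reports only the total field strength:
scalar_reading = math.sqrt(sum(c * c for c in B))

# A vector magnetometer reports the component along a chosen unit vector,
# here n, e.g. a sensor axis aligned with an aircraft boom:
n = (0.0, 0.0, 1.0)
vector_reading = sum(b * u for b, u in zip(B, n))

print(f"|B| = {scalar_reading:.0f} nT, component along n = {vector_reading:.0f} nT")
```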
Earth's magnetism varies from place to place and differences in the Earth's magnetic field (the magnetosphere) can be caused by two things:
The differing nature of rocks
The interaction between charged particles from the sun and the magnetosphere
Metal detectors use electromagnetic induction to detect metal. They can also determine the changes in existing magnetic fields caused by metallic objects.
Indicating loops for detecting submarines
One of the first means for detecting submerged submarines, first installed by the Royal Navy in 1914, was the effect of their passage over an anti-submarine indicator loop on the bottom of a body of water. A metal object passing over it, such as a submarine, will, even if degaussed, have enough residual magnetism to induce a current in the loop's cable. In this case, the motion of the metal submarine across the indicating coil acts as an oscillator, producing electric current.
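The induction mechanism is Faraday's law. The sketch below drives it with a made-up flux profile to show the characteristic polarity swing as a hull crosses the loop; the flux amplitude and crossing time are assumptions for illustration only.

```python
import numpy as np

t = np.linspace(-30, 30, 6001)           # seconds; hull overhead at t = 0
flux = 2e-4 * np.exp(-((t / 8.0) ** 2))  # assumed residual-magnetism flux, Wb

emf = -np.gradient(flux, t)              # Faraday's law: emf = -dPhi/dt
print(f"peak induced emf ~ {np.abs(emf).max() * 1e6:.0f} microvolts")
# The signature swings one polarity as the boat approaches, passes through
# zero directly overhead, then swings the opposite way as it departs.
```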
MAD
A magnetic anomaly detector (MAD) is an instrument used to detect minute variations in the Earth's magnetic field. The term refers specifically to magnetometers used by military forces to detect submarines (a mass of ferromagnetic material creates a detectable disturbance in the magnetic field). Magnetic anomaly detectors were first employed to detect submarines during World War II. MAD gear was used by both Japanese and U.S. anti-submarine forces, either towed by ship or mounted in aircraft, to detect shallow submerged enemy submarines. After the war, the U.S. Navy continued to develop MAD gear as a parallel development with sonar detection technologies.
To reduce interference from electrical equipment or metal in the fuselage of the aircraft, the MAD sensor is placed at the end of a boom or a towed aerodynamic device. Even so, the submarine must be very near the aircraft's position and close to the sea surface for detection of the change or anomaly. The detection range is normally related to the distance between the sensor and the submarine. The size of the submarine and its hull composition determine the detection range. MAD devices are usually mounted on aircraft or helicopters.
There is some misunderstanding of the mechanism of detection of submarines in water using the MAD boom system. Magnetic moment displacement is ostensibly the main disturbance, yet submarines are detectable even when oriented parallel to the Earth's magnetic field, despite construction with non-ferromagnetic hulls.
For example, the Soviet-Russian Alfa-class submarine was constructed out of titanium. This light, strong material, as well as a unique nuclear power system, allowed the submarine to break speed and depth records for operational boats. It was thought that nonferrous titanium would give dramatic submerged performance and protection from detection by MAD sensors, but this was not the case: the Alfa was still detectable.
Since titanium structures are detectable, MAD sensors do not directly detect deviations in the Earth's magnetic field. Instead, they may be described as long-range electric and electromagnetic field detector arrays of great sensitivity.
An electric field is set up in conductors experiencing a variation in physical environmental conditions, providing that they are contiguous and possess sufficient mass. Particularly in submarine hulls, there is a measurable temperature difference between the bottom and top of the hull producing a related salinity difference, as salinity is affected by the temperature of the water. The difference in salinity creates an electric potential across the hull. An electric current then flows through the hull, between the laminae of sea water separated by depth and temperature. The resulting dynamic electric field produces an electromagnetic field of its own, and thus even a titanium hull will be detectable on a MAD scope, as will a surface ship for the same reason.
Vehicle detectors
The Remotely Emplaced Battlefield Surveillance System (REMBASS) is a US Army program for detecting the presence, speed, and direction of a ferrous object, such as a tank. Coupled with acoustic sensors that recognize the sound signature of a tank, it could offer high accuracy. It also collects weather information.
The Army's AN/GSQ-187 Improved Remote Battlefield Sensor System (I-REMBASS) includes both magnetic-only and combined passive infrared/magnetic intrusion detectors. The DT-561/GSQ hand-emplaced MAG sensor "detects vehicles (tracked or wheeled) and personnel carrying ferrous metal. It also provides information on which to base a count of objects passing through its detection zone and reports their direction of travel relative to their location." The monitor uses two different (MAG and IR) sensors and their identification codes to determine the direction of travel.
Magnetic detonators and countermeasures
Magnetic sensors, much more sophisticated than the early inductive loops, can trigger the explosion of mines or torpedoes. Early in World War II, the US tried to field a magnetic torpedo exploder that was far beyond the limits of the technology of the time, and had to disable it, and then work on also-unreliable contact fuzing, to make torpedoes more than blunt objects that banged into hulls.
Since water is incompressible, an explosion under the keel of a vessel is far more destructive than one at the air-water interface. Torpedo and mine designers want to place the explosions in that vulnerable spot, and countermeasures designers want to hide the magnetic signature of a vessel. Signature is especially relevant here, as mines may be made selective for warships, merchant vessels unlikely to be hardened against underwater explosions, or submarines.
A basic countermeasure, started in World War II, was degaussing, but it is impossible to remove all magnetic properties.
Detecting landmines
Landmines often contain enough ferrous metal to be detectable with appropriate magnetic sensors. Sophisticated mines, however, may also sense a metal-detection oscillator, and, under preprogrammed conditions, detonate to deter demining personnel.
Not all landmines have enough metal to activate a magnetic detector. While, unfortunately, the greatest number of unmapped minefields are in parts of the world that cannot afford high technology, a variety of MASINT sensors could help demining. These would include ground-mapping radar, thermal and multispectral imaging, and perhaps synthetic aperture radar to detect disturbed soil.
Gravitimetric MASINT
Gravity is a function of mass. While the average value of Earth's surface gravity is approximately 9.8 meters per second squared, given sufficiently sensitive instrumentation, it is possible to detect local variations in gravity from the different densities of natural materials: the value of gravity will be greater on top of a granite monolith than over a sand beach. Again with sufficiently sensitive instrumentation, it should be possible to detect gravitational differences between solid rock, and rock excavated for a hidden facility.
Streland 2003 points out that the instrumentation indeed must be sensitive: variations of the force of gravity on the earth's surface are on the order of 10⁻⁶ of the average value. A practical gravimetric detector of buried facilities would need to be able to measure "less than one one millionth of the force that caused the apple to fall on Sir Isaac Newton's head." To be practical, the sensor would need to be usable while in motion, measuring the change in gravity between locations. This change over distance is called the gravity gradient, which can be measured with a gravity gradiometer.
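A first-order point-mass estimate shows why: the sketch below computes the anomaly from a buried void, with the chamber size, depth, and rock density all assumptions for illustration.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
g = 9.8             # average surface gravity, m/s^2
rho_rock = 2700.0   # assumed granite-like density, kg/m^3
radius = 10.0       # assumed chamber radius, m
depth = 50.0        # assumed depth to chamber center, m

# Treat the excavated chamber as a sphere of "missing" rock and apply the
# point-mass formula for the gravity deficit directly above it.
missing_mass = rho_rock * (4.0 / 3.0) * math.pi * radius ** 3
delta_g = G * missing_mass / depth ** 2

print(f"anomaly: {delta_g:.1e} m/s^2 = {delta_g / g:.1e} of g")
# ~3e-7 m/s^2, i.e. ~3e-8 of g: even smaller than the ~1e-6 natural
# variations cited above, hence the emphasis on gradiometry in motion.
```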
Developing an operationally useful gravity gradiometer is a major technical challenge. One type, the SQUID (Superconducting Quantum Interference Device) gradiometer, may have adequate sensitivity, but it needs extreme cryogenic cooling, a logistical nightmare even in space. Another technique, far more operationally practical but lacking the necessary sensitivity, is that of the Gravity Recovery and Climate Experiment (GRACE), which currently uses radar to measure the distance between pairs of satellites whose orbits change based on gravity. Substituting lasers for radar will make GRACE more sensitive, but probably not sensitive enough.
A more promising technique, although still in the laboratory, is quantum gradiometry, an extension of atomic clock techniques much like those in GPS. Off-the-shelf atomic clocks measure changes in atomic waves over time, rather than the spatial changes measured by a quantum gravity gradiometer. One advantage of using GRACE-style satellites is that measurements can be made from a number of points over time, with a resulting improvement like that seen in synthetic aperture radar and sonar. Still, finding deeply buried structures of human scale is a tougher problem than the initial goals of finding mineral deposits and ocean currents.
To make this operationally feasible, there would have to be a launcher to put fairly heavy satellites into polar orbits, and as many earth stations as possible to reduce the need for large on-board storage of the large amounts of data the sensors will produce. Finally, there needs to be a way to convert the measurements into a form that can be compared against available signatures in geodetic databases. Those databases would need significant improvement, from measured data, to become sufficiently precise that a buried facility signature would stand out.
References
Measurement and signature intelligence
Acoustics
Signal processing
Military intelligence
Sonar
Anti-submarine warfare
Navigational equipment
Surveillance
Ultrasound
Effects of gravity
Geophysics
Synthetic aperture radar | Geophysical MASINT | Physics,Technology,Engineering | 11,180 |
5,466,491 | https://en.wikipedia.org/wiki/Xylan | Xylan (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant polysaccharide on Earth, after cellulose and chitin.
Composition
Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituted groups, xylan can be categorized into three classes: i) glucuronoxylan (GX), ii) neutral arabinoxylan (AX), and iii) glucuronoarabinoxylan (GAX). In some cases, xylans contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues.
Occurrence
Plant cell structure
Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, xylan content in hardwoods is 10–35%, whereas in softwoods it is 10–15%. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone.
Algae
Some macrophytic green algae contain xylan (specifically homoxylan), especially those within the Codium and Bryopsis genera, where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae.
Food science
The quality of cereal flours and the hardness of dough are affected by their xylan content, which thus plays a significant role in the bread industry. The main constituent of xylan can be converted into xylitol (a xylose derivative), which is used as a natural food sweetener that helps to reduce dental cavities and acts as a sugar substitute for diabetic patients. Poultry feed has a high percentage of xylan.
Xylan is one of the foremost anti-nutritional factors in commonly used feedstuff raw materials. Xylooligosaccharides produced from xylan are considered "functional food" or dietary fibers due to their potential prebiotic properties.
Crystallinity
The regular branching patterns of xylans may facilitate their co-crystallization with cellulose in the plant cell wall. Xylan also tends to crystallize from aqueous solution. Additional polymorphs of (1→4)-β-D-xylan have been obtained by crystallization from non-aqueous environments.
Biosynthesis
Several glycosyltransferases are involved in the biosynthesis of xylans.
In eukaryotes, GTs represent about 1% to 2% of gene products. GTs are assembled into complexes in the Golgi apparatus. However, no xylan synthase complexes have been isolated from Arabidopsis tissues (a dicot). The first genes involved in the biosynthesis of xylan were revealed by irregular xylem (irx) mutants in Arabidopsis thaliana carrying mutations that affect xylan biosynthesis genes. As a result, abnormal plant growth due to thinning and weakening of secondary xylem cell walls was seen. The Arabidopsis mutants irx9 (At2g37090), irx14 (At4g36890), irx10/gut2 (At1g27440), and irx10-L/gut1 (At5g61840) showed defects in xylan backbone biosynthesis. The Arabidopsis mutants irx7, irx8, and parvus are thought to be related to the biosynthesis of the reducing-end oligosaccharide. Thus, many genes have been associated with xylan biosynthesis, but their biochemical mechanism is still unknown. Zeng et al. (2010) immuno-purified xylan synthase activity from etiolated wheat (Triticum aestivum) microsomes. Jiang et al. (2016) reported a xylan synthase complex (XSC) from wheat with a central core formed of two members of the GT43 and GT47 families (CAZy database). They purified xylan synthase activity from wheat seedlings through proteomics analysis and showed that two members of TaGT43 and TaGT47 are sufficient for the synthesis of a xylan-like polymer in vitro.
Breakdown
Xylanase converts xylan into xylose. Given that plants contain up to 30% xylan, xylanase is important to the nutrient cycle. The degradation of xylan and other hemicelluloses is relevant to the production of biofuels. Being less crystalline and more highly branched, these hemicelluloses are particularly susceptible to hydrolysis.
Research
As a major component of plants, xylan is potentially a significant source of renewable energy especially for second generation biofuels. However, xylose (backbone of xylan) is a pentose sugar that is hard to ferment during biofuel conversion because microorganisms like yeast cannot ferment pentose naturally.
References
Polysaccharides
Cell biology | Xylan | Chemistry,Biology | 1,276 |
45,464,761 | https://en.wikipedia.org/wiki/Penicillium%20ellipsoideosporum | Penicillium ellipsoideosporum is a species of the genus Penicillium which was isolated in China.
See also
List of Penicillium species
References
ellipsoideosporum
Fungi described in 2000
Fungus species | Penicillium ellipsoideosporum | Biology | 53 |
6,156,073 | https://en.wikipedia.org/wiki/Burr%20Truss | The Burr Arch Truss—or, simply, Burr Truss or Burr Arch—is a combination of an arch and a multiple kingpost truss design. It was invented in 1804 by Theodore Burr, patented on April 3, 1817, and used in bridges, usually covered bridges.
Design
The design principle behind the Burr arch truss is that the arch should be capable of bearing the entire load on the bridge, while the truss keeps the bridge rigid. Although the kingpost truss alone is capable of bearing a load, the combination was used because it is impossible to evenly balance a dynamic load crossing the bridge between the two parts. The opposite view is also held, based on computer models: that the truss performs the majority of the load bearing and the arch provides the stability. Either way, the combination of the arch and the truss provides a more stable bridge, capable of supporting greater weight than either the arch or truss alone.
The U.S. state of Indiana has a large collection of Burr Truss bridges. Of its 92 extant bridges, 53 are Burr Trusses, many of which reside in Parke County.
Design specification
References
External links
Truss bridges by type
American inventions
Trusses | Burr Truss | Technology | 232 |
18,962,396 | https://en.wikipedia.org/wiki/MPLS-TP | In telecommunications, Multiprotocol Label Switching - Transport Profile (MPLS-TP) is a variant of the MPLS protocol that is used in packet switched data networks. MPLS-TP is the product of a joint Internet Engineering Task Force (IETF) / International Telecommunication Union Telecommunication Standardization Sector (ITU-T) effort to include an MPLS Transport Profile within the IETF MPLS and PWE3 architectures to support the capabilities and functionalities of a packet transport network.
Description
MPLS-TP is designed for use as a network layer technology in transport networks. It will be a continuation of the work started by the transport network experts of the ITU-T, specifically SG15, as T-MPLS. Since 2008 the work has progressed as a cooperation between ITU-T and IETF. The required protocol extensions to MPLS are being designed by the IETF based on requirements provided by service providers. It will be a connection-oriented packet-switched (CO-PS) application. It will offer a dedicated MPLS implementation by removing features that are not relevant to CO-PS applications and adding mechanisms that provide support of critical transport functionality.
MPLS-TP is to be based on the same architectural principles of layered networking that are used in longstanding transport network technologies like SDH, SONET and OTN. Service providers have already developed management processes and work procedures based on these principles.
MPLS-TP gives service providers a reliable packet-based technology that is based upon circuit-based transport networking, and thus is expected to align with current organizational processes and large-scale work procedures similar to other packet transport technologies.
MPLS-TP is a low cost L2.5 technology (if the limited profile to be specified is implemented in isolation) that provides QoS, end-to-end OA&M and protection switching.
In February 2008 the ITU-T and IETF agreed to work jointly on the design of MPLS-TP. Based on this agreement, IETF and ITU-T experts will jointly work out the requirements and solutions. ITU-T in turn will update the existing T-MPLS standards based on the MPLS-TP related RFCs listed below.
ITU-T
The following ITU-T Recommendations exist for MPLS-TP. Some of these Recommendations supersede the ones that applied to T-MPLS before that work was ceased.
RFC or drafts
The following IETF RFCs or drafts exist for MPLS-TP:
Solutions
The solutions for the above requirements and framework, listed below, are under development:
An In-Band Data Communication Network For the MPLS Transport Profile
MPLS Generic Associated Channel - defines GAL/G-ACH
"EXP field" renamed to "Traffic Class field"
MPLS Transport Profile Lock Instruct and Loopback Functions
A Thesaurus for the Interpretation of Terminology Used in MPLS Transport Profile (MPLS-TP) Internet-Drafts and RFCs in the Context of the ITU-T's Transport Network Recommendation
MPLS-TP ACH TLV (IETF Draft)
Proactive continuity and connectivity verification (Individual Draft)
Guidelines for the Use of the "OAM" Acronym in the IETF
MPLS-TP OAM based on Y.1731 (Individual Draft)
MPLS-TP Performance monitoring (Individual Draft)
MPLS Fault Management Operations, Administration, and Maintenance (OAM)
MPLS Transport Profile (MPLS-TP) Linear Protection
Pre-standard Linear Protection Switching in MPLS Transport Profile (MPLS-TP)
MPLS-TP P2MP traffic protection (Individual Draft)
MPLS-TP OAM Alarm suppression (Individual Draft)
MPLS-TP & IP/MPLS Interworking (Individual Draft)
MPLS-TP Ring Protection (Individual Draft)
MPLS-TP LDP extension: No work
MPLS-TP RSVP-TE extensions: No work
References
External links
ITU-T series G recommendations
IETF MPLS Charter
MPLS-TP Mailing list archives
Current mailing list (back to MPLS mailing list)
Latest list of MPLS-TP standards
MPLS networking
Internet Standards
Network protocols
Tunneling protocols | MPLS-TP | Engineering | 874 |
5,325,382 | https://en.wikipedia.org/wiki/Net%20interest%20income | Net interest income (NII) is the difference between revenues generated by interest-bearing assets and the cost of servicing (interest-burdened) liabilities. For banks, the assets typically include commercial and personal loans, mortgages, construction loans and investment securities. The liabilities consist primarily of customers' deposits. NII is the difference between (a) interest payments the bank receives on outstanding loans and (b) interest payments the bank makes to customers on their deposits.
NII = (interest payments on assets) − (interest payments on liabilities)
Depending on a bank's specific assets and liabilities (e.g., fixed or floating rate), NII may be more or less sensitive to changes in interest rates. If the bank's liabilities reprice faster than its assets, then it is said to be "liability-sensitive." Further, the bank is asset-sensitive if its liabilities reprice more slowly than its assets in a changing interest-rate environment. The exposure of NII to changes in interest rates can be measured by the dollar maturity gap (DMG), which is the difference between the dollar amount of assets that reprice and the dollar amount of liabilities that reprice within a given time period.
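A minimal numeric sketch of the two quantities; every balance-sheet figure below is an illustrative assumption.

```python
loans, loan_rate = 800e6, 0.060          # interest-bearing assets
deposits, deposit_rate = 700e6, 0.020    # interest-burdened liabilities

nii = loans * loan_rate - deposits * deposit_rate
print(f"NII = ${nii / 1e6:.1f}M")        # 48.0 - 14.0 = $34.0M

# Dollar maturity gap over a chosen horizon (say one year): dollar amount
# of assets repricing minus liabilities repricing within that window.
repricing_assets, repricing_liabilities = 300e6, 450e6
dmg = repricing_assets - repricing_liabilities   # negative: liability-sensitive

# First-order effect of a parallel rate move on NII over the horizon:
rate_shift = 0.01                        # +100 basis points
print(f"DMG = ${dmg / 1e6:.0f}M, dNII ~ ${dmg * rate_shift / 1e6:.1f}M")
# A liability-sensitive bank (negative DMG) sees NII fall as rates rise.
```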
See also
Net interest spread
Net interest margin
Earnings at risk
References
Financial ratios
Banking
Interest
Income statement | Net interest income | Mathematics | 274 |
24,437,494 | https://en.wikipedia.org/wiki/Ramariopsis%20kunzei | Ramariopsis kunzei is an edible species of coral fungi in the family Clavariaceae, and the type species of the genus Ramariopsis. It is commonly known as white coral because of the branched structure of the fruit bodies that resemble marine coral. The fruit bodies are up to tall by wide, with numerous branches originating from a short rudimentary stem. The branches are one to two millimeters thick, smooth, and white, sometimes with yellowish tips in age. Ramariopsis kunzei has a widespread distribution, and is found in North America, Europe, Asia, and Australia.
Taxonomy and phylogeny
The species was first described as Clavaria kunzei by pioneer mycologist Elias Magnus Fries in 1821. E.J.H. Corner transferred the species to Ramariopsis in 1950, and made it the type species. In general, coral fungi often have extensive taxonomic histories, as mycologists have not agreed on the best way to classify them. In addition to Clavaria and Ramariopsis, R. kunzei has been placed in the genus Ramaria by Lucien Quélet in 1888, and in Clavulinopsis by Walter Jülich in 1985. According to the taxonomic database MycoBank, the species has acquired a sizable list of synonyms.
A phylogenetic analysis of clavarioid fungi concluded that R. kunzei is in a phylogenetic lineage together with several Clavulinopsis species (including C. sulcata, C. helvola and C. fusiformis), and that this clade (the ramariopsis clade) is nested within a group of species representing the family Clavariaceae.
Description
The fruit bodies of Ramariopsis kunzei are white to whitish-yellow in color, and are highly branched structures resembling coral; the dimensions are typically up to tall and wide. Older specimens may have a pinkish tinge. The tips of the branches are blunt, not crested as in some other species of coral fungi, like Clavulina cristata; branches are between 1 and 5 millimeters thick. The branch tips of mature specimens may be yellow. A stem, if present, may be up to long and scurfy—covered with small flakes or scales. The texture of the flesh may range from pliable to brittle. This fungus does not undergo any color changes upon bruising or injury, however, a 10% solution of FeSO4 (a chemical test known as "iron salts") applied to the flesh will turn it green.
In deposit, the spores are white. Viewed with a light microscope, the spores are translucent and have an ellipsoid to roughly spherical shape with spines on the surface, and dimensions of 3–5.5 by 2.5–4.5 μm. Spores are non-amyloid, meaning that they do not absorb iodine when stained with Melzer's reagent. The spore-bearing cells, the basidia, are usually 25–45 μm long by 6–7 μm wide, and 4-spored. Clamp connections are present in the hyphae of this species.
Edibility
The species is edible, but "fleshless and flavorless." Other authors concur that the odor and taste are not distinctive.
Similar species
The "crested coral" (Clavulina cristata, edible) is similar in appearance to R. kunzei, but its branches have fringed, feathery tips. The "jellied false coral" (Tremellodendron pallidum, edible) has whitish, tough, cartilaginous branches with blunt tips. Ramariopsis lentofragilis (poisonous) is an eastern North American lookalike that causes severe abdominal pain, general weakness, and pain under the sternum.
Also similar are Artomyces pyxidatus, Clavulina cinerea, Clavulina rugosa, and Clavulinopsis corniculata.
Habitat and distribution
The species is thought to be saprobic and is found growing on the ground, in duff, or less frequently on well-decayed wood. Fruit bodies may grow singly, in groups, or clustered together. David Arora has noted a preference for growing under conifers, as well as a prevalence in redwood forests of North America. In contrast, an earlier author claimed this species grows "rarely in coniferous woods."
In Europe, Ramariopsis kunzei has been collected in Scotland (specifically, on the islands of Arran, Gigha and Kintyre peninsula), the Netherlands, Norway, former Czechoslovakia, Germany, Poland, and Russia (Zhiguli Mountains). It has also been found in China, India, Iran, the Solomon Islands, and Australia. In North America, the distribution extends north to Canada, and includes the United States (including Hawaii and Puerto Rico).
References
Clavariaceae
Fungi described in 1950
Fungi of Europe
Fungi of North America
Fungi of Asia
Fungi of Australia
Edible fungi
Taxa named by Elias Magnus Fries
Fungi of Oceania
Fungi without expected TNC conservation status
Fungus species | Ramariopsis kunzei | Biology | 1,083 |
35,039,127 | https://en.wikipedia.org/wiki/Roper%20River%20scrub%20robin | The Roper River scrub robin (Drymodes superciliaris colcloughi), also known as the allied scrub robin, is a putative subspecies of the northern scrub robin, a bird in the Petroicidae, or Australasian robin family. Whether it ever existed is doubtful; if it did it is almost certainly extinct.
History
The subspecies was described in 1914 by Gregory Mathews and named subspecifically for the collector, M. J. Colclough. The description was based on two skins obtained in 1910, supposedly from the tropical monsoonal Roper River region of the eastern Top End of the Northern Territory of Australia. The specimens were taken by an ornithological collecting expedition sponsored by wealthy amateur ornithologist and oologist H. L. White. The expedition was in the Roper area from 15 July until 22 November 1910.
Description
There are only two specimens of the scrub robin: the holotype, a male held in the American Museum of Natural History (registered 585473), and a female in the Queensland Museum. Of these, Mathews said in his original description that the taxon differed from the nominate subspecies D. s. superciliaris, confined to the northern end of the Cape York Peninsula in Queensland, more than 800 km north-east of the Roper, by "being much redder on the back and entirely reddish-buff on the undersurface". Simon Bennett, in a 1983 overview in the Emu, found that there were valid, though slight, differences between the birds from the two localities, in particular with the Roper male having its "throat, ear-coverts and forehead washed with rufous". However, Richard Schodde and Ian Mason commented in 1999 that the Roper River specimens did not differ significantly in colouration and measurements from those of the Cape York Peninsula.
Validity and status
The validity of the taxon is primarily based on whether a population of scrub robins existed in the Northern Territory, the only records of such being the two skins collected by Colclough in 1910 on the Roper River. Another member of the 1910 expedition, E. D. Frizelle, collected several clutches of eggs purported to be those of northern scrub robins from the Roper. Some of these clutches are now in the H. L. White Collection at the National Museum of Victoria, but upon reappraisal have been found to be those of misidentified buff-sided robins.
Several dedicated, though fruitless, searches for scrub robins along the Roper were undertaken in 1980, during the data-collection phase of the first Atlas of Australian Birds project (1977–1981), by Simon Bennett and others. Bennett accepted the validity of the original collection location, though he admitted that the poor documentation of the specimens raised the issue of credibility. He suggested that they represented a relict population inhabiting small and isolated thickets of vegetation on the banks of permanent watercourses. He speculated that the population had been adversely affected by the advent of extensive cattle grazing and changes in fire regime since European settlement of the Roper region. He also raised the possibility of birds surviving along rivers north of the Roper in eastern Arnhem Land. The subspecies is listed by the Australian Government as Extinct and by the Northern Territory Government as Data Deficient. Schodde and Mason have opined:
References
Drymodes
Controversial bird taxa
Birds of the Northern Territory
Birds described in 1914
Taxa named by Gregory Mathews
Bird extinctions since 1500
Extinct birds of Australia | Roper River scrub robin | Biology | 725 |
41,215,364 | https://en.wikipedia.org/wiki/Elektronika%20VM-12 | Elektronika VM-12 was the first Soviet VHS-compatible videocassette recorder. It was capable of recording SECAM-IIIB D/K (OIRT), PAL, and black-and-white video on 12.65-mm wide magnetic tape.
Elektronika VM-12 was 480×367×136 mm in size and weighed 10 kg. Its tape speed in PAL SP mode was 2.339 cm/s ±0.5%.
The device was developed on the basis of the Panasonic NV-2000 video cassette recorder manufactured in Japan. The equipment for the production of some parts and units was purchased in Japan as well.
References
Recording devices
Video storage players
Science and technology in the Soviet Union
Products introduced in 1984
Ministry of the Electronics Industry (Soviet Union) products
VHS | Elektronika VM-12 | Technology | 163 |
157,620 | https://en.wikipedia.org/wiki/Electrochemical%20potential | In electrochemistry, the electrochemical potential (ECP), , is a thermodynamic measure of chemical potential that does not omit the energy contribution of electrostatics. Electrochemical potential is expressed in the unit of J/mol.
Introduction
Each chemical species (for example, "water molecules", "sodium ions", "electrons", etc.) has an electrochemical potential (a quantity with units of energy) at any given point in space, which represents how easy or difficult it is to add more of that species to that location. If possible, a species will move from areas with higher electrochemical potential to areas with lower electrochemical potential; in equilibrium, the electrochemical potential will be constant everywhere for each species (it may have a different value for different species). For example, if a glass of water has sodium ions (Na+) dissolved uniformly in it, and an electric field is applied across the water, then the sodium ions will tend to get pulled by the electric field towards one side. We say the ions have electric potential energy, and are moving to lower their potential energy. Likewise, if a glass of water has a lot of dissolved sugar on one side and none on the other side, each sugar molecule will randomly diffuse around the water, until there is equal concentration of sugar everywhere. We say that the sugar molecules have a "chemical potential", which is higher in the high-concentration areas, and the molecules move to lower their chemical potential. These two examples show that an electrical potential and a chemical potential can both give the same result: A redistribution of the chemical species. Therefore, it makes sense to combine them into a single "potential", the electrochemical potential, which can directly give the net redistribution taking both into account.
It is (in principle) easy to measure whether or not two regions (for example, two glasses of water) have the same electrochemical potential for a certain chemical species (for example, a solute molecule): Allow the species to freely move back and forth between the two regions (for example, connect them with a semi-permeable membrane that lets only that species through). If the chemical potential is the same in the two regions, the species will occasionally move back and forth between the two regions, but on average there is just as much movement in one direction as the other, and there is zero net migration (this is called "diffusive equilibrium"). If the chemical potentials of the two regions are different, more molecules will move to the lower chemical potential than the other direction.
Moreover, when there is not diffusive equilibrium, i.e., when there is a tendency for molecules to diffuse from one region to another, then there is a certain free energy released by each net-diffusing molecule. This energy, which can sometimes be harnessed (a simple example is a concentration cell), and the free-energy per mole is exactly equal to the electrochemical potential difference between the two regions.
Conflicting terminologies
It is common in electrochemistry and solid-state physics to discuss both the chemical potential and the electrochemical potential of the electrons. However, in the two fields, the definitions of these two terms are sometimes swapped. In electrochemistry, the electrochemical potential of electrons (or any other species) is the total potential, including both the (internal, nonelectrical) chemical potential and the electric potential, and is by definition constant across a device in equilibrium, whereas the chemical potential of electrons is equal to the electrochemical potential minus the local electric potential energy per electron. In solid-state physics, the definitions are normally compatible with this, but occasionally the definitions are swapped.
This article uses the electrochemistry definitions.
Definition and usage
In generic terms, electrochemical potential is the mechanical work done in bringing 1 mole of an ion from a standard state to a specified concentration and electrical potential. According to the IUPAC definition, it is the partial molar Gibbs energy of the substance at the specified electric potential, where the substance is in a specified phase. Electrochemical potential can be expressed as

μ̄i = μi + ziFΦ
where:
μ̄i is the electrochemical potential of species i, in J/mol,
μi is the chemical potential of the species i, in J/mol,
zi is the valency (charge) of the ion i, a dimensionless integer,
F is the Faraday constant, in C/mol,
Φ is the local electrostatic potential in V.
In the special case of an uncharged atom, zi = 0, and so μ̄i = μi.
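As a worked example of the definition, the sketch below computes the electrochemical potential difference for Na+ across a cell membrane. The chemical-potential term uses the standard ideal-dilute form μ = μ⁰ + RT ln c, so the standard-state term cancels in the difference; the concentrations and membrane potential are textbook-style assumptions.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
T = 310.0        # body temperature, K
F = 96485.0      # Faraday constant, C/mol
z = 1            # valency of Na+

c_out, c_in = 145e-3, 12e-3       # assumed Na+ concentrations, mol/L
phi_out, phi_in = 0.0, -0.070     # assumed potentials, V (inside negative)

# Difference, inside minus outside, of mu_bar = mu + z*F*phi:
d_mu_bar = R * T * math.log(c_in / c_out) + z * F * (phi_in - phi_out)
print(f"{d_mu_bar / 1000:.1f} kJ/mol")   # ~ -13 kJ/mol: Na+ tends to flow in
```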
Electrochemical potential is important in biological processes that involve molecular diffusion across membranes, in electroanalytical chemistry, and industrial applications such as batteries and fuel cells. It represents one of the many interchangeable forms of potential energy through which energy may be conserved.
In cell membranes, the electrochemical potential is the sum of the chemical potential and the membrane potential.
Incorrect usage
The term electrochemical potential is sometimes used to mean an electrode potential (either of a corroding electrode, an electrode with a non-zero net reaction or current, or an electrode at equilibrium). In some contexts, the electrode potential of corroding metals is called "electrochemical corrosion potential", which is often abbreviated as ECP, and the word "corrosion" is sometimes omitted. This usage can lead to confusion. The two quantities have different meanings and different dimensions: the dimension of electrochemical potential is energy per mole while that of electrode potential is voltage (energy per charge).
See also
Concentration cell
Electrochemical gradient
Fermi level
Membrane potential
Nernst equation
Poisson–Boltzmann equation
Reduction potential
Standard electrode potential
References
External links
Electrochemical potential – lecture notes from University of Illinois at Urbana-Champaign
Electrochemistry
Thermodynamics
Electrochemical potentials | Electrochemical potential | Physics,Chemistry,Mathematics | 1,193 |
40,230,892 | https://en.wikipedia.org/wiki/Derek%20Pearcy | Derek Pearcy is a game designer, writer, editor and graphic designer known for his work on role-playing games.
Career
Pearcy served as the editor of Pyramid for its first two print issues, in 1993. He worked for Steve Jackson Games in the 1990s. Pearcy was working as a staff member of Steve Jackson Games when they chose him to develop their own version of the French role-playing game In Nomine. SJG's In Nomine was published in 1997. In Nomine won the Origins Award for Best Graphic Presentation of a Roleplaying Game, Adventure, or Supplement of 1997. He was a special guest at HexaCon 9 in 1999.
References
GURPS writers
Living people
Role-playing game designers
Year of birth missing (living people) | Derek Pearcy | Technology | 158 |
421,981 | https://en.wikipedia.org/wiki/Substitute%20good | In microeconomics, substitute goods are two goods that can be used for the same purpose by consumers. That is, a consumer perceives both goods as similar or comparable, so that having more of one good causes the consumer to desire less of the other good. Contrary to complementary goods and independent goods, substitute goods may replace each other in use due to changing economic conditions. An example of substitute goods is Coca-Cola and Pepsi; the interchangeable aspect of these goods is due to the similarity of the purpose they serve, i.e. fulfilling customers' desire for a soft drink. These types of substitutes can be referred to as close substitutes.
Substitute goods are commodities that the consumer demands to be used in place of another good.
Economic theory describes two goods as being close substitutes if three conditions hold:
products have the same or similar performance characteristics
products have the same or similar occasion for use and
products are sold in the same geographic area
Performance characteristics describe what the product does for the customer; a solution to customers' needs or wants. For example, a beverage would quench a customer's thirst.
A product's occasion for use describes when, where and how it is used. For example, orange juice and soft drinks are both beverages but are used by consumers in different occasions (i.e. breakfast vs during the day).
Two products are in different geographic markets if they are sold in different locations, if it is costly to transport the goods, or if it is costly for consumers to travel to buy the goods.
Only if the two products satisfy the three conditions will they be classified as close substitutes according to economic theory. The opposite of a substitute good is a complementary good; these are goods that are dependent on one another. An example of complementary goods is cereal and milk.
An example of substitute goods are tea and coffee. These two goods satisfy the three conditions: tea and coffee have similar performance characteristics (they quench a thirst), they both have similar occasions for use (in the morning) and both are usually sold in the same geographic area (consumers can buy both at their local supermarket). Some other common examples include margarine and butter, and McDonald's and Burger King.
Formally, good X is a substitute for good Y if, when the price of Y rises, the demand for X rises, see figure 1.
Let py be the price of good Y and qx the quantity demanded of good X. Then, X is a substitute for Y if: ∂qx/∂py > 0.
Cross elasticity of demand
The fact that one good is substitutable for another has immediate economic consequences: insofar as one good can be substituted for another, the demands for the two goods will be interrelated by the fact that customers can trade off one good for the other if it becomes advantageous to do so. Cross-price elasticity helps us understand the degree of substitutability of the two products. An increase in the price of a good will increase demand for its substitutes, while a decrease in the price of a good will decrease demand for its substitutes, see Figure 2.
The relationship between demand schedules determines whether goods are classified as substitutes or complements. The cross-price elasticity of demand shows the relationship between two goods, it captures the responsiveness of the quantity demanded of one good to a change in price of another good.
Cross-Price Elasticity of Demand (Ex,y) is calculated with the following formula:
Ex,y = Percentage Change in Quantity Demanded for Good X / Percentage Change in Price of Good Y
The cross-price elasticity may be positive or negative, depending on whether the goods are complements or substitutes. A substitute good is a good with a positive cross elasticity of demand. This means that, if good X is a substitute for good Y, an increase in the price of Y will result in a leftward movement along the demand curve of Y and cause the demand curve for X to shift out; a decrease in the price of Y will result in a rightward movement along the demand curve of Y and cause the demand curve for X to shift in. Furthermore, perfect substitutes have a higher cross elasticity of demand than imperfect substitutes do.
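A minimal sketch of the classification in practice, using the midpoint (arc) form of the formula above; the demand and price figures are illustrative assumptions, not measured market data.

```python
def cross_price_elasticity(qx0, qx1, py0, py1):
    """Midpoint elasticity: % change in demand for X / % change in price of Y."""
    pct_q = (qx1 - qx0) / ((qx1 + qx0) / 2)
    pct_p = (py1 - py0) / ((py1 + py0) / 2)
    return pct_q / pct_p

# Price of coffee rises from $4.00 to $5.00; tea demand rises from 100 to 115.
e = cross_price_elasticity(qx0=100, qx1=115, py0=4.00, py1=5.00)
kind = "substitutes" if e > 0 else "complements" if e < 0 else "independent"
print(f"E_xy = {e:.2f} -> {kind}")   # positive: tea and coffee are substitutes
```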
Types
Perfect and imperfect substitutes
Perfect substitutes
Perfect substitutes refer to a pair of goods with uses identical to one another. In that case, the utility of a combination of the two goods is an increasing function of the sum of the quantities of each good. That is, the more the consumer can consume in total quantity, the higher the level of utility achieved (see figure 3).
Perfect substitutes have a linear utility function and a constant marginal rate of substitution, see figure 3. If goods X and Y are perfect substitutes, any consumption bundle lying on the same indifference curve will give the consumer the same utility level. Let a consumption bundle be represented by (X,Y); then, a consumer of perfect substitutes would receive the same level of utility from (20,10) as from (30,0).
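A minimal sketch of this case: with the linear utility U(x, y) = x + y, the bundles (20,10) and (30,0) from the passage give equal utility, and rational demand collapses onto whichever good is cheaper. Prices and the budget below are illustrative assumptions.

```python
def utility(x, y):
    return x + y                     # one-for-one perfect substitutes

assert utility(20, 10) == utility(30, 0) == 30   # same indifference curve

def demand(budget, px, py):
    """Spend the whole budget on the cheaper good (ties broken toward Y)."""
    return (budget / px, 0.0) if px < py else (0.0, budget / py)

print(demand(budget=60.0, px=2.0, py=3.0))   # (30.0, 0.0): buy only good X
```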
Consumers of perfect substitutes base their rational decision-making process on prices only. Evidently, the consumer will choose the cheapest bundle to maximise their utility. If the prices of the goods differed, there would be no demand for the more expensive good. Producers and sellers of perfect substitute goods directly compete with each other, that is, they are known to be in direct price competition.
An example of perfect substitutes is butter from two different producers; the producer may be different but their purpose and usage are the same.
Perfect substitutes have a high cross-elasticity of demand. For example, if Country Crock and Imperial margarine have the same price listed for the same amount of spread, but one brand increases its price, its sales will fall by a certain amount. In response, the other brand's sales will increase by the same amount.
Imperfect substitutes
Imperfect substitutes, also known as close substitutes, have a lower level of substitutability and therefore exhibit variable marginal rates of substitution along the consumer indifference curve. Consumption points on the curve offer the same level of utility, but the compensation required depends on the starting point of the substitution. Unlike those of perfect substitutes, the indifference curves of imperfect substitutes are not linear, and the marginal rate of substitution differs for different combinations on the curve (see figure 4). Close substitute goods are similar products that target the same customer groups and satisfy the same needs, but have slight differences in characteristics. Sellers of close substitute goods are therefore in indirect competition with each other.
Beverages are a good example of imperfect substitutes. As the price of Coca-Cola rises, consumers might be expected to substitute toward Pepsi. However, many consumers prefer one brand over the other, and such consumers will not trade between the two one-to-one. Rather, a consumer who prefers Coca-Cola, for example, will be willing to exchange more Pepsi for less Coca-Cola; in other words, consumers who prefer Coca-Cola are willing to pay more for it.
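The varying trade-off can be made concrete with an assumed Cobb-Douglas utility U(x, y) = x^a · y^(1−a), one common way to model imperfect substitutes (the functional form and numbers here are illustrative, not from the article). Its marginal rate of substitution changes along an indifference curve, unlike the constant MRS of perfect substitutes:

```python
# Illustrative sketch with an assumed Cobb-Douglas utility
# U(x, y) = x**a * y**(1 - a). Its marginal rate of substitution,
# MRS = (a / (1 - a)) * (y / x), varies along an indifference curve.
def mrs_cobb_douglas(x, y, a=0.5):
    return (a / (1 - a)) * (y / x)

# Bundles on the same indifference curve x * y = 9 (for a = 0.5):
for x, y in [(1, 9), (3, 3), (9, 1)]:
    print(f"bundle ({x}, {y}): MRS = {mrs_cobb_douglas(x, y):.2f}")
# MRS falls from 9.00 to 1.00 to 0.11 -- the exchange rate the consumer
# accepts depends on the starting point of the substitution.
```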
The degree to which a good has a perfect substitute depends on how specifically the good is defined. The broader the definition of a good, the easier it is for it to have a substitute; a narrowly defined good is unlikely to have one. For example, different types of cereal are generally substitutes for each other, but Rice Krispies cereal, a very narrowly defined good compared with cereal in general, has few, if any, substitutes. To illustrate this further, while both Rice Krispies and Froot Loops are types of cereal, they are imperfect substitutes, as they are very different kinds of cereal. However, a generic brand of Rice Krispies, such as Malt-o-Meal's Crispy Rice, would be a perfect substitute for Kellogg's Rice Krispies.
Imperfect substitutes have a lower cross elasticity of demand. If two brands of cereal have the same price before one raises its price, we can expect sales of that brand to fall. However, sales of the other brand will not rise by the same amount, since many types of cereal are equally substitutable for the brand that raised its price; consumer preferences determine which brands pick up the losses.
Gross and net substitutes
If two goods are imperfect substitutes, economists can distinguish them as gross substitutes or net substitutes. Good X is a gross substitute for good Y if, when the price of Y increases, spending on X increases, as described above. Gross substitutability is not a symmetric relationship: even if X is a gross substitute for Y, it may not be true that Y is a gross substitute for X.
Two goods are net substitutes when the demand for good X increases as the price of good Y rises, holding the utility derived by the consumer constant.
Goods X and Y are said to be net substitutes if

∂h_x/∂p_y > 0,

where h_x denotes the Hicksian (compensated) demand for good X. That is, goods are net substitutes if they are substitutes for each other under a constant utility function. Net substitutability has the desirable property that, unlike gross substitutability, it is symmetric:

∂h_x/∂p_y = ∂h_y/∂p_x.

That is, if good X is a net substitute for good Y, then good Y is also a net substitute for good X. The symmetry of net substitution is both intuitively appealing and theoretically useful.
A common misconception is that competitive equilibrium cannot exist for products that are net substitutes. In fact, goods that are gross substitutes are usually net substitutes as well, so most gross-substitute preferences that support a competitive equilibrium are also examples of net substitutes doing the same. The confusion can be dispelled by noting that net substitutability is defined in a purely hypothetical, compensated setting, in which a fictitious entity intervenes to shut down the income effect and hold utility constant. No such intervention takes place in a competitive equilibrium, which is decentralized: producers and consumers jointly determine and arrive at the equilibrium price.
Within-category and cross-category substitutes
Within-category substitutes are goods that are members of the same taxonomic category such as goods sharing common attributes (e.g., chocolate, chairs, station wagons).
Cross-category substitutes are goods that are members of different taxonomic categories but can satisfy the same goal. A person who wants chocolate but cannot acquire it, for example, might instead buy ice cream to satisfy the goal of having a dessert.
Whether goods are cross-category or within-category substitutes influences the utility derived by consumers. In the case of food, people exhibit a strong preference for within-category substitutes over cross-category substitutes, despite cross-category substitutes being more effective at satisfying customers' needs. Across ten sets of different foods, 79.7% of research participants believed that a within-category substitute would better satisfy their craving for a food they could not have than a cross-category substitute. Unable to acquire a desired Godiva chocolate, for instance, a majority reported that they would rather eat a store-brand chocolate (a within-category substitute) than a chocolate-chip granola bar (a cross-category substitute). This preference for within-category food substitutes appears, however, to be misguided: because within-category food substitutes are more similar to the missing food, their inferiority to it is more noticeable. This creates a negative contrast effect and makes within-category substitutes less satisfying than cross-category substitutes, unless the quality is comparable.
Unit-demand goods
Unit-demand goods are categories of goods from which the consumer wants only a single item. If the consumer has two unit-demand items, then their utility is the maximum of the utilities gained from each item. For example, consider a consumer who wants a means of transportation, which may be either a car or a bicycle, and who prefers the car. If the consumer has both a car and a bicycle, only the car is used. Unit-demand goods are therefore always substitutes.
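A minimal sketch of this "maximum of utilities" rule, with hypothetical utility numbers:

```python
# Minimal sketch: a unit-demand consumer's utility from a set of items is
# the maximum of the individual item utilities, since only one item is used.
def unit_demand_utility(item_utilities):
    return max(item_utilities.values(), default=0)

owned = {"car": 10, "bicycle": 4}
print(unit_demand_utility(owned))  # 10 -- owning the bicycle too adds nothing
```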
In perfect and monopolistic market structures
Perfect competition
Perfect competition is based solely on firms facing equal conditions and the continuous pursuit of those conditions, regardless of market size. One of the requirements for perfect competition is that the goods of competing firms be perfect substitutes. Products sold by different firms have minimal differences in capabilities, features, and pricing, so buyers cannot distinguish between products based on physical attributes or intangible value. When this condition is not satisfied, the market is characterized by product differentiation. A perfectly competitive market is a theoretical benchmark that does not exist in reality. However, perfect substitutability is significant in the era of deregulation, because several competing providers (e.g., electricity suppliers) typically sell the same good, resulting in aggressive price competition.
Monopolistic competition
Monopolistic competition characterizes an industry in which many firms offer products or services that are close, but not perfect, substitutes. Monopolistic firms have little power to curtail supply or raise prices to increase profits, so they try to differentiate their products through branding and marketing to capture above-market returns. Some common examples of monopolistic industries include gasoline, milk, Internet connectivity (ISP services), electricity, telephony, and airline tickets. Since firms offer similar products, demand is highly elastic in monopolistic competition. Because demand is very responsive to price changes, consumers will switch to the cheapest alternative when a firm raises its price, subject to their switching costs: essentially, what consumers must give up to change products.
Market effects
Michael Porter developed "Porter's Five Forces" to analyse an industry's attractiveness and likely profitability. Alongside competitive rivalry, buyer power, supplier power and the threat of new entry, Porter identifies the threat of substitution as one of the five important industry forces. The threat of substitution refers to the likelihood of customers finding alternative products to purchase. When close substitutes are available, customers can easily and quickly forgo a company's product for an alternative, weakening the company's position and threatening long-term profitability. The risk of substitution is considered high when:
Customers face low switching costs between available substitutes.
The quality and performance offered by a close substitute are of a higher standard.
Customers of a product have low loyalty towards the brand or product, hence being more sensitive to price changes.
Additionally, substitute goods have a large impact on markets, consumers and sellers through the following factors:
Markets characterised by close or perfect substitutes experience great price volatility. This volatility hurts producers' profits, since higher profits can be earned in markets with fewer substitute products. With perfect substitutes, profits are driven down to zero, as in the perfectly competitive market equilibrium.
As a result of the intense competition caused by the availability of substitute goods, low-quality products can arise: since prices are reduced to capture a larger share of the market, firms try to reduce their use of resources, which in turn lowers their costs.
In a market with close or perfect substitutes, customers have a wide range of products to choose from. As the number of substitutes increases, so does the probability that each consumer finds the product that is right for them. That is, consumers can reach a higher overall utility level thanks to the availability of substitute products.
See also
Competitive equilibrium
Currency substitution
Fungible
List of economics topics
References
Goods (economics)
Consumer theory
Perfect competition
Utility function types