id | url | text | source | categories | token_count
|---|---|---|---|---|---|
50,650,284 | https://en.wikipedia.org/wiki/Boolean%20Pythagorean%20triples%20problem | The Boolean Pythagorean triples problem is a problem from Ramsey theory about whether the positive integers can be colored red and blue so that no Pythagorean triples consist of all red or all blue members. The Boolean Pythagorean triples problem was solved by Marijn Heule, Oliver Kullmann and Victor W. Marek in May 2016 through a computer-assisted proof.
Statement
The problem asks if it is possible to color each of the positive integers either red or blue, so that no Pythagorean triple of integers a, b, c satisfying $a^2 + b^2 = c^2$ has all members the same color.
For example, in the Pythagorean triple 3, 4, and 5 ($3^2 + 4^2 = 5^2$), if 3 and 4 are colored red, then 5 must be colored blue.
Solution
Marijn Heule, Oliver Kullmann, and Victor W. Marek demonstrated that it is possible to partition the set {1, ..., 7824} into two subsets such that neither subset contains a Pythagorean triple. However, such a partition is not possible for the set {1, ..., 7825}. This result was achieved by analyzing a vast number of possible colorings, which initially amounted to about $2^{7825}$ combinations. The researchers reduced this to around a trillion cases, which were then tested using a SAT solver. The computational process required approximately 4 CPU-years over two days on the Stampede supercomputer at the Texas Advanced Computing Center. The resulting proof, initially 200 terabytes in size, was compressed to 68 gigabytes. The findings were published in the SAT 2016 conference paper "Solving and Verifying the Boolean Pythagorean Triples problem via Cube-and-Conquer", which received the best paper award.
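The underlying constraint is easy to express directly. Below is a minimal brute-force sketch in Python (illustrative only; the actual proof used a SAT encoding with one Boolean variable per integer and, for each triple, one clause forbidding all-red and one forbidding all-blue, solved by cube-and-conquer):

```python
def pythagorean_triples(n):
    """All triples (a, b, c) with a*a + b*b == c*c and c <= n."""
    squares = {i * i: i for i in range(1, n + 1)}
    triples = []
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = squares.get(a * a + b * b)
            if c is not None:
                triples.append((a, b, c))
    return triples

def is_valid_coloring(color, n):
    """color maps each integer 1..n to 'red' or 'blue'; returns True
    if no Pythagorean triple within 1..n is monochromatic."""
    return all(len({color[a], color[b], color[c]}) > 1
               for a, b, c in pythagorean_triples(n))
```

Exhaustively searching all $2^n$ colorings this way is hopeless at n = 7825, which is why the proof relied on massive case-splitting and SAT solving.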
Prize
In the 1980s Ronald Graham offered a $100 prize for the solution of the problem, which has now been awarded to Marijn Heule.
Generalizations
As of 2018, the problem is still open for more than 2 colors, that is, whether there exists a k-coloring (k ≥ 3) of the positive integers such that no Pythagorean triple is monochromatic.
See also
List of long mathematical proofs
References
Computer-assisted proofs
Mathematical problems
Pythagorean theorem
Ramsey theory | Boolean Pythagorean triples problem | Mathematics | 452 |
11,421,635 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20psi18S-1854 | In molecular biology, Small nucleolar RNA psi18S-1854 (also known as snoRNA psi18S-1854) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a 'guide RNA'.
This Drosophila-specific snoRNA is a member of the H/ACA box class of snoRNA and is predicted to be responsible for guiding the modification of uridines 1854 and 1937 to pseudouridine in Drosophila 18S rRNA.
References
External links
Small nuclear RNA | Small nucleolar RNA psi18S-1854 | Chemistry | 182 |
169,324 | https://en.wikipedia.org/wiki/Logical%20equivalence | In logic and mathematics, statements $p$ and $q$ are said to be logically equivalent if they have the same truth value in every model. The logical equivalence of $p$ and $q$ is sometimes expressed as $p \equiv q$, $p :: q$, $\textsf{E}pq$, or $p \iff q$, depending on the notation being used.
However, these symbols are also used for material equivalence, so proper interpretation would depend on the context. Logical equivalence is different from material equivalence, although the two concepts are intrinsically related.
Logical equivalences
In logic, many common logical equivalences exist and are often listed as laws or properties. The following tables illustrate some of these.
General logical equivalences
Logical equivalences involving conditional statements
Logical equivalences involving biconditionals
Where $\oplus$ represents XOR.
Examples
In logic
The following statements are logically equivalent:
If Lisa is in Denmark, then she is in Europe (a statement of the form $p \to q$).
If Lisa is not in Europe, then she is not in Denmark (a statement of the form $\neg q \to \neg p$).
Syntactically, (1) and (2) are derivable from each other via the rules of contraposition and double negation. Semantically, (1) and (2) are true in exactly the same models (interpretations, valuations); namely, those in which either Lisa is in Denmark is false or Lisa is in Europe is true.
(Note that in this example, classical logic is assumed. Some non-classical logics do not deem (1) and (2) to be logically equivalent.)
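The semantic definition can be checked mechanically for propositional formulas by enumerating every model, i.e., every assignment of truth values to the variables. A small Python sketch (function names are illustrative):

```python
from itertools import product

def logically_equivalent(f, g, n_vars):
    """True iff formulas f and g (given as Boolean functions) have the
    same truth value in every model of the n_vars variables."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n_vars))

# (1) "If Lisa is in Denmark, then she is in Europe":   p -> q
# (2) its contrapositive:                               not q -> not p
implication    = lambda p, q: (not p) or q
contrapositive = lambda p, q: q or (not p)   # not(not q) or (not p)

print(logically_equivalent(implication, contrapositive, 2))  # True
```

This brute-force check implements the classical, truth-functional semantics assumed in the note above.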
Relation to material equivalence
Logical equivalence is different from material equivalence. Formulas $p$ and $q$ are logically equivalent if and only if the statement of their material equivalence ($p \leftrightarrow q$) is a tautology.
The material equivalence of $p$ and $q$ (often written as $p \leftrightarrow q$) is itself another statement in the same object language as $p$ and $q$. This statement expresses the idea "$p$ if and only if $q$". In particular, the truth value of $p \leftrightarrow q$ can change from one model to another.
On the other hand, the claim that two formulas are logically equivalent is a statement in the metalanguage, which expresses a relationship between two statements $p$ and $q$. The statements are logically equivalent if, in every model, they have the same truth value.
See also
Entailment
Equisatisfiability
If and only if
Logical biconditional
Logical equality
≡ the iff symbol (U+2261 IDENTICAL TO)
∷ the a is to b as c is to d symbol (U+2237 PROPORTION)
⇔ the double struck biconditional (U+21D4 LEFT RIGHT DOUBLE ARROW)
↔ the bidirectional arrow (U+2194 LEFT RIGHT ARROW)
References
Mathematical logic
Metalogic
Logical consequence
Equivalence (mathematics) | Logical equivalence | Mathematics | 534 |
1,075,005 | https://en.wikipedia.org/wiki/Exergy | Exergy, often referred to as "available energy" or "useful work potential", is a fundamental concept in the field of thermodynamics and engineering. It plays a crucial role in understanding and quantifying the quality of energy within a system and its potential to perform useful work. Exergy analysis has widespread applications in various fields, including energy engineering, environmental science, and industrial processes.
From a scientific and engineering perspective, second-law-based exergy analysis is valuable because it provides a number of benefits over energy analysis alone. These benefits include the basis for determining energy quality (or exergy content), enhancing the understanding of fundamental physical phenomena, and improving design, performance evaluation and optimization efforts. In thermodynamics, the exergy of a system is the maximum useful work that can be produced as the system is brought into equilibrium with its environment by an ideal process. The specification of an "ideal process" allows the determination of "maximum work" production. From a conceptual perspective, exergy is the "ideal" potential of a system to do work or cause a change as it achieves equilibrium with its environment. Exergy is also known as "availability". Exergy is non-zero when there is dis-equilibrium between the system and its environment, and exergy is zero when equilibrium is established (the state of maximum entropy for the system plus its environment).
Determining exergy was one of the original goals of thermodynamics. The term "exergy" was coined in 1956 by Zoran Rant (1904–1972) by using the Greek ex and ergon, meaning "from work", but the concept had been earlier developed by J. Willard Gibbs (the namesake of Gibbs free energy) in 1873.
Energy is neither created nor destroyed, but is simply converted from one form to another (see First law of thermodynamics). In contrast to energy, exergy is always destroyed when a process is non-ideal or irreversible (see Second law of thermodynamics). To illustrate, when someone states that "I used a lot of energy running up that hill", the statement contradicts the first law. Although the energy is not consumed, intuitively we perceive that something is. The key point is that energy has quality, or measures of usefulness, and it is this energy quality (or exergy content) that is consumed or destroyed. This occurs because all real processes produce entropy, and the destruction of exergy, or the rate of "irreversibility", is proportional to this entropy production (Gouy–Stodola theorem). Entropy production may be calculated as the net increase in entropy of the system together with its surroundings; it is due to things such as friction, heat transfer across a finite temperature difference, and mixing. In distinction from "exergy destruction", "exergy loss" is the transfer of exergy across the boundaries of a system, such as with mass or heat loss, where the exergy flow or transfer is potentially recoverable. The energy quality or exergy content of these mass and energy losses is low in many situations or applications, where exergy content is defined as the ratio of exergy to energy on a percentage basis. For example, while the exergy content of electrical work produced by a thermal power plant is 100%, the exergy content of low-grade heat rejected by the power plant, at, say, 41 degrees Celsius, relative to an environment temperature of 25 degrees Celsius, is only 5%.
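The 5% figure quoted above is just the Carnot factor $1 - T_o/T_{source}$ evaluated in absolute temperature. A minimal sketch (the function name and inputs are illustrative):

```python
def exergy_content_of_heat(t_source_c, t_env_c):
    """Fraction of heat at t_source convertible to work against an
    environment at t_env (the Carnot factor, temperatures in kelvins)."""
    t_source = t_source_c + 273.15
    t_env = t_env_c + 273.15
    return 1.0 - t_env / t_source

print(exergy_content_of_heat(41.0, 25.0))   # ~0.051, i.e. the ~5% above
print(exergy_content_of_heat(500.0, 25.0))  # ~0.61: high-grade heat
```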
Definitions
Exergy is a combination property of a system and its environment because it depends on the state of both and is a consequence of dis-equilibrium between them. Exergy is neither a thermodynamic property of matter nor a thermodynamic potential of a system. Exergy and energy always have the same units, and the joule (symbol: J) is the unit of energy in the International System of Units (SI). The internal energy of a system is always measured from a fixed reference state and is therefore always a state function. Some authors define the exergy of a system such that it changes when the environment changes, in which case it is not a state function. Other writers prefer a slightly alternate definition of the available energy or exergy of a system in which the environment is firmly defined, as an unchangeable absolute reference state, and in this alternate definition exergy becomes a property of the state of the system alone.
However, from a theoretical point of view, exergy may be defined without reference to any environment. If the intensive properties of different finitely extended elements of a system differ, there is always the possibility to extract mechanical work from the system. Yet, with such an approach one has to abandon the requirement that the environment is large enough relative to the "system" that its intensive properties, such as temperature, are unchanged by its interaction with the system. So that exergy is defined in an absolute sense, it will be assumed in this article, unless otherwise stated, that the environment's intensive properties are unchanged by its interaction with the system.
For a heat engine, the exergy can be simply defined in an absolute sense, as the energy input times the Carnot efficiency, assuming the low-temperature heat reservoir is at the temperature of the environment. Since many systems can be modeled as a heat engine, this definition can be useful for many applications.
Terminology
The term exergy is also used, by analogy with its physical definition, in information theory related to reversible computing. Exergy is also synonymous with available energy, exergic energy, essergy (considered archaic), utilizable energy, available useful work, maximum (or minimum) work, maximum (or minimum) work content, reversible work, ideal work, availability or available work.
Implications
The exergy destruction of a cycle is the sum of the exergy destruction of the processes that compose that cycle. The exergy destruction of a cycle can also be determined without tracing the individual processes by considering the entire cycle as a single process and using one of the exergy destruction equations.
Examples
For two thermal reservoirs at temperatures $T_H$ and $T_C < T_H$, as considered by Carnot, the exergy is the work W that can be done by a reversible engine. Specifically, with $Q_H$ the heat provided by the hot reservoir, Carnot's analysis gives $W/Q_H = (T_H - T_C)/T_H$. Although exergy or maximum work is determined by conceptually utilizing an ideal process, it is a property of a system in a given environment. Exergy analysis is not merely for reversible cycles, but for all cycles (including non-cyclic or non-ideal processes), and indeed for all thermodynamic processes.
As an example, consider the non-cyclic process of expansion of an ideal gas. For free expansion in an isolated system, the energy and temperature do not change, so by energy conservation no work is done. On the other hand, for expansion done against a moveable wall that always matched the (varying) pressure of the expanding gas (so the wall develops negligible kinetic energy), with no heat transfer (adiabatic wall), the maximum work would be done. This corresponds to the exergy. Thus, in terms of exergy, Carnot considered the exergy for a cyclic process with two thermal reservoirs (fixed temperatures). Just as the work done depends on the process, so the exergy depends on the process, reducing to Carnot's result for Carnot's case.
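To make the contrast concrete, the following sketch computes the work of the reversible adiabatic expansion described above for an ideal monatomic gas (a hedged illustration; the heat-capacity relations are standard, the inputs are arbitrary):

```python
def reversible_adiabatic_work(n_mol, t1_k, v1, v2, gamma=5.0 / 3.0):
    """Work done by an ideal gas expanding reversibly and adiabatically
    from v1 to v2: W = n*Cv*(T1 - T2), with T2 = T1*(v1/v2)**(gamma - 1)."""
    R = 8.314462  # J/(mol K)
    cv = R / (gamma - 1.0)
    t2_k = t1_k * (v1 / v2) ** (gamma - 1.0)
    return n_mol * cv * (t1_k - t2_k)

# Doubling the volume of 1 mol at 300 K extracts ~1.4 kJ of work;
# free expansion between the same end volumes extracts 0 J.
print(reversible_adiabatic_work(1.0, 300.0, 1.0, 2.0))
```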
W. Thomson (from 1892, Lord Kelvin) was, as early as 1849, exercised by what he called "lost energy", which appears to be the same as "destroyed energy" and what has been called "anergy". In 1874 he wrote that "lost energy" is the same as the energy dissipated by, e.g., friction, electrical conduction (electric-field-driven charge diffusion), heat conduction (temperature-driven thermal diffusion), viscous processes (transverse momentum diffusion) and particle diffusion (ink in water). Kelvin did not, however, indicate how to compute the "lost energy"; this awaited the 1931 and 1932 works of Onsager on irreversible processes.
Mathematical description
An application of the second law of thermodynamics
Exergy uses system boundaries in a way that is unfamiliar to many. We imagine the presence of a Carnot engine between the system and its reference environment even though this engine does not exist in the real world. Its only purpose is to measure the results of a "what-if" scenario to represent the most efficient work interaction possible between the system and its surroundings.
If a real-world reference environment is chosen that behaves like an unlimited reservoir that remains unaltered by the system, then Carnot's speculation about the consequences of a system heading towards equilibrium with time is addressed by two equivalent mathematical statements. Let B, the exergy or available work, decrease with time, and $S_{\text{total}}$, the entropy of the system and its reference environment enclosed together in a larger isolated system, increase with time:

$$\frac{dB}{dt} \leq 0 \qquad \text{and} \qquad \frac{dS_{\text{total}}}{dt} \geq 0$$
For macroscopic systems (above the thermodynamic limit), these statements are both expressions of the second law of thermodynamics if the following expression is used for exergy:

$$B = U + P_R V - T_R S - \sum_i \mu_{i,R} N_i$$

where the extensive quantities for the system are: U = internal energy, V = volume, S = entropy, and $N_i$ = moles of component i. The intensive quantities for the surroundings are: $P_R$ = pressure, $T_R$ = temperature, $\mu_{i,R}$ = chemical potential of component i. Indeed the total entropy of the universe reads:

$$S_{\text{total}} = S - \frac{U + P_R V - \sum_i \mu_{i,R} N_i}{T_R} = -\frac{B}{T_R}$$

the second term being the entropy of the surroundings to within a constant.
Individual terms also often have names attached to them: $P_R V$ is called "available PV work", $T_R S$ is called "entropic loss" or "heat loss", and the final term $\sum_i \mu_{i,R} N_i$ is called "available chemical energy".
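As a direct transcription of the expression above (variable names are illustrative, SI units assumed):

```python
def exergy_B(U, V, S, N, P_R, T_R, mu_R):
    """B = U + P_R*V - T_R*S - sum_i mu_R[i]*N[i].
    U [J], V [m^3], S [J/K], N [mol] describe the system;
    P_R [Pa], T_R [K], mu_R [J/mol] describe the surroundings."""
    return U + P_R * V - T_R * S - sum(mu * n for mu, n in zip(mu_R, N))
```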
Other thermodynamic potentials may be used to replace internal energy so long as proper care is taken in recognizing which natural variables correspond to which potential. For the recommended nomenclature of these potentials, see (Alberty, 2001). The exergy expression above is useful for processes where system volume, entropy, and the number of moles of various components change, because internal energy is also a function of these variables and no others.
An alternative definition of internal energy does not separate available chemical potential from U. This form is useful (when substituted into the exergy expression) for processes where system volume and entropy change, but no chemical reaction occurs.
In this case, a given set of chemicals at a given entropy and volume will have a single numerical value for this thermodynamic potential. A multi-state system may complicate or simplify the problem because the Gibbs phase rule predicts that intensive quantities will no longer be completely independent from each other.
A historical and cultural tangent
In 1848, William Thomson, 1st Baron Kelvin, asked (and immediately answered) the question
Is there any principle on which an absolute thermometric scale can be founded? It appears to me that Carnot's theory of the motive power of heat enables us to give an affirmative answer.
With the benefit of the hindsight contained in the exergy expression above, we are able to understand the historical impact of Kelvin's idea on physics. Kelvin suggested that the best temperature scale would describe a constant ability for a unit of temperature in the surroundings to alter the available work from Carnot's engine.
Rudolf Clausius recognized the presence of a proportionality constant in Kelvin's analysis and gave it the name entropy in 1865 from the Greek for "transformation" because it quantifies the amount of energy lost during the conversion from heat to work. The available work from a Carnot engine is at its maximum when the surroundings are at a temperature of absolute zero.
Physicists then, as now, often look at a property with the word "available" or "utilizable" in its name with a certain unease. The idea of what is available raises the question of "available to what?" and raises a concern about whether such a property is anthropocentric. Laws derived using such a property may not describe the universe but instead, describe what people wish to see.
The field of statistical mechanics (beginning with the work of Ludwig Boltzmann in developing the Boltzmann equation) relieved many physicists of this concern. From this discipline, we now know that macroscopic properties may all be determined from properties on a microscopic scale where entropy is more "real" than temperature itself (see Thermodynamic temperature). Microscopic kinetic fluctuations among particles cause entropic loss, and this energy is unavailable for work because these fluctuations occur randomly in all directions. The anthropocentric act is taken, in the eyes of some physicists and engineers today, when someone draws a hypothetical boundary; in effect, one says: "This is my system; what occurs beyond it is surroundings." In this context, exergy is sometimes described as an anthropocentric property, both by some who use it and by some who don't. However, exergy is based on the dis-equilibrium between a system and its environment, so it is very real and necessary to define the system distinctly from its environment. It can be agreed that entropy is generally viewed as a more fundamental property of matter than exergy.
A potential for every thermodynamic situation
In addition to the internal-energy form above, other thermodynamic potentials are frequently used to determine exergy. For a given set of chemicals at a given entropy and pressure, enthalpy H is used in the expression:

$$B = H - T_R S$$

For a given set of chemicals at a given temperature and volume, Helmholtz free energy A is used in the expression:

$$B = A + (T - T_R)\,S + P_R V$$

For a given set of chemicals at a given temperature and pressure, Gibbs free energy G is used in the expression:

$$B = G + (T - T_R)\,S$$

where $TS$ is evaluated at the isothermal system temperature ($T$), and the exergy is defined with respect to the isothermal temperature of the system's environment ($T_R$). The exergy is the energy reduced by the product of the entropy times the environment temperature $T_R$, which is the slope or partial derivative of the internal energy with respect to entropy in the environment. That is, higher entropy reduces the exergy or free energy available relative to the energy level $U$.
Work can be produced from this energy, such as in an isothermal process, but any entropy generation during the process will cause the destruction of exergy (irreversibility) and the reduction of these thermodynamic potentials. Further, exergy losses can occur if mass and energy are transferred out of the system at non-ambient or elevated temperature, pressure or chemical potential. Exergy losses are potentially recoverable, though, because the exergy has not been destroyed, as occurs in waste heat recovery systems (although the energy quality or exergy content is typically low). As a special case, an isothermal process operating at ambient temperature will have no thermally related exergy losses.
Exergy Analysis involving Radiative Heat Transfer
All matter emits radiation continuously as a result of its non-zero (absolute) temperature. This emitted energy flow is proportional to the material's temperature raised to the fourth power. As a result, any radiation conversion device that seeks to absorb and convert radiation (while reflecting a fraction of the incoming source radiation) inherently emits its own radiation. Also, given that reflected and emitted radiation can occupy the same direction or solid angle, the entropy flows, and as a result the exergy flows, are generally not independent. The entropy and exergy balance equations for a control volume (CV), re-stated to correctly apply to situations involving radiative transfer, are expressed as

$$\frac{dS_{CV}}{dt} = \sum_j \frac{\dot{Q}_j}{T_j} + \dot{S}_{rad} + \sum_{in} \dot{m}s - \sum_{out} \dot{m}s + \dot{S}_{gen}$$

where $\dot{S}_{gen}$ (or $\dot{\sigma}$) denotes entropy production within the control volume, and

$$\frac{dX}{dt} = \sum_j \left(1 - \frac{T_o}{T_j}\right)\dot{Q}_j + \dot{X}_{rad} - \left(\dot{W} - P_o \frac{dV_{CV}}{dt}\right) + \sum_{in} \dot{m}\psi - \sum_{out} \dot{m}\psi - \dot{X}_{destroyed}$$

This rate equation for the exergy within an open system X (J) takes into account the exergy transfer rates across the system boundary by heat transfer ($\dot{Q}_j$ for conduction and convection, and $\dot{X}_{rad}$ for radiative fluxes), by mechanical or electrical work transfer ($\dot{W}$), and by mass transfer ($\dot{m}\psi$), as well as taking into account the exergy destruction ($\dot{X}_{destroyed}$) that occurs within the system due to irreversibilities or non-ideal processes. Note that chemical exergy, kinetic energy, and gravitational potential energy have been excluded for simplicity.
The exergy irradiance or flux M, and the exergy radiance N (where M = πN for isotropic radiation), depend on the spectral and directional distribution of the radiation (for example, see the next section on 'Exergy Flux of Radiation with an Arbitrary Spectrum'). Sunlight can be crudely approximated as blackbody radiation or, more accurately, as graybody radiation. Note that although a graybody spectrum looks similar to a blackbody spectrum, its entropy and exergy are very different.
Petela determined that the exergy of isotropic blackbody radiation is given by the expression

$$X = \frac{4\sigma V T^4}{c}\left(1 - \frac{4}{3}x + \frac{1}{3}x^4\right)$$

where X (J) is the exergy within the enclosed system, σ is the Stefan–Boltzmann constant, c is the speed of light, V is the volume occupied by the enclosed radiation system or void, T is the material emission temperature, $T_o$ is the environmental temperature, and x is the dimensionless temperature ratio $T_o/T$.
However, for decades this result was contested in terms of its relevance to the conversion of radiation fluxes, and in particular, solar radiation. For example, Bejan stated that “Petela’s efficiency is no more than a convenient, albeit artificial way, of non-dimensionalizing the calculated work output” and that Petela’s efficiency “is not a ‘conversion efficiency.’ ” However, it has been shown that Petela’s result represents the exergy of blackbody radiation. This was done by resolving a number of issues, including that of inherent irreversibility, defining the environment in terms of radiation, the effect of inherent emission by the conversion device and the effect of concentrating source radiation.
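Petela's result is a one-line computation once the temperature ratio is known. The sketch below (assuming an environment temperature of 300 K) reproduces, for the solar emission temperature quoted later in this article, a value close to the ~93% exergy content cited for the AM0 spectrum:

```python
def petela_factor(t_emission_k, t_env_k):
    """Exergy-to-energy ratio of blackbody radiation:
    psi = 1 - (4/3)*x + (1/3)*x**4, with x = To/T."""
    x = t_env_k / t_emission_k
    return 1.0 - (4.0 / 3.0) * x + (x ** 4) / 3.0

print(petela_factor(5762.0, 300.0))   # ~0.931 for sunlight
print(petela_factor(314.15, 298.15))  # ~0.005: low-grade heat radiates little exergy
```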
Exergy Flux of Radiation with an Arbitrary Spectrum (including Sunlight)
In general, terrestrial solar radiation has an arbitrary non-blackbody spectrum. Ground-level spectra can vary greatly due to reflection, scattering and absorption in the atmosphere, and the emission spectra of thermal radiation in engineering systems can vary widely as well.
In determining the exergy of radiation with an arbitrary spectrum, it must be considered whether reversible or ideal conversion (zero entropy production) is possible. It has been shown that reversible conversion of blackbody radiation fluxes across an infinitesimal temperature difference is theoretically possible. Such reversible conversion is theoretically achievable only because equilibrium can exist between blackbody radiation and matter. Non-blackbody radiation, by contrast, cannot even exist in equilibrium with itself, nor with its own emitting material.
Unlike blackbody radiation, non-blackbody radiation cannot exist in equilibrium with matter, so it appears likely that the interaction of non-blackbody radiation with matter is always an inherently irreversible process. For example, an enclosed non-blackbody radiation system (such as a void inside a solid mass) is unstable and will spontaneously equilibrate to blackbody radiation unless the enclosure is perfectly reflecting (i.e., unless there is no thermal interaction of the radiation with its enclosure, which is not possible in actual, or real, non-ideal systems). Consequently, a cavity initially devoid of thermal radiation inside a non-blackbody material will spontaneously and rapidly (due to the high velocity of the radiation), through a series of absorption and emission interactions, become filled with blackbody radiation rather than non-blackbody radiation.
The approaches by Petela and Karlsson both implicitly assume that reversible conversion of non-blackbody radiation is theoretically possible, without addressing or considering the issue. Exergy is not a property of the system alone; it is a property of both the system and its environment. Thus, it is of key importance that non-blackbody radiation cannot exist in equilibrium with matter, indicating that the interaction of non-blackbody radiation with matter is an inherently irreversible process.
Based on the inherent irreversibility of non-blackbody radiation conversion, the exergy flux (irradiance) of radiation with an arbitrary spectrum can be expressed as a function of only the energy flux or irradiance and the environment temperature $T_o$. A corresponding expression gives the exergy flux of graybody radiation.
As one would expect, the exergy flux of non-blackbody radiation reduces to the result for blackbody radiation when emissivity is equal to one.
Note that the exergy flux of graybody radiation can be a small fraction of the energy flux. For example, for one representative pair of emission and environment temperatures, the ratio of exergy flux to energy flux for graybody radiation with emissivity 0.5 is 40.0%. That is, a maximum of only 40% of the graybody energy flux can be converted to work in this case (the graybody energy flux itself being only 50% of the blackbody energy flux at the same emission temperature). Graybody radiation has a spectrum that looks similar to the blackbody spectrum, but its entropy and exergy flux cannot be accurately approximated as those of blackbody radiation with the same emission temperature. They can, however, be reasonably approximated by the entropy flux of blackbody radiation with the same energy flux (and hence a lower emission temperature).
Blackbody radiation has the highest entropy-to-energy ratio of all radiation with the same energy flux, but the lowest entropy-to-energy ratio, and the highest exergy content, of all radiation with the same emission temperature. For example, the exergy content of graybody radiation is lower than that of blackbody radiation with the same emission temperature and decreases as emissivity decreases. For the example above, the exergy flux of the blackbody radiation source is 52.5% of the energy flux, compared to 40.0% for graybody radiation with emissivity 0.5, or 15.5% at a still lower emissivity.
The Exergy Flux of Sunlight
In addition to the production of power directly from sunlight, solar radiation provides most of the exergy for processes on Earth, including processes that sustain living systems directly, as well as all fuels and energy sources that are used for transportation and electric power production (directly or indirectly). The main exceptions are nuclear fission power plants and geothermal energy (the latter due to natural radioactive decay). Solar energy is, for the most part, thermal radiation from the Sun with an emission temperature near 5762 K, but it also includes small amounts of higher-energy radiation from the fusion reaction or from higher thermal emission temperatures within the Sun. The source of most energy on Earth is thus nuclear in origin.
The figure below depicts typical solar radiation spectra under clear sky conditions for AM0 (extraterrestrial solar radiation), AM1 (terrestrial solar radiation with solar zenith angle of 0 degrees) and AM4 (terrestrial solar radiation with solar zenith angle of 75.5 degrees). The solar spectrum at sea level (terrestrial solar spectrum) depends on a number of factors including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. These spectra are for relatively clear air (α = 1.3, β = 0.04) assuming a U.S. standard atmosphere with 20 mm of precipitable water vapor and 3.4 mm of ozone. The figure shows the spectral energy irradiance (W/m²·μm), which does not provide information regarding the directional distribution of the solar radiation. The exergy content of the solar radiation, assuming that it is subtended by the solid angle of the ball of the Sun (no circumsolar), is 93.1%, 92.3% and 90.8%, respectively, for the AM0, AM1 and AM4 spectra.
The exergy content of terrestrial solar radiation is also reduced because of the diffuse component caused by the complex interaction of solar radiation, originally in a very small solid angle beam, with material in the Earth's atmosphere. The characteristics and magnitude of diffuse terrestrial solar radiation depend on a number of factors, as mentioned, including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. Solar radiation under clear sky conditions exhibits a maximum intensity towards the Sun (circumsolar radiation) but also exhibits an increase in intensity towards the horizon (horizon brightening). In contrast, for opaque overcast skies the solar radiation can be completely diffuse, with a maximum intensity in the direction of the zenith and monotonically decreasing towards the horizon. The magnitude of the diffuse component generally varies with frequency, being highest in the ultraviolet region.
The dependence of the exergy content on directional distribution can be illustrated by considering, for example, the AM1 and AM4 terrestrial spectrums depicted in the figure, with the following simplified cases of directional distribution:
• For AM1: 80% of the solar radiation is contained in the solid angle subtended by the Sun, 10% is contained and isotropic in a solid angle 0.008 sr (this field of view includes circumsolar radiation), while the remaining 10% of the solar radiation is diffuse and isotropic in the solid angle 2π sr.
• For AM4: 65% of the solar radiation is contained in the solid angle subtended by the Sun, 20% of the solar radiation is contained and isotropic in a solid angle 0.008 sr, while the remaining 15% of the solar radiation is diffuse and isotropic in the solid angle 2π sr. Note that when the Sun is low in the sky the diffuse component can be the dominant part of the incident solar radiation.
For these cases of directional distribution, the exergy content of the terrestrial solar radiation for the AM1 and AM4 spectra depicted is 80.8% and 74.0%, respectively. From these sample calculations it is evident that the exergy content of terrestrial solar radiation is strongly dependent on the directional distribution of the radiation. This result is interesting because one might expect that the performance of a conversion device would depend on the incoming rate of photons and their spectral distribution but not on the directional distribution of the incoming photons. However, for a given incoming flux of photons with a certain spectral distribution, the entropy (level of disorder) is higher the more diffuse the directional distribution. From the second law of thermodynamics, the incoming entropy of the solar radiation cannot be destroyed, and it consequently reduces the maximum work output that can be obtained by a conversion device.
Chemical exergy
Similar to thermomechanical exergy, chemical exergy depends on the temperature and pressure of a system as well as on its composition. The key difference in evaluating chemical exergy versus thermomechanical exergy is that thermomechanical exergy does not take into account the difference in chemical composition between the system and the environment. If the temperature, pressure or composition of a system differs from the environment's state, then the overall system will have exergy.
The definition of chemical exergy resembles the standard definition of thermomechanical exergy, but with a few differences. Chemical exergy is defined as the maximum work that can be obtained when the considered system is brought into reaction with reference substances present in the environment. Defining the exergy reference environment is one of the most vital parts of analyzing chemical exergy. In general, the environment is defined as the composition of air at 25 °C and 1 atm of pressure. Under these conditions air consists of N2 = 75.67%, O2 = 20.35%, H2O(g) = 3.12%, CO2 = 0.03% and other gases = 0.83%. These molar fractions become of use when applying the equations below.
Consider a substance of composition $C_aH_bO_c$ entering a system for which one wants to find the maximum theoretical work. Using the following equations, one can calculate the chemical exergy of the substance in a given system. Equation 9 below uses the Gibbs function of the applicable element or compound to calculate the chemical exergy. Equation 10 is similar but uses standard molar chemical exergy, which has been determined based on several criteria, including the ambient temperature and pressure at which a system is analyzed and the concentrations of the most common components. These values can be found in thermodynamics books or in online tables.
Important equations

$$\bar{e}^{\,ch} = \left[\bar{g}_F + \left(a + \tfrac{b}{4} - \tfrac{c}{2}\right)\bar{g}_{O_2} - a\,\bar{g}_{CO_2} - \tfrac{b}{2}\,\bar{g}_{H_2O}\right]\!(T_0, p_0) + \bar{R}T_0 \ln\!\left[\frac{\left(y^e_{O_2}\right)^{a+b/4-c/2}}{\left(y^e_{CO_2}\right)^{a}\left(y^e_{H_2O}\right)^{b/2}}\right] \quad \text{(Equation 9)}$$

where:
$\bar{g}$ is the Gibbs function of the specific substance in the system at $(T_0, p_0)$ ($\bar{g}_F$ refers to the substance that is entering the system)
$\bar{R}$ is the universal gas constant (8.314462 J/mol·K)
$T_0$ is the temperature at which the system is being evaluated, in absolute temperature
$y^e$ is the molar fraction of the given substance in the environment, i.e. air

$$\bar{e}^{\,ch} = \left[\bar{g}_F + \left(a + \tfrac{b}{4} - \tfrac{c}{2}\right)\bar{g}_{O_2} - a\,\bar{g}_{CO_2} - \tfrac{b}{2}\,\bar{g}_{H_2O}\right]\!(T_0, p_0) + a\,\bar{e}^{\,ch}_{CO_2} + \tfrac{b}{2}\,\bar{e}^{\,ch}_{H_2O} - \left(a + \tfrac{b}{4} - \tfrac{c}{2}\right)\bar{e}^{\,ch}_{O_2} \quad \text{(Equation 10)}$$

where $\bar{e}^{\,ch}_i$ is the standard molar chemical exergy taken from a table for the specific conditions at which the system is being evaluated.
Equation 10 is more commonly used due to the simplicity of only having to look up the standard chemical exergy for given substances. Using a standard table works well for most cases; even if the environmental conditions vary slightly, the difference is most likely negligible.
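A convenient special case: for a gas that is already present in the reference environment, the Gibbs-function bracket in Equation 9 vanishes and the chemical exergy reduces to $\bar{R}T_0 \ln(1/y^e)$. A sketch using the reference-air composition quoted above (the function name is illustrative):

```python
import math

R_BAR = 8.314462   # J/(mol K), universal gas constant
T0 = 298.15        # K (25 degrees C)

# molar fractions of the reference atmosphere quoted above
Y_ENV = {"N2": 0.7567, "O2": 0.2035, "H2O": 0.0312, "CO2": 0.0003}

def chemical_exergy_env_gas(species):
    """Standard molar chemical exergy (J/mol) of a gas already present
    in the environment: e_ch = R*T0*ln(1/y_e)."""
    return R_BAR * T0 * math.log(1.0 / Y_ENV[species])

print(chemical_exergy_env_gas("O2") / 1000.0)   # ~3.9 kJ/mol
print(chemical_exergy_env_gas("CO2") / 1000.0)  # ~20.1 kJ/mol
```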
Total exergy
After finding the chemical exergy in a given system, one can find the total exergy by adding it to the thermomechanical exergy. Depending on the situation, the amount of chemical exergy added can be very small. If the system being evaluated involves combustion, the amount of chemical exergy is very large and necessary to find the total exergy of the system.
Irreversibility
Irreversibility accounts for the amount of exergy destroyed in a closed system, or in other words, the wasted work potential. This is also called dissipated energy. For highly efficient systems, the value of $I$ is low, and vice versa. The equation to calculate the irreversibility of a closed system, as it relates to the exergy of that system, is as follows:

$$I = T_0\, S_{gen}$$
where $S_{gen}$, also denoted by $\sigma$, is the entropy generated by processes within the system. If $S_{gen} > 0$ then there are irreversibilities present in the system. If $S_{gen} = 0$ then there are no irreversibilities present in the system. The value of $I$, the irreversibility, cannot be negative, as this would imply entropy destruction, a direct violation of the second law of thermodynamics.
Exergy analysis also relates the actual work of a work-producing device to the maximal work that could be obtained in the reversible or ideal process:

$$I = W_{max} - W$$

That is, the irreversibility is the ideal maximum work output minus the actual work production. Whereas, for a work-consuming device such as refrigeration or a heat pump, the irreversibility is the actual work input minus the ideal minimum work input.
The first term on the right is related to the difference in exergy at the inlet and outlet of the system:

$$W_{max} = X_{in} - X_{out}$$

where the exergy $X$ is also denoted by $B$.
For an isolated system there are no heat or work interactions or transfers of exergy between the system and its surroundings. The exergy of an isolated system can therefore only decrease, by a magnitude equal to the irreversibility of that system or process:

$$\Delta X_{isolated} = -I \leq 0$$
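A minimal numerical illustration of the Gouy–Stodola relation $I = T_0 S_{gen}$, applied to heat leaking across a finite temperature difference (the values are arbitrary):

```python
def irreversibility_heat_leak(q_j, t_hot_k, t_cold_k, t0_k):
    """I = T0 * S_gen, with S_gen = Q/T_cold - Q/T_hot > 0 for heat Q
    crossing a finite temperature difference."""
    s_gen = q_j / t_cold_k - q_j / t_hot_k
    return t0_k * s_gen

# 1 kJ leaking from 600 K to 300 K, with the environment at 300 K:
print(irreversibility_heat_leak(1000.0, 600.0, 300.0, 300.0))  # 500 J destroyed
```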
Applications
Applying the exergy expression above to a subsystem yields the maximum useful work as the decrease in exergy:

$$W_{max} = -\Delta B$$
This expression applies equally well for theoretical ideals in a wide variety of applications: electrolysis (decrease in G), galvanic cells and fuel cells (increase in G), explosives (increase in A), heating and refrigeration (exchange of H), motors (decrease in U) and generators (increase in U).
Utilization of the exergy concept often requires careful consideration of the choice of reference environment because, as Carnot knew, unlimited reservoirs do not exist in the real world. A system may be maintained at a constant temperature to simulate an unlimited reservoir in the lab or in a factory, but those systems cannot then be isolated from a larger surrounding environment. However, with a proper choice of system boundaries, a reasonable constant reservoir can be imagined. A process sometimes must be compared to "the most realistic impossibility," and this invariably involves a certain amount of guesswork.
Engineering applications
One goal of energy and exergy methods in engineering is to compute what comes into and out of several possible designs before a design is built. Energy input and output will always balance according to the First Law of Thermodynamics or the energy conservation principle. Exergy output will not equal the exergy input for real processes since a part of the exergy input is always destroyed according to the Second Law of Thermodynamics for real processes. After the input and output are calculated, an engineer will often want to select the most efficient process. An energy efficiency or first law efficiency will determine the most efficient process based on wasting as little energy as possible relative to energy inputs. An exergy efficiency or second-law efficiency will determine the most efficient process based on wasting and destroying as little available work as possible from a given input of available work, per unit of whatever the desired output is.
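The distinction can be made concrete with electric resistance heating, a classic case where the first-law efficiency is 100% while the second-law efficiency is small (the temperatures below are assumed for illustration):

```python
w_in = 1000.0                # J of electrical work (essentially pure exergy)
t_room, t0 = 294.15, 273.15  # K: assumed indoor and outdoor temperatures

q_out = w_in                          # all electricity ends up as heat
x_out = q_out * (1.0 - t0 / t_room)   # exergy of that heat at room temperature

print(q_out / w_in)   # first-law efficiency: 1.0, looks perfect
print(x_out / w_in)   # second-law efficiency: ~0.07, reveals the waste
```

A heat pump delivering the same heat with far less electrical input would have a much higher second-law efficiency.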
Exergy has been applied in a number of design applications in order to optimize systems or identify components or subsystems with the greatest potential for improvement. For instance, an exergy analysis of environmental control systems on the international space station revealed the oxygen generation assembly as the subsystem which destroyed the most exergy.
Exergy is particularly useful for broad engineering analyses with many systems of varied nature, since it can account for mechanical, electrical, nuclear, chemical, or thermal systems. For this reason, exergy analysis has also been used to optimize the performance of rocket vehicles. Exergy analysis affords additional insight, relative to energy analysis alone, because it incorporates the second law, and considers both the system and its relationship with its environment. For example, exergy analysis has been used to compare possible power generation and storage systems on the Moon, since exergy analysis is conducted in reference to the unique environmental operating conditions of a specific application, such as on the surface of the Moon.
Application of exergy to unit operations in chemical plants was partially responsible for the huge growth of the chemical industry during the 20th century.
As a simple example of exergy, air at atmospheric conditions of temperature, pressure, and composition contains energy but no exergy when it is chosen as the thermodynamic reference state known as ambient. Individual processes on Earth such as combustion in a power plant often eventually result in products that are incorporated into the atmosphere, so defining this reference state for exergy is useful even though the atmosphere itself is not at equilibrium and is full of long and short term variations.
If standard ambient conditions are used for calculations during chemical plant operation when the actual weather is very cold or hot, then certain parts of a chemical plant might seem to have an exergy efficiency of greater than 100%. Without taking into account the non-standard atmospheric temperature variation, these calculations can give an impression of being a perpetual motion machine. Using actual conditions will give actual values, but standard ambient conditions are useful for initial design calculations.
Applications in natural resource utilization
In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. Defining where one field ends and the next begins is a matter of semantics, but applications of exergy can be placed into rigid categories.
After the milestone work of Jan Szargut, who emphasized the relation between exergy and availability, it is worth recalling "Exergy Ecology and Democracy" by Goran Wall, a short essay that highlights the strict relation between exergy destruction and environmental and social disruption. From this activity a fundamental research effort in ecological economics and environmental accounting has derived, performing exergy-cost analyses in order to evaluate the impact of human activity on the current and future natural environment. As with ambient air, this often requires the unrealistic substitution of properties from a natural environment in place of the reference state environment of Carnot. For example, ecologists and others have developed reference conditions for the ocean and for the Earth's crust. Exergy values for human activity using this information can be useful for comparing policy alternatives based on the efficiency of utilizing natural resources to perform work. Typical questions that may be answered are:
Does the human production of one unit of an economic good by method A utilize more of a resource's exergy than by method B?
Does the human production of economic good A utilize more of a resource's exergy than the production of good B?
Does the human production of economic good A utilize a resource's exergy more efficiently than the production of good B?
There has been some progress in standardizing and applying these methods.
Measuring exergy requires the evaluation of a system's reference state environment. With respect to the applications of exergy on natural resource utilization, the process of quantifying a system requires the assignment of value (both utilized and potential) to resources that are not always easily dissected into typical cost-benefit terms. However, to fully realize the potential of a system to do work, it is becoming increasingly imperative to understand exergetic potential of natural resources, and how human interference alters this potential.
Referencing the inherent qualities of a system in place of a reference state environment is the most direct way that ecologists determine the exergy of a natural resource. Specifically, it is easiest to examine the thermodynamic properties of a system, and the reference substances that are acceptable within the reference environment. This determination allows for the assumption of qualities in a natural state: deviation from these levels may indicate a change in the environment caused by outside sources. There are three kinds of reference substances that are acceptable, due to their proliferation on the planet: gases within the atmosphere, solids within the Earth's crust, and molecules or ions in seawater. By understanding these basic models, it's possible to determine the exergy of multiple earth systems interacting, like the effects of solar radiation on plant life. These basic categories are utilized as the main components of a reference environment when examining how exergy can be defined through natural resources.
Other qualities within a reference state environment include temperature, pressure, and any number of combinations of substances within a defined area. Again, the exergy of a system is determined by the potential of that system to do work, so it is necessary to determine the baseline qualities of a system before it is possible to understand the potential of that system. The thermodynamic value of a resource can be found by multiplying the exergy of the resource by the cost of obtaining the resource and processing it.
Today, it is becoming increasingly popular to analyze the environmental impacts of natural resource utilization, especially for energy usage. To understand the ramifications of these practices, exergy is utilized as a tool for determining the impact potential of emissions, fuels, and other sources of energy. Combustion of fossil fuels, for example, is examined with respect to assessing the environmental impacts of burning coal, oil, and natural gas. The current methods for analyzing the emissions from these three products can be compared to the process of determining the exergy of the systems affected; specifically, it is useful to examine these with regard to the reference state environment of gases within the atmosphere. In this way, it is easier to determine how human action is affecting the natural environment.
Applications in sustainability
In systems ecology, researchers sometimes consider the exergy of the current formation of natural resources from a small number of exergy inputs (usually solar radiation, tidal forces, and geothermal heat). This application not only requires assumptions about reference states, but it also requires assumptions about the real environments of the past that might have been close to those reference states. Can we decide which is the most "realistic impossibility" over such a long period of time when we are only speculating about the reality?
For instance, comparing oil exergy to coal exergy using a common reference state would require geothermal exergy inputs to describe the transition from biological material to fossil fuels during millions of years in the Earth's crust, and solar radiation exergy inputs to describe the material's history before then when it was part of the biosphere. This would need to be carried out mathematically backwards through time, to a presumed era when the oil and coal could be assumed to be receiving the same exergy inputs from these sources. A speculation about a past environment is different from assigning a reference state with respect to known environments today. Reasonable guesses about real ancient environments may be made, but they are untestable guesses, and so some regard this application as pseudoscience or pseudo-engineering.
The field describes this accumulated exergy in a natural resource over time as embodied energy with units of the "embodied joule" or "emjoule".
The important application of this research is to address sustainability issues in a quantitative fashion through a sustainability measurement:
Does the human production of an economic good deplete the exergy of Earth's natural resources more quickly than those resources are able to receive exergy?
If so, how does this compare to the depletion caused by producing the same good (or a different one) using a different set of natural resources?
Exergy and environmental policy
Today, environmental policies do not consider exergy as an instrument for a more equitable and effective environmental policy. Recently, exergy analysis has revealed an important fault in current governmental greenhouse-gas (GHG) emission balances, which often do not consider emissions related to international transport; the impacts of imports and exports are therefore not accounted for.
Some preliminary case studies of the impacts of import/export transportation and of technology have provided evidence for the opportunity of introducing an effective exergy-based taxation that could reduce the fiscal impact on citizens.
In addition, exergy can be a precious instrument for an effective estimation of the path toward the UN Sustainable Development Goals (SDGs).
Assigning one thermodynamically obtained value to an economic good
A technique proposed by systems ecologists is to consolidate the three exergy inputs described in the last section into the single exergy input of solar radiation, and to express the total input of exergy into an economic good as a solar embodied joule or sej. (See Emergy) Exergy inputs from solar, tidal, and geothermal forces all at one time had their origins at the beginning of the solar system under conditions which could be chosen as an initial reference state, and other speculative reference states could in theory be traced back to that time. With this tool we would be able to answer:
What fraction of the total human depletion of the Earth's exergy is caused by the production of a particular economic good?
What fraction of the total human and non-human depletion of the Earth's exergy is caused by the production of a particular economic good?
No additional thermodynamic laws are required for this idea, and the principles of energetics may confuse many issues for those outside the field. The combination of untestable hypotheses, unfamiliar jargon that contradicts accepted jargon, intense advocacy among its supporters, and some degree of isolation from other disciplines have contributed to this protoscience being regarded by many as a pseudoscience. However, its basic tenets are only a further utilization of the exergy concept.
Implications in the development of complex physical systems
A common hypothesis in systems ecology is that the design engineer's observation that a greater capital investment is needed to create a process with increased exergy efficiency is actually the economic result of a fundamental law of nature. By this view, exergy is the analogue of economic currency in the natural world. The analogy to capital investment is the accumulation of exergy into a system over long periods of time resulting in embodied energy. The analogy of capital investment resulting in a factory with high exergy efficiency is an increase in natural organizational structures with high exergy efficiency. (See Maximum power). Researchers in these fields describe biological evolution in terms of increases in organism complexity due to the requirement for increased exergy efficiency because of competition for limited sources of exergy.
Some biologists have a similar hypothesis. A biological system (or a chemical plant) with a number of intermediate compartments and intermediate reactions is more efficient because the process is divided up into many small substeps, and this is closer to the reversible ideal of an infinite number of infinitesimal substeps. Of course, an excessively large number of intermediate compartments comes at a capital cost that may be too high.
Testing this idea in living organisms or ecosystems is impossible for all practical purposes because of the large time scales and small exergy inputs involved for changes to take place. However, if this idea is correct, it would not be a new fundamental law of nature. It would simply be living systems and ecosystems maximizing their exergy efficiency by utilizing laws of thermodynamics developed in the 19th century.
Philosophical and cosmological implications
Some proponents of utilizing exergy concepts describe them as a biocentric or ecocentric alternative for terms like quality and value. The "deep ecology" movement views economic usage of these terms as an anthropocentric philosophy which should be discarded. A possible universal thermodynamic concept of value or utility appeals to those with an interest in monism.
For some, the result of this line of thinking about tracking exergy into the deep past is a restatement of the cosmological argument that the universe was once at equilibrium and an input of exergy from some First Cause created a universe full of available work. Current science is unable to describe the first 10−43 seconds of the universe (See Timeline of the Big Bang). An external reference state is not able to be defined for such an event, and (regardless of its merits), such an argument may be better expressed in terms of entropy.
Quality of energy types
The ratio of exergy to energy in a substance can be considered a measure of energy quality. Forms of energy such as macroscopic kinetic energy, electrical energy, and chemical Gibbs free energy are 100% recoverable as work, and therefore have exergy equal to their energy. However, forms of energy such as radiation and thermal energy can not be converted completely to work, and have exergy content less than their energy content. The exact proportion of exergy in a substance depends on the amount of entropy relative to the surrounding environment as determined by the Second Law of Thermodynamics.
Exergy is useful when measuring the efficiency of an energy conversion process. The exergetic, or 2nd Law, efficiency is a ratio of the exergy output divided by the exergy input. This formulation takes into account the quality of the energy, often offering a more accurate and useful analysis than efficiency estimates only using the First Law of Thermodynamics.
Work can also be extracted from bodies colder than the surroundings: when energy flows into the colder body, work is performed by energy obtained from the large reservoir, the surroundings. A quantitative treatment of the notion of energy quality rests on the definition of energy. According to the standard definition, energy is a measure of the ability to do work. Work can involve the movement of a mass by a force that results from a transformation of energy. If there is an energy transformation, the second principle of energy flow transformations says that this process must involve the dissipation of some energy as heat. Measuring the amount of heat released is one way of quantifying the energy, or the ability to do work and apply a force over a distance.
Exergy of heat available at a temperature
The maximal possible conversion of heat to work, or the exergy content of heat, depends on the temperature at which the heat is available and the temperature level at which the reject heat can be disposed of, that is, the temperature of the surroundings. The upper limit for conversion is known as the Carnot efficiency and was discovered by Nicolas Léonard Sadi Carnot in 1824. See also Carnot heat engine.
The Carnot efficiency is

$$\eta = \frac{T_H - T_C}{T_H} = 1 - \frac{T_C}{T_H}$$

where $T_H$ is the higher temperature and $T_C$ is the lower temperature, both as absolute temperatures. From this expression it is clear that in order to maximize efficiency one should maximize $T_H$ and minimize $T_C$.
The exergy exchanged is then:

$$X = \left(1 - \frac{T_o}{T_{source}}\right) Q$$

where $T_{source}$ is the temperature of the heat source, $T_o$ is the temperature of the surroundings, and $Q$ is the heat transferred.
Connection with economic value
Exergy in a sense can be understood as a measure of the value of energy. Since high-exergy energy carriers can be used for more versatile purposes, due to their ability to do more work, they can be postulated to hold more economic value. This can be seen in the prices of energy carriers: high-exergy carriers such as electricity tend to be more valuable than low-exergy ones such as various fuels or heat. This has led to the substitution of more valuable high-exergy energy carriers with low-exergy energy carriers, when possible. An example is heating systems, where a higher investment in the heating system allows the use of low-exergy energy sources. Thus high-exergy content is being substituted with capital investment.
Exergy based Life Cycle Assessment (LCA)
Exergy of a system is the maximum useful work possible during a process that brings the system into equilibrium with a heat reservoir. Wall clearly states the relation between exergy analysis and resource accounting. This intuition, confirmed by Dewulf and Sciubba, led to exergo-economic accounting and to methods specifically dedicated to LCA such as exergetic material input per unit of service (EMIPS). The concept of material input per unit of service (MIPS) is quantified in terms of the second law of thermodynamics, allowing the calculation of both resource input and service output in exergy terms. This exergetic material input per unit of service (EMIPS) has been elaborated for transport technology. The service not only takes into account the total mass to be transported and the total distance, but also the mass per single transport and the delivery time. The applicability of the EMIPS methodology relates specifically to the transport system and allows an effective coupling with life cycle assessment. The exergy analysis according to EMIPS allowed the definition of a precise strategy for reducing the environmental impacts of transport toward more sustainable transport. Such a strategy requires reducing the weight of vehicles, adopting sustainable styles of driving, reducing the friction of tires, encouraging electric and hybrid vehicles, improving the walking and cycling environment in cities, and enhancing the role of public transport, especially electric rail.
History
Carnot
In 1824, Sadi Carnot studied the improvements developed for steam engines by James Watt and others. Carnot utilized a purely theoretical perspective for these engines and developed new ideas. He wrote:
The question has often been raised whether the motive power of heat is unbounded, whether the possible improvements in steam engines have an assignable limit—a limit by which the nature of things will not allow to be passed by any means whatever... In order to consider in the most general way the principle of the production of motion by heat, it must be considered independently of any mechanism or any particular agent. It is necessary to establish principles applicable not only to steam-engines but to all imaginable heat-engines... The production of motion in steam-engines is always accompanied by a circumstance on which we should fix our attention. This circumstance is the re-establishing of equilibrium... Imagine two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies, to which we can give or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs...
Carnot next described what is now called the Carnot engine, and proved by a thought experiment that any heat engine performing better than this engine would be a perpetual motion machine. Even in the 1820s, there was a long history of science forbidding such devices. According to Carnot, "Such a creation is entirely contrary to ideas now accepted, to the laws of mechanics and of sound physics. It is inadmissible."
This description of an upper bound to the work that may be done by an engine was the earliest modern formulation of the second law of thermodynamics. Because it involves no mathematics, it still often serves as the entry point for a modern understanding of both the second law and entropy. Carnot's focus on heat engines, equilibrium, and heat reservoirs is also the best entry point for understanding the closely related concept of exergy.
Carnot believed in the incorrect caloric theory of heat that was popular during his time, but his thought experiment nevertheless described a fundamental limit of nature. As kinetic theory replaced caloric theory through the early and mid-19th century (see Timeline of thermodynamics), several scientists added mathematical precision to the first and second laws of thermodynamics and developed the concept of entropy. Carnot's focus on processes at the human scale (above the thermodynamic limit) led to the most universally applicable concepts in physics. Entropy and the second law are applied today in fields ranging from quantum mechanics to physical cosmology.
Gibbs
In the 1870s, Josiah Willard Gibbs unified a large quantity of 19th century thermochemistry into one compact theory. Gibbs's theory incorporated the new concept of a chemical potential to cause change when distant from a chemical equilibrium into the older work begun by Carnot in describing thermal and mechanical equilibrium and their potentials for change. Gibbs's unifying theory resulted in the thermodynamic potential state functions describing differences from thermodynamic equilibrium.
In 1873, Gibbs derived the mathematics of "available energy of the body and medium" into the form it has today. (See the equations above). The physics describing exergy has changed little since that time.
Helmholtz
In the 1880s, German scientist Hermann von Helmholtz derived the equation for the maximum work which can be reversibly obtained from a closed system.
Rant
In 1956, the Yugoslav scholar Zoran Rant proposed the concept of exergy, extending the work of Gibbs and Helmholtz. Since then, continuous development of exergy analysis has produced many applications in thermodynamics, and exergy has been accepted as the maximum theoretical useful work which can be obtained from a system with respect to its environment.
See also
Thermodynamic free energy
Entropy production
Energy: world resources and consumption
Emergy
Notes
References
Further reading
Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics. La Cañada, CA: DCW Industries.
External links
Energy, Incorporating Exergy, An International Journal
An Annotated Bibliography of Exergy/Availability
Exergy – a useful concept by Göran Wall
Exergetics textbook for self-study by Göran Wall
Exergy by Isidoro Martinez
Exergy calculator by The Exergoecology Portal
Global Exergy Resource Chart
Guidebook to IEA ECBCS Annex 37, Low Exergy Systems for Heating and Cooling of Buildings
Introduction to the Concept of Exergy
Thermodynamic free energy
State functions
Ecological economics | Exergy | Physics,Chemistry | 11,320 |
67,064,736 | https://en.wikipedia.org/wiki/Flow-based%20generative%20model | A flow-based generative model is a generative model used in machine learning that explicitly models a probability distribution by leveraging normalizing flow, which is a statistical method using the change-of-variable law of probabilities to transform a simple distribution into a complex one.
The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution, and applying the flow transformation.
In contrast, many alternative generative modeling methods such as variational autoencoder (VAE) and generative adversarial network do not explicitly represent the likelihood function.
Method
Let $z_0$ be a (possibly multivariate) random variable with distribution $p_0(z_0)$.
For $k = 1, \dots, K$, let $z_k = f_k(z_{k-1})$ be a sequence of random variables transformed from $z_0$. The functions $f_1, \dots, f_K$ should be invertible, i.e. the inverse function $f_k^{-1}$ exists. The final output $z_K$ models the target distribution.
The log likelihood of $z_K$ is (see the derivation below):
$$\log p_K(z_K) = \log p_0(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k(z_{k-1})}{\partial z_{k-1}} \right|$$
To efficiently compute the log likelihood, the functions $f_1, \dots, f_K$ should be (1) easy to invert, and (2) have Jacobian determinants that are easy to compute. In practice, the functions $f_k$ are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE, RealNVP, and Glow.
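As a concrete illustration of this bookkeeping, the following minimal NumPy sketch evaluates the exact log-likelihood of data under a chain of elementwise affine flows. The two layers and their parameter values are hypothetical stand-ins for trained networks, not any reference implementation:

import numpy as np

def log_standard_normal(z):
    """Log density of a standard normal base distribution, per row."""
    return -0.5 * np.sum(z**2 + np.log(2.0 * np.pi), axis=-1)

def flow_log_likelihood(x, scales, shifts):
    """log p(x) for x = f_K(...f_1(z)...) with f_k(z) = a_k*z + b_k:
    invert each layer in reverse order and subtract sum(log|a_k|)."""
    z = x
    log_det = 0.0
    for a, b in reversed(list(zip(scales, shifts))):
        z = (z - b) / a
        log_det += np.sum(np.log(np.abs(a)))
    return log_standard_normal(z) - log_det

# Two hypothetical layers acting on 3-dimensional data.
scales = [np.array([2.0, 0.5, 1.5]), np.array([1.0, 3.0, 0.8])]
shifts = [np.array([0.1, -0.2, 0.0]), np.array([0.0, 1.0, -1.0])]
x = np.array([[0.3, -1.2, 0.7]])
print(flow_log_likelihood(x, scales, shifts))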
Derivation of log likelihood
Consider $z_1$ and $z_0$. Note that $z_0 = f_1^{-1}(z_1)$.
By the change of variable formula, the distribution of $z_1$ is:
$$p_1(z_1) = p_0(z_0) \left| \det \frac{\partial f_1^{-1}(z_1)}{\partial z_1} \right|$$
where $\det \frac{\partial f_1^{-1}(z_1)}{\partial z_1}$ is the determinant of the Jacobian matrix of $f_1^{-1}$.
By the inverse function theorem:
$$\frac{\partial f_1^{-1}(z_1)}{\partial z_1} = \left( \frac{\partial f_1(z_0)}{\partial z_0} \right)^{-1}$$
By the identity $\det(A^{-1}) = \det(A)^{-1}$ (where $A$ is an invertible matrix), we have:
$$p_1(z_1) = p_0(z_0) \left| \det \frac{\partial f_1(z_0)}{\partial z_0} \right|^{-1}$$
The log likelihood is thus:
$$\log p_1(z_1) = \log p_0(z_0) - \log \left| \det \frac{\partial f_1(z_0)}{\partial z_0} \right|$$
In general, the above applies to any $z_k$ and $z_{k-1}$. Since $\log p_k(z_k)$ is equal to $\log p_{k-1}(z_{k-1})$ minus a non-recursive term, we can infer by induction that:
$$\log p_K(z_K) = \log p_0(z_0) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k(z_{k-1})}{\partial z_{k-1}} \right|$$
Training method
As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting by $p_\theta$ the model's likelihood and by $p^*$ the target distribution to learn, the (forward) KL-divergence is:
$$D_{KL}[p^* \,\|\, p_\theta] = -\mathbb{E}_{x \sim p^*}[\log p_\theta(x)] + \mathbb{E}_{x \sim p^*}[\log p^*(x)]$$
The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter $\theta$ we want the model to learn, which only leaves the expectation of the negative log-likelihood to minimize under the target distribution. This intractable term can be approximated with a Monte-Carlo method by importance sampling. Indeed, if we have a dataset $\{x_i\}_{i=1}^{N}$ of samples each independently drawn from the target distribution $p^*$, then this term can be estimated as:
$$-\hat{\mathbb{E}}_{x \sim p^*}[\log p_\theta(x)] = -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta(x_i)$$
Therefore, the learning objective
$$\arg\min_\theta\, D_{KL}[p^* \,\|\, p_\theta]$$
is replaced by
$$\arg\max_\theta\, \sum_{i=1}^{N} \log p_\theta(x_i)$$
In other words, minimizing the Kullback–Leibler divergence between the model's likelihood and the target distribution is equivalent to maximizing the model likelihood under observed samples of the target distribution.
A pseudocode for training normalizing flows is as follows:
INPUT. dataset $\{x_i\}_{i=1}^{N}$, normalizing flow model $f_\theta(\cdot)$.
SOLVE. $\hat\theta = \arg\max_\theta \sum_i \log p_\theta(x_i)$ by gradient descent
RETURN. $\hat\theta$
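A minimal runnable version of this loop, for a one-layer one-dimensional affine flow $x = a z + b$ with a standard-normal base; the gradients of the mean negative log-likelihood are written out by hand, and the synthetic dataset is purely illustrative:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=1000)  # illustrative target samples

# For x = a*z + b with standard-normal z, the per-sample NLL is
# 0.5*((x - b)/a)**2 + log|a| + 0.5*log(2*pi).
a, b = 1.0, 0.0
lr = 0.05
for step in range(500):
    u = (data - b) / a
    grad_a = np.mean((1.0 - u**2) / a)  # d(mean NLL)/da
    grad_b = np.mean(-u / a)            # d(mean NLL)/db
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # approaches the target's scale (about 2.0) and mean (about 3.0)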
Variants
Planar Flow
The earliest example. Fix some activation function $h$, and let $\theta = (u, w, b)$ with the appropriate dimensions, then
$$x = f_\theta(z) = z + u\, h(\langle w, z \rangle + b)$$
The inverse $f_\theta^{-1}$ has no closed-form solution in general.
The Jacobian determinant is $|\det(I + h'(\langle w, z \rangle + b)\, u w^T)| = |1 + h'(\langle w, z \rangle + b)\, \langle u, w \rangle|$, by the matrix determinant lemma.
For it to be invertible everywhere, the determinant must be nonzero everywhere. For example, $h = \tanh$ and $\langle u, w \rangle > -1$ satisfy the requirement.
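A minimal NumPy sketch of the forward pass and log-determinant of a planar flow with $h = \tanh$; the parameter values are arbitrary examples chosen so that $\langle u, w \rangle > -1$:

import numpy as np

def planar_flow(z, u, w, b):
    """Planar flow x = z + u*tanh(<w,z> + b) and its log|det Jacobian|.
    z: (batch, dim); u, w: (dim,); b: scalar."""
    lin = z @ w + b                      # <w, z> + b, shape (batch,)
    x = z + np.outer(np.tanh(lin), u)
    # Matrix determinant lemma: det(I + h'(.) u w^T) = 1 + h'(.) <u, w>
    h_prime = 1.0 - np.tanh(lin) ** 2
    log_det = np.log(np.abs(1.0 + h_prime * (u @ w)))
    return x, log_det

z = np.random.default_rng(1).normal(size=(4, 2))
x, log_det = planar_flow(z, u=np.array([0.5, -0.3]), w=np.array([1.0, 2.0]), b=0.1)
print(x.shape, log_det.shape)  # (4, 2) (4,)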
Nonlinear Independent Components Estimation (NICE)
Let $x = (x_1, x_2)$ be even-dimensional, split in the middle. Then the normalizing flow functions are
$$y = f_\theta(x) = \begin{pmatrix} x_1 \\ x_2 + m_\theta(x_1) \end{pmatrix}$$
where $m_\theta$ is any neural network with weights $\theta$.
The inverse is just $f_\theta^{-1}(y) = (y_1,\; y_2 - m_\theta(y_1))$, and the Jacobian determinant is just 1; that is, the flow is volume-preserving.
This can be seen as a curvy shearing along the $x_2$ direction.
Real Non-Volume Preserving (Real NVP)
The Real Non-Volume Preserving model generalizes the NICE model by:
$$y = f_\theta(x) = \begin{pmatrix} x_1 \\ e^{s_\theta(x_1)} \odot x_2 + m_\theta(x_1) \end{pmatrix}$$
Its inverse is $x_1 = y_1,\; x_2 = e^{-s_\theta(y_1)} \odot (y_2 - m_\theta(y_1))$, and its Jacobian determinant is $\prod_i e^{s_\theta(x_1)_i}$. The NICE model is recovered by setting $s_\theta = 0$.
Since the Real NVP map keeps the first and second halves of the vector separate, it's usually required to add a permutation after every Real NVP layer.
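A minimal NumPy sketch of such an affine coupling layer and its exact inverse; the two "networks" are fixed linear maps standing in for trained MLPs, purely for illustration:

import numpy as np

def coupling_forward(x, s_net, m_net):
    """Real-NVP-style coupling: the first half of x parameterizes an
    elementwise affine map of the second half."""
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    s, m = s_net(x1), m_net(x1)
    y2 = np.exp(s) * x2 + m
    log_det = s.sum(axis=1)  # log prod_i exp(s_i) = sum_i s_i
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y, s_net, m_net):
    d = y.shape[1] // 2
    y1, y2 = y[:, :d], y[:, d:]
    x2 = (y2 - m_net(y1)) * np.exp(-s_net(y1))
    return np.concatenate([y1, x2], axis=1)

s_net = lambda h: 0.1 * h      # stand-in for s_theta
m_net = lambda h: h - 1.0      # stand-in for m_theta
x = np.random.default_rng(2).normal(size=(3, 4))
y, log_det = coupling_forward(x, s_net, m_net)
print(np.allclose(coupling_inverse(y, s_net, m_net), x))  # True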
Generative Flow (Glow)
In generative flow model, each layer has 3 parts:
a channel-wise affine transform $y_{cij} = s_c (x_{cij} + b_c)$, with Jacobian determinant $\prod_c s_c^{HW}$, where $H \times W$ is the spatial size of a channel;
an invertible 1x1 convolution $z_{cij} = \sum_{c'} K_{c c'}\, y_{c' ij}$, with Jacobian determinant $\det(K)^{HW}$. Here $K$ is any invertible matrix;
Real NVP, with Jacobian determinant as described in Real NVP.
The idea of using the invertible 1x1 convolution is to permute all layers in general, instead of merely permuting the first and second half, as in Real NVP.
Masked autoregressive flow (MAF)
An autoregressive model of a distribution on $\mathbb{R}^n$ is defined as the following stochastic process:
$$x_1 \sim N(\mu_1, \sigma_1^2), \qquad x_i \sim N\big(\mu_i(x_{1:i-1}),\, \sigma_i(x_{1:i-1})^2\big) \quad \text{for } i = 2, \dots, n$$
where $\mu_i$ and $\sigma_i > 0$ are fixed functions that define the autoregressive model.
By the reparameterization trick, the autoregressive model is generalized to a normalizing flow:
$$x_1 = \mu_1 + \sigma_1 z_1, \qquad x_i = \mu_i(x_{1:i-1}) + \sigma_i(x_{1:i-1})\, z_i$$
The autoregressive model is recovered by setting $z \sim N(0, I)$.
The forward mapping is slow (because it's sequential), but the backward mapping is fast (because it's parallel).
The Jacobian matrix is lower-triangular, so its determinant is $\sigma_1\, \sigma_2(x_1) \cdots \sigma_n(x_{1:n-1})$.
Reversing the two maps and of MAF results in Inverse Autoregressive Flow (IAF), which has fast forward mapping and slow backward mapping.
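The asymmetry between the two directions can be seen in a minimal NumPy sketch; the conditioners $\mu_i$ and $\sigma_i$ below are simple hypothetical functions of the prefix, not trained masked networks:

import numpy as np

def mu(prev):    return 0.5 * prev.sum(axis=-1)
def sigma(prev): return np.exp(0.1 * prev.sum(axis=-1))  # always positive

def maf_forward(z):
    """z -> x must be computed one dimension at a time (sequential)."""
    x = np.zeros_like(z)
    for i in range(z.shape[1]):
        prev = x[:, :i]
        x[:, i] = mu(prev) + sigma(prev) * z[:, i]
    return x

def maf_inverse(x):
    """x -> z only reads the observed x, so it can run in parallel."""
    z = np.empty_like(x)
    for i in range(x.shape[1]):
        prev = x[:, :i]
        z[:, i] = (x[:, i] - mu(prev)) / sigma(prev)
    return z

z = np.random.default_rng(3).normal(size=(2, 4))
print(np.allclose(maf_inverse(maf_forward(z)), z))  # True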
Continuous Normalizing Flow (CNF)
Instead of constructing flow by function composition, another approach is to formulate the flow as a continuous-time dynamic. Let $z_0$ be the latent variable with distribution $p(z_0)$. Map this latent variable to data space with the following flow function:
$$x = F(z_0) = z_T = z_0 + \int_0^T f(z_t, t)\, dt$$
where $f$ is an arbitrary function and can be modeled with e.g. neural networks.
The inverse function is then naturally:
$$z_0 = F^{-1}(x) = x - \int_0^T f(z_t, t)\, dt$$
And the log-likelihood of $x$ can be found as:
$$\log p(x) = \log p(z_0) - \int_0^T \operatorname{tr}\left[\frac{\partial f}{\partial z_t}\right] dt$$
Since the trace depends only on the diagonal of the Jacobian $\frac{\partial f}{\partial z_t}$, this allows a "free-form" Jacobian. Here, "free-form" means that there is no restriction on the Jacobian's form. It is contrasted with previous discrete models of normalizing flow, where the Jacobian is carefully designed to be only upper- or lower-triangular, so that the Jacobian can be evaluated efficiently.
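As a numerical sanity check of the likelihood formula, the sketch below integrates a toy linear field $f(z_t, t) = M z_t$ with Euler steps and compares the log-determinant of the resulting flow map against $\int_0^T \operatorname{tr}(M)\, dt = T \operatorname{tr}(M)$; the matrix and step count are arbitrary:

import numpy as np

rng = np.random.default_rng(6)
M = 0.3 * rng.normal(size=(3, 3))
T, steps = 1.0, 1000
dt = T / steps

# The flow map of dz/dt = M z, approximated by Euler steps:
# x = (I + dt*M)^steps @ z0.
step = np.eye(3) + dt * M
flow = np.linalg.matrix_power(step, steps)

# Instantaneous change of variables predicts log|det dF/dz| = T*tr(M).
print(np.linalg.slogdet(flow)[1], T * np.trace(M))  # approximately equal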
The trace can be estimated by "Hutchinson's trick": given any matrix $W \in \mathbb{R}^{n \times n}$, and any random vector $u$ with $E[u u^T] = I$, we have $E[u^T W u] = \operatorname{tr}(W)$. (Proof: expand the expectation directly.) Usually, the random vector is sampled from $N(0, I)$ (normal distribution) or $\{\pm 1\}^n$ (Rademacher distribution).
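A minimal NumPy sketch of this estimator, using Rademacher probe vectors (entries $\pm 1$, so $E[u u^T] = I$); the test matrix and sample count are arbitrary example values:

import numpy as np

def hutchinson_trace(W, num_samples=1000, seed=4):
    """Estimate tr(W) as the average of u^T W u over random
    Rademacher probe vectors u."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    u = rng.choice([-1.0, 1.0], size=(num_samples, n))
    return np.mean(np.einsum('si,ij,sj->s', u, W, u))

W = np.random.default_rng(5).normal(size=(6, 6))
print(hutchinson_trace(W), np.trace(W))  # estimate vs. exact trace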
When $f$ is implemented as a neural network, neural ODE methods would be needed. Indeed, CNF was first proposed in the same paper that proposed neural ODE.
There are two main deficiencies of CNF, one is that a continuous flow must be a homeomorphism, thus preserve orientation and ambient isotopy (for example, it's impossible to flip a left-hand to a right-hand by continuous deforming of space, and it's impossible to turn a sphere inside out, or undo a knot), and the other is that the learned flow might be ill-behaved, due to degeneracy (that is, there are an infinite number of possible that all solve the same problem).
By adding extra dimensions, the CNF gains enough freedom to reverse orientation and go beyond ambient isotopy (just like how one can pick up a polygon from a desk and flip it around in 3-space, or unknot a knot in 4-space), yielding the "augmented neural ODE".
Any homeomorphism of $\mathbb{R}^n$ can be approximated by a neural ODE operating on a higher-dimensional space, proved by combining the Whitney embedding theorem for manifolds and the universal approximation theorem for neural networks.
To regularize the flow $f$, one can impose regularization losses. The paper proposed the following regularization loss based on optimal transport theory:
$$\lambda_K \int_0^T \|f(z_t, t)\|^2\, dt + \lambda_J \int_0^T \|\nabla_z f(z_t, t)\|_F^2\, dt$$
where $\lambda_K, \lambda_J > 0$ are hyperparameters. The first term punishes the model for oscillating the flow field over time, and the second term punishes it for oscillating the flow field over space. Both terms together guide the model into a flow that is smooth (not "bumpy") over space and time.
Downsides
Despite normalizing flows' success in estimating high-dimensional densities, some downsides still exist in their designs. First of all, the latent space onto which input data is projected is not a lower-dimensional space; therefore, flow-based models do not allow for compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them.
Flow-based models are also notorious for failing in estimating the likelihood of out-of-distribution samples (i.e.: samples that were not drawn from the same distribution as the training set). Some hypotheses were formulated to explain this phenomenon, among which the typical set hypothesis, estimation issues when training models, or fundamental issues due to the entropy of the data distributions.
One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is given by constraints in the design of the models (cf.: RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important in order to ensure the applicability of the change-of-variable theorem, the computation of the Jacobian of the map as well as sampling with the model. However, in practice this invertibility is violated and the inverse map explodes because of numerical imprecision.
Applications
Flow-based generative models have been applied on a variety of modeling tasks, including:
Audio generation
Image generation
Molecular graph generation
Point-cloud modeling
Video generation
Lossy image compression
Anomaly detection
References
External links
Flow-based Deep Generative Models
Normalizing flow models
Machine learning
Statistical models
Probabilistic models | Flow-based generative model | Engineering | 2,047 |
10,663,276 | https://en.wikipedia.org/wiki/14%20Herculis%20b | 14 Herculis b or 14 Her b is an exoplanet approximately 58.4 light-years away in the constellation of Hercules. The planet orbits the star 14 Herculis and is a Jovian planet, roughly the same size as Jupiter but much more massive. It was discovered in July 1998 by the Geneva Extrasolar Planet Search team. The discovery was formally published in 2003. At the time of discovery it was the extrasolar planet with the longest orbital period, though longer-period planets have subsequently been discovered.
Discovery
14 Herculis b was detected by measuring variations in its star's radial velocity caused by the planet's gravity. This was done by making precise measurements of the Doppler shift of the spectrum of 14 Herculis. Before this analysis, face-on spectroscopic binaries had been suggested as an alternative explanation for the observed Doppler shifts.
Orbit and mass
Preliminary astrometric measurements made by the Hipparcos satellite suggested that this planet has an orbital inclination of 155.3° with respect to the plane of the sky, which would imply a true mass of 11.1 times that of Jupiter, close to the deuterium burning threshold that some astronomers use to define the distinction between a planet and a brown dwarf. However, subsequent analysis suggests that the Hipparcos measurements were not precise enough to accurately determine the orbits. According to a 2008 paper, its inclination was being calculated via astrometry with Hubble, with publication expected by mid-2009.
The inclination and true mass of 14 Herculis b were finally measured in 2021, using data from Gaia, and refined by further astrometric studies in 2022 and 2023. The inclination is 35.7°, corresponding to a true mass of .
Direct imaging
Because of the wide separation between this planet and its host star, and the proximity of the 14 Herculis system to the Sun, it is a promising candidate for direct imaging of the planet, as the angular separation of the planet and host star will be large enough that the light from the planet and star might be spatially resolved. However, a search made using the adaptive optics CFHT 3.60m telescope on Mauna Kea did not make such a detection, confirming the object is not a star.
References
External links
Hercules (constellation)
Giant planets
Exoplanets discovered in 1998
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | 14 Herculis b | Astronomy | 500 |
72,727,446 | https://en.wikipedia.org/wiki/Water%20supply%20and%20sanitation%20in%20Angola | Water supply and sanitation is an ongoing challenge in the nation of Angola.
Background
Angola has historically had issues with corruption and instability hindering its water infrastructure development.
Recent developments
Despite being a relatively poor country, water access has improved in recent history. The percentage of Angolans with access to a stable water supply grew from 42% in 1990 to 54% in 2012.
References
Water supply
Health in Angola | Water supply and sanitation in Angola | Chemistry,Engineering,Environmental_science | 80 |
1,384,568 | https://en.wikipedia.org/wiki/Character%20group | In mathematics, a character group is the group of representations of an abelian group by complex-valued functions. These functions can be thought of as one-dimensional matrix representations and so are special cases of the group characters that arise in the related context of character theory. Whenever a group is represented by matrices, the function defined by the trace of the matrices is called a character; however, these traces do not in general form a group. Some important properties of these one-dimensional characters apply to characters in general:
Characters are invariant on conjugacy classes.
The characters of irreducible representations are orthogonal.
The primary importance of the character group for finite abelian groups is in number theory, where it is used to construct Dirichlet characters. The character group of the cyclic group also appears in the theory of the discrete Fourier transform. For locally compact abelian groups, the character group (with an assumption of continuity) is central to Fourier analysis.
Preliminaries
Let $G$ be an abelian group. A function $f : G \to \mathbb{C}^*$ mapping $G$ to the group of non-zero complex numbers is called a character of $G$ if it is a group homomorphism—that is, if $f(g_1 g_2) = f(g_1)\, f(g_2)$ for all $g_1, g_2 \in G$.
If $f$ is a character of a finite group (or more generally a torsion group) $G$, then each function value $f(g)$ is a root of unity, since for each $g \in G$ there exists $k \in \mathbb{N}$ such that $g^k = e$, and hence $f(g)^k = f(g^k) = f(e) = 1$.
Each character f is a constant on conjugacy classes of G, that is, f(hgh−1) = f(g). For this reason, a character is sometimes called a class function.
A finite abelian group of order n has exactly n distinct characters. These are denoted by f1, ..., fn. The function f1 is the trivial representation, which is given by $f_1(g) = 1$ for all $g \in G$. It is called the principal character of G; the others are called the non-principal characters.
Definition
If G is an abelian group, then the set of characters fk forms an abelian group under pointwise multiplication. That is, the product of characters $f_j$ and $f_k$ is defined by $(f_j f_k)(g) = f_j(g)\, f_k(g)$ for all $g \in G$. This group is the character group of G and is sometimes denoted as $\hat{G}$. The identity element of $\hat{G}$ is the principal character $f_1$, and the inverse of a character $f_k$ is its reciprocal $1/f_k$. If $G$ is finite of order $n$, then $\hat{G}$ is also of order $n$. In this case, since $|f_k(g)| = 1$ for all $g$, the inverse of a character is equal to its complex conjugate.
Alternative definition
There is another definition of character group which uses $U(1) = \{z \in \mathbb{C} : |z| = 1\}$ as the target instead of just $\mathbb{C}^*$. This is useful when studying complex tori, because the character group of the lattice in a complex torus is canonically isomorphic to the dual torus via the Appell–Humbert theorem. We can express explicit elements in the character group as follows: recall that elements in $U(1)$ can be expressed as $e^{2\pi i x}$ for $x \in \mathbb{R}$. If we consider the lattice $\Lambda$ as a subgroup of the underlying real vector space of the torus, then a homomorphism $\Lambda \to U(1)$ can be factored as a map $\Lambda \to \mathbb{R} \to U(1)$; this follows from elementary properties of homomorphisms, giving the desired factorization. Since $\operatorname{Hom}(\mathbb{Z}, G) \cong G$ for any abelian group $G$, a lattice $\Lambda \cong \mathbb{Z}^{2n}$ gives $\operatorname{Hom}(\Lambda, \mathbb{R}) \cong \mathbb{R}^{2n}$, and after composing with the complex exponential we find that $\operatorname{Hom}(\Lambda, U(1)) \cong \mathbb{R}^{2n}/\mathbb{Z}^{2n}$, which is the expected result.
Examples
Finitely generated abelian groups
Since every finitely generated abelian group is isomorphic to
$$G \cong \mathbb{Z}^n \oplus \bigoplus_{i=1}^{m} \mathbb{Z}/N_i,$$
the character group can be easily computed in all finitely generated cases. From universal properties, and the isomorphism between finite products and coproducts, the character group of $G$ is isomorphic to
$$\operatorname{Hom}(\mathbb{Z}, \mathbb{C}^*)^{\oplus n} \oplus \bigoplus_{i=1}^{m} \operatorname{Hom}(\mathbb{Z}/N_i, \mathbb{C}^*).$$
For the first case, this is isomorphic to $(\mathbb{C}^*)^{\oplus n}$; the second is computed by looking at the maps which send the generator $1 \in \mathbb{Z}/N_i$ to the various powers of the $N_i$-th roots of unity $\zeta_{N_i} = \exp(2\pi i/N_i)$.
Orthogonality of characters
Consider the $n \times n$ matrix $A = A(G)$ whose matrix elements are $A_{jk} = f_j(g_k)$, where $g_k$ is the $k$th element of $G$.
The sum of the entries in the $j$th row of $A$ is given by
$$\sum_{k=1}^{n} A_{jk} = \sum_{k=1}^{n} f_j(g_k) = 0 \quad \text{if } j \neq 1,$$
and
$$\sum_{k=1}^{n} A_{1k} = n.$$
The sum of the entries in the $k$th column of $A$ is given by
$$\sum_{j=1}^{n} A_{jk} = \sum_{j=1}^{n} f_j(g_k) = 0 \quad \text{if } k \neq 1,$$
and
$$\sum_{j=1}^{n} A_{j1} = n,$$
where $g_1 = e$ is the identity element.
Let $A^\ast$ denote the conjugate transpose of $A$. Then
$$A A^\ast = A^\ast A = nI.$$
This implies the desired orthogonality relationship for the characters: i.e.,
$$\frac{1}{n} \sum_{k=1}^{n} \overline{f_{j'}(g_k)}\, f_j(g_k) = \delta_{j j'},$$
where $\delta_{jj'}$ is the Kronecker delta and $\overline{f_{j'}(g_k)}$ is the complex conjugate of $f_{j'}(g_k)$.
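As a quick numerical check of these relations: the characters of the cyclic group $\mathbb{Z}/n$ are $f_j(g_k) = e^{2\pi i jk/n}$, so the character table $A$ is the discrete Fourier transform matrix, and the orthogonality relations can be verified directly (the choice $n = 5$ is arbitrary):

import numpy as np

# Characters of the cyclic group Z/n: f_j(g_k) = exp(2*pi*i*j*k/n),
# so the character table A is the DFT matrix.
n = 5
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
A = np.exp(2j * np.pi * jj * kk / n)

print(np.allclose(A @ A.conj().T, n * np.eye(n)))       # A A* = nI
print(np.allclose(A.sum(axis=1), [n] + [0] * (n - 1)))  # row sums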
See also
Pontryagin duality
References
See chapter 6 of
Number theory
Group theory
Representation theory of groups | Character group | Mathematics | 903 |
18,561,028 | https://en.wikipedia.org/wiki/Russula%20albida | Russula albida is a species of fungus said to be edible. It is found in North America under deciduous trees.
External links
Russula species description
albida
Fungi of North America
Fungus species
Taxa named by Charles Horton Peck | Russula albida | Biology | 48 |
1,565,519 | https://en.wikipedia.org/wiki/Larder | A larder is a cool area for storing food prior to use. Originally, it was where raw meat was larded—covered in fat—to be preserved. By the 18th century, the term had expanded: at that point, a dry larder was where bread, pastry, milk, butter, or cooked meats were stored. Larders were commonplace in houses before the widespread use of the refrigerator.
Stone larders were designed to keep food cold in the hottest weather. They had slate or marble shelves two or three inches thick, wedged into thick stone walls. Fish or vegetables were laid directly onto the shelves and covered with muslin, or handfuls of wet rushes were sprinkled under and around them.
Essential qualities
Cool, dry, and well-ventilated.
Usually on the shady side of the house.
No fireplaces or hot flues in any of the adjoining walls.
Might have a door to an outside yard.
Had windows with wire gauze in them instead of glass.
Description
In the northern hemisphere, most houses would be arranged to have their larders and kitchens on the north or west side of the house where they received the least amount of sun. In Australia and New Zealand, larders were placed on the south or east sides of the house for the same reason.
Many larders have small, unglazed windows with window openings covered in fine mesh. This allows free circulation of air without allowing flies to enter. Many larders also have tiled or painted walls to simplify cleaning. Older larders, and especially those in larger houses, have hooks in the ceiling to hang joints of meat.
Etymology
Middle English (denoting a store of meat): from Old French lardier, from medieval Latin lardarium, from laridum.
History
In medieval households, the word "larder" referred both to an office responsible for fish, jams, and meat, as well as to the room in which these commodities were kept. It was headed by a larderer. The Scots term for larder was spence, which referred specifically to a place from which stores or food were distributed; thus, in Scotland, larderers (along with pantlers and cellarers) were known as spencers.
The office generally was subordinated to the kitchen and existed as a separate office only in larger households. It was closely connected to other offices of the kitchen, such as the saucery and the scullery.
Larders were used by the Indus Valley civilization to store bones of goats, oxen, and sheep. These larders were made of large clay pots.
Animal larders
Places where animals store food for later consumption are sometimes referred to as 'larders', a well-known example being the hoards of seeds and nuts hidden by squirrels to provide a store of fresh food during the leaner months of the year.
For alligators and crocodiles, larders are underwater storage places for their fresh kills until such time as they wish to consume the carcass when its flesh is rotten. These larders are usually dug into the side of a land bank, or wedged under a log or tree root.
See also
Food storage
Root cellar
References
Bibliography
Halliday, Tim, gen. ed. (1994). Animal Behavior. Oklahoma: UOP.
Rooms
Food preservation
Food storage
66,072,561 | https://en.wikipedia.org/wiki/Drops%20%28app%29 | Drops is a language learning app that was created in Estonia by Daniel Farkas and Mark Szulyovszky in 2015. It is the company's second product; their first app, LearnInvisible, had trouble retaining users' engagement over the required time period. The languages available include Native Hawaiian and Māori, and it was classified as one of the fifty "Most Innovative Companies" for 2019 by Fast Company.
The company partnered with Global Eagle Entertainment to include Travel Talk, a feature intended to focus on words and phrases frequently used by travelers. At the beginning of the COVID-19 pandemic in March 2020, the number of users increased by 55 percent in the United States and 92 percent in the United Kingdom. Droplets, a language app for children, includes profiles for multiple teachers working with remote students. The company also produces an app called Scripts, intended to help users learn to write alphabets.
The app was purchased by the Norwegian company Kahoot! on 24 November 2020.
References
Language learning software
Mobile applications | Drops (app) | Technology | 211 |
234,992 | https://en.wikipedia.org/wiki/Chinese%20character%20radicals | A radical (), or indexing component, is a visually prominent component of a Chinese character under which the character is traditionally listed in a Chinese dictionary. The radical for a character is typically a semantic component, but can also be another structural component or even an artificially extracted portion of the character. In some cases the original semantic or phonological connection has become obscure, owing to changes in the meaning or pronunciation of the character over time.
The use of the English term radical is based on an analogy between the structure of Chinese characters and the inflection of words in European languages. Radicals are also sometimes called classifiers, but this name is more commonly applied to the grammatical measure words in Chinese.
History
In the earliest Chinese dictionaries, such as the Erya (3rd century BC), characters were grouped together in broad semantic categories.
Because the vast majority of characters are phono-semantic compounds, combining a semantic component with a phonetic component, each semantic component tended to recur within a particular section of the dictionary. In the 2nd century AD, the Han dynasty scholar Xu Shen organized his etymological dictionary Shuowen Jiezi by selecting 540 recurring graphic elements he called bù (部, "categories"). Most were common semantic components, but they also included shared graphic elements such as a dot or horizontal stroke. Some were even artificially extracted groups of strokes, termed "glyphs" by Serruys (1984, p.657), which never had an independent existence other than being listed in Shuowen. Each character was listed under only one element, which is then referred to as the radical for that character. For example, characters containing 女 nǚ "female" or 木 mù "tree, wood" are often grouped together in the sections for those radicals.
Mei Yingzuo's 1615 dictionary Zihui made two further innovations. He reduced the list of radicals to 214, and arranged characters under each radical in increasing order of the number of additional strokes—the radical-and-stroke method still used in the vast majority of present-day Chinese dictionaries. These innovations were also adopted by the more famous Kangxi Dictionary of 1716. Thus the standard 214 radicals introduced in the Zihui are usually known as the Kangxi radicals. These were first called bùshǒu (部首 'section header') in the Kangxi Dictionary. Although there is some variation in such lists – depending primarily on what secondary radicals are also indexed – these canonical 214 radicals of the Kangxi Dictionary still serve as the basis for most modern Chinese dictionaries. Some of the graphically similar radicals are combined in many dictionaries, such as 月 yuè "moon" and the 月 form (⺼) of 肉 ròu, "meat, flesh".
After the writing system reform in mainland China, the traditional set of Kangxi radicals became unsuitable for indexing Simplified Chinese characters. In 1983, the Committee for Reforming the Chinese Written Language and the State Administration of Publication of China published The Table of Unified Indexing Chinese Character Components (Draft) (). In 2009, the Ministry of Education of the People's Republic of China and the State Language Work Committee issued The Table of Indexing Chinese Character Components (GF 0011-2009 ), which includes 201 principal indexing components and 100 associated indexing components (In China's normative documents, "radical" is defined as any component or piānpáng of Chinese characters, while is translated as "indexing component".).
Shape and position
Radicals may appear in any position in a character. For example, 女 appears on the left side in the characters 姐, 媽, 她, 好 and 姓, but it appears at the bottom in 妾. Semantic components tend to appear on the top or on the left side of the character, and phonetic components on the right side or at the bottom. These are loose rules, however, and exceptions are plenty. Sometimes, the radical may span more than one side, as in 園 = 囗 "enclosure" + 袁, or 街 = 行 "go, movement" + 圭. More complicated combinations exist, such as 勝 = 力 "strength" + 朕—the radical is in the lower-right quadrant.
In many characters, the components (including radicals) are distorted or modified to fit into a block with other elements. They may be narrowed, shortened, or have different shapes entirely. Changes in shape, rather than simple distortion, may result in fewer pen strokes. In some cases, combinations may have alternates. The shape of the component can depend on its placement with other elements in the character.
The shape 阝 is indexed as two different radicals depending on where it appears in the character. Placed on the right, as in 都 (dū "metropolis", also read as dōu "all"), it represents an abbreviated form of 邑 yì "city"; placed on the left, as in 陸 lù "land", it represents an abbreviated radical form of 阜 fù "mound, hill".
Some of the most important variant combining forms (besides 邑 → 阝 and 阜 → 阝per the above) are:
刀 "knife" → 刂 when placed to the right of other elements:
examples: 分, 召 ~ 刖
counter-example: 切
人 "man" → 亻 on the left:
囚, 仄, 坐 ~ 他
counter-example: 从
心 "heart" → 忄 on the left:
杺, 您, 恭* ~ 快
(*) 心 occasionally becomes ⺗ when written at the foot of a character.
手 "hand" → 扌 on the left:
杽, 拏, 掱 ~ 扡
counter-example: 拜
水 "water" → 氵 on the left:
汆, 呇, 沊 ~ 池
counter-example: 沝
火 "fire" → 灬 at the bottom:
伙, 秋, 灱 ~ 黑
counter-example: 災
犬 "dog" → 犭 on the left:
伏, 状 ~ 狙
counter-example: 㹜
Semantic components
Over 80% of Chinese characters are phono-semantic compounds (): a semantic component gives a broad category of meaning, while a phonetic component suggests the sound. Usually, the radical is the semantic component.
Thus, although some authors use the term radical for semantic components (義符 yìfú), others distinguish the latter as determinatives or significs or by some other term.
Many radicals are merely artificial extractions of portions of characters, some of which are further truncated or changed when applied (such as 亅 jué or juě in 了 liǎo), as explained by Serruys (1984), who therefore prefers the term "glyph" extraction rather than graphic extraction. This is even truer of modern dictionaries, which cut radicals to less than half the number in Shuowen, at which point it becomes impossible to have enough to cover a semantic element of every character. A sample from the Far Eastern Chinese English Dictionary of strokes artificially extracted as radicals, with the sub-entries indexed under each:
一 in 丁 dīng and 且 qiě
乙 yǐ in 九 jiǔ
亅 jué/juě in 了 liǎo/le
二 èr in 亞 yà/yǎ
田 tián in 禺 yù
豕 shǐ in 象 xiàng.
Phonetic components
Radicals sometimes play a phonetic role instead of a semantic one:
In some cases, chosen radicals used phonetically coincidentally are in keeping, in step, semantically.
Simplified radicals
The character simplification pursued in the People's Republic of China and elsewhere has modified a number of components, including those used as radicals. This has created a number of new radical forms. For instance, the character 金 jīn, when used as a radical, is written 釒(that is, with the same number of strokes, and only a minor variation) in traditional writing, but 钅in simplified characters. This means that simplified writing has resulted in significant differences not present in traditional writing. An example of a character using this radical is yín "silver"; traditionally: 銀, simplified: 银.
Dictionary lookup
Many dictionaries support using radical classification to index and look up characters, although many present-day dictionaries supplement it with other methods. For example, modern dictionaries in PRC normally use the Pinyin transcription of a character to perform character lookup. Following the "section-header-and-stroke-count" method of Mei Yingzuo, characters are listed by their radical and then ordered by the number of strokes needed to write them.
The steps involved in looking up a character are as follows:
Identify the radical under which the character is most likely to have been indexed. If in doubt, the component on the left side or at the top is often a good first guess.
Find the section of the dictionary associated with that radical.
Count the number of strokes in the remaining portion of the character.
Find the pages listing characters under that radical that have that number of additional strokes.
Find the appropriate entry or experiment with different choices for steps 1 and 3.
As a rule of thumb, components at the left or top of the character, or elements which surround the rest of the character, are the ones most likely to be used as radical. For example, 信 is typically indexed under the left-side component 人 instead of the right-side 言; and 套 is typically indexed under the top 大 instead of the bottom 長. There are, however, idiosyncratic differences between dictionaries, and except for simple cases, the same character cannot be assumed to be indexed the same way in two different dictionaries.
In order to further ease dictionary lookup, dictionaries sometimes list radicals both under the number of strokes used to write their canonical form and under the number of strokes used to write their variant forms. For example, 心 can be listed as a four-stroke radical but might also be listed as a three-stroke radical because it is usually written as 忄 when it forms a part of another character. This means that the dictionary user need not know that the two are etymologically identical.
It is sometimes possible to find one and the same character indexed under multiple radicals. For example, many dictionaries list 義 under both 羊 and 戈 (the radical of its lower part 我). Furthermore, with digital dictionaries, it is now possible to search for characters by cross-reference. Using this "multi-component method", a relatively new development enabled by computing technology, the user can select all of a character's components from a table and the computer will present a list of matching characters. This eliminates the guesswork of choosing the correct radical and calculating the correct stroke count, and cuts down searching time significantly. One can query for characters containing both 羊 and 戈, and get back only five characters (羢, 義, 儀, 羬 and 羲) to search through, as the sketch below illustrates. The Academia Sinica's 漢字構形資料庫 Chinese character structure database also works this way, returning only seven characters for this query. Harbaugh's Chinese Characters dictionary similarly allows searches based on any component. Some modern computer dictionaries allow the user to draw characters with a mouse, stylus or finger, ideally tolerating a degree of imperfection, thus eliminating the problem of radical identification altogether.
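A minimal Python sketch of the idea: index every character under each of its components, then intersect the index sets for the components the user selects. The decomposition table here is a tiny hypothetical sample, not real dictionary data:

DECOMPOSITION = {
    '義': ['羊', '我', '戈'],
    '儀': ['亻', '義', '羊', '戈'],
    '羢': ['羊', '戎', '戈'],
    '羬': ['羊', '咸', '戈'],
    '羲': ['羊', '戈', '丂'],
    '信': ['亻', '言'],
}

# Build the inverted index: component -> set of characters containing it.
index = {}
for char, parts in DECOMPOSITION.items():
    for part in parts:
        index.setdefault(part, set()).add(char)

def lookup(*components):
    """Return all characters containing every requested component."""
    sets = [index.get(c, set()) for c in components]
    return set.intersection(*sets) if sets else set()

print(sorted(lookup('羊', '戈')))  # the five characters named above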
Sets of radicals
Though radicals are widely accepted as a method to categorize Chinese characters and locate a certain character in a dictionary, there is no universal agreement about either the exact number of radicals or the set of radicals to be used, due to the sometimes arbitrary nature of the selection process.
The Kangxi radicals are a de facto standard which, although not implemented exactly in every Chinese dictionary, few dictionary compilers can afford to completely ignore. They serve as the basis for many computer encoding systems. Specifically, the Unicode standard's radical-stroke charts are based on the Kangxi set of radicals.
The count of commonly used radicals in modern abridged dictionaries is often less than 214. The Oxford Concise English–Chinese Dictionary has 188. A few dictionaries also introduce new radicals based on the principles first used by Xu Shen, treating groups of radicals that are used together in many different characters as a kind of radical.
In modern practice, radicals are primarily used as lexicographic tools and as learning aids when writing characters. They have become increasingly disconnected from semantics, etymology and phonetics.
Limitations and flexibility
Some of the radicals used in Chinese dictionaries, even in the era of Kangxi, were not stand-alone current-usage characters. Instead, they indexed unique characters that lacked more obvious qualifiers. The radical 鬯 (chàng "sacrificial wine") indexes only a few characters. Modern dictionaries tend to eliminate these when it is possible to find some more widely used graphic element under which a character can be categorized. Some use a system where characters are indexed under more than one radical and/or set of key elements to make it easier to find them.
See also
List of radicals in Unicode
Chinese character description languages
Chinese character orders
List of kanji radicals by stroke count
List of kanji radicals by frequency
Stroke-based sorting
Notes
References
Works cited
(revised 2003)
Further reading
Luó Zhènyù (羅振玉) 1958. 增訂殷墟書契考釋 (revised and enlarged edition on the interpretation of oracle bone inscriptions). Taipei: Yiwen Publishing (cited in Wu 1990).
Serruys, Paul L-M. (1984) "On the System of the Pu Shou 部首 in the Shuo-wen chieh-tzu 說文解字", in 中央研究院歷史語言研究所集刊 Zhōngyāng Yánjiūyuàn Lìshǐ Yǔyán Yánjiūsuǒ Jíkān, v. 55:4, pp.651–754.
Xu Shen Shuōwén Jǐezì (說文解字), is most often accessed in annotated versions, the most famous of which is Duan Yucai (1815). 說文解字注 Shuōwén Jǐezì Zhù (commentary on the Shuōwén Jíezì), compiled 1776–1807
Radical
29,457,137 | https://en.wikipedia.org/wiki/Stars%20proposed%20in%20religion | Stars proposed in religion may include:
Kolob, a star proposed in Mormon cosmology
The Star of Bethlehem, the star that supposedly marked the birth of Christ
Wormwood (star), a star said to fall to Earth in the Book of Revelation
Seven Suns, prophesied to appear before the destruction of the earth in Buddhist cosmology
See also
Central Fire, a fiery celestial body hypothesized by the pre-Socratic philosopher Philolaus to be positioned at the center of the universe, around which all other celestial objects revolve
Religious cosmology, a way of explaining the origin, the history and the evolution of the cosmos or universe based on the religious mythology of a specific tradition
Religion and science
History of astronomy | Stars proposed in religion | Astronomy | 152 |
6,162,424 | https://en.wikipedia.org/wiki/List%20of%20educational%20software | This is a list of educational software that is computer software whose primary purpose is teaching or self-learning.
Educational software by subject
Anatomy
3D Indiana
Bodyworks Voyager – Mission in Anatomy
Primal Pictures
Visible Human Project
Chemistry
Aqion - simulates water chemistry
Children's software
Bobo Explores Light
ClueFinders titles
Delta Drawing
Edmark
Fun School titles
GCompris - free software (GPL)
Gold Series
JumpStart titles
Kiwaka
KidPix
Museum Madness
Ozzie series
Reader Rabbit titles
Tux Paint - free software (GPL)
Zoombinis titles
Computer science
JFLAP - Java Formal language and Automata Package
Cryptography
CrypTool - illustrates cryptographic and cryptanalytic concepts
Dictionaries and reference
Britannica
Encarta
Encyclopædia Britannica Ultimate Reference Suite
Geography and astronomy
Cartopedia: The Ultimate World Reference Atlas
Celestia
Google Earth - (proprietary license)
Gravit - a free (GPL) Newtonian gravity simulator
KGeography
KStars
NASA World Wind - free software (NASA open source)
Stellarium
Swamp Gas Visits the United States of America - a game that teaches geography to children
Where is Carmen Sandiego? game series
WorldWide Telescope - a freeware from Microsoft
Health
TeachAids
History
Encyclopedia Encarta Timeline
Euratlas
Back in Time (iPad)
Balance of Power
Lemonade Stand
Number Munchers
Odell Lake
Spellevator
Windfall: The Oil Crisis Game
Word Munchers
Literacy
Accelerated Reader
AutoTutor
Compu-Read
DISTAR
Managed learning environments
ATutor (GPL)
Blackboard Inc.
Chamilo
Claroline
eCollege
eFront (CPAL)
Fle3 (GPL)
GCompris (GPL)
Google Classroom
ILIAS (GPL)
Kannu
LON-CAPA - free software (GPL)
Moodle - free software (GPL)
OLAT - free software
Renaissance Place
Sakai Project - free software
WebAssign
Mathematics
Accelerated Math
Cantor (mathematics software)
Compu-Math: Fractions
DrGeo
Geogebra
The Geometer's Sketchpad
Maple
Matlab / GNU Octave
Mathematica
Matheass
Math Blaster
Microsoft Mathematics
MathFacts in a Flash
SAGE - free software (GPL)
TK Solver
Tux, of Math Command- free software (GPL)
Zearn
Music
Comparison of music education software
EarMaster
Yousician
MuseScore
Syntorial
Programming
BlueJ
Hackety Hack
Racket
RoboMind
Scratch
Swift Playgrounds
Science
Betty's Brain
Science Sleuths
Simulation
Simulation games
Caesar titles
Capitalism
Civilization
The Oregon Trail
Sid Meier's Colonization
SimCity
Zoo Tycoon
Spaced repetition
Anki
Memrise
SuperMemo
Synap
Mnemosyne
Touch-typing instruction
Mavis Beacon Teaches Typing
Mario Teaches Typing
Smorball
Tux Typing - free software (GPL)
Visual learning and mind mapping
ConceptDraw MINDMAP
Freemind - free software (GPL)
Perception
SpicyNodes
Notable brands and suppliers of educational software
Dorling Kindersley
Promethean World
Renaissance Learning
Houghton Mifflin Harcourt Learning Technology (previously Riverdeep)
SMART Technologies
Software4Students
RM plc
Historical brands and suppliers
Edu-Ware
JumpStart Games (previously Knowledge Adventure)
Davidson & Associates (merged with Knowledge Adventure)
SoftKey (acquired by Mattel, then Riverdeep)
Brøderbund (acquired by Softkey)
The Learning Company (acquired by SoftKey)
Creative Wonders (acquired by the Learning Company)
MECC (acquired by Softkey)
Edmark (acquired by Riverdeep)
References
Educational video games
Educational | List of educational software | Technology | 736 |
40,232,612 | https://en.wikipedia.org/wiki/C16H20N2O | The molecular formula C16H20N2O (molar mass: 256.34 g/mol, exact mass: 256.1576 u) may refer to:
Chanoclavine
Chanoclavine II
Fumigaclavine B
Paliclavine
Molecular formulas | C16H20N2O | Physics,Chemistry | 75 |
56,666,850 | https://en.wikipedia.org/wiki/List%20of%20first%20female%20pharmacists%20by%20country | This is a list of the first qualified female pharmacists to practice in each country, where that is known.
Please note: the list should foremost contain the first female pharmacist with a formal qualification from each country. Historically, it was normal for the widows of apothecaries and pharmacists to inherit their late husband's profession without being formally qualified. These cases – and others of note – can be noted in the margin, but should not be listed first.
Africa
Namibia: There might be more female graduates, as the names listed were the only women named in the cited article.
Nigeria: Green is considered to have been the first female pharmacist in West Africa. Ekanem Bassey Ikpeme was considered the first native female pharmacist in Nigeria.
Tunisia: Dorra Bouzid is considered the first female pharmacist in Tunisia after independence. She started her practice sometime during the 1960s.
Americas
Canada: Preevoot was considered the first Canadian woman to pass the pharmacy exam by law.
Chile: Glafira Vargas was the first female to graduate with a pharmacy degree in 1887, though Hinojosa appears to be the first female to work as a pharmacist upon graduation.
Curaçao: van heb Elizabeths-Gasthu was said to have been the first woman to have passed the exam for an assistant pharmacist in the colony.
Guatemala: Altuve is considered the first Central American woman to have obtained a university degree.
United States: Elizabeth Gooking Greenleaf was the first not formally qualified pharmacist to practice, in 1727. Hayhurst was the first woman to receive a pharmacy degree in the United States, in 1883. Ella P. Stewart was one of the first African-American female pharmacists in the United States. ("Ella Stewart." Contemporary Black Biography. Vol. 39. Detroit: Gale, 2003. Accessed via Biography in Context database, 2016-07-02. Available online via Encyclopedia.com.)
Asia
Indonesia: Jacobs is considered the first female pharmacist in the Netherlands and Indonesia (then Dutch East Indies).
Europe
Belgium: Certain sources cite Louise Popelin (sister of Belgium's first female lawyer Marie Popelin) or Ida Huys as Belgium's first female pharmacist. They both completed their exams in 1887.
Czech Republic and Slovakia: Other sources cited Elza Fantová as the first Bohemia woman to earn a pharmaceutical degree in 1908. Krontilová-Librova started her pharmacy practice in 1904 and became the first female pharmacy student at the University of Prague in 1907 (graduating in 1909).
Finland: The first female pharmacist to qualify without dispensation in Finland was Helene Aejneleus in 1911. Brunberg was the first woman to be qualified by dispensation.
Germany: Anne of Denmark, Electress of Saxony was a non-professional female pharmacist in Germany. Helena Magenbuch and Maria Andreae were professional pharmacists in the 16th-century.
Ireland: Wilson was the first female pharmacist to qualify in the south of Ireland.
Italy: Elisa Gagnatelli and Edvige Moroni were the first women to pass the pharmacy exam in 1897.
Netherlands: In the Netherlands and the Dutch East Indies, Charlotte Jacobs became the first female pharmacist with a degree in 1879.
Norway: Christine Dahl passed her assistant pharmacy exam in 1889, but Eide was considered the first female pharmacist.
Poland: Although Lesniewska was considered the first female pharmacist, Filipina and Konstancja Studzinska (sisters) were the first women to pass the pharmacy examination in 1824.
Russia: Olga Evgenevna Gabrilovich was the first female pharmacist to earn a degree in 1906.
Sweden: Leth was the first female pharmacist to have fulfilled a formal qualification. Maria Dauerer was the first female pharmacist to have obtained a license. The first woman to have obtained a degree in pharmacology was Agnes Arvidsson (1903).
Ukraine: Makarova, a Kiev University (Ukraine) graduate, was the first woman to pass the examination for the title of pharmaceutical assistant.
Oceania
See also
List of first female physicians by country
List of first women dentists by country
Women in medicine
Women in pharmacy
References
Pharmacists
Pharmacists, Nationality
Pharmacists, Nationality
pharmacists | List of first female pharmacists by country | Technology | 929 |
1,685,778 | https://en.wikipedia.org/wiki/Neuropharmacology | Neuropharmacology is the study of how drugs affect function in the nervous system, and the neural mechanisms through which they influence behavior. There are two main branches of neuropharmacology: behavioral and molecular. Behavioral neuropharmacology focuses on the study of how drugs affect human behavior (neuropsychopharmacology), including the study of how drug dependence and addiction affect the human brain. Molecular neuropharmacology involves the study of neurons and their neurochemical interactions, with the overall goal of developing drugs that have beneficial effects on neurological function. Both of these fields are closely connected, since both are concerned with the interactions of neurotransmitters, neuropeptides, neurohormones, neuromodulators, enzymes, second messengers, co-transporters, ion channels, and receptor proteins in the central and peripheral nervous systems. Studying these interactions, researchers are developing drugs to treat many different neurological disorders, including pain, neurodegenerative diseases such as Parkinson's disease and Alzheimer's disease, psychological disorders, addiction, and many others.
History
Neuropharmacology did not appear in the scientific field until, in the early part of the 20th century, scientists were able to figure out a basic understanding of the nervous system and how nerves communicate between one another. Before this discovery, there were drugs that had been found that demonstrated some type of influence on the nervous system. In the 1930s, French scientists began working with a compound called phenothiazine in the hope of synthesizing a drug that would be able to combat malaria. Though this drug showed very little hope in the use against malaria-infected individuals, it was found to have sedative effects along with what appeared to be beneficial effects toward patients with Parkinson's disease. This black box method, wherein an investigator would administer a drug and examine the response without knowing how to relate drug action to patient response, was the main approach to this field, until, in the late 1940s and early 1950s, scientists were able to identify specific neurotransmitters, such as norepinephrine (involved in the constriction of blood vessels and the increase in heart rate and blood pressure), dopamine (the chemical whose shortage is involved in Parkinson's disease), and serotonin (soon to be recognized as deeply connected to depression). In the 1950s, scientists also became better able to measure levels of specific neurochemicals in the body and thus correlate these levels with behavior. The invention of the voltage clamp in 1949 allowed for the study of ion channels and the nerve action potential. These two major historical events in neuropharmacology allowed scientists not only to study how information is transferred from one neuron to another but also to study how a neuron processes this information within itself.
Overview
Neuropharmacology is a very broad region of science that encompasses many aspects of the nervous system from single neuron manipulation to entire areas of the brain, spinal cord, and peripheral nerves. To better understand the basis behind drug development, one must first understand how neurons communicate with one another.
Neurochemical interactions
To understand the potential advances in medicine that neuropharmacology can bring, it is important to understand how human behavior and thought processes are transferred from neuron to neuron and how medications can alter the chemical foundations of these processes.
Neurons are known as excitable cells because their surface membranes contain an abundance of proteins known as ion channels that allow small charged particles to pass in and out of the cell. The structure of the neuron allows chemical information to be received by its dendrites, propagated through the perikaryon (cell body) and down its axon, and eventually passed on to other neurons through its axon terminal. These voltage-gated ion channels allow for rapid depolarization throughout the cell. If this depolarization reaches a certain threshold, it will cause an action potential. Once the action potential reaches the axon terminal, it will cause an influx of calcium ions into the cell. The calcium ions will then cause vesicles, small packets filled with neurotransmitters, to bind to the cell membrane and release their contents into the synapse. This cell is known as the pre-synaptic neuron, and the cell that interacts with the released neurotransmitters is known as the post-synaptic neuron. Once the neurotransmitter is released into the synapse, it can either bind to receptors on the post-synaptic cell, be taken back up by the pre-synaptic cell and saved for later transmission, or be broken down by enzymes in the synapse specific to that neurotransmitter. These three different actions are major areas where drug action can affect communication between neurons.
There are two types of receptors that neurotransmitters interact with on a post-synaptic neuron. The first types of receptors are ligand-gated ion channels or LGICs. LGIC receptors are the fastest types of transduction from chemical signal to electrical signal. Once the neurotransmitter binds to the receptor, it will cause a conformational change that will allow ions to directly flow into the cell. The second types are known as G-protein-coupled receptors or GPCRs. These are much slower than LGICs due to an increase in the amount of biochemical reactions that must take place intracellularly. Once the neurotransmitter binds to the GPCR protein, it causes a cascade of intracellular interactions that can lead to many different types of changes in cellular biochemistry, physiology, and gene expression. Neurotransmitter/receptor interactions in the field of neuropharmacology are extremely important because many drugs that are developed today have to do with disrupting this binding process.
Molecular neuropharmacology
Molecular neuropharmacology involves the study of neurons and their neurochemical interactions, and receptors on neurons, with the goal of developing new drugs that will treat neurological disorders such as pain, neurodegenerative diseases, and psychological disorders (also known in this case as neuropsychopharmacology). There are a few technical words that must be defined when relating neurotransmission to receptor action:
Agonist – a molecule that binds to a receptor protein and activates that receptor
Competitive antagonist – a molecule that binds to the same site on the receptor protein as the agonist, preventing activation of the receptor
Non-competitive antagonist – a molecule that binds to a receptor protein on a different site than that of the agonist, but causes a conformational change in the protein that does not allow activation.
The following neurotransmitter/receptor interactions can be affected by synthetic compounds that act as one of the three above. Sodium/potassium ion channels can also be manipulated throughout a neuron to induce inhibitory effects of action potentials.
GABA
The GABA neurotransmitter mediates the fast synaptic inhibition in the central nervous system. When GABA is released from its pre-synaptic cell, it will bind to a receptor (most likely the GABAA receptor) that causes the post-synaptic cell to hyperpolarize (stay below its action potential threshold). This will counteract the effect of any excitatory manipulation from other neurotransmitter/receptor interactions.
The GABAA receptor contains many binding sites that allow conformational changes and are the primary targets for drug development. The most commonly targeted of these, the benzodiazepine binding site, allows for both agonist and antagonist effects on the receptor. A common drug, diazepam, acts as an allosteric enhancer at this binding site. Another receptor for GABA, known as GABAB, can be enhanced by a molecule called baclofen. This molecule acts as an agonist, thereby activating the receptor, and is known to help control and decrease spastic movement.
Dopamine
The dopamine neurotransmitter mediates synaptic transmission by binding to five specific GPCRs. These five receptor proteins are separated into two classes according to whether they elicit an excitatory or an inhibitory response in the post-synaptic cell. There are many types of drugs, legal and illegal, that affect dopamine and its interactions in the brain. In Parkinson's disease, which decreases the amount of dopamine in the brain, the dopamine precursor levodopa is given to the patient because dopamine cannot cross the blood–brain barrier whereas L-DOPA can. Some dopamine agonists, such as ropinirole and pramipexole, are also given to Parkinson's patients who have restless legs syndrome (RLS).
Psychological disorders like that of attention deficit hyperactivity disorder (ADHD) can be treated with drugs like methylphenidate (also known as Ritalin), which block the re-uptake of dopamine by the pre-synaptic cell, thereby providing an increase of dopamine left in the synaptic gap. This increase in synaptic dopamine will increase binding to receptors of the post-synaptic cell. This same mechanism is also used by other illegal and more potent stimulant drugs such as cocaine.
Serotonin
The neurotransmitter serotonin has the ability to mediate synaptic transmission through either GPCRs or LGIC receptors. The excitatory or inhibitory post-synaptic effects of serotonin are determined by the type of receptor expressed in a given brain region. The most popular and widely used drugs for the regulation of serotonin during depression are the selective serotonin reuptake inhibitors (SSRIs). These drugs inhibit the transport of serotonin back into the pre-synaptic neuron, leaving more serotonin in the synaptic gap.
Before the discovery of SSRIs, there were also drugs that inhibited the enzyme that breaks down serotonin. Monoamine oxidase inhibitors (MAOIs) increased the amount of serotonin in the synapse, but had many side-effects, including intense migraines and high blood pressure. These effects were eventually linked to the drugs' interaction with tyramine, a compound found in many types of food.
Ion channels
Ion channels located on the surface membrane of the neuron allow an influx of sodium ions and outward movement of potassium ions during an action potential. Selectively blocking these ion channels decreases the likelihood of an action potential occurring. The drug riluzole is a neuroprotective drug that blocks sodium ion channels. Since these channels cannot activate, no action potential is generated, the neuron cannot transduce chemical signals into electrical signals, and the signal does not move on. Riluzole is used to treat amyotrophic lateral sclerosis; the same principle of sodium-channel blockade also underlies the action of local anesthetics.
Behavioral neuropharmacology
One form of behavioral neuropharmacology focuses on the study of drug dependence and how drug addiction affects the human mind. Most research has shown that the major part of the brain that reinforces addiction through neurochemical reward is the nucleus accumbens, into which dopamine is projected via the mesolimbic pathway. Long-term excessive alcohol use can cause dependence and addiction. How this addiction occurs is described below.
Ethanol
Alcohol's rewarding and reinforcing (i.e., addictive) properties are mediated through its effects on dopamine neurons in the mesolimbic reward pathway, which connects the ventral tegmental area to the nucleus accumbens (NAcc). One of alcohol's primary effects is the allosteric inhibition of NMDA receptors and facilitation of GABAA receptors (e.g., enhanced GABAA receptor-mediated chloride flux through allosteric regulation of the receptor). At high doses, ethanol inhibits most ligand gated ion channels and voltage gated ion channels in neurons as well. Alcohol inhibits sodium–potassium pumps in the cerebellum and this is likely how it impairs cerebellar computation and body co-ordination.
With acute alcohol consumption, dopamine is released in the synapses of the mesolimbic pathway, in turn heightening activation of postsynaptic D1 receptors. The activation of these receptors triggers postsynaptic internal signaling events through protein kinase A which ultimately phosphorylate cAMP response element binding protein (CREB), inducing CREB-mediated changes in gene expression.
With chronic alcohol intake, consumption of ethanol similarly induces CREB phosphorylation through the D1 receptor pathway, but it also alters NMDA receptor function through phosphorylation mechanisms; an adaptive downregulation of the D1 receptor pathway and CREB function occurs as well. Chronic consumption is also associated with an effect on CREB phosphorylation and function via postsynaptic NMDA receptor signaling cascades through a MAPK/ERK pathway and a CAMK-mediated pathway. These modifications to CREB function in the mesolimbic pathway induce expression (i.e., increase gene expression) of ΔFosB in the NAcc, where ΔFosB is the "master control protein" that, when overexpressed in the NAcc, is necessary and sufficient for the development and maintenance of an addictive state (i.e., its overexpression in the nucleus accumbens produces and then directly modulates compulsive alcohol consumption).
Research
Parkinson's disease
Parkinson's disease is a neurodegenerative disease characterized by the selective loss of dopaminergic neurons located in the substantia nigra. Today, the most commonly used drug to combat this disease is levodopa, or L-DOPA. This precursor to dopamine can penetrate the blood–brain barrier, whereas the neurotransmitter dopamine cannot. There has been extensive research to determine whether L-DOPA is a better treatment for Parkinson's disease than dopamine agonists. Some believe that the long-term use of L-DOPA will compromise neuroprotection and thus eventually lead to dopaminergic cell death. Though there has been no proof in vivo or in vitro, some still believe that the long-term use of dopamine agonists is better for the patient.
Alzheimer's disease
While a variety of hypotheses have been proposed for the cause of Alzheimer's disease, knowledge of this disease remains far from complete, making it difficult to develop methods for treatment. In the brains of Alzheimer's patients, both neuronal nicotinic acetylcholine (nACh) receptors and NMDA receptors are known to be down-regulated. Accordingly, four anticholinesterases, such as donepezil and rivastigmine, have been developed and approved by the U.S. Food and Drug Administration (FDA) for treatment in the U.S. However, these are not ideal drugs, considering their side-effects and limited effectiveness. The excessive stimulation of muscarinic and nicotinic receptors by acetylcholine may contribute to the side effects of anticholinesterases.
One promising drug, nefiracetam, is being developed for the treatment of Alzheimer's and other patients with dementia, and has unique actions in potentiating the activity of both nACh receptors and NMDA receptors.
Future
With advances in technology and our understanding of the nervous system, the development of drugs will continue with increasing drug sensitivity and specificity. Structure–activity relationships are a major area of research within neuropharmacology: attempts to modify the effect or the potency (i.e., activity) of bioactive chemical compounds by modifying their chemical structures.
See also
Electrophysiology
Neuroendocrinology
Neuropsychopharmacology
Neurotechnology
Neurotransmission
Psychopharmacology
Structure–activity relationship
References
External links | Neuropharmacology | Chemistry | 3,382 |
4,864 | https://en.wikipedia.org/wiki/Bucket%20argument | Isaac Newton's rotating bucket argument (also known as Newton's bucket) is a thought experiment that was designed to demonstrate that true rotational motion cannot be defined as the relative rotation of the body with respect to the immediately surrounding bodies. It is one of five arguments from the "properties, causes, and effects" of "true motion and rest" that support his contention that, in general, true motion and rest cannot be defined as special instances of motion or rest relative to other bodies, but instead can be defined only by reference to absolute space. Alternatively, these experiments provide an operational definition of what is meant by "absolute rotation", and do not pretend to address the question of "rotation relative to what?" General relativity dispenses with absolute space and with physics whose cause is external to the system, with the concept of geodesics of spacetime.
Background
These arguments, and a discussion of the distinctions between absolute and relative time, space, place and motion, appear in a scholium at the end of the Definitions section in Book I of Newton's work, The Mathematical Principles of Natural Philosophy (1687) (not to be confused with the General Scholium at the end of Book III), which established the foundations of classical mechanics and introduced his law of universal gravitation, yielding the first quantitatively adequate dynamical explanation of planetary motion.
Despite their embrace of the principle of rectilinear inertia and the recognition of the kinematical relativity of apparent motion (which underlies whether the Ptolemaic or the Copernican system is correct), natural philosophers of the seventeenth century continued to consider true motion and rest as physically separate descriptors of an individual body. The dominant view Newton opposed was devised by René Descartes, and was supported (in part) by Gottfried Leibniz. It held that empty space is a metaphysical impossibility because space is nothing other than the extension of matter, or, in other words, that when one speaks of the space between things one is actually making reference to the relationship that exists between those things and not to some entity that stands between them. Concordant with the above understanding, any assertion about the motion of a body boils down to a description over time in which the body under consideration is at t1 found in the vicinity of one group of "landmark" bodies and at some t2 is found in the vicinity of some other "landmark" body or bodies.
Descartes recognized that there would be a real difference, however, between a situation in which a body with movable parts and originally at rest with respect to a surrounding ring was itself accelerated to a certain angular velocity with respect to the ring, and another situation in which the surrounding ring were given a contrary acceleration with respect to the central object. With sole regard to the central object and the surrounding ring, the motions would be indistinguishable from each other assuming that both the central object and the surrounding ring were absolutely rigid objects. However, if neither the central object nor the surrounding ring were absolutely rigid then the parts of one or both of them would tend to fly out from the axis of rotation.
For contingent reasons having to do with the Inquisition, Descartes spoke of motion as both absolute and relative.
By the late 19th century, the contention that all motion is relative was re-introduced, notably by Ernst Mach (1883).
The argument
Newton discusses a bucket filled with water hung by a cord. If the cord is twisted up tightly on itself and then the bucket is released, it begins to spin rapidly, not only with respect to the experimenter, but also in relation to the water it contains.
Although the relative motion at this stage is the greatest, the surface of the water remains flat, indicating that the parts of the water have no tendency to recede from the axis of relative motion, despite proximity to the pail. Eventually, as the cord continues to unwind, the surface of the water assumes a concave shape as it acquires the motion of the bucket spinning relative to the experimenter. This concave shape shows that the water is rotating, despite the fact that the water is at rest relative to the pail. In other words, it is not the relative motion of the pail and water that causes concavity of the water, contrary to the idea that motions can only be relative, and that there is no absolute motion. Possibly the concavity of the water shows rotation relative to something else: say absolute space? Newton says: "One can find out and measure the true and absolute circular motion of the water".
In the 1846 Andrew Motte translation of Newton's words:
The argument that the motion is absolute, not relative, is incomplete, as it limits the participants relevant to the experiment to only the pail and the water, a limitation that has not been established. In fact, the concavity of the water clearly involves gravitational attraction, and by implication the Earth also is a participant. Here is a critique due to Mach arguing that only relative motion is established:
The degree in which Mach's hypothesis is integrated in general relativity is discussed in the article Mach's principle; it is generally held that general relativity is not entirely Machian.
All observers agree that the surface of rotating water is curved. However, the explanation of this curvature involves centrifugal force for all observers with the exception of a truly stationary observer, who finds the curvature is consistent with the rate of rotation of the water as they observe it, with no need for an additional centrifugal force. Thus, a stationary frame can be identified, and it is not necessary to ask "Stationary with respect to what?":
A supplementary thought experiment with the same objective of determining the occurrence of absolute rotation also was proposed by Newton: the example of observing two identical spheres in rotation about their center of gravity and tied together by a string. Occurrence of tension in the string is indicative of absolute rotation; see Rotating spheres.
Detailed analysis
The historic interest of the rotating bucket experiment is its usefulness in suggesting one can detect absolute rotation by observation of the shape of the surface of the water. However, one might question just how rotation brings about this change. Below are two approaches to understanding the concavity of the surface of rotating water in a bucket.
Newton's laws of motion
The shape of the surface of a rotating liquid in a bucket can be determined using Newton's laws for the various forces on an element of the surface. For example, see Knudsen and Hjorth. The analysis begins with the free body diagram in the co-rotating frame where the water appears stationary. The height of the water h = h(r) is a function of the radial distance r from the axis of rotation Ω, and the aim is to determine this function. An element of water volume on the surface is shown to be subject to three forces: the vertical force due to gravity Fg, the horizontal, radially outward centrifugal force FCfgl, and the force normal to the surface of the water Fn due to the rest of the water surrounding the selected element of surface. The force due to surrounding water is known to be normal to the surface of the water because a liquid in equilibrium cannot support shear stresses. To quote Anthony and Brackett:
Moreover, because the element of water does not move, the sum of all three forces must be zero. To sum to zero, the force of the water must point oppositely to the sum of the centrifugal and gravity forces, which means the surface of the water must adjust so its normal points in this direction. (A very similar problem is the design of a banked turn, where the slope of the turn is set so a car will not slide off the road. The analogy in the case of rotating bucket is that the element of water surface will "slide" up or down the surface unless the normal to the surface aligns with the vector resultant formed by the vector addition Fg + FCfgl.)
As r increases, the centrifugal force increases according to the relation (the equations are written per unit mass):

$$F_{\text{Cfgl}} = \Omega^2 r,$$

where Ω is the constant rate of rotation of the water. The gravitational force is unchanged at

$$F_g = g,$$

where g is the acceleration due to gravity. These two forces add to make a resultant at an angle φ from the vertical given by

$$\tan\varphi = \frac{F_{\text{Cfgl}}}{F_g} = \frac{\Omega^2 r}{g},$$

which clearly becomes larger as r increases. To ensure that this resultant is normal to the surface of the water, and therefore can be effectively nulled by the force of the water beneath, the normal to the surface must have the same angle, that is,

$$\tan\varphi = \frac{dh}{dr},$$

leading to the ordinary differential equation for the shape of the surface:

$$\frac{dh}{dr} = \frac{\Omega^2 r}{g},$$

or, integrating:

$$h(r) = h(0) + \frac{\Omega^2}{2g}\, r^2,$$
where h(0) is the height of the water at r = 0. In other words, the surface of the water is parabolic in its dependence upon the radius.
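A short numerical sketch of this result, with arbitrary illustrative values (an added example, not part of Newton's analysis):

```python
import numpy as np

# Numerical illustration of the parabolic profile just derived,
# h(r) = h(0) + Omega^2 r^2 / (2 g). All values are arbitrary choices.
g = 9.81       # gravitational acceleration, m/s^2
omega = 5.0    # rotation rate Omega, rad/s
h0 = 0.10      # water height at the axis, m

for r in np.linspace(0.0, 0.15, 4):          # radii out to the wall, m
    h = h0 + omega**2 * r**2 / (2 * g)       # height of the free surface
    slope = omega**2 * r / g                 # dh/dr = tan(phi)
    print(f"r={r:.3f} m  h={h:.4f} m  dh/dr={slope:.3f}")
```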
Potential energy
The shape of the water's surface can be found in a different, very intuitive way using the interesting idea of the potential energy associated with the centrifugal force in the co-rotating frame.
In a reference frame uniformly rotating at angular rate Ω, the fictitious centrifugal force is conservative and has a potential energy of the form:

$$U(r) = -\frac{1}{2}\Omega^2 r^2,$$

where r is the radius from the axis of rotation. This result can be verified by taking the gradient of the potential to obtain the radially outward force:

$$F_{\text{Cfgl}} = -\frac{\partial U}{\partial r} = \Omega^2 r.$$
The meaning of the potential energy (stored work) is that movement of a test body from a larger radius to a smaller radius involves doing work against the centrifugal force and thus gaining centrifugal potential energy. But the test body, now at the smaller radius where its elevation on the surface is lower, has lost an equivalent amount of gravitational potential energy.
Potential energy therefore explains the concavity of the water surface in a rotating bucket. Notice that at equilibrium the surface adopts a shape such that an element of volume at any location on its surface has the same potential energy as at any other. That being so, no element of water on the surface has any incentive to move position, because all positions are equivalent in energy. That is, equilibrium is attained. On the other hand, were surface regions with lower energy available, the water occupying surface locations of higher potential energy would move to occupy these positions of lower energy, inasmuch as there is no barrier to lateral movement in an ideal liquid.
We might imagine deliberately upsetting this equilibrium situation by somehow momentarily altering the surface shape of the water to make it different from an equal-energy surface. This change in shape would not be stable, and the water would not stay in our artificially contrived shape, but engage in a transient exploration of many shapes until non-ideal frictional forces introduced by sloshing, either against the sides of the bucket or by the non-ideal nature of the liquid, kill the oscillations and the water settles down to the equilibrium shape.
To see the principle of an equal-energy surface at work, imagine gradually increasing the rate of rotation of the bucket from zero. The water surface is flat at first, and clearly a surface of equal potential energy because all points on the surface are at the same height in the gravitational field acting upon the water. At some small angular rate of rotation, however, an element of surface water can achieve lower potential energy by moving outward under the influence of the centrifugal force; think of an object moving with the force of gravity closer to the Earth's center: the object lowers its potential energy by complying with a force. Because water is incompressible and must remain within the confines of the bucket, this outward movement increases the depth of water at the larger radius, increasing the height of the surface at larger radius, and lowering it at smaller radius. The surface of the water becomes slightly concave, with the consequence that the potential energy of the water at the greater radius is increased by the work done against gravity to achieve the greater height. As the height of water increases, movement toward the periphery becomes no longer advantageous, because the reduction in potential energy from working with the centrifugal force is balanced against the increase in energy working against gravity. Thus, at a given angular rate of rotation, a concave surface represents the stable situation, and the more rapid the rotation, the more concave this surface. If rotation is arrested, the energy stored in fashioning the concave surface must be dissipated, for example through friction, before an equilibrium flat surface is restored.
To implement a surface of constant potential energy quantitatively, let the height of the water be h(r): then the potential energy per unit mass contributed by gravity is g h(r), and the total potential energy per unit mass on the surface is

$$U(r) = -\frac{1}{2}\Omega^2 r^2 + g\,h(r) + \text{const.},$$

with the background energy level independent of r. In a static situation (no motion of the fluid in the rotating frame), this energy is constant independent of position r. Requiring the energy to be constant, we obtain the parabolic form:

$$h(r) = \frac{\Omega^2}{2g}\, r^2 + h(0),$$

where h(0) is the height at r = 0 (the axis).
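As a quick numerical check with arbitrary values (an added illustration), the total potential energy per unit mass is indeed the same everywhere on this parabolic surface:

```python
import numpy as np

# Check that U(r) = -(1/2) Omega^2 r^2 + g h(r) is constant when
# h(r) = h(0) + Omega^2 r^2 / (2 g). Values are arbitrary.
g, omega, h0 = 9.81, 5.0, 0.10
r = np.linspace(0.0, 0.15, 5)
h = h0 + omega**2 * r**2 / (2 * g)
U = -0.5 * omega**2 * r**2 + g * h
print(np.allclose(U, U[0]))  # True: every surface element has equal energy
```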
The principle of operation of the centrifuge also can be simply understood in terms of this expression for the potential energy, which shows that it is favorable energetically when the volume far from the axis of rotation is occupied by the heavier substance.
See also
Centrifugal force
Inertial frame of reference
Mach's principle
Philosophy of space and time: Absolutism vs. relationalism
Rotating reference frame
Rotating spheres
Rotational gravity
Sagnac effect
References
Further reading
The isotropy of the cosmic microwave background radiation is another indicator that the universe does not rotate. See:
External links
Newton's Views on Space, Time, and Motion from Stanford Encyclopedia of Philosophy, article by Robert Rynasiewicz. At the end of this article, loss of fine distinctions in the translations as compared to the original Latin text is discussed.
Life and Philosophy of Leibniz see section on Space, Time and Indiscernibles for Leibniz arguing against the idea of space acting as a causal agent.
Newton's Bucket An interactive applet illustrating the water shape, and an attached PDF file with a mathematical derivation of a more complete water-shape model than is given in this article.
Classical mechanics
Isaac Newton
Thought experiments in physics
Rotation | Bucket argument | Physics | 2,917 |
63,945,981 | https://en.wikipedia.org/wiki/Isolation%20pod | An isolation pod is a capsule which is used to provide medical isolation for a patient. Examples include the Norwegian EpiShuttle and the USAF's Transport Isolation System (TIS) or Portable Bio-Containment Module (PBCM), which are used to provide isolation when transporting patients by air.
Isolation devices were developed in the 1970s for the aerial evacuation of patients with Lassa fever. In 2015, Human Stretcher Transit Isolator (HSTI) pods were used for the aerial evacuation of health workers during the Ebola virus epidemic in Guinea.
Isolation pods are intended to provide a high level of protection to frontline health workers, with some models designed for biosafety level 4 pathogens. During the 2020–21 COVID-19 outbreak in India, the Ahmedabad-based company Edithheathcare.in developed such pods to isolate infectious patients.
A review of 14 relevant studies concluded that the use of isolation pods for the transport of COVID-19 patients would not normally be appropriate as the use of oxygen masks and other, less-demanding precautions would be adequate. During the COVID-19 pandemic, the UK's NHS hospitals set up separate reception areas, which were called isolation pods, but these were typically temporary accommodation such as a portacabin or tent, without special technical features, just being located at a distance from the permanent facilities.
References
Containment efforts related to the COVID-19 pandemic
Medical transport devices | Isolation pod | Physics | 283 |
17,099,782 | https://en.wikipedia.org/wiki/AirNav%20Systems | AirNav Systems is a Tampa-based global flight tracking and data services company founded in 2001. The company operates a flight tracking website and mobile app called Radarbox which offers worldwide tracking of commercial and general aviation flights. AirNav Systems also owns and operates a ground-based ADS-B tracking network that is supported by over 20,000 active volunteer ADS-B data feeders from over 180 countries. The company's real-time tracking and data services are also used by 25,000 aviation related businesses, government agencies, airlines, media channels and airports in over 60 countries.
The company's R&D Center and European office is located in Lisbon, Portugal.
History
In 1996, while studying computer science at university, Portuguese airline pilot and engineer André Brandão, the company's founder and CEO, developed a flight tracking application called AirNav Suite. In 2001 he established AirNav Systems.
In early 2020, AirNav Systems announced a partnership with venture capital firm BrightCap Ventures. The same year, the company was chosen by ARGUS International, Inc to provide global flight and ground aircraft tracking services.
In October 2020, satellite-based ADS-B data was made available to all Radarbox users for free. AirNav Systems currently partners with multiple satellite providers, including Spire Aviation, to provide satellite-based tracking on its platform.
RadarBox is frequently cited, and its data is used in globally renowned news outlets such as Bloomberg, CNN, the Financial Times, and the New York Times. For instance, in June 2022, the New York Times cited RadarBox in its report about wealthy Russians fleeing the country before and after the Russian invasion of Ukraine. Further, in March 2021, the Financial Times obtained and used flight records from RadarBox in an article following the disappearance of Chinese billionaire Jack Ma. It also acquired the flight records for its report on Fu Xiaotian, a Chinese TV presenter linked to a missing foreign minister who had a surrogate child in the US. Bloomberg also used RadarBox flight data to report how the rapid recovery in air travel highlighted a massive shortage in staffing at European airports and airlines in 2022. Additionally, RadarBox flight records were used in a Bloomberg article about the German foreign minister's plane being stranded in the UAE en route to Australia in August 2023. In March 2022, RadarBox flight tracking data was also used by the UK news outlet The Guardian to report on the whereabouts of Roman Abramovich’s private jet following UK sanctions on the Russian oligarch.
Tracking sources
AirNav Systems is the parent company of Radarbox which displays flight data in real-time from all over the world. Radarbox aggregates data from 10 different sources:
ADS-B (Ground-based): The sources of this data are terrestrial ADS-B receivers, which collect data from the transponders of aircraft flying within their range. The data gathered by these receivers include aircraft speed, position, registration information and other data. These ADS-B receivers usually operate at 1090 MHz; Radarbox also receives flight data from UAT receivers at 978 MHz.
ADS-B (Satellite-based): Aircraft transponder data is collected by satellites equipped with ADS-B receivers. This data is then sent to AirNav Radarbox's servers for processing.
FAA SWIM: The Federal Aviation Administration System Wide Information Management data comes directly from the FAA's domestic radar systems and contains data on virtually all aircraft flying in US airspace. This data source includes real-time position data, flight plans, departures, arrivals, routes, waypoints.
EUROCONTROL: In addition to flight position, registration and other data, EUROCONTROL data allows for access to flight plans and other operational information of aircraft and airports operating in the EUROCONTROL region.
MLAT: Multilateration (MLAT) is a surveillance technique based on measuring the differences in the arrival times of an aircraft's broadcast signals at four or more stations at known locations, which correspond to differences in distance (a toy numerical sketch of this calculation appears after this list). This data source allows for tracking of aircraft that are not equipped with ADS-B transponders.
OCEANIC: Oceanic position data from the FAA is reported for all major trans-oceanic routes (Atlantic and Pacific).
ADS-C: The source of data here is an aircraft broadcasting its position at fixed intervals, typically between 5 and 30 minutes. Data is exchanged between the ground system and the aircraft, via a data link, specifying under what conditions ADS-C reports would be initiated, and what data would be contained in the reports.
HFDL: These position reports are received from a High Frequency Data Link system, where air traffic controllers can communicate with pilots over a datalink connection. Aircraft and ground stations make use of high frequency radio signals in order to communicate.
FLIFO: This data is provided by several external sources such as airports and airlines. This commercial data includes the departure time, arrival time, origin airport, destination airport, among other parameters.
ASDE-X: Data from ASDE-X is received from a surface movement radar located in an airport's ATC Tower or remote tower, multilateration sensors, ADS-B sensors, terminal radars, the terminal automation system, and from aircraft transponders. It is provided by the FAA.
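To make the multilateration idea concrete, here is a toy two-dimensional sketch: given differences in arrival times at four receivers, it solves for the transmitter position by nonlinear least squares. All coordinates are made up, and the code is unrelated to Radarbox's actual implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2-D multilateration (MLAT) sketch: recover a transmitter position from
# time-differences-of-arrival (TDOA) at four receivers with known positions.
C = 299_792_458.0                            # signal speed (light), m/s

receivers = np.array([[0.0, 0.0], [40_000.0, 0.0],
                      [0.0, 40_000.0], [40_000.0, 40_000.0]])
true_pos = np.array([12_000.0, 25_000.0])    # "aircraft" ground position

# Arrival times; the unknown common transmit time cancels when we
# difference every receiver against receiver 0.
toa = np.linalg.norm(receivers - true_pos, axis=1) / C
tdoa = toa[1:] - toa[0]

def residuals(p):
    d = np.linalg.norm(receivers - p, axis=1)
    return (d[1:] - d[0]) / C - tdoa         # predicted minus measured TDOA

estimate = least_squares(residuals, x0=np.array([20_000.0, 20_000.0])).x
print("estimated position:", estimate)       # ~ [12000, 25000]
```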
Products & services
Website & app
The Radarbox website is a platform that provides flight tracking information on general aviation and commercial flights. The website also displays airport activity, weather information, airport and flight statistics, aviation news and photos. Users can use the site for free or can subscribe to one of three subscription plans to access premium features.
The Radarbox app is available for both Android on the Google Play Store & iOS on the App Store.
Fleet Tracker & Airport View
The Radarbox Fleet Tracker is a tracking solution designed for fleet managers & owners to monitor and better manage their fleets of aircraft. In addition to viewing all active flights and statuses, fleet managers can view flight history for up to 365 days. Advanced aircraft statistics for a fleet, such as total flight hours and average flight duration, can also be accessed.
The Airport View tool displays the live inbound (arrivals) and outbound flight traffic (departures) of a single airport within a single window. It is an excellent secondary tracking resource for Air Traffic Controllers and plane spotters. Airport View also displays the prevailing weather conditions at an airport. Weather conditions such as wind speed & direction, visibility, pressure, and cloud base can be viewed.
Heatmaps & Movement Statistics
Movement Statistics display the total aircraft traffic (arrivals & departures) at a particular airport for each day for up to 31 days.
Airport Heat Maps display the frequency of flights to different airports or cities from a particular originating airport.
The Radarbox real-time statistics page contains a large volume of flight data from flights tracked daily. The stats shown on this page currently display data for the past 6 months. This data can be filtered by Commercial Airlines, Business Jets, Commercial Airports, Business Operators, and by Route and can be downloaded in CSV format for free, non-commercial use.
Flight Data API's
On-Demand API (ODAPI)
The On Demand API is Radarbox's credit-per-query API solution that allows client applications access to both real-time and historical flight data on an as-needed basis. This data is aggregated and delivered by AirNav's ground and satellite-based ADS-B network. Real-time data is sent via a secure TCP web socket connection, while historic flight tracking data is made available via a download link.
Firehose API
Firehose API is Radarbox's enterprise-level data-stream API solution that includes over 70 data fields such as flight number, callsign, speed, altitude, registration, and scheduled and actual departure/arrival times, among other data fields. This data is available in JSON, XML and CSV formats.
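As a hypothetical illustration of consuming one such record, the snippet below parses a JSON object whose field names are guessed from the list above; they are assumptions, not Radarbox's documented schema.

```python
import json

# Hypothetical example of handling one Firehose-style JSON flight record.
# The field names below are guesses based on the fields listed above; they
# are assumptions, not Radarbox's documented schema.
record = json.loads("""
{
  "flight_number": "BA117",
  "callsign": "BAW117",
  "registration": "G-XWBA",
  "altitude_ft": 36000,
  "ground_speed_kt": 480,
  "scheduled_departure": "2024-01-01T10:00:00Z",
  "actual_departure": "2024-01-01T10:12:00Z"
}
""")

print(f"{record['callsign']} ({record['registration']}) "
      f"at {record['altitude_ft']} ft, {record['ground_speed_kt']} kt")
```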
References
Avionics
Flight tracking software | AirNav Systems | Technology | 1,614 |
68,732,225 | https://en.wikipedia.org/wiki/HyCOM | The Hybrid Coordinate Ocean Model (HyCOM) is an open-source ocean general circulation modeling system. HyCOM is a primitive equation type of ocean general circulation model. The vertical levels of this modeling system differ from those of most other models: the vertical coordinates remain isopycnic in the open stratified ocean, transition smoothly to z-level coordinates in the weakly stratified upper-ocean mixed layer, then to terrain-following sigma coordinates in shallow water regions, and back to z-level coordinates in very shallow water. The setup is therefore a “hybrid” between z-level and terrain-following vertical levels. HyCOM outputs are provided online for the global ocean at a spatial resolution of 0.08 degrees (approximately 9 km) from 2003 to present. HyCOM uses the netCDF data format for model outputs.
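Because the outputs are netCDF, a typical first step is to open them with a netCDF-aware library such as xarray. A minimal sketch, in which the file name and the variable name are assumptions rather than documented HyCOM conventions:

```python
import xarray as xr

# Minimal sketch of reading a HyCOM netCDF output with xarray. The file
# name "hycom_output.nc" and the variable name "water_temp" are assumptions;
# the actual names are listed in each product's dataset metadata.
ds = xr.open_dataset("hycom_output.nc")
print(ds.data_vars)                       # discover the available variables

sst = ds["water_temp"].isel(depth=0)      # take the surface layer
print(float(sst.mean()))                  # e.g., domain-mean surface temp
```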
Applications
HyCOM model experiments are used to study the interactions between the ocean and atmosphere, including short-term and long-term processes. This modeling system has also been used to create forecasting tools. For example, HyCOM has been used to:
Assimilate data and provide operational oceanographic forecasting for the United States Navy
Determine the ideal way to parametrize how the sun heats the upper ocean (solar radiation and heat flux) in darker waters like the Black Sea
Study mesoscale variability in sea surface height and temperature in the Gulf of Mexico
Simulate drifting patterns of loggerhead sea turtles of the North American east coast
Predict the extent of Arctic sea ice for naval operations
See also
Climate model
Computational geophysics
General circulation model (GCM)
Ocean general circulation model (OGCM)
Oceanography
List of ocean circulation models
Physical oceanography
ROMS
Sigma coordinate system
References
External links
https://www.hycom.org/
Physical oceanography
Oceanography
Earth system sciences
Computational science
Geophysics
Numerical climate and weather models | HyCOM | Physics,Mathematics,Environmental_science | 382 |
5,330,978 | https://en.wikipedia.org/wiki/Central%20Board%20of%20Film%20Certification | The Central Board of Film Certification (CBFC) or Censor Board of Film Certification is a statutory film-certification body in the Ministry of Information and Broadcasting of the Government of India. It is tasked with "regulating the public exhibition of films under the provisions of the Cinematograph Act 1952." The Cinematograph Act 1952 outlines a strict certification process for commercial films shown in public venues. Films screened in cinemas and on television may be publicly exhibited in India only after certification by the board, which may also require edits.
Certificates and guidelines
The board currently issues four certificates. Originally, there were two: U (unrestricted public exhibition of family-friendly films) and A (restricted to adult audiences, with nudity not allowed). Two more were added in June 1983: U/A (unrestricted public exhibition, with parental guidance for children under 12) and S (restricted to specialised audiences, such as doctors or scientists). The board may refuse to certify a film. Additionally, V/U, V/UA and V/A are used for video films, with U, U/A and A carrying the same meanings as above.
U certificate
Films with the U certification are fit for unrestricted public exhibition and are family-friendly. These films can contain universal themes such as education, family, drama, romance, science fiction and action. They can also contain some mild violence, provided it is not prolonged, and very mild sexual scenes (without any traces of nudity or sexual detail).
U/A certificate
Films with the U/A certification can contain moderate adult themes that are not strong in nature and are not considered appropriate to be watched by a child without parental guidance. These films may contain moderate to strong violence, moderate sexual scenes (traces of nudity and moderate sexual detail can be found), frightening scenes, blood flow, or muted abusive language. Sometimes such films are re-certified with V/U for video viewing. The age threshold was previously set at 12 years of age, but in 2023 this was further refined to 7, 13 and 16 years of age.
UA 7+ – Unrestricted public exhibition, but with parental guidance for children below the age of 7 years and appropriate above the age of seven.
UA 13+ – Unrestricted public exhibition, but with parental guidance for children below the age of 13 years.
UA 16+ – Unrestricted public exhibition, but with parental guidance for children below the age of 16 years.
A certificate
Films with the A certification are available for public exhibition, but with restriction to adults (aged 18+). These films can contain strong violence, explicit and strong sexual scenes and abusive language, but nudity and words which insult or degrade women or any social group are not allowed. Controversial, adult or suggestive themes are considered unsuitable for young viewers. Such films are often re-certified with V/U and V/UA for TV, which does not happen in the case of U and U/A certified movies.
S certificate
Films with S certification cannot be viewed by the public. Only people associated with it (doctors, scientists, etc.), are permitted to view these films.
History
The Indian Cinematograph Act came into effect in 1920, seven years after the production of India's first film, Dadasaheb Phalke's Raja Harishchandra. Censorship boards were originally independent bodies under the police chiefs of the cities of Madras (now Chennai), Bombay (now Mumbai), Calcutta (now Kolkata), Lahore (now in Pakistan), and Rangoon (now Yangon in Myanmar). The Act was most recently amended on 1 August 2023 through the Cinematograph (Amendment) Bill, which awaits presidential assent.
After the 1947 independence of India, autonomous regional censors were absorbed into the Bombay Board of Film Censors. The Cinematograph Act of 1952 reorganised the Bombay board into the Central Board of Film Censors. With the 1983 revision of cinematography rules, the body was renamed the Central Board of Film Certification.
In 2021 the Film Certification Appellate Tribunal (FCAT) was scrapped by the Indian government.
Principles
The board's guiding principles are to ensure healthy public entertainment and education and, using modern technology, to make the certification process and board activities transparent to filmmakers, the media and the public. Every film or video must undergo CBFC certification before being telecast or distributed on any platform in India.
Refusal to certify
In addition to the certifications above, there is also the possibility of the board refusing to certify the film at all.
The board's guidelines are:
Anti-social activities (such as violence) may not be glorified.
Controversial topics may be grounds for refusal.
Criminal acts may not be depicted.
The following is prohibited:
a) Involvement of children in violent acts or abuse.
b) Abuse or ridicule of the physically or mentally handicapped.
c) Unnecessary depictions of cruelty to animals.
Gratuitous violence, cruelty, or horror.
No scenes encouraging alcohol consumption, drug addiction or smoking.
No vulgarity, obscenity, depravity or double entendres.
No scenes degrading women (despite many sexist movies being certified), including sexual violence (as much as possible).
No denigration by race, religion or other social group.
No promotion of sectarian, obscurantist, anti-scientific and anti-national attitudes.
Relations with foreign countries should not be affected.
No national symbols or emblems, except in accordance with the Emblems and Names (Prevention of Improper Use) Act, 1950 (12 of 1950).
Enforcement
Since 2004, censorship has been rigorously enforced. An incident was reported in which exhibitor staff – a clerk who sold the ticket, the usher who allowed minors to sit, a theatre manager and the partners of the theatre complex – were arrested for non-compliance with certification rules.
Composition and leadership
The board consists of a chairperson and 23 members, all of whom are appointed by the central government. Prasoon Joshi chairs the board; Joshi became its 28th chairperson on 11 August 2017, after Pahlaj Nihalani was fired. Nihalani had succeeded Leela Samson after Samson quit in protest of an appellate tribunal's overturning of a board decision to refuse certification for MSG: The Messenger. Samson had succeeded Sharmila Tagore.
The board, headquartered in Mumbai, has nine regional offices:
Bangalore
Chennai
Cuttack
Guwahati
Hyderabad
Kolkata
Mumbai
New Delhi
Thiruvananthapuram
Controversies
The board has been associated with a number of scandals. Film producers reportedly bribe the CBFC to obtain a U/A certificate, which entitles them to a 30-percent reduction in entertainment tax.
In 2002, War and Peace (a documentary film by Anand Patwardhan which depicted nuclear weapons testing and the September 11 attacks) had to be edited 21 times before the film was approved for release. According to Patwardhan, "The cuts that [the Board] asked for are so ridiculous that they won't hold up in court. But if these cuts do make it, it will be the end of freedom of expression in the Indian media." A court ruled that the cut requirement was unconstitutional, and the film was shown uncensored.
Also in 2002, Indian filmmaker and CBFC chair Vijay Anand proposed legalising the exhibition of X-rated films in selected cinemas. Anand said, "Porn is shown everywhere in India clandestinely ... and the best way to fight this onslaught of blue movies is to show them openly in theatres with legally authorised licences". Anand resigned less than a year after becoming chairperson in the wake of his proposal.
The board refused to certify Gulabi Aaina (a film about Indian transsexuals produced and directed by Sridhar Rangayan) in 2003; Rangayan unsuccessfully appealed the decision twice. Although the film is banned in India, it has been screened internationally.
Final Solution, a 2004 documentary examining the 2002 Gujarat riots between Hindus and Muslims, in which over 1,000 people were killed, was also banned. According to the board, the film was "highly provocative and may trigger off unrest and communal violence". After a sustained campaign, the ban was lifted in October of that year.
The CBFC demanded five cuts from the 2011 American film, The Girl with the Dragon Tattoo, because of nudity and rape scenes. The producers and the director, David Fincher, eventually decided not to release the film in India.
CEO Rakesh Kumar was arrested in August 2014 for accepting bribes to expedite the issuance of certificates. The board demanded four cuts (three visual and one audio) from the 2015 Malayalam film Chaayam Poosiya Veedu (directed by brothers Santosh Babusenan and Satish Babusenan) because of nude scenes. The directors refused to make the changes, and the film was not certified.
CBFC chairperson Leela Samson resigned in protest of political interference in the board's work in 2015 after its decision to refuse certification of the film, MSG: The Messenger, was overturned by an appellate tribunal. Samson was replaced by Pahlaj Nihalani, whose Bharatiya Janata Party affiliation triggered a wave of additional board resignations. The board was criticised for ordering the screen time of two kissing scenes in the James Bond film Spectre to be cut by half for release.
Udta Punjab (2016), a crime drama about drug issues in the state of Punjab, produced by Anurag Kashyap, Ekta Kapoor, et al., inspired a list of 94 cuts and 13 pointers (including an order to remove Punjabi city names). The Bombay High Court allowed the film's release with one cut and disclaimers. A copy of the film was leaked online, with evidence suggesting CBFC involvement. Kashyap posted on Facebook that although he did not object to free downloads, he hoped that viewers would pay for the film. The film eventually grossed over , a commercial success. In August 2017, days after his removal as CBFC chair, Nihalani said in an interview that he had received instructions from the Ministry of Information and Broadcasting to block the release of this film and at least one other.
Lipstick Under My Burkha (2017) by Alankrita Shrivastava, produced by Prakash Jha, was initially denied certification, with the CBFC claiming that "The story is lady oriented, their fantasy above life. There are contanious [sic] sexual scenes, abusive words, audio pornography and a bit sensitive touch about one particular section of society". The black comedy, which had been screened at international film festivals, was eligible for the Golden Globes. The filmmakers appealed to the board's Film Certification Appellate Tribunal (FCAT), which authorised its release. The FCAT requested some cuts (primarily to sex scenes), and the film was released with an "A" certificate. Shrivastava said she would have preferred no cuts, but felt the film's narrative and essence were left intact, and commended the FCAT's handling of the issue.
In 2018, Ashvin Kumar's film No Fathers in Kashmir at first received an "A" certificate. In his open letter to the CBFC chairperson, Kumar stated that for an independent film, this was "as good as banning the film". After appealing to the FCAT and incorporating a few cuts and disclaimers at its request, the film was granted a "U/A" certificate eight months after its initial submission.
References
External links
1952 establishments in Bombay State
Censorship in India
Certification marks
Film organisations in India
Ministry of Information and Broadcasting (India)
Motion picture rating systems
Government agencies established in 1952
Entertainment rating organizations | Central Board of Film Certification | Mathematics | 2,411 |
51,138,703 | https://en.wikipedia.org/wiki/The%20Chemical%20Engineer | The Chemical Engineer is a monthly chemical engineering technical and news magazine published by the Institution of Chemical Engineers (IChemE). It has technical articles of interest to practitioners and educators, and also addresses current events in world of chemical engineering including research, international business news and government policy as it affects the chemical engineering community. The magazine is sent to all members of the IChemE and is included in the cost of membership. Some parts of the magazine are available free online, including recent news and a series of biographies “Chemical Engineers who Changed the World”, although the core and the archive magazine is available only with a subscription. The online magazine also has freely available podcasts.
History
The formal journal of the IChemE was the “Transactions”, which was initially an annual publication. In order to keep members informed, a “Quarterly Bulletin – Institution of Chemical Engineers” was issued. When the Transactions became quarterly, the Bulletin was issued as a supplement. In 1956 both changed to bi-monthly, and the title was changed to “The Chemical Engineer” with the sub-title “Bulletin of the Institution of Chemical Engineers”. It kept the same numbering, so the first issue under the new title was number 125. According to the editorial, it would contain news and “articles and comments by members, handled less formally than in Transactions, relating both to practical matters arising from experience and to broader aspects of professional life.”
From 2002 it was published as “TCE” but reverted to its original title with issue 894 in December 2015.
References
External links
Official website
Monthly magazines published in the United Kingdom
Science and technology magazines published in the United Kingdom
Chemical industry in the United Kingdom
Chemical engineering journals
Magazines established in 1956
Professional and trade magazines
Institution of Chemical Engineers | The Chemical Engineer | Chemistry,Engineering | 339 |
452,219 | https://en.wikipedia.org/wiki/Drill%20bit | A drill bit is a cutting tool used in a drill to remove material to create holes, almost always of circular cross-section. Drill bits come in many sizes and shapes and can create different kinds of holes in many different materials. In order to create holes drill bits are usually attached to a drill, which powers them to cut through the workpiece, typically by rotation. The drill will grasp the upper end of a bit called the shank in the chuck.
Drills come in standardized drill bit sizes. A comprehensive drill bit and tap size chart lists metric and imperial sized drills alongside the required screw tap sizes. There are also certain specialized drill bits that can create holes with a non-circular cross-section.
Characteristics
Drill geometry has several characteristics:
The spiral (or rate of twist) in the drill bit controls the rate of chip removal. A fast spiral (high twist rate or "compact flute") drill bit is used in high feed rate applications under low spindle speeds, where removal of a large volume of chips is required. Low spiral (low twist rate or "elongated flute") drill bits are used in cutting applications where high cutting speeds are traditionally used, and where the material has a tendency to gall on the bit or otherwise clog the hole, such as aluminum or copper.
The point angle, or the angle formed at the tip of the bit, is determined by the material the bit will be operating in. Harder materials require a larger point angle, and softer materials require a sharper angle. The correct point angle for the hardness of the material influences wandering, chatter, hole shape, and wear rate.
The lip angle is the angle between the face of the cut material and the flank of the lip, and determines the amount of support provided to the cutting edge. A greater lip angle will cause the bit to cut more aggressively under the same amount of point pressure as a bit with a smaller lip angle. Both conditions can cause binding, wear, and eventual catastrophic failure of the tool. The proper amount of lip clearance is determined by the point angle. A very acute point angle has more web surface area presented to the work at any one time, requiring an aggressive lip angle, where a flat bit is extremely sensitive to small changes in lip angle due to the small surface area supporting the cutting edges.
The functional length of a bit determines how deep a hole can be drilled, and also determines the stiffness of the bit and accuracy of the resultant hole. While longer bits can drill deeper holes, they are more flexible meaning that the holes they drill may have an inaccurate location or wander from the intended axis. Twist drill bits are available in standard lengths, referred to as Stub-length or Screw-Machine-length (short), the extremely common Jobber-length (medium), and Taper-length or Long-Series (long).
Most drill bits for consumer use have straight shanks. For heavy duty drilling in industry, bits with tapered shanks are sometimes used. Other types of shank used include hex-shaped, and various proprietary quick release systems.
The diameter-to-length ratio of the drill bit is usually between 1:1 and 1:10. Much higher ratios are possible (e.g., "aircraft-length" twist bits, pressured-oil gun drill bits, etc.), but the higher the ratio, the greater the technical challenge of producing good work.
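The spiral and spindle-speed considerations above connect bit diameter to rotational speed through a standard machining relation; the short helper below is an added illustration (the 25 m/min cutting-speed value is hypothetical) showing that, for a fixed surface speed, smaller bits must turn faster.

```python
import math

# Standard machining relation between cutting (surface) speed and spindle
# speed: N [rev/min] = 1000 * Vc / (pi * D), with Vc in m/min and D in mm.
# This helper is an added illustration; the 25 m/min value is hypothetical.
def spindle_rpm(cutting_speed_m_min: float, diameter_mm: float) -> float:
    return 1000.0 * cutting_speed_m_min / (math.pi * diameter_mm)

for d in (3.0, 6.0, 10.0):                # bit diameters, mm
    print(f"{d:4.1f} mm bit -> {spindle_rpm(25.0, d):6.0f} rpm")
```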
The best geometry to use depends upon the properties of the material being drilled.
Materials
Many different materials are used for or on drill bits, depending on the required application. Many hard materials, such as carbides, are much more brittle than steel, and are far more subject to breaking, particularly if the drill is not held at a very constant angle to the workpiece; e.g., when hand-held.
Steels
Soft low-carbon steel bits are inexpensive, but do not hold an edge well and require frequent sharpening. They are used only for drilling wood; even working with hardwoods rather than softwoods can noticeably shorten their lifespan.
Bits made from high-carbon steel are more durable than low-carbon steel bits due to the properties conferred by hardening and tempering the material. If they are overheated (e.g., by frictional heating while drilling) they lose their temper, resulting in a soft cutting edge. These bits can be used on wood or metal.
High-speed steel (HSS) is a form of tool steel; HSS bits are hard and much more resistant to heat than high-carbon steel. They can be used to drill metal, hardwood, and most other materials at greater cutting speeds than carbon-steel bits, and have largely replaced carbon steels.
Cobalt steel alloys are variations on high-speed steel that contain more cobalt. They hold their hardness at much higher temperatures and are used to drill stainless steel and other hard materials. The main disadvantage of cobalt steels is that they are more brittle than standard HSS.
Others
Tungsten carbide and other carbides are extremely hard and can drill virtually all materials, while holding an edge longer than other bits. The material is expensive and much more brittle than steels; consequently they are mainly used for drill-bit tips, small pieces of hard material fixed or brazed onto the tip of a bit made of less hard metal. However, it is becoming common in job shops to use solid carbide bits. In very small sizes it is difficult to fit carbide tips; in some industries, most notably printed circuit board manufacturing, requiring many holes with diameters less than 1 mm, solid carbide bits are used.
Polycrystalline diamond (PCD) is among the hardest of all tool materials and is therefore extremely resistant to wear. It consists of a layer of diamond particles, typically about thick, bonded as a sintered mass to a tungsten-carbide support. Bits are fabricated using this material by either brazing small segments to the tip of the tool to form the cutting edges or by sintering PCD into a vein in the tungsten-carbide "nib". The nib can later be brazed to a carbide shaft; it can then be ground to complex geometries that would otherwise cause braze failure in the smaller "segments". PCD bits are typically used in the automotive, aerospace, and other industries to drill abrasive aluminum alloys, carbon-fiber reinforced plastics, and other abrasive materials, and in applications where machine downtime to replace or sharpen worn bits is exceptionally costly. PCD is not used on ferrous metals due to excess wear resulting from a reaction between the carbon in the PCD and the iron in the metal.
Coatings
Black oxide is an inexpensive black coating. A black oxide coating provides heat resistance and lubricity, as well as corrosion resistance. The coating increases the life of high-speed steel bits.
Titanium nitride (TiN) is a very hard metallic material that can be used to coat a high-speed steel bit (usually a twist bit), extending the cutting life by three or more times. Even after sharpening, the leading edge of coating still provides improved cutting and lifetime.
Titanium aluminum nitride (TiAlN) is a similar coating that can extend tool life five or more times.
Titanium carbon nitride (TiCN) is another coating also superior to TiN.
Diamond powder is used as an abrasive, most often for cutting tile, stone, and other very hard materials. Large amounts of heat are generated by friction, and diamond-coated bits often have to be water-cooled to prevent damage to the bit or the workpiece.
Zirconium nitride has been used as a drill-bit coating for some tools under the Craftsman brand name.
Al-Chrome Silicon Nitride (AlCrSi/Ti)N is a multilayer coating made of alternating nanolayer, developed using chemical vapor deposition technique, is used in drilling carbon fiber reinforced polymer (CFRP) and CFRP-Ti stack. (AlCrSi/Ti)N is a superhard ceramic coating, which performs better than other coated and uncoated drill.
BAM coating, boron-aluminum-magnesium (AlMgB14), is a superhard ceramic coating also used in composite drilling.
Universal bits
General-purpose drill bits can be used in wood, metal, plastic, and most other materials.
Twist drill bit
The twist drill bit is the type produced in largest quantity today. It comprises a cutting point at the tip of a cylindrical shaft with helical flutes; the flutes act as an Archimedean screw and lift swarf out of the hole.
The modern-style twist drill bit was invented by Sir Joseph Whitworth in 1860. They were later improved by Steven A. Morse of East Bridgewater, Massachusetts, who experimented with the pitch of the twist. The original method of manufacture was to cut two grooves in opposite sides of a round bar, then to twist the bar (giving the tool its name) to produce the helical flutes. Nowadays, the drill bit is usually made by rotating the bar while moving it past a grinding wheel to cut the flutes in the same manner as cutting helical gears.
Twist drill bits range in diameter from and can be as long as .
The geometry and sharpening of the cutting edges is crucial to the performance of the bit. Small bits that become blunt are often discarded because sharpening them correctly is difficult and they are cheap to replace. For larger bits, special grinding jigs are available. A special tool grinder is available for sharpening or reshaping cutting surfaces on twist drill bits in order to optimize the bit for a particular material.
Manufacturers can produce special versions of the twist drill bit, varying the geometry and the materials used, to suit particular machinery and particular materials to be cut. Twist drill bits are available in the widest choice of tooling materials. However, even for industrial users, most holes are drilled with standard high-speed steel bits.
The most common twist drill bit (sold in general hardware stores) has a point angle of 118 degrees, acceptable for use in wood, metal, plastic, and most other materials, although it does not perform as well as using the optimum angle for each material. In most materials it does not tend to wander or dig in.
A more aggressive angle, such as 90 degrees, is suited for very soft plastics and other materials; it would wear rapidly in hard materials. Such a bit is generally self-starting and can cut very quickly. A shallower angle, such as 150 degrees, is suited for drilling steels and other tougher materials. This style of bit requires a starter hole, but does not bind or suffer premature wear so long as a suitable feed rate is used.
Drill bits with no point angle are used in situations where a blind, flat-bottomed hole is required. These bits are very sensitive to changes in lip angle, and even a slight change can result in an inappropriately fast cutting drill bit that will suffer premature wear.
Long series drill bits are unusually long twist drill bits. However, they are not the best tool for routinely drilling deep holes, as they require frequent withdrawal to clear the flutes of swarf and to prevent breakage of the bit. Instead, gun drill (through coolant drill) bits are preferred for deep hole drilling.
Step drill bit
A step drill bit is a drill bit that has the tip ground down to a different diameter. The transition between this ground diameter and the original diameter is either straight, to form a counterbore, or angled, to form a countersink. The advantage to this style is that both diameters have the same flute characteristics, which keeps the bit from clogging when drilling in softer materials, such as aluminum; in contrast, a drill bit with a slip-on collar does not have the same benefit. Most of these bits are custom-made for each application, which makes them more expensive.
Unibit
A unibit (often called a step drill bit) is a roughly conical bit with a stairstep profile. Due to its design, a single bit can be used for drilling a wide range of hole sizes. Some bits come to a point and are thus self-starting. The larger-size bits have blunt tips and are used for hole enlarging.
Unibits are commonly used on sheet metal and in general construction. One drill bit can drill the entire range of holes necessary on a countertop, speeding up installation of fixtures. They are often used on softer materials, such as plywood, particle board, drywall, acrylic, and laminate. They can be used on very thin sheet metal, but metals tend to cause premature bit wear and dulling.
Unibits are ideal for use in electrical work where thin steel, aluminum or plastic boxes and chassis are encountered. The short length of the unibit and ability to vary the diameter of the finished hole is an advantage in chassis or front panel work. The finished hole can often be made quite smooth and burr-free, especially in plastic.
An additional use of unibits is deburring holes left by other bits, as the sharp increase to the next step size allows the cutting edge to scrape burrs off the entry surface of the workpiece. However, the straight flute is poor at chip ejection, and can cause a burr to be formed on the exit side of the hole, more so than a spiral twist drill bit turning at high speed.
The unibit was invented by Harry C. Oakes and patented in 1973. It was sold only by the Unibit Corporation in the 1980s until the patent expired, and was later sold by other companies. Unibit is a trademark of Irwin Industrial Tools.
Although it is claimed that the stepped drill was invented by Harry C. Oakes, it was in fact conceived by George Godbold and first produced by Bradley Engineering, Wandsworth, London in the 1960s, under the name Bradrad. It was marketed under this name until the patent was sold to Halls Ltd in the UK, by whom it is still produced.
Hole saw
Hole saws take the form of a short open cylinder with saw-teeth on the open edge, used for making relatively large holes in thin material. They remove material only from the edge of the hole, cutting out an intact disc of material, unlike many drills which remove all material in the interior of the hole. They can be used to make large holes in wood, sheet metal and other materials.
For metal
Center and spotting drill bit
Center drill bits, occasionally known as Slocombe drill bits, are used in metalworking to provide a starting hole for a larger-sized drill bit or to make a conical indentation in the end of a workpiece in which to mount a lathe center. In either use, the name seems appropriate, as the bit is either establishing the center of a hole or making a conical hole for a lathe center. However, the true purpose of a center drill bit is the latter task, while the former task is best done with a spotting drill bit (as explained in detail below). Nevertheless, because of the frequent lumping together of both the terminology and the tool use, suppliers may call center drill bits combined-drill-and-countersinks in order to make it unambiguously clear what product is being ordered. They are numbered from 00 to 10 (smallest to largest).
Use in making holes for lathe centers
Center drill bits are meant to create a conical hole for "between centers" manufacturing processes (typically lathe or cylindrical-grinder work). That is, they provide a location for a (live, dead, or driven) center to locate the part about an axis. A workpiece machined between centers can be safely removed from one process (perhaps turning in a lathe) and set up in a later process (perhaps a grinding operation) with a negligible loss in the co-axiality of features (usually total indicator reading (TIR) less than ; and TIR < is held in cylindrical grinding operations, as long as conditions are correct).
Use in spotting hole centers
Traditional twist drill bits may tend to wander when started on an unprepared surface. Once a bit wanders off course it is difficult to bring it back on center. A center drill bit frequently provides a reasonable starting point as it is short and therefore has a reduced tendency to wander when drilling is started.
While the above is a common use of center drill bits, it is a technically incorrect practice and should not be considered for production use. The correct tool to start a traditionally drilled hole (a hole drilled by a high-speed steel (HSS) twist drill bit) is a spotting drill bit (or a spot drill bit, as they are referenced in the U.S.). The included angle of the spotting drill bit should be the same as, or greater than, the conventional drill bit so that the drill bit will then start without undue stress on the bit's corners, which would cause premature failure of the bit and a loss of hole quality.
Most modern solid-carbide bits should not be used in conjunction with a spot drill bit or a center drill bit, as solid-carbide bits are specifically designed to start their own hole. Usually, spot drilling will cause premature failure of the solid-carbide bit and a certain loss of hole quality. If it is deemed necessary to chamfer a hole with a spot or center drill bit when a solid-carbide drill bit is used, it is best practice to do so after the hole is drilled.
When drilling with a hand-held drill the flexibility of the bit is not the primary source of inaccuracy—it is the user's hands. Therefore, for such operations, a center punch is often used to spot the hole center prior to drilling a pilot hole.
Core drill bit
The term core drill bit is used for two quite different tools.
Enlarging holes
A bit used to enlarge an existing hole is called a core drill bit. The existing hole may be the result of a core from a casting or a stamped (punched) hole. The name comes from its first use, for drilling out the hole left by a foundry core, a cylinder placed in a mould for a casting that leaves an irregular hole in the product. This core drill bit is solid.
These core drill bits are similar in appearance to reamers, as they have no cutting point or means of starting a hole. They have 3 or 4 flutes, which enhance the finish of the hole and ensure the bit cuts evenly. Core drill bits differ from reamers in the amount of material they are intended to remove. A reamer is only intended to enlarge a hole a slight amount which, depending on the reamer's size, may be anything from 0.1 millimeter to perhaps a millimeter. A core drill bit may be used to double the size of a hole.
Using an ordinary two-flute twist drill bit to enlarge the hole resulting from a casting core will not produce a clean result; the hole will possibly be out of round, off center, and generally of poor finish. The two-flute drill bit also has a tendency to grab on any protuberance (such as flash) which may occur in the product.
Extracting core
A hollow cylindrical bit which will cut a hole with an annular cross-section, leaving the inner cylinder of material (the "core") intact so that it can often be removed afterwards, is also called a core drill bit or annular cutter. Unlike other drills, the purpose is often to retrieve the core rather than simply to make a hole. A diamond core drill bit is intended to cut an annular hole in the workpiece. Large bits of similar shape are used for geological work, where a deep hole is drilled in sediment or ice and the drill bit, which now contains an intact core of the material drilled with a diameter of several centimeters, is retrieved to allow study of the strata.
Countersink bit
A countersink is a conical hole cut into a manufactured object; a countersink bit (sometimes called simply countersink) is the cutter used to cut such a hole. A common use is to allow the head of a bolt or screw, with a shape exactly matching the countersunk hole, to sit flush with or below the surface of the surrounding material. (By comparison, a counterbore makes a flat-bottomed hole that might be used with a hex-headed capscrew.) A countersink may also be used to remove the burr left from a drilling or tapping operation.
Ejector drill bit
Ejector drill bits are used almost exclusively for deep hole drilling of medium to large diameter holes (approximately diameter). An ejector drill bit uses a specially designed carbide cutter at the point. The bit body is essentially a tube within a tube. Flushing water travels down between the two tubes. Chip removal is back through the center of the bit.
Gun drill bit
Gun drills are straight fluted drills which allow cutting fluid (either compressed air or a suitable liquid) to be injected through the drill's hollow body to the cutting face.
Indexable drill bit
Indexable drill bits are primarily used in CNC and other high precision or production equipment, and are the most expensive type of drill bit, costing the most for a given diameter and length. Like indexable lathe tools and milling cutters, they use replaceable carbide or ceramic inserts as a cutting face to alleviate the need for a tool grinder. One insert is responsible for the outer radius of the cut, and another insert is responsible for the inner radius. The tool itself handles the point deformity, as it is a low-wear task. The bit is hardened and coated against wear far more than the average drill bit, as the shank is non-consumable. Almost all indexable drill bits have multiple coolant channels for prolonged tool life under heavy usage. They are also readily available in odd configurations, such as straight flute, fast spiral, multiflute, and a variety of cutting face geometries.
Typically indexable drill bits are used in holes that are no deeper than about 5 times the bit diameter. They are capable of quite high axial loads and cut very fast.
Left-hand bit
Left-hand bits are almost always twist bits and are predominantly used in the repetition engineering industry on screw machines or drilling heads. Left-handed drill bits allow a machining operation to continue where either the spindle cannot be reversed or the design of the machine makes it more efficient to run left-handed. With the increased use of the more versatile CNC machines, their use is less common than when specialized machines were required for machining tasks.
Screw extractors are essentially left-hand bits of specialized shape, used to remove common right-hand screws whose heads are broken or too damaged to allow a screwdriver tip to engage, making use of a screwdriver impossible. The extractor is pressed against the damaged head and rotated counter-clockwise and will tend to jam in the damaged head and then turn the screw counter-clockwise, unscrewing it. For screws that break off deeper in the hole, an extractor set will often include left handed drill bits of the appropriate diameters so that grab holes can be drilled into the screws in a left handed direction, preventing further tightening of the broken piece.
Metal spade bit
A spade drill bit for metal is a two part bit with a tool holder and an insertable tip, called an insert. The inserts come in various sizes that range from . The tool holder usually has a coolant passage running through it. They are capable of cutting to a depth of about 10 times the bit diameter. This type of drill bit can also be used to make stepped holes.
Straight fluted bit
Straight fluted drill bits do not have a helical twist like twist drill bits do. They are used when drilling copper or brass because they have less of a tendency to "dig in" or grab the material.
Trepan
A trepan, sometimes called a BTA drill bit (after the Boring and Trepanning Association), is a drill bit that cuts an annulus and leaves a center core. Trepans usually have multiple carbide inserts and rely on water to cool the cutting tips and to flush chips out of the hole. Trepans are often used to cut large diameters and deep holes. Typical bit diameters are and hole depth from up to .
For wood
Brad point bit
The brad point drill bit (also known as lip and spur drill bit, and dowel drill bit) is a variation of the twist drill bit which is optimized for drilling in wood.
Conventional twist drill bits tend to wander when presented to a flat workpiece. For metalwork, this is countered by drilling a pilot hole with a spotting drill bit. In wood, the brad point drill bit is another solution: the center of the drill bit is given not the straight chisel of the twist drill bit, but a spur with a sharp point, and four sharp corners to cut the wood. While drilling, the sharp point of the spur pushes into the soft wood to keep the drill bit in line.
Metals are typically isotropic, so even an ordinary twist drill bit will shear the edges of the hole cleanly. Wood drilled across the grain, however, produces long strands of wood fiber. These long strands tend to pull out of the hole, rather than being cleanly cut at the hole edge. The brad point drill bit has the outside corner of the cutting edges leading, so that it cuts the periphery of the hole before the inner parts of the cutting edges plane off the base of the hole. By cutting the periphery first, the lip maximizes the chance that the fibers can be cut cleanly, rather than having to be pulled messily from the timber.
Brad point drill bits are also effective in soft plastic. When using conventional twist drill bits in a handheld drill, where the drilling direction is not maintained perfectly throughout the operation, there is a tendency for hole edges to be "smeared" due to side friction and heat.
In metal, the brad point drill bit is confined to drilling only the thinnest and softest sheet metals, ideally with a drill press. The bits have an extremely fast cutting tool geometry: no point angle, combined with a large (considering the flat cutting edge) lip angle, causes the edges to take a very aggressive cut with relatively little point pressure. This means these bits tend to bind in metal; given a workpiece of sufficient thinness, they have a tendency to punch through and leave the bit's cross-sectional geometry behind.
Brad point drill bits are ordinarily available in diameters from .
Wood spade bit
Spade bits are used for rough boring in wood. They tend to cause splintering when they emerge from the workpiece. Woodworkers avoid splintering by finishing the hole from the opposite side of the work. Spade bits are flat, with a centering point and two cutters. The cutters are often equipped with spurs in an attempt to ensure a cleaner hole. With their small shank diameters relative to their boring diameters, spade bit shanks often have flats forged or ground into them to prevent slipping in drill chucks. Some bits are equipped with long shanks and have a small hole drilled through the flat part, allowing them to be used much like a bell-hanger bit. Intended for high speed use, they are used with electric hand drills. Spade bits are also sometimes referred to as "paddle bits".
Spade drill bits are ordinarily available in diameters from 6 to 36 mm, or to inches.
Spoon bit
Spoon bits consist of a grooved shank with a point shaped somewhat like the bowl of a spoon, with the cutting edge on the end. The more common type is like a gouge bit that ends in a slight point. This is helpful for starting the hole, as it has a center that will not wander or walk. These bits are used by chair-makers for boring or reaming holes in the seats and arms of chairs. Their design is ancient, going back to Roman times. Spoon bits have even been found in Viking excavations. Modern spoon bits are made of hand-forged carbon steel, carefully heat-treated and then hand ground to a fine edge.
Spoon bits are the traditional boring tools used with a brace. They should never be used with a power drill of any kind. Their key advantage over regular brace bits and power drill bits is that the angle of the hole can be adjusted. This is very important in chairmaking, because all the angles are usually eyeballed. Another advantage is that they do not have a lead screw, so a hole can be bored in a chair leg without the lead screw peeking out the other side.
When reaming a pre-bored straight-sided hole, the spoon bit is inserted into the hole and rotated in a clockwise direction with a carpenters' brace until the desired taper is achieved. When boring into solid wood, the bit should be started in the vertical position; after a "dish" has been created and the bit has begun to "bite" into the wood, the angle of boring can be changed by tilting the brace a bit out of the vertical. Holes can be drilled precisely, cleanly and quickly in any wood, at any angle of incidence, with total control of direction and the ability to change that direction at will.
Parallel spoon bits are used primarily for boring holes in the seat of a Windsor chair to take the back spindles, or similar round-tenon work when assembling furniture frames in green woodworking work.
The spoon bit may be honed by using a slipstone on the inside of the cutting edge; the outside edge should never be touched.
Forstner bit
Forstner bits were patented by Benjamin Forstner in 1886. They bore precise, flat-bottomed holes in wood, in any orientation with respect to the wood grain. They can cut on the edge of a block of wood, and can cut overlapping holes; for such applications they are normally used in drill presses or lathes rather than in hand-held electric drills. Because of the flat bottom of the hole, they are useful for drilling through veneer already glued to add an inlay.
The bit includes a center brad point which guides it throughout the cut (and incidentally spoils the otherwise flat bottom of the hole). The cylindrical cutter around the perimeter shears the wood fibers at the edge of the bore, and also helps guide the bit into the material more precisely. Forstner bits have radial cutting edges to plane off the material at the bottom of the hole. The bits shown in the images have two radial edges; other designs may have more. Forstner bits have no mechanism to clear chips from the hole, and therefore must be pulled out periodically.
Sawtooth bits are also available, which include many more cutting edges to the cylinder. These cut faster, but produce a more ragged hole. They have advantages over Forstner bits when boring into end grain.
Bits are commonly available in sizes from diameter. Sawtooth bits are available up to diameter.
Originally the Forstner bit was very successful with gunsmiths because of its ability to drill an exceedingly smooth-sided hole.
Center bit
The center bit is optimized for drilling in wood with a hand brace. Many different designs have been produced.
The center of the bit is a tapered screw thread. This screws into the wood as the bit is turned, and pulls the bit into the wood. There is no need for any force to push the bit into the workpiece, only the torque to turn the bit. This is ideal for a bit for a hand tool. The radial cutting edges remove a slice of wood of thickness equal to the pitch of the central screw for each rotation of the bit. To pull the bit from the hole, either the female thread in the wood workpiece must be stripped, or the rotation of the bit must be reversed.
The edge of the bit has a sharpened spur to cut the fibers of the wood, as in the brad point drill bit. A radial cutting edge planes the wood from the base of the hole. In this version, there is minimal or no spiral to remove chips from the hole. The bit must be periodically withdrawn to clear the chips.
Some versions have two spurs. Some have two radial cutting edges.
Center bits do not cut well in the end grain of wood. The central screw tends to pull out, or to split the wood along the grain, and the radial edges have trouble cutting through the long wood fibers.
Center bits are made of relatively soft steel, and can be sharpened with a file.
Auger bit
The cutting principles of the auger bit are the same as those of the center bit above. The auger adds a long deep spiral flute for effective chip removal.
Two styles of auger bit are commonly used in hand braces: the Jennings or Jennings-pattern bit has a self-feeding screw tip, two spurs and two radial cutting edges. This bit has a double flute starting from the cutting edges, and extending several inches up the shank of the bit, for waste removal. This pattern of bit was developed by Russell Jennings in the mid-19th century.
The Irwin or solid-center auger bit is similar, the only difference being that one of the cutting edges has only a "vestigial flute" supporting it, which extends only about up the shank before ending. The other flute continues full-length up the shank for waste removal. The Irwin bit may afford greater space for waste removal, greater strength (because the design allows for a center shank of increased size within the flutes, as compared to the Jennings bits), or lower manufacturing costs. This style of bit was invented in 1884, and the rights sold to Charles Irwin who patented and marketed this pattern the following year.
Both styles of auger bits were manufactured by several companies throughout the early- and mid-20th century, and are still available new from select sources today.
The diameter of auger bits for hand braces is commonly expressed by a single number, indicating the size in 16ths of an inch. For example, #4 is 4/16 or 1/4 in (6 mm), #6 is 6/16 or 3/8 in (9 mm), #9 is 9/16 in (14 mm), and #16 is 16/16 or 1 in (25 mm). Sets commonly consist of #4-16 or #4-10 bits.
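As a quick illustration of this numbering convention, a gauge number can be converted to a diameter in a few lines of code. The helper below is hypothetical, written only to illustrate the arithmetic; it is not part of any standard library:

```python
# Convert a hand-brace auger gauge number (16ths of an inch) to a diameter.
# Illustrative only; the numbering convention is described above.

def auger_diameter(gauge: int) -> tuple[float, float]:
    """Return the bit diameter as (inches, millimetres) for a gauge number."""
    inches = gauge / 16.0
    return inches, inches * 25.4

for gauge in (4, 6, 9, 16):
    inch, mm = auger_diameter(gauge)
    print(f"#{gauge}: {inch:g} in ({mm:.1f} mm)")
# Output: #4: 0.25 in (6.4 mm), #6: 0.375 in (9.5 mm),
#         #9: 0.5625 in (14.3 mm), #16: 1 in (25.4 mm)
```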
The bit shown in the picture is a modern design for use in portable power tools, made in the UK in about 1995. It has a single spur, a single radial cutting edge and a single flute. Similar auger bits are made with diameters from 6 mm (3/16 in) to 30 mm (1 3/16 in). Augers up to long are available, where the chip-clearing capability is especially valuable for drilling deep holes.
Gimlet bit
The gimlet bit is a very old design. The bit is the same style as that used in the gimlet, a self-contained tool for boring small holes in wood by hand. Since about 1850, gimlets have had a variety of cutter designs, but some are still produced with the original version. The gimlet bit is intended to be used in a hand brace for drilling into wood. It is the usual style of bit for use in a brace for holes below about diameter.
The tip of the gimlet bit acts as a tapered screw, to draw the bit into the wood and to begin forcing aside the wood fibers, without necessarily cutting them. The cutting action occurs at the side of the broadest part of the cutter. Most drill bits cut the base of the hole. The gimlet bit cuts the side of the hole.
Hinge sinker bit
The hinge sinker bit is an example of a custom drill bit design for a specific application. Many European kitchen cabinets are made from particle board or medium-density fiberboard (MDF) with a laminated melamine resin veneer. Those types of pressed wood boards are not very strong, and the screws of butt hinges tend to pull out. A specialist hinge has been developed which uses the walls of a hole, bored in the particle board, for support. This is a very common and relatively successful construction method.
A Forstner bit could bore the mounting hole for the hinge, but particle board and MDF are very abrasive materials, and steel cutting edges soon wear. A tungsten carbide cutter is needed, but the complex shape of a Forstner bit is difficult to manufacture in carbide, so this special drill bit with a simpler shape is commonly used. It has cutting edges of tungsten carbide brazed to a steel body; a center spur keeps the bit from wandering.
Adjustable wood bits
An adjustable wood bit, also known as an expansive wood bit, has a small center pilot bit with an adjustable, sliding cutting edge mounted above it, usually containing a single sharp point at the outside, with a set screw to lock the cutter in position. When the cutting edge is centered on the bit, the hole drilled will be small, and when the cutting edge is slid outwards, a larger hole is drilled. This allows a single drill bit to drill a wide variety of holes, and can take the place of a large, heavy set of different size bits, as well as providing uncommon bit sizes. A ruler or vernier scale is usually provided to allow precise adjustment of the bit size.
These bits are available both in a version similar to an auger bit or brace bit, designed for low speed, high torque use with a brace or other hand drill (pictured to the right), or as a high speed, low torque bit meant for a power drill. While the shape of the cutting edges is different, and one uses screw threads and the other a twist bit for the pilot, the method of adjusting them remains the same.
Other materials
Diamond core bit
The diamond masonry mortar bit is a hybrid drill bit, designed to work as a combination router and drill bit. It consists of a steel shell, with the diamonds embedded in metal segments attached to the cutting edge. These drill bits are used at relatively low speeds.
Masonry drill bit
The masonry bit shown here is a variation of the twist drill bit. The bulk of the tool is a relatively soft steel, and is machined with a mill rather than ground. An insert of tungsten carbide is brazed into the steel to provide the cutting edges.
Masonry bits typically are used with a hammer drill, which hammers the bit into the material being drilled as it rotates; the hammering breaks up the masonry at the drill bit tip, and the rotating flutes carry away the dust. Rotating the bit also brings the cutting edges onto a fresh portion of the hole bottom with every hammer blow. Hammer drill bits often use special shank shapes such as the SDS type, which allows the bit to slide within the chuck when hammering, without the whole heavy chuck executing the hammering motion.
Masonry bits of the style shown are commonly available in diameters from 3 mm to 40 mm. For larger diameters, core bits are used. Masonry bits up to long can be used with hand-portable power tools, and are very effective for installing wiring and plumbing in existing buildings.
A star drill, similar in appearance and function to a hole punch or chisel, is used as a hand-powered drill in conjunction with a hammer to drill into stone and masonry. A star drill bit's cutting edge consists of several blades joined at the center to form a star pattern.
Glass drill bit
Glass bits have a spade-shaped carbide point. They generate high temperatures and have a very short life. Holes are generally drilled at low speed with a succession of increasing bit sizes. Diamond drill bits can also be used to cut holes in glass, and last much longer.
Ceramic drill bit
Ceramic drill bits are made to drill through glazed and unglazed ceramic tiles, for instance for installing bathroom fittings.
PCB through-hole drill bit
A great number of holes with small diameters of about 1 mm or less must be drilled in printed circuit boards (PCBs) used by electronic equipment with through-hole components. Most PCBs are made of highly abrasive fiberglass, which quickly wears steel bits, especially given the hundreds or thousands of holes on most circuit boards. To solve this problem, solid tungsten carbide twist bits, which drill quickly through the board while providing a moderately long life, are almost always used. Carbide PCB bits are estimated to outlast high-speed steel bits by a factor of ten or more. Other options sometimes used are diamond or diamond-coated bits.
In industry, virtually all drilling is done by automated machines, and the bits are often automatically replaced by the equipment as they wear, as even solid carbide bits do not last long in constant use. PCB bits, of narrow diameter, typically mount in a collet rather than a chuck, and come with standard-size shanks, often with pre-installed stops to set them at an exact depth every time when being automatically chucked by the equipment.
Very high rotational speeds—30,000 to 100,000 RPM or even higher—are used; this translates to a reasonably fast linear speed of the cutting tip in these very small diameters. The high speed, small diameter, and the brittleness of the material, make the bits very vulnerable to breaking, particularly if the angle of the bit to the workpiece changes at all, or the bit contacts any object. Drilling by hand is not practical, and many general-purpose drilling machines designed for larger bits rotate too slowly and wobble too much to use carbide bits effectively.
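For a sense of scale, the peripheral (surface) speed of the cutting edge is the bit's circumference multiplied by the rotational speed. The following back-of-the-envelope sketch (illustrative only, not from any machining library) shows why small diameters demand such high RPM:

```python
import math

def surface_speed_m_per_min(diameter_mm: float, rpm: float) -> float:
    """Peripheral cutting speed v = pi * d * n, in metres per minute."""
    return math.pi * (diameter_mm / 1000.0) * rpm

# A 1 mm PCB bit at the low and high ends of the quoted range:
for rpm in (30_000, 100_000):
    print(f"{rpm} RPM -> {surface_speed_m_per_min(1.0, rpm):.0f} m/min")
# 30000 RPM -> 94 m/min; 100000 RPM -> 314 m/min
```

Even at 100,000 RPM, a 1 mm bit reaches only about 314 m/min of surface speed, a figure a 10 mm bit would already reach at 10,000 RPM.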
Resharpened and easily available PCB drills have historically been used in many prototyping and home PCB labs, using a high-speed rotary tool for small-diameter bits (such as a Moto-Tool by Dremel) in a stiff drill-press jig. If used for other materials these tiny bits must be evaluated for equivalent cutting speed vs material resistance to the cut (hardness), as the bit's rake angle and expected feed per revolution are optimised for high-speed automated use on fiberglass PCB substrate.
Installer bit
Fishing bit
Installer bits, also known as bell-hanger bits or fishing bits, are a type of twist drill bit for use with a hand-portable power tool. The key distinguishing feature of an installer bit is a transverse hole drilled through the web of the bit near the tip. Once the bit has penetrated a wall, a wire can be threaded through the hole and the bit pulled back out, pulling the wire with it. The wire can then be used to pull a cable or pipe back through the wall. This is especially helpful where the wall has a large cavity, where threading a fish tape could be difficult. Some installer bits have a transverse hole drilled at the shank end as well. Once a hole has been drilled, the wire can be threaded through the shank end, the bit released from the chuck, and all pulled forward through the drilled hole. These bits are made for cement, block and brick; they are not for drilling into wood. Sinclair Smith of Brooklyn, New York was issued a patent for this invention on January 25, 1898.
Installer bits are available in various materials and styles for drilling wood, masonry and metal.
Flexible shaft bit
Another, different, bit also called an installer bit has a very long flexible shaft, typically up to long, with a small twist bit at the end. The shaft is made of spring steel instead of hardened steel, so it can be flexed while drilling without breaking. This allows the bit to be curved inside walls, for example to drill through studs from a light switch box without needing to remove any material from the wall. These bits usually come with a set of special tools to aim and flex the bit to reach the desired location and angle, although the problem of seeing where the operator is drilling still remains.
This flexible installer bit is used in the US, but does not appear to be routinely available in Europe.
Drill bit shank
Different shapes of shank are used. Some are simply the most appropriate for the chuck used; in other cases particular combinations of shank and chuck give performance advantages, such as allowing higher torque, greater centering accuracy, or efficient hammering action.
See also
Drill and tap size chart
Drill bit shank
Drill bit sizes
Drill rod
Endmill
References
Citations
Cited references
External links
Nomenclature
Hole making
Metallic objects
Woodworking tools | Drill bit | Physics | 9,178 |
2,829,162 | https://en.wikipedia.org/wiki/Green%20wave | A green wave occurs when a series of traffic lights (usually three or more) are coordinated to allow continuous traffic flow over several intersections in one main direction.
Any vehicle traveling along with the green wave (at an approximate speed decided upon by the traffic engineers) will see a progressive cascade of green lights, and not have to stop at intersections. This allows higher traffic loads, and reduces noise and energy use (because less acceleration and braking is needed). In practical use, only a group of cars (known as a "platoon", the size of which is defined by the signal times) can use the green wave before the time band is interrupted to give way to other traffic flows.
The coordination of the signals is sometimes done dynamically, according to sensor data of currently existing traffic flows; otherwise it is done statically, by the use of timers. Under certain circumstances, green waves can be interwoven with each other, but this increases their complexity and reduces usability, so in conventional set-ups only the roads and directions with the heaviest loads get this preferential treatment.
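A minimal sketch of the static (timer-based) case, assuming one common cycle time at every signal and a single design speed. The function and values below are hypothetical, written only to illustrate the idea:

```python
# Static green-wave offsets: each signal's green phase starts when the
# platoon released by the first signal is expected to arrive, modulo
# the common cycle time.

def green_wave_offsets(distances_m, design_speed_kmh, cycle_s):
    """Return green-start offsets in seconds for signals along a corridor.

    distances_m      -- cumulative distance of each signal from the first
    design_speed_kmh -- speed the platoon is expected to maintain
    cycle_s          -- cycle time shared by all signals
    """
    speed_ms = design_speed_kmh / 3.6  # km/h to m/s
    return [round((d / speed_ms) % cycle_s, 1) for d in distances_m]

# Four signals 400 m apart, 50 km/h design speed, 90 s common cycle:
print(green_wave_offsets([0, 400, 800, 1200], 50, 90))
# -> [0.0, 28.8, 57.6, 86.4] (seconds after the first signal turns green)
```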
In 2011, a study modeled the implementation of green waves during the night in a busy Manchester suburb (Chorlton-cum-Hardy) using S-Paramics microsimulation and the AIRE emissions module. The results showed that using green wave signal setups on a network has the potential to:
Reduce , , and PM10 emissions from traffic.
Reduce fuel consumption of vehicles.
Be used on roads that intersect with other green waves.
Reduce the time cars wait at side roads.
Give pedestrians more time to cross at crossings and help them to cross streets as vehicles travel in platoons.
Control the speed of traffic in urban areas.
Reduce component wear of vehicles and indirect energy consumption through their manufacture.
A green wave in both directions may be possible with different speed recommendations for each direction; otherwise, traffic from one direction may reach a signal sooner than traffic from the other, unless the spacing between signals happens to line up with the timing in both directions. Alternatively a dual carriageway may be suitable for green waves in both directions if there is sufficient space in the central reservation to allow pedestrians to wait, with separate pedestrian crossing stages for each side of the road.
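In the simplest model, with a common cycle time C at every signal and the same progression speed v in both directions, ideal two-way progression requires the travel time between adjacent signals to be a whole multiple of half the cycle:

```latex
t = \frac{d}{v} = k \cdot \frac{C}{2}, \qquad k = 1, 2, 3, \ldots
```

where d is the signal spacing. For example, with C = 90 s and v = 50 km/h (about 13.9 m/s), spacings near 625 m, 1250 m, and so on satisfy the condition; other spacings force a compromise such as the different per-direction speed recommendations mentioned above.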
Green waves are sometimes used to facilitate bicycle traffic. Copenhagen, Amsterdam, San Francisco, and other cities may synchronize traffic signals to provide a green light for a flow of cyclists. On San Francisco's Valencia Street, the signals were retimed in early 2009 to provide a green wave in both directions, possibly the first street in the world with a two-way green wave for cyclists. In Copenhagen, a green wave on the arterial street Nørrebrogade enables 30,000 cyclists to maintain a speed of for . In Amsterdam, cyclists riding at a speed of will be able to travel without being stopped by a red signal. Tests show that public transport can benefit as well, and cars may travel slightly slower.
In Vienna, Austria a stretch of cycle path on Lassallestraße in the 2nd district has a display that tells cyclists their speed and the speed they must maintain to make the next green light.
Frederiksberg, a part of Copenhagen, the capital of Denmark, has implemented a green wave for emergency vehicles to improve the public services.
In the UK, in 2009, it was revealed that the Department for Transport had previously discouraged green waves because they reduced fuel usage, and thus less revenue was raised from fuel taxes. Despite this, government WebTAG documents were only updated in 2011. It is still unclear whether the economic appraisal software used to apply these guidelines has also been updated and whether the new guidelines are being applied to new projects.
In a more limited sense, the term Green wave has also been applied to railroad travel. For several years starting in the 1960s, the German Federal Railway maintained an advertising campaign featuring the slogan , which communicated the notion of speed, limited delays and open track blocks to potential customers choosing between train and automobile travel, and was featured prominently in promotional materials ranging from posters to radio jingles.
See also
Bus priority signal
Fundamental diagram of traffic flow
Road traffic control
Traffic wave
Michigan Left
References
Further reading
External links
http://www.accesstoenergy.com/view/atearchive/s76a4022.htm
http://www.ivv.tuwien.ac.at/fileadmin/mediapool-verkehrsplanung/Diverse/Lehre/RingVO_2014/2014-03-31_Felix-Beyer.pdf
http://www.onemotoring.com.sg/publish/onemotoring/en/on_the_roads/traffic_management/intelligent_transport_systems/glide.html
https://web.archive.org/web/20080613190052/http://www.bathnes.gov.uk/BathNES/transportandstreets/roadshighwaysandpavements/lightingtrafficlights/UrbanTrafficManagementControl/default.htm
http://www.yairharel.com/2010/07/20/how-to-surf-the-green-wave
http://www.sacs.dk/
Traffic signals
Waves | Green wave | Physics | 1,081 |
13,048,207 | https://en.wikipedia.org/wiki/Tizen%20Association | The Tizen Association, formerly the LiMo Foundation (short for Linux Mobile), is a non-profit consortium which develops and maintains the Tizen mobile operating system. Tizen is a Linux-based operating system for smartphones and other mobile devices. The founding members were Motorola, NEC, NTT DoCoMo, Panasonic Mobile Communications, Samsung Electronics, and Vodafone. The consortium's work resulted in the LiMo Platform—which was integrated into mobile phone products from NEC, Panasonic and Samsung—and later became the Tizen platform.
Members
Members of the Tizen Association are:
Phones
Phones using LiMo include:
LiMo & Tizen
At the end of September 2011, the Linux Foundation announced that MeeGo would be entirely replaced by the Tizen mobile operating system project during 2012. Tizen would be a new free and open-source Linux-based operating system, which itself would not be released until the first quarter of 2012. Intel and Samsung, in collaboration with the LiMo Foundation and assisted by MeeGo developers, were named to lead the development of the new software platform, using third-party developer frameworks built primarily on HTML5 and other web standards. As of October 2012, LiMo website traffic is redirected to tizen.org.
See also
Linux Phone Standards Forum
Android (operating system) from Google
MeeGo Operating System from Nokia and Intel (former Maemo and Moblin)
Openmoko
Symbian Foundation
Open Handset Alliance
References
https://archive.today/20130127214730/http://www.linuxdevices.com/news/NS8711151732.html
External links
Coming Battle Over Open Source Phones
Linux organizations
Mobile Linux
Organizations established in 2007 | Tizen Association | Technology | 359 |
62,244,367 | https://en.wikipedia.org/wiki/International%20Medal | The International Medal is an award presented by the Royal Academy of Engineering in the UK, to individuals who are non-UK citizens or residents. It is awarded to individuals who have made exceptional contributions to engineering.
Background
The medal is given annually to recognize individuals who have made sustained personal achievements in any field of engineering. Recipients are foreign nationals who are neither citizens nor residents of the UK.
Winners
Source: RAE
2011 Andrew Viterbi
2008 Abdul Kalam
2007 Xu Kuangdi
2006 Cham Tao Soon
See also
List of engineering awards
References
International awards
Awards established in 1991
Awards of the Royal Academy of Engineering | International Medal | Technology | 121 |
10,038,531 | https://en.wikipedia.org/wiki/Premack%27s%20principle | The Premack principle, or the relativity theory of reinforcement, states that more probable behaviors will reinforce less probable behaviors.
Origin and description
The Premack principle was derived from a study of Cebus monkeys by David Premack. The study established parameters for understanding the conditions under which the monkeys operate. However, the principle also has explanatory and predictive power when applied to humans, and it has been used by therapists practicing applied behavior analysis. The Premack principle suggests that if a person wants to perform a given activity, the person will perform a less desirable activity to get at the more desirable activity; that is, activities may themselves be reinforcers. An individual will be more motivated to perform a particular activity if they know that they will partake in a more desirable activity as a consequence. Stated objectively, if high-probability behaviors (more desirable behaviors) are made contingent upon lower-probability behaviors (less desirable behaviors), then the lower-probability behaviors are more likely to occur. More desirable behaviors are those that individuals spend more time doing if permitted; less desirable behaviors are those that individuals spend less time doing when free to act. Just as "reward" was commonly used to alter behavior long before "reinforcement" was studied experimentally, the Premack principle has long been informally understood and used in a wide variety of circumstances. An example is a mother who says, "You have to finish your vegetables (low frequency) before you can eat any ice cream (high frequency)."
Experimental evidence
David Premack and his colleagues, and others have conducted several experiments to test the effectiveness of the Premack principle in humans. One of the earliest studies was conducted with young children. Premack gave the children two response alternatives, eating candy or playing a pinball machine, and determined which of these behaviors was more probable for each child. Some of the children preferred one activity, some the other. In the second phase of the experiment, the children were tested with one of two procedures. In one procedure, eating was the reinforcing response, and playing pinball served as the instrumental response; that is, the children had to play pinball to eat candy. The results were consistent with the Premack principle: only the children who preferred eating candy over playing pinball showed a reinforcement effect. The roles of responses were reversed in the second test, with corresponding results. That is, only children who preferred playing pinball over eating candy showed a reinforcement effect. This study, among others, helps to confirm the Premack principle in showing that a high-probability activity can be an effective reinforcer for an activity that the subject is less likely to perform.
An alternative: response deprivation theory
The Premack principle may be violated if a situation or schedule of reinforcement provides much more of the high-probability behavior than of the low-probability behavior. Such observations led the team of Timberlake and Allison (1974) to propose the response deprivation hypothesis. Like the Premack principle, this hypothesis bases reinforcement of one behavior on access to another. Experimenters observe the extent to which an individual is deprived of or prevented from performing the behavior that is later made contingent on the second behavior. Reinforcement occurs only when the situation is set up so that access to the contingent response has been reduced relative to its baseline level. In effect, the subject must subsequently increase responding to make up for the "deprivation" of the contingent response. Several subsequent experiments have supported this alternative to the Premack principle.
Application to applied behavior analysis
In applied behavior analysis, the Premack principle is sometimes known as "grandma's rule", which states that making the opportunity to engage in high-frequency behavior contingent upon the occurrence of low-frequency behavior will function as a reinforcer for the low-frequency behavior. In other words, an individual must "first" engage in the desired target behavior, "then" they get to engage in something reinforcing in return. For example, to encourage a child who prefers chocolate candy to eat vegetables (low-frequency behavior), the behaviorist would want to make access to eating chocolate candy (high-frequency behavior) contingent upon consuming the vegetables (low-frequency behavior). In this example, the statement would be, "first eat all of your vegetables; then you can have one chocolate candy." This statement or "rule" serves to make a highly probable behavior or preferred event be used contingently to strengthen a less likely or non-preferred event. Its applications are seen in many different settings, from early intervention services and the home to educational systems.
See also
List of eponymous laws
Citations
Behaviorism | Premack's principle | Biology | 925 |
44,088,995 | https://en.wikipedia.org/wiki/Acoustoelastography | Acoustoelastography is an ultrasound technique that relates ultrasonic wave amplitude changes to a tendon's mechanical properties.
See also the page on the acoustoelastic effect.
References
Medical technology | Acoustoelastography | Biology | 44 |
32,546,455 | https://en.wikipedia.org/wiki/Microextrusion | Microextrusion is a microforming extrusion process performed at the submillimeter range. Like extrusion, material is pushed through a die orifice, but the resulting product's cross section can fit through a 1mm square. Several microextrusion processes have been developed since microforming was envisioned in 1990. Forward (ram and billet move in the same direction) and backward (ram and billet move in the opposite direction) microextrusion were first introduced, with forward rod-backward cup and double cup extrusion methods developing later. Regardless of method, one of the greatest challenges of creating a successful microextrusion machine is the manufacture of the die and ram. "The small size of the die and ram, along with the stringent accuracy requirement, needs suitable manufacturing processes." Additionally, as Fu and Chan pointed out in a 2013 state-of-the-art technology review, several issues must still be resolved before microextrusion and other microforming technologies can be implemented more widely, including deformation load and defects, forming system stability, mechanical properties, and other size-related effects on the crystallite (grain) structure and boundaries.
Development and use
Microextrusion is an outgrowth of microforming, a science that was in its infancy in the early 1990s. In 2002, Engel et al. expressed that up to that point, only a few research experiments involving micro-deep drawing and extruding processes had been attempted, citing limitations in shearing on billets and difficulties in tool manufacturing and handling. By the mid- to late 2000s, researchers were working on issues such as billet flow, interfacial friction, extrusion force, and size effects, "the deviations from the expected results that occur when the dimension of a workpiece or sample is reduced." Most recently, research into using ultrafine-grained material at higher formation temperatures and applying ultrasonic vibration to the process has pushed the science further. However, before bulk production of microparts such as pins, screws, fasteners, connectors, and sockets using microforming and microextrusion techniques can occur, more research into billet production, transportation, positioning, and ejection are required.
Microextrusion techniques have also been applied to bioceramic and plastic extrusion and the manufacture of components for resorbable and implantable medical devices, from bioresorbable stents to controlled drug release systems.
Microextrusion processes
Like normal macro-level extrusion, several similar microextrusion processes have been described over the years. The most basic processes were forward (direct) and backward (indirect) microextrusion. The ram (which propels the billet forward) and billet both move in the same direction in forward microextrusion, while backward microextrusion has the ram and billet moving in opposite directions. These in turn have been applied to specialized applications such as the manufacture of microbillets, brass micropins, microgear shafts, and microcondensers. However, other processes have been applied to microextrusion, including forward rod–backward cup extrusion and double cup (one forward, one backward) extrusion.
Strengths and limitations
Strengths of microextrusion over other manufacturing processes include its ability to create very complex cross-sections, preserve chemical properties, condition physical properties, and process materials which are delicate or dependent on physical or chemical properties. However, microextrusion has some limitations, though these are primarily related to the need for further development of the relatively young process. Dixit and Das described it thus in 2012:
With the scaling down of dimensions and increasing geometric complexity of objects, currently available technologies and systems may not be able to meet the development needs. New measuring devices, principles and instrumentation, tolerance rules, and procedures have to be developed. Materials databases with detailed information on various materials and their properties/interface properties including microstructures and size effect would be very useful for product innovation and process design. More studies are necessary on micro/nanowear and damages/failures of the micromanufacturing tools. The forming limits for different types of materials at the microlevel must be prescribed. More specific considerations must be incorporated into the design of machines that are scaled down for microforming to meet engineering applications and requirements.
Further reading
References
Forming processes
Medical technology
Drug manufacturing | Microextrusion | Biology | 909 |
42,877,569 | https://en.wikipedia.org/wiki/Relationship%20between%20mathematics%20and%20physics | The relationship between mathematics and physics has been a subject of study of philosophers, mathematicians and physicists since antiquity, and more recently also by historians and educators. Generally considered a relationship of great intimacy, mathematics has been described as "an essential tool for physics" and physics has been described as "a rich source of inspiration and insight in mathematics". Some of the oldest and most discussed themes are about the main differences between the two subjects, their mutual influence, the role of mathematical rigor in physics, and the problem of explaining the effectiveness of mathematics in physics.
In his work Physics, one of the topics treated by Aristotle is about how the study carried out by mathematicians differs from that carried out by physicists. Considerations about mathematics being the language of nature can be found in the ideas of the Pythagoreans: the convictions that "Numbers rule the world" and "All is number", and two millennia later were also expressed by Galileo Galilei: "The book of nature is written in the language of mathematics".
Historical interplay
Before giving a mathematical proof for the formula for the volume of a sphere, Archimedes used physical reasoning to discover the solution (imagining the balancing of bodies on a scale). Aristotle classified physics and mathematics as theoretical sciences, in contrast to practical sciences (like ethics or politics) and to productive sciences (like medicine or botany).
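The result Archimedes reached by this balancing argument, written in modern notation for a sphere of radius r, is:

```latex
V = \frac{4}{3}\pi r^{3}
```

that is, two-thirds of the volume of the circumscribing cylinder, which is how Archimedes himself stated it.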
From the seventeenth century, many of the most important advances in mathematics appeared motivated by the study of physics, and this continued in the following centuries (although in the nineteenth century mathematics started to become increasingly independent from physics). The creation and development of calculus were strongly linked to the needs of physics: there was a need for a new mathematical language to deal with the new dynamics that had arisen from the work of scholars such as Galileo Galilei and Isaac Newton. The concept of the derivative was needed; Newton did not have the modern concept of limits, and instead employed infinitesimals, which lacked a rigorous foundation at that time. During this period there was little distinction between physics and mathematics; as an example, Newton regarded geometry as a branch of mechanics.
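The modern, limit-based definition of the derivative that eventually replaced infinitesimals reads:

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```

a formulation only made rigorous in the nineteenth century through the work of Cauchy and Weierstrass.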
Non-Euclidean geometry, as formulated by Carl Friedrich Gauss, János Bolyai, Nikolai Lobachevsky, and Bernhard Riemann, freed physics from the limitation of a single Euclidean geometry. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity.
In the 19th century Auguste Comte in his hierarchy of the sciences, placed physics and astronomy as less general and more complex than mathematics, as both depend on it. In 1900, David Hilbert in his 23 problems for the advancement of mathematical science, considered the axiomatization of physics as his sixth problem. The problem remains open.
In 1930, Paul Dirac invented the Dirac delta function, which produces a single value when used in an integral. The mathematical rigor of this function was in doubt until the mathematician Laurent Schwartz developed the theory of distributions.
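The property at issue is the so-called sifting property,

```latex
\int_{-\infty}^{\infty} f(x)\,\delta(x - a)\,dx = f(a),
```

which requires the delta function to vanish everywhere except at a single point while still integrating to one, something no classical function can do; Schwartz's distributions give the object a rigorous meaning.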
Connections between the two fields sometimes only require identifying similar concepts by different names, as shown in the 1975 Wu–Yang dictionary, which related concepts of gauge theory to those of differential geometry.
Physics is not mathematics
Despite the close relationship between math and physics, they are not synonyms. In mathematics objects can be defined exactly and logically related, but the object need have no relationship to experimental measurements. In physics, definitions are abstractions or idealizations, approximations adequate when compared to the natural world. In 1960, Georg Rasch noted that no models are ever true, not even Newton's laws, emphasizing that models should not be evaluated based on truth but on their applicability for a given purpose. For example, Newton built a physical model around definitions like his second law of motion based on observations, leading to the development of calculus and highly accurate planetary mechanics, but later this definition was superseded by improved models of mechanics. Mathematics deals with entities whose properties can be known with certainty. According to David Hume, only statements that deal solely with ideas themselves—such as those encountered in mathematics—can be demonstrated to be true with certainty, while any conclusions pertaining to experiences of the real world can only be achieved via "probable reasoning". This leads to a situation that was put by Albert Einstein as "No number of experiments can prove me right; a single experiment can prove me wrong." The ultimate goal in research in pure mathematics are rigorous proofs, while in physics heuristic arguments may sometimes suffice in leading-edge research. In short, the methods and goals of physicists and mathematicians are different. Nonetheless, according to Roland Omnès, the axioms of mathematics are not mere conventions, but have physical origins.
Role of rigor in physics
Rigor is indispensable in pure mathematics. But many definitions and arguments found in the physics literature involve concepts and ideas that are not up to the standards of rigor in mathematics.
For example, Freeman Dyson characterized quantum field theory as having two "faces". The outward face looks at nature, where the predictions of quantum field theory are exceptionally successful; the inward face looks at the mathematical foundations and finds inconsistency and mystery. The success of the physical theory comes despite its lack of rigorous mathematical backing.
Philosophical problems
Some of the problems considered in the philosophy of mathematics are the following:
Explain the effectiveness of mathematics in the study of the physical world: "At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" —Albert Einstein, in Geometry and Experience (1921).
Clearly delineate mathematics and physics: for some results or discoveries, it is difficult to say to which area they belong, to mathematics or to physics.
What is the geometry of physical space?
What is the origin of the axioms of mathematics?
How does already existing mathematics influence the creation and development of physical theories?
Is arithmetic analytic or synthetic? (from Kant, see Analytic–synthetic distinction)
What is essentially different between doing a physical experiment to see the result and making a mathematical calculation to see the result? (from the Turing–Wittgenstein debate)
Do Gödel's incompleteness theorems imply that physical theories will always be incomplete? (from Stephen Hawking)
Is mathematics invented or discovered? (millennia-old question, raised among others by Mario Livio)
Education
In recent times the two disciplines have most often been taught separately, despite all the interrelations between physics and mathematics. This led some professional mathematicians who were also interested in mathematics education, such as Felix Klein, Richard Courant, Vladimir Arnold and Morris Kline, to strongly advocate teaching mathematics in a way more closely related to the physical sciences. The initial courses of mathematics for college students of physics are often taught by mathematicians, despite the differences in "ways of thinking" of physicists and mathematicians about those traditional courses and how they are used in the physics classes taken thereafter.
See also
Non-Euclidean geometry
Fourier series
Conic section
Kepler's laws of planetary motion
Saving the phenomena
The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Mathematical universe hypothesis
Zeno's paradoxes
Axiomatic system
Mathematical model
Empiricism
Logicism
Formalism
Mathematics of general relativity
Bourbaki
Experimental mathematics
History of Maxwell's equations
History of astronomy
Why Johnny Can't Add
Mathematical formulation of quantum mechanics
Scientific modelling
All models are wrong
References
External links
Gregory W. Moore – Physical Mathematics and the Future (July 4, 2014)
IOP Institute of Physics – Mathematical Physics: What is it and why do we need it? (September 2014)
Feynman explaining the differences between mathematics and physics in a video available on YouTube
Philosophy of physics
Philosophy of mathematics
History of science
Mathematics education
Physics education
Foundations of mathematics
History of mathematics
History of physics
mathematics | Relationship between mathematics and physics | Physics,Mathematics,Technology | 1,631 |
12,795,518 | https://en.wikipedia.org/wiki/Hornet%20moth | The hornet moth or hornet clearwing (Sesia apiformis) is a large moth native to Europe and the Middle East and has been introduced to North America. Its protective coloration is an example of Batesian mimicry, as its similarity to a hornet makes it unappealing to predators. The hornet moth has been linked to the large dieback of poplar trees across Europe because its larvae bore into the trunk of the tree before re-emerging as adults.
Geographic range
Sesia apiformis is found across mainland Europe, Great Britain, and in parts of the Middle East. It has also recently been introduced to the United States and Canada.
Habitat
Adult hornet moths are often found in open habitat such as parks, golf courses, and marshy areas. Females prefer to lay eggs on old or isolated trees, especially trees surrounded by vegetation.
Food resources
Larvae feed on host trees of several poplar species including Populus tremula and Populus nigra as well as Salix caprea. The moth also prefers to feed around trees surrounded by heavy vegetation; trees with this basal vegetation have been found to suffer considerably more infestation than those without it.
Life history
Egg
Eggs of S. apiformis are brown and ovular in shape and have a major diameter of 0.43–0.85 mm. They are laid mostly around the base of an isolated tree or in surrounding vegetation. Since the female S. apiformis does not tend to disperse far from the tree from which she emerged, searching for a host plant is not a necessary step before oviposition. The female flies around the tree and continuously deposits eggs, laying from hundreds to thousands at a time. After depositing the eggs, the female flies away and does not return to care for the eggs or the larvae. It is evident when comparing the number of eggs produced to the number of adults that emerge each year that there is a large mortality between egg and adult stages. Therefore, the large number of eggs probably exhibits a trade-off for the female, with a large energy investment in developing eggs but no continued investment of parental care.
Larva
S. apiformis larvae hatch from September to May and spend two or three years in the larval stage, overwintering as larvae. The larvae are mostly found around the roots of host trees. Prior to pupating, the larvae bore up to ten centimeters into the trunk of the host tree, leaving a thin layer of bark over the entrance to disguise the tunnel. Once inside, the larva builds a cocoon from silk and excavated tree material.
Pupa
The pupae of S. apiformis are lined with rings of hard spines called adminicula that allow the pupa to maneuver through the bored tunnel in the tree. Males and females of the species have differing numbers of adminicula on the pupa and thus can be sexed prior to emergence as adults. Additionally, female pupae are larger than those of males. Before the adult moth can emerge from the host tree, the pupa must make its way to the entrance of the tunnel. It does this by bending and straightening, which causes the adminicula to catch on any indentations in the tree trunk and thus propels the pupa up the tunnel. It proceeds in this fashion until part of the pupa protrudes from the surface of the tree and stays in this position until the adult moth emerges.
Adult
Adults emerge between mid-June and July. Females spend the first few hours after emergence on the tree from which they emerged and typically do not fly until after they have mated. Conversely, males fly almost immediately after emergence and begin to look for a mate. Within seconds of emergence and prior to flying, adults expel liquid waste that can reach up to 70% of their body volume.
Adult hornet moths have clear wings that span 34–50 mm. Females and males both have yellow and black striped abdomens, but the number of stripes varies; females have two stripes whereas males have three. Females are on average larger than males.
Enemies
The primary predators of S. apiformis are bird species such as magpies and great tits, which is unexpected given the moth's Batesian mimicry, as these birds are not among the species that eat hornets. A likely explanation for this phenomenon is an absence of the model hornet, which would lead to a decreased efficacy of the mimicry. This could lead to the conspicuous coloration having the opposite of its intended effect once birds realize that the hornet moth is harmless and begin to seek it out.
Protective coloration
The coloration of S. apiformis is an example of Batesian mimicry, as it resembles the coloration of hornets. The moth is as large as a hornet and even has the hornet's rather jerky flight when disturbed, but it has more yellow and lacks the waist between the abdomen and the thorax. This provides it protection from predators who want to avoid being stung and thus do not try to eat the moth.
Mating
Mate searching
Female S. apiformis use specialized posterior glands to emit sex pheromones in order to attract potential mates. When the female is ready to mate, usually very soon after emerging from the pupae, she raises her abdomen and releases pheromones for several minutes at a time. The effectiveness of this calling is crucial for mating as the moths have only a short lifespan in which to mate and reproduce.
Female-Male interaction
The male S. apiformis does not appear to exhibit any courtship behavior; as soon as a male and female come into contact they are likely to begin mating. After the female has mated with one male, she will not wait long to mate again; each female usually mates several times before laying eggs. Males also do not appear to show a preference for virgin females, as they will begin trying to mate with a female almost immediately after she has finished mating. Copulation is performed on the trunk of a tree with the female positioned above the male.
Interactions with humans
Pest
Due to the large dieback of poplar trees across the eastern United Kingdom and the association with its boring larvae, S. apiformis has often been considered an agricultural pest. However, recent evidence suggests that the moth is not the primary driver of poplar tree dieback but rather exacerbates the effects of drought and human influence. Attempts to control the species have used the sex pheromones of S. apiformis females to create traps that attract individuals of the species.
Conservation
Populations of S. apiformis in the United Kingdom have shown evidence of decline over the past couple of decades. While the adult forms are elusive and therefore have always been difficult to observe in the wild, the partially protruding pupae that are left after adult emergence provide a proxy for the number of moths in an ecosystem. In several sites around southern England where old exit holes were found, no new exit holes were found in trees, suggesting local population extinction. This, coupled with the under-reporting of the species, has led it to be classified as nationally scarce in the United Kingdom.
Similar species
British Isles: Sesia bembeciformis (lunar hornet moth), smaller, with a black head and shoulders.
Europe: Eusphecia melanocephala Dalman, 1816
Europe: Eusphecia pimplaeformis Oberthür, 1872
References
External links
Lepidoptera of Belgium
Swedish moths
Lepiforum.de
Vlindernet.nl
UK moths
Moths described in 1759
Sesiidae
Moths of Europe
Moths of Asia
Mimicry
Taxa named by Carl Alexander Clerck
Palearctic Lepidoptera | Hornet moth | Biology | 1,521 |
3,376,907 | https://en.wikipedia.org/wiki/Jack%20Linnett | John Wilfrid Linnett (3 August 1913 – 7 November 1975) was Vice-Chancellor at the University of Cambridge from 1973 to 1975. He was for many years a Fellow of the Queen's College, Oxford, and a demonstrator in Inorganic Chemistry at the University of Oxford.
Education
He was born on 3 August 1913 in Coventry, England, and was educated at King Henry VIII School and St John's College, University of Oxford, where he was later a Junior Fellow.
Academic career
He was appointed Professor of Physical Chemistry at Cambridge University in 1965. He was Master of Sidney Sussex College, Cambridge, on the Council of the Royal Society, and was President of the Faraday Society.
Throughout his career as a chemist, he was noted for his wide interests, making substantial contributions in theoretical chemistry, mass spectrometry, explosion limits, atom recombination reactions, combustion, and several other areas.
Octet rule
In 1960, Linnett originated a modification of the octet rule (originally proposed by Lewis) concerning valence electrons. He proposed that the octet should be considered as a double quartet of electrons rather than as four pairs, and hence the theory became known as "Linnett double-quartet theory". Using this method, he was able to explain the stability of 'odd electron' molecules such as nitric oxide and oxygen. This theory was set out in his book "The Electronic Structure of Molecules: A New Approach", published by Methuen & Co Ltd, London, 1964. His general book "Wave Mechanics and Valency", also published by Methuen & Co Ltd, London, appeared in 1960.
Death
He died of a heart attack in the Athenaeum Club, London, on 7 November 1975, only five weeks after ceasing to be Vice-Chancellor of the University of Cambridge.
The John Wilfrid Linnett Visiting Professor of Chemistry was established in his memory in 1993 at the University of Cambridge.
References
1913 births
1975 deaths
Alumni of St John's College, Oxford
Fellows of St John's College, Oxford
Fellows of the Queen's College, Oxford
Fellows of the Royal Society
English chemists
Mass spectrometrists
Theoretical chemists
Fellows of Sidney Sussex College, Cambridge
Masters of Sidney Sussex College, Cambridge
People educated at King Henry VIII School, Coventry
Vice-chancellors of the University of Cambridge
Professors of Physical Chemistry (Cambridge) | Jack Linnett | Physics,Chemistry | 482 |
7,287,976 | https://en.wikipedia.org/wiki/BMT%20Group | BMT Group Ltd (previously British Maritime Technology), established in 1985, is an international multidisciplinary engineering, science and technology consultancy offering services particularly in the defence, security, critical infrastructure, commercial shipping, and environment sectors. The company's heritage dates to World War II. BMT's head office is at the Zig Zag Building, 70 Victoria Street Westminster, London, United Kingdom.
BMT specialises in maritime engineering design, design support, risk and contract management and provides services focused by geography, technology and/or market sector. It employs around 1,300 professionals operating from 27 offices across four continents, with primary bases in Australia, Europe, North America and Asia-Pacific.
In August 2017, Sarah Kenny OBE was appointed as Chief Executive. A marine environmental scientist by background, Kenny has worked in maritime science and technology businesses throughout her career.
Kenny recently completed her tenure as Chair of Maritime UK, a role to which she was appointed in 2021. She is also a board member of Maritime London, a Trustee Director of the National Oceanography Centre, a member of the UK Defence Innovation Advisory Panel, and a member of the National Shipbuilding Office SEG group.
Kenny is also an Honorary Captain of the Royal Navy, an Honorary member of the Royal Corps of Naval Constructors, and a Younger Brother of Trinity House. She was awarded an OBE in the 2019 Queen's Birthday Honours list for services to the Maritime Industry and Diversity.
BMT's annual turnover for the year 2023 was approx. £184.7m.
History
Originally formed from the merger and privatisation of the UK's British Shipbuilders Research Association (BSRA) and the National Maritime Institute (NMI), it enjoyed tax-free status as a scientific research association for more than a decade.
BMT's heritage includes the water tanks where the famous Bouncing Bomb, used in the Dambusters Raid, was developed during World War II, as well as more recent advances in computer-aided design and aerodynamics.
BMT has also helped to assess the damage caused by major maritime disasters, from the Piper Alpha platform and the Herald of Free Enterprise in 1987, to the Sea Empress oil spill and the effects of Hurricane Katrina.
BMT has also conducted airflow wind tunnel testing of major landmarks and tall buildings, including the Bird's Nest Olympic Stadium in Beijing, the Stonecutters Bridge in Hong Kong; and the 21st Century Tower and Burj al-Arab in Dubai, although it no longer operates wind tunnels.
Employee Ownership
BMT Group Ltd is a company limited by guarantee with its assets held in an Employee Benefit Trust. The remit of the EBT is to ensure the long-term sustainability of the group with the employees as beneficiaries. The EBT trustees are chaired by Sue Mackenzie and include other non-executive directors from the board of BMT and a wholly independent external trustee.
Notable Defence Projects
Queen Elizabeth-Class Aircraft Carrier Design
BMT gained prominence in 2003 when the Secretary of State for Defence revealed the crucial design role of BMT Defence Services in the Future Aircraft Carriers programme. The company provided much of the design expertise within the Thales CVF Team, whose design was taken forward into the alliance with BAE Systems to create what is now the Royal Navy's Queen Elizabeth-class aircraft carrier.
Other Naval Projects
Tide Class (MARS) Royal Fleet Auxiliary Tankers
Tide Class is a fleet of four tankers built to enhance the Royal Navy's maritime capabilities. The first vessel in the class was commissioned into the Royal Fleet Auxiliary (RFA) service in 2016. The next-generation tankers are part of the Military Afloat Reach and Sustainability (MARS) project and were designed to replace the RFA's ageing fleet of single-hulled tankers.
The vessels are designed by BMT, in cooperation with BMT Reliability Consultants and BMT Isis, and are based on the AEGIR tanker. The Tide Class vessels are intended to provide logistics support and services such as transportation of fuel, fresh water, food, and weaponry to the Royal Navy's warships and vessels deployed around the world.
In addition to maintaining the Royal Navy's bulk fuel replenishment at sea capabilities, the tankers can also conduct constabulary and humanitarian aid missions, as well as provide assistance to NATO and coalition allied forces.
Aurora Engineering Delivery Partnership (EDP)
Aurora EDP is a partnership between QinetiQ, AtkinsRéalis and BMT and is the UK Ministry of Defence's primary route for procuring engineering services to ensure that necessary systems and equipment, including maintenance and spares, are available when and where they are required to meet Royal Navy and Royal Fleet Auxiliary operational demand. The partnership thus contributes to platform availability, capability and safety, supporting DE&S Ships Domain through the Master Record Data Centre (MRDC), the Ministry of Defence's core facility for ship information configuration services for the Royal Navy and Royal Fleet Auxiliary Surface Ship Fleet.
Royal Fleet Auxiliary’s Fleet Solid Support (FSS)
Fleet Solid Support Ships are the UK Royal Fleet Auxiliary's modern solid stores replenishment ships, an essential supporting element in the delivery of the Maritime Carrier Strike Group. Fleet Solid Support (FSS) will provide support ships designed to deliver munitions, supplies and provisions to the Royal Navy while at sea. They will provide logistical and operational support, including for counter-piracy and counter-terrorism missions, and will collaborate with allies on operations.
In January 2023, DE&S awarded a contract worth £1.6 billion to Team Resolute, composed of Harland & Wolff, BMT and Navantia UK, to deliver three Fleet Solid Support ships to the RFA.
The construction of the ships, being built to BMT's design, will take place in both the UK and Spain. Each ship will have a core RFA crew of 101, with accommodation provided for an additional 80 personnel operating helicopters, boats, or performing other roles when required.
The ships are designed with an emphasis on minimising carbon emissions, equipped with energy-efficient technologies to decrease power consumption and are adaptable to reduce their carbon footprint by using low-carbon, non-fossil fuels, and future sustainable energy sources. They are also designed to be adaptable from the outset to achieve a Carbon Zero status by the end of their 30-year operational lifespan.
The production of the first FSS ship is expected to begin in 2025 across three shipyards and all three ships will enter service after final equipment fits and military trials, by 2032.
Team Victoria-Class
Team Victoria-Class is a partnership involving Babcock Canada, Seaspan Victoria Shipyards, and BMT, providing submarine maintenance and sustainment for the Royal Canadian Navy. Operating under the Victoria In-Service Support Contract (VISSC) since 2008, the team manages the upkeep of the Victoria-Class submarines, focusing on project management, engineering, and supply chain development. The submarines perform strategic military roles, such as coastal surveillance and joint coalition missions. The initiative also supports Canadian industry, Indigenous relations, and academic institutions.
ELMS Contract Award
BMT was awarded the Engineering, Logistics, and Management Support (ELMS) contract by the Royal Canadian Navy. The contract involves providing services for the Arctic and Offshore Patrol Ships (AOPS) and Joint Support Ships (JSS), focusing on engineering expertise, in-service support, and supply chain management. The ELMS initiative aims to enhance the operational readiness and sustainment of the Navy's vessels, ensuring long-term efficiency in fleet management.
Non-Defence Projects
BMT REMBRANDT
BMT REMBRANDT is a simulation and training tool developed by BMT for maritime pilot training, incident reconstruction, and risk assessment. It offers high-fidelity simulations of vessel operations, with a focus on navigation and manoeuvring in various environments. The system is used for training mariners, analysing real-world incidents, and assessing operational risks. It supports dynamic modelling of ships and environmental factors, providing a realistic training experience. BMT REMBRANDT is widely employed in the maritime industry for its versatility in enhancing safety and operational efficiency.
Notable BMT REMBRANDT Projects
ROC Dock Project
The ROC Dock Project is a UK maritime innovation initiative involving BMT, which integrates high-fidelity simulation with real-world operations. The project uses BMT's synthetic REMBRANDT environment to enhance maritime training, design, and testing, providing a virtual platform for evaluating vessel behaviour and port operations. By combining digital simulations with physical trials, ROC Dock aims to drive advancements in maritime safety, operational efficiency, and technology development across the UK's maritime sector.
BMT MOD Tug Training Contract
BMT secured a contract with the UK Ministry of Defence (MOD) to provide tug training for Queen Elizabeth-class aircraft carriers. The program utilises BMT REMBRANDT to deliver realistic training scenarios for tug crews managing large vessels. The training focuses on enhancing safety and operational efficiency when manoeuvring the aircraft carriers in confined spaces. The contract emphasises BMT's expertise in simulation-based maritime training, supporting the MOD's requirements for advanced, high-fidelity training solutions.
Networked Simulators for Pilot and Tug Master Training
BMT has integrated networked simulators, including BMT REMBRANDT, to facilitate joint training for marine pilots and tug masters. This approach enables realistic, scenario-based training in which participants can practise complex manoeuvres and coordinated operations. The use of networked simulation enhances safety by allowing pilots and tug operators to train together, simulating the dynamics of real-life ship handling in various conditions. This system is designed to improve operational readiness and collaboration in challenging maritime environments.
Voyage Optimisation Technology
BMT has developed digital voyage optimisation solutions aimed at improving fuel efficiency, emissions reduction, and operational costs for the maritime industry. These tools utilise real-time data and predictive analytics to assist in route planning and decision-making, ensuring safer and more efficient navigation. The technology supports compliance with environmental regulations by optimising voyages to reduce fuel consumption and greenhouse gas emissions.
South Devon College Marine Training Initiative
South Devon College, in collaboration with BMT, launched a marine training initiative integrating advanced technology for maritime education. The program features simulators and other digital tools to train students in navigation, vessel operations, and marine engineering. The collaboration aims to equip the next generation of maritime professionals with industry-relevant skills and support local maritime development.
DNV Approval for BMT Simulators
BMT's simulator suite and associated software, including BMT REMBRANDT, received approval from DNV, an international classification society. The recognition certifies the simulators for use in maritime training and operational assessments. The suite offers high-fidelity simulations for ship handling, navigation, and incident reconstruction, meeting industry standards for training and competency evaluation.
Innovations
BMT SPARO Payload Delivery Device
The BMT SPARO is an innovative payload delivery system developed by BMT for drones, specifically designed for challenging front-line logistics and various other demanding environments. It replaces traditional drone-mounted winches with a novel approach that allows the payload to autonomously descend on a controlled line while the drone maintains a stable hover at a higher altitude.
The BMT SPARO system includes an internally powered cable drum and integrated rotors for horizontal manoeuvrability, providing precise control over the payload's positioning and delivery. This design improves safety and operational versatility by reducing the noise and complexity typically associated with traditional winch systems. It enables drones to perform precise deliveries in situations where direct landing is not feasible, such as hostile environments, disaster relief zones, or ship-to-shore transfers.
While initially developed for defence logistics, the BMT SPARO has potential applications in other sectors, including emergency response, humanitarian aid, and commercial delivery services, where reliable, accurate, and autonomous payload delivery is required. The system's design supports operations in diverse conditions, offering a new solution for aerial logistics.
Offshore Energy Infrastructure
BMT is an established designer of Crew Transfer Vessels for the offshore wind power sector, with vessels deployed in the North American, Japanese and Taiwanese markets. In February 2024 it unveiled its first Service Operation Vessel (SOV) design, capable of being powered by methanol (potentially the e-fuel variant).
BMT is involved in earth observation for maritime markets, having been selected in February 2021 by the European Space Agency as part of the development team to assess the feasibility of applying space-based data to support the decommissioning of offshore energy assets, including oil and gas platforms and offshore wind farms.
Stampede Tension Leg Platform
BMT provided an Integrated Marine Monitoring System (IMMS) for the Stampede Tension Leg Platform (TLP) located in the U.S. Gulf of Mexico. The system is designed to monitor the platform's structural integrity, environmental conditions, and riser tension, enhancing safety and operational efficiency. The IMMS supplies real-time data to support platform management and decision-making in harsh offshore conditions, reflecting BMT's expertise in advanced monitoring solutions for the offshore energy sector.
Partnership for Subsea Monitoring
BMT and Sonardyne are collaborating to advance subsea monitoring technology with a focus on improving the accuracy and reliability of underwater asset monitoring. Their joint efforts aim to deliver a significant step-change in data quality for offshore applications, which will aid in enhancing safety, reducing operational risks, and optimising maintenance strategies. The partnership integrates BMT's expertise in monitoring solutions with Sonardyne's advanced subsea technologies to provide comprehensive monitoring systems for offshore platforms and other underwater assets.
Marine Monitoring for BP in the Gulf of Mexico
BMT has been contracted by BP to provide marine monitoring services for its Gulf of Mexico operations. The scope of the project includes monitoring the environmental conditions, structural health of offshore assets, and key safety indicators. BMT's systems will deliver real-time insights that help ensure operational safety and support compliance with environmental regulations. The contract represents BMT's continued partnership with BP in enhancing the safety and sustainability of offshore oil and gas activities.
Mad Dog Phase 2 Floating Production System
BMT has been awarded a contract to provide monitoring systems for the Mad Dog Phase 2 Floating Production System (FPS) in the Gulf of Mexico. The project involves implementing systems to track structural integrity, environmental conditions, and safety performance. These monitoring capabilities aim to improve the safety and efficiency of offshore operations by offering real-time data to inform maintenance and operational decisions. This initiative highlights BMT's commitment to supporting high-risk offshore projects with advanced monitoring technologies.
StratCat35 Crew Transfer Vessel
The StratCat35 is a crew transfer vessel (CTV) developed by BMT, in collaboration with Strategic Marine, for the offshore wind industry. The vessel was unveiled at WindEnergy Hamburg and is designed to meet diverse operator requirements, with a focus on sustainability in offshore wind operations.
The StratCat35 is part of Strategic Marine's range of CTVs and measures 35 metres in length. It features an expansive deck area to improve storage capacity and operational versatility. The vessel is equipped with BMT's Z-Bow hull form, which is engineered to enhance seakeeping capabilities, vessel speed, and overall performance in challenging offshore conditions.
The StratCat35 incorporates a hybrid propulsion system aimed at reducing greenhouse gas emissions and increasing fuel efficiency. The vessel is also configured to be methanol-ready, allowing for future adaptation to alternative fuel technologies without the need for significant retrofits. The vessel includes BMT's latest active fender system, designed to facilitate safer and more efficient technician transfers in rough sea conditions. It can accommodate up to 36 passengers and 10 crew members, with facilities designed to maximise comfort during transit.
The development of the StratCat35 is part of BMT and Strategic Marine's ongoing collaboration to advance CTV technology within the offshore wind sector. The vessel's design reflects a combination of sustainability considerations and operational efficiency.
Commercial Naval Architecture
Fire and Rescue Vessel Projects
BMT, in collaboration with Penguin Shipyard International, is developing a new 38-metre fire and rescue vessel for the Singapore Civil Defence Force (SCDF). The project expands on previous vessels, the Heavy Rescue Vessel (Red Manta) and Marine Rescue Vessel (Red Dolphin), delivered in 2019. The new vessels will enhance SCDF's rapid response capabilities with advanced firefighting and rescue equipment, high-speed capabilities, and a design focused on manoeuvrability and safety. The vessels are expected to be operational by 2025.
Greenline 150 Electric Ferry
BMT, in collaboration with Greenline Marine, unveiled the Greenline 150 Passenger Electric Ferry at the Canadian Ferry Association 2024 Conference. The 32-metre vessel, designed to accommodate 150 passengers, focuses on energy efficiency and environmental sustainability. It features an optimised hull form and electric propulsion system aimed at minimising energy consumption and emissions. The design meets international environmental standards and includes safety measures for battery systems. The ferry aims to enhance passenger comfort with a quieter, smoother ride.
References
External links
1985 establishments in the United Kingdom
Companies established in 1985
Defence companies of the United Kingdom
Engineering companies of the United Kingdom
Marine engineering organizations | BMT Group | Engineering | 3,519 |
23,534,720 | https://en.wikipedia.org/wiki/International%20Academy%20of%20Mathematical%20Chemistry | The International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik, Croatia, in 2005 by Milan Randić. It is an organization for chemistry and mathematics avocation; its predecessors have been around since the 1930s. There are 88 Academy members () from around the world (27 countries), comprising six scientists awarded the Nobel Prize.
Governing body of the IAMC
2005–2007:
President: Alexandru Balaban
Vice-president: Milan Randić
Secretary: Ante Graovac
Treasurer: Dejan Plavšić
2008–2011:
President: Roberto Todeschini
Vice-president: Tomaž Pisanski
Secretary: Ante Graovac
Treasurer: Dražen Vikić-Topić
Member: Ivan Gutman
Member: Nikolai Zefirov
since 2011:
President: Roberto Todeschini
Vice-president: Edward C. Kirby
Vice-president: Sandi Klavžar
Secretary: Ante Graovac
Treasurer: Dražen Vikić-Topić
Member: Ivan Gutman
Member: Nikolai Zefirov
since 2019:
President:
Vice-president: Douglas J. Klein
Vice-president: Xueliang Li
Vice-president: Sandi Klavžar
Vice-president: Tomaž Pisanski
Secretary: Boris Furtula
Treasurer:
Member: Ivan Gutman
IAMC yearly meetings
2005 – Dubrovnik, Croatia
2006 – Dubrovnik, Croatia
2007 – Dubrovnik, Croatia
2008 – Verbania, Italy
2009 – Dubrovnik, Croatia
2010 – Dubrovnik, Croatia
2011 – Bled, Slovenia
2012 – Verona, Italy
2014 – Split, Croatia
2015 – Kranjska Gora, Slovenia
2016 – Tianjin, China
2017 – Cluj, Romania
2019 – Bled, Slovenia
2023 – Kranjska Gora, Slovenia
See also
Mathematical chemistry
References
Mathematical chemistry
International academies
Scientific organizations established in 2005
2005 establishments in Croatia | International Academy of Mathematical Chemistry | Chemistry,Mathematics | 375 |
24,559,346 | https://en.wikipedia.org/wiki/Nominal%20impedance | Nominal impedance in electrical engineering and audio engineering refers to the approximate designed impedance of an electrical circuit or device. The term is applied in a number of different fields, most often being encountered in respect of:
The nominal value of the characteristic impedance of a cable or other form of transmission line.
The nominal value of the input, output or image impedance of a port of a network, especially a network intended for use with a transmission line, such as filters, equalisers and amplifiers.
The nominal value of the input impedance of a radio frequency antenna
The actual impedance may vary quite considerably from the nominal figure with changes in frequency. In the case of cables and other transmission lines, there is also variation along the length of the cable, if it is not properly terminated.
It is usual practice to speak of nominal impedance as if it were a constant resistance, that is, it is invariant with frequency and has a zero reactive component, despite this often being far from the case. Depending on the field of application, nominal impedance is implicitly referring to a specific point on the frequency response of the circuit under consideration. This may be at low-frequency, mid-band or some other point and specific applications are discussed in the sections below.
In most applications, there are a number of values of nominal impedance that are recognised as being standard. The nominal impedance of a component or circuit is often assigned one of these standard values, regardless of whether the measured impedance exactly corresponds to it. The item is assigned the nearest standard value.
600 Ω
Nominal impedance first started to be specified in the early days of telecommunications. At first, amplifiers were not available and, when they did become available, they were expensive. It was consequently necessary to achieve maximum power transfer from the cable at the receiving end in order to maximise the lengths of cables that could be installed. It also became apparent that reflections on the transmission line would severely limit the bandwidth that could be used or the distance over which it was practicable to transmit. Matching equipment impedance to the characteristic impedance of the cable reduces reflections (and they are eliminated altogether if the match is perfect) and power transfer is maximised. To this end, all cables and equipment started to be specified to a standard nominal impedance. The earliest, and still the most widespread, standard is 600 Ω, originally used for telephony. The choice of this figure had more to do with the way telephones were interfaced into the local exchange than with any characteristic of the local telephone cable. Telephones (old-style analogue telephones) connect to the exchange through twisted pair cabling. Each leg of the pair is connected to a relay coil which detects the signalling on the line (dialling, handset off-hook, etc.). The other end of one coil is connected to a supply voltage and the second coil is connected to ground. A telephone exchange relay coil is around 300 Ω, so the two of them together terminate the line in 600 Ω.
The wiring to the subscriber in telephone networks is generally done in twisted pair cable. Its impedance at audio frequencies, and especially at the more restricted telephone band frequencies, is far from constant. It is possible to manufacture this kind of cable to have a 600 Ω characteristic impedance but it will only be this value at one specific frequency. This might be quoted as a nominal 600 Ω impedance at 800 Hz or 1 kHz. Below this frequency, the characteristic impedance rapidly rises and becomes more and more dominated by the ohmic resistance of the cable as the frequency falls. At the bottom of the audio band, the impedance can be several tens of kilohms. On the other hand, at high frequency in the MHz region, the characteristic impedance flattens out to something almost constant. The reason for this response lies in primary line constants.
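For reference, the behaviour just described follows from the standard expression for characteristic impedance in terms of the primary line constants (series resistance $R$ and inductance $L$, shunt conductance $G$ and capacitance $C$, each per unit length):

$$Z_0 = \sqrt{\frac{R + j\omega L}{G + j\omega C}}$$

At low frequencies the $R$ term dominates and the magnitude of $Z_0$ rises as the frequency falls; at high frequencies the expression tends towards the constant, mostly resistive value $\sqrt{L/C}$.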
Local area networks (LANs) commonly use a similar kind of twisted pair cable, but screened and manufactured to tighter tolerances than is necessary for telephony. Even though it has a very similar impedance to telephone cable, the nominal impedance is rated at 100 Ω. This is because the LAN data is in a higher frequency band where the characteristic impedance is substantially flat and mostly resistive.
Standardisation of line nominal impedance led to two-port networks such as filters being designed to a matching nominal impedance. The nominal impedance of low-pass symmetrical T- or Pi-filter sections (or more generally, image filter sections) is defined as the limit of the filter image impedance as the frequency approaches zero and is given by

$$k = \sqrt{\frac{L}{C}}$$
where L and C are as defined for the constant k filter. This impedance is purely resistive. This filter, when transformed into a band-pass filter, will have an impedance equal to the nominal impedance at resonance rather than at low frequency. The nominal impedance of filters will generally be the same as the nominal impedance of the circuit or cable that the filter is working into.
While 600 Ω is an almost universal standard in telephony for local presentation at customer's premises from the exchange, for long distance transmission on trunk lines between exchanges, other standard nominal impedances are used and are usually lower, such as 150 Ω.
50 Ω and 75 Ω
In the field of radio frequency (RF) and microwave engineering, by far the most common transmission line standard is 50 Ω coaxial cable (coax), which is an unbalanced line. 50 Ω first arose as a nominal impedance during World War II work on radar and is a compromise between two requirements. This standard was the work of the wartime US joint Army-Navy RF Cable Coordinating Committee. The first requirement is for minimum loss. The loss of coaxial cable is given by

$$\alpha = \frac{R}{2 Z_0} \ \text{nepers/metre}$$
where R is the loop resistance per metre and Z0 is the characteristic impedance. Making the diameter of the inner conductor larger will decrease R, and decreasing R decreases the loss. On the other hand, Z0 depends on the ratio of the diameters of the outer and inner conductors (Dr) and will decrease with increasing inner conductor diameter, thus increasing the loss. There is a specific value of Dr for which the loss is a minimum, which turns out to be 3.6. For an air dielectric coax, this corresponds to a characteristic impedance of 77 Ω. The coax produced during World War II was rigid air-insulated pipe, and this remained the case for some time afterwards. The second requirement is for maximum power handling and was an important requirement for radar. This is not the same condition as minimum loss, because power handling is usually limited by the breakdown voltage of the dielectric. However, there is a similar compromise in terms of the ratio of conductor diameters. Making the inner conductor too large results in a thin insulator which breaks down at a lower voltage. On the other hand, making the inner conductor too small results in a higher electric field strength near the inner conductor (because the same field energy is concentrated around a smaller conductor surface) and again reduces the breakdown voltage. The ideal ratio, Dr, for maximum power handling is 1.65 and corresponds to a characteristic impedance of 30 Ω in air. The 50 Ω impedance is the geometric mean of these two figures,
$$\sqrt{77 \times 30} \approx 48\ \Omega$$

which is then rounded to a convenient whole number.
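These figures can be checked numerically. The sketch below assumes the standard approximation for air-dielectric coax, $Z_0 \approx 60 \ln(D/d)$ Ω; the function name is illustrative.

```python
import math

def coax_impedance(diameter_ratio: float, rel_permittivity: float = 1.0) -> float:
    """Characteristic impedance of coax: Z0 = (60 / sqrt(eps_r)) * ln(D/d) ohms."""
    return 60.0 / math.sqrt(rel_permittivity) * math.log(diameter_ratio)

z_loss = coax_impedance(3.6)    # minimum-loss diameter ratio  -> about 77 ohms
z_power = coax_impedance(1.65)  # maximum-power diameter ratio -> about 30 ohms

# Geometric mean of the two optima, rounded to the familiar standard value:
print(round(z_loss), round(z_power), round(math.sqrt(z_loss * z_power)))  # 77 30 48
```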
Wartime production of coax, and that for a period afterwards, tended to use standard plumbing pipe sizes for the outer conductor and standard AWG wire sizes for the inner conductor. This resulted in coax that was nearly, but not quite, 50 Ω. Matching is a much more critical requirement at radio frequencies than at voice frequencies, so when cable that was truly 50 Ω became available, a need arose for matching circuits to interface the new cables with legacy equipment, such as the rather strange 51.5 Ω to 50 Ω matching network.
While 30 Ω cable is highly desirable for its power handling capabilities, it has never been in commercial production because the large size of inner conductor makes it difficult to manufacture. This is not the case with 77 Ω cable. Cable with 75 Ω nominal impedance has been in use from an early period in telecommunications for its low loss characteristic. According to Stephen Lampen of Belden Wire & Cable, 75 Ω was chosen as the nominal impedance rather than 77 Ω because it corresponded to a standard AWG wire size for the inner conductor. For coax video cables and interfaces, 75 Ω is now the near universal standard nominal impedance.
Radio antennae
The widespread idea that 50 Ω and 75 Ω cable nominal impedances arose in connection with the input impedance of various antennae is a myth. Several common antennae are easily matched to cables with these nominal impedances. A quarter-wavelength monopole in free space has an impedance of 36.5 Ω, and a half-wavelength dipole in free space has an impedance of 72 Ω. A half-wavelength folded dipole, commonly seen on television antennae, on the other hand, has a 288 Ω impedance—four times that of a straight-line dipole. The half-wave dipole and the half-wave folded dipole are commonly taken as having nominal impedances of 75 Ω and 300 Ω, respectively.
An installed antenna's feed-point impedance varies above and below the quoted value, depending on its installation height above the ground and the electrical properties of the surrounding earth.
Cable quality
One measure of cable manufacturing and installation quality is how closely the characteristic impedance adheres to the nominal impedance along its length. Impedance changes can be caused by variations in geometry along the cable length. In turn, these can be caused by a faulty manufacturing process or by faulty installation, such as not observing limits on bend radii. Unfortunately, there is no easy, non-destructive method of directly measuring impedance along a cable's length. It can, however, be indicated indirectly by measuring reflections, that is, return loss. Return loss by itself does not reveal much, since the cable design will have some intrinsic return loss anyway due to not having a purely resistive characteristic impedance. The technique used is to carefully adjust the cable termination to obtain as close a match as possible and then to measure the variation of return loss with frequency. The minimum return loss so measured is called the structural return loss (SRL). SRL is a measure of a cables' adherence to its nominal impedance, but it is not a direct correspondence, as errors further from the generator have less effect on SRL than those close to it. The measurement must also be carried out at all in-band frequencies to be significant. The reason for this is that equally spaced errors introduced by the manufacturing process will cancel and be invisible, or at least much reduced, at certain frequencies due to quarter wave impedance transformer action.
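As an illustration of a return loss calculation, the reflection coefficient of a measured impedance against a nominal reference gives the return loss in decibels; a minimal sketch, with hypothetical numbers not taken from any particular cable:

```python
import math

def return_loss_db(z_measured: complex, z_nominal: float) -> float:
    """Return loss in dB of an impedance measured against a nominal reference.

    Reflection coefficient: gamma = (Z - Z0) / (Z + Z0).
    Return loss: -20 * log10(|gamma|); larger values mean a better match.
    """
    gamma = (z_measured - z_nominal) / (z_measured + z_nominal)
    return -20.0 * math.log10(abs(gamma))

# A section of nominally 75 ohm cable that actually measures 78 ohms:
print(f"{return_loss_db(78 + 0j, 75.0):.1f} dB")  # about 34 dB
```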
Audio systems
For the most part, audio systems, both professional and domestic, have their components interconnected with low impedance outputs connected to high impedance inputs. These impedances are poorly defined and nominal impedances are not usually assigned for this kind of connection. The exact impedances make little difference to performance as long as the latter is many times larger than the former. This is a common interconnection scheme, not just for audio, but for electronic units in general which form part of a larger equipment or are only connected over a short distance. Where audio needs to be transmitted over large distances, which is often the case in broadcast engineering, considerations of matching and reflections dictate that a telecommunications standard is used, which would normally mean using 600 Ω nominal impedance, although other standards are sometimes encountered, such as sending at 75 Ω and receiving at 600 Ω which has bandwidth advantages. The nominal impedance of the transmission line and of the amplifiers and equalisers in the transmission chain will all be the same value.
Nominal impedance is used, however, to characterise the transducers of an audio system, such as its microphones and loudspeakers. It is important that these are connected to a circuit capable of dealing with impedances in the appropriate range and assigning a nominal impedance is a convenient way of quickly determining likely incompatibilities. Loudspeakers and microphones are dealt with in separate sections below.
Loudspeakers
Loudspeaker impedances are kept relatively low compared with other audio components so that the required audio power can be transmitted without using inconveniently (and dangerously) high voltages. The most common nominal impedance for loudspeakers is 8 Ω. Also used are 4 Ω and 16 Ω. The once common 16 Ω is now mostly reserved for high frequency compression drivers since the high frequency end of the audio spectrum does not usually require so much power to reproduce.
The impedance of a loudspeaker is not constant across all frequencies. In a typical loudspeaker, the impedance will rise with increasing frequency from its DC value, as shown in the diagram, until it reaches a point of its mechanical resonance. Following resonance, the impedance falls to a minimum and then begins to rise again. Speakers are usually designed to operate at frequencies above their resonance, and for this reason, it is the usual practice to define nominal impedance at this minimum and then round to the nearest standard value. The ratio of the peak resonant frequency to the nominal impedance can be as much as 4:1. It is, however, still perfectly possible for the low frequency impedance to actually be lower than the nominal impedance. A given audio amplifier may not be capable of driving this low frequency impedance even though it is capable of driving the nominal impedance, a problem that can be solved either with the use of crossover filters or underrating the amplifier supplied.
In the days of valves (vacuum tubes), most loudspeakers had a nominal impedance of 16 Ω. Valve outputs require an output transformer to match the very high output impedance and voltage of the output valves to this lower impedance. These transformers were commonly tapped to allow matching of the output to a multiple loudspeaker setup. For example, two 16 Ω loudspeakers in parallel will give an impedance of 8 Ω. Since the advent of solid-state amplifiers whose outputs require no transformer, the once-common multiple-impedance outputs have become rare, and lower impedance loudspeakers more common. The most common nominal impedance for a single loudspeaker is now 8 Ω. Most solid-state amplifiers are designed to work with loudspeaker combinations of anything from 4 Ω to 8 Ω.
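The parallel combinations mentioned above follow the usual reciprocal rule for parallel loads; a minimal sketch:

```python
def parallel_impedance(*impedances: float) -> float:
    """Combined impedance of loads wired in parallel: 1 / sum(1 / Z_i)."""
    return 1.0 / sum(1.0 / z for z in impedances)

print(parallel_impedance(16, 16))  # two 16 ohm loudspeakers -> 8.0 ohms
print(parallel_impedance(8, 8))    # two 8 ohm loudspeakers  -> 4.0 ohms
```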
Microphones
There are a large number of different types of microphone and there are correspondingly large differences in impedance between them. They range from the very low impedance of ribbon microphones (which can be less than one ohm) to the very high impedance of piezoelectric microphones, which is measured in megohms. The Electronic Industries Alliance (EIA) has defined a number of standard microphone nominal impedances to aid categorisation of microphones.
The International Electrotechnical Commission defines a similar set of nominal impedances, but also has a coarser classification of low (less than 600 Ω), medium (600 Ω to 10 kΩ) and high (more than 10 kΩ) impedances.
Oscilloscopes
Oscilloscope inputs are usually high impedance so that they only minimally affect the circuit being measured when connected. However, the input impedance is made a specific nominal value, rather than arbitrarily high, because of the common use of X10 probes. A common value for oscilloscope nominal impedance is 1 MΩ resistance and 20 pF capacitance. With a known input impedance to the oscilloscope, the probe designer can ensure that the probe input impedance is exactly ten times this figure (actually the oscilloscope plus probe cable impedance). Since the impedance includes the input capacitance and the probe forms an impedance divider circuit, the result is that the waveform being measured is not distorted by the RC circuit formed by the probe resistance and the capacitance of the input (or the cable capacitance, which is generally higher).
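A worked example of the X10 arrangement, assuming the 1 MΩ / 20 pF input quoted above and neglecting the cable capacitance for simplicity (variable names are illustrative):

```python
R_SCOPE = 1e6      # oscilloscope input resistance, ohms
C_SCOPE = 20e-12   # oscilloscope input capacitance, farads
R_PROBE = 9e6      # probe tip resistance for a 10:1 divider, ohms

# DC attenuation of the divider formed by the probe and the input:
division_ratio = (R_PROBE + R_SCOPE) / R_SCOPE
print(division_ratio)  # 10.0

# A frequency-flat response requires the two RC time constants to match,
# R_probe * C_probe = R_scope * C_scope, which fixes the compensation capacitor:
C_PROBE = R_SCOPE * C_SCOPE / R_PROBE
print(C_PROBE)  # about 2.2e-12 F, i.e. roughly 2.2 pF across the probe resistor
```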
References
Bibliography
Glen Ballou, Handbook for Sound Engineers, Gulf Professional Publishing, 2005.
John Bird, Electrical Circuit Theory and Technology, Elsevier, 2007.
Gary Breed, "There's nothing magic about 50 ohms", High Frequency Electronics, pp. 6–7, June 2007, Summit Technical Media LLC, archived 26 June 2015.
Wai-Kai Chen, The Electrical Engineering Handbook, Academic Press, 2005.
Walter S. Ciciora, Modern Cable Television Technology: Video, Voice, and Data Communications, Morgan Kaufmann, 2004.
Gary Davis, Ralph Jones, The Sound Reinforcement Handbook, Hal Leonard Corporation, 1989.
John M. Eargle, Chris Foreman, Audio Engineering for Sound Reinforcement, Hal Leonard Corporation, 2002.
John Michael Golio, The RF and Microwave Handbook, CRC Press, 2001.
Rudolf F. Graf, Modern Dictionary of Electronics, Newnes, 1999.
R.R. Gulati, Modern Television Practice: Principles, Technology and Servicing, New Age International.
John D. Heys, Practical Wire Antennas, Radio Society of Great Britain, 1989.
Ian Hickman, Oscilloscopes: How to Use Them, How They Work, Newnes, 2001.
Stephen Lampen, Wire, Cable and Fiber Optics for Video and Audio Engineers, McGraw-Hill, 1997.
A.K. Maini, Electronic Projects for Beginners, Pustak Mahal, 1997.
Nicholas M. Maslin, HF Communications: A Systems Approach, CRC Press, 1987.
Thomas Henry O'Dell, Circuits for Electronic Instrumentation, Cambridge University Press, 1991.
R. Tummala, E. J. Rymaszewski (ed), Alan G. Klopfenstein, Microelectronics Packaging Handbook, Volume 3, Springer, 1997.
Ron Schmitt, Electromagnetics Explained: A Handbook for Wireless/RF, EMC, and High-speed Electronics, Newnes, 2002.
Scott Hunter Stark, Live Sound Reinforcement: A Comprehensive Guide to P.A. and Music Reinforcement Systems and Technology, Hal Leonard Corporation, 1996.
John Vasey, Concert Sound and Lighting Systems, Focal Press, 1999.
Menno van der Veen, Modern High-end Valve Amplifiers: Based on Toroidal Output Transformers, Elektor International Media, 1999.
Jerry C. Whitaker, Television Receivers, McGraw-Hill Professional, 2001.
Electrical parameters
Audio amplifier specifications | Nominal impedance | Engineering | 3,804 |
10,983 | https://en.wikipedia.org/wiki/First-order%20logic | First-order logic—also called predicate logic, predicate calculus, quantificational logic—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all men are mortal", in first-order logic one can have expressions in the form "for all x, if x is a man, then x is mortal"; where "for all x" is a quantifier, x is a variable, and "... is a man" and "... is mortal" are predicates. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
A theory about a topic, such as set theory, a theory for groups, or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic.
The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, are permitted. In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic which are both sound, i.e. all provable statements are true in all models; and complete, i.e. all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.
First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic.
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).
Introduction
While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse.
Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q. They are not viewed as an application of a predicate, such as , to any particular objects in the domain of discourse, instead viewing them as purely an utterance which is either true or false. However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form for some individual , in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic.
The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar", is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value.
Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x.
The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x.
The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables.
An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and that the predicate "is a philosopher" is understood as "was the author of the Republic." It is true, as witnessed by Plato in that text.
There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions.
Syntax
Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false. The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language.
Alphabet
As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.
It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨. However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate depending on the interpretation at hand.
Logical symbols
Logical symbols are a set of characters that vary by author, but usually include the following:
Quantifier symbols: ∀ for universal quantification, and ∃ for existential quantification
Logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Some authors use Cpq instead of → and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔; a tilde (~), Np, or Fp may replace ¬; a double bar ||, +, or Apq may replace ∨; and an ampersand &, Kpq, or the middle dot ⋅ may replace ∧, especially if these symbols are not available for technical reasons. (The aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.)
Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context.
An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x0, x1, x2, ... .
An equality symbol (sometimes, identity symbol) = (see below).
Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices.
Other logical symbols include the following:
Truth constants: T, V, or ⊤ for "true" and F, O, or ⊥ for "false" (V and O are from Polish notation). Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers.
Additional logical connectives such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq.
Non-logical symbols
Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes:
For every integer n ≥ 0, there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n, there is an infinite supply of them:
P^n_0, P^n_1, P^n_2, P^n_3, ...
For every integer n ≥ 0, there are infinitely many n-ary function symbols:
f^n_0, f^n_1, f^n_2, f^n_3, ...
When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted.
In this traditional approach, there is only one language of first-order logic. This approach is still common, especially in philosophically oriented books.
A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.
Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem.
Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics.
In this approach, every non-logical symbol is of one of the following types:
A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters such as P, Q and R. Examples:
In P(x), P is a predicate symbol of valence 1. One possible interpretation is "x is a man".
In Q(x,y), Q is a predicate symbol of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y".
Relations of valence 0 can be identified with propositional variables, which can stand for any statement. One possible interpretation of R is "Socrates is a man".
A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase roman letters such as f, g and h. Examples:
f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x".
In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y".
Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, it may stand for the empty set.
The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols.
Formation rules
The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms.
Terms
The set of terms is inductively defined by the following rules:
Variables. Any variable symbol is a term.
Functions. If f is an n-ary function symbol, and t1, ..., tn are terms, then f(t1,...,tn) is a term. In particular, symbols denoting individual constants are nullary function symbols, and thus are terms.
Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.
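The inductive definition above translates directly into a recursive data type. The following sketch (in Python; the class names and the example term are illustrative conventions, not from the source) represents terms as trees, with constants as the zero-argument case of rule 2:

    from dataclasses import dataclass
    from typing import Tuple, Union

    @dataclass(frozen=True)
    class Var:
        """Rule 1: any variable symbol is a term."""
        name: str

    @dataclass(frozen=True)
    class Func:
        """Rule 2: f(t1, ..., tn); a constant symbol is the n = 0 case."""
        symbol: str
        args: Tuple["Term", ...] = ()

    Term = Union[Var, Func]

    # the term f(g(x), c), where f is binary, g is unary, and c is a constant
    t = Func("f", (Func("g", (Var("x"),)), Func("c")))

Because the type has no constructor for predicate symbols, the closure condition is enforced automatically: no expression involving a predicate symbol can be built as a Term.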
Formulas
The set of formulas (also called well-formed formulas or WFFs) is inductively defined by the following rules:
Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1,...,tn) is a formula.
Equality. If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula.
Negation. If φ is a formula, then ¬φ is a formula.
Binary connectives. If φ and ψ are formulas, then (φ → ψ) is a formula. Similar rules apply to other binary logical connectives.
Quantifiers. If φ is a formula and x is a variable, then ∀x φ (for all x, φ holds) and ∃x φ (there exists x such that φ) are formulas.
Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas.
For example:
∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀x x → is not a formula, although it is a string of symbols from the alphabet.
The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.
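One way to see unique readability concretely is to represent formulas as trees rather than strings; each tree then determines exactly one fully parenthesized string. A minimal sketch in Python (the tuple encoding and the printer are illustrative conventions, not from the source):

    # A formula is a nested tuple: ("pred", P, *terms), ("not", f),
    # ("and"|"or"|"implies", f, g), or ("forall"|"exists", x, f).
    def show(phi):
        """Render the unique fully parenthesized string for a formula tree."""
        op = phi[0]
        if op == "pred":
            return f"{phi[1]}({', '.join(phi[2:])})"
        if op == "not":
            return f"(¬{show(phi[1])})"
        if op in ("and", "or", "implies"):
            sym = {"and": "∧", "or": "∨", "implies": "→"}[op]
            return f"({show(phi[1])} {sym} {show(phi[2])})"
        sym = "∀" if op == "forall" else "∃"
        return f"({sym}{phi[1]} {show(phi[2])})"

    phi = ("forall", "x", ("implies", ("pred", "P", "x"), ("pred", "Q", "x")))
    print(show(phi))   # (∀x (P(x) → Q(x)))

Going in the other direction, from strings back to trees, is exactly what a proof of unique readability guarantees is possible in only one way.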
Notational conventions
For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:
¬ is evaluated first
∧ and ∨ are evaluated next
Quantifiers are evaluated next
→ is evaluated last.
Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula:
¬∀x P(x) → ∃x ¬P(x)
might be written as:
(¬[∀x P(x)]) → ∃x [¬P(x)]
In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation.
The definitions above use infix notation for binary connectives such as →. A less common convention is Polish notation, in which one writes ∧, ∨, and so on in front of their arguments rather than between them. This convention is advantageous in that it allows all punctuation symbols to be discarded. As such, Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula:
∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z)))
becomes
∀x∀y→Pfx¬→PxQfyxz
Free and bound variables
In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears. Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃x or ∀x. Finally, x is bound in φ if all occurrences of x in φ are bound.
Intuitively, a variable symbol is free in a formula if at no point is it quantified: in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows.
Atomic formulas If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.
Negation x occurs free in ¬φ if and only if x occurs free in φ. x occurs bound in ¬φ if and only if x occurs bound in φ
Binary connectives x occurs free in (φ → ψ) if and only if x occurs free in either φ or ψ. x occurs bound in (φ → ψ) if and only if x occurs bound in either φ or ψ. The same rule applies to any other binary connective in place of →.
Quantifiers x occurs free in ∀y φ, if and only if x occurs free in φ and x is a different symbol from y. Also, x occurs bound in ∀y φ, if and only if x is y or x occurs bound in φ. The same rule holds with ∃y in place of ∀y.
For example, in ∀x ∀y (P(x) → Q(x, f(x), z)), x and y occur only bound, z occurs only free, and w is neither because it does not occur in the formula.
Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free while the second one, as argument of Q, is bound.
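The inductive clauses above can be read off as a short recursive function. A sketch in Python, reusing the tuple encoding of the earlier sketch and simplifying terms to bare variable names (an assumption made for brevity; function symbols inside terms are omitted):

    # ("pred", P, *vars): atomic formula with terms simplified to variable
    # names; ("not", f); ("and"|"or"|"implies", f, g); ("forall"|"exists", x, f).
    def free_vars(phi):
        """The set of variables occurring free in phi, by the clauses above."""
        op = phi[0]
        if op == "pred":
            return set(phi[2:])                 # atomic: every variable is free
        if op == "not":
            return free_vars(phi[1])
        if op in ("and", "or", "implies"):
            return free_vars(phi[1]) | free_vars(phi[2])
        return free_vars(phi[2]) - {phi[1]}     # forall/exists bind phi[1]

    # forall x forall y (P(x) -> Q(x, z)): x and y bound, z free, w absent
    phi = ("forall", "x", ("forall", "y",
           ("implies", ("pred", "P", "x"), ("pred", "Q", "x", "z"))))
    assert free_vars(phi) == {"z"}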
A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence will be either true or false in a given interpretation.
Example: ordered abelian groups
In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:
The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z.
The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y.
The expression ∀x ∀y ≤(+(x, y), z) is a formula, which is usually written as ∀x ∀y (x + y ≤ z). This formula has one free variable, z.
The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written ∀x ∀y (x + y = y + x).
Semantics
An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.)
First-order structures
The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a domain of discourse D and an interpretation function mapping non-logical symbols to predicates, functions, and constants.
The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃x P(x) states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers.
Non-logical symbols are interpreted as follows:
The interpretation of an n-ary function symbol is a function from Dn to D. For example, if the domain of discourse is the set of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function I(f) which, in this interpretation, is addition.
The interpretation of a constant symbol (a function symbol of arity 0) is a function from D0 (a set whose only member is the empty tuple) to D, which can be simply identified with an object in D. For example, an interpretation may assign the value I(c) = 10 to the constant symbol c.
The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of D, giving the arguments for which the predicate is true. For example, an interpretation of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than its second argument. Equivalently, predicate symbols may be assigned Boolean-valued functions from Dn to {true, false}.
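An interpretation of this kind can be written down directly as data: a domain plus maps from symbols to functions and relations. A minimal Python sketch mirroring the examples above (the symbol names f, c, P and the finite slice of the integers are illustrative assumptions; a genuinely infinite domain cannot be stored this way):

    # domain of discourse: a finite slice of the integers, for checkability
    domain = range(-20, 21)
    funcs = {("f", 2): lambda a, b: a + b,   # f interpreted as addition
             ("c", 0): lambda: 10}           # constant = 0-ary function
    preds = {("P", 2): {(a, b) for a in domain for b in domain if a < b}}

    # the term f(c, c) denotes 20 under this interpretation
    assert funcs[("f", 2)](funcs[("c", 0)](), funcs[("c", 0)]()) == 20
    # P(c, f(c, c)) is true here, since 10 < 20
    assert (10, 20) in preds[("P", 2)]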
Evaluation of truth values
A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x. The truth value of this formula changes depending on the values that x and y denote.
First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:
Variables. Each variable x evaluates to μ(x)
Functions. Given terms t1, ..., tn that have been evaluated to elements d1, ..., dn of the domain of discourse, and an n-ary function symbol f, the term f(t1, ..., tn) evaluates to (I(f))(d1, ..., dn), where I(f) is the interpretation of f.
Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema.
Atomic formulas (1). A formula P(t1, ..., tn) is associated the value true or false depending on whether (v1, ..., vn) ∈ I(P), where v1, ..., vn are the evaluations of the terms t1, ..., tn and I(P) is the interpretation of P, which by assumption is a subset of Dn.
Atomic formulas (2). A formula t1 = t2 is assigned true if t1 and t2 evaluate to the same object of the domain of discourse (see the section on equality below).
Logical connectives. A formula in the form ¬φ, (φ → ψ), etc. is evaluated according to the truth table for the connective in question, as in propositional logic.
Existential quantifiers. A formula ∃x φ(x) is true according to M and μ if there exists an evaluation μ′ of the variables that differs from μ at most regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ′. This formal definition captures the idea that ∃x φ(x) is true if and only if there is a way to choose a value for x such that φ(x) is satisfied.
Universal quantifiers. A formula ∀x φ(x) is true according to M and μ if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ′ that differs from μ at most on the value of x. This captures the idea that ∀x φ(x) is true if every possible choice of a value for x causes φ(x) to be true.
If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′.
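Over a finite domain, the T-schema clauses can be implemented verbatim, with the quantifier clauses becoming loops over the domain. A hedged Python sketch (the tuple encoding of formulas, the restriction of terms to bare variables, and the example interpretation are all illustrative assumptions):

    # ("pred", P, *vars), ("not", f), ("and"|"or"|"implies", f, g),
    # ("forall"|"exists", x, f); preds maps a predicate name to a set of tuples.
    def holds(phi, domain, preds, mu):
        """Evaluate phi in the interpretation (domain, preds) under the
        variable assignment mu, clause by clause along the T-schema."""
        op = phi[0]
        if op == "pred":                      # atomic: tuple membership
            return tuple(mu[v] for v in phi[2:]) in preds[phi[1]]
        if op == "not":
            return not holds(phi[1], domain, preds, mu)
        if op == "and":
            return holds(phi[1], domain, preds, mu) and holds(phi[2], domain, preds, mu)
        if op == "or":
            return holds(phi[1], domain, preds, mu) or holds(phi[2], domain, preds, mu)
        if op == "implies":
            return (not holds(phi[1], domain, preds, mu)) or holds(phi[2], domain, preds, mu)
        x, body = phi[1], phi[2]              # quantifiers: loop over the domain
        if op == "forall":
            return all(holds(body, domain, preds, {**mu, x: d}) for d in domain)
        if op == "exists":
            return any(holds(body, domain, preds, {**mu, x: d}) for d in domain)
        raise ValueError(f"unknown connective: {op}")

    # "every philosopher is a scholar" on a two-element domain
    domain = {"socrates", "rock"}
    preds = {"Phil": {("socrates",)}, "Schol": {("socrates",)}}
    phi = ("forall", "x",
           ("implies", ("pred", "Phil", "x"), ("pred", "Schol", "x")))
    assert holds(phi, domain, preds, {})

Over infinite domains the quantifier clauses cannot, of course, be evaluated by exhaustive search; the definition remains mathematically meaningful but is no longer an algorithm.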
There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol cd is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:
Existential quantifiers (alternate). A formula ∃x φ(x) is true according to M if there is some d in the domain of discourse such that φ(cd) holds. Here φ(cd) is the result of substituting cd for every free occurrence of x in φ.
Universal quantifiers (alternate). A formula ∀x φ(x) is true according to M if, for every d in the domain of discourse, φ(cd) is true according to M.
This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
Validity, satisfiability, and logical consequence
If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the symbol ⊨ from model theory, where M ⊨ φ denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M's domain to variable symbols of φ".
Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x1, ..., xn is said to be satisfied by an interpretation if the formula φ remains true regardless of which individuals from the domain of discourse are assigned to its free variables x1, ..., xn. This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀x1 ... ∀xn φ(x1, ..., xn) is satisfied.
A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic.
A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.
Algebraizations
An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators:
Cylindric algebra, by Alfred Tarski and colleagues;
Polyadic algebra, by Paul Halmos;
Predicate functor logic, mainly due to Willard Quine.
These algebras are all lattices that properly extend the two-element Boolean algebra.
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra. This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions.
First-order theories, models, and elementary classes
A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived.
A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory.
Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models.
A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete.
Empty domains
The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class.
There are several difficulties with empty domains, however:
Many common rules of inference are valid only when the domain of discourse is required to be nonempty. One example is the rule stating that φ ∨ ∃x ψ implies ∃x (φ ∨ ψ) when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted.
The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains.
Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
Deductive systems
A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs but are completely formalized unlike natural-language mathematical proofs.
A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective.
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area.
In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B.
Rules of inference
A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion.
For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.)
To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃x (x = y), in the signature of (0, 1, +, ×, =) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃x (x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃z (z = x + 1), which is again logically valid.
The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
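The renaming step can be made mechanical. Below is a sketch in Python of capture-avoiding substitution over the tuple encoding used in earlier sketches, with terms simplified to single variable names (so "capture" means a quantifier binding the substituted variable); the fresh names z0, z1, ... are assumed not to occur in the input:

    import itertools

    # ("pred", P, *vars), ("not", f), ("and"|"or"|"implies", f, g),
    # ("forall"|"exists", y, f); terms are bare variable names.
    def subst(phi, x, t, fresh=itertools.count()):
        """phi[t/x]: replace free occurrences of x by the variable t,
        renaming bound variables so that t is never captured."""
        op = phi[0]
        if op == "pred":
            return (op, phi[1]) + tuple(t if v == x else v for v in phi[2:])
        if op == "not":
            return (op, subst(phi[1], x, t, fresh))
        if op in ("and", "or", "implies"):
            return (op, subst(phi[1], x, t, fresh), subst(phi[2], x, t, fresh))
        y, body = phi[1], phi[2]
        if y == x:                 # x is bound below this point: nothing to do
            return phi
        if y == t:                 # t would be captured: rename the bound variable
            z = f"z{next(fresh)}"  # assumed not to occur anywhere in phi
            body, y = subst(body, y, z, fresh), z
        return (op, y, subst(body, x, t, fresh))

    # phi is  exists x (x = y), with equality encoded as a predicate "eq";
    # substituting x for y must not produce  exists x (x = x)
    phi = ("exists", "x", ("pred", "eq", "x", "y"))
    print(subst(phi, "y", "x"))    # ('exists', 'z0', ('pred', 'eq', 'z0', 'x'))

The printed result corresponds to ∃z0 (z0 = x), matching the renaming described in the example above.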
Hilbert-style systems and natural deduction
A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference.
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.
Sequent calculus
The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form:
A1, ..., An ⊢ B1, ..., Bk,
where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that (A1 ∧ ... ∧ An) implies (B1 ∨ ... ∨ Bk).
Tableaux method
Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬A at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D and children C and D.
Resolution
The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving.
The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses A1 ∨ ... ∨ Ak ∨ C and B1 ∨ ... ∨ Bl ∨ ¬C, the conclusion A1 ∨ ... ∨ Ak ∨ B1 ∨ ... ∨ Bl can be obtained.
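The unification step that first-order resolution relies on can itself be sketched in a few lines. A hedged Python sketch of Robinson-style unification for first-order terms (the term encoding is an illustrative convention: bare strings are variables, tuples are function applications, and constants are zero-argument tuples):

    def unify(s, t, theta=None):
        """Return a most general unifier of terms s and t, extending the
        substitution theta, or None if no unifier exists."""
        if theta is None:
            theta = {}
        s, t = walk(s, theta), walk(t, theta)
        if s == t:
            return theta
        if isinstance(s, str):                  # s is a variable
            return bind(s, t, theta)
        if isinstance(t, str):                  # t is a variable
            return bind(t, s, theta)
        if s[0] != t[0] or len(s) != len(t):
            return None                         # clash: different symbols/arities
        for a, b in zip(s[1:], t[1:]):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta

    def walk(u, theta):
        """Follow variable bindings in theta until a non-bound term is reached."""
        while isinstance(u, str) and u in theta:
            u = theta[u]
        return u

    def occurs(x, u, theta):
        """Occurs check: does variable x appear inside term u under theta?"""
        u = walk(u, theta)
        if u == x:
            return True
        return not isinstance(u, str) and any(occurs(x, a, theta) for a in u[1:])

    def bind(x, u, theta):
        return None if occurs(x, u, theta) else {**theta, x: u}

    # P(x, f(a)) and P(b, f(y)) unify with x = b and y = a; the predicate
    # symbol P is treated here just like a function symbol at the term level
    print(unify(("P", "x", ("f", ("a",))), ("P", ("b",), ("f", "y"))))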
Provable identities
Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form. Some provable identities include:
P ∧ ∃x Q(x) ⇔ ∃x (P ∧ Q(x)) (where x must not occur free in P)
P ∨ ∀x Q(x) ⇔ ∀x (P ∨ Q(x)) (where x must not occur free in P)
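Such identities can be spot-checked mechanically over small finite interpretations; this does not prove them, but it is a useful sanity check. A Python sketch testing the first identity above, with P treated as a fixed truth value (since x is not free in P) and Q ranging over every unary predicate on a two-element domain (all names illustrative):

    # check  P ∧ ∃x Q(x)  ⇔  ∃x (P ∧ Q(x))  on a two-element domain
    domain = [0, 1]
    for p in (False, True):                      # P: x not free, so a constant
        for q_true in (set(), {0}, {1}, {0, 1}): # every interpretation of Q
            lhs = p and any(d in q_true for d in domain)
            rhs = any(p and (d in q_true) for d in domain)
            assert lhs == rhs
    print("identity holds on this domain")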
Equality and its axioms
There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:
Reflexivity. For each variable x, x = x.
Substitution for functions. For all variables x and y, and any function symbol f,
x = y → f(..., x, ...) = f(..., y, ...).
Substitution for formulas. For any variables x and y and any formula φ(z) with a free variable z:
x = y → (φ(x) → φ(y)).
These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula:
φ(z): f(..., x, ...) = f(..., z, ...)
Then
x = y → (f(..., x, ...) = f(..., x, ...) → f(..., x, ...) = f(..., y, ...)).
Since x = y is given, and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we have f(..., x, ...) = f(..., y, ...).
Many other properties of equality are consequences of the axioms above, for example:
Symmetry. If x = y then y = x.
Transitivity. If x = y and y = z then x = z.
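A sketch of how the first of these follows from the axiom schemas: instantiating the substitution schema with φ(z) given by z = x yields x = y → (x = x → y = x); since x = x holds by reflexivity, two applications of modus ponens (given x = y) yield y = x.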
First-order logic without equality
An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation.
When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.
Defining equality within a theory
If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
Some theories allow other ad hoc definitions of equality:
In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.
In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → x = y), with an alternative formulation ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → ∀z (x ∈ z ↔ y ∈ z)), which says that if sets x and y have the same elements, then they also belong to the same sets.
Metalogical properties
One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.
Completeness and undecidability
Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ.
Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem.
There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics.
The Löwenheim–Skolem theorem
The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable).
The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox.
The compactness theorem
The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models.
The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures).
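A sketch of the standard argument: for each n, let λn be a sentence asserting that at least n distinct elements exist, such as ∃x1 ... ∃xn (x1 ≠ x2 ∧ x1 ≠ x3 ∧ ... ∧ xn−1 ≠ xn). If a theory T has arbitrarily large finite models, then every finite subset of T ∪ {λ1, λ2, ...} has a model, so by compactness the whole set has a model, which must be infinite.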
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in the logic of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, however, but not with only existential set quantifiers, as the fragment Σ11 also enjoys compactness.
Lindström's theorem
Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:
A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic.
A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic.
Limitations
Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers ∃≥n and ∃≤n.
Expressiveness
The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.
Formalizing natural languages
First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.).
Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic".
Restrictions, extensions, and variations
There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.
Restricted languages
First-order logic can be studied in languages with fewer logical symbols than were described above:
Because ∃x φ can be expressed as ¬∀x ¬φ, and ∀x φ can be expressed as ¬∃x ¬φ, either of the two quantifiers ∃ and ∀ can be dropped.
Since φ ∨ ψ can be expressed as ¬(¬φ ∧ ¬ψ), and φ ∧ ψ can be expressed as ¬(¬φ ∨ ¬ψ), either ∨ or ∧ can be dropped. In other words, it is sufficient to have ¬ and ∨, or ¬ and ∧, as the only logical connectives.
Similarly, it is sufficient to have only ¬ and → as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator.
It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol 0 one may use a predicate 0(x) (interpreted as x = 0) and replace every predicate such as P(0, y) with ∀x (0(x) → P(x, y)). A function such as f(x1, x2, ..., xn) will similarly be replaced by a predicate F(x1, x2, ..., xn, y) interpreted as y = f(x1, x2, ..., xn). This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics.
Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.
Many-sorted logic
Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic.
When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic.
One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P1 and P2 and the axiom:
∀x (P1(x) ∨ P2(x)) ∧ ¬∃x (P1(x) ∧ P2(x)).
Then the elements satisfying P1 are thought of as elements of the first sort, and elements satisfying P2 as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes:
∃x (P1(x) ∧ φ(x)).
Additional quantifiers
Additional quantifiers can be added to first-order logic.
Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x). This notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (P(x) ∧ ∀y (P(y) → (y = x))).
First-order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others.
Bounded quantifiers are often used in the study of set theory or arithmetic.
Infinitary logics
Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed.
The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is Lωω.
In the logic L∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as Lκω. For example, Lω1ω permits countable conjunctions and disjunctions.
The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.
Non-classical and modal logics
Intuitionistic first-order logic uses intuitionistic rather than classical reasoning; for example, ¬¬φ need not be equivalent to φ, and ¬∀x φ is in general not equivalent to ∃x ¬φ.
First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain, and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. This allows us to model cases where, for example, Alex is a philosopher, but might have been a mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all.
First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional calculus.
Fixpoint logic
Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.
Higher-order logics
The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus
∃a (Phil(a))
is a legal first-order formula, but
∃Phil (Phil(a))
is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified.
Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics.
Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics.
Automated theorem proving and formal methods
Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search.
The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct.
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, results are often formalized as a series of lemmas, for which derivations can be constructed separately.
Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences.
For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, and computational complexity bounds for the problem have been established.
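The decidability of model checking over finite structures is easy to see from a brute-force evaluator, sketched below. The tuple encoding of formulas is an ad-hoc illustration, not a standard format, and practical model checkers use far more efficient algorithms than this exponential recursion.

```python
# Minimal sketch of model checking for first-order logic: decide whether a
# finite structure satisfies a formula by recursion over the domain.

def satisfies(structure, formula, env=None):
    env = env or {}
    domain, relations = structure
    op = formula[0]
    if op == "rel":                         # ("rel", name, var1, var2, ...)
        _, name, *vars_ = formula
        return tuple(env[v] for v in vars_) in relations[name]
    if op == "not":
        return not satisfies(structure, formula[1], env)
    if op == "and":
        return (satisfies(structure, formula[1], env)
                and satisfies(structure, formula[2], env))
    if op == "exists":                      # ("exists", var, body)
        _, var, body = formula
        return any(satisfies(structure, body, {**env, var: d}) for d in domain)
    if op == "forall":
        _, var, body = formula
        return all(satisfies(structure, body, {**env, var: d}) for d in domain)
    raise ValueError(f"unknown connective: {op}")

# Structure: domain {1, 2, 3} with the usual strict order as relation "lt".
structure = ({1, 2, 3}, {"lt": {(1, 2), (1, 3), (2, 3)}})

# "Every element has a strict upper bound" -- false here, since 3 is maximal.
phi = ("forall", "x", ("exists", "y", ("rel", "lt", "x", "y")))
print(satisfies(structure, phi))  # False
```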
See also
ACL2 — A Computational Logic for Applicative Common Lisp
Aristotelian logic
Equiconsistency
Ehrenfeucht–Fraïssé game
Extension by definitions
Extension (predicate logic)
Herbrandization
List of logic symbols
Lojban
Löwenheim number
Nonfirstorderizability
Prenex normal form
Prior Analytics
Prolog
Relational algebra
Relational model
Skolem normal form
Tarski's World
Truth table
Type (model theory)
Notes
References
Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer.
Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Raff, Paul (2007); "A formally verified proof of the prime number theorem", ACM Transactions on Computational Logic, vol. 9 no. 1
Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications (Distributed by the University of Chicago Press)
Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from the French and German editions by Otto Bird
Ferreirós, José (2001); The Road to Modern Logic — An Interpretation, Bulletin of Symbolic Logic, Volume 7, Issue 4, 2001, pp. 441–484
Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition)
Hodges, Wilfrid (2001); "Classical Logic I: First-Order Logic", in Goble, Lou (ed.); The Blackwell Guide to Philosophical Logic, Blackwell
Ebbinghaus, Heinz-Dieter; Flum, Jörg; and Thomas, Wolfgang (1994); Mathematical Logic, Undergraduate Texts in Mathematics, Berlin, DE/New York, NY: Springer-Verlag, Second Edition
Tarski, Alfred and Givant, Steven (1987); A Formalization of Set Theory without Variables, Vol. 41 of American Mathematical Society colloquium publications, Providence, RI: American Mathematical Society
External links
Stanford Encyclopedia of Philosophy: Shapiro, Stewart; "Classical Logic". Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style.
Magnus, P. D.; forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic.
Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC. Principia Mathematica modernized.
Podnieks, Karl; Introduction to mathematical logic
Cambridge Mathematical Tripos notes (typeset by John Fremlin). These notes cover part of a past Cambridge Mathematical Tripos course taught to undergraduate students (usually) within their third year. The course is entitled "Logic, Computation and Set Theory" and covers Ordinals and cardinals, Posets and Zorn's Lemma, Propositional logic, Predicate logic, Set theory and Consistency issues related to ZFC and other set theories.
Tree Proof Generator can validate or invalidate formulas of first-order logic through the semantic tableaux method.
Systems of formal logic
Predicate logic
Model theory | First-order logic | Mathematics | 14,325 |
3,278,092 | https://en.wikipedia.org/wiki/Sporotrichosis | Sporotrichosis, also known as rose handler's disease, is a fungal infection that may be localised to skin, lungs, bone and joint, or become systemic. It presents with firm painless nodules that later ulcerate. Following initial exposure to Sporothrix schenckii, the disease typically progresses over a period of a week to several months. Serious complications may develop in people who have a weakened immune system.
Sporotrichosis is caused by fungi of the S. schenckii species complex. Because S. schenckii is naturally found in soil, hay, sphagnum moss, and plants, it most often affects farmers, gardeners, and agricultural workers. It enters through small cuts in the skin to cause a fungal infection. In cases of sporotrichosis affecting the lungs, the fungal spores enter by inhalation. Sporotrichosis can be acquired by handling cats with the disease; it is an occupational hazard for veterinarians.
Treatment depends on the site and extent of infection. Topical antifungals may be applied to skin lesions. Deep infection in the lungs may require surgery. Systemic medications used include itraconazole, posaconazole and amphotericin B. With treatment, most people will recover, but an immunocompromised status and systemic infection carry a worse prognosis.
S. schenckii, the causal fungus, is found worldwide. The species was named for Benjamin Schenck, a medical student who, in 1896, was the first to isolate it from a human specimen.
Sporotrichosis has been reported in cats, mules, dogs, mice and rats.
Signs and symptoms
Cutaneous or skin sporotrichosis
This is the most common form of the disease. Symptoms include nodular lesions or bumps in the skin at the point of entry, and also along lymph nodes and vessels. The lesion starts off small and painless, and ranges in color from pink to purple. Left untreated, the lesion becomes larger and looks similar to a boil, and more lesions will appear, until a chronic ulcer develops.
Usually, cutaneous sporotrichosis lesions occur on the fingers, hands, and arms.
Pulmonary sporotrichosis
This rare form of the disease occurs when S. schenckii spores are inhaled. Symptoms of pulmonary sporotrichosis include productive coughing, nodules and cavitations of the lungs, fibrosis, and swollen hilar lymph nodes. Patients with this form of sporotrichosis are susceptible to developing tuberculosis and pneumonia.
Disseminated sporotrichosis
When the infection spreads from the initial site to secondary sites in the body, the disease develops into an uncommon and potentially life-threatening form, called disseminated sporotrichosis. The infection can spread to joints and bones (called osteoarticular sporotrichosis) as well as the central nervous system and the brain (called sporotrichosis meningitis).
Some symptoms of disseminated sporotrichosis include weight loss, anorexia, and bone lesions.
Complications
Open sporotrichosis lesions on the skin will on occasion become superinfected with bacteria. Cellulitis may also occur.
Diagnosis
Sporotrichosis is an acute infection with slow progression and often subtle symptoms. It is often difficult to diagnose, as many other diseases share similar symptoms and therefore must be ruled out.
Patients with sporotrichosis will have antibodies against the fungus S. schenckii; however, due to variability in sensitivity and specificity, antibody detection may not be a reliable diagnostic test for this disease. The confirming diagnosis remains culturing the fungus from the skin, sputum, synovial fluid, and cerebrospinal fluid. Smears should be taken from any draining fistulas or ulcers.
Cats with sporotrichosis are unique in that the exudate from their lesions may contain numerous infectious organisms. This makes cytological evaluation of exudate a valuable diagnostic tool in this species. Exudate is pyogranulomatous, and phagocytic cells may be packed with yeast forms. These yeast cells are variable in size; many are cigar-shaped.
Differential diagnosis
Differential diagnoses include: leishmaniasis, nocardiosis, Mycobacterium marinum infection, cat-scratch disease, leprosy, syphilis, sarcoidosis and tuberculosis.
Prevention
The majority of sporotrichosis cases occur when the fungus is introduced through a cut or puncture in the skin while handling vegetation containing the fungal spores. Prevention of this disease includes wearing long sleeves and gloves while working with soil, hay bales, rose bushes, pine seedlings, and sphagnum moss.
The risk of sporotrichosis in cats is increased in male cats that roam outdoors. Accordingly, the risk may be reduced by keeping cats indoors or neutering them. Isolating infected animals can also be a preventive measure. The risk of spread from infected cats to humans can be reduced by appropriate biosafety measures, including wearing personal protective equipment when handling a cat with suspected sporotrichosis and by washing hands, arms and clothing after handling the cat.
Treatment
Treatment of sporotrichosis depends on the severity and location of the disease. The following are treatment options for this condition:
Oral potassium iodide
Potassium iodide is an antifungal drug that is widely used as a treatment for cutaneous sporotrichosis. Despite its wide use, there is no high-quality evidence for or against this practice. Further studies are needed to assess the efficacy and safety of oral potassium iodide in the treatment of sporotrichosis.
Itraconazole (Sporanox) and fluconazole
These are antifungal drugs. Itraconazole is currently the drug of choice and is significantly more effective than fluconazole. Fluconazole should be reserved for patients who cannot tolerate itraconazole.
Amphotericin B
This antifungal medication is delivered intravenously. Many patients, however, cannot tolerate amphotericin B due to its potential side effects of fever, nausea, and vomiting.
Lipid formulations of amphotericin B are usually recommended instead of amphotericin B deoxycholate because of a better adverse-effect profile. Amphotericin B can be used for severe infection during pregnancy. For children with disseminated or severe disease, amphotericin B deoxycholate can be used initially, followed by itraconazole.
In cases of sporotrichosis meningitis, the patient may be given a combination of amphotericin B and flucytosine (5-fluorocytosine).
Terbinafine
Daily dosages of 500 mg and 1000 mg of terbinafine for twelve to 24 weeks have been used to treat cutaneous sporotrichosis.
Newer triazoles
Several studies have shown that posaconazole has in vitro activity similar to that of amphotericin B and itraconazole; therefore, it shows promise as an alternative therapy. However, voriconazole susceptibility varies. Because the correlation between in vitro data and clinical response has not been demonstrated, there is insufficient evidence to recommend either posaconazole or voriconazole for treatment of sporotrichosis at this time.
Surgery
In cases of bone infection and cavitary nodules in the lungs, surgery may be necessary.
Heat therapy
Heat creates higher tissue temperatures, which may inhibit fungal growth while the immune system counteracts the infection. The "pocket warmer" used for this purpose has the advantage of being able to maintain a constant temperature of 44–45 °C on the skin surface for several hours, while permitting unrestricted freedom of movement. The duration of treatment depends on the type, location, depth, and size of the lesion. Generally, local application for 1–2 hours per day, or during sleep, for 5–6 weeks seems to be sufficient.
Other animals
Sporotrichosis can be diagnosed in domestic and wild mammals. In veterinary medicine it is most frequently seen in cats and horses. Cats have a particularly severe form of cutaneous sporotrichosis. Infected cats may exhibit abscesses, cellulitis, or draining wounds that fail to respond to antibiotic treatment.
Sporotrichosis can spread from nonhuman animals to humans (zoonosis). Infected cats in particular exude large quantities of Sporothrix organisms from their skin lesions and can spread the infection to people who handle them. Although cats are the most common animal source, the infection has also been known to spread to humans from dogs, rats, squirrels, and armadillos.
See also
Mucormycosis
List of cutaneous conditions
References
External links
Animal fungal diseases
Bird diseases
Bovine diseases
Cat diseases
Horse diseases
Mycosis-related cutaneous conditions
Rodent diseases
Sheep and goat diseases
Swine diseases
Zoonoses
Fungal diseases | Sporotrichosis | Biology | 1,872 |
34,863,807 | https://en.wikipedia.org/wiki/Vittarioideae | Vittarioideae is a subfamily of the fern family Pteridaceae, in the order Polypodiales. The subfamily includes the previous families Adiantaceae (adiantoids or maidenhair ferns) and Vittariaceae (vittarioids or shoestring ferns).
Description
The subfamily includes two distinct groups of ferns: the adiantoids, consisting of the single genus Adiantum, and the vittarioids, several genera, including Vittaria, which typically have highly reduced, usually entire leaves and an epiphytic habit. The ferns historically considered as Adiantum include both petrophilic and terrestrial plants; their sori are most often marginal, with a false indusium formed from the reflexed leaf margin. The vittarioid ferns are primarily epiphytic in tropical regions, and all have simple leaves with sori that follow the veins and lack true indusia. The subfamily also includes a species, Vittaria appalachiana, that is highly unusual in that the sporophyte stage of the life cycle is absent. This species consists solely of photosynthetic gametophytes that reproduce asexually.
Taxonomy
Molecular phylogenetic analysis demonstrated that the vittarioid ferns were nested within the genus Adiantum as it was originally circumscribed, making that genus paraphyletic. In the Pteridophyte Phylogeny Group classification of 2016 (PPG I), the group is treated as the subfamily Vittarioideae of the family Pteridaceae.
The following diagram shows a likely phylogenetic relationship between the Vittarioideae and other subfamilies of the Pteridaceae.
History
The first suprageneric classification based on Vittaria was made by Carl Borivoj Presl in 1836, who erected the tribe Vittariaceae to contain the genera Vittaria and Prosaptia, the latter now included in the grammitid ferns. He invented the new genus Haplopteris to accommodate another group of simple-leaved ferns separated from Pteris, but placed it in tribe Adiantaceae instead, due to the location of its sori just behind the leaf margin.
In his 1911 treatment of the tribe, Ralph Benedict adopted a circumscription similar to modern treatments, within which he recognized the genera Ananthacorus, Anetium, Antrophyum, Hecistopteris, Monogramma, Polytaenium, and Vittaria. He described Radiovittaria as a subgenus of Vittaria, subsumed Scoliosorus within Polytaenium as doubtfully worthy of subgeneric rank, while Rheopteris had not yet been discovered. Haplopteris he explicitly synonymized with Vittaria in 1914.
Carl Christensen used the name "Vittarioideae" in Verdoorn's Manual of Pteridology in 1938, but did not include a description, leaving it nomenclaturally invalid. Ren-Chang Ching raised Vittariaceae to the rank of a family in 1940.
The first well-sampled molecular phylogenetic study of the vittarioids was based on the chloroplast gene rbcL. In this study, it was found that the type species of Monogramma is embedded in Haplopteris; the segregation of Vaginularia from Monogramma was also supported, as members of Vaginularia formed a clade sister to Rheopteris and distant from Monogramma sensu stricto. A later molecular phylogeny, published in 2016, established the genus Antrophyopsis (formerly a subgenus of Antrophyum) for three species placed in Scoliosorus but more distant from the type of that genus than Antrophyum. This treatment also sank Anetium into Polytaenium and Monogramma into Haplopteris. Since the name Monogramma has taxonomic priority over Haplopteris, a proposal to reject Monogramma in favor of Haplopteris has been put forth to conserve the name and comparatively stable circumscription of Haplopteris.
Genera
The following genera are recognized in the Pteridophyte Phylogeny Group classification of 2016 (PPG I):
Adiantum L.
Ananthacorus Underw. & Maxon
Antrophyopsis (Benedict) Schuettp.
Antrophyum Kaulf.
Haplopteris C.Presl
Hecistopteris J.Sm.
Polytaenium Desv.
Radiovittaria (Benedict) E.H.Crane
Rheopteris Alston
Scoliosorus T.Moore
Vaginularia Fée
Vittaria Sm.
The following phylogeny for the currently recognized genera of the subfamily was presented by Schuettpelz et al.:
References
Pteridaceae
Plant subfamilies | Vittarioideae | Biology | 1,030 |
7,257,439 | https://en.wikipedia.org/wiki/Ekman%20current%20meter | The Ekman current meter is a mechanical flowmeter invented by Vagn Walfrid Ekman, a Swedish oceanographer, in 1903. It comprises a propeller with a mechanism to record the number of revolutions, a compass and a recorder with which to record the direction, and a vane that orients the instrument so the propeller faces the current. It is mounted on a free-swinging vertical axis suspended from a wire and has a weight attached below.
The balanced propeller, with four to eight blades, rotates inside a protective ring. The position of a lever controls the propeller. In the down position the propeller is stopped while the instrument is lowered; after reaching the desired depth, a weight called a messenger is dropped to move the lever into the middle position, which allows the propeller to turn freely. When the measurement has been taken, another weight is dropped to push the lever to its highest position, at which the propeller is again stopped.
The propeller revolutions are counted via a simple mechanism that gears down the revolutions and counts them on an indicator dial. The direction is indicated by a device connected to the directional vane that drops a small metal ball about every 100 revolutions. The ball falls into one of thirty-six compartments in the bottom of the compass box that indicate direction in increments of 10 degrees. If the direction changes while the measurement is being performed the balls will drop into separate compartments and a weighted mean is taken to determine the average current direction.
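The averaging step can be illustrated with a short calculation. The sketch below assumes the 36 compartments at 10-degree increments described above and uses a vector (circular) mean weighted by ball counts, since a plain arithmetic mean fails for directions straddling north; the function name is ours.

```python
import math

def mean_direction(ball_counts):
    """Weighted circular mean. ball_counts: {compartment_degrees: balls}."""
    x = sum(n * math.cos(math.radians(d)) for d, n in ball_counts.items())
    y = sum(n * math.sin(math.radians(d)) for d, n in ball_counts.items())
    return math.degrees(math.atan2(y, x)) % 360

# Seven balls dropped near north during one measurement:
print(round(mean_direction({350: 3, 0: 2, 10: 2})))  # -> 359
```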
This is a simple and reliable instrument whose main disadvantage is that it must be hauled up to be read and reset after each measurement. Ekman solved this problem by designing a repeating current meter which could take up to forty-seven measurements before needing to be hauled up and reset. This device used a more complicated system of dropping small numbered metal balls at regular intervals to record the separate measurements.
Bibliography
Harald U. Sverdrup, Martin W. Johnson, and Richard H. Fleming, The Oceans: Their Physics, Chemistry, and General Biology, Prentice-Hall, Inc., 1942
See also
Oceanic current
Ekman spiral
Ekman water bottle
Physical oceanography
32,474,913 | https://en.wikipedia.org/wiki/Biofabrication%20%28journal%29 | Biofabrication is a peer-reviewed scientific journal covering research that leads to the fabrication of advanced biological models, medical therapeutic products, and non-medical biological systems. The editor-in-chief is Wei Sun (Drexel University).
Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2023 impact factor of 8.2.
References
External links
IOP Publishing academic journals
Materials science journals
Academic journals established in 2009
Quarterly journals
English-language journals
Biomaterials | Biofabrication (journal) | Physics,Materials_science,Engineering,Biology | 112 |
40,933,519 | https://en.wikipedia.org/wiki/HCentive | hCentive Inc. is Virginia based software development company specializing in cloud-based products for health insurers and state health agencies. The company was Ranked 205 by Deloitte in 2016 under Software category.
It is involved in the development of state and private health insurance marketplaces, known as exchanges, for state organizations, health insurance providers, companies, and third party administrators. The company works with three of the top five health insurance organizations in the US.
About 125 of its employees work at the corporate headquarters; the others are based at its Indian R&D center.
History
hCentive was founded in 2009 by Sanjay Singh, Manoj Agarwala and Tarun Upaday. They started their activities before the Affordable Care Act became law, expecting that it would create a new market. The idea of founding hCentive emerged from concepts of the Affordable Care Act (ACA) while health care reform was being debated in the United States Congress.
Products and services
hCentive's WebInsure State is a cloud-hosted solution for state-based health insurance exchanges. It has been adopted by the states of New York, Colorado and Kentucky for their state health insurance marketplaces.
The company's WebInsure Private Exchange solution is designed specifically for the private market: a cloud-based solution for health plans, TPAs, and dental plans.
Another private market solution, WebInsure Exchange Manager, serves the purpose of integration between health insurance exchange, health insurance providers, and other related parties.
In August 2013, Geisinger Health Plan (GHP), a health insurance provider, began cooperation with hCentive to implement health insurance exchange integration and private exchange solutions.
Awards & Recognitions
Ranked No. 62 on the Deloitte Technology Fast 500 in North America, No. 117 on the Inc. 5000 2014 list and No. 12 on the Washington Business Journal Fast 50 in 2015.
Ranked No. 19 among Virginia-based companies and No. 117 overall on the Inc. 5000 2014 list.
References
External links
Cloud computing providers
Cloud platforms
Software companies based in Virginia
Technology companies established in 2009
Defunct software companies of the United States | HCentive | Technology | 438 |
34,278,093 | https://en.wikipedia.org/wiki/Anatolii%20Goldberg | Anatolii Asirovich Goldberg (, , , April 2, 1930 in Kyiv – October 11, 2008 in Netanya) was a Soviet and Israeli mathematician working in complex analysis. His main area of research was the theory of entire and meromorphic functions.
Life and work
Goldberg received his PhD in 1955 from Lviv University under the direction of Lev Volkovyski. He worked as a docent in Uzhhorod National University (1955–1963), then in Lviv University (1963–1997), where he became a full professor in 1965, and in Bar Ilan University (1997–2008). Goldberg, jointly with Iossif Ostrovskii and Boris Levin, was awarded the State Prize of Ukraine in 1992.
Among his main achievements are:
construction of meromorphic functions with infinitely many deficient values,
solution of the inverse problem of Nevanlinna theory for finitely many deficient values,
development of the integral with respect to a semi-additive measure.
He authored a book and over 150 research papers.
Several things are named after him: Goldberg's examples,
Goldberg's constants, and Goldberg's conjecture.
Selected publications
References
External links
The International conference on complex analysis and related topics dedicated to the 90-th anniversary of Anatolii Asirovich Goldberg (1930-2008)
20th-century Ukrainian Jews
Ukrainian mathematicians
20th-century Israeli Jews
Israeli mathematicians
1930 births
2008 deaths
Complex analysts
Mathematical analysts | Anatolii Goldberg | Mathematics | 298 |
5,259,758 | https://en.wikipedia.org/wiki/FCER1 | The high-affinity IgE receptor, also known as FcεRI, or Fc epsilon RI, is the high-affinity receptor for the Fc region of immunoglobulin E (IgE), an antibody isotype involved in allergy disorders and parasite immunity. FcεRI is a tetrameric receptor complex that binds Fc portion of the ε heavy chain of IgE. It consists of one alpha (FcεRIα – antibody binding site), one beta (FcεRIβ – which amplifies the downstream signal), and two gamma chains (FcεRIγ – the site where the downstream signal initiates) connected by two disulfide bridges on mast cells and basophils. It lacks the beta subunit on other cells. It is constitutively expressed on mast cells and basophils and is inducible in eosinophils.
Tissue distribution
FcεRI is found on epidermal Langerhans cells, eosinophils, mast cells, and basophils. As a result of its cellular distribution, this receptor plays a major role in controlling allergic responses. FcεRI is also expressed on antigen-presenting cells, and controls the production of important immune mediators (cytokines, interleukins, leukotrienes, and prostaglandins) that promote inflammation. The most known mediator is histamine, which results in the five symptoms of inflammation: heat, swelling, pain, redness and loss of function.
FcεRI has been demonstrated in bronchial/tracheal airway smooth muscle cells in normal and asthmatic patients. FcεRI cross-linking by IgE and anti-IgE antibodies led to the release of Th2 cytokines (IL-4, -5, and -13) and the chemokine CCL11/eotaxin-1, and to intracellular calcium ([Ca2+]i) mobilization, suggesting a likely IgE–FcεRI–ASM (airway smooth muscle cell)-mediated link to airway inflammation and airway hyperresponsiveness.
Mechanism of action
Crosslinking of the FcεRI via IgE-antigen complexes leads to degranulation of mast cells or basophils and release of inflammatory mediators. Under laboratory conditions, degranulation of isolated basophils can also be induced with antibodies to the FcεRIα, which crosslink the receptor. Such crosslinking autoantibodies to the FcεRIα, which are potentially pathogenic, have been isolated from human cord blood, suggesting that they occur naturally and are present already at birth. However, their epitope on FcεRIα was masked by IgE, and the affinity of the corresponding autoantibodies found in healthy adults appeared lowered.
See also
Omalizumab
References
External links
Fc receptors
Proteins | FCER1 | Chemistry | 592 |
63,569,293 | https://en.wikipedia.org/wiki/Richard%20Kingsford%20%28ecologist%29 | Richard Kingsford is an environmental/biological expert and river ecologist. Much of his work has been undertaken with the Murray-Darling Basin wetlands and rivers covering approximately 70 percent of the Australian continent. He is the director of the Centre for Ecosystem Science at the University of New South Wales School of Biological, Earth and Environmental Sciences, a member of the Australian Government’s Environmental Flows Scientific Committee. He has received the following awards:
2001: Eureka Award for his research on the ecological values of rivers and the impact of Australia's water resource aridity;
2007: Hoffman medal for his contribution to global wetland science;
2008: Eureka Award for Promoting Understanding of Science;
2015: Eureka Award (as a member of the IUCN Red List of Ecosystems team) for Environmental Research resulting in the establishment of a universal standard for assessing ecosystem risks;
2019: Society for Conservation Biology's Distinguished Service Award, relating to contributions to freshwater/ecosystem conservation;
2020: Fellow of the Royal Society of New South Wales.
Kingsford presented "A Meander Down a River or Two: How Water Defines Our Continent and Its Future" for the second Eric Rolls Memorial Lecture in 2012.
In 2019 the Australian Research Council (ARC) appointed Kingsford as chief investigator in a team to develop a new international standard for the appraisal and reporting of the status of the most crucial wetlands worldwide.
Publications
His most cited publication, Kingsford, Richard Tennant, "Ecological impacts of dams, water diversions and river management on floodplain wetlands in Australia", Austral Ecology 25.2 (2000): 109–127, has been cited 985 times, according to Google Scholar.
References
Australian ecologists
Environmental scientists
Living people
Year of birth missing (living people)
Fellows of the Royal Society of New South Wales | Richard Kingsford (ecologist) | Environmental_science | 356 |
57,613,887 | https://en.wikipedia.org/wiki/Nimbus%202 | Nimbus 2 (also called Nimbus-C) was a meteorological satellite. It was the second in a series of the Nimbus program.
Launch
Nimbus 2 was launched on May 15, 1966, by a Thor-Agena rocket from Vandenberg Air Force Base, California, United States. The spacecraft functioned nominally until January 17, 1969. The satellite orbited the Earth once every 1 hour and 48 minutes, at an inclination of 100°. Its perigee was and its apogee was .
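As an illustrative cross-check (not a figure from the article), Kepler's third law relates the stated 108-minute period to the orbit's size, assuming a circular orbit and standard constants:

```python
import math

# Rough sanity check via Kepler's third law: T^2 = 4*pi^2 * a^3 / GM.
GM_EARTH = 3.986004418e14   # m^3/s^2, standard gravitational parameter
R_EARTH = 6.371e6           # m, mean Earth radius

T = 108 * 60                                        # period in seconds
a = (GM_EARTH * T**2 / (4 * math.pi**2)) ** (1/3)   # semi-major axis
print(f"semi-major axis ~{a/1e3:.0f} km, "
      f"mean altitude ~{(a - R_EARTH)/1e3:.0f} km")
# -> roughly 7,500 km and 1,100 km respectively
```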
Mission
The second in a series of second-generation meteorological research and development satellites, Nimbus 2 was designed to serve as a stabilized, Earth-oriented platform for the testing of advanced meteorological sensor systems and the collecting of meteorological data. The polar-orbiting spacecraft consisted of three major elements: (1) a torus-shaped sensory ring, (2) solar paddles, and (3) the control system housing. The solar paddles and control system housing were connected to the sensory ring by a truss structure, giving the satellite the appearance of an ocean buoy.
Nimbus 2 was nearly tall, in diameter at the base, and about across with solar paddles extended. The sensory ring, which formed the satellite base, housed the electronics equipment and battery modules. The lower surface of the sensory ring provided mounting space for sensors and telemetry antennas. An H-frame structure mounted within the center of the torus provided support for the larger experiments and tape recorders. Mounted on the control system housing, which was located on top of the spacecraft, were Sun sensors, horizon scanners, gas nozzles for attitude control, and a command antenna.
Use of a stabilization and control system allowed the spacecraft's orientation to be controlled to within plus or minus 1° for all three axes (pitch, roll, yaw). The spacecraft carried:
Advanced Vidicon Camera System (AVCS): instrument for recording and storing remote cloud cover pictures
Automatic Picture Transmission (APT): instrument for providing real-time cloud cover pictures
High and Medium Resolution Infrared Radiometers (HRIR/MRIR): for measuring the intensity and distribution of electromagnetic radiation emitted by and reflected from the Earth and its atmosphere
Nimbus 2 and its experiments performed normally after launch until July 26, 1966, when the spacecraft tape recorder failed. Its function was taken over by the HRIR tape recorder until November 15, 1966, when it also failed. Some real-time data were collected until January 17, 1969, when the spacecraft mission was terminated owing to deterioration of the horizon scanner used for Earth reference.
References
Weather satellites of the United States
Spacecraft launched in 1966 | Nimbus 2 | Astronomy | 544 |
5,067,485 | https://en.wikipedia.org/wiki/HD%2088366 | S Carinae (HD 88366) is a variable star in the constellation Carina.
S Carinae is an M-type red giant with a mean apparent magnitude of +6.94. It is approximately 1,620 light years from Earth. Benjamin Apthorp Gould discovered the star's variability in 1871. It appeared with its variable star designation, S Carinae, in Annie Jump Cannon's 1907 work, Second Catalogue of Variable Stars. It is classified as a Mira-type variable star and its brightness varies between magnitude +4.5 and +10.0 with a period of 149.49 days. When it is near its maximum brightness, it is visible to the naked eye. It has one of the earliest spectral types, and hence one of the hottest temperatures, of any Mira variable, and a relatively short period for the class. The temperature of this pulsating star is highest at visual brightness maximum and lowest at visual brightness minimum.
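The stated range of +4.5 to +10.0 implies a large swing in brightness, since the magnitude scale is logarithmic. A minimal sketch of the standard conversion (Pogson's relation; the function name is ours):

```python
# Pogson's relation: a difference of delta_m magnitudes corresponds to a
# brightness ratio of 10**(0.4 * delta_m).

def brightness_ratio(m_faint, m_bright):
    return 10 ** (0.4 * (m_faint - m_bright))

print(round(brightness_ratio(10.0, 4.5)))
# ~158: at maximum the star is roughly 160 times brighter than at minimum
```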
S Carinae has exhausted its core hydrogen and expanded to become a red giant. It has also exhausted its core helium and evolved to the asymptotic giant branch, where it fuses hydrogen and helium in separate shells outside the core.
References
Carina (constellation)
Mira variables
M-type giants
Carinae, S
088366
3999
049751
Durchmusterung objects | HD 88366 | Astronomy | 272 |
40,458,586 | https://en.wikipedia.org/wiki/Cryptopine | Cryptopine is an opium alkaloid. It is found in plants in the family Papaveraceae, including Argemone mexicana.
See also
Allocryptopine
Protopine
References
Natural opium alkaloids
Alkaloids found in Papaveraceae
Benzodioxoles
Alkaloids | Cryptopine | Chemistry | 64 |
56,054,658 | https://en.wikipedia.org/wiki/RU-59063 | RU-59063 is a nonsteroidal androgen or selective androgen receptor modulator (SARM) which was first described in 1994 and was never marketed. It was originally thought to be a potent antiandrogen, but subsequent research found that it actually possesses dose-dependent androgenic activity, albeit with lower efficacy than dihydrotestosterone (DHT). The drug is an N-substituted arylthiohydantoin and was derived from the first-generation nonsteroidal antiandrogen (NSAA) nilutamide. The second-generation NSAAs enzalutamide, RD-162, and apalutamide were derived from RU-59063.
RU-59063 has high affinity for the human androgen receptor (AR) (Ki = 2.2 nM; Ka = 5.4 nM) and 1,000-fold selectivity for the AR over the other nuclear steroid hormone receptors. It shows 3- and 8-fold higher affinity than testosterone for the rat and human AR, respectively, and up to 100-fold higher affinity for the rat AR than the first-generation NSAAs flutamide, nilutamide, and bicalutamide. It also has slightly higher affinity for the AR than DHT and nearly equal affinity to that of the very-high-affinity AR ligand metribolone (R-1881). In addition, RU-59063, unlike testosterone and DHT, shows no specific binding to human plasma.
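The "fold higher affinity" figures quoted above are simple ratios of equilibrium dissociation constants, where a lower Ki means tighter binding. In the sketch below only the 2.2 nM Ki for RU-59063 comes from the text; the reference Ki is a hypothetical placeholder chosen so the ratio comes out to the stated 8-fold.

```python
# Fold affinity = Ki(reference ligand) / Ki(compound); lower Ki = tighter binding.
# Only the 2.2 nM value is from the text; 17.6 nM is a hypothetical placeholder.

def fold_affinity(ki_reference_nm, ki_compound_nm):
    return ki_reference_nm / ki_compound_nm

print(fold_affinity(17.6, 2.2))  # 8.0, i.e. an 8-fold higher affinity
```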
See also
Cyanonilutamide
RU-58642
RU-58841
References
Abandoned drugs
Primary alcohols
Imidazolidines
Ketones
Nitriles
Selective androgen receptor modulators
Thioureas
Trifluoromethyl compounds
Benzonitriles | RU-59063 | Chemistry | 414 |
27,417,419 | https://en.wikipedia.org/wiki/CCIE%20Certification | The Cisco Certified Internetwork Expert, or CCIE, is a technical certification offered by Cisco Systems.
The Cisco Certified Internetwork Expert (CCIE) and Cisco Certified Design Expert (CCDE) certifications were established to assist the industry in distinguishing the top echelon of internetworking experts worldwide and to assess expert-level infrastructure network design skills worldwide. Holders of these certifications are generally acknowledged as having an advanced level of knowledge. The CCIE and CCDE communities have established a reputation for leading the networking industry in deep technical networking knowledge and are deployed in the most technically challenging network assignments. The expert-level certification program continually updates and revises its testing tools and methodologies to ensure and maintain program quality, relevance and value. Through a rigorous written exam and a performance-based lab exam, these expert-level certification programs set the standard for internetworking expertise.
The program is currently divided into six different areas of expertise or "tracks". One may choose to pursue multiple CCIE tracks in several different categories of Cisco technology: Routing & Switching, Service Provider, Security, Collaboration, Data Center, and Wireless.
CCIE Requirements
CCIE candidates must first pass a written exam and then the corresponding hands-on lab exam. Though there are no formal requirements to take a CCIE certification exam, an in-depth understanding of the topics covered by the exams and three to five years of job experience are expected before attempting certification.
There are two test sets for the requirement for certification:
Written Exam: Duration 120 minutes, 90–110 questions with multiple choice and simulation.
Lab Exam: an eight-hour lab exam (one day); previously, CCIE lab exams were two full-day exams.
Details: The CCIE Routing and Switching lab exam consists of a 2-hour Troubleshooting section, a 30-minute Diagnostic section, and a 5-hour and 30-minute Configuration section. The format differs per track.
The eight-hour lab exam tests your ability to configure actual equipment and troubleshoot the network in a timed test situation. You must make an initial attempt at the CCIE lab exam within 18 months of passing the CCIE written exam. Candidates who do not pass must reattempt the lab exam within 12 months of their last scored attempt in order for their written exam to remain valid. If you do not pass the lab exam within three years of passing the written exam, you must retake the written exam before being allowed to attempt the lab exam again.
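The timing rules above compose into a simple eligibility check. The following is an unofficial sketch (not a Cisco tool), approximating months as fixed day counts:

```python
from datetime import date, timedelta

# Unofficial sketch of the rules above: first lab attempt within 18 months of
# the written exam, reattempts within 12 months of the last scored attempt,
# and the written exam expiring for lab purposes after three years.

def written_still_valid(written_passed, lab_attempts, today):
    """lab_attempts: chronologically sorted dates of scored lab attempts."""
    if today > written_passed + timedelta(days=3 * 365):
        return False                       # must retake the written exam
    if not lab_attempts:
        return today <= written_passed + timedelta(days=548)   # ~18 months
    return today <= lab_attempts[-1] + timedelta(days=365)     # 12 months

print(written_still_valid(date(2019, 1, 10), [date(2020, 3, 1)],
                          date(2020, 9, 1)))   # True
print(written_still_valid(date(2019, 1, 10), [],
                          date(2020, 9, 1)))   # False: 18-month window missed
```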
Overview of the CCIE Lab exam
The CCIE lab exam was first introduced in 1993 with a two-day test. As there was an overwhelming demand for CCIE certification, the waiting time for taking the lab exam was at least six months.
In October 2001, Cisco updated the two-day lab exam to one day by removing some of the testing measures like diagramming, IP addressing, and troubleshooting.
When the CCIE lab test was two days:
The first day comprised building the network: patching, IP addressing, and configuration via terminal servers, which verified the candidate's performance across the whole layer 2 and 3 configuration. Before the end of the day, at about 5:15 pm, the proctor would mark the candidate's work and decide whether the candidate could attend the second-day lab exam, which required an 80% pass mark on the first day.
On the second day, the candidate would receive another paper covering more configuration tasks during the morning session.
Configuring the devices correctly required awareness of the core technologies and of various pitfalls, as well as demonstrating basic networking knowledge and practical experience. The proctor would observe the candidate's process and announce, before the end of the day, whether the candidate could attend the afternoon troubleshooting session.
The current CCIE lab exam consists of eight hours within a single day.
There are three sections in the lab test: the troubleshooting module, the diagnostic module, and the configuration module.
The lab examination process evaluates candidates' understanding of the essentials of networking knowledge.
The result of the lab exam is released within 48 hours. Failing score reports incorporate details of the mark in each primary topic area; candidates who pass receive the passing result without a detailed breakdown.
Many lab centers across the US (e.g. in California) have been closed down, leaving only Richardson, Texas open. As of 2020, these are the only available cities, worldwide, in which the lab exam can be taken:
Bangalore, Beijing, Brussels, Dubai, Feltham, Hong Kong, Johannesburg, Krakow, Mexico City, Moscow, Richardson, Riyadh, Sao Paulo, Shanghai, Singapore, Sydney, Tokyo, and Toronto.
Tracks of Expert Certification
There are seven tracks of expert-level certification, designed around the requirements of the industry. The tracks have been updated over time, and the latest tracks should be verified on the official CCIE website.
CCDE : Expert-level network design engineers capable of translating business needs, budget, and operational constraints into the design of a converged solution.
CCIE Enterprise Infrastructure (formerly Routing and Switching, rebranded approx February 2020): The most popular CCIE track; for expert-level network engineers who can plan, operate, and troubleshoot complex, converged network infrastructure.
CCIE Collaboration: Suitable expert level for Collaboration Architects, Unified Communications Architects, and Voice and Video Network Managers.
CCIE Data Center: Experts in the planning, design, implementation and management of complex modern IT data center infrastructure.
CCIE Security: Concerned with modern security risks, threats, and vulnerabilities; for security experts who have the knowledge and skills to architect, engineer, implement, troubleshoot, and support the full suite of Cisco security technologies and solutions using the latest industry best practices to secure systems and environments.
CCIE Service Provider: Expert-level ISP (Internet Service Provider) network engineers who can build extensible Service Provider infrastructure to deliver rich managed services.
CCIE Wireless: Experts who are able to demonstrate broad theoretical knowledge of wireless networking and a solid understanding of wireless local area networking (WLAN) technologies from Cisco.
CCIE Emeritus
Active CCIE holders are able to apply for CCIE Emeritus status when they reach their tenth anniversary of CCIE certification. Lifetime Emeritus tenure is available to candidates who maintain their CCIE Active or Emeritus status for 20 consecutive years. The CCIE Emeritus application must be submitted within 60 days of the certification's expiry date.
CCIE Emeritus status generally applies to those who have moved out of "day to day" network and technical work but would like to stay involved in the CCIE program serving as ambassadors to current and future CCIE programs.
A CCIE Emeritus status holder is a non-active CCIE holder, but such candidates are recognized for technical proficiency and long-term standing within the CCIE program.
CCIE Emeritus holders have the opportunity to re-enter active CCIE status by taking any current CCIE-level written exam.
Continuing Education Program
The Continuing Education Program is an alternative way of re-certification for the CCIE license instead of passing the written exam.
This program is managed according to three core principles: Flexibility, Diversity and Integrity.
Flexibility: Allowing individuals a wide range of options for the re-certification.
Diversity: Several options of training: online courses, instructor-led training and authoring of content.
Integrity: Achieved by having Cisco authorized content providers with re-certification requirements validate the credits submitted by an individual.
Relevance to real-world situations
With the rise of software-defined networking as a new approach to networking, some critics of CCIE certifications charge that the written and lab exams test corner-case scenarios. However, the principles of networking remain the same: regardless of whether a router or a switch is a physical device or just a piece of software, the fundamental networking operations are unchanged in real-world internetworking. CCIE holders therefore remain relevant in the software-defined networking era.
See also
CCNA
CCNP
Cisco certifications
References
Career Certifications
Information technology qualifications | CCIE Certification | Technology | 1,644 |
24,006,624 | https://en.wikipedia.org/wiki/Cymbeline%20%28radar%29 | Radar, Field Artillery, No 15, better known as Cymbeline, was a widely used British mortar locating radar operating in the I band using a Foster scanner. Developed by Thorn-EMI and built at their now-defunct site at Hayes in Middlesex, it was in British service from 1975 until about 2003 with the Royal Artillery.
Cymbeline replaced Green Archer in British service, but in larger numbers and across more units, including the Territorial Army. Cymbeline came in Mk 1 and Mk 2 versions, the difference being their mobility; Mk 1 was on a lightweight two wheel trailer whereas Mk 2 was mounted on top of an FV432 armoured carrier. The Mk 1 was transportable underslung by a helicopter.
Cymbeline was more accurate than Green Archer and a lot more mobile. Although mortar locating was its primary role, it was capable of various other tasks.
Description
The basic Cymbeline was a single unit on a turntable stand with four adjustable legs for levelling. The main elements were the antenna, the Foster scanner, the main electronic unit and a Wankel rotary generator (using two-stroke fuel) weighing 390 kg. This was strapped to a simple 590 kg trailer that it could be operated from. Off the trailer the radar was 2.29 metres high with the antenna up, 1.07 metres with antenna folded, 1.7 metres wide and 1.5 metres long.
The radar itself had a nominal peak output of 100 kW and a nominal pulse repetition frequency of 4000 pulses/second.
The radar was connected by cable to the operator’s console (‘Indicator Azimuth Unit’) and connected to this was the ‘Mortar Coordinate Indicator’ that displayed the located mortar’s position to the detachment commander.
The Foster scanner converted a narrow radar beam, about 40 mils diameter, into one 720 mils wide and 30 mils high. This beam had five pre-set operating positions determined by five different radar horns that directed the beam onto the antenna at slightly different angles. Two beam positions were used with a mortar bomb being recorded by the operator as it went through each beam. This gave two coordinates. The operator marked his screen at each bomb position and changed the beam angle. He then placed electronic crosshairs over his marks (which represented the bomb’s position in the horizontal plane) which the analogue computer used to calculate the mortar coordinates using the expected mortar altitude that had been previously set.
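The extrapolation the analogue computer performed can be sketched in idealized form: given two timed fixes on the ascending branch of the bomb's flight and the pre-set expected mortar altitude, a drag-free ballistic arc is projected back to the launch point. This is a reconstruction under simplifying assumptions (no air drag, Cartesian fixes already derived from the beam geometry), not Cymbeline's actual computation:

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def locate_mortar(fix1, fix2, mortar_alt):
    """fix1, fix2: (t, x, y, z) bomb positions (seconds, metres), fix1 earlier."""
    t1, x1, y1, z1 = fix1
    t2, x2, y2, z2 = fix2
    dt = t2 - t1
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt      # horizontal velocity: constant
    vz1 = (z2 - z1) / dt + 0.5 * G * dt          # vertical velocity at fix1
    # Solve z1 + vz1*s - 0.5*G*s**2 = mortar_alt for s = t - t1;
    # the smaller root is the (negative) launch time relative to fix1.
    disc = vz1**2 + 2 * G * (z1 - mortar_alt)
    s_launch = (vz1 - math.sqrt(disc)) / G
    return x1 + vx * s_launch, y1 + vy * s_launch

# Synthetic bomb fired from (0, 0) at 100 m altitude, 70 m/s up, 20 m/s east:
fix_a = (2.0, 40.0, 0.0, 100 + 70 * 2.0 - 0.5 * G * 2.0**2)
fix_b = (3.0, 60.0, 0.0, 100 + 70 * 3.0 - 0.5 * G * 3.0**2)
print(locate_mortar(fix_a, fix_b, mortar_alt=100.0))  # ~(0.0, 0.0)
```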
Setting-up the radar involved orientating it in a known azimuth (the basic mounting could cover an arc of 4800 mils) and setting the datum beam elevation between -90 and 360 mils so that it was above the radar horizon. Other beam positions, 25, 40, 45, 65 and 90 mils were relative to this datum. The lowest beam position was used to alert the operator that a bomb was in flight and where to expect it on his screen; once alerted he tilted the beam into the first position, waited for it to appear, and then switched to the next angle. The angles were pre-selected according to the local circumstances. Data memory was also available, although in British service this was normally only fitted to Cymbeline Mk 2; in essence it recorded the detected mortar bomb signal and allowed it to be replayed.
Cymbeline could locate a mortar with an accuracy of 50 metres. 81 mm mortars could be located at up to 10 km and larger mortars to 20 km. Secondary roles were area and coastal surveillance, helicopter and light aircraft control, meteor balloon tracking and rapid survey. It could also observe and adjust ground and airburst artillery fire and high angle fire.
In British service each radar held a spare main electronic unit, spare generator and spare displays. The radar includes an integral simulator for operator training and practice and built-in test functionality.
Cymbeline Mk 2 was mounted on an FV432. The normal hatch covers over the rear compartment were replaced by a circular radar mounting, with three 'supports' for the radar turntable. The radar could be rotated up to 12,800 mils. Prototypes used hydraulic supports to level the radar but the production version used mercury in the same manner as Green Archer Mk 2 had. When the taps were opened and the radar unclamped the mercury levelled itself and the radar floated on it. There was some internal furniture for the operator and the displays, as well as racking for the spare assemblies, and a simple hoist to lift the assemblies up to the radar. For movement the antenna was lowered and the whole radar surrounded by a folding grille attached to decking around the radar to protect it.
Operational history
In British service from 1975 until 2003, the radars were used in the Falkland Islands in 1982, the Gulf War in 1991 and in the Balkans, where 5 Cymbelines were deployed to Sarajevo in 1994/5 as part of the BRITFOR contribution to UNPROFOR, to locate 'heavy weapons' violations.
The radars were organized in troops; originally these were in each field regiment (equipped with Abbot) and light regiment (equipped with L118 Light Gun) and were designated 'G Troop' in each regiment apart from the radar troop in Paderborn (25 Fd Regt RA, 3 RHA, 45 Fd Regt RA) who were designated as "I" troop. The field regiments had Cymbeline Mk 2, the light regiments had Cymbeline Mk 1.
In British service a radar troop normally comprised a radar command post, three radars, three listening posts (LPs) and a reconnaissance party led by the troop sergeant major. Radars and LPs were normally commanded by sergeants. The task of the LPs was to report mortars firing and the area they were in. Radars were only switched on in response to reports of hostile mortar fire, and so thereby avoiding continuous transmission in order to minimise the risk of detection. Radars reported the hostile mortar locations to the radar command post, from where they were reported to the Brigade Artillery Intelligence Officer and regimental headquarters for rapid counter-fire.
The troop's maintenance section, with radar technicians deployed with the radar command post and attended radars as necessary. The troop commander was the Brigade Artillery Intelligence Officer and with the troop's artillery intelligence section was at brigade headquarters.
In the late 1970s some troops were removed from regiments and concentrated into specialist batteries.
Four sets went to China, via the US, in 1979. Two may have been retained for study by Chinese radio engineers. The others were used in the border conflicts between China and Vietnam, with great effectiveness. These were such a disruption to the Vietnamese that in 1984 a special forces operation at the Battle of Laoshan was organised to destroy one of them. China eventually produced a reverse engineered version designated as Type 371.
Variants
Thorn-EMI developed a commercial Mk 3 version with an electronically phase-scanned antenna replacing the Foster scanner. There was also a larger version, code-named Cervantes, developed by RSRE and Thorn-EMI and intended for use by the British Army. It made use of modern electronics, had a larger antenna, far greater range and some capability to locate guns and rocket launchers; however, the development effort was eventually abandoned in 1986 in favour of the multi-national COBRA radar, as part of a general hiatus in defence procurement under Heseltine.
Operators
France: The French Army operated radars captured in the Gulf War.
India: Produced under license by Bharat Electronics Ltd for the Indian Army
Notes
References
Weapon locating radar
Military radars of the United Kingdom | Cymbeline (radar) | Technology | 1,532 |
47,490,492 | https://en.wikipedia.org/wiki/Logitech%20Harmony | Logitech Harmony is a line of remote controls and home automation products produced by Logitech. The line includes universal remote products designed for controlling the components of home theater systems (including televisions, set-top boxes, DVD and Blu-ray players, video game consoles) and other devices that can be controlled via infrared, as well as newer smart home hub products that can be used to additionally control supported Internet of things (IoT) and Smart home products, and allow the use of mobile apps to control devices. On April 10, 2021, Logitech announced that they would discontinue Harmony Remote manufacturing.
History
The Harmony remote control was originally created in 2001 by Easy Zapper, a Canadian company, and first sold in November 2001. The company later changed its name to Intrigue Technologies and was located in Mississauga, Ontario, Canada. Computer peripheral manufacturer Logitech acquired it in May 2004 for US$29 million, turning Harmony remotes into a worldwide phenomenon.
In April 2021, Logitech announced the decision to discontinue the manufacturing of Harmony remotes. Any remaining Harmony remote inventory will continue to be available through retailers for new customers, and support will continue to be offered.
Features
All Harmony remotes are set up online using an external configuration software. For all models this can be done using a computer running Microsoft Windows or MacOS to which they need to be connected via USB cable; the Elite and Ultimate models can also be configured wirelessly using a smartphone app for Android or iOS.
Each remote has infrared (IR) learning capability (some later models also include RF support), and can upload information about a new remote to an online device database. 5000+ brands of devices were supported.
All Harmony remotes support one-touch activity based control, which allows control of multiple devices at once. For example, a home theater setup might include a TV, a digital set top box and a home theater sound system. Pressing the 'Watch TV' activity button on the remote will turn on the TV, turn on the digital set top box, turn on the sound system, switch the input of the TV to the digital set top box and switch the input of the sound system to the set top box. In addition, the volume buttons would be mapped to the sound system, the channel buttons would be mapped to the digital set top box, and other controls to the most appropriate system component for the activity. The remote would track which devices were powered on or off and which inputs devices had previously been switched to, allowing it to transition the devices from one activity to another without sending redundant or incorrect commands.
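The state tracking described above amounts to computing a difference between the believed current device states and the states the new activity requires. The sketch below is illustrative only; the device names and the state representation are ours, not Harmony's internal model:

```python
# Sketch of activity switching: send only the commands needed to move from
# the believed current state to the target activity's state.

def transition(current_state, activity):
    """Return (device, command) pairs to reach `activity` from `current_state`.

    Both arguments: {device: {"power": bool, "input": str | None}}.
    Devices absent from the activity are powered off.
    """
    commands = []
    for device, want in activity.items():
        have = current_state.get(device, {"power": False, "input": None})
        if want["power"] and not have["power"]:
            commands.append((device, "power_on"))
        if want.get("input") and want["input"] != have.get("input"):
            commands.append((device, f"input_{want['input']}"))
    for device, have in current_state.items():
        if have["power"] and device not in activity:
            commands.append((device, "power_off"))
    return commands

watch_tv = {
    "tv":       {"power": True, "input": "hdmi1"},
    "set_top":  {"power": True, "input": None},
    "receiver": {"power": True, "input": "stb"},
}
# TV already on but showing the game console; everything else off:
state = {"tv": {"power": True, "input": "hdmi2"},
         "console": {"power": True, "input": None}}
print(transition(state, watch_tv))
# [('tv', 'input_hdmi1'), ('set_top', 'power_on'),
#  ('receiver', 'power_on'), ('receiver', 'input_stb'), ('console', 'power_off')]
```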
Harmony Remote software
The remote software allows users to update the remote configuration, learn IR commands, and upgrade the remote control's firmware.
Early versions of the Harmony Remote software required a web browser; newer versions are Java-based. The software requires constant Internet connectivity while programming the remote, as remote control codes are downloaded from Logitech. This method allows updates to the product database, remote codes, and macro sequences to be easily distributed. This also allows Logitech to survey their market in order to determine products for investigation and research. Harmony control software is available for Microsoft Windows and Mac OS X. A group of developers was working on Harmony Remote software for the Linux operating system; the latest available release was dated August 2010.
On March 31, 2010, Logitech launched a new website called "My Harmony" for setting up several later Harmony remote controls.
Products
Harmony 650/665
The lowest-cost version of the Harmony remote to contain a display screen (a color display). It can be programmed with multiple activities and up to 8 devices.
Harmony Express
The Express uses Amazon Alexa for navigation, via a smaller, distinct remote. It is the only Harmony remote that supports voice-activated search.
Harmony Hub
This device is not a remote, but rather a hub that can control IR and Bluetooth devices, as well as certain smart home devices (e.g. Philips Hue, Nest thermostat). It is controlled by certain Harmony remotes as well as iOS/Android-based apps, and more recently Alexa can control certain functions. By itself, it can control up to 8 home theater devices and a number of home automation devices. Many of the current products include this hub along with a remote. It replaces the older Harmony Ultimate Hub, Harmony Home Hub and Harmony Link devices.
Harmony Smart Control
Includes a Harmony Home Hub and a simple remote control that contains three activity buttons used to activate up to 6 different activities. The simple remote lacks a display screen, and can also be purchased separately for those who already own a Harmony Home Hub. Supports up to 8 devices.
Harmony Companion (formerly Harmony Home Control)
Like the Harmony Smart Control described above, but the included simple remote also contains home automation controls. Like the Smart Control simple remote, the included remote lacks a display screen, but unlike the Smart Control remote it cannot be purchased separately by Home Hub owners.
Harmony Smart Keyboard
This includes the Harmony Hub along with a keyboard containing a built-in touchpad. The keyboard appears to be like Logitech's previous K400 keyboard and touchpad combo, except some of the keys and buttons have been replaced with others more useful to a home theater remote, and two numbered, Harmony-specific USB receivers are included. It lacks a display screen, supports three activities (Watch a Movie, Watch TV, and Listen to Music), and can also be purchased as an add-on accessory for Harmony Home Hub owners. It controls up to 8 devices.
Harmony Touch
The Harmony Touch remote control contains a full-color display screen with touch functionality. It is an IR remote that supports up to 15 devices and multiple activities. It lacks dedicated physical buttons for home automation control. This remote can be added to a Harmony Hub for additional functionality. No longer available from the manufacturer, but still available via retail.
Harmony 950/Harmony Elite/Harmony Pro
The current top-of-the-range Harmony available via retail. The Harmony 950 is a redesigned version of the Harmony Ultimate One with the addition of dedicated physical buttons for home automation control. Other changes include the media transport control buttons being relocated to a more ergonomic location, and the addition of a user-accessible battery compartment. This remote can be added to a Harmony Hub for additional functionality. The Harmony Elite is a bundle containing both the Harmony 950 remote control and the Harmony Hub. The Harmony Pro is the Harmony Elite bundle sold for professional installers.
Harmony Pro 2400
The Pro 2400 is the only Harmony product that includes a hub with an Ethernet port, as well as Power over Ethernet (PoE) support. The hub is significantly wider and comes with a detachable directional antenna. It also has six 3.5 mm jacks for IR sensors (versus two 2.5 mm jacks on other Hub products). It uses the Elite remote, and is only available through professional installers.
Harmony 350
The lowest-budget version of the Harmony remote, which can control up to 8 devices in particular categories, and supports only one activity: Watch TV. Unlike most current and former products in the Harmony line, this model lacks a display screen.
Harmony Ultimate One/Harmony Ultimate
The Harmony Ultimate One remote control is a revised version of the Harmony Touch adding motion-activated back-lit keys, eyes-free gesture control, tilt sensor and vibration feedback. This remote can be added to a Harmony Hub for additional functionality. The Harmony Ultimate is a bundle containing both the Harmony Ultimate One remote control and the Harmony Hub but this pack is no longer available from the manufacturer, but still available via retail.
Harmony Ultimate Home
Includes the Harmony Home Hub and a remote similar to the above described Harmony Ultimate One. The package includes four IR emitters, the remote, the hub, and two IR extenders that plug into the hub. Pressing a button on the included remote or any add-on remote will first communicate with the hub, then the hub will tell one of the four IR emitters based on configuration (including the IR emitter on the remote) to transmit the command. Harmony Ultimate Home also contains home automation controls, unlike the Ultimate One. The remote can't be purchased separately for Home Hub owners, unlike most of the other remotes that include it. It supports a maximum of 15 devices.
Harmony Link
A device which utilizes a mobile app as a remote to control devices within the room. It has since been succeeded by the newer Harmony Hub product, which also supports controlling Smart home products. On November 8, 2017, Logitech announced that it would end support for the Harmony Link and make the devices inoperable after March 18, 2018, citing an expired security certificate for a component in the platform. Following criticism of Logitech's originally-announced plan to do so for users whose devices were still under warranty, Logitech announced on November 10, 2017, that it would exchange all Harmony Links for Harmony Hubs free-of-charge, regardless of warranty status.
Harmony 500 series
The Harmony 500 remotes are mid-range remotes similar in functionality to the Harmony 659 and 670, but with different button arrangements and a squared-off physical design compared to the hourglass design of the 6xx series. Compared to today's offerings, these remotes offered control of up to 15 devices at an affordable price. The remotes have a back-lit monochrome LCD screen. The 500 series appears to have been discontinued entirely.
Harmony for Xbox 360
While it is marketed for the Xbox 360 segment, this remote is effectively part of the 5xx series and runs the same software. The Harmony 360 is pre-configured for use with the Xbox 360 console, and has special buttons (X, Y, A, B and media center control) matching those found on native Xbox controllers. It has a back-lit LCD screen and uses four AAA batteries.
The hardware layout is mostly the same as the 550's. The extra up/down arrows of the 550 are removed to make room for the colored X, Y, A and B buttons beneath the play and pause rows. This makes it the remote in the 500 series with the most hardware buttons, 54 (counting the four direction arrow keys). It can control up to 12 devices.
Harmony 510/515
The Harmony 510/515 is an entry-level remote that is essentially a replacement for the 500 series and the Xbox 360 version. It has the same number of buttons as the 525 and features the colored buttons typical of most satellite boxes. It has a four-button, monochrome LCD display. Like its mid-range cousins, the 520 and 550, it has no recharge pod and uses AAA batteries instead. Unlike previous 500 series models, these newer models are limited in software to controlling up to 5 devices, yet sell for the same prices.
Difference between 510/515:
The 510 is black; the 515 is silver.
Harmony 520/525
The Harmony 520 is a mid-range remote that is similar in functionality to the Harmony 659 and 670, but with a different button arrangement and a squared-off physical design compared to the hourglass design of the 6xx series. It has a blue back-light and monochrome LCD screen. These 5xx models are equipped with an infrared learning port to learn IR signals of unsupported or unknown devices. By pointing an original remote control at the Harmony's learning port, it is able to copy and reproduce those codes and, in the case of supported devices, it is able to figure out what the remote is used to control and imports that device. They require 4 AAA batteries. A mini USB port is used to connect these to a computer for programming.
Difference between 520/525:
The 525 has 50 buttons, while the 520 has 46; the 520 lacks the red, green, yellow and blue colour buttons commonly used for things like teletext and PVR control. Apparently, the 520 is the American model, while the 525 is the European one. The 520 and 525 can control up to 12 and 15 devices respectively.
Harmony 550/555
The Harmony 550/555 remotes are variants of the 525. Compared to the 525, the 550 and 555 have two extra buttons, and are made of higher-grade materials with different colors. The 550 and 555 models both have a Sound and a Picture button that changes the button mapping on the remote, allowing the same physical buttons to be reused for a different set of functions. Both have 52 buttons.
Difference between 550/555:
The 550 and 555 have the same number and placement of buttons, just with different mapping. The 555 has the same colour buttons as the 525; the 550 does not, and instead has the following extra functions: up arrow, down arrow, A and B buttons. The 555 has an orange back-light, the 550 a blue one.
Harman/Kardon TC 30
The Harman/Kardon TC 30 appears to be a redesigned, rebranded Harmony 52x with a cradle and a color LCD. The LCD shows eight items compared to the four of the rest of the Harmony 5xx series. Images exist of the TC 30 both with and without the teletext colour buttons, which might mean that there is one version based on the 520 and one based on the 525. The key layout is identical to the 52x remotes. It seems to require different software from the Logitech-branded remotes; however, this software can currently be downloaded from Logitech via harmonyremote.com.
Harmony 610
The Harmony 610 is functionally identical to the Harmony 670 and Harmony 620, but comes in black with a silver face panel. The 610 can control a maximum of 5 devices.
Harmony 620
The Harmony 620 is functionally identical to the Harmony 670, but comes in black instead of silver/black. The 670 can control up to 15 devices, whereas the 620 can only control 12.
Harmony 659
The Harmony 659 is another mid-range universal remote that offers most of the functionality in the Harmony line. It has a monochrome LCD screen.
Harmony 670
The Harmony 670 is a mid-range universal remote that offers most of the functionality in the Harmony line. The 670 has a monochrome LCD screen and puts DVR functions in the middle of the remote. Logitech has discontinued this product.
Harmony 680
The Harmony 680 is a mid-range, computer-programmable universal remote. The 680 has a back-lit monochrome LCD screen and Media-PC-specific buttons. Unlike many newer Harmony remotes, the 680 is able to control up to 15 devices.
Harmony 688
The Harmony 688 (no longer produced) was a mid-range, computer-programmable universal remote. The 688 has a monochrome LCD screen back-lit by an electroluminescent sheet (blue in color).
Harmony 720
The Harmony 720 was initially offered exclusively through Costco in 2006 and featured a color screen and backlit keys. It was designed as an inexpensive competitor to the earlier Harmony 880, with few differences, except for the ergonomic design and key layout. It is now available through other vendors, but remains unlisted on Logitech's product page.
The Harmony 720 remote is closely related to the 500 series, as it has a square shape and a layout akin to those remotes. Compared to the 525, the same buttons appear above the LCD. The 720 has a colour LCD with six buttons/activities instead of four. The eight play/stop etc. buttons have been moved to the lower part. The Mute and Prev buttons have been moved and in their place there are extra up and down buttons, the same as on the 550. Compared to the 500 series, the glow button has been removed. These remotes do not have the Sound and Picture buttons to change key mappings, as the 550/555 remotes do. Lacking the red, green, yellow and blue colour buttons, the 720 has 49 buttons. It can control up to 12 devices.
Harmony 768
The Harmony 768 is a capsule-shaped remote with a backlit LCD screen; it was available in silver, blue or red. It has 32 buttons, as well as a clickable thumb-wheel to scroll through and select activities.
Harmony 785
The Harmony 785 is nearly identical to the 720. While the 720 has 49 buttons, the 785 has 53; the extra buttons are the red, green, yellow and blue colour buttons commonly used for things like teletext and PVR control. These are located above the number buttons, which are placed further down compared to the 720. Another difference is that the 785 can control up to 15 devices.
Harmony 880/885
The Harmony 880 was the first Harmony with a color LCD screen and a rechargeable battery. The Harmony 885 remote has extra buttons as mentioned below. The 885 replaces up and down keys with four color keys used for Teletext and, more recently, by some set-top boxes.
There was a short-lived 880Pro that had the picture and sound buttons. This remote did not feature multi-room/multi-controller support like the 890Pro.
Difference between 880/885:
The 885 has the red, green, yellow and blue colour buttons commonly used for things like teletext and PVR control. These four buttons occupy the same space where the 880 has two selection buttons (up arrow, down arrow).
Harmony 890/895
The Harmony 890/895 is the same as the 880/885, but adds radio frequency (RF) capability, enabling the remote to control devices without line-of-sight, even from different rooms, up to a range of 30 meters. This remote control cannot control proprietary RF devices, but it can control special Z-Wave RF devices, as well as IR devices without line-of-sight via the RF extender.
The 890Pro adds multi-room and multi-controller support, as well as a different color scheme. (Primary and secondary remotes can be set up that work with the same wireless extender) It also adds two buttons — picture and sound — that allow for quick access to picture- and sound-related commands. It is not listed on the Logitech Web site and is sold through custom installation companies. The 890Pro is not shipped with the RF extender.
Harmony 1000
The Harmony 1000 has customizable touch screen commands, sounds and a rechargeable battery, and allows control up to 15 devices. It is also compatible with the RF extender. A maximum of two extenders can be configured within the software.
Harmony 300
This universal remote supports one activity (Watch TV) and controls up to 4 devices. It supports customizable keys for remote features and favorite channels. This remote has no LCD and, like the discontinued 500 series mid-range models, no battery charging pod. It requires two AA batteries.
Harmony 300i
Similar to the Harmony 300, but has a glossy finish rather than a matte finish.
Harmony 600
Support for up to 5 devices. Monochrome display. Requires 2 AA batteries.
Harmony 700
Support for up to 6 devices. Color display. Rechargeable AA batteries via USB.
Harmony One
The Harmony One features a color touch screen and is rechargeable. It does not offer any RF capability. A CNET TV review stated that it was one of the best universal remotes on the market at the time.
Harmony 900
The Harmony 900 has the same ergonomic design as the Harmony One, adding four colour buttons and RF support. The RF technology used by the Harmony 900 is not compatible with the Harmony 890, 1000, and 1100. The Harmony 900 and 1100 models do not support "sequences" (Logitech parlance for macros).
Harmony 1100
Adds QVGA resolution to the touch screen and allows 15 devices to be controlled.
The user interface of the Harmony 1100 is Flash-based, versus the Java-based interface found in the Harmony 1000.
Accessories
E-R0001
The Harmony E-R0001 is an IR to Bluetooth adapter for the PS3.
RF Wireless Extender
The Harmony RF Wireless Extender allows some Harmony remotes, e.g., models 890, 1000 and 1100, to control devices using radio frequencies instead of infrared, with longer range than infrared and no need for line-of-sight transmission. The Harmony 1000 can use two RF Extenders, while the 1100 can use multiple extenders.
IR Extender System
The Harmony IR Extender System has an IR blaster and a set of mini blasters, and does not require programming. It is manufactured by Philips and rebadged.
See also
Universal Remote Controls - General Article on Universal Remote Controls.
JP1 remote - Universal Electronics/One For All range of programmable remotes
References
External links
Harmony at Logitech.com
Assistive technology
Remote control
Smart home hubs
Harmony | Logitech Harmony | Technology | 4,296 |
39,490,884 | https://en.wikipedia.org/wiki/Liberation%20%28pharmacology%29 | Release (Liberation) is the first step in the process by which medication enters the body and liberates the active ingredient that has been administered. The pharmaceutical drug must separate from the vehicle or the excipient that it was mixed with during manufacture. Some authors split the process of liberation into three steps: disintegration, disaggregation and dissolution. A limiting factor in the absorption of pharmaceutical drugs is the degree to which they are ionized, as cell membranes are relatively impermeable to ionized molecules.
The characteristics of a medication's excipient play a fundamental role in creating a suitable environment for the correct absorption of a drug. This can mean that the same dose of a drug in different forms can have different bioequivalence, as they yield different plasma concentrations and therefore have different therapeutic effects. Dosage forms with modified release (such as delayed or extended release) allow this difference to be usefully applied.
Dissolution
In a typical situation, a pill taken orally will pass through the oesophagus and into the stomach. As the stomach has an aqueous environment, it is the first place where the pill can dissolve. The rate of dissolution is a key element in controlling the duration of a drug's effect. For this reason, different forms of the same medication can have the same active ingredients but different dissolution rates. If a drug is administered in a form that is not rapidly dissolved, the drug will be absorbed more gradually over time and its action will have a longer duration. A consequence of this is that patients will comply more closely with a prescribed course of treatment if the medication does not have to be taken as frequently. In addition, a slow-release system will maintain drug concentrations within a therapeutically acceptable range for longer than quicker-releasing delivery systems, as these result in more pronounced peaks in plasma concentration.
The dissolution rate is described by the Noyes–Whitney equation:

$$\frac{dm}{dt} = \frac{D A \,(C_s - C)}{L}$$

Where:
$\frac{dm}{dt}$ is the dissolution rate.
$A$ is the solid's surface area.
$C$ is the concentration of the solid in the bulk dissolution medium.
$C_s$ is the concentration of the solid in the diffusion layer surrounding the solid.
$D$ is the diffusion coefficient.
$L$ is the thickness of the diffusion layer.
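A small numerical illustration of the Noyes–Whitney relation (all parameter values below are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical values, purely to illustrate the Noyes-Whitney relation.
D  = 7.0e-6   # diffusion coefficient (cm^2/s) -- assumed
A  = 2.5      # surface area of the dissolving solid (cm^2) -- assumed
Cs = 1.0e-3   # concentration in the diffusion layer, ~solubility (g/cm^3)
C  = 2.0e-4   # concentration in the bulk dissolution medium (g/cm^3)
L  = 5.0e-3   # diffusion layer thickness (cm) -- assumed

rate = D * A * (Cs - C) / L  # dm/dt, in g/s
print(f"dissolution rate = {rate:.3e} g/s")
# Doubling the surface area A doubles the rate; halving L doubles it too,
# which is why micronized (finely ground) formulations dissolve faster.
```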
A drug administered as a solution is already in a dissolved state, so it does not have to go through a dissolution stage before absorption begins.
Ionization
Cell membranes present a greater barrier to the movement of ionized molecules than non-ionized liposoluble substances. This is particularly important for substances that are weakly amphoteric. The stomach's acidic pH and the subsequent alkalization in the intestine modify the degree of ionization of acids and weak bases depending on a substance's pKa. The pKa is the pH at which a substance is present at an equilibrium between ionized and non-ionized molecules. The Henderson–Hasselbalch equation, $pH = pK_a + \log_{10}\left(\frac{[A^-]}{[HA]}\right)$, relates the pH and the pKa to the ratio of ionized to non-ionized molecules.
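A short sketch of how the Henderson–Hasselbalch equation predicts the ionization of a weak acid at different pH values (the aspirin pKa of about 3.5 is a textbook figure, used here only as an example):

```python
def fraction_ionized_weak_acid(pH, pKa):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA]).
    Returns the ionized fraction [A-] / ([A-] + [HA])."""
    ratio = 10 ** (pH - pKa)   # [A-]/[HA]
    return ratio / (1 + ratio)

# A weak acid with pKa ~ 3.5 (e.g. aspirin) is mostly non-ionized in the
# acidic stomach (pH ~ 2) and almost fully ionized in the intestine
# (pH ~ 6.5), which affects where it crosses cell membranes most readily.
for pH in (2.0, 6.5):
    print(f"pH {pH}: ionized fraction = "
          f"{fraction_ionized_weak_acid(pH, 3.5):.3f}")
```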
See also
Absorption (pharmacokinetics)
ADME
Bioequivalence
Distribution (pharmacology)
Elimination (pharmacology)
Generic drugs
Metabolism
Pharmacodynamics
Pharmacokinetics
Pharmacy
References
Pharmacokinetics | Liberation (pharmacology) | Chemistry | 648 |
696,955 | https://en.wikipedia.org/wiki/Antisymmetric%20tensor | In mathematics and theoretical physics, a tensor is antisymmetric on (or with respect to) an index subset if it alternates sign (+/−) when any two indices of the subset are interchanged. The index subset must generally either be all covariant or all contravariant.
For example,
$$T_{ijk\dots} = -T_{jik\dots} = T_{jki\dots} = -T_{kji\dots} = T_{kij\dots} = -T_{ikj\dots}$$
holds when the tensor is antisymmetric with respect to its first three indices.
If a tensor changes sign under exchange of each pair of its indices, then the tensor is completely (or totally) antisymmetric. A completely antisymmetric covariant tensor field of order may be referred to as a differential -form, and a completely antisymmetric contravariant tensor field may be referred to as a -vector field.
Antisymmetric and symmetric tensors
A tensor A that is antisymmetric on indices $i$ and $j$ has the property that the contraction with a tensor B that is symmetric on indices $i$ and $j$ is identically 0.
For a general tensor U with components $U_{ijk\dots}$ and a pair of indices $i$ and $j$, U has symmetric and antisymmetric parts defined as:

$U_{(ij)k\dots} = \frac{1}{2}(U_{ijk\dots} + U_{jik\dots})$  (symmetric part)
$U_{[ij]k\dots} = \frac{1}{2}(U_{ijk\dots} - U_{jik\dots})$  (antisymmetric part)

Similar definitions can be given for other pairs of indices. As the term "part" suggests, a tensor is the sum of its symmetric part and antisymmetric part for a given pair of indices, as in
$$U_{ijk\dots} = U_{(ij)k\dots} + U_{[ij]k\dots}.$$
Notation
A shorthand notation for anti-symmetrization is denoted by a pair of square brackets. For example, in arbitrary dimensions, for an order 2 covariant tensor M,
$$M_{[ab]} = \frac{1}{2!}(M_{ab} - M_{ba}),$$
and for an order 3 covariant tensor T,
$$T_{[abc]} = \frac{1}{3!}(T_{abc} - T_{acb} + T_{bca} - T_{bac} + T_{cab} - T_{cba}).$$
In any 2 and 3 dimensions, these can be written as
$$M_{[ab]} = \frac{1}{2!}\,\delta_{ab}^{cd} M_{cd},$$
$$T_{[abc]} = \frac{1}{3!}\,\delta_{abc}^{def} T_{def},$$
where $\delta_{ab\dots}^{cd\dots}$ is the generalized Kronecker delta, and the Einstein summation convention is in use.
More generally, irrespective of the number of dimensions, antisymmetrization over $p$ indices may be expressed as
$$T_{[a_1 \dots a_p]} = \frac{1}{p!} \delta_{a_1 \dots a_p}^{b_1 \dots b_p} T_{b_1 \dots b_p}.$$
In general, every tensor of rank 2 can be decomposed into a symmetric and anti-symmetric pair as:
$$T_{ij} = \frac{1}{2}(T_{ij} + T_{ji}) + \frac{1}{2}(T_{ij} - T_{ji}).$$
This decomposition is not in general true for tensors of rank 3 or more, which have more complex symmetries.
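A brief NumPy sketch (illustrative only) implementing the antisymmetrization formula above, then checking the rank-2 decomposition and the vanishing contraction of an antisymmetric tensor with a symmetric one:

```python
import itertools
import math
import numpy as np

def antisymmetrize(T):
    """T_[a1...ap] = (1/p!) * sum over permutations s of sign(s) * T_s."""
    p = T.ndim
    out = np.zeros_like(T, dtype=float)
    for perm in itertools.permutations(range(p)):
        # determinant of the permutation matrix gives the parity, +/-1
        sign = np.linalg.det(np.eye(p)[list(perm)])
        out += sign * np.transpose(T, perm)
    return out / math.factorial(p)

U = np.random.rand(3, 3)
S = (U + U.T) / 2             # symmetric part U_(ij)
A = antisymmetrize(U)         # antisymmetric part U_[ij] = (U - U.T)/2
assert np.allclose(U, S + A)  # U = U_(ij) + U_[ij]

B = np.random.rand(3, 3)
B = B + B.T                   # a symmetric tensor
print(np.tensordot(A, B))     # contraction A_ij B_ij, approximately 0
```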
Examples
Totally antisymmetric tensors include:
Trivially, all scalars and vectors (tensors of order 0 and 1) are totally antisymmetric (as well as being totally symmetric).
The electromagnetic tensor, in electromagnetism.
The Riemannian volume form on a pseudo-Riemannian manifold.
See also
Notes
References
External links
Antisymmetric Tensor – mathworld.wolfram.com
Tensors | Antisymmetric tensor | Engineering | 540 |
352,631 | https://en.wikipedia.org/wiki/Messier%2061 | Messier 61 (also known as M61, NGC 4303, or the Swelling Spiral Galaxy) is an intermediate barred spiral galaxy in the Virgo Cluster of galaxies. It was first discovered by Barnaba Oriani on May 5, 1779, six days before Charles Messier discovered the same galaxy. Messier had observed it on the same night as Oriani but had mistaken it for a comet. Its distance has been estimated to be 45.61 million light years from the Milky Way Galaxy. It is a member of the M61 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster.
Properties
M61 is one of the largest members of the Virgo Cluster, and belongs to a smaller subgroup known as the S Cloud. The morphological classification of SAB(rs)bc indicates a weakly-barred spiral (SAB) with the suggestion of a ring structure (rs) and moderate to loosely wound spiral arms. It has an active galactic nucleus and is classified as a starburst galaxy containing a massive nuclear star cluster with an estimated mass of 10⁵ solar masses and an age of 4 million years, as well as a candidate central supermassive black hole.
It cohabits with an older massive star cluster as well as a likely older starburst. Evidence of significant star formation and active bright nebulae appears across M61's disk. Unlike most late-type spiral galaxies within the Virgo Cluster, M61 shows an unusual abundance of neutral hydrogen (H I).
Supernovae
Eight supernovae have been observed in M61, making it one of the most prodigious galaxies for such cataclysmic events. These include:
SN 1926A (type II, mag. 14) was discovered by Max Wolf and Karl Wilhelm Reinmuth on 9 May 1926.
SN 1961I (Type II, mag. 13) was discovered by Milton Humason on 3 June 1961.
SN 1964F (Type II, mag. 14) was discovered by Leonida Rosino on 30 June 1964.
SN 1999gn (Type II, mag. 16) was discovered by Alessandro Dimai on 17 December 1999.
SN 2006ov (Type II, mag. 14.9) was discovered by Kōichi Itagaki on 24 November 2006.
SN 2008in (Type II, mag. 14.9) was discovered by Kōichi Itagaki on 26 December 2008.
SN 2014dt (type Ia-pec, mag. 13.6) was discovered by Kōichi Itagaki on 29 October 2014.
SN 2020jfo (Type II, mag. 16) was discovered by the Zwicky Transient Facility on 6 May 2020.
Gallery
See also
List of Messier objects
References
External links
messier.seds.org/m/m061.html
Intermediate spiral galaxies
Messier 061
Messier 061
061
Messier 061
07420
40001
17790505 | Messier 61 | Astronomy | 635 |
1,029,022 | https://en.wikipedia.org/wiki/Embryonic%20stem%20cell | Embryonic stem cells (ESCs) are pluripotent stem cells derived from the inner cell mass of a blastocyst, an early-stage pre-implantation embryo. Human embryos reach the blastocyst stage 4–5 days post fertilization, at which time they consist of 50–150 cells. Isolating the inner cell mass (embryoblast) using immunosurgery results in destruction of the blastocyst, a process which raises ethical issues, including whether or not embryos at the pre-implantation stage have the same moral considerations as embryos in the post-implantation stage of development.
Researchers are currently focusing heavily on the therapeutic potential of embryonic stem cells, with clinical use being the goal for many laboratories. Potential uses include the treatment of diabetes and heart disease. The cells are being studied to be used as clinical therapies, models of genetic disorders, and cellular/DNA repair. However, adverse effects in the research and clinical processes such as tumors and unwanted immune responses have also been reported.
Properties
Embryonic stem cells (ESCs), derived from the blastocyst stage of early mammalian embryos, are distinguished by their ability to differentiate into any embryonic cell type and by their ability to self-renew. It is these traits that makes them valuable in the scientific and medical fields. ESCs have a normal karyotype, maintain high telomerase activity, and exhibit remarkable long-term proliferative potential.
Pluripotent
Embryonic stem cells of the inner cell mass are pluripotent, meaning they are able to differentiate to generate primitive ectoderm, which ultimately differentiates during gastrulation into all derivatives of the three primary germ layers: ectoderm, endoderm, and mesoderm. These germ layers generate each of the more than 220 cell types in the adult human body. When provided with the appropriate signals, ESCs initially form precursor cells that subsequently differentiate into the desired cell types. Pluripotency distinguishes embryonic stem cells from adult stem cells, which are multipotent and can only produce a limited number of cell types.
Self renewal and repair of structure
Under defined conditions, embryonic stem cells are capable of self-renewing indefinitely in an undifferentiated state. Self-renewal conditions must prevent the cells from clumping and maintain an environment that supports an unspecialized state. Typically this is done in the lab with media containing serum and leukemia inhibitory factor, or with serum-free media supplemented with two inhibitory drugs ("2i"): the MEK inhibitor PD0325901 and the GSK-3 inhibitor CHIR99021.
Growth
ESCs divide very frequently due to a shortened G1 phase in their cell cycle. Rapid cell division allows the cells to quickly grow in number, but not size, which is important for early embryo development. In ESCs, the cyclin A and cyclin E proteins involved in the G1/S transition are always expressed at high levels. Cyclin-dependent kinases such as CDK2 that promote cell cycle progression are overactive, in part due to downregulation of their inhibitors. Retinoblastoma proteins that inhibit the transcription factor E2F until the cell is ready to enter S phase are hyperphosphorylated and inactivated in ESCs, leading to continual expression of proliferation genes. These changes result in accelerated cycles of cell division. Although high expression levels of pro-proliferative proteins and a shortened G1 phase have been linked to maintenance of pluripotency, ESCs grown in serum-free 2i conditions do express hypo-phosphorylated active retinoblastoma proteins and have an elongated G1 phase. Despite this difference in the cell cycle compared to ESCs grown in media containing serum, these cells have similar pluripotent characteristics. The pluripotency factors Oct4 and Nanog play a role in transcriptionally regulating the embryonic stem cell cycle.
Uses
Due to their plasticity and potentially unlimited capacity for self-renewal, embryonic stem cell therapies have been proposed for regenerative medicine and tissue replacement after injury or disease. Pluripotent stem cells have shown promise in treating a number of varying conditions, including but not limited to: spinal cord injuries, age related macular degeneration, diabetes, neurodegenerative disorders (such as Parkinson's disease), AIDS, etc. In addition to their potential in regenerative medicine, embryonic stem cells provide a possible alternative source of tissue/organs which serves as a possible solution to the donor shortage dilemma. There are some ethical controversies surrounding this though (see Ethical debate section below). Aside from these uses, ESCs can also be used for research on early human development, certain genetic disease, and in vitro toxicology testing.
Utilizations
According to a 2002 article in PNAS, "Human embryonic stem cells have the potential to differentiate into various cell types, and, thus, may be useful as a source of cells for transplantation or tissue engineering."
Tissue engineering
In tissue engineering, the use of stem cells is known to be of importance. In order to successfully engineer a tissue, the cells used must be able to perform specific biological functions such as secretion of cytokines and signaling molecules, interacting with neighboring cells, and producing an extracellular matrix in the correct organization. Stem cells demonstrate these specific biological functions, along with being able to self-renew and differentiate into one or more types of specialized cells. Embryonic stem cells are one of the sources being considered for use in tissue engineering. The use of human embryonic stem cells has opened many new possibilities for tissue engineering; however, there are many hurdles that must be overcome before human embryonic stem cells can be utilized. It is theorized that if embryonic stem cells can be altered so as not to evoke an immune response when implanted into the patient, this would be a revolutionary step in tissue engineering. The use of embryonic stem cells is not limited to tissue engineering.
Cell replacement therapies
Research has focused on differentiating ESCs into a variety of cell types for eventual use as cell replacement therapies. Some of the cell types that have or are currently being developed include cardiomyocytes, neurons, hepatocytes, bone marrow cells, islet cells and endothelial cells. However, the derivation of such cell types from ESCs is not without obstacles, therefore research has focused on overcoming these barriers. For example, studies are underway to differentiate ESCs into tissue specific cardiomyocytes and to eradicate their immature properties that distinguish them from adult cardiomyocytes.
Clinical potential
Researchers have differentiated ESCs into dopamine-producing cells with the hope that these neurons could be used in the treatment of Parkinson's disease.
ESCs have been differentiated to natural killer cells and bone tissue.
Studies involving ESCs are underway to provide an alternative treatment for diabetes. For example, ESCs have been differentiated into insulin-producing cells, and researchers at Harvard University were able to produce large quantities of pancreatic beta cells from ESCs.
An article published in the European Heart Journal describes a translational process of generating human embryonic stem cell-derived cardiac progenitor cells to be used in clinical trials of patients with severe heart failure.
Drug discovery
Besides becoming an important alternative to organ transplants, ESCs are also being used in the field of toxicology, and as cellular screens to uncover new chemical entities that can be developed as small-molecule drugs. Studies have shown that cardiomyocytes derived from ESCs are validated in vitro models to test drug responses and predict toxicity profiles. ESC derived cardiomyocytes have been shown to respond to pharmacological stimuli and hence can be used to assess cardiotoxicity such as torsades de pointes.
ESC-derived hepatocytes are also useful models that could be used in the preclinical stages of drug discovery. However, the development of hepatocytes from ESCs has proven to be challenging and this hinders the ability to test drug metabolism. Therefore, research has focused on establishing fully functional ESC-derived hepatocytes with stable phase I and II enzyme activity.
Models of genetic disorder
Several new studies have started to address the concept of modeling genetic disorders with embryonic stem cells. Either by genetically manipulating the cells, or more recently, by deriving diseased cell lines identified by prenatal genetic diagnosis (PGD), modeling genetic disorders is something that has been accomplished with stem cells. This approach may very well prove valuable at studying disorders such as Fragile-X syndrome, Cystic fibrosis, and other genetic maladies that have no reliable model system.
Yury Verlinsky, a Russian-American medical researcher who specialized in embryo and cellular genetics (genetic cytology), developed prenatal diagnosis testing methods to determine genetic and chromosomal disorders a month and a half earlier than standard amniocentesis. The techniques are now used by many pregnant women and prospective parents, especially couples who have a history of genetic abnormalities or where the woman is over the age of 35 (when the risk of genetically related disorders is higher). In addition, by allowing parents to select an embryo without genetic disorders, they have the potential of saving the lives of siblings that already had similar disorders and diseases using cells from the disease free offspring.
Repair of DNA damage
Differentiated somatic cells and ES cells use different strategies for dealing with DNA damage. For instance, human foreskin fibroblasts, one type of somatic cell, use non-homologous end joining (NHEJ), an error prone DNA repair process, as the primary pathway for repairing double-strand breaks (DSBs) during all cell cycle stages. Because of its error-prone nature, NHEJ tends to produce mutations in a cell's clonal descendants.
ES cells use a different strategy to deal with DSBs. Because ES cells give rise to all of the cell types of an organism including the cells of the germ line, mutations arising in ES cells due to faulty DNA repair are a more serious problem than in differentiated somatic cells. Consequently, robust mechanisms are needed in ES cells to repair DNA damages accurately, and if repair fails, to remove those cells with un-repaired DNA damages. Thus, mouse ES cells predominantly use high fidelity homologous recombinational repair (HRR) to repair DSBs. This type of repair depends on the interaction of the two sister chromosomes formed during S phase and present together during the G2 phase of the cell cycle. HRR can accurately repair DSBs in one sister chromosome by using intact information from the other sister chromosome. Cells in the G1 phase of the cell cycle (i.e. after metaphase/cell division but prior the next round of replication) have only one copy of each chromosome (i.e. sister chromosomes aren't present). Mouse ES cells lack a G1 checkpoint and do not undergo cell cycle arrest upon acquiring DNA damage. Rather they undergo programmed cell death (apoptosis) in response to DNA damage. Apoptosis can be used as a fail-safe strategy to remove cells with un-repaired DNA damages in order to avoid mutation and progression to cancer. Consistent with this strategy, mouse ES stem cells have a mutation frequency about 100-fold lower than that of isogenic mouse somatic cells.
Clinical trial
On January 23, 2009, Phase I clinical trials for transplantation of oligodendrocytes (a cell type of the brain and spinal cord) derived from human ESCs into spinal cord-injured individuals received approval from the U.S. Food and Drug Administration (FDA), making it the world's first human ESC trial. The study leading to this scientific advancement was conducted by Hans Keirstead and colleagues at the University of California, Irvine and supported by Geron Corporation of Menlo Park, CA, founded by Michael D. West, PhD. A previous experiment had shown an improvement in locomotor recovery in spinal cord-injured rats after a 7-day delayed transplantation of human ESCs that had been pushed into an oligodendrocytic lineage. The phase I clinical study was designed to enroll about eight to ten paraplegics who had received their injuries no longer than two weeks before the trial began, since the cells must be injected before scar tissue is able to form. The researchers emphasized that the injections were not expected to fully cure the patients and restore all mobility. Based on the results of the rodent trials, researchers speculated that restoration of myelin sheaths and an increase in mobility might occur. This first trial was primarily designed to test the safety of these procedures and, if everything went well, it was hoped that it would lead to future studies involving people with more severe disabilities. The trial was put on hold in August 2009 due to FDA concerns regarding a small number of microscopic cysts found in several treated rat models, but the hold was lifted on July 30, 2010.
In October 2010 researchers enrolled and administered ESCs to the first patient at Shepherd Center in Atlanta. The makers of the stem cell therapy, Geron Corporation, estimated that it would take several months for the stem cells to replicate and for the GRNOPC1 therapy to be evaluated for success or failure.
In November 2011 Geron announced it was halting the trial and dropping out of stem cell research for financial reasons, but would continue to monitor existing patients, and was attempting to find a partner that could continue their research. In 2013 BioTime, led by CEO Dr. Michael D. West, acquired all of Geron's stem cell assets, with the stated intention of restarting Geron's embryonic stem cell-based clinical trial for spinal cord injury research.
BioTime company Asterias Biotherapeutics (NYSE MKT: AST) was granted a $14.3 million Strategic Partnership Award by the California Institute for Regenerative Medicine (CIRM) to re-initiate the world's first embryonic stem cell-based human clinical trial, for spinal cord injury. Supported by California public funds, CIRM is the largest funder of stem cell-related research and development in the world.
The award provides funding for Asterias to reinitiate clinical development of AST-OPC1 in subjects with spinal cord injury and to expand clinical testing of escalating doses in the target population intended for future pivotal trials.
AST-OPC1 is a population of cells derived from human embryonic stem cells (hESCs) that contains oligodendrocyte progenitor cells (OPCs). OPCs and their mature derivatives called oligodendrocytes provide critical functional support for nerve cells in the spinal cord and brain. Asterias recently presented the results from phase 1 clinical trial testing of a low dose of AST-OPC1 in patients with neurologically complete thoracic spinal cord injury. The results showed that AST-OPC1 was successfully delivered to the injured spinal cord site. Patients followed 2–3 years after AST-OPC1 administration showed no evidence of serious adverse events associated with the cells in detailed follow-up assessments including frequent neurological exams and MRIs. Immune monitoring of subjects through one year post-transplantation showed no evidence of antibody-based or cellular immune responses to AST-OPC1. In four of the five subjects, serial MRI scans performed throughout the 2–3 year follow-up period indicate that reduced spinal cord cavitation may have occurred and that AST-OPC1 may have had some positive effects in reducing spinal cord tissue deterioration. There was no unexpected neurological degeneration or improvement in the five subjects in the trial as evaluated by the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) exam.
The Strategic Partnership III grant from CIRM will provide funding to Asterias to support the next clinical trial of AST-OPC1 in subjects with spinal cord injury, and for Asterias' product development efforts to refine and scale manufacturing methods to support later-stage trials and eventually commercialization. CIRM funding will be conditional on FDA approval for the trial, completion of a definitive agreement between Asterias and CIRM, and Asterias' continued progress toward the achievement of certain pre-defined project milestones.
Concern and controversy
Adverse effects
The major concern with the possible transplantation of ESCs into patients as therapies is their ability to form tumors including teratomas. Safety issues prompted the FDA to place a hold on the first ESC clinical trial, however no tumors were observed.
The main strategy to enhance the safety of ESCs for potential clinical use is to differentiate the ESCs into specific cell types (e.g. neurons, muscle, liver cells) that have reduced or eliminated ability to cause tumors. Following differentiation, the cells are subjected to sorting by flow cytometry for further purification. ESCs are predicted to be inherently safer than iPS cells created with genetically integrating viral vectors because they are not genetically modified with genes such as c-Myc that are linked to cancer. Nonetheless, ESCs express very high levels of the iPS inducing genes and these genes including Myc are essential for ESC self-renewal and pluripotency, and potential strategies to improve safety by eliminating c-Myc expression are unlikely to preserve the cells' "stemness". However, N-myc and L-myc have been identified to induce iPS cells instead of c-myc with similar efficiency. Later protocols to induce pluripotency bypass these problems completely by using non-integrating RNA viral vectors such as sendai virus or mRNA transfection.
Ethical debate
Due to the nature of embryonic stem cell research, there are many controversial opinions on the topic. Since harvesting embryonic stem cells usually necessitates destroying the embryo from which those cells are obtained, the moral status of the embryo comes into question. Some people claim that the embryo is too young to achieve personhood, or that the embryo, if donated from an IVF clinic (where labs typically acquire embryos), would otherwise go to medical waste anyway. Opponents of ESC research counter that an embryo is a human life, and that destroying it is therefore murder; in this view the embryo must be protected under the same ethical considerations as a more developed human being.
History
1964: Lewis Kleinsmith and G. Barry Pierce Jr. isolated a single type of cell from a teratocarcinoma, a tumor now known to derive from a germ cell. These cells, isolated from the teratocarcinoma, replicated and grew in cell culture as stem cells and are now known as embryonal carcinoma (EC) cells. Although similarities in morphology and differentiating potential (pluripotency) led to the use of EC cells as the in vitro model for early mouse development, EC cells harbor genetic mutations and often abnormal karyotypes that accumulated during the development of the teratocarcinoma. These genetic aberrations further emphasized the need to be able to culture pluripotent cells directly from the inner cell mass.
1981: Embryonic stem cells (ES cells) were first derived independently from mouse embryos by two groups. Martin Evans and Matthew Kaufman from the Department of Genetics, University of Cambridge published first in July, revealing a new technique for culturing the mouse embryos in the uterus to allow for an increase in cell number, allowing for the derivation of ES cells from these embryos. Gail R. Martin, from the Department of Anatomy, University of California, San Francisco, published her paper in December and coined the term "embryonic stem cell". She showed that embryos could be cultured in vitro and that ES cells could be derived from these embryos.
1989: Mario R. Capecchi, Martin J. Evans, and Oliver Smithies publish their research detailing their isolation and genetic modification of embryonic stem cells, creating the first "knockout mice". In creating knockout mice, this publication provided scientists with an entirely new way to study disease.
1996: Dolly the sheep, the first mammal cloned from an adult cell, was created at the Roslin Institute of the University of Edinburgh. This experiment established that specialized adult cells retain the genetic makeup needed to form an entire organism, providing a basis for further research within a variety of cloning techniques. The experiment was performed by obtaining mammary (udder) cells from a donor sheep and culturing them until division had ceased. An egg cell was then procured from a different sheep and its nucleus removed. An udder cell was placed next to the egg cell and the two were fused with an electrical pulse, causing the egg to take up the donor's DNA. This egg cell developed into an embryo, and the embryo was implanted into a third sheep, which gave birth to the clone, Dolly.
1998: A team from the University of Wisconsin, Madison (James A. Thomson, Joseph Itskovitz-Eldor, Sander S. Shapiro, Michelle A. Waknitz, Jennifer J. Swiergiel, Vivienne S. Marshall, and Jeffrey M. Jones) publish a paper titled "Embryonic Stem Cell Lines Derived From Human Blastocysts". The researchers behind this study not only created the first embryonic stem cells, but recognized their pluripotency, as well as their capacity for self-renewal. The abstract of the paper notes the significance of the discovery with regards to the fields of developmental biology and drug discovery.
2001: President George W. Bush allows federal funding to support research on roughly 60 lines of embryonic stem cells already in existence at the time. Seeing as the limited lines that Bush allowed research on had already been established, this policy supported embryonic stem cell research without raising the ethical questions that could arise with the creation of new lines under the federal budget.
2006: Japanese scientists Shinya Yamanaka and Kazutoshi Takahashi publish a paper describing the induction of pluripotent stem cells from cultures of adult mouse fibroblasts. Induced pluripotent stem cells (iPSCs) were a major discovery, as they appear functionally equivalent to embryonic stem cells and could be used without sparking the same moral controversy.
January, 2009: The US Food and Drug Administration (FDA) provides approval for Geron Corporation's phase I trial of their human embryonic stem cell-derived treatment for spinal cord injuries. The announcement was met with excitement from the scientific community, but also with wariness from stem cell opposers. The treatment cells were, however, derived from the cell lines approved under George W. Bush's ESC policy.
March, 2009: Executive Order 13505 is signed by President Barack Obama, removing the restrictions put in place on federal funding for human stem cells by the previous presidential administration. This would allow the National Institutes of Health (NIH) to provide funding for hESC research. The document also states that the NIH must provide revised federal funding guidelines within 120 days of the order's signing.
Techniques and conditions for derivation and culture
Derivation from humans
In vitro fertilization generates multiple embryos. The surplus of embryos is not clinically used or is unsuitable for implantation into the patient, and therefore may be donated by the donor with consent. Human embryonic stem cells can be derived from these donated embryos; additionally, they can be extracted from cloned embryos created using a cell from a patient and a donated egg through the process of somatic cell nuclear transfer. The inner cell mass (the cells of interest), from the blastocyst stage of the embryo, is separated from the trophectoderm, the cells that would differentiate into extra-embryonic tissue. Immunosurgery, the process in which antibodies are bound to the trophectoderm and removed by another solution, and mechanical dissection are performed to achieve separation. The resulting inner cell mass cells are plated onto cells that will supply support. The inner cell mass cells attach and expand further to form a human embryonic cell line, which remains undifferentiated. These cells are fed daily and are enzymatically or mechanically separated every four to seven days. For differentiation to occur, the human embryonic stem cell line is removed from the supporting cells to form embryoid bodies, is co-cultured with a serum containing the necessary signals, or is grafted into a three-dimensional scaffold.
Derivation from other animals
Embryonic stem cells are derived from the inner cell mass of the early embryo, which is harvested from the donor mother animal. Martin Evans and Matthew Kaufman reported a technique that delays embryo implantation, allowing the inner cell mass to increase. This process includes removing the donor mother's ovaries and dosing her with progesterone, changing the hormone environment, which causes the embryos to remain free in the uterus. After 4–6 days of this intrauterine culture, the embryos are harvested and grown in in vitro culture until the inner cell mass forms "egg cylinder-like structures," which are dissociated into single cells and plated on fibroblasts treated with mitomycin-c (to prevent fibroblast mitosis). Clonal cell lines are created by growing up a single cell. Evans and Kaufman showed that the cells grown out from these cultures could form teratomas and embryoid bodies, and differentiate in vitro, all of which indicated that the cells are pluripotent.
Gail Martin derived and cultured her ES cells differently. She removed the embryos from the donor mother at approximately 76 hours after copulation and cultured them overnight in a medium containing serum. The following day, she removed the inner cell mass from the late blastocyst using microsurgery. The extracted inner cell mass was cultured on fibroblasts treated with mitomycin-c in a medium containing serum and conditioned by ES cells. After approximately one week, colonies of cells grew out. These cells grew in culture and demonstrated pluripotent characteristics, as demonstrated by the ability to form teratomas, differentiate in vitro, and form embryoid bodies. Martin referred to these cells as ES cells.
It is now known that the feeder cells provide leukemia inhibitory factor (LIF) and serum provides bone morphogenetic proteins (BMPs) that are necessary to prevent ES cells from differentiating. These factors are extremely important for the efficiency of deriving ES cells. Furthermore, it has been demonstrated that different mouse strains have different efficiencies for isolating ES cells. Current uses for mouse ES cells include the generation of transgenic mice, including knockout mice. For human treatment, there is a need for patient specific pluripotent cells. Generation of human ES cells is more difficult and faces ethical issues. So, in addition to human ES cell research, many groups are focused on the generation of induced pluripotent stem cells (iPS cells).
Potential methods for new cell line derivation
On August 23, 2006, the online edition of Nature scientific journal published a letter by Dr. Robert Lanza (medical director of Advanced Cell Technology in Worcester, MA) stating that his team had found a way to extract embryonic stem cells without destroying the actual embryo. This technical achievement would potentially enable scientists to work with new lines of embryonic stem cells derived using public funding in the US, where federal funding was at the time limited to research using embryonic stem cell lines derived prior to August 2001. In March, 2009, the limitation was lifted.
Human embryonic stem cells have also been derived by somatic cell nuclear transfer (SCNT). This approach has also sometimes been referred to as "therapeutic cloning" because SCNT bears similarity to other kinds of cloning in that nuclei are transferred from a somatic cell into an enucleated zygote. However, in this case SCNT was used to produce embryonic stem cell lines in a lab, not living organisms via a pregnancy. The "therapeutic" part of the name is included because of the hope that SCNT produced embryonic stem cells could have clinical utility.
Induced pluripotent stem cells
The iPS cell technology was pioneered by Shinya Yamanaka's lab in Kyoto, Japan, who showed in 2006 that the introduction of four specific genes encoding transcription factors could convert adult cells into pluripotent stem cells. He was awarded the 2012 Nobel Prize along with Sir John Gurdon "for the discovery that mature cells can be reprogrammed to become pluripotent."
In 2007, it was shown that pluripotent stem cells highly similar to embryonic stem cells can be induced by the delivery of four factors (Oct3/4, Sox2, c-Myc, and Klf4) to differentiated cells. Utilizing the four genes previously listed, the differentiated cells are "reprogrammed" into pluripotent stem cells, allowing for the generation of pluripotent/embryonic stem cells without the embryo. Because the morphology and growth characteristics of these lab-induced pluripotent cells are equivalent to those of embryonic stem cells, they have come to be known as induced pluripotent stem cells (iPS cells). This was originally demonstrated in mouse cells, but the reprogramming can now be performed in human adult fibroblasts using the same four genes.
Because ethical concerns regarding embryonic stem cells typically are about their derivation from terminated embryos, it is believed that reprogramming to these iPS cells may be less controversial.
This may enable the generation of patient specific ES cell lines that could potentially be used for cell replacement therapies. In addition, this will allow the generation of ES cell lines from patients with a variety of genetic diseases and will provide invaluable models to study those diseases.
However, as a first indication that the iPS cell technology can in rapid succession lead to new cures, it was used by a research team headed by Rudolf Jaenisch of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, to cure mice of sickle cell anemia, as reported by Science journal's online edition on December 6, 2007.
On January 16, 2008, a California-based company, Stemagen, announced that they had created the first mature cloned human embryos from single skin cells taken from adults. These embryos can be harvested for patient matching embryonic stem cells.
Contamination by reagents used in cell culture
The online edition of Nature Medicine published a study on January 24, 2005, which stated that the human embryonic stem cells available for federally funded research are contaminated with non-human molecules from the culture medium used to grow the cells. It is a common technique to use mouse cells and other animal cells to maintain the pluripotency of actively dividing stem cells. The problem was discovered when non-human sialic acid in the growth medium was found to compromise the potential uses of the embryonic stem cells in humans, according to scientists at the University of California, San Diego.
However, a study published in the online edition of Lancet Medical Journal on March 8, 2005, detailed information about a new stem cell line that was derived from human embryos under completely cell- and serum-free conditions. After more than 6 months of undifferentiated proliferation, these cells demonstrated the potential to form derivatives of all three embryonic germ layers both in vitro and in teratomas. These properties were also successfully maintained (for more than 30 passages) with the established stem cell lines.
Muse cells
Muse cells (Multi-lineage differentiating stress enduring cell) are non-cancerous pluripotent stem cell found in adults. They were discovered in 2010 by Mari Dezawa and her research group. Muse cells reside in the connective tissue of nearly every organ including the umbilical cord, bone marrow and peripheral blood. They are collectable from commercially obtainable mesenchymal cells such as human fibroblasts, bone marrow-mesenchymal stem cells and adipose-derived stem cells. Muse cells are able to generate cells representative of all three germ layers from a single cell both spontaneously and under cytokine induction. Expression of pluripotency genes and triploblastic differentiation are self-renewable over generations. Muse cells do not undergo teratoma formation when transplanted into a host environment in vivo, eradicating the risk of tumorigenesis through unbridled cell proliferation.
See also
Embryoid body
Embryonic Stem Cell Research Oversight Committees
Fetal tissue implant
Induced stem cells
KOSR (KnockOut Serum Replacement)
Stem cell controversy
References
External links
Understanding Stem Cells: A View of the Science and Issues from the National Academies
National Institutes of Health
University of Oxford practical workshop on pluripotent stem cell technology
Fact sheet on embryonic stem cells
Fact sheet on ethical issues in embryonic stem cell research
Information & Alternatives to Embryonic Stem Cell Research
A blog focusing specifically on ES cells and iPS cells including research, biotech, and patient-oriented issues
Stem cells
Biotechnology
Embryology
1981 in biotechnology
Sociobiology | Embryonic stem cell | Biology | 6,783 |
63,942,274 | https://en.wikipedia.org/wiki/C/2019%20Y1%20%28ATLAS%29 | C/2019 Y1 (ATLAS) is a comet with a near-parabolic orbit discovered by the ATLAS survey on 16 December 2019. It passed perihelion on 15 March 2020 at 0.84 AU from the Sun. Its orbit is very similar to C/1988 A1 (Liller), C/1996 Q1 (Tabur), C/2015 F3 (SWAN) and C/2023 V5 (Leonard), suggesting they may be fragments of a larger ancient comet.
Observations
The comet passed close to Earth in early May 2020. It was visible in the northern hemisphere sky in the spring of 2020.
References
External links
TheSkyLive Comet C/2019 Y1 (ATLAS)
astro.vanbuitenen.nl 2019Y1
Universe Today 25 March 2020 by David Dickinson
Non-periodic comets
Discoveries by ATLAS
20191216
Comets in 2019
Comets in 2020 | C/2019 Y1 (ATLAS) | Astronomy | 181 |
13,657,747 | https://en.wikipedia.org/wiki/Dirac%20bracket | The Dirac bracket is a generalization of the Poisson bracket developed by Paul Dirac to treat classical systems with second class constraints in Hamiltonian mechanics, and to thus allow them to undergo canonical quantization. It is an important part of Dirac's development of Hamiltonian mechanics to elegantly handle more general Lagrangians; specifically, when constraints are at hand, so that the number of apparent variables exceeds that of dynamical ones. More abstractly, the two-form implied from the Dirac bracket is the restriction of the symplectic form to the constraint surface in phase space.
This article assumes familiarity with the standard Lagrangian and Hamiltonian formalisms, and their connection to canonical quantization. Details of Dirac's modified Hamiltonian formalism are also summarized to put the Dirac bracket in context.
Inadequacy of the standard Hamiltonian procedure
The standard development of Hamiltonian mechanics is inadequate in several specific situations:
When the Lagrangian is at most linear in the velocity of at least one coordinate; in which case, the definition of the canonical momentum leads to a constraint. This is the most frequent reason to resort to Dirac brackets. For instance, the Lagrangian (density) for any fermion is of this form.
When there are gauge (or other unphysical) degrees of freedom which need to be fixed.
When there are any other constraints that one wishes to impose in phase space.
Example of a Lagrangian linear in velocity
An example in classical mechanics is a particle with charge e and mass m confined to the x–y plane with a strong constant, homogeneous perpendicular magnetic field, pointing in the z-direction with strength B.
The Lagrangian for this system with an appropriate choice of parameters is

  L = (m/2)(ẋ² + ẏ²) + (e/c) A·ṙ − V(x, y),

where A is the vector potential for the magnetic field B; c is the speed of light in vacuum; and V(x, y) is an arbitrary external scalar potential; one could easily take it to be quadratic in x and y, without loss of generality. We use

  A = (B/2)(x ŷ − y x̂)

as our vector potential; this corresponds to a uniform and constant magnetic field B in the z direction. Here, the hats indicate unit vectors. Later in the article, however, they are used to distinguish quantum mechanical operators from their classical analogs. The usage should be clear from the context.
Explicitly, the Lagrangian amounts to just

  L = (m/2)(ẋ² + ẏ²) + (eB/2c)(x ẏ − y ẋ) − V(x, y),

which leads to the equations of motion

  m ẍ = −∂V/∂x + (eB/c) ẏ,
  m ÿ = −∂V/∂y − (eB/c) ẋ.

For a harmonic potential, V = (x² + y²)/2, the gradient of V amounts to just the coordinates, ∇V = (x, y).

Now, in the limit of a very large magnetic field, eB/(mc) → ∞. One may then drop the kinetic term to produce a simple approximate Lagrangian,

  L = (eB/2c)(x ẏ − y ẋ) − V(x, y),

with first-order equations of motion

  ẏ = (c/eB) ∂V/∂x,
  ẋ = −(c/eB) ∂V/∂y.

Note that this approximate Lagrangian is linear in the velocities, which is one of the conditions under which the standard Hamiltonian procedure breaks down. While this example has been motivated as an approximation, the Lagrangian under consideration is legitimate and leads to consistent equations of motion in the Lagrangian formalism.
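As an illustrative aside (not part of Dirac's treatment), the minimal sketch below integrates these first-order equations for the harmonic potential V = (x² + y²)/2 in units where eB/c = 1, so that ẋ = −y and ẏ = x; the resulting guiding-centre orbit is a circle of constant radius, as the equations predict.

```python
# Numerical sketch of the first-order (large-B) equations of motion,
# assuming a harmonic potential V = (x^2 + y^2)/2 and units with eB/c = 1,
# so that xdot = -dV/dy = -y and ydot = +dV/dx = x.
import numpy as np

def drift_trajectory(x0, y0, dt=1e-3, steps=10000):
    """Integrate the first-order system with a simple RK4 stepper."""
    def rhs(state):
        x, y = state
        return np.array([-y, x])          # (xdot, ydot) for V = (x^2 + y^2)/2

    state = np.array([x0, y0], dtype=float)
    path = [state.copy()]
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(state.copy())
    return np.array(path)

path = drift_trajectory(1.0, 0.0)
radii = np.hypot(path[:, 0], path[:, 1])
print(radii.min(), radii.max())  # stays ~1.0: the drift orbit is a circle
```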
Following the Hamiltonian procedure, however, the canonical momenta associated with the coordinates are now

  p_x = ∂L/∂ẋ = −(eB/2c) y,
  p_y = ∂L/∂ẏ = (eB/2c) x,

which are unusual in that they are not invertible to the velocities; instead, they are constrained to be functions of the coordinates: the four phase-space variables are linearly dependent, so the variable basis is overcomplete.

A Legendre transformation then produces the Hamiltonian

  H(x, y, p_x, p_y) = ẋ p_x + ẏ p_y − L = V(x, y).

Note that this "naive" Hamiltonian has no dependence on the momenta, which means that equations of motion (Hamilton's equations) are inconsistent.

The Hamiltonian procedure has broken down. One might try to fix the problem by eliminating two of the components of the 4-dimensional phase space, say y and p_y, down to a reduced phase space of 2 dimensions, that is sometimes expressing the coordinates as momenta and sometimes as coordinates. However, this is neither a general nor rigorous solution. This gets to the heart of the matter: that the definition of the canonical momenta implies a constraint on phase space (between momenta and coordinates) that was never taken into account.
Generalized Hamiltonian procedure
In Lagrangian mechanics, if the system has holonomic constraints, then one generally adds Lagrange multipliers to the Lagrangian to account for them. The extra terms vanish when the constraints are satisfied, thereby forcing the path of stationary action to be on the constraint surface. In this case, going to the Hamiltonian formalism introduces a constraint on phase space in Hamiltonian mechanics, but the solution is similar.
Before proceeding, it is useful to understand the notions of weak equality and strong equality. Two functions on phase space, f and g, are weakly equal if they are equal when the constraints are satisfied, but not throughout the phase space; this is denoted f ≈ g. If f and g are equal independently of the constraints being satisfied, they are called strongly equal, written f = g. It is important to note that, in order to get the right answer, no weak equations may be used before evaluating derivatives or Poisson brackets.
The new procedure works as follows: start with a Lagrangian and define the canonical momenta in the usual way. Some of those definitions may not be invertible and instead give a constraint in phase space (as above). Constraints derived in this way or imposed from the beginning of the problem are called primary constraints. The constraints, labeled φ_j, must weakly vanish, φ_j(q, p) ≈ 0.
Next, one finds the naive Hamiltonian, H, in the usual way via a Legendre transformation, exactly as in the above example. Note that the Hamiltonian can always be written as a function of the q's and p's only, even if the velocities cannot be inverted into functions of the momenta.
Generalizing the Hamiltonian
Dirac argues that we should generalize the Hamiltonian (somewhat analogously to the method of Lagrange multipliers) to

  H* = H + u_k φ_k ≈ H,

where the u_k are not constants but functions of the coordinates and momenta. Since this new Hamiltonian is the most general function of coordinates and momenta weakly equal to the naive Hamiltonian, H* is the broadest generalization of the Hamiltonian possible, so that H* ≈ H when φ_k ≈ 0.
To further illuminate the u_k, consider how one gets the equations of motion from the naive Hamiltonian in the standard procedure. One expands the variation of the Hamiltonian out in two ways and sets them equal (using a somewhat abbreviated notation with suppressed indices and sums):

  δH = (∂H/∂q) δq + (∂H/∂p) δp = q̇ δp − ṗ δq,

where the second equality holds after simplifying with the Euler-Lagrange equations of motion and the definition of canonical momentum. From this equality, one deduces the equations of motion in the Hamiltonian formalism from

  (∂H/∂q + ṗ) δq + (∂H/∂p − q̇) δp ≈ 0,

where the weak equality symbol is no longer displayed explicitly, since by definition the equations of motion only hold weakly. In the present context, one cannot simply set the coefficients of δq and δp separately to zero, since the variations are somewhat restricted by the constraints. In particular, the variations must be tangent to the constraint surface.
One can demonstrate that the solution to

  A δq + B δp ≈ 0,

for the variations δq and δp restricted by the constraints φ_k ≈ 0 (assuming the constraints satisfy some regularity conditions) is generally

  A = u_k ∂φ_k/∂q,   B = u_k ∂φ_k/∂p,

where the u_k are arbitrary functions.

Using this result, the equations of motion become

  q̇ ≈ ∂H/∂p + u_k ∂φ_k/∂p,
  ṗ ≈ −∂H/∂q − u_k ∂φ_k/∂q,
  φ_k(q, p) ≈ 0,

where the u_k are functions of coordinates and velocities that can be determined, in principle, from the second equation of motion above.
The Legendre transform between the Lagrangian formalism and the Hamiltonian formalism has been saved at the cost of adding new variables.
Consistency conditions
The equations of motion become more compact when using the Poisson bracket, since if f is some function of the coordinates and momenta then

  ḟ ≈ {f, H*} ≈ {f, H} + u_k {f, φ_k},

if one assumes that the Poisson brackets with the u_k (functions of the velocity) exist; this causes no problems since the contribution weakly vanishes. Now, there are some consistency conditions which must be satisfied in order for this formalism to make sense. If the constraints are going to be satisfied, then their equations of motion must weakly vanish, that is, we require

  φ̇_j ≈ {φ_j, H} + u_k {φ_j, φ_k} ≈ 0.
There are four different types of conditions that can result from the above:
An equation that is inherently false, such as 1 = 0.
An equation that is identically true, possibly after using one of our primary constraints.
An equation that places new constraints on our coordinates and momenta, but is independent of the u_k.
An equation that serves to specify the u_k.
The first case indicates that the starting Lagrangian gives inconsistent equations of motion, as happens for L = q (whose Euler-Lagrange equation reads 0 = 1). The second case does not contribute anything new.
The third case gives new constraints in phase space. A constraint derived in this manner is called a secondary constraint. Upon finding the secondary constraint one should add it to the extended Hamiltonian and check the new consistency conditions, which may result in still more constraints. Iterate this process until there are no more constraints. The distinction between primary and secondary constraints is largely an artificial one (i.e. a constraint for the same system can be primary or secondary depending on the Lagrangian), so this article does not distinguish between them from here on. Assuming the consistency condition has been iterated until all of the constraints have been found, the φ_j will then index all of them. Note this article uses secondary constraint to mean any constraint that was not initially in the problem or derived from the definition of canonical momenta; some authors distinguish between secondary constraints, tertiary constraints, et cetera.
Finally, the last case helps fix the u_k. If, at the end of this process, the u_k are not completely determined, then that means there are unphysical (gauge) degrees of freedom in the system. Once all of the constraints (primary and secondary) are added to the naive Hamiltonian and the solutions to the consistency conditions for the u_k are plugged in, the result is called the total Hamiltonian.
Determination of the u_k
The u_k must solve a set of inhomogeneous linear equations of the form

  {φ_j, H} + u_k {φ_j, φ_k} ≈ 0.

The above equation must possess at least one solution, since otherwise the initial Lagrangian is inconsistent; however, in systems with gauge degrees of freedom, the solution will not be unique. The most general solution is of the form

  u_k = U_k + V_k,

where U_k is a particular solution and V_k is the most general solution to the homogeneous equation

  V_k {φ_j, φ_k} ≈ 0.

The most general solution will be a linear combination of linearly independent solutions to the above homogeneous equation. The number of linearly independent solutions equals the number of u_k (which is the same as the number of constraints) minus the number of consistency conditions of the fourth type (in previous subsection). This is the number of unphysical degrees of freedom in the system. Labeling the linearly independent solutions V_k^a, where the index a runs from 1 to the number of unphysical degrees of freedom, the general solution to the consistency conditions is of the form

  u_k ≈ U_k + Σ_a v_a V_k^a,

where the v_a are completely arbitrary functions of time. A different choice of the v_a corresponds to a gauge transformation, and should leave the physical state of the system unchanged.
The total Hamiltonian
At this point, it is natural to introduce the total Hamiltonian

  H_T = H' + Σ_a v_a φ_a,

and what is denoted

  H' = H + U_k φ_k.

The time evolution of a function g on the phase space is governed by

  ġ ≈ {g, H_T}.
Later, the extended Hamiltonian is introduced. For gauge-invariant quantities (physically measurable quantities), all of the Hamiltonians should give the same time evolution, since they are all weakly equivalent. It is only for non gauge-invariant quantities that the distinction becomes important.
The Dirac bracket
Above is everything needed to find the equations of motion in Dirac's modified Hamiltonian procedure. Having the equations of motion, however, is not the endpoint for theoretical considerations. If one wants to canonically quantize a general system, then one needs the Dirac brackets. Before defining Dirac brackets, first-class and second-class constraints need to be introduced.
We call a function f(q, p) of coordinates and momenta first class if its Poisson bracket with all of the constraints weakly vanishes, that is,

  {f, φ_j} ≈ 0,

for all j. Note that the only quantities that weakly vanish are the constraints φ_j, and therefore anything that weakly vanishes must be strongly equal to a linear combination of the constraints. One can demonstrate that the Poisson bracket of two first-class quantities must also be first class. The first-class constraints are intimately connected with the unphysical degrees of freedom mentioned earlier. Namely, the number of independent first-class constraints is equal to the number of unphysical degrees of freedom, and furthermore, the primary first-class constraints generate gauge transformations. Dirac further postulated that all secondary first-class constraints are generators of gauge transformations, which turns out to be false; however, typically one operates under the assumption that all first-class constraints generate gauge transformations when using this treatment.
When the first-class secondary constraints are added into the Hamiltonian with arbitrary coefficients v_a, just as the first-class primary constraints were added to arrive at the total Hamiltonian, then one obtains the extended Hamiltonian. The extended Hamiltonian gives the most general possible time evolution for any gauge-dependent quantities, and may actually generalize the equations of motion from those of the Lagrangian formalism.
For the purposes of introducing the Dirac bracket, of more immediate interest are the second class constraints. Second class constraints are constraints that have a nonvanishing Poisson bracket with at least one other constraint.
For instance, consider second-class constraints φ₁ and φ₂ whose Poisson bracket is simply a constant, c:

  {φ₁, φ₂} = c.

Now, suppose one wishes to employ canonical quantization, then the phase-space coordinates become operators whose commutators become iħ times their classical Poisson bracket. Assuming there are no ordering issues that give rise to new quantum corrections, this implies that

  [φ̂₁, φ̂₂] = iħ c,

where the hats emphasize the fact that the constraints are on operators.

On one hand, canonical quantization gives the above commutation relation, but on the other hand φ̂₁ and φ̂₂ are constraints that must vanish on physical states, whereas the right-hand side cannot vanish. This example illustrates the need for some generalization of the Poisson bracket which respects the system's constraints, and which leads to a consistent quantization procedure. This new bracket should be bilinear, antisymmetric, satisfy the Jacobi identity as does the Poisson bracket, reduce to the Poisson bracket for unconstrained systems, and, additionally, the bracket of any second-class constraint with any other quantity must vanish.
At this point, the second class constraints will be labeled φ̃_a. Define a matrix M with entries

  M_ab = {φ̃_a, φ̃_b}.

In this case, the Dirac bracket of two functions on phase space, f and g, is defined as

  {f, g}_DB = {f, g} − {f, φ̃_a} (M⁻¹)_ab {φ̃_b, g},

where (M⁻¹)_ab denotes the ab entry of M's inverse matrix. Dirac proved that M will always be invertible.
It is straightforward to check that the above definition of the Dirac bracket satisfies all of the desired properties, and especially the last one, of vanishing for an argument which is a second-class constraint.
When applying canonical quantization on a constrained Hamiltonian system, the commutator of the operators is supplanted by times their classical Dirac bracket. Since the Dirac bracket respects the constraints, one need not be careful about evaluating all brackets before using any weak equations, as is the case with the Poisson bracket.
Note that while the Poisson bracket of bosonic (Grassmann even) variables with itself must vanish, the Poisson bracket of fermions represented as Grassmann variables with itself need not vanish. This means that in the fermionic case it is possible for there to be an odd number of second class constraints.
Illustration on the example provided
Returning to the above example, the naive Hamiltonian and the two primary constraints are

  H = V(x, y),
  φ₁ = p_x + (eB/2c) y,   φ₂ = p_y − (eB/2c) x.

Therefore, the extended Hamiltonian can be written

  H* = V(x, y) + u₁ φ₁ + u₂ φ₂.

The next step is to apply the consistency conditions {φ_j, H*} ≈ 0, which in this case become

  {φ₁, H*} = −∂V/∂x + u₂ (eB/c) ≈ 0,
  {φ₂, H*} = −∂V/∂y − u₁ (eB/c) ≈ 0.

These are not secondary constraints, but conditions that fix u₁ and u₂. Therefore, there are no secondary constraints and the arbitrary coefficients are completely determined, indicating that there are no unphysical degrees of freedom.

If one plugs in the values u₁ = −(c/eB) ∂V/∂y and u₂ = (c/eB) ∂V/∂x, then one can see that the equations of motion are

  ẋ = {x, H*} = u₁ = −(c/eB) ∂V/∂y,
  ẏ = {y, H*} = u₂ = (c/eB) ∂V/∂x,

which are self-consistent and coincide with the Lagrangian equations of motion.

A simple calculation confirms that φ₁ and φ₂ are second class constraints since

  {φ₁, φ₂} = eB/c = −{φ₂, φ₁},

hence the matrix M looks like

  M = (eB/c) [[0, 1], [−1, 0]],

which is easily inverted to

  M⁻¹ = (c/eB) [[0, −1], [1, 0]] = −(c/eB) ε,

where ε is the two-index Levi-Civita symbol (ε₁₂ = +1). Thus, the Dirac brackets are defined to be

  {f, g}_DB = {f, g} + (c/eB) ε_ab {f, φ_a} {φ_b, g}.
If one always uses the Dirac bracket instead of the Poisson bracket, then there is no issue about the order of applying constraints and evaluating expressions, since the Dirac bracket of anything weakly zero is strongly equal to zero. This means that one can just use the naive Hamiltonian with Dirac brackets, instead, to thus get the correct equations of motion, which one can easily confirm on the above ones.
To quantize the system, the Dirac brackets between all of the phase space variables are needed. The nonvanishing Dirac brackets for this system are

  {x, y}_DB = −c/(eB),
  {x, p_x}_DB = {y, p_y}_DB = 1/2,

while the cross-terms vanish, and

  {p_x, p_y}_DB = −eB/(4c).

Therefore, the correct implementation of canonical quantization dictates the commutation relations,

  [x̂, ŷ] = −iħ c/(eB),
  [x̂, p̂_x] = [ŷ, p̂_y] = iħ/2,

with the cross terms vanishing, and

  [p̂_x, p̂_y] = −iħ eB/(4c).
This example has a nonvanishing commutator between and , which means this structure specifies a noncommutative geometry. (Since the two coordinates do not commute, there will be an uncertainty principle for the and positions.)
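These brackets can be verified mechanically. The following SymPy sketch (assuming the constraint labels φ₁, φ₂ reconstructed above and the shorthand k = eB/c) builds the matrix M from the canonical Poisson brackets and reproduces the Dirac brackets just quoted.

```python
# Symbolic check of the Dirac brackets for the strong-field example,
# with phi1 = p_x + (k/2) y and phi2 = p_y - (k/2) x, where k = eB/c.
import sympy as sp

x, y, px, py, k = sp.symbols('x y p_x p_y k')
coords, moms = [x, y], [px, py]

def pb(f, g):
    """Canonical Poisson bracket on the (x, y, p_x, p_y) phase space."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(coords, moms))

phi = [px + k * y / 2, py - k * x / 2]          # second-class constraints
M = sp.Matrix(2, 2, lambda a, b: pb(phi[a], phi[b]))
Minv = M.inv()

def db(f, g):
    """Dirac bracket {f, g}_DB = {f, g} - {f, phi_a} (M^-1)_ab {phi_b, g}."""
    corr = sum(pb(f, phi[a]) * Minv[a, b] * pb(phi[b], g)
               for a in range(2) for b in range(2))
    return sp.simplify(pb(f, g) - corr)

print(db(x, y))    # -1/k,  i.e. -c/(eB)
print(db(x, px))   # 1/2
print(db(px, py))  # -k/4,  i.e. -eB/(4c)
```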
Further illustration for a hypersphere
Similarly, for free motion on a hypersphere Sⁿ, the n + 1 coordinates are constrained, x_i x_i = 1. From a plain kinetic Lagrangian, it is evident that their momenta are perpendicular to them, x_i p_i = 0. Thus the corresponding Dirac brackets are likewise simple to work out,

  {x_i, x_j}_DB = 0,
  {x_i, p_j}_DB = δ_ij − x_i x_j,
  {p_i, p_j}_DB = x_j p_i − x_i p_j.

The 2(n + 1) constrained phase-space variables (x_i, p_i) obey much simpler Dirac brackets than the 2n unconstrained variables, had one eliminated one of the x's and one of the p's through the two constraints ab initio, which would obey plain Poisson brackets. The Dirac brackets add simplicity and elegance, at the cost of excessive (constrained) phase-space variables.
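The same machinery checks the hypersphere brackets. A minimal SymPy sketch for the 2-sphere (ambient dimension 3), assuming the constraints φ₁ = x·x − 1 and φ₂ = x·p, mirrors the helper functions of the previous sketch:

```python
# Spot check of the hypersphere Dirac brackets for S^2 (ambient dimension 3),
# assuming the constraints phi1 = x.x - 1 and phi2 = x.p.
import sympy as sp

xs = sp.symbols('x1 x2 x3')
ps = sp.symbols('p1 p2 p3')

def pb(f, g):
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(xs, ps))

phi = [sum(q**2 for q in xs) - 1, sum(q * p for q, p in zip(xs, ps))]
M = sp.Matrix(2, 2, lambda a, b: pb(phi[a], phi[b]))
Minv = M.inv()

def db(f, g):
    corr = sum(pb(f, phi[a]) * Minv[a, b] * pb(phi[b], g)
               for a in range(2) for b in range(2))
    return sp.simplify(pb(f, g) - corr)

# Impose the constraint surface x.x = 1 before reading off the result
on_sphere = lambda e: sp.simplify(e.subs(xs[2]**2, 1 - xs[0]**2 - xs[1]**2))
print(on_sphere(db(xs[0], xs[1])))   # 0
print(on_sphere(db(xs[0], ps[0])))   # 1 - x1**2, i.e. delta_ij - x_i x_j
print(on_sphere(db(ps[0], ps[1])))   # p1*x2 - p2*x1, i.e. x_j p_i - x_i p_j
```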
For example, for free motion on a circle, n = 1, x₁ ≡ x, and eliminating x₂ from the circle constraint yields the unconstrained

  L = ẋ² / (2(1 − x²)),

with equations of motion

  ẍ = −x ẋ²/(1 − x²) = −2E x

(using conservation of the energy E = ẋ²/(2(1 − x²))), an oscillation; whereas the equivalent constrained system with

  H = p·p/2 = (p₁² + p₂²)/2

yields

  ẋ_i = {x_i, H}_DB = p_i,   ṗ_i = {p_i, H}_DB = −p² x_i,

whence, instantly, virtually by inspection, oscillation for both variables,

  ẍ_i = −p² x_i.
See also
Canonical quantization
Hamiltonian mechanics
Poisson bracket
Moyal bracket
First class constraint
Second class constraints
Lagrangian
Symplectic structure
Overcompleteness
References
Mathematical quantization
Symplectic geometry
Hamiltonian mechanics | Dirac bracket | Physics,Mathematics | 3,662 |
26,868,709 | https://en.wikipedia.org/wiki/Polder%20tensor | The Polder tensor is a tensor introduced by Dirk Polder for the description of magnetic permeability of ferrites. The tensor notation needs to be used because ferrimagnetic material becomes anisotropic in the presence of a magnetizing field.
The tensor is described mathematically as:

  [μ] = [[μ, iκ, 0], [−iκ, μ, 0], [0, 0, μ₀]].

Neglecting the effects of damping, the components of the tensor are given by

  μ = μ₀ (1 + ω₀ ω_m / (ω₀² − ω²)),
  κ = μ₀ ω ω_m / (ω₀² − ω²),

where

  ω₀ = γ H₀,   ω_m = γ M,

γ = (g/2) × 2.21 × 10⁵ (rad / s) / (A / m) is the effective gyromagnetic ratio and g, the so-called effective g-factor (physics), is a ferrite material constant typically in the range of 1.5 - 2.6, depending on the particular ferrite material. ω is the frequency of the RF/microwave signal propagating through the ferrite, H₀ is the internal magnetic bias field, M is the magnetization of the ferrite material and μ₀ is the magnetic permeability of free space.
To simplify computations, the radian frequencies of ω₀ and ω_m can be replaced with frequencies (Hz) in the equations for μ and κ because the factor 2π cancels. In this case, γ ≈ 3.52 × 10⁴ Hz / (A / m) ≈ 2.8 MHz / Oe (for g = 2). If CGS units are used, computations can be further simplified because the factor μ₀ can be dropped.
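As a numeric illustration, the sketch below evaluates the lossless tensor at sample values; the forms of μ and κ are the standard lossless Polder expressions given above, and the bias field and magnetization values are illustrative placeholders, not data for any particular ferrite.

```python
# Numeric sketch of the lossless Polder tensor components, assuming the
# standard forms mu = mu0*(1 + w0*wm/(w0**2 - w**2)) and
# kappa = mu0*w*wm/(w0**2 - w**2), with w0 = gamma*H0 and wm = gamma*M
# (gamma in (rad/s)/(A/m), i.e. with mu0 already absorbed).
import numpy as np

MU0 = 4e-7 * np.pi                     # H/m

def polder_tensor(f_hz, H0_A_per_m, M_A_per_m, g=2.0):
    gamma = (g / 2.0) * 2.21e5         # effective gyromagnetic ratio
    w  = 2 * np.pi * f_hz              # signal frequency (rad/s)
    w0 = gamma * H0_A_per_m            # precession (Larmor) frequency
    wm = gamma * M_A_per_m
    mu    = MU0 * (1 + w0 * wm / (w0**2 - w**2))
    kappa = MU0 * w * wm / (w0**2 - w**2)
    return np.array([[mu,          1j * kappa, 0],
                     [-1j * kappa, mu,         0],
                     [0,           0,          MU0]])

# Example: placeholder ferrite with M ~ 1.4e5 A/m, biased at H0 ~ 8e4 A/m,
# probed at 10 GHz (well above the ~2.8 GHz resonance for this bias)
print(polder_tensor(10e9, 8e4, 1.4e5))
```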
References
Ferrites
Tensor physical quantities
Ferromagnetic materials
Magnetic ordering | Polder tensor | Physics,Chemistry,Materials_science,Mathematics,Engineering | 272 |
46,253,875 | https://en.wikipedia.org/wiki/Engineering%20and%20Technology%20History%20Wiki | The Engineering and Technology History Wiki (ETHW) is a MediaWiki-based website dedicated to the history of technology. It started operating in 2015. It consists of articles, first-hand accounts, oral histories, landmarks and milestones.
A partnership between the United Engineering Foundation (UEF) and its member engineering organizations ASCE, AIME, AIChE, ASME, IEEE as well as the Society of Women Engineers is developing the ETHW as a central repository for the documentation, analysis and explanation of the history of technology.
Origins
In September 2008, the IEEE History Committee founded the IEEE Global History Network, which operated from 2008 to 2014. The ETHW became successor to the former IEEE Global History Network (IEEE GHN).
Originally, the United Engineering Foundation had made a grant to develop an engineering intersociety web platform as a central historical repository. Initially, the work was mainly done at the IEEE History Center, annexed to the Stevens Institute of Technology in Hoboken, NJ.
At the beginning, most content was related to electrical, electronics and computer engineering. As the fields of civil engineering, mining, metallurgical and petroleum engineering, chemical engineering and mechanical engineering are covered by members of the respective organizations now, ETHW is becoming a global record for preserving knowledge of the history of technological innovation in a broad sense. It differs from other online sources, as personal accounts of technical innovators as primary sources are made available to the public. After its start as a common platform for several engineering societies, the Society of Women Engineers and the Society of Petroleum Engineers have contributed new content as well.
As of 2018, ETHW included thousands of wiki entries, and recorded over 800 oral histories.
Comparison with Wikipedia
ETHW is a semi-open wiki. In contrast to Wikipedia, no anonymous writing is allowed. Members of the affiliated engineering organizations can contribute their own autobiographical professional history and experiences as First-hand History or are interviewed to provide their Oral History. Administrators review these postings to see whether they conform with the rules established. In contrast to ETHW encyclopedia articles and landmarks/milestones, such content is subjective and not peer reviewed. Changes can only be made by the original author. Not all content can be used freely by everybody.
Sources
External links
Engineering and Technology History Wiki
United Engineering Foundation
Engineering societies
Engineering organizations
American engineering organizations
Organizations established in 2015
MediaWiki websites | Engineering and Technology History Wiki | Engineering | 493 |
327,393 | https://en.wikipedia.org/wiki/Decagon | In geometry, a decagon (from the Greek δέκα déka and γωνία gonía, "ten angles") is a ten-sided polygon or 10-gon. The total sum of the interior angles of a simple decagon is 1440°.
Regular decagon
A regular decagon has all sides of equal length and each internal angle will always be equal to 144°. Its Schläfli symbol is {10} and can also be constructed as a truncated pentagon, t{5}, a quasiregular decagon alternating two types of edges.
Side length
The picture shows a regular decagon with side length a and radius R of the circumscribed circle.

The triangle E₁₀E₁M has two equally long legs with length R and a base with length a.

The circle around E₁ with radius a intersects the segment ME₁₀ in a point P (not designated in the picture).

Now the triangle E₁₀E₁P is an isosceles triangle with vertex E₁ and with base angles of 72°.

Therefore the angle at the vertex E₁ is 180° − 2 · 72° = 36°. So the angle ME₁P is 72° − 36° = 36°, and hence E₁MP is also an isosceles triangle with vertex P. The length of its legs is a, so the length of E₁₀P is R − a.

The isosceles triangles E₁₀E₁M and E₁₀PE₁ have equal angles of 36° at the vertex, and so they are similar, hence:

  R/a = a/(R − a).

Multiplication with the denominators leads to the quadratic equation:

  a² + aR − R² = 0.

This equation for the side length has one positive solution:

  a = (R/2)(√5 − 1) = R/φ,

where φ = (1 + √5)/2 is the golden ratio.

So the regular decagon can be constructed with ruler and compass.
Further conclusions

  R = (a/2)(√5 + 1) = a φ,

and the base height of the triangle E₁₀E₁M (i.e. the distance between the side E₁₀E₁ and the center M) is

  h = √(R² − a²/4),

and the triangle has the area:

  A△ = (a/2) · √(R² − a²/4).
Area
The area of a regular decagon of side length a is given by:

  A = (5/2) a² cot(π/10) = (5/2) a² √(5 + 2√5) ≈ 7.694 a².

In terms of the apothem r (see also inscribed figure), the area is:

  A = 10 tan(π/10) r² = 2 r² √(5(5 − 2√5)) ≈ 3.249 r².

In terms of the circumradius R, the area is:

  A = 5 R² sin(π/5) = (5/2) R² √((5 − √5)/2) ≈ 2.939 R².

An alternative formula is A = 2.5 d a, where d is the distance between parallel sides, or the height when the decagon stands on one side as base, or the diameter of the decagon's inscribed circle.

By simple trigonometry,

  d = 2a (cos 54° + cos 18°),

and it can be written algebraically as

  d = a √(5 + 2√5).
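These formulas are easy to confirm numerically. The sketch below checks the side length a = (R/2)(√5 − 1) against the quadratic of the construction, and compares the shoelace area of the ten circumscribed vertices with the closed forms above.

```python
# Numeric check of the regular-decagon relations: the side a = R*(sqrt(5)-1)/2
# solves a^2 + a*R - R^2 = 0, and the shoelace area of the 10 vertices matches
# both A = 2.5 * a^2 * sqrt(5 + 2*sqrt(5)) and A = 5 * R^2 * sin(pi/5).
import math

R = 1.0
a = R * (math.sqrt(5) - 1) / 2                        # side from circumradius
assert abs(a**2 + a * R - R**2) < 1e-12               # quadratic of the construction
assert abs(a - 2 * R * math.sin(math.pi / 10)) < 1e-12

# Vertices of the regular decagon on the circumcircle of radius R
verts = [(R * math.cos(2 * math.pi * k / 10),
          R * math.sin(2 * math.pi * k / 10)) for k in range(10)]

def shoelace(pts):
    """Polygon area by the shoelace formula."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2

area = shoelace(verts)
assert abs(area - 2.5 * a**2 * math.sqrt(5 + 2 * math.sqrt(5))) < 1e-12
assert abs(area - 5 * R**2 * math.sin(math.pi / 5)) < 1e-12
print(a, area)   # 0.618..., 2.938... for R = 1
```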
Construction
As 10 = 2 × 5, a power of two times a Fermat prime, it follows that a regular decagon is constructible using compass and straightedge, or by an edge-bisection of a regular pentagon.
An alternative (but similar) method is as follows:
Construct a pentagon in a circle by one of the methods shown in constructing a pentagon.
Extend a line from each vertex of the pentagon through the center of the circle to the opposite side of that same circle. Where each line cuts the circle is a vertex of the decagon. In other words, the image of a regular pentagon under a point reflection with respect of its center is a concentric congruent pentagon, and the two pentagons have in total the vertices of a concentric regular decagon.
The five corners of the pentagon constitute alternate corners of the decagon. Join these points to the adjacent new points to form the decagon.
The golden ratio in decagon
In both the construction with given circumcircle and the one with given side length, the golden ratio, dividing a line segment by exterior division, is the determining construction element.
In the construction with given circumcircle the circular arc around G with radius produces the segment , whose division corresponds to the golden ratio.
In the construction with given side length the circular arc around D with radius produces the segment , whose division corresponds to the golden ratio.
Symmetry
The regular decagon has Dih10 symmetry, order 20. There are 3 subgroup dihedral symmetries: Dih5, Dih2, and Dih1, and 4 cyclic group symmetries: Z10, Z5, Z2, and Z1.
These 8 symmetries can be seen in 10 distinct symmetries on the decagon, a larger number because the lines of reflections can either pass through vertices or edges. John Conway labels these by a letter and group order. Full symmetry of the regular form is r20 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g10 subgroup has no degrees of freedom but can be seen as directed edges.
The highest symmetry irregular decagons are d10, an isogonal decagon constructed by five mirrors which can alternate long and short edges, and p10, an isotoxal decagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular decagon.
Dissection
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms.
In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular decagon, m=5, and it can be divided into 10 rhombs, with examples shown below. This decomposition can be seen as 10 of 80 faces in a Petrie polygon projection plane of the 5-cube. A dissection is based on 10 of 30 faces of the rhombic triacontahedron. The list defines the number of solutions as 62, with 2 orientations for the first symmetric form, and 10 orientations for the other 6.
Skew decagon
A skew decagon is a skew polygon with 10 vertices and edges but not existing on the same plane. The interior of such a decagon is not generally defined. A skew zig-zag decagon has vertices alternating between two parallel planes.
A regular skew decagon is vertex-transitive with equal edge lengths. In 3-dimensions it will be a zig-zag skew decagon and can be seen in the vertices and side edges of a pentagonal antiprism, pentagrammic antiprism, and pentagrammic crossed-antiprism with the same D5d, [2+,10] symmetry, order 20.
These can also be seen in these four convex polyhedra with icosahedral symmetry. The polygons on the perimeter of these projections are regular skew decagons.
Petrie polygons
The regular skew decagon is the Petrie polygon for many higher-dimensional polytopes, shown in these orthogonal projections in various Coxeter planes: The number of sides in the Petrie polygon is equal to the Coxeter number, h, for each symmetry family.
See also
Decagonal number and centered decagonal number, figurate numbers modeled on the decagon
Decagram, a star polygon with the same vertex positions as the regular decagon
References
External links
Definition and properties of a decagon With interactive animation
10 (number)
Constructible polygons
Polygons by the number of sides
Elementary shapes | Decagon | Mathematics | 1,505 |
30,232,146 | https://en.wikipedia.org/wiki/Enprofylline | Enprofylline (3-propylxanthine) is a xanthine derivative used in the treatment of asthma, which acts as a bronchodilator. It acts primarily as a competitive nonselective phosphodiesterase inhibitor with relatively little activity as a nonselective adenosine receptor antagonist.
References
Adenosine receptor antagonists
Bronchodilators
Phosphodiesterase inhibitors
Xanthines
Propyl compounds | Enprofylline | Chemistry | 98 |
5,719,915 | https://en.wikipedia.org/wiki/Babulang | Babulang is the largest festival of the traditional Bisaya community of Limbang, Sarawak. The festival showcases various music, songs, dances, colourful traditional costumes, decorations and handicrafts.
The festival includes a Ratu Babulang competition, and Water buffalo races.
References
Cultural conventions
Ritual | Babulang | Biology | 60 |
5,082,809 | https://en.wikipedia.org/wiki/Mutual%20standardisation | Mutual Standardisation is a term used within spatial epidemiology to refer to when ecological bias results as a consequence of adjusting disease rates for confounding at the area level but leaving the exposure unadjusted and vice versa. This bias is prevented by adjusting in the same way both the exposure and disease rates. This adjustment is rarely possible as it requires data on within-area distribution of the exposure and confounder variables. (Elliot, 2001)
See also
Outline of public health
References
Elliott, P., J. C. Wakefield, N. G. Best and D. J. Briggs (eds.) 2001. Spatial Epidemiology: Methods and Applications. Oxford University Press, Oxford.
Epidemiology | Mutual standardisation | Environmental_science | 150 |
71,266,648 | https://en.wikipedia.org/wiki/KDU-414 | The KDU-414 (Russian Корректирующая Двигательная Установка, Corrective Propulsion Unit), is a pressure-fed liquid rocket Propulsion Unit developed and produced by the Isayev Design Bureau (today known as KhimMash). From 1960 onward, it powered several unmanned Soviet Spacecraft, including the first series of Molniya satellites, several Kosmos satellites as well as the space probes Mars 1, Venera 1, Zond 2 and Zond 3, featured as a part of standardized spacecraft buses known as KAUR-2, 2MV and 3MV.
The Corrective Propulsion Unit consists of a single chamber 'S5.19' liquid rocket engine and a conical thermal protection cowl containing the spherical propellant tank.
A barrier splits the tank into two separate compartments, filled with the propellant, UDMH, and the oxidizer, IRFNA, respectively. This combination of propellants is hypergolic, igniting on contact.
The rocket motor is supplied with fuel by pressurizing the tank using gaseous nitrogen, which doubles as a source of RCS propellant.
Elastic barriers within the tank prevent the nitrogen gas and propellant/oxidiser from mixing with each other.
A gimbal mount allows the engine to swivel along two axes.
In 1974, it was replaced with its derived successor, the KDU-414A with the S5.114 engine.
References
External links
KB KhimMash rocket engines
Rocket engines
Rocket engines of the Soviet Union
Rocket engines using hypergolic propellant
Rocket engines using the pressure-fed cycle
Spacecraft attitude control | KDU-414 | Technology | 377 |
63,591,386 | https://en.wikipedia.org/wiki/Olamkicept | Olamkicept, also known as soluble gp130Fc or sgp130Fc (other designations are FE 999301, FE301, TJ301) is an immunosuppressive drug candidate, which selectively blocks activities of the cytokine Interleukin-6, which are mediated by the soluble Interleukin-6. Interleukin-6 is a cytokine, which plays a dominant role in the regulation of the immune response and also in autoimmunity. Furthermore, Interleukin-6 has been demonstrated to be involved in the regulation of metabolism and body weight. Interleukin-6 also has many activities on neural cells. The biochemical principle was invented by the German biochemist Stefan Rose-John and it was further developed into a biotech compound by the Conaris Research Institute AG, which gave an exclusive world-wide license to the Swiss-based biopharmaceutical company Ferring Pharmaceuticals. In December 2016, Ferring and the biotech company I-MAB signed a licensing agreement granting I-MAB exclusive rights in Asia to Olamkicept for the treatment of autoimmune disease.
Mechanism of action
On cells, interleukin-6 binds to an Interleukin-6 receptor, which, however, does not itself signal. The complex of Interleukin-6 and the Interleukin-6 receptor binds to a second receptor protein, gp130, which thereupon dimerizes and initiates intracellular signaling. The gp130 receptor is present on all cells of the human body, whereas the Interleukin-6 receptor is only expressed by some cells such as hepatocytes, epithelial cells and some leukocytes. Since Interleukin-6 exhibits only measurable affinity to the Interleukin-6 receptor but not to gp130, only cells which express the Interleukin-6 receptor can respond to Interleukin-6. It was found that the Interleukin-6 receptor can be cleaved from the cell membrane by the protease ADAM17, generating a soluble receptor. The soluble interleukin-6 receptor can still bind interleukin-6, and the complex of interleukin-6 and interleukin-6 receptor can bind to gp130 even on cells which do not express the membrane-bound interleukin-6 receptor. This mode of signaling has been named Interleukin-6 trans-signaling. The protein olamkicept consists of the extracellular portion of gp130 fused (and thereby dimerized) to the constant portion of a human IgG1 antibody. Like membrane-bound gp130, the protein olamkicept does not bind Interleukin-6 alone but only the complex of interleukin-6 and soluble interleukin-6 receptor. Therefore, olamkicept only inhibits interleukin-6 trans-signaling but not interleukin-6 signaling via the membrane-bound interleukin-6 receptor. It has been shown that Interleukin-6 activities via the membrane-bound interleukin-6 receptor are regenerative and protect from bacterial infections, whereas interleukin-6 activities via the soluble interleukin-6 receptor are considered pro-inflammatory. Therefore, olamkicept only blocks the pro-inflammatory activities of the cytokine interleukin-6.
Research
In many animal disease models of human pathologies it was tested whether the specific blockade of interleukin-6 trans-signaling by the olamkicept protein was superior to a global blockade with an interleukin-6 or an interleukin-6 receptor neutralizing antibody. It turned out that the specific blockade of Interleukin-6 trans-signaling was superior to global Interleukin-6 blockade in models of e.g. sepsis, of acute lung injury after severe acute pancreatitis and of abdominal aortic aneurysm. Furthermore, it was shown that Interleukin-6 trans-signaling plays a dominant role in colon cancer and lung cancer.
Medical use
The olamkicept protein underwent phase I clinical studies in healthy volunteers and a small cohort of largely inactive patients with IBD in 2013/14. An open-label phase IIa study in patients with active inflammatory bowel disease was performed in Germany. A second placebo-controlled, phase II clinical trial in patients with ulcerative colitis was successfully completed in China, Taiwan and South Korea. The results of the German phase IIa clinical trial were published and demonstrated target engagement through olamkicept exposure over 12 weeks in patients with active inflammatory bowel disease. Most interestingly, some patients achieved complete remission while others showed a response. The molecular analysis revealed an olamkicept-specific signature in influencing disease pathophysiology. The results of the placebo-controlled trial in China/Taiwan/South Korea were released during the 2021 Digestive Disease Week (DDW) and the 2021 annual meeting of the European Crohn's and Colitis Organization (ECCO).
References
Recombinant proteins | Olamkicept | Biology | 1,088 |
4,278,455 | https://en.wikipedia.org/wiki/Vertex%20pipeline | The function of the vertex pipeline in any GPU is to take geometry data (usually supplied as vector points), work with it if needed with either fixed function processes (earlier DirectX), or a vertex shader program (later DirectX), and create all of the 3D data points in a scene to a 2D plane for display on a computer monitor.
It is possible to eliminate unneeded data from going through the rendering pipeline to cut out extraneous work (called view volume clipping and backface culling). After the vertex engine is done working with the geometry, all the 2D calculated data is sent to the pixel engine for further processing such as texturing and fragment shading.
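As an illustration of the transform chain described above, the following sketch carries a single model-space vertex through a perspective model-view-projection matrix, the perspective divide, and the viewport mapping. The matrix conventions (column vectors, OpenGL-style clip space) are one common choice, not the only one, and the values are illustrative.

```python
# Minimal sketch of what a (fixed-function) vertex pipeline does: transform a
# model-space vertex by a model-view-projection matrix, perspective-divide to
# normalized device coordinates, then map to pixel coordinates.
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([[f / aspect, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                     [0, 0, -1, 0]])

def vertex_to_screen(v_model, mvp, width, height):
    clip = mvp @ np.append(v_model, 1.0)        # model -> clip space
    ndc = clip[:3] / clip[3]                    # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width            # viewport transform
    y = (1 - (ndc[1] * 0.5 + 0.5)) * height     # flip y for raster coordinates
    return x, y, ndc[2]                         # keep depth for the z-buffer

mvp = perspective(60, 16 / 9, 0.1, 100.0)       # identity model-view for brevity
print(vertex_to_screen(np.array([0.5, 0.25, -5.0]), mvp, 1920, 1080))
```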
As of DirectX 9c, the vertex processor is able to do the following by programming the vertex processing under the Direct X API:
Displacement mapping
Geometry blending
Higher-order primitives
Point sprites
Matrix stacks
External links
Anandtech Article
3D computer graphics
Graphics standards | Vertex pipeline | Technology | 195 |
4,240,032 | https://en.wikipedia.org/wiki/Solenoid%20voltmeter | A solenoid voltmeter is a specific type of voltmeter electricians use to test electrical power circuits. It uses a solenoid coil to attract a spring-loaded plunger; the movement of the plunger is calibrated in terms of approximate voltage. It is more rugged than a D'arsonval movement, but neither as sensitive nor as precise.
Wiggy is the registered trademark for a common solenoid voltmeter used in North America derived from a device patent assigned to the Wigginton Company, US patent number 1,538,906.
Operation
Rather than using a D'Arsonval movement or digital electronics, the solenoid voltmeter simply uses a spring-loaded solenoid carrying a pointer (it might also be described as a form of moving iron meter). Greater voltage creates more magnetism pulling the solenoid's core in further against the spring loading, moving the pointer. A short scale converts the pointer's movement into the voltage reading. Solenoid voltmeters usually have a scale on each side of the pointer; one is calibrated for alternating current and one is calibrated for direct current. Only one "range" is provided and it usually extends from zero to about 600 volts.
A small permanent magnet rotor is usually mounted at the top of the meter. For DC, this magnet flips one way or the other, indicating by the exposed color (red or black) which lead is connected to positive. For AC, the rotor simply vibrates, indicating that the meter is connected to an AC circuit. Another form of tester uses a miniature neon lamp; the negative electrode glows, indicating polarity on DC circuits, or both electrodes glow, indicating AC.
Models made by some manufacturers include continuity test lights, which are energized by a battery within the tester. This is particularly advantageous when testing, for example, fuses in live circuits, since no switching is required to change from continuity mode to voltage detecting mode.
Comparison with moving coil meters
Solenoid voltmeters are extremely rugged and not very susceptible to damage through either rough handling or electrical overload, compared with more delicate but more precise instruments of the moving-coil D'Arsonval type.
For "go/no go" testing, there is no need to read the scale as application of AC power creates a perceivable vibration and sound within the meter. This feature makes the tester very handy in noisy, poorly illuminated, or very bright surroundings. The meter can be felt, the more it jumps the higher the voltage.
Solenoid voltmeters draw appreciable current in operation. When testing power supply circuits, a high-impedance connection (that is, a nearly open-circuit fault such as a burned switch contact or wire joint) in the power path might still allow enough voltage/current through to register on a high-impedance digital voltmeter, but it probably can't actuate the solenoid voltmeter. For use with high impedance circuit applications, however, they are not so good, as they draw appreciable current and therefore alter the voltage being measured. They can be used to test residual-current devices (GFCIs) because the current drawn trips most RCDs when the solenoid voltmeter is connected between the live and earth conductors.
Some manufacturers include a continuity test lamp function in a solenoid meter; these use the same probes as the voltage test function. This feature is useful when testing the status of contacts in energized circuits. The continuity light displays if the contact is closed, and the solenoid voltmeter shows voltage presence if open (and energized).
In contrast to multimeters, solenoid voltmeters have no other built-in functions (such as the ability to act as an ammeter, ohmmeter, or capacitance meter); they are just simple, easy-to-use power voltmeters. Solenoid voltmeters are useless on low-voltage circuits (for example, 12 volt circuits). The basic range of the voltmeter starts at around 90V (AC or DC).
Solenoid voltmeters are not precise. For example, there would be no reliably perceptible difference in the reading between 220 VAC and 240 VAC.
They are meant for intermittent operation. They draw a moderate amount of power from the circuit under test and can overheat if used for continuous monitoring.
The low impedance and low sensitivity of the tester may not show high-impedance connections to a voltage source, which can still source enough current to cause a shock hazard.
See also
Test light
Continuity tester
References
External links
All About Wiggy
Voltmeters
Electrical test equipment | Solenoid voltmeter | Physics,Technology,Engineering | 1,000 |
77,215,572 | https://en.wikipedia.org/wiki/NGC%204332 | NGC 4332 is a barred spiral galaxy and a starburst galaxy located 128 million light-years away in the constellation Draco. The galaxy was discovered by astronomer William Herschel on March 20, 1790. NGC 4332 is host to a supermassive black hole with an estimated mass of 9.5 solar masses.
NGC 4332 is a member of the NGC 4256 Group, and is located in a subgroup surrounding the galaxy NGC 4210. The NGC 4256 Group is located within the Canes Venatici-Camelopardalis Cloud, which lies in the First Upper Plane of the Virgo Supercluster.
SN 2009an
NGC 4332 has hosted one supernova, a Type Ia supernova designated as SN 2009an that had an apparent magnitude of 15.4. The supernova was discovered by Giancarlo Cortini and Stefano Antonellini with a 35-cm telescope at the Monte Maggiore Observatory in Predappio, Italy on February 27, 2009. It was independently discovered by Petri Kehusmaa of Hyvinkaa, Finland and Mikko Paivinen of Rajamaki, Finland on February 28, 2009, using a 28-cm Schmidt-Cassegrain reflector.
SN 2009an had an absolute magnitude of -18.841 ± 5 in the blue part of the spectrum. This makes it dimmer than a normal Type Ia supernova. Also, SN 2009an had a light curve that declined considerably faster than that of a normal Type Ia supernova. Additionally, the bolometric luminosity is estimated at 10^42.89 erg/s, implying that about 0.41 solar masses of nickel-56 were synthesized in the supernova. However, to account for additional flux lost from UV and NIR light bands, the estimate for the amount of nickel-56 thought to have formed in the supernova increases to 0.50 solar masses. Lastly, spectroscopic data show that SN 2009an has high-velocity features which are observed in the calcium triplet during its pre-maximum and early post-maximum phases. However, the post-maximum spectral evolution resembles a normal Type Ia supernova, with SN 2009an containing broad Si II 6355 Å lines and Si II 5972 Å lines that are stronger than in a normal Type Ia supernova.
These properties make SN 2009an most similar to another Type Ia supernova known as SN 2004eo. It is thought that supernovae like SN 2009an and SN 2004eo form from the explosion of a white dwarf with lower amounts of kinetic energy than a normal Type Ia supernova and produce more stable elements from the iron group of the periodic table such as iron, nickel and others. These types of Type Ia supernovae account for only 15% of all observed Type Ia supernovae, known as non-standard or transitional Type Ia events.
See also
List of NGC objects (4001–5000)
NGC 4513, a ring galaxy in the NGC 4256 Group
References
External links
40133
7453
Barred spiral galaxies
Starburst galaxies
Draco (constellation)
Virgo Supercluster
4332
Astronomical objects discovered in 1790 | NGC 4332 | Astronomy | 653 |
48,833,157 | https://en.wikipedia.org/wiki/Market%20for%20zero-day%20exploits | The market for zero-day exploits is commercial activity related to the trafficking of software exploits.
Software vulnerabilities and "exploits" are used to get remote access to both stored information and information generated in real time. When most people use the same software, as is the case in most countries today given the monopolistic nature of internet content and service providers, one specific vulnerability can be used against thousands if not millions of people. In this context, criminals have become interested in such vulnerabilities. A 2014 report from McAfee and the Center for Strategic and International Studies estimates that the cost of cybercrime and cyberespionage is somewhere around $160 billion per year. Worldwide, countries have appointed public institutions to deal with this issue, but their mandate is likely to conflict with their own government's interest in accessing people's information in order to prevent crime. As a result, both national security agencies and criminals hide certain software vulnerabilities from both users and the original developer. This type of vulnerability is known as a zero-day exploit.
Much has been said in academia and regular media about the regulation of zero-day exploits in the market. However, it is very difficult to reach a consensus because most definitions for zero-day exploits are rather vague or not applicable, as one can only define the use of certain software as malware after it has been used. In addition, there is a conflict of interest within the operations of the state that could prevent a regulation that can make mandatory the disclosure of zero-days. Governments face a trade-off between protecting their citizens' privacy through the reporting of vulnerabilities to private companies on one hand and undermining the communication technologies used by their targets—who also threaten the security of the public—on the other. The protection of national security through exploitation of software vulnerabilities unknown to both companies and the public is an ultimate resource for security agencies but also compromises the safety of every single user because any third party, including criminal organizations, could be making use of the same resource. Hence, only users and private firms have incentives to minimize the risks associated with zero-day exploits; the former to avoid an invasion of privacy and the latter to reduce the costs of data breaches. These include legal processes, costs related to the development of solutions to fix or "patch" the original vulnerability in the software and costs associated with the loss of confidence of clients in the product.
Description
Ablon, Libicki and Golay have explained to a great extent the inner workings of the zero-day market. The main findings can be separated into five components: Commodity, Currency, Marketplace, Supply and Demand. These components and their relationship with pricing will be described. The definition given to the demand component will also be challenged because it is paramount to understand the nature of the markets (i.e. white, gray and black) and its regulation or lack thereof.
Commodity
Exploits are digital products, which means that they are information goods with near-zero marginal production costs. However, they are atypical information goods. Unlike e-books or digital videos, they do not lose their value because they are easy to replicate but due to the fact that once they are exposed, the original developer will "patch" the vulnerability, decreasing the value of the commodity.
The value will not go to zero for two reasons: (1) the distribution of the patch is asymmetric and (2) developers could use the original bug to create a variant at a decreased cost. They are also atypical because they are time-sensitive commodities. Companies are updating their software on a regular basis and a patch is only useful during the interval between versions; sometimes a vulnerability can be corrected without any external report. Third, even in confidential transactions, the use of the exploit itself can create a dysfunction on the user end, exposing the vulnerability and leading to its loss of value. In this sense, exploits are non-excludable but they may or may not be non-rivalrous, as the more users use a zero-day exploit after a certain point, the more visible its impact will be to the company in question. This could result in the company being more likely to patch the exploit, resulting in a limited period of time where the exploit is available for all users.
Currency
In most cases, transactions are typically designed to protect the identity of at least one of the parties involved in the exchange. While this is dependent on the type of market—white markets can use traceable money—most purchases are made with stolen digital funds (credit cards) and cryptocurrencies. While the latter has been the dominant trend in the last few years, prices in the gray market are set in dollars, as shown by the leaks of Hacking Team's email archive.
Marketplace
Classically, black markets—like illegal weapons or narcotics—require a huge network of trusted parties to perform the transactions of deal-making, document forgery, financial transfers and illicit transport, among others. As it is very difficult to enforce any legal agreement within these networks, many criminal organizations recruit members close to home. This proximity element increases the cost of transaction as more intermediaries are required for transnational transactions, decreasing the overall profit of the original seller.
Zero-days, on the other hand, are virtual products and can be easily sold without intermediaries over the internet as available technologies are strong enough to provide anonymity at a very low cost. Even if there is a need for intermediaries, "unwitting data mules" can be used to avoid any evidence of wrongdoing. This is why the black market is so lucrative compared to gray markets. Gray markets, which involve transactions with public institutions in charge of national security, usually require the use of third parties to hide the traces of their transactions. The Hacking Team archive, for example, contains alleged contracts with the Ecuadorian National Secretariat of Intelligence where they used two intermediaries: Robotec and Theola. In the same archive, it is said that third-party companies Cicom and Robotec negotiated the contracts on behalf of the FBI and DEA respectively. It is less likely that white markets face the same problem as it is not in their interest to hide the transaction, it is quite the opposite because companies actively promote the use of their new patches.
Supply
The supply chain is complex and involves multiple actors organized by hierarchies, where administrators sit at the top, followed by the technical experts. Next are intermediaries, brokers and vendors, who may or may not be sophisticated, finally followed by witting mules. Within this chain of command, one can find multiple products. While zero-day exploits can be "found" or developed by subject matter experts only, other exploits can be easily commercialized by almost any person willing to enter the black market. There are two reasons for this. First, some devices use outdated or deprecated software and can be easily targeted by exploits that otherwise would be completely useless. Second, these "half-day exploits" can be used through graphical interfaces and learned through freely available tutorials, which means that very little expertise is required to enter the market as a seller.
The coexistence of zero-day and half-day markets influences the resilience of the black market, as developers keep moving towards the more sophisticated end. While take-downs on high organized crime has increased, the suppliers are easily replaced with people in lower levels of the pyramid. It can take less than a day to find a new provider after a take-down operation that can easily last months.
Getting to the top, however, requires personal connections and a good reputation, in this the digital black market is no different from the physical one. Half-day exploits are usually traded in more easily accessible places but zero-days often require "double-blind" auctions and the use of multiple layers of encryption to evade law enforcement. This can not be done in forums or boards, hence these transactions occur in extremely vetted spaces.
Demand
Who buys zero-day exploits defines the kind of market we are dealing with. Fidler differentiates between white, gray and black markets, following the market-sizing methodology from Harvard Business School as a guide.
White markets are those where the original developers reward security researchers for reporting vulnerabilities. On average, prices reported until 2014 were less than ten thousands of dollars but special offers up to $100,000 were made to certain vulnerabilities based on the type, criticality, and nature of the affected software. Fourteen percent of all Microsoft, Apple and Adobe vulnerabilities in the past ten years came through white market programs.
Criminals buy in the black market; however, governments can be occasional buyers if their offer can not be satisfied in the gray market or if they find impediments to acquire zero-days due to international regulations. Hacking Team states in their website that they "do not sell products to governments or to countries blacklisted by the U.S., EU, UN, NATO or ASEAN", although they have been found infringing their own policy. Prices are usually 10–100 times higher in this market when compared to the white market and this changes depending on the location of the buyer; The United States being the place where the best prices are offered. Potential sellers which are not allowed to sell in specific territories, like Cuba and North Korea in the case of the U.S., are likely to operate in the black market as well.
Gray market buyers include clients from the private sector, governments, and brokers who resell vulnerabilities. Information regarding these markets is only available through requests for confidential information from governments, where the price is usually redacted for safety purposes, and through information leaked from national security agencies and private companies (e.g., FinFisher and Hacking Team).
Tsyrklevich reported on the transactions made by Hacking Team. To date, this represents the best evidence available on the inner workings of the gray market, though it is likely that some of these procedures are applied in the white and black markets as well.
Controversies
Typically the parties opposed to gray markets are the retailers of the item in question, as such markets damage their profits and reputation. As a result, retailers usually pressure the original manufacturer to adjust the official channels of distribution. The state also plays an important role, enforcing penalties in the case of law infringement. However, the zero-day exploit market is atypical and the way it operates is closer to the workings of the black market. Brokers and bounty programs, which could be seen as retailers of zero-days, have no control whatsoever over the original producers of the "bad", as vulnerabilities are independently discovered by different, and often anonymous, actors. It is not in their interest to change the channel of distribution, as they can profit from both the white and gray markets while facing much less risk in the former.
States, which usually complement the labour of the original manufacturers to restrict gray markets, play a different role in the zero-day market as they are regular purchasers of exploits. Given the secretive nature of information security, it is not in their interest to disclose information on software vulnerabilities as their interest is, in this case, aligned with that of the criminals who seek to infiltrate devices and acquire information of specific targets. It can be argued that the presence of intelligence agencies as consumers of this "bad" could increase the price of zero-days even further as legitimate markets provide bargaining power to black-market sellers.
Finally, private companies are unwilling to raise their rewards to the levels reached in the gray and black markets, arguing that such prices are not sustainable for defensive markets. Previous studies have shown that reward programs are more cost-effective for private firms than hiring in-house security researchers, but if the price of rewards keeps increasing that might no longer be the case.
In 2015, Zerodium, a new start-up focused on the acquisition of "high-risk vulnerabilities", announced its new bounty program. It published the formats required for vulnerability submissions, the criteria used to determine prices (the popularity and complexity of the affected software, and the quality of the submitted exploit), and the prices themselves. This represents a mixture of the transparency offered by traditional vulnerability reward programs and the high rewards offered in the gray and black markets. Software developer companies perceived this new approach as a threat, primarily because very high bounties could cause developer and tester employees to leave their day jobs. Its effects on the market, however, are yet to be defined.
The NSA was criticized for buying up and stockpiling zero-day vulnerabilities, keeping them secret and developing mainly offensive capabilities instead of helping patch vulnerabilities.
See also
Bug bounty program
Cybercrime
Cybercrime countermeasures
Cyber-arms industry
Duqu
Mass surveillance industry
Stuxnet
Proactive cyber defense
References
Hacking (computer security)
Cybercrime
Darknet markets
Cyberwarfare
Cyberpunk themes
Retail markets
Mass surveillance
Information economy | Market for zero-day exploits | Technology | 2,674 |
16,794,181 | https://en.wikipedia.org/wiki/Chloride%20Group | Chloride is a global company that specializes in the design, production, and maintenance of industrial uninterruptible power supply (UPS) systems to ensure a reliable power supply for critical equipment across multiple industries. Formerly listed on the London Stock Exchange and a constituent of FTSE 250 index, the company has become privately-owned since 2021.
History
Chloride Group was founded in 1891 as The Chloride Electrical Syndicate Limited to manufacture batteries. Brand names used included Ajax, Exide, Dagenite, Kathanode, Shednought and Tudor.
In the 1970s, under its then managing director Sir Michael Edwardes it showcased the UK's first battery-powered buses.
In 1999, it diversified into secure power systems acquiring Oneac in the US, BOAR SA in Spain and Hytek in Australia. In 2000, it acquired the power protection division of Siemens in Germany and in 2001 it acquired Continuous Power International followed, in 2005, by Harath Engineering Services in the UK. In 2007, it acquired AST Electronique Services, a similar business in France.
In July 2009, the company announced the acquisition of a 90% stake in India's leading uninterruptible power supply company, DB Power Electronics.
In September 2010, Chloride Group was fully acquired by Emerson Electric (joining the Emerson Network Power platform) of the United States for US$1.5 billion.
In 2016, Emerson Network Power was acquired by Platinum Equity for US$4 billion. The business was rebranded under the name Vertiv, launching as a stand-alone business.
In 2021, Chloride became an independent privately held company as a result of the buy-out of the business division of Vertiv by its management team, supported by the private investment fund Innovafonds and the sovereign bank Bpifrance. The scope of the transaction comprised all industrial business activity globally, including the manufacturing site in France, all patents and intellectual property, the registered trademarks Chloride and AEES, as well as several regional assets. The product portfolio of industrial AC and DC UPS systems as well as the safety lighting portfolio were transferred in their entirety. On completion of this transaction the new group, with pro-forma sales of $90 million in 2021, restored its historical name, Chloride.
References
External links
Company website
Technology companies established in 1891
1891 establishments in England
Electrical equipment manufacturers
Privately held companies of France
Companies formerly listed on the London Stock Exchange
2010 mergers and acquisitions
2021 mergers and acquisitions | Chloride Group | Engineering | 487 |
298,509 | https://en.wikipedia.org/wiki/La%20Brea%20Tar%20Pits | La Brea Tar Pits is an active paleontological research site in urban Los Angeles. Hancock Park was formed around a group of tar pits where natural asphalt (also called asphaltum, bitumen, or pitch; brea in Spanish) has seeped up from the ground for tens of thousands of years. Over many centuries, the bones of trapped animals have been preserved. The George C. Page Museum is dedicated to researching the tar pits and displaying specimens from the animals that died there. La Brea Tar Pits is a registered National Natural Landmark.
Formation
Tar pits are composed of heavy oil fractions called gilsonite, which seep from the earth as oil. Crude oil seeps up along the 6th Street Fault from the Salt Lake Oil Field, which underlies much of the Fairfax District north of Hancock Park. The oil reaches the surface and forms pools, becoming asphalt as the lighter fractions of the petroleum biodegrade or evaporate. The asphalt then normally hardens into stubby mounds, which can be seen in several areas of the park.
This seepage has been happening for tens of thousands of years, during which the asphalt would sometimes form a deposit thick enough to trap animals. The deposit would then become covered over with water, dust, and leaves. Animals would wander in, become trapped, and die. Predators would enter to eat the trapped animals and would also become stuck, a phenomenon called a "predator trap". As the bones of a dead animal sank, the asphalt would soak into them, turning them dark-brown or black in color. Lighter fractions of petroleum evaporated from the asphalt, leaving a more solid substance which then encased the bones.
Dramatic fossils of large mammals have been extricated, and the asphalt also preserves microfossils: wood and plant remnants, rodent bones, insects, mollusks, dust, seeds, leaves, and pollen grains. Examples of these are on display in the George C. Page Museum. Radiometric dating of preserved wood and bones has given an age of 38,000 years for the oldest known material from the La Brea seeps.
History
The Chumash and Tongva people used tar from the pits to build plank boats, sealing together planks cut from California redwood trunks and pieces of driftwood from the Santa Barbara Channel; they used these boats to navigate the California coastline and Channel Islands.
The Portolá expedition, a group of Spanish explorers led by Gaspar de Portolá, made the first written record of the tar pits in 1769. Father Juan Crespí wrote,
While crossing the basin, the scouts reported having seen some geysers of tar issuing from the ground like springs; it boils up molten, and the water runs to one side and the tar to the other. The scouts reported that they had come across many of these springs and had seen large swamps of them, enough, they said, to caulk many vessels. We were not so lucky ourselves as to see these tar geysers, much though we wished it; as it was some distance out of the way we were to take, the Governor [Portolá] did not want us to go past them. We christened them Los Volcanes de Brea [the Tar Volcanoes].
Harrison Rogers, who accompanied Jedediah Smith on his 1826 expedition to California, was shown a piece of the solidified asphalt while at Mission San Gabriel, and noted in his journal, "The Citizens of the Country make great use of it to pitch the roofs of their houses".
The La Brea Tar Pits and Hancock Park were formerly part of the Mexican land grant of Rancho La Brea. For some years, tar-covered bones were found on the property but were not initially recognized as fossils, because the ranch had lost various animals (including horses, cattle, dogs, and even camels) whose bones closely resemble several of the fossil species; the bones in the pits were at first mistaken for the remains of pronghorn or cattle that had become mired. The original Rancho La Brea land grant stipulated that the tar pits be open to the public for the use of the local Pueblo.
There were originally more than 100 separate pits of tar (or asphaltum) but most of those have been filled in with rock or dirt since settlement, leaving about a dozen accessible from ground level.
In 1886, the first excavation for land pitch in the village of La Brea was undertaken by Messrs Turnbull, Stewart & Co.
Excavations
Union Oil geologist W. W. Orcutt is credited, in 1901, with first recognizing that fossilized prehistoric animal bones were preserved in pools of asphalt on the Hancock ranch. In commemoration of Orcutt's initial discovery, paleontologists named the La Brea coyote (Canis latrans orcutti) in his honor. John C. Merriam of the University of California, Berkeley led much of the original work in this area early in the 20th century. Contemporary excavations of the bones started in 1913.
In the 1940s and 1950s, public excitement was generated by the preparation of previously recovered large mammal bones. A subsequent study demonstrated the fossil vertebrate material was well preserved, with little evidence of bacterial degradation of bone protein. They are believed to be some 10–20,000 years old, dating from the Last Glacial Period.
On February 18, 2009, George C. Page Museum announced the 2006 discovery of 16 fossil deposits that had been removed from the ground during the construction of an underground parking garage for the Los Angeles County Museum of Art next to the tar pits. Among the finds are remains of a saber-toothed cat, dire wolves, bison, horses, a giant ground sloth, turtles, snails, clams, millipedes, fish, gophers, and an American lion. Also discovered is a nearly intact mammoth skeleton, nicknamed Zed; the only pieces missing are a rear leg, a vertebra, and the top of its skull, which was sheared off by construction equipment in preparation to build the parking structure. These fossils were packaged in boxes at the construction site and moved to a compound behind Pit 91, on Page Museum property, so that construction could continue. Twenty-three large accumulations of tar and specimens were taken to the Page Museum. These deposits are worked on under the name "Project 23".
As the public transit D Line is extended, museum researchers know more tar pits will be uncovered, for example near the intersection of Wilshire and Curson. In an exploratory subway dig in 2014 on the Miracle Mile, prehistoric objects unearthed included geoducks, sand dollars, and remains of a pine tree of a type now found in Central California's woodlands.
George C. Page Museum
In 1913, George Allan Hancock, the owner of Rancho La Brea, granted the Natural History Museum of Los Angeles County exclusive excavation rights at the tar pits for two years. In those two years, the museum was able to extract 750,000 specimens from 96 sites, guaranteeing that a large collection of fossils would remain consolidated and available to the community. Then in 1924, Hancock donated the land to Los Angeles County with the stipulation that the county provide for the preservation of the park and the exhibition of fossils found there.
The George C. Page Museum of La Brea Discoveries, part of the Natural History Museum of Los Angeles County, was built next to the tar pits in Hancock Park on Wilshire Boulevard. It was named for a local philanthropist. Construction began in 1975, and the museum opened to the public in 1977. The area is part of urban Los Angeles in the Miracle Mile District.
The museum tells the story of the tar pits and presents specimens excavated from them. Visitors can walk around the park and see the tar pits. On the grounds of the park are life-sized models of prehistoric animals in or near the tar pits. Of more than 100 pits, only Pit 91 is still regularly excavated by researchers and can be seen at the Pit 91 viewing station. In addition to Pit 91, the one other ongoing excavation is called "Project 23". Paleontologists supervise and direct the work of volunteers at both sites.
As a result of a design competition in 2019, the Natural History Museum of Los Angeles County chose Weiss/Manfredi over Dorte Mandrup and Diller Scofidio + Renfro to redesign the park, including by adding a pedestrian walkway framing Lake Pitt.
The museum is featured prominently in the 1992 cult classic film Encino Man, in which the title character, while exploring the museum's exhibits, recollects his earlier life as a caveman.
Heritage site
In respect of it being the "richest paleontological site on Earth for terrestrial fossils of late Quaternary age," the International Union of Geological Sciences (IUGS) included the "Late Quaternary asphalt seeps and paleontological site of La Brea Tar Pits" in its assemblage of 100 geological heritage sites around the world in a listing published in October 2022. The organization defines an IUGS Geological Heritage Site as "a key place with geological elements and/or processes of international scientific relevance, used as a reference, and/or with a substantial contribution to the development of geological sciences through history."
Flora and fauna
Among the prehistoric Pleistocene species associated with the La Brea Tar Pits are Columbian mammoths, dire wolves, short-faced bears, American lions, ground sloths (predominantly Paramylodon harlani, with much rarer Megalonyx jeffersonii and Nothrotheriops shastensis) and the state fossil of California, the saber-toothed cat (Smilodon fatalis). Contrary to popular belief, the tar pits contain no dinosaur remains: dinosaurs had been extinct for tens of millions of years before the pits formed.
The park is known for producing myriad mammal fossils dating from the Wisconsin glaciation. While mammal fossils generate significant interest, other fossils including fossilized insects and plants, and even pollen grains, are also valued. These fossils help define a picture of what is thought to have been a cooler, moister climate in the Los Angeles basin during the glacial age. Microfossils are retrieved from the matrix of asphalt and sandy clay by washing with a solvent to remove the petroleum, then picking through the remains under a high-powered lens.
Historically, the majority of the mammals excavated from the La Brea deposits had been large carnivores, supporting a hypothesized "carnivore trap" in which large herbivores entrapped in asphalt attracted predators and scavengers which then became entrapped while trying to steal a quick meal. However, new research with an eye towards microfossils has revealed a stunning diversity and abundance of many types of mammals. According to paleontologist Thomas Halliday, "Rancho La Brea Tar Pits... where big herbivores typically get stuck in tar which naturally seeps from the ground, and as a result, you get huge concentrations of just specifically herbivores. You get a herbivorous sample of the ecosystem and very few carnivores, except those that are trying to scavenge on the already dead carcasses that have just got stuck in the tar."
Bacteria
Methane gas escapes from the tar pits, causing bubbles that make the asphalt appear to boil. Asphalt and methane appear under surrounding buildings and require special operations for removal to prevent the weakening of building foundations. In 2007, researchers from UC Riverside discovered that the bubbles were caused by hardy forms of bacteria embedded in the natural asphalt. After consuming petroleum, the bacteria release methane. Around 200 to 300 species of bacteria were newly discovered here.
Human presence
Only one human has been found: a partial skeleton of La Brea Woman, dated to around 10,000 calendar years (about 9,000 radiocarbon years) BP, who was 17 to 25 years old at death. She was found associated with the remains of a domestic dog and so was interpreted to have been ceremonially interred. In 2016, however, the dog was determined to be much younger in date.
Also, some even older fossils showed possible tool marks, indicating humans active in the area at the time. Bones of saber-toothed cats from La Brea showing signs of "artificial" cut marks at oblique angles to the long axis of each bone were radiocarbon dated to 15,200 ± 800 BP (uncalibrated). If these cuts are in fact tool marks resultant from butchering activities, then this material would provide the earliest solid evidence for human association with the Los Angeles Basin. Yet it is also possible that there was some residual contamination of the material as a result of saturation by asphaltum, influencing the radiocarbon dates.
Gallery
See also
Binagadi asphalt lake
Carpinteria Tar Pits
Lagerstätten
Lake Bermudez
List of fossil sites
Los Angeles County Museum of Art
McKittrick Tar Pits
Pitch Lake
Volcano, disaster film involving a volcano that forms from the La Brea Tar Pits.
References
External links
Page Museum – La Brea Tar Pits
UCMP Berkeley website: describes the geology and paleontology of the asphalt seeps.
Gocalifornia.com: La Brea Tar Pits – visitor guide.
Palaeo.uk: "Setting the La Brea site in context."
NHM.org: Pit 91 excavations
Asphalt lakes
Fossil parks in the United States
Parks in Los Angeles
Museums in Los Angeles
Natural history museums in California
Natural history of Los Angeles County, California
Paleontology in California
Pleistocene paleontological sites of North America
California Historical Landmarks
Landmarks in Los Angeles
National Natural Landmarks in California
Lagerstätten
Petroleum in California
Pleistocene California
Articles containing video clips
1901 in paleontology
1964 in paleontology
20th century in Los Angeles
Environment of Greater Los Angeles
Mid-Wilshire, Los Angeles
Wilshire Boulevard
First 100 IUGS Geological Heritage Sites | La Brea Tar Pits | Chemistry | 2,819 |
931,106 | https://en.wikipedia.org/wiki/CTIA%20and%20GTIA | Color Television Interface Adaptor (CTIA) and its successor Graphic Television Interface Adaptor (GTIA) are custom chips used in the Atari 8-bit computers and Atari 5200 home video game console. In these systems, a CTIA or GTIA chip works together with ANTIC to produce the video display. ANTIC generates the playfield graphics (text and bitmap) while CTIA/GTIA provides the color for the playfield and adds overlay objects known as player/missile graphics (sprites). Under the direction of Jay Miner, the CTIA/GTIA chips were designed by George McLeod with technical assistance of Steve Smith.
Color Television Interface Adaptor and Graphic Television Interface Adaptor are names of the chips as stated in the Atari field service manual. Various publications named the chips differently, sometimes using the alternative spelling Adapter or Graphics, or claiming that the "C" in "CTIA" stands for Colleen/Candy and "G" in "GTIA" is for George.
History
2600 and TIA
Atari had built their first display driver chip, the Television Interface Adaptor, universally referred to as the TIA, as part of the Atari 2600 console. The TIA display logically consisted of two primary sets of objects: the "players" and "missiles" that represented moving objects, and the "playfield" which represented the static background image on which the action took place. The chip used data in memory registers to produce digital signals that were converted in real time via a digital-to-analog converter and RF modulator to produce a television display.
The conventional way to draw the playfield is to use a bitmap held in a frame buffer, in which each memory location in the frame buffer represents one or more locations on the screen. In the case of the 2600, which normally used a resolution of 160x192 pixels, a frame buffer would need to have at least 160x192/8 = 3840 bytes of memory. Built in an era where RAM was very expensive, the TIA could not afford this solution.
Instead, the system implemented a display system that used a single 20-bit memory register that could be copied or mirrored on the right half of the screen to make what was effectively a 40-bit display. Each location could be displayed in one of four colors, from a palette of 128 possible colors. The TIA also included several other display objects, the "players" and "missiles". These consisted of two 8-bit wide objects known as "players", a single 1-bit object known as the "ball", and two 1-bit "missiles". All of these objects could be moved to arbitrary horizontal locations via settings in other registers.
The key to the TIA system, and the 2600's low price, was that the system implemented only enough memory to draw a single line of the display, all of which held in registers. To draw an entire screen full of data, the user code would wait until the television display reached the right side of the screen and update the registers for the playfield and player/missiles to correctly reflect the next line on the display. This scheme drew the screen line-by-line from program code on the ROM cartridge, a technique known as "racing the beam".
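The following C-style sketch only illustrates this control flow; a real 2600 display kernel is cycle-counted 6502 assembly, and the register names (WSYNC, PF0-PF2) are assumptions taken from the standard 2600 memory map rather than from this article.
#include <stdint.h>
#define WSYNC (*(volatile uint8_t *)0x02)  /* any write halts the CPU until the next scanline */
#define PF0   (*(volatile uint8_t *)0x0D)  /* the three playfield pattern registers */
#define PF1   (*(volatile uint8_t *)0x0E)
#define PF2   (*(volatile uint8_t *)0x0F)
/* "Racing the beam": rewrite the playfield registers once per scanline,
   so a register set only wide enough for one line can draw a full image. */
void kernel(const uint8_t pf[][3], int lines) {
    for (int y = 0; y < lines; y++) {
        WSYNC = 0;          /* wait for the start of the next line */
        PF0 = pf[y][0];     /* update the playfield for this line */
        PF1 = pf[y][1];
        PF2 = pf[y][2];
    }
}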
CTIA
Atari initially estimated that the 2600 would have a short market lifetime of three years when it was designed in 1976, which meant the company would need a new design by 1979. Initially this new design was simply an updated 2600-like game console built around a similar basic design. Work on what would become the CTIA started in 1977 and aimed at delivering a system with twice the resolution and twice the number of colors. Moreover, by varying the number of colors in the playfield, much higher resolutions up to 320 pixels horizontally could be supported. Players and missiles were also updated, including four 8-bit players and four 2-bit missiles, while also allowing an additional mode to combine the four missiles into a fifth player.
Shortly after design began, the home computer revolution started in earnest in the later half of 1977. In response, Atari decided to release two versions of the new machine: a low-end model as a games console, and a high-end version as a home computer. In either role, a more complex playfield would be needed, especially support for character graphics in the computer role. Design of the CTIA was well advanced at this point, so instead of a redesign a clever solution was provided by adding a second chip that would effectively automate the process of racing the beam. Instead of the user's program updating the CTIA's registers based on its interrupt timing, the new ANTIC would handle this chore, reading data from a framebuffer and feeding it to the CTIA on the fly.
As a result of these changes, the new chips provide a greatly expanded number and selection of graphics modes over the TIA. Instead of a single playfield mode with 20 or 40 bits of resolution, the CTIA/ANTIC pair can display six text modes and eight graphics modes with various resolutions and color depths, allowing the programmer to choose a balance between resolution, colors, and memory use for their display.
CTIA vs. GTIA
The original design of the CTIA chip also included three additional color interpretations of the normal graphics modes. This feature reinterprets ANTIC's high-resolution graphics modes (1 bit per pixel, 2 colors, with pixels one-half color clock wide) as modes with up to 4 bits per pixel, up to 16 colors, and pixels two color clocks wide. This feature was ready before the computers' November 1979 debut, but was delayed so much in the development cycle that Atari had already ordered a batch of about 100,000 CTIA chips with the graphics modes missing. Not wanting to throw away the already-produced chips, the company decided to use them in the initial release of the Atari 400 and 800 models in the US market. The CTIA-equipped computers, lacking the 3 extra color modes, were shipped until October–November 1981. From this point, all new Atari units were equipped with the new chip, now called GTIA, that supported the new color interpretation modes.
The original Atari 800/400 operating system supported the GTIA alternate color interpretation modes from the start, which allowed for easy replacement of the CTIA with the GTIA once it was ready. Atari authorized service centers would install a GTIA chip in CTIA-equipped computers free of charge if the computer was under warranty; otherwise the replacement would cost $62.52.
GTIA was also mounted in all later Atari XL and XE computers and Atari 5200 consoles.
Features
The list below describes CTIA/GTIA's inherent hardware capabilities meaning the intended functionality of the hardware itself, not including results achieved by CPU-serviced interrupts or display kernels driving frequent register changes.
CTIA/GTIA is a television interface device with the following features:
Interprets the Playfield graphics data stream from ANTIC to apply color to the display.
Merges four Player and four Missile overlay objects (aka sprites) with ANTIC's Playfield graphics. Player/Missile features include:
Player/Missile pixel positioning is independent of the Playfield:
Player/Missile objects function normally in the vertical and horizontal overscan areas beyond the displayed Playfield.
Player/Missile objects function normally without an ANTIC Playfield.
Eight-bit wide Player objects and two-bit wide Missile objects where each bit represents one displayed pixel.
Variable pixel width (1, 2, or 4 color clocks wide)
Each Player/Missile object is vertically the height of the entire screen.
Variable pixel height when the data is supplied by ANTIC DMA (single or double scan lines per data)
Ability to independently shift each P/M object by one scan line vertically when operating on double scan lines per data.
Each Player and its associated Missile has a dedicated color register separate from the Playfield colors.
Multiple priority schemes for the order of graphics layers (P/M Graphics vs playfield)
Color merging between Players and Playfield producing extra colors.
Color merging between pairs of Players producing multi-color Players.
Missiles can be grouped together into a Fifth Player that uses a separate color register.
Collision detection between Players, Missiles, and Playfield graphics.
There are no fixed colors for normal (CTIA) color interpretation mode. All colors are generated via indirection through nine color registers. (Four for Player/Missile graphics, four for the Playfield, and one shared between the Playfield and the Fifth Player feature.)
Normal color interpretation mode provides choice of colors from a 128 color palette (16 colors with 8 luminance values for each)
A GTIA color interpretation mode can generate 16 luminances per color providing a 256 color palette.
The GTIA version of the chip adds three alternate color interpretation modes for the Playfield graphics.
16 shades of a single hue from the 16 possible hues in the Atari palette. This is accessible in Atari BASIC as Graphics 9.
15 hues in a single shade/luminance value plus background. This is accessible in Atari BASIC as Graphics 11.
9 colors in any hue and luminance from the palette accomplished using all the Player/Missile and Playfield color registers for the Playfield colors. This is accessible in Atari BASIC as Graphics 10.
Reads the state of the joystick triggers (bottom buttons only for the Atari 5200 controllers).
It includes four input/output pins that are used in different ways depending on the system:
In Atari 8-bit computers, three of the pins are used to read the state of the console keys (Start/Select/Option); see the sketch following this list.
The fourth pin controls the speaker built into the Atari 400/800 to generate keyboard clicks. On later models there is no speaker, but the key click is still generated by GTIA and mixed with the regular audio output.
In the Atari 5200, the pins are used as part of the process to read the controller keyboards.
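A minimal C sketch of both computer-side uses, assuming the standard Atari 8-bit memory map in which these pins surface as the CONSOL register at $D01F (an address not listed in this article); bits 0-2 read Start, Select, and Option (active low) and bit 3 drives the 400/800 speaker.
#include <stdint.h>
#define CONSOL (*(volatile uint8_t *)0xD01F)  /* assumed standard address */
int start_pressed(void)  { return !(CONSOL & 0x01); }  /* bit reads 0 while held */
int select_pressed(void) { return !(CONSOL & 0x02); }
int option_pressed(void) { return !(CONSOL & 0x04); }
void click(void) {
    /* Toggling bit 3 moves the 400/800 internal speaker cone once;
       the OS normally rewrites this bit every vertical blank. */
    CONSOL = 0x00;
    CONSOL = 0x08;
}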
Versions
by part number
C012295 — NTSC CTIA
C014805 — NTSC GTIA
C014889 — PAL GTIA
C020120 — French SECAM GTIA (FGTIA)
Atari, Inc. intended to combine the functions of the ANTIC and GTIA chips in one integrated circuit to reduce production costs of Atari computers and 5200 consoles. Two such prototype circuits were developed, but neither entered production.
C020577 — CGIA
C021737 — KERI
Pinout
Registers
The Atari 8-bit computers map CTIA/GTIA to the $D0xx page and the Atari 5200 console maps it to the $C0xx page.
CTIA/GTIA provides 54 Read/Write registers controlling Player/Missile graphics, Playfield colors, joystick triggers, and console keys. Many CTIA/GTIA register addresses have dual purposes performing different functions as a Read vs a Write register. Therefore, no code should read Hardware registers expecting to retrieve the previously written value.
This problem is solved for many write registers by Operating System Shadow registers implemented in regular RAM as places to store the last value written to registers. Operating System Shadow registers are copied from RAM to the hardware registers during the vertical blank. Therefore, any write to hardware registers which have corresponding shadow registers will be overwritten by the value of the Shadow registers during the next vertical blank.
Some Write registers do not have corresponding Shadow registers. They can be safely written by an application without the value being overwritten during the vertical blank. If the application needs to know the last state of the register then it is the responsibility of the application to remember what it wrote.
Operating System Shadow registers also exist for some Read registers where reading the value directly from hardware at an unknown stage in the display cycle may return inconsistent results.
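A minimal sketch of this convention, using COLPF2 ($D018) and its OS shadow COLOR2 ($02C6) as documented later in this article:
#include <stdint.h>
#define COLPF2 (*(volatile uint8_t *)0xD018)  /* hardware register */
#define COLOR2 (*(volatile uint8_t *)0x02C6)  /* OS shadow in RAM */
void set_playfield2(uint8_t c) {
    /* Persistent: the OS copies the shadow into COLPF2 every vertical blank. */
    COLOR2 = c;
}
void set_playfield2_immediate(uint8_t c) {
    /* Takes effect at once, but the shadow value overwrites it
       during the next vertical blank. */
    COLPF2 = c;
}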
Player/Missile Horizontal Coordinates
These registers specify the horizontal position in color clocks of the left edge (the high bit of the GRAF* byte patterns) of Player/Missile objects. Coordinates are always based on the display hardware's color clock engine, NOT simply the current Playfield display mode. This also means Player/Missile objects can be moved into overscan areas beyond the current Playfield mode.
Note that while Missile objects bit patterns share the same byte for displayed pixels (GRAFM) each Missile can be independently positioned. When the "fifth Player" option is enabled (See PRIOR/GPRIOR register) turning the four Missiles into one "Player" the Missiles switch from displaying the color of the associated Player object to displaying the value of COLPF3. The new "Player's" position on screen must be set by specifying the position of each Missile individually.
Player/Missile pixels are only rendered within the visible portions of the GTIA's pixel engine. Player/Missile objects are not rendered during the horizontal blank or the vertical blank. However, an object can be partially within the horizontal blank. The objects' pixels that fall outside of the horizontal blank are then within the visible portion of the display and can still register collisions. The horizontal position range of visible color clocks is $22 (34 decimal) to $DD (221 decimal).
To remove a Player/Missile object from the visible display area, use horizontal position 0 (off the left) or $DE (222 decimal) or greater (off the right); these positions ensure no pixels are rendered regardless of the size of the Player/Missile object, so no unintentional collisions can be flagged.
HPOSP0 $D000 Write
Horizontal Position of Player 0
HPOSP1 $D001 Write
Horizontal Position of Player 1
HPOSP2 $D002 Write
Horizontal Position of Player 2
HPOSP3 $D003 Write
Horizontal Position of Player 3
HPOSM0 $D004 Write
Horizontal Position of Missile 0
HPOSM1 $D005 Write
Horizontal Position of Missile 1
HPOSM2 $D006 Write
Horizontal Position of Missile 2
HPOSM3 $D007 Write
Horizontal Position of Missile 3
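As a sketch, centering a normal-width Player 0 on the normal 160-color-clock playfield (whose left edge is at color clock 48 in standard Atari documentation; see the edge coordinates below):
#include <stdint.h>
#define HPOSP0 (*(volatile uint8_t *)0xD000)
void center_player0(void) {
    /* Playfield center is 48 + 80 = 128; back up half of the
       player's 8-color-clock normal width to center the pattern. */
    HPOSP0 = 48 + 80 - 4;   /* = 124 ($7C) */
}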
Below are the color clock coordinates of the left and right edges of the possible Playfield sizes, useful when aligning Player/Missile objects to Playfield components:
Narrow Playfield (128 color clocks) - left edge $40 (64 decimal), right edge $BF (191 decimal)
Normal Playfield (160 color clocks) - left edge $30 (48 decimal), right edge $CF (207 decimal)
Wide Playfield (192 color clocks) - left edge $20 (32 decimal), right edge $DF (223 decimal)
Player/Missile Size Control
Three sizes can be chosen: Normal, Double, and Quad width. The left edge (See Horizontal Coordinates) is fixed and the size adjustment expands the Player or Missile toward the right in all cases.
Normal - 1 bit (pixel) is 1 color clock wide
Double - 1 bit (pixel) is 2 color clocks wide
Quad - 1 bit (pixel) is 4 color clocks wide
Note that in Quad size a single Player/Missile pixel is the same width as an ANTIC Mode 2 text character. Player/Missile priority selection mixed with Quad width Player/Missile graphics can be used to create multiple text colors per Mode line.
Each Player has its own size control register:
SIZEP0 $D008 Write
Size of Player 0
SIZEP1 $D009 Write
Size of Player 1
SIZEP2 $D00A Write
Size of Player 2
SIZEP3 $D00B Write
Size of Player 3
Player size controls:
Values (bits 1-0 of each Player size register):
00 = Normal width (each Player bit is 1 color clock wide)
01 = Double width (2 color clocks per bit)
10 = Normal width
11 = Quad width (4 color clocks per bit)
SIZEM $D00C Write
All Missile sizes are controlled by one register, but each Missile can be sized independently of the others. When the "fifth Player" option is enabled (See PRIOR/GPRIOR register) turning the four Missiles into one "Player" the width is still set by specifying the size for each Missile individually.
Values: bits 1-0 size Missile 0, bits 3-2 Missile 1, bits 5-4 Missile 2, and bits 7-6 Missile 3. Each two-bit field uses the same encoding as the Player sizes (00/10 = normal, 01 = double, 11 = quad).
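A short sketch combining the size registers under the encodings above:
#include <stdint.h>
#define SIZEP0 (*(volatile uint8_t *)0xD008)
#define SIZEM  (*(volatile uint8_t *)0xD00C)
void set_sizes(void) {
    SIZEP0 = 3;        /* 11 = quad: each Player 0 bit spans 4 color clocks */
    SIZEM  = 1 << 2;   /* bits 3-2 = 01: Missile 1 double, others normal */
}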
Player/Missile Graphics Patterns
Each Player object has its own 8-bit pattern register. Missile objects share one register with 2 bits per each Missile. Once a value is set it will continue to be displayed on each scan line. With no other intervention by CPU or ANTIC DMA to update the values the result is vertical stripe patterns the height of the screen including overscan areas. This mode of operation does not incur a CPU or DMA toll on the computer. It is useful for displaying alternate colored borders and vertical lines separating screen regions.
GRAFP0 $D00D Write
Graphics pattern for Player 0
GRAFP1 $D00E Write
Graphics pattern for Player 1
GRAFP2 $D00F Write
Graphics pattern for Player 2
GRAFP3 $D010 Write
Graphics pattern for Player 3
Each Player is 8 bits (pixels) wide. Where a bit is set, a pixel is displayed in the color assigned to the color register associated to the Player. Where a bit is not set the Player object is transparent, showing Players, Missiles, Playfield pixels, or the background color. Pixel output begins at the horizontal position specified by the Player's HPOS value with the highest bit output first.
GRAFM $D011 Write
Graphics pattern for all Missiles
Each Missile is 2 bits (pixels) wide. Where a bit is set, a pixel is displayed in the color assigned to the color register for the Player associated to the Missile. When Fifth Player is enabled (see PRIOR/GPRIOR) the Missiles pixels all display COLPF3. Where a bit is not set the Missile object is transparent, showing Players, Missiles, Playfield pixels, or the background color. Pixel output begins at the horizontal position specified by the Missile's HPOS value with the highest bit output first.
Missile Values: GRAFM bits 1-0 are Missile 0's two pixels, bits 3-2 Missile 1's, bits 5-4 Missile 2's, and bits 7-6 Missile 3's; within each pair the higher bit is output first.
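As described above, a pattern register write with no further CPU or DMA service produces a full-height stripe. A minimal sketch using registers documented in this article:
#include <stdint.h>
#define HPOSP0  (*(volatile uint8_t *)0xD000)
#define SIZEP0  (*(volatile uint8_t *)0xD008)
#define GRAFP0  (*(volatile uint8_t *)0xD00D)
#define PCOLOR0 (*(volatile uint8_t *)0x02C0)  /* OS shadow of COLPM0 */
void vertical_stripe(void) {
    PCOLOR0 = 0x34;   /* hue 3, luminance 4 */
    SIZEP0  = 0;      /* normal width */
    HPOSP0  = 48;     /* left edge of the normal playfield */
    GRAFP0  = 0xFF;   /* all 8 bits set: a solid 8-color-clock bar,
                         repeated on every scan line with no DMA */
}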
Player/Missile Collisions
CTIA/GTIA has 60 bits providing automatic detection of collisions when Player, Missile, and Playfield pixels intersect. A single bit indicates a non-zero pixel of the Player/Missile object has intersected a pixel of a specific color register. There is no collision registered for pixels rendered using the background color register/value. This system provides instant, pixel-perfect overlap comparison without expensive CPU evaluation of bounding box or image bitmap masking.
The actual color value of an object is not considered. If Player, Missile, Playfield, and Background color registers are all the same value making the objects effectively "invisible", the intersections of objects will still register collisions. This is useful for making hidden or secret objects and walls.
Obscured intersections will also register collisions. If a Player object priority is behind a Playfield color register and another Player object priority is higher (foreground) than the Playfield, and the foreground Player pixels obscure both the Playfield and the Player object behind the Playfield, then the collision between the Playfield and both the background and foreground Player objects will register along with the collision between the foreground and background Player objects.
Note that there is no Missile to Missile collision.
Player/Missile collisions can only occur when Player/Missile object pixels occur within the visible portions of the display. Player/Missile objects are not rendered during the horizontal blank or the vertical blank. The range of visible color clocks is 34 to 221, and the visible scan lines range from line 8 through line 247. Player/Missile data outside of these coordinates are not rendered and will not register collisions. An object can be partially within the horizontal blank. The objects' pixels that fall outside of the horizontal blank are within the visible portion of the display and can still register collisions.
To remove a Player/Missile object from the visible display area, use horizontal position 0 (off the left) or 222 or greater (off the right); these positions ensure no pixels are rendered regardless of the size of the Player/Missile object, so no unintentional collisions can be flagged.
Finally, Player, Missile, and Playfield objects collision detection is real-time, registering a collision as the image pixels are merged and output for display. Checking an object's collision bits before the object has been rendered by CTIA/GTIA will show no collision.
Once set, collisions remain in effect until cleared by writing to the HITCLR register. Effective collision response routines should occur after the targeted objects have been displayed, or at the end of a frame or during the vertical blank to react to the collisions and clear collisions before the next frame begins.
Because collisions are only a single bit, collisions are quite obviously not additive. No matter how many times and different locations a collision between pixels occurs within one frame there is only 1 bit to indicate there was a collision. A set collision bit informs a program that it can examine the related objects to identify collision locations and then decide how to react for each location.
Since HITCLR and collision detection is real-time, Display List Interrupts can divide the display into sections with HITCLR used at the beginning of each section and separate collision evaluation at the end of each section.
When the "fifth Player" option is enabled (See PRIOR/GPRIOR register) the only change is the Missiles 0 to 3 switch from displaying the color of the associated Player object to displaying the value of COLPF3. The new "Player's" collisions are still reported for the individual Missiles.
Player/Missile to Playfield Collisions
Each bit indicates a pixel of the Player/Missile object has intersected a pixel of the specified Playfield color object. There is no collision registered for the background color.
Obscured intersections will also register collisions. If a Player/Missile object priority is behind a Playfield color register and another Player/Missile object priority is higher (foreground) than the Playfield, and the foreground Player/Missile pixels obscure both the Playfield and the Player/Missile object behind the Playfield, then the collision between the Playfield and both the background and foreground Player/Missile objects will register.
High-resolution, 1/2 color clock pixel modes (ANTIC Modes 2, 3, and F) are treated differently. The "background" color rendered as COLPF2 where pixel values are 0 does not register a collision. High-resolution pixels are rendered as the luminance value from COLPF1. The pixels are grouped together in color clock-wide pairs (pixels 0 and 1, pixels 2 and 3, continuing to pixels 318 and 319). Where either pixel of the pair is 1 a collision is detected between the Player or Missile pixels and Playfield color COLPF2.
GTIA modes 9 and 11 do not register Playfield collisions. In GTIA mode 10, Playfield collisions will register where Playfield pixels use COLPF0 through COLPF3.
M0PF $D000 Read
Missile 0 to Playfield collisions
M1PF $D001 Read
Missile 1 to Playfield collisions
M2PF $D002 Read
Missile 2 to Playfield collisions
M3PF $D003 Read
Missile 3 to Playfield collisions
P0PF $D004 Read
Player 0 to Playfield collisions
P1PF $D005 Read
Player 1 to Playfield collisions
P2PF $D006 Read
Player 2 to Playfield collisions
P3PF $D007 Read
Player 3 to Playfield collisions
Missile to Player Collisions
Missiles collide with Players and Playfields. There is no Missile to Missile collision.
M0PL $D008 Read
Missile 0 to Player collisions
M1PL $D009 Read
Missile 1 to Player collisions
M2PL $D00A Read
Missile 2 to Player collisions
M3PL $D00B Read
Missile 3 to Player collisions
Player to Player Collisions
A collision between two players sets the collision bit in both Players' collision registers. When Player 0 and Player 1 collide, Player 0's collision bit for Player 1 is set, and Player 1's collision bit for Player 0 is set.
A Player cannot collide with itself, so its bit is always 0.
P0PL $D00C Read
Player 0 to Player collisions
P1PL $D00D Read
Player 1 to Player collisions
P2PL $D00E Read
Player 2 to Player collisions
P3PL $D00F Read
Player 3 to Player collisions
Player/Missile and Playfield Color and Luminance
All Player/Missile objects' pixels and all Playfield pixels in the default CTIA/GTIA color interpretation mode use indirection to specify color. Indirection means that the values of the pixel data do not directly specify the color, but point to another source of information for color. CTIA/GTIA contain hardware registers that set the values used for colors, and the pixels' information refers to these registers. The palette on the Atari is 8 luminance levels of 16 colors, for a total of 128 colors. This color indirection flexibility allows a program to tailor the screen's colors to fit the purpose of the program's display.
All hardware color registers have corresponding shadow registers.
COLPM0 $D012 Write
SHADOW: PCOLOR0 $02C0
Color/luminance of Player and Missile 0.
When GTIA 9-color mode is enabled (PRIOR/GPRIOR value $80) this register is used for the border and background (Playfield pixel value 0), rather than COLBK.
COLPM1 $D013 Write
SHADOW: PCOLOR1 $02C1
Color/luminance of Player and Missile 1.
COLPM2 $D014 Write
SHADOW: PCOLOR2 $02C2
Color/luminance of Player and Missile 2.
COLPM3 $D015 Write
SHADOW: PCOLOR3 $02C3
Color/luminance of Player and Missile 3.
COLPF0 $D016 Write
SHADOW: COLOR0 $02C4
Color/luminance of Playfield 0.
COLPF1 $D017 Write
SHADOW: COLOR1 $02C5
Color/luminance of Playfield 1.
This register is used for the set pixels (value 1) in ANTIC text modes 2 and 3, and map mode F. Only the luminance portion is used and is OR'd with the color value of COLPF2. In other Character and Map modes this register provides the expected color and luminance for a pixel.
COLPF2 $D018 Write
SHADOW: COLOR2 $02C6
Color/luminance of Playfield 2.
This register is used for Playfield background color of ANTIC text modes 2 and 3, and map mode F. That is, where pixel value 0 is used. In other Character and Map modes this register provides the expected color and luminance for a pixel.
COLPF3 $D019 Write
SHADOW: COLOR3 $02C7
Color/luminance of Playfield 3
COLPF3 is available in several special circumstances:
When Missiles are converted to the "fifth Player" they switch from displaying the color of the associated Player object to displaying COLPF3 and change priority. See PRIOR/GPRIOR register.
Playfield Text Modes 4 and 5. Inverse video characters (high bit $80 set) cause CTIA/GTIA to substitute COLPF3 value for COLPF2 pixels in the character matrix. (See ANTIC's Glyph Rendering)
Playfield Text Modes 6 and 7. When the character value has bits 6 and 7 set (character range $C0-FF) the entire character pixel matrix is displayed in COLPF3. (See ANTIC's Glyph Rendering)
This register is also available in GTIA's special 9 color, pixel indirection color mode.
COLBK $D01A Write
SHADOW: COLOR4 $02C8
Color/luminance of Playfield background.
The background color is displayed where no other pixel occurs through the entire overscan display area. The following exceptions occur for the background:
In ANTIC text modes 2 and 3, and map mode F the background of the playfield area where pixels may be rendered is from COLPF2 and the COLBK color appears as a border around the playfield.
In GTIA color interpretation mode $8 (9 color indirection) the display background color is provided by color register COLPM0, while COLBK is used for Playfield pixel value $8.
GTIA color interpretation mode $C (15 colors in one luminance level, plus background) uses COLBK to set the luminance level of all other pixels (pixel values $1 through $F). However, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0.
Color Registers' Bits:
The high nybble of the color register specifies one of 16 colors color ($00, $10, $20... to $F0).
The low nybble of the register specifies one of 16 luminance values ($00, $01, $02... to $0F).
In the normal color interpretation mode the lowest bit is not significant and only 8 luminance values are available ($00, $02, $04, $06, $08, $0A, $0C, $0E), so the complete color palette is 128 color values.
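A small sketch of composing a register value from these nybbles:
#include <stdint.h>
/* Hue in the high nybble, luminance in the low nybble. */
#define ATARI_COLOR(hue, lum) ((uint8_t)((((hue) & 0x0F) << 4) | ((lum) & 0x0F)))
/* Example: ATARI_COLOR(9, 4) == 0x94, hue 9 at luminance 4. In the normal
   color interpretation mode the lowest luminance bit is ignored. */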
In GTIA color interpretation mode $4 (luminance-only mode) all 16 luminance values are available for Playfield pixels, providing a palette of 256 colors. Any Player/Missile objects displayed in this mode are colored by indirection, which still uses the 128 color palette.
In normal color interpretation mode the pixel values range from $0 to $3 ordinarily pointing to color registers COLBK, COLPF0, COLPF1, COLPF2 respectively. The color text modes also include options to use COLPF3 for certain ranges of character values. See ANTIC's graphics modes for more information.
When Player/Missile graphics patterns are enabled for display where the graphics patterns bits are set the color displayed comes from the registers assigned to the objects.
There are exceptions for color generation and display:
ANTIC Text modes 2 and 3, and Map mode F:
The pixel values in these modes are only $0 and $1. The $0 pixels specify the Playfield background, which is color register COLPF2. The $1 pixels use the color component of COLPF2 and the luminance specified by COLPF1. The border around the Playfield uses the color from COLBK.
ANTIC Text modes 2 and 3, and Map mode F behave differently with Player/Missile graphics from the other modes. COLPF1 used for the glyph or graphics pixels always has the highest priority and cannot be obscured by Players or Missiles. The color of COLPF1 always comes from the "background" which is ordinarily COLPF2. Therefore, where Players/Missiles and Fifth Player have priority over COLPF2 the COLPF1 glyph/graphics pixels use the color component of the highest priority color (Player or Missile), and the luminance component of COLPF1. This behavior is consistent where Player/Missile priority conflicts result in true black for the "background". In summary, the color CTIA/GTIA finally determines to use "behind" the high-res pixel is then used to "tint" the COLPF1 foreground glyph/graphics pixels.
GTIA Exceptions
GTIA color interpretation mode $8 (9 color indirection) uses color register COLPM0 for the display background and border color, while COLBK is used for Playfield pixel value $8.
GTIA color interpretation mode $C (15 colors in one luminance level, plus background) uses COLBK to set the luminance level of all other pixels (pixel value $1 through $F). However, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0. Note that the background's color component is also OR'd with the other pixels' colors. Therefore, the overall number of colors in the mode is reduced when the background color component is not black (numerically zero).
Player/Missile Exceptions:
Player/Missile Priority value $0 (See PRIOR/GPRIOR) will cause overlapping Player and Playfield pixels to be OR'd together displaying a different color.
Conflicting Player/Missile Priority configuration will cause true black (color 0, luma 0) to be output where conflicts occur.
The Player/Missile Multi-Color option will cause overlapping Player pixels to be OR'd together displaying a different color.
The specific color registers used for Playfield pixels vary per ANTIC Character mode, per ANTIC Map mode, and per GTIA mode (ANTIC F), as described in the mode discussions above.
Player/Missile colors are always available for Player/Missile objects in all modes, though colors may be modified when the special GTIA modes (16 shades/16 color) are in effect.
Miscellaneous Player/Missile and GTIA Controls
PRIOR $D01B Write
SHADOW: GPRIOR $026F
This register controls several CTIA/GTIA color management features: The GTIA Playfield color interpretation mode, Multi-Color Player objects, the Fifth Player, and Player/Missile/Playfield priority.
GTIA Playfield Color Interpretations
CTIA includes only one default color interpretation mode for the ANTIC Playfield data stream. That is the basic functionality assumed in the majority of the ANTIC and CTIA/GTIA discussion unless otherwise noted. GTIA includes three alternate color interpretation modes for Playfield data. These modes work by pairing adjacent color clocks from ANTIC, thus the pixels output by GTIA are always two color clocks wide. Although these modes can be engaged while displaying any ANTIC Playfield mode, the full color palette possible with these GTIA color processing options is only realized in the ANTIC modes based on high-resolution, half-color-clock pixels (ANTIC modes 2, 3, and F). These GTIA options are most often used with a Mode F display. The special GTIA color processing modes also alter the display or behavior of Player/Missile graphics in various ways.
The color interpretation control is a global function of GTIA affecting the entire screen. GTIA is not inherently capable of mixing on one display the various GTIA color interpretation modes and the default CTIA mode needed for most ANTIC Playfields. Mixing color interpretation modes requires software writing to the PRIOR register as the display is generated (usually by a Display List Interrupt).
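A sketch of selecting a GTIA color interpretation through the GPRIOR shadow ($026F) while preserving the register's other bits; the mode values in bits 7 and 6 follow the descriptions below:
#include <stdint.h>
#define GPRIOR (*(volatile uint8_t *)0x026F)  /* OS shadow of PRIOR */
void set_gtia_mode(uint8_t mode) {   /* 0x00, 0x40, 0x80, or 0xC0 */
    GPRIOR = (GPRIOR & 0x3F) | (mode & 0xC0);
}
/* set_gtia_mode(0x40): 16 shades  (Atari BASIC Graphics 9)
   set_gtia_mode(0x80): 9 colors   (Atari BASIC Graphics 10)
   set_gtia_mode(0xC0): 16 colors  (Atari BASIC Graphics 11) */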
PRIOR bits 7 and 6 provide four values specifying the color interpretation modes:
16 Shades
This mode uses the COLBK register to specify the background color. Rather than using indirection, pixel values directly represent Luminance. This mode allows all four luminance bits to be used in the Atari color palette and so is capable of displaying 256 colors.
Player/Missile graphics (without the fifth Player option) display properly in this mode; however, collision detection with the Playfield is disabled. Playfield priority is always on the bottom. When the Missiles are switched to act as a fifth Player, then where the Missile objects overlap the Playfield the Missile pixels' luminance merges with the Playfield pixels' luminance value.
9 Color
Unlike the other two special GTIA modes, this mode is entirely driven by color indirection. All nine color registers work on the display for pixel values 0 through 8. The remaining 7 pixel values repeat previous color registers.
The pixels are delayed by one color clock (half a GTIA mode pixel) when output. This offset permits interesting effects. For example, rapidly page flipping between this mode and a different GTIA mode produces a display with apparently higher resolution and a greater number of colors.
This mode is unique in that it uses color register COLPM0 for the border and background (Playfield 0 value pixels) rather than COLBK.
Player/Missile graphics display properly, with the exception that Player/Missile 0 is not distinguishable from the background pixels, since they use the same color register, COLPM0. The Playfield pixels using the Player/Missile colors are modified by priority settings as if they were Player/Missile objects and so can affect the display of Players/Missiles. (See the discussion of Player/Missile/Playfield priorities later.)
The Playfield pixels using Player/Missile colors do not trigger collisions when Player/Missile objects overlay them. However, Player/Missile graphics overlapping Playfield colors COLPF0 to COLPF3 will trigger the expected collision.
16 Colors
This mode uses the COLBK register to specify the luminance of all Playfield pixels (values $1 through $F (15 decimal)). The least significant bit of the luminance value is not observed, so only the standard/CTIA 8 luminance values are available ($0, $2, $4, $6, $8, $A, $C, $E). Additionally, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0. As with the luminance mode, indirection is disabled and pixel values directly represent a color.
Note that the color component of the background also merges with the playfield pixels. Colors other than black for the background reduce the overall number of colors displayed in the mode.
Player/Missile graphics (without the fifth Player option) display properly in this mode; however, collision detection with the Playfield is disabled. Playfield priority is always on the bottom. When the Missiles are switched to act as a fifth Player, then where the Missile objects overlap the Playfield the Missile pixels inherit the Playfield pixels' color value.
Multi-Color Player
PRIOR bit 5, value $20 (32 decimal), enables Multi-Color Player objects. Where pixels of two Player/Missile objects overlap, a third color appears. This is implemented by eliminating priority processing between pairs of Player/Missile objects, resulting in CTIA/GTIA performing a bitwise OR of the two colored pixels to output a new color.
Example: A Player pixel with color value $98 (152 decimal, blue) overlaps a Player pixel with color value $46 (70 decimal, red), resulting in a pixel color of $DE (222 decimal, light green/yellow).
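A one-line check of that arithmetic; the merged value is simply the bitwise OR of the two register values:
#include <assert.h>
#include <stdint.h>
int main(void) {
    uint8_t p0 = 0x98, p1 = 0x46;         /* blue player over red player */
    assert((uint8_t)(p0 | p1) == 0xDE);   /* merged light green/yellow */
    return 0;
}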
The Players/Missiles pairs capable of Multi-Color output:
Player 0 + Player 1
Missile 0 + Missile 1
Player 2 + Player 3
Missile 2 + Missile 3
Fifth Player
PRIOR bit 4, value $10 (16 decimal), enables Missiles to become a fifth Player. No functional change occurs to the Missiles other than their color processing. Normally the Missiles display using the color of the associated Player; when Fifth Player is enabled all Missiles display the color of Playfield 3 (COLPF3). Horizontal position, size, vertical delay, and Player/Missile collisions all continue to operate the same way. Where Fifth Player pixels intersect Player pixels the Fifth Player takes COLPF3's priority, but the Fifth Player's pixels have priority over all Playfield colors.
The color processing change also causes some exceptions for the Missiles' display in GTIA's alternative color modes:
GTIA 16 Shades mode: Where Missile pixels overlap the Playfield the pixels inherit the Playfield pixels' Luminance value.
GTIA 16 Colors mode: Where Missile pixels overlap the Playfield the pixels inherit the Playfield pixels' Color value.
The Fifth Player introduces an exception for Priority value $8 (bits 1000) (See Priority discussion below.)
Priority
PRIOR bits 3 to 0 provide four Player/Missile and Playfield priority values that determine which pixel value is displayed when Player/Missile objects pixels and Playfield pixels intersect. The four values provide specific options listed in the Priority chart below. "PM" means the normal Player/Missile implementation without the Fifth Player. The Fifth Player, "P5", is shown where its priority occurs when it is enabled.
The chart is accurate for ANTIC Playfield Character and Map modes using the default (CTIA) color interpretation mode. GTIA color interpretation modes, and the ANTIC modes based on high-resolution, color clock pixels, behave differently (noted later). In outline, from highest to lowest priority:
Priority $1 (0001): P0, P1, P2, P3, PF0, PF1, PF2, PF3, COLBK
Priority $2 (0010): P0, P1, PF0, PF1, PF2, PF3, P2, P3, COLBK
Priority $4 (0100): PF0, PF1, PF2, PF3, P0, P1, P2, P3, COLBK
Priority $8 (1000): PF0, PF1, P0, P1, P2, P3, PF2, PF3, COLBK
If multiple bits are set, then where there is a conflict CTIA/GTIA outputs a black pixel—Note that black means actual black, not simply the background color, COLBK.
Although the Fifth Player is displayed with the value of COLPF3, its priority is above all Playfield colors. This produces an exception for Priority value $8 (Bits 1000). In this mode Playfield 0 and 1 are higher priority than the Players, and the Players are higher priority than Playfield 2 and 3. Where Playfield 0 or 1 pixels intersect any Player pixel the result displayed is the Playfield pixel. However, if the Fifth player also intersects the same location, its value is shown over the Playfield causing it to appear as if Playfield 3 has the highest priority. If the Playfield 0 or 1 pixel is removed from this intersection then the Fifth Player's pixel has no Playfield pixel to override and so also falls behind the Player pixels.
When the Priority bits are all 0 a different effect occurs—Player and Playfield pixels are logically OR'd together in a manner similar to the Multi-Color Player feature. In this situation Players 0 and 1 pixels can mix with Playfield 0 and 1 pixels, and Players 2 and 3 pixels can mix with Playfield 2 and 3 pixels. Additionally, when the Multi-Color Player option is used the resulting merged Players' color can also mix with the Playfield producing more colors. When all color merging possibilities are considered, the CTIA/GTIA hardware can output 23 colors per scan line: the background color plus 22 colors produced by the Players, Playfields, and their OR merges.
When Priority bits are all 0 the Missiles' colors function the same way as the corresponding Players as described above. When Fifth Player is enabled, the Missile pixels produce the same color merges as COLPF3.
Priority And High-Resolution Modes
The priority results differ for the Character and Map modes using high-resolution, color clock pixels—ANTIC modes 2, 3, and F. These priority handling differences can be exploited to produce color text or graphics in these modes that are traditionally thought of as "monochrome".
In these ANTIC modes COLPF2 is output as the "background" of the Playfield and COLBK is output as the border around the Playfield. The graphics or glyph pixels are output using only the luminance component of COLPF1 mixed with the color component of the background (usually COLPF2).
The priority relationship between Players/Missiles and COLPF2 works according to the priority values described above. Player/Missile pixels with higher priority will replace COLPF2 as the "background" color. COLPF1 always has the highest priority and cannot be obscured by Players or Missiles. The glyph/graphics pixels use the color component of the highest priority color (Playfield, Player, or Missile), and the luminance component of COLPF1. Note that this behavior is also consistent where Player/Missile priority conflicts result in true black for the "background". In effect, the color value CTIA/GTIA finally uses for the "background" "tints" the COLPF1 foreground glyph/graphics pixels.
VDELAY $D01C Write
Vertical Delay P/M Graphics
This register is used to provide single scan line movement when Double Line Player/Missile resolution is enabled in ANTIC's DMACTL register. This works by masking ANTIC DMA updates to the GRAF* registers on even scan lines, causing the graphics pattern to shift down one scan line.
Since Single Line resolution requires ANTIC DMA updates on each scan line and VDELAY masks the updates on even scan lines, setting an object's VDELAY bit reduces Single Line Player/Missile resolution to Double Line for that object.
GRACTL $D01D Write
Graphics Control
GRACTL controls CTIA/GTIA's receipt of Player/Missile DMA data from ANTIC and toggles the mode of Joystick trigger input.
Receipt of Player/Missile DMA data requires CTIA/GTIA be configured to receive the data. This is done with a pair of bits in GRACTL that match a pair of bits in ANTIC's DMACTL register that direct ANTIC to send Player data and Missile data. GRACTL's Bit 0 corresponds to DMACTL's Bit 2, enabling transfer of Missile data. GRACTL's Bit 1 corresponds to DMACTL's Bit 3, enabling transfer of Player data. These bits must be set for GTIA to receive Player/Missile data from ANTIC via DMA. When Player/Missile graphics are being operated directly by the CPU then these bits must be off.
The joystick trigger registers report the pressed/not pressed state in real-time. If a program's input polling may not be frequent enough to catch momentary joystick button presses, then the triggers can be set to lock in the closed/pressed state and remain in that state even after the button is released. Setting GRACTL Bit 2 enables the latching of all triggers. Clearing the bit returns the triggers to the unlatched, real-time behavior.
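A minimal sketch of composing the GRACTL value from its three documented bits follows (the register addresses are the ones given in this article; everything else, including the helper itself, is illustrative):
GRACTL = 0xD01D   # CTIA/GTIA graphics control register
DMACTL = 0xD400   # ANTIC DMA control, which carries the matching bits

MISSILE_DMA_GRACTL = 0b001   # GRACTL bit 0 pairs with DMACTL bit 2
PLAYER_DMA_GRACTL  = 0b010   # GRACTL bit 1 pairs with DMACTL bit 3
TRIGGER_LATCH      = 0b100   # GRACTL bit 2 latches joystick triggers

def gractl_value(missiles=False, players=False, latch=False):
    value = 0
    if missiles: value |= MISSILE_DMA_GRACTL
    if players:  value |= PLAYER_DMA_GRACTL
    if latch:    value |= TRIGGER_LATCH
    return value

print(gractl_value(missiles=True, players=True))  # 3: receive P/M DMA from ANTIC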
HITCLR $D01E Write
Clear Collisions
Any write to this register clears all the Player/Missile collision detection bits.
Other CTIA/GTIA Functions
Joystick Triggers
TRIG0 $D010 Read
SHADOW: STRIG0 $0284
Joystick 0 trigger.
TRIG1 $D011 Read
SHADOW: STRIG1 $0285
Joystick 1 trigger.
TRIG2 $D012 Read
SHADOW: STRIG2 $0286
Joystick 2 trigger.
TRIG3 $D013 Read
SHADOW: STRIG3 $0287
Joystick 3 trigger.
Bits 7 through 1 are always 0. Bit 0 reports the state of the joystick trigger. Value 1 indicates the trigger is not pressed. Value 0 indicates the trigger is pressed.
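Reading the register is a one-bit, active-low test, as this illustrative Python sketch shows:
def trigger_pressed(trig_value: int) -> bool:
    # Bit 0 is active-low: 0 means the button is pressed.
    return (trig_value & 0x01) == 0

print(trigger_pressed(0x01))  # False - button not pressed
print(trigger_pressed(0x00))  # True  - button pressed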
The trigger registers report button presses in real-time. The button pressed state will instantly clear when the button is released.
The triggers may be configured to latch, that is, lock, in the pressed state and remain that way until specifically cleared. GRACTL bit 2 enables the latch behavior for all triggers. Clearing GRACTL bit 2 returns all triggers to real-time behavior.
PAL $D014 Read
PAL flags.
This register reports the display standard for the system. When bits 3 to 0 are set to 1 (value Fhex/15dec) the system is operating in NTSC. When the bits are zero the system is operating in PAL mode.
CONSPK $D01F Write
Console Speaker
Bit 3 controls the internal speaker of the Atari 800/400. In later models the console speaker is removed and the sound is mixed with the regular POKEY audio signals for output to the monitor port and RF adapter. The Atari OS uses the console speaker to output the keyboard click and the bell/buzzer sound.
The Operating System sets the speaker bit during the vertical blank routine. Repeatedly writing 0 to the bit will produce a 60 Hz buzzing sound as the vertical blank resets the value. Useful tones can be generated using 6502 code effectively adding a fifth audio channel, albeit a channel requiring CPU time to maintain the audio tones.
CONSOL $D01F Read
Console Keys
A bit is assigned to report the state of each of the special console keys, Start, Select, and Option. Bit value 0 indicates a key is pressed and 1 indicates the key is not pressed. Key/Bit values:
Start Key = Bit value $1
Select Key = Bit value $2
Option Key = Bit value $4
Player/Missile Graphics (sprites) operation
A hardware "sprite" system is handled by CTIA/GTIA. The official ATARI name for the sprite system is "Player/Missile Graphics", since it was designed to reduce the need to manipulate display memory for fast-moving objects, such as the "player" and his weapons, "missiles", in a shoot 'em up game.
A Player is essentially a glyph 8 pixels wide and 256 TV lines tall, and has two colors: the background (transparent) (0 in the glyph) and the foreground (1). A Missile object is similar, but only 2 pixels wide. CTIA/GTIA combines the Player/Missile objects' pixels with the Playfield pixels according to their priority. Transparent (0) player pixels have no effect on the Playfield and display either a Playfield or background pixel without change. All Player/Missile objects' normal pixel width is one color clock. A register value can set the Player or Missile pixels' width to 1, 2, or 4 color clocks wide.
The Player/Missile implementation by CTIA/GTIA is similar to the TIA's. A Player is an 8-bit value or pattern at a specified horizontal position which automatically repeats for each scan line or until the pattern is changed in the register. Missiles are 2-bits wide and share one pattern register, so that four, 2-bit wide values occupy the 8-bit wide pattern register, but each missile has an independent horizontal position and size. Player/Missile objects extend the height of the display including the screen border. That is, the default implementation of Player/Missile graphics by CTIA/GTIA is a stripe down the screen. While seemingly limited this method facilitates Player/Missile graphics use as alternate colored vertical borders or separators on a display, and when priority values are set to put Player/Missile pixels behind playfield pixels they can be used to add additional colors to a display. All Players and Missiles set at maximum width and placed side by side can cover the entire normal width Playfield.
CTIA/GTIA supports several options controlling Player/Missile color. The PRIOR/GPRIOR register value can switch the four Missiles between two color display options—each Missile (0 to 3) expresses the color of the associated Player object (0 to 3) or all Missiles show the color of register COLPF3/COLOR3. When Missiles are similarly colored they can be treated as a fifth player, but correct placement on screen still requires storing values in all four Missile Horizontal Position registers. PRIOR/GPRIOR also controls a feature that causes the overlapping pixels of two Players to generate a third color allowing multi-colored Player objects at the expense of reducing the number of available objects. Finally, PRIOR/GPRIOR can be used to change the foreground/background layering (called, "priority") of Player/Missile pixels vs Playfield pixels, and can create priority conflicts that predictably affect the colors displayed.
The conventional idea of a sprite with an image/pattern that varies vertically is also built into the Player/Missile graphics system. The ANTIC chip includes a feature to perform DMA to automatically feed new pixel patterns to CTIA/GTIA as the display is generated. This can be done for each scan line or every other scan line resulting in Player/Missile pixels one or two scan lines tall. In this way the Player/Missile object could be considered an extremely tall character in a font, 8 bits/pixels wide, by the height of the display.
Moving the Player/Missile objects horizontally is as simple as changing a register in the CTIA/GTIA (in Atari BASIC, a single POKE statement moves a player or missile horizontally). Moving an object vertically is achieved by either block moving the definition of the glyph to a new location in the Player or Missile bitmap, or by rotating the entire Player/Missile bitmap (128 or 256 bytes). The worst case rotation of the entire bitmap is still quite fast in 6502 machine language, even though the 6502 lacks a block-move instruction found in the 8080. Since the sprite is exactly 128 or 256 bytes long, the indexing can be easily accommodated in a byte-wide register on the 6502. Atari BASIC lacks a high speed memory movement command and moving memory using BASIC PEEK()s and POKE(s) is painfully slow. Atari BASIC programs using Player/Missile graphics have other options for performing high speed memory moves. One method is calling a short machine language routine via the USR() function to perform the memory moves. Another option is utilizing a large string as the Player/Missile memory map and performing string copy commands which result in memory movement at machine language speed.
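As an illustrative sketch (in Python rather than 6502 code or a BASIC string copy), shifting a glyph within a 256-byte single-line-resolution Player stripe looks like this; the helper name is invented:
def shift_player_down(stripe: bytearray, lines: int) -> bytearray:
    # One byte per scan line; moving the glyph down means copying each
    # byte from `lines` positions earlier. Vacated lines become blank.
    out = bytearray(len(stripe))
    for y in range(len(stripe)):
        src = y - lines
        if 0 <= src < len(stripe):
            out[y] = stripe[src]
    return out

stripe = bytearray(256)                  # empty single-line P/M stripe
stripe[100:104] = b"\x18\x3c\x3c\x18"    # a small 4-line glyph
stripe = shift_player_down(stripe, 1)    # glyph now one scan line lower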
Careful use of Player/Missile graphics with the other graphics features of the Atari hardware can make graphics programming, particularly games, significantly simpler.
GTIA enhancements
The GTIA chip is backward compatible with the CTIA, and adds 3 color interpretations for the 14 "normal" ANTIC Playfield graphics modes. The normal color interpretation of the CTIA chip is limited, per scanline, to a maximum of 4 colors in Map modes or 5 colors in Text modes (plus 4 colors for Player/Missile graphics) unless special programming techniques are used. The three new color interpretations in GTIA provide a theoretical total of 56 graphics modes (14 ANTIC modes multiplied by four possible color interpretations). However, only the graphics modes based on high-resolution, color clock pixels (that is, ANTIC text modes 2, 3, and graphics mode F) are capable of fully expressing the color palettes of these 3 new color interpretations. The three additional color interpretations use the information in two color clocks (four bits) to generate a pixel in one of 16 color values. This changes a mode F display from 2 colors per pixel, 320 pixels horizontally, one scan line per mode line, to 16 colors and 80 pixels horizontally. The additional color interpretations allow the following:
GTIA color interpretation mode $40: 16 shades of a single hue (set by the background color, COLBK) from the 16 possible hues in the Atari palette. This is also accessible in Atari BASIC as Graphics 9.
GTIA color interpretation mode $80: This mode allows 9 colors of indirection per horizontal line in any hue and luminance from the entire Atari palette of 128 colors. This is accomplished using all the Player/Missile and Playfield color registers for the Playfield pixels. In this mode the background color is provided by color register COLPM0 while COLBK is used for Playfield pixel value $8. This mode is accessible in Atari BASIC as Graphics 10.
GTIA color interpretation mode $C0: 15 hues in a single shade/luminance value, plus the background. The value of the background, COLBK, sets the luminance level of all other pixels (pixel values $1 through $F). The least significant bit of the luminance value is not observed, so only the standard/CTIA 8 luminance values are available ($0, $2, $4, $6, $8, $A, $C, $E). Additionally, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0. This mode is accessible in Atari BASIC as Graphics 11.
Of these modes, Atari BASIC Graphics 9 is particularly notable. It enables the Atari to display gray-scale digitized photographs, which despite their low resolution were very impressive at the time. Additionally, by allowing 16 shades of a single hue rather than the 8 shades available in other graphics modes, it increases the number of different colors the Atari can display from 128 to 256. Unfortunately, this feature is limited to use in this mode only, which due to its low resolution was not widely used.
The ANTIC 2 and 3 text modes are capable of displaying the same color ranges as mode F graphics when using the GTIA's alternate color interpretations. However, since the pixel reduction also applies and turns 8 pixel wide, 2 color text into 2 pixel wide, 16 color blocks, these modes are unsuitable for actual text, and so are not popular outside of demos. Effective use of the GTIA color interpretation feature with text modes requires a carefully constructed character set treating characters as pixels. This method allows display of an apparent GTIA "high resolution" graphics mode that would ordinarily occupy 8K of RAM to instead use only about 2K (1K for the character set, and 1K for the screen RAM and display list.)
The GTIA also fixed an error in CTIA that caused graphics to be misaligned by "half a color clock". The side effect of the fix was that programs that relied on color artifacts in high-resolution monochrome modes would show a different pair of colors.
Atari owners can determine if their machine is equipped with the CTIA or GTIA by executing the BASIC command POKE 623,64. If the screen blackens after execution, the machine is equipped with the new GTIA chip. If it stays blue, the machine has a CTIA chip instead.
Bugs
The last Atari XE computers made for the Eastern European market were built in China. Many if not all have a buggy PAL GTIA chip. The luma values in Graphics 9 and higher are at fault, appearing as stripes. Replacing the chip fixes the problem. Also, there have been attempts to fix faulty GTIA chips with some external circuitry.
See also
List of home computers by video hardware
References
External links
De Re Atari published by the Atari Program Exchange
Mapping the Atari, Revised Edition by Ian Chadwick
GTIA chip data sheet scanned to PDF.
jindroush site(archived) GTIA info
CTIA die shot
GTIA die shot
Atari 8-bit computers
Graphics chips
Integrated circuits
Computer display standards | CTIA and GTIA | Technology,Engineering | 11,669 |
1,452,622 | https://en.wikipedia.org/wiki/Sparta%20%28rocket%29 | The Sparta (or Redstone Sparta) was a three-stage rocket that launched Australia's first Earth satellite, WRESAT, on 29 November 1967.
Sparta used surplus American Redstone rockets as its first stage, a Thiokol Antares 2 (from the Scout rocket) as its second stage, and a WRE BE-3 Alcyone solid-propellant engine as its third stage.
A first stage was recovered from the Simpson Desert in 1990 after being found in searches by explorer Dick Smith the previous year.
Launches
Several Spartas were launched between 1966 and 1967 from Woomera Test Range LA8 in Woomera, South Australia as part of a joint United States–United Kingdom–Australian research program aimed at understanding re-entry phenomena, and the US donated a spare for the scientific satellite launch into polar orbit.
The first launch was a failure, while the rest were successful.
Gallery
References
Space launch vehicles of the United States
Sounding rockets of the United States
Space programme of Australia | Sparta (rocket) | Astronomy | 200 |
4,064 | https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem | In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.
Formally: if $f : S^n \to \mathbb{R}^n$ is continuous then there exists an $x \in S^n$ such that: $f(-x) = f(x)$.
The case can be illustrated by saying that there always exist a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case.
The case is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space.
The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that $S^n$ is the n-sphere and $B^n$ is the n-ball:
If $g : S^n \to \mathbb{R}^n$ is a continuous odd function, then there exists an $x \in S^n$ such that: $g(x) = 0$.
If $g : B^n \to \mathbb{R}^n$ is a continuous function which is odd on $S^{n-1}$ (the boundary of $B^n$), then there exists an $x \in B^n$ such that: $g(x) = 0$.
History
According to Matoušek (2003), the first historical mention of the statement of the Borsuk–Ulam theorem appears in Lyusternik & Schnirelmann (1930). The first proof was given by Karol Borsuk (1933), where the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors, as collected by Steinlein (1985).
Equivalent statements
The following statements are equivalent to the Borsuk–Ulam theorem.
With odd functions
A function $g$ is called odd (aka antipodal or antipode-preserving) if for every $x$: $g(-x) = -g(x)$.
The Borsuk–Ulam theorem is equivalent to the following statement: A continuous odd function from an n-sphere into Euclidean n-space has a zero. PROOF:
If the theorem is correct, then it is specifically correct for odd functions, and for an odd function, $g(-x) = g(x)$ iff $g(x) = 0$. Hence every odd continuous function has a zero.
For every continuous function $f$, the following function is continuous and odd: $g(x) = f(x) - f(-x)$. If every odd continuous function has a zero, then $g$ has a zero, and therefore $f(x) = f(-x)$. Hence the theorem is correct.
With retractions
Define a retraction as a function $h : S^n \to S^{n-1}$. The Borsuk–Ulam theorem is equivalent to the following claim: there is no continuous odd retraction.
Proof: If the theorem is correct, then every continuous odd function from $S^n$ must include 0 in its range. However, $0 \notin S^{n-1}$, so there cannot be a continuous odd function whose range is $S^{n-1}$.
Conversely, if it is incorrect, then there is a continuous odd function $g : S^n \to \mathbb{R}^n$ with no zeroes. Then we can construct another odd function $h : S^n \to S^{n-1}$ by:
$h(x) = \frac{g(x)}{|g(x)|}$
Since $g$ has no zeroes, $h$ is well-defined and continuous. Thus we have a continuous odd retraction.
Proofs
1-dimensional case
The 1-dimensional case can easily be proved using the intermediate value theorem (IVT).
Let $g$ be the odd real-valued continuous function on a circle defined by $g(x) = f(x) - f(-x)$. Pick an arbitrary $x_0$. If $g(x_0) = 0$ then we are done. Otherwise, without loss of generality, $g(x_0) > 0$. But $g(-x_0) < 0$. Hence, by the IVT, there is a point $y$ between $x_0$ and $-x_0$ at which $g(y) = 0$.
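The argument can be illustrated numerically: the Python sketch below picks an arbitrary continuous f on the circle and bisects g to locate a pair of antipodal points with equal values.
import math

def f(t):                      # an arbitrary continuous "temperature" on the circle
    return math.sin(t) + 0.5 * math.cos(3 * t)

def g(t):                      # odd across antipodes: g(t + pi) = -g(t)
    return f(t) - f(t + math.pi)

a, b = 0.0, math.pi            # g(0) and g(pi) have opposite signs
if g(a) > 0:
    a, b = b, a                # orient so g(a) <= 0 <= g(b)
for _ in range(60):            # bisection, per the IVT
    mid = (a + b) / 2
    if g(mid) <= 0:
        a = mid
    else:
        b = mid
t = (a + b) / 2
print(f(t), f(t + math.pi))    # equal values at antipodal points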
General case
Algebraic topological proof
Assume that $h : S^n \to S^{n-1}$ is an odd continuous function with $n > 2$ (the case $n = 1$ is treated above, the case $n = 2$ can be handled using basic covering theory). By passing to orbits under the antipodal action, we then get an induced continuous function $h' : \mathbb{RP}^n \to \mathbb{RP}^{n-1}$ between real projective spaces, which induces an isomorphism on fundamental groups. By the Hurewicz theorem, the induced ring homomorphism on cohomology with $\mathbb{F}_2$ coefficients [where $\mathbb{F}_2$ denotes the field with two elements],
$\mathbb{F}_2[a]/a^{n+1} = H^*(\mathbb{RP}^n; \mathbb{F}_2) \leftarrow H^*(\mathbb{RP}^{n-1}; \mathbb{F}_2) = \mathbb{F}_2[b]/b^{n},$
sends $b$ to $a$. But then we get that $b^n = 0$ is sent to $a^n \neq 0$, a contradiction.
One can also show the stronger statement that any odd map has odd degree and then deduce the theorem from this result.
Combinatorial proof
The Borsuk–Ulam theorem can be proved from Tucker's lemma.
Let $g : S^n \to \mathbb{R}^n$ be a continuous odd function. Because $g$ is continuous on a compact domain, it is uniformly continuous. Therefore, for every $\epsilon > 0$, there is a $\delta > 0$ such that, for every two points of $S^n$ which are within $\delta$ of each other, their images under $g$ are within $\epsilon$ of each other.
Define a triangulation of $S^n$ with edges of length at most $\delta$. Label each vertex $v$ of the triangulation with a label $l(v) \in \{\pm 1, \pm 2, \ldots, \pm n\}$ in the following way:
The absolute value of the label is the index of the coordinate with the highest absolute value of $g$: $|l(v)| = \arg\max_k |g(v)_k|$.
The sign of the label is the sign of $g$, so that: $l(v) = \operatorname{sgn}(g(v)_{|l(v)|}) \, |l(v)|$.
Because $g$ is odd, the labeling is also odd: $l(-v) = -l(v)$. Hence, by Tucker's lemma, there are two adjacent vertices $u, v$ with opposite labels. Assume w.l.o.g. that the labels are $l(u) = 1, l(v) = -1$. By the definition of $l$, this means that in both $g(u)$ and $g(v)$, coordinate #1 is the largest coordinate: in $g(u)$ this coordinate is positive while in $g(v)$ it is negative. By the construction of the triangulation, the distance between $g(u)$ and $g(v)$ is at most $\epsilon$, so in particular $|g(u)_1| + |g(v)_1| = |g(u)_1 - g(v)_1| \leq \epsilon$ (since $g(u)_1$ and $g(v)_1$ have opposite signs) and so $|g(u)_1| \leq \epsilon$. But since the largest coordinate of $g(u)$ is coordinate #1, this means that $|g(u)_k| \leq \epsilon$ for each $1 \leq k \leq n$. So $|g(u)| \leq c\epsilon$, where $c$ is some constant depending on $n$ and the norm $|\cdot|$ which you have chosen.
The above is true for every $\epsilon > 0$; since $S^n$ is compact there must hence be a point $u$ at which $g(u) = 0$.
Corollaries
No subset of $\mathbb{R}^n$ is homeomorphic to $S^n$
The ham sandwich theorem: For any compact sets A1, ..., An in $\mathbb{R}^n$ we can always find a hyperplane dividing each of them into two subsets of equal measure.
Equivalent results
Above we showed how to prove the Borsuk–Ulam theorem from Tucker's lemma. The converse is also true: it is possible to prove Tucker's lemma from the Borsuk–Ulam theorem. Therefore, these two theorems are equivalent.
Generalizations
In the original theorem, the domain of the function f is the unit n-sphere (the boundary of the unit n-ball). In general, it is true also when the domain of f is the boundary of any open bounded symmetric subset of $\mathbb{R}^n$ containing the origin (here, symmetric means that if x is in the subset then -x is also in the subset).
More generally, if $M$ is a compact n-dimensional Riemannian manifold, and $f : M \to \mathbb{R}^n$ is continuous, there exists a pair of points $x$ and $y$ in $M$ such that $f(x) = f(y)$ and $x$ and $y$ are joined by a geodesic of length $\delta$, for any prescribed $\delta > 0$.
Consider the function $A$ which maps a point to its antipodal point: $A(x) = -x$. Note that $A(A(x)) = x$. The original theorem claims that there is a point $x$ in which $f(A(x)) = f(x)$. In general, this is true also for every function $A$ for which $A(A(x)) = x$. However, in general this is not true for other functions $A$.
See also
Topological combinatorics
Necklace splitting problem
Ham sandwich theorem
Kakutani's theorem (geometry)
Imre Bárány
Notes
References
External links
The Borsuk-Ulam Explorer. An interactive illustration of Borsuk-Ulam Theorem.
Theorems in algebraic topology
Combinatorics
Theory of continuous functions
Theorems in topology | Borsuk–Ulam theorem | Mathematics | 1,437 |
3,603,745 | https://en.wikipedia.org/wiki/Vertex%20configuration | In geometry, a vertex configuration is a shorthand notation for representing the vertex figure of a polyhedron or tiling as the sequence of faces around a vertex. For uniform polyhedra there is only one vertex type and therefore the vertex configuration fully defines the polyhedron. (Chiral polyhedra exist in mirror-image pairs with the same vertex configuration.)
A vertex configuration is given as a sequence of numbers representing the number of sides of the faces going around the vertex. The notation "a.b.c" describes a vertex that has 3 faces around it, faces with a, b, and c sides.
For example, "" indicates a vertex belonging to 4 faces, alternating triangles and pentagons. This vertex configuration defines the vertex-transitive icosidodecahedron. The notation is cyclic and therefore is equivalent with different starting points, so is the same as The order is important, so is different from (the first has two triangles followed by two pentagons). Repeated elements can be collected as exponents so this example is also represented as .
It has variously been called a vertex description, vertex type, vertex symbol, vertex arrangement, vertex pattern, face-vector, vertex sequence. It is also called a Cundy and Rollett symbol for its usage for the Archimedean solids in their 1952 book Mathematical Models.
Vertex figures
A vertex configuration can also be represented as a polygonal vertex figure showing the faces around the vertex. This vertex figure has a 3-dimensional structure since the faces are not in the same plane for polyhedra, but for vertex-uniform polyhedra all the neighboring vertices are in the same plane and so this plane projection can be used to visually represent the vertex configuration.
Variations and uses
Different notations are used, sometimes with a comma (,) and sometimes a period (.) separator. The period operator is useful because it looks like a product and an exponent notation can be used. For example, 3.5.3.5 is sometimes written as (3.5)^2.
The notation can also be considered an expansive form of the simple Schläfli symbol for regular polyhedra. The Schläfli notation {p,q} means q p-gons around each vertex. So {p,q} can be written as p.p.p... (q times) or p^q. For example, an icosahedron is {3,5} = 3.3.3.3.3 or 3^5.
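As an illustrative Python sketch, the expansion is mechanical:
def schlafli_to_vertex_config(p, q):
    # {p,q}: q p-gons around each vertex -> "p.p. ... .p" (q times)
    return ".".join(str(p) for _ in range(q))

print(schlafli_to_vertex_config(3, 5))  # 3.3.3.3.3, the icosahedron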
This notation applies to polygonal tilings as well as polyhedra. A planar vertex configuration denotes a uniform tiling just like a nonplanar vertex configuration denotes a uniform polyhedron.
The notation is ambiguous for chiral forms. For example, the snub cube has clockwise and counterclockwise forms which are identical across mirror images. Both have a 3.3.3.3.4 vertex configuration.
Star polygons
The notation also applies for nonconvex regular faces, the star polygons. For example, a pentagram has the symbol {5/2}, meaning it has 5 sides going around the centre twice.
For example, there are 4 regular star polyhedra with regular polygon or star polygon vertex figures. The small stellated dodecahedron has the Schläfli symbol {5/2,5} which expands to an explicit vertex configuration 5/2.5/2.5/2.5/2.5/2 or combined as (5/2)^5. The great stellated dodecahedron, {5/2,3}, has a triangular vertex figure and configuration (5/2.5/2.5/2) or (5/2)^3. The great dodecahedron, {5,5/2}, has a pentagrammic vertex figure, with vertex configuration (5.5.5.5.5)/2 or (5^5)/2. A great icosahedron, {3,5/2}, also has a pentagrammic vertex figure, with vertex configuration (3.3.3.3.3)/2 or (3^5)/2.
Inverted polygons
Faces on a vertex figure are considered to progress in one direction. Some uniform polyhedra have vertex figures with inversions where the faces progress retrograde. A vertex figure represents this in the star polygon notation of sides p/q such that p<2q, where p is the number of sides and q the number of turns around a circle. For example, "3/2" means a triangle that has vertices that go around twice, which is the same as backwards once. Similarly "5/3" is a backwards pentagram 5/2.
All uniform vertex configurations of regular convex polygons
Semiregular polyhedra have vertex configurations with positive angle defect.
NOTE: The vertex figure can represent a regular or semiregular tiling on the plane if its defect is zero. It can represent a tiling of the hyperbolic plane if its defect is negative.
For uniform polyhedra, the angle defect can be used to compute the number of vertices. Descartes' theorem states that all the angle defects in a topological sphere must sum to $4\pi$ radians or 720 degrees.
Since uniform polyhedra have all identical vertices, this relation allows us to compute the number of vertices, which is $4\pi$/defect or 720/defect.
Example: A truncated cube 3.8.8 has an angle defect of 30 degrees. Therefore, it has 720/30 = 24 vertices.
In particular it follows that {a,b} has 4 / (2 − b(1 − 2/a)) vertices.
Every enumerated vertex configuration potentially uniquely defines a semiregular polyhedron. However, not all configurations are possible.
Topological requirements limit existence. Specifically p.q.r implies that a p-gon is surrounded by alternating q-gons and r-gons, so either p is even or q equals r. Similarly q is even or p equals r, and r is even or p equals q. Therefore, potentially possible triples are 3.3.3, 3.4.4, 3.6.6, 3.8.8, 3.10.10, 3.12.12, 4.4.n (for any n>2), 4.6.6, 4.6.8, 4.6.10, 4.6.12, 4.8.8, 5.5.5, 5.6.6, 6.6.6. In fact, all these configurations with three faces meeting at each vertex turn out to exist.
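The parity and defect conditions are easy to enumerate by machine; the Python sketch below reproduces the triples and vertex counts listed after this paragraph (the search is capped at 12-gons for brevity, and zero defect marks a planar tiling):
from fractions import Fraction

def interior_angle(p):            # interior angle of a regular p-gon, in degrees
    return Fraction(180) * (p - 2) / p

def parity_ok(p, q, r):           # "each number is even or the other two are equal"
    return ((p % 2 == 0 or q == r) and
            (q % 2 == 0 or p == r) and
            (r % 2 == 0 or p == q))

for p in range(3, 13):
    for q in range(p, 13):
        for r in range(q, 13):
            defect = 360 - (interior_angle(p) + interior_angle(q)
                            + interior_angle(r))
            if parity_ok(p, q, r) and defect >= 0:
                kind = "tiling" if defect == 0 else \
                       f"{Fraction(720) / defect} vertices"
                print(f"{p}.{q}.{r}: {kind}")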
The number in parentheses is the number of vertices, determined by the angle defect.
Triples
Platonic solids 3.3.3 (4), 4.4.4 (8), 5.5.5 (20)
prisms 3.4.4 (6), 4.4.4 (8; also listed above), 4.4.n (2n)
Archimedean solids 3.6.6 (12), 3.8.8 (24), 3.10.10 (60), 4.6.6 (24), 4.6.8 (48), 4.6.10 (120), 5.6.6 (60).
regular tiling 6.6.6
semiregular tilings 3.12.12, 4.6.12, 4.8.8
Quadruples
Platonic solid 3.3.3.3 (6)
antiprisms 3.3.3.3 (6; also listed above), 3.3.3.n (2n)
Archimedean solids 3.4.3.4 (12), 3.5.3.5 (30), 3.4.4.4 (24), 3.4.5.4 (60)
regular tiling 4.4.4.4
semiregular tilings 3.6.3.6, 3.4.6.4
Quintuples
Platonic solid 3.3.3.3.3 (12)
Archimedean solids 3.3.3.3.4 (24), 3.3.3.3.5 (60) (both chiral)
semiregular tilings 3.3.3.3.6 (chiral), 3.3.3.4.4, 3.3.4.3.4 (note that the two different orders of the same numbers give two different patterns)
Sextuples
regular tiling 3.3.3.3.3.3
Face configuration
The uniform dual or Catalan solids, including the bipyramids and trapezohedra, are vertically-regular (face-transitive) and so they can be identified by a similar notation which is sometimes called face configuration. Cundy and Rollett prefixed these dual symbols by a V. In contrast, Tilings and patterns uses square brackets around the symbol for isohedral tilings.
This notation represents a sequential count of the number of faces that exist at each vertex around a face. For example, V3.4.3.4 or V(3.4)^2 represents the rhombic dodecahedron which is face-transitive: every face is a rhombus, and alternating vertices of the rhombus contain 3 or 4 faces each.
Notes
References
Cundy, H. and Rollett, A., Mathematical Models (1952), (3rd edition, 1989, Stradbroke, England: Tarquin Pub.), 3.7 The Archimedean Polyhedra. Pp. 101–115, pp. 118–119 Table I, Nets of Archimedean Duals, V.a.b.c... as vertically-regular symbols.
Peter Cromwell, Polyhedra, Cambridge University Press (1977) The Archimedean solids. Pp. 156–167.
Uses Cundy-Rollett symbol.
Branko Grünbaum and G. C. Shephard, Tilings and Patterns (1987). Pp. 58–64, Tilings of regular polygons a.b.c.... (Tilings by regular polygons and star polygons) pp. 95–97, 176, 283, 614–620, Monohedral tiling symbol [v1.v2. ... .vr]. pp. 632–642 hollow tilings.
The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (p. 289 Vertex figures, uses comma separator, for Archimedean solids and tilings).
External links
Consistent Vertex Descriptions Stella (software), Robert Webb
Polyhedra
Polytope notation systems | Vertex configuration | Mathematics | 2,229 |
4,355,092 | https://en.wikipedia.org/wiki/Antibody-dependent%20cellular%20cytotoxicity | Antibody-dependent cellular cytotoxicity (ADCC), also referred to as antibody-dependent cell-mediated cytotoxicity, is a mechanism of cell-mediated immune defense whereby an effector cell of the immune system kills a target cell, whose membrane-surface antigens have been bound by specific antibodies. It is one of the mechanisms through which antibodies, as part of the humoral immune response, can act to limit and contain infection.
ADCC is independent of the immune complement system that also lyses targets but does not require any other cell. ADCC requires an effector cell which classically is known to be natural killer (NK) cells that typically interact with immunoglobulin G (IgG) antibodies. However, macrophages, neutrophils and eosinophils can also mediate ADCC, such as eosinophils killing certain parasitic worms known as helminths via IgE antibodies.
In general, ADCC has typically been described as the immune response to antibody-coated cells leading ultimately to the lysing of the infected or non-host cell. In recent literature, its importance in regards to treatment of cancerous cells and deeper insight into its deceptively complex pathways have been topics of increasing interest to medical researchers.
NK cells
The typical ADCC involves activation of NK cells by antibodies in a multi-tiered progression of immune control. An NK cell expresses Fcγ receptors. These receptors recognize and bind to the reciprocal portion of an antibody, such as IgG, which binds to the surface of a pathogen-infected target cell. The most common of these Fc receptors on the surface of an NK cell is CD16 or FcγRIII. Once the Fc receptor binds to the Fc region of the antibody, the NK cell releases cytotoxic factors that cause the death of the target cell.
During replication of a virus, some of the viral proteins are expressed on the cell surface membrane of the infected cell. Antibodies can then bind to these viral proteins. Next, the NK cells which have reciprocal Fcγ receptors will bind to that antibody, inducing the NK cell to release proteins such as perforin and proteases known as granzymes, which causes the lysis of the infected cell to hinder the spread of the virus.
Eosinophils
Large parasites like helminths are too big to be engulfed and killed by phagocytosis. They also have an external structure or integument that is resistant to attack by substances released by neutrophils and macrophages. After IgE coat these parasites, the Fc receptor (FcɛRI) of an eosinophil will recognize IgE. Subsequently, interaction between FcεRI and the Fc portion of helminth-bound IgE signals the eosinophil to degranulate.
In vitro assays
Several laboratory methods exist for determining the efficacy of antibodies or effector cells in eliciting ADCC. Usually, a target cell line expressing a certain surface-exposed antigen is incubated with antibody specific for that antigen. After washing, effector cells expressing Fc receptor CD16 are co-incubated with the antibody-labelled target cells. Effector cells are typically PBMCs (peripheral blood mononuclear cells), of which a small percentage are NK cells (natural killer cells); less often they are purified NK cells themselves. Over the course of a few hours a complex forms between the antibody, target cell, and effector cell which leads to lysis of the cell membrane of the target. If the target cell was pre-loaded with a label of some sort, that label is released in proportion to the amount of cell lysis. Cytotoxicity can be quantified by measuring the amount of label in solution compared to the amount of label that remains within healthy, intact cells.
The classical method of detecting this is the chromium-51 [51Cr] release assay; the sulfur-35 [35S] release assay is a little used radioisotope-based alternative. Target cell lysis is determined by measuring the amount of radiolabel released into the cell culture medium by means of a gamma counter or scintillation counter. A variety of non-radioactive methods are now in widespread use. Fluorescence-based methods include such things as direct labelling with a fluorescent dye like calcein or labelling with europium that becomes fluorescent when released Eu3+ binds to a chelator. Fluorescence can be measured by means of multi-well fluorometers or by flow cytometry methods. There are also enzymatic-based assays in which the contents of the lysed cells includes cellular enzymes like GAPDH that remain active; supplying a substrate for that enzyme can catalyze a reaction whose product can be detected by luminescence or by absorbance.
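Such release-assay readouts are commonly reduced to a percent specific lysis; a Python sketch of the usual calculation follows (the example counts are invented and exact conventions vary between laboratories):
def percent_specific_lysis(experimental, spontaneous, maximum):
    # experimental: label released with effector cells and antibody present
    # spontaneous:  label released by target cells alone
    # maximum:      label released by fully lysed (e.g., detergent-treated) targets
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

print(percent_specific_lysis(1800, 400, 4200))  # ~36.8 percent specific lysis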
Medical applications
NK cells are involved in killing tumor cells and other cells that may lack MHC I on their surface, indicating a non-self cell. NK cells have been shown to behave similarly to memory cells due to their ability to react to destroy non-host cells only after interacting with a host cell. As NK cells are not themselves specific to certain pathways of immune control, they are utilized a majority of the time in ADCC as a less discriminate cell destroyer than antibody-specific apoptosis mechanisms. The ability of ex vivo activated NK cells has been a topic of interest for the treatment of tumors. After early clinical trials involving activation through cytokines produced poor results and severe toxicological side effects, more recent studies have produced success in regulating metastatic tumors using interleukin proteins to activate the NK cells.
The effects against solid tumors of trastuzumab and rituximab monoclonal antibodies have been shown in experiments with mice to involve ADCC as an important mechanism of therapeutic action. In the clinic, the FcγRIII 158V/F polymorphism interferes with the ability to generate ADCC responses in vitro during trastuzumab treatment.
Multiple myeloma can be treated with daratumumab (Darzalex) monoclonal antibody. Studies with in vitro materials and patient materials indicate that ADCC is an important mechanism, along with CDC (complement-dependent cytotoxicity).
ADCC as used in immune control is typically more useful for viral infections than bacterial infections due to IgG antibodies binding to virus-related antigens over prokaryotic cells. Instead of ADCC removing outside toxins, immunoglobulins neutralize products of infecting bacteria and encase infected host cells that have had bacterial toxins directly inserted through the cell membrane.
ADCC is also important in the use of vaccines, as creation of antibodies and the destruction of antigens introduced to the host body are crucial to building immunity through small exposure to viral and bacterial proteins. Examples of this include vaccines targeting repeats in toxins (RTX) that are structurally crucial to a wide variety of erythrocyte-lysing bacteria, described as hemolysins. These bacteria target the CD18 portion of leukocytes, which has historically been shown to impact ADCC in adhesion-deficient cells.
See also
Afucosylated monoclonal antibodies
References
Further reading
External links
University of Leicester, Virus Immunopathology Notes
Immune system | Antibody-dependent cellular cytotoxicity | Biology | 1,522 |
73,166,794 | https://en.wikipedia.org/wiki/List%20of%20trichloroethylene-related%20incidents | Trichloroethylene (TCE) is a common industrial solvent mostly used for metal degreasing. Due to its wide use in industries, there have been several incidences of waste TCE leaking into aquifers and contaminating groundwaters.
Due to their similar industrial uses, areas contaminated with mainly TCE may also be contaminated with tetrachloroethylene in smaller amounts.
Background
The first known report of TCE in groundwater was given in 1949 by two English public chemists who described two separate instances of well contamination by industrial releases of TCE.
Exposure to TCE occurs mainly through contaminated drinking water. With a specific gravity greater than 1 (denser than water), trichloroethylene can be present as a dense non-aqueous phase liquid (DNAPL) if sufficient quantities are spilled in the environment. Another significant source of vapor exposure in Superfund sites that had contaminated groundwater, such as the Twin Cities Army Ammunition Plant, was by showering. TCE readily volatilizes out of hot water and into the air. Long, hot showers would then volatilize more TCE into the air. In a home closed tightly to conserve the cost of heating and cooling, these vapors would then recirculate. Based on available federal and state surveys, between 9% and 34% of the drinking water supply sources tested in the U.S. may have some TCE contamination, though EPA has reported that most water supplies are in compliance with the maximum contaminant level (MCL) of 5 ppb.
In addition, a growing concern in recent years at sites with TCE contamination in soil or groundwater has been vapor intrusion in buildings, which has resulted in indoor air exposures, such as in a recent case in the McCook Field neighborhood of Dayton, Ohio, United States. Trichloroethylene has been detected in 852 Superfund sites across the United States, according to the Agency for Toxic Substances and Disease Registry (ATSDR). Under the Safe Drinking Water Act of 1974, as amended, annual water quality testing is required for all public drinking water distributors. The EPA's current guidelines for TCE are online.
The EPA's table of "TCE Releases to Ground" is dated 1987 to 1993, thereby omitting one of the largest Superfund cleanup sites in the nation, the North IBW in Scottsdale, Arizona. Earlier, TCE was dumped here, and was subsequently detected in the municipal drinking water wells in 1982, prior to the study period.
Marine Corps Base Camp Lejeune in North Carolina may be the largest TCE contamination site in the United States. Legislation could force the EPA to establish a health advisory and a national public drinking water regulation to limit trichloroethylene.
The 1998 film A Civil Action dramatizes the EPA lawsuit Anne Anderson, et al., v. Cryovac, Inc. concerning trichloroethylene contamination that occurred in Woburn, Massachusetts in the 1970s and 1980s.
1980s
Between 1975 and 1985, the water supply of Marine Corps Base Camp Lejeune was contaminated with trichloroethylene and other volatile organic compounds.
In 1986, and later again in 2009, two plumes containing trichloroethylene were found on Long Island, New York, originating from Northrop Grumman's Bethpage factories that worked in conjunction with the United States Navy during the 1930s and 1940s.
In 1988, the EPA discovered tons of TCE that had been leaked or dumped into the ground by the United States military and semiconductor industry (companies including Fairchild Semiconductor, Intel Corporation, and Raytheon Company) just outside NASA Ames in Moffett Field, Mountain View, California.
In 1987, Hill Air Force Base in Layton, Utah, was declared a Superfund site and added to the U.S. Environmental Protection Agency's National Priorities List. Contamination with TCE has been detected in the groundwater throughout Weber County, Utah.
1990s
In 1990, Fort Ord, CA was added to the EPA's National Priorities List. Veterans have linked trichloroethylene as the underlying cause for high incidence rates of multiple myeloma.
In 1992, Lockformer conducted soil sampling on their property and found TCE in the soil at levels as high as 680 parts per million (ppm). During the summer of 2000, a group of residents hired legal counsel, and on October 11, 2000, these residents had their private well water tested by a private environmental consultant. The group owned homes south of the Lockformer property in the suspected path of groundwater flow. The consultant collected a second round of well water samples on November 10, 2000, and TCE was detected in some of the wells sampled. Beginning in December 2000, Illinois EPA collected about 350 more private well water samples north and south of the Lockformer property.
For over 20 years of operation, RCA Corporation had been pouring toxic wastewater into a well in its Taoyuan City, Taiwan facility. The pollution from the plant was not revealed until 1994, when former workers brought it to light. Investigation by the Taiwan Environmental Protection Administration confirmed that RCA had been dumping chlorinated organic solvents into a secret well and caused contamination to the soil and groundwater surrounding the plant site. High levels of TCE and tetrachloroethylene can be found in groundwater drawn as far as two kilometers from the site.
In 1998, the View-Master factory supply well in Beaverton, Oregon was found to have been contaminated with high levels of TCE. It was estimated that 25,000 factory workers had been exposed to it from 1950 to 2001.
In the case of Lisle, Illinois, releases of trichloroethylene had allegedly occurred on the Lockformer property beginning in 1968 and continuing for an undetermined period. The company used TCE in the past as a degreaser to clean metal parts. Contamination at the Lockformer site is presently under investigation by the U.S. Environmental Protection Agency and Illinois EPA.
2000s and 2010s
As of 2007, 57,000 pounds, or 28.5 tons, of TCE had been removed from the system of wells that once supplied drinking water to the residents of Scottsdale, Arizona. One of the three drinking water wells previously owned by the City of Phoenix and ultimately sold to the City of Scottsdale tested at 390 ppb TCE when it was closed in 1982. The City of Scottsdale recently updated its website to clarify that the contaminated wells were "in the Scottsdale area," and amended all references to the measured levels of TCE discovered when the wells were closed (including "390 ppb") to "trace".
In June 2012, residents of an area off of Stony Hill Road, Wake Forest, NC were contacted by the EPA and DWQ about possible TCE contamination after authorities followed up on existing TCE contamination in 2005. Subsequent EPA testing found multiple sites with detectable levels of TCE and several with levels above the MCL.
In December 2017, tonnes of waste TCE and tetrachloroethylene were dumped into sewers in the Tuzla district of Istanbul, Turkey. The leak allegedly affected thousands of people in neighbouring areas, especially those with asthma, and about 97 people were hospitalised. The Istanbul Metropolitan Municipality (İBB) stated that the situation did not pose any danger to human health. Various residents said that it was a "normal occurrence", that chemical leaks were "the fate of Tuzla", and that they consumed yoghurt after the heavy exposure. Trichloroethylene is widely used and unregulated in Turkey; TCE imports were estimated at about 2.16 million dollars in 2020.
In February 2020, McClymonds High School in West Oakland, California was temporarily closed after trichloroethylene was found in groundwater beneath the school.
Regulations
United States
Until recent years, the US Agency for Toxic Substances and Disease Registry (ATSDR) contended that trichloroethylene had little-to-no carcinogenic potential, and was probably a co-carcinogen—that is, it acted in concert with other substances to promote the formation of tumors.
In 2023, the United States EPA determined that trichloroethylene presents an unreasonable risk of injury to human health under 52 out of 54 conditions of use, including during manufacturing, processing, mixing, recycling, vapor degreasing, as a lubricant, adhesive, sealant, cleaning product, and spray. It is dangerous from both inhalation and dermal exposure and was most strongly associated with immunosuppressive effects for acute exposure, as well as autoimmune effects for chronic exposures.
As of June 1, 2023, two U.S. states (Minnesota and New York) have acted on the EPA's findings and banned trichloroethylene in all cases but research and development.
Proposed U.S. federal regulation
In 2001, a draft report of the Environmental Protection Agency (EPA) laid the groundwork for tough new standards to limit public exposure to trichloroethylene. The assessment set off a fight between the EPA and the Department of Defense (DoD), the Department of Energy, and NASA, who appealed directly to the White House. They argued that the EPA had produced junk science, its assumptions were badly flawed, and that evidence exonerating the chemical was ignored.
The DoD has about 1,400 military properties nationwide that are contaminated with trichloroethylene. Many of these sites are detailed and updated by www.cpeo.org and include a former ammunition plant in the Twin Cities area. Twenty three sites in the Energy Department's nuclear weapons complex—including Lawrence Livermore National Laboratory in the San Francisco Bay area, and NASA centers, including the Jet Propulsion Laboratory in La Cañada Flintridge are reported to have TCE contamination.
Political appointees in the EPA sided with the Pentagon and agreed to pull back the risk assessment. In 2004, the National Academy of Sciences was given a $680,000 contract to study the matter, releasing its report in the summer of 2006. The report has raised more concerns about the health effects of TCE.
European Union
In the European Union, the Scientific Committee on Occupational Exposure Limit Values (SCOEL) recommends an exposure limit for workers exposed to trichloroethylene of 10 ppm (54.7 mg/m3) for 8-hour TWA and of 30 ppm (164.1 mg/m3) for STEL (15 minutes).
Existing EU legislation aimed at protection of workers against risks to their health (including Chemical Agents Directive 98/24/EC and Carcinogens Directive 2004/37/EC) currently do not impose binding minimum requirements for controlling risks to workers health during the use phase or throughout the life cycle of trichloroethylene. However, in case the ongoing discussions under the Carcinogens Directive will result in setting of a binding Occupational Exposure Limit for trichloroethylene for protection of workers; this conclusion may be revisited.
The Solvents Emissions Directive 1999/13/EC and Industrial Emissions Directive 2010/75/EC impose binding minimum requirements for emissions of trichloroethylene to the environment for certain activities, including surface cleaning. However, the activities with solvent consumption below a specified threshold are not covered by these minimum requirements.
According to European regulation, the use of trichloroethylene is prohibited for individuals at a concentration greater than 0.1%. In industry, trichloroethylene should be substituted before April 21, 2016 (unless an exemption is requested before October 21, 2014) by other products such as tetrachloroethylene (perchloroethylene), methylene chloride (dichloromethane), or other hydrocarbon derivatives (ketones, alcohols, ...).
Reduced production
In recent times, there has been a substantial reduction in the production output of trichloroethylene; alternatives for use in metal degreasing abound, some chlorinated aliphatic hydrocarbons being phased out in a large majority of industries due to the potential for health effects and the legal liability that ensues as a result.
The U.S. military has virtually eliminated its use of the chemical, allegedly purchasing only 11 gallons in 2005. About 100 tons of it was used annually in the U.S. as of 2006.
References
Soil contamination
Water pollution
Pollution-related lists | List of trichloroethylene-related incidents | Chemistry,Environmental_science | 2,578 |
20,896,234 | https://en.wikipedia.org/wiki/Deaf%20animal | Some strains of animals, such as white cats, have a tendency to congenital deafness. Some known chemicals and elements can also affect deafness in animals.
Deafness can occur in almost any breed of cat or dog. This includes both pure-breed and mixed-breed animals, although there may be more prevalence in some specific breeds.
"The association between patterns of pigmentation and deafness in the dog has a long-documented history, with reports dating back over one hundred years. Long suspected of having a genetic basis, the search for loci with a pronounced influence in the expression of hearing loss in the dog has yet to be successful."
Deafness in animals can occur as either unilateral (one ear affected) or bilateral (both ears affected). This occurrence of either type of deafness seems to be relatively the same in both mixed-breed animals and pure-breed animals.
Research has found a significant association between deafness in dogs and the pigment genes piebald and merle. Although merle dogs seem to have higher occurrences of both kinds of deafness than some other breeds, this research also showed that they had lower occurrences than others, so there is still more to be learned about the causes of deafness in animals such as dogs.
Common misconceptions may lead potential owners to believe that deaf dogs may be more likely to have an unpleasant disposition, or that the condition implies other brain abnormalities. Many people have successfully raised and trained deaf animals. Teaching a deaf dog may present unusual challenges, but inventiveness can overcome many of them. For example, when on a walk with a deaf dog, a laser pointer could be used to attract the animal's attention.
See also
Congenital sensorineural deafness in cats
Deafness in humans
Mechanosensation
References
Further reading
FAQ: Deaf Animals from Gallaudet University
Animal anatomy
Domesticated animals
Human–animal interaction
Deafness | Deaf animal | Biology | 388 |
59,601,600 | https://en.wikipedia.org/wiki/Squeeze%20flow | Squeeze flow (also called squeezing flow, squeezing film flow, or squeeze flow theory) is a type of flow in which a material is pressed out or deformed between two parallel plates or objects. First explored in 1874 by Josef Stefan, squeeze flow describes the outward movement of a droplet of material, its area of contact with the plate surfaces, and the effects of internal and external factors such as temperature, viscoelasticity, and heterogeneity of the material. Several squeeze flow models exist to describe Newtonian and non-Newtonian fluids undergoing squeeze flow under various geometries and conditions. Numerous applications across scientific and engineering disciplines including rheometry, welding engineering, and materials science provide examples of squeeze flow in practical use.
Basic Assumptions
Conservation of mass (expressed as a continuity equation), the Navier-Stokes equations for conservation of momentum, and the Reynolds number provide the foundations for calculating and modeling squeeze flow. Boundary conditions for such calculations include an incompressible fluid, a two-dimensional system, neglect of body forces, and neglect of inertial forces.
Relating applied force to material thickness:
$F = \frac{\mu W L_0^3}{h^3}\left(-\frac{dh}{dt}\right)$
Where $F$ is the applied squeezing force, $L_0$ is the initial length of the droplet, $\mu$ is the fluid viscosity, $W$ is the width of the assumed rectangular plate, $h$ is the droplet height, and $dh/dt$ is the change in droplet height over time. To simplify most calculations, the applied force is assumed to be constant.
Newtonian fluids
Several equations accurately model Newtonian droplet sizes under different initial conditions.
Consideration of a single asperity, or surface protrusion, allows for measurement of a very specific cross-section of a droplet. To measure macroscopic squeeze flow effects, models exist for the two most common plate geometries: circular and rectangular plate squeeze flows.
Single asperity
For single asperity squeeze flow:
Where h0 is the initial height of the droplet, h is the final height of the droplet, F is the applied squeezing force, t is the squeezing time, μ is the fluid viscosity, W is the width of the assumed rectangular plate, and L is the initial length of the droplet.
Based on conservation of mass calculations, the droplet width is inversely proportional to droplet height; as the width increases, the height decreases in response to squeezing forces.
Circular plate
For circular plate squeeze flow:
Here R is the radius of the circular plate; the remaining symbols are as defined above.
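For reference, the classical Stefan result for a Newtonian fluid between rigid circular plates with no slip, together with its integral for constant applied force, is commonly written as follows; this is a standard textbook form and not necessarily the exact expression the original article displayed:

```latex
F = \frac{3\pi\mu R^{4}}{2h^{3}}\left(-\frac{dh}{dt}\right),
\qquad
\frac{1}{h(t)^{2}} = \frac{1}{h_{0}^{2}} + \frac{4Ft}{3\pi\mu R^{4}} \quad (F \text{ constant})
```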
Rectangular plate
For rectangular plate squeeze flow:
These calculations assume a melt layer that has a length much larger than the sample width and thickness.
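A commonly quoted plane-flow analogue, assuming a Newtonian fluid, no slip, constant contact area, and a melt layer whose length L greatly exceeds the width W so that fluid escapes across the width, is sketched below; numeric prefactors vary between references with the definitions of gap and width:

```latex
F = \frac{\mu L W^{3}}{h^{3}}\left(-\frac{dh}{dt}\right)
```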
Non-Newtonian fluids
Simplifying calculations for Newtonian fluids allows for basic analysis of squeeze flow, but many polymers can exhibit properties of non-Newtonian fluids, such as viscoelastic characteristics, under deformation. The power law fluid model is sufficient to describe behaviors above the melting temperature for semicrystalline thermoplastics or the glass transition temperature for amorphous thermoplastics, and the Bingham fluid model provides calculations based on variations in yield stress calculations.
Power law fluid
For squeeze flow in a power law fluid:
Where K (sometimes written m) is the flow consistency index and n is the dimensionless flow behavior index.
The consistency index follows an Arrhenius-type temperature dependence, K = K0 exp(Ea/(RT)), where K is the flow consistency index, K0 is the initial flow consistency index, Ea is the activation energy, R is the universal gas constant, and T is the absolute temperature.
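A minimal numeric sketch combining the power-law viscosity η = K γ̇^(n−1) with the Arrhenius form above (parameter values are illustrative assumptions, not data from the studies cited here):

```python
import math

def consistency_index(K0, Ea, T, R=8.314):
    """Arrhenius temperature dependence of the flow consistency index."""
    return K0 * math.exp(Ea / (R * T))

def apparent_viscosity(K, n, shear_rate):
    """Power-law (Ostwald-de Waele) apparent viscosity: eta = K * gamma_dot**(n - 1)."""
    return K * shear_rate ** (n - 1.0)

# Illustrative values for a shear-thinning polymer melt (n < 1).
K = consistency_index(K0=1e-3, Ea=50e3, T=450.0)   # Pa*s^n
for gdot in (0.1, 1.0, 10.0, 100.0):               # shear rates, 1/s
    eta = apparent_viscosity(K, 0.4, gdot)
    print(f"shear rate {gdot:7.1f} 1/s -> eta = {eta:8.3f} Pa*s")
```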
During experimentation to determine the accuracy of the power law fluid model, observations showed that modeling slow squeeze flow generated inaccurate power law constants (K and n) when measured with a standard viscometer, and fast squeeze flow demonstrated that polymers may exhibit better lubrication than current constitutive models predict. The current empirical model for power law fluids is relatively accurate for modeling inelastic flows, but certain kinematic flow assumptions and an incomplete understanding of polymeric lubrication properties tend to produce inaccurate modeling of power law fluids.
Bingham fluid
Bingham fluids exhibit uncommon characteristics during squeeze flow. While undergoing compression, Bingham fluids should fail to move and act as a solid until achieving a yield stress; however, as the parallel plates move closer together, the fluid shows some radial movement. One study proposes a “biviscosity” model where the Bingham fluid retains some unyielded regions that maintain solid-like properties, while other regions yield and allow for some compression and outward movement.
Where μ is the known viscosity of the Bingham fluid, μ1 is the "paradoxical" viscosity of the solid-like state, and τ1 is the biviscosity region stress. To determine this new stress:
Where τ0 is the yield stress and m = μ1/μ is the dimensionless viscosity ratio. If m = 1, the fluid exhibits Newtonian behavior; as m → ∞, the Bingham model applies.
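A sketch of a biviscosity constitutive law of the kind described, with the branch-matching condition chosen for continuity; the functional form and symbols are illustrative assumptions rather than the exact model of the cited study:

```python
def biviscosity_stress(gamma_dot, tau_y, mu, m):
    """Shear stress for a biviscosity (regularized Bingham) model.

    Below a critical shear rate the material responds with a large
    'paradoxical' viscosity mu1 = m * mu, mimicking the unyielded
    solid-like regions; above it the ordinary Bingham law
    tau = tau_y + mu * gamma_dot applies. The branches meet continuously
    at gamma_crit.
    """
    mu1 = m * mu
    gamma_crit = tau_y / (mu1 - mu)
    if gamma_dot < gamma_crit:
        return mu1 * gamma_dot          # "solid-like" biviscosity branch
    return tau_y + mu * gamma_dot       # yielded Bingham branch

# m = 1 would recover a Newtonian fluid; m -> infinity recovers the ideal
# Bingham model. Illustrative evaluation:
print(biviscosity_stress(0.01, tau_y=10.0, mu=0.5, m=1000.0))  # below yield
print(biviscosity_stress(50.0, tau_y=10.0, mu=0.5, m=1000.0))  # yielded
```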
Applications
Squeeze flow application is prevalent in several science and engineering fields. Modeling and experimentation assist with understanding the complexities of squeeze flow during processes such as rheological testing, hot plate welding, and composite material joining.
Rheological testing
Squeeze flow rheometry allows for evaluation of polymers under wide ranges of temperatures, shear rates, and flow indexes. Parallel plate plastometers provide analysis for high viscosity materials such as rubber and glass, cure times for epoxy resins, and fiber-filled suspension flows. While viscometers provide useful results for squeeze flow measurements, testing conditions such as applied rotation rates, material composition, and fluid flow behaviors under shear may require the use of rheometers or other novel setups to obtain accurate data.
Hot plate welding
During conventional hot plate welding, a successful joining phase depends on proper maintenance of squeeze flow to ensure that pressure and temperature create an ideal weld. Excessive pressure causes squeeze out of valuable material and weakens the bond due to fiber realignment in the melt layer, while failure to allow cooling to room temperature creates weak, brittle welds that crack or break completely during use.
Composite material joining
Prevalent in the aerospace and automotive industries, composites serve as expensive, yet mechanically strong, materials in the construction of several types of aircraft and vehicles. While aircraft parts are typically composed of thermosetting polymers, thermoplastics may become an analog to permit increased manufacturing of these stronger materials through their melting abilities and relatively inexpensive raw materials. Characterization and testing of thermoplastic composites experiencing squeeze flow allow for study of fiber orientations within the melt and final products to determine weld strength. Fiber strand length and size show significant effects on material strength, and squeeze flow causes fibers to orient along the load direction while being perpendicular to the joining direction to achieve the same final properties as thermosetting composites.
References
Welding
Plastics
Materials science
Molding processes | Squeeze flow | Physics,Materials_science,Engineering | 1,338 |
77,753,239 | https://en.wikipedia.org/wiki/La%20Parguera%20Nature%20Reserve | La Parguera Nature Reserve (Spanish: Reserva Natural de la Parguera) is a protected area located in southwestern Puerto Rico, primarily in the municipality of Lajas but also covering cays and islets under the municipal jurisdictions of Guánica and Cabo Rojo. The nature reserve is itself a unit of the Boquerón State Forest and it protects the Bahía Montalva mangrove forest in addition to mangrove bays, salt marshes and lagoons located along the coast of the Parguera barrio of Lajas, including its numerous cays and coral reefs. The reserve is most famous for its bioluminescent bay, locally called Bahía Fosforecente (Spanish for 'phosphorescent bay'), one of three of its kind in Puerto Rico and one of the seven places in the Caribbean where bioluminescence can be seen year-round.
Geography
The nature reserve is centered around La Parguera Bay, a large body of water located immediately south of the town of Parguera (Poblado de Parguera). This body of water is surrounded by heavily forested bays, some of which include Puerto Quijano, Bahía Fosforecente, Bahía Monsio José, and Bahía Montalva. It also contains numerous cays and islets, such as Isla Mata la Gata, Cayo El Palo, Cayo San Cristóbal, Cayo Laurel, Cayo El Turrumote and Isla Mattei. The general area is bound to the north by the Sierra Bermeja and the Lajas Valley, and to the south by the Caribbean Sea.
Geology
The reserve is located in the Southern Puerto Rico karst region, characterized by reddish limestone. The area is also traversed by the recently discovered Punta Montalva fault, which was responsible for the 2019–20 Puerto Rico earthquakes.
Ecology
The environment of the nature reserve belongs to the Puerto Rican dry forest and Greater Antilles mangroves ecoregions. Administratively, La Parguera Nature Reserve is intended to protect an ecological corridor between the Boquerón and Guánica State Forests. The bay is also rich in coral reefs such as La Pared, a 20-mile-long underwater drop-off notable for its coral colonies and numerous fish and stingrays.
Fauna
Some of the most common animal species in the reserve include Adelaide's warbler (Setophaga adelaidae), brown pelicans (Pelecanus occidentalis), mangrove cuckoos (Coccyzus minor), Puerto Rican crescent sphaero (Sphaerodactylus nicholsi) and the Puerto Rican tody (Todus mexicanus). The protection of several endangered animal species such as manatees (Trichechus manatus), Cook's anole (Anolis cooki), the Puerto Rican nightjar (Antrostomus noctitherus) and the Yellow-shouldered blackbird (Agelaius xanthomus) was another reason for the official nature reserve designation.
Flora
Key plant species in the nature reserve include the almacigo (Bursera simaruba), bullet trees (Terminalia buceras), guayacan (Guaiacum sanctum), key thatch palms (Thrinax morisii), pink manjack (Tabebuia heterophylla), pipe organ cacti (Pilosocereus royenii), Turk's cap cacti (Melocactus intortus), and the endangered species guaiacwood (Guaiacum officinale), the sebucan (Leptocereus quadricostatus) and uña de gato (Pithecellobium unguis-cati). The area is also home to the extremely rare and critically endangered Psychilis krugii orchid.
History
There is evidence in and around Isla Mattei that the area was inhabited by the Taino by the time of the Spanish conquest of Puerto Rico. Due to the swampy and dense mangrove forest cover of the area, it remained undeveloped for most of its history. During the 18th-century the developed portions of the area formed part of sugarcane farms and haciendas, most notably Hacienda Fortuna and Finca Botoncillo. Corsican-Puerto Rican businessman Don Francisco Antonio Mattey was the owner and proprietor of these terrains, and the cay Isla Mattei is named after him. The salt marshes located immediately to the north and the northeast of the bioluminescent bay were further developed as salt evaporation ponds during the late 18th and early 19th-centuries.
The town of Parguera (Poblado de Parguera), also known as simply La Parguera, was first settled as a fishing village (villa pesquera) in 1825 as Villa Parguera (also the official name of the settlement), meaning 'red snapper village' after the prominence of Northern red snapper (Lutjanus campechanus) in the area. The local fishing industry, however, soon diminished due to overfishing, and the economy quickly transitioned to tourism as the main local industry during the 20th century. Tourism boomed with the establishment of Parador Villa Parguera by comedian and tourism businessman Henry LaFont during the 1960s. The quick development prompted the establishment of a zone of ecological protection, and, in 1972, the federal government established the Coastal Zone Management Law (Ley de Manejo de la Zona Costanera), which included the area as a critical zone of protection. The Puerto Rico Department of Natural and Environmental Resources (DRNA) further established the Puerto Rico Coastal Zone Management Program (Programa de Manejo de la Zona Costanera de Puerto Rico) to mitigate the impact of tourism development in the coastal zones of the territory. The nature reserve was finally designated on September 20, 1979, making it officially the fifth nature reserve in Puerto Rico after La Esperanza (1975), Punta Yeguas (1975), Punta Guaniquilla (1977) and La Cordillera Reef (1978).
Although tourism has proven to be the lifeline of the community, it has had a negative impact on the environment through the destruction of mangroves to build hotels, such as Parador Villa Parguera, and holiday residences, and through the busy boating activity around the cays and reefs, which at times has proven fatal to local animal populations such as manatees. Human activity has also proven disastrous for the bioluminescence in the area, with Bahía Fosforecente now being the most endangered and least preserved of the three bio bays in Puerto Rico.
See also
Boquerón State Forest
Puerto Mosquito
References
IUCN Category V
Bays of Puerto Rico
Bioluminescence
Cabo Rojo, Puerto Rico
Guánica, Puerto Rico
Lajas, Puerto Rico
Protected areas established in 1979
Protected areas of Puerto Rico
Tourist attractions in Puerto Rico
1979 establishments in Puerto Rico | La Parguera Nature Reserve | Chemistry,Biology | 1,425 |
20,881,371 | https://en.wikipedia.org/wiki/Non-place | Non-place or nonplace is a neologism coined by the French anthropologist Marc Augé to refer to anthropological spaces of transience where human beings remain anonymous, and that do not hold enough significance to be regarded as "places" in their anthropological definition. Examples of non-places would be motorways, hotel rooms, airports and shopping malls. The term was introduced by Marc Augé in his work Non-places: introduction to an anthropology of supermodernity, although it bears a strong resemblance to earlier concepts introduced by Edward Relph in Place and Placelessness and Melvin Webber in his writing on the 'nonplace urban realm'.
The concept of non-place is opposed, according to Augé, to the notion of "anthropological place". The place offers people a space that empowers their identity, where they can meet other people with whom they share social references. The non-places, on the contrary, are not meeting spaces and do not build common references to a group. Finally, a non-place is a place we do not live in, in which the individual remains anonymous and lonely. Augé avoids making value judgments on non-places and looks at them from the perspective of an ethnologist who has a new field of studies to explore.
From Places to Non-Places
A significant debate concerning the term and its interpretation appears in Marc Augé's writings under the title "From Places to Non-Places". The distinction between places and non-places derives from the opposition between space and place. An essential preliminary here is the analysis of the notions of place and space suggested by Michel de Certeau. Space, for him, is a frequented place and an intersection of moving bodies: it is pedestrians who transform the street (geometrically defined by town planners) into a space.
Mark Fisher's notion of non-time
For Mark Fisher, whereas cyberspace-time tends towards the generation of interchangeable cultural moments, hauntology involves the staining of particular places with time: a time that is out of joint. A "flattening sense of time" appears to Fisher as a byproduct of Augé's non-places, which, being devoid of local flavour, are temporally as well as locally indeterminate. He describes music created decades in the past as deprived of any sense of disjuncture with the present, a clear connection with his theory of capitalist realism.
See also
Sense of place
Liminality
Urban vitality
References
Anthropology
Architectural design | Non-place | Engineering | 514 |
31,632,219 | https://en.wikipedia.org/wiki/Cyclohexanedimethanol | Cyclohexanedimethanol (CHDM) is a mixture of isomeric organic compounds with formula C6H10(CH2OH)2. It is a colorless low-melting solid used in the production of polyester resins. Commercial samples consist of a mixture of cis and trans isomers. It is a di-substituted derivative of cyclohexane and is classified as a diol, meaning that it has two OH functional groups. Commercial CHDM typically has a cis/trans ratio of 30:70.
Production
CHDM is produced by the catalytic hydrogenation of dimethyl terephthalate (DMT). The reaction is conducted in two steps, beginning with the conversion of DMT to the diester dimethyl 1,4-cyclohexanedicarboxylate (DMCD):
C6H4(CO2CH3)2 + 3 H2 → C6H10(CO2CH3)2
In the second step DMCD is further hydrogenated to CHDM:
C6H10(CO2CH3)2 + 4 H2 → C6H10(CH2OH)2 + 2 CH3OH
A copper chromite catalyst is usually used industrially. The cis/trans ratio of the CHDM is affected by the catalyst.
Byproducts of this process are 4-methylcyclohexanemethanol (CH3C6H10CH2OH) and the monoester methyl 4-methylcyclohexanecarboxylate (CH3C6H10CO2CH3, CAS registry number 51181-40-9). The leading producers of CHDM are Eastman Chemical in the US and SK Chemicals in South Korea.
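As a rough illustration of the overall stoichiometry implied by the two hydrogenation steps above (DMT + 7 H2 → CHDM + 2 CH3OH), the following sketch estimates the theoretical hydrogen demand and CHDM yield per kilogram of DMT; the molar masses are standard values and the snippet is illustrative rather than a process calculation:

```python
# Overall: C6H4(CO2CH3)2 + 7 H2 -> C6H10(CH2OH)2 + 2 CH3OH
M_DMT  = 194.18   # g/mol, dimethyl terephthalate
M_CHDM = 144.21   # g/mol, cyclohexanedimethanol
M_H2   = 2.016    # g/mol, hydrogen

kg_dmt  = 1.0
mol_dmt = kg_dmt * 1000.0 / M_DMT          # ~5.15 mol
h2_kg   = 7 * mol_dmt * M_H2 / 1000.0      # hydrogen consumed, 7 mol per mol DMT
chdm_kg = mol_dmt * M_CHDM / 1000.0        # theoretical CHDM yield

print(f"{h2_kg:.3f} kg H2 and {chdm_kg:.3f} kg CHDM per kg DMT (theoretical)")
```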
Applications
Via the process called polycondensation, CHDM is a precursor to polyesters. It is one of the most important comonomers for the production of polyethylene terephthalate (PET), or polyethylene terephthalic ester (PETE), from which plastic bottles are made. In addition, it may be spun to form carpet fibers.
Thermoplastic polyesters containing CHDM exhibit enhanced strength, clarity, and solvent resistance. The properties of the polyesters vary from the high-melting crystalline poly(1,4-cyclohexylenedimethylene terephthalate), PCT, to the non-crystalline copolyesters derived from both ethylene glycol and CHDM. The properties of these polyesters are also affected by the cis/trans ratio of the CHDM monomer.
CHDM reduces the degree of crystallinity of PET homopolymer, improving its processability. The copolymer tends to resist degradation, e.g. to acetaldehyde. The copolymer with PET is known as glycol-modified polyethylene terephthalate, PETG. PETG is used in many fields, including electronics, automobiles, barrier packaging, and medical applications.
CHDM is a raw material for the production of 1,4-cyclohexanedimethanol diglycidyl ether, an epoxy diluent. The key use for this diglycidyl ether is to reduce the viscosity of epoxy resins.
References
External links
U.S. National Library of Medicine: Hazardous Substances Databank – 1,4-cyclohexanedimethanol
Monomers
Diols | Cyclohexanedimethanol | Chemistry,Materials_science | 736 |
38,633,736 | https://en.wikipedia.org/wiki/Triangulum%20in%20Chinese%20astronomy | According to traditional Chinese uranography, the modern constellation Triangulum is located within the western quadrant of the sky, which is symbolized as the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ).
The name of the western constellation in modern Chinese is 三角座 (sān jiǎo zuò), meaning "the triangle constellation".
Stars
The map of the Chinese constellations in the area of the constellation Triangulum consists of:
See also
Traditional Chinese star names
Chinese constellations
References
External links
Triangulum – Chinese associations
香港太空館研究資源
中國星區、星官及星名英譯表
天象文學
台灣自然科學博物館天文教育資訊網
中國古天文
中國古代的星象系統
Astronomy in China
Triangulum | Triangulum in Chinese astronomy | Astronomy | 178 |
295,194 | https://en.wikipedia.org/wiki/Georgi%E2%80%93Glashow%20model | In particle physics, the Georgi–Glashow model is a particular Grand Unified Theory (GUT) proposed by Howard Georgi and Sheldon Glashow in 1974. In this model, the Standard Model gauge groups SU(3) × SU(2) × U(1) are combined into a single simple gauge group SU(5). The unified group SU(5) is then thought to be spontaneously broken into the Standard Model subgroup below a very high energy scale called the grand unification scale.
Since the Georgi–Glashow model combines leptons and quarks into single irreducible representations, there exist interactions which do not conserve baryon number, although they still conserve the quantum number associated with the symmetry of the common representation. This yields a mechanism for proton decay, and the rate of proton decay can be predicted from the dynamics of the model. However, proton decay has not yet been observed experimentally, and the resulting lower limit on the lifetime of the proton contradicts the predictions of this model. Nevertheless, the elegance of the model has led particle physicists to use it as the foundation for more complex models which yield longer proton lifetimes, particularly SO(10) in basic and SUSY variants.
(For a more elementary introduction to how the representation theory of Lie algebras is related to particle physics, see the article Particle physics and representation theory.)
Also, this model suffers from the doublet–triplet splitting problem.
Construction
SU(5) acts on ℂ⁵ and hence on its exterior algebra Λℂ⁵. Choosing a splitting ℂ⁵ = ℂ² ⊕ ℂ³ restricts SU(5) to S(U(2) × U(3)), yielding block-diagonal matrices of the form diag(α, β), with α ∈ U(2), β ∈ U(3) and det(α) det(β) = 1,
and the natural homomorphism SU(2) × SU(3) × U(1) onto this subgroup has kernel ℤ₆, hence S(U(2) × U(3)) is isomorphic to the Standard Model's true gauge group [SU(2) × SU(3) × U(1)]/ℤ₆. For the zeroth power Λ⁰ℂ⁵ ≅ ℂ, this acts trivially to match a left-handed neutrino. For the first exterior power Λ¹ℂ⁵ ≅ ℂ⁵, the Standard Model's group action preserves the splitting ℂ⁵ = ℂ² ⊕ ℂ³. The ℂ² transforms trivially in SU(3), as a doublet in SU(2), and under the Y = 1/2 representation of U(1) (with weak hypercharge conventionally normalized so that Q = T₃ + Y); this matches a right-handed anti-lepton (the 2 of SU(2) being isomorphic to its conjugate). The ℂ³ transforms as a triplet in SU(3), a singlet in SU(2), and under the Y = −1/3 representation of U(1); this matches a right-handed down quark.
The second exterior power is obtained via the formula Λ²(ℂ² ⊕ ℂ³) ≅ Λ²ℂ² ⊕ (ℂ² ⊗ ℂ³) ⊕ Λ²ℂ³. As SU(5) preserves the canonical volume form of ℂ⁵, Hodge duals give the upper three powers by Λᵏℂ⁵ ≅ (Λ⁵⁻ᵏℂ⁵)*. Thus the Standard Model's representation of one generation of fermions and antifermions lies within Λℂ⁵.
Similar motivations apply to the Pati–Salam model, and to SO(10), E6, and other groups containing SU(5).
Explicit Embedding of the Standard Model (SM)
Owing to its relatively simple gauge group SU(5), the Georgi–Glashow GUT can be written in terms of vectors and matrices, which allows for an intuitive understanding of the model. The fermion sector is then composed of an antifundamental 5̄ and an antisymmetric 10. In terms of SM degrees of freedom, these can be written as
5̄_F ⊃ {d^c, L} and 10_F ⊃ {Q, u^c, e^c},
where the 5̄_F contains the conjugate down quark d^c and the lepton doublet L = (ν, e), and the 10_F contains the quark doublet Q = (u, d), the conjugate up quark u^c and the conjugate electron e^c; here u and d are the left-handed up- and down-type quarks, u^c and d^c their right-handed counterparts (written as left-handed conjugates), ν the neutrino, and e and e^c the left- and right-handed electron, respectively.
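One conventional explicit choice is shown below, with colour indices 1–3 first; signs and normalizations differ between references, so this is a representative convention rather than the unique embedding:

```latex
\bar{5}_F = \begin{pmatrix} d^c_1 \\ d^c_2 \\ d^c_3 \\ e \\ -\nu \end{pmatrix},
\qquad
10_F = \frac{1}{\sqrt{2}}
\begin{pmatrix}
0 & u^c_3 & -u^c_2 & u_1 & d_1 \\
-u^c_3 & 0 & u^c_1 & u_2 & d_2 \\
u^c_2 & -u^c_1 & 0 & u_3 & d_3 \\
-u_1 & -u_2 & -u_3 & 0 & e^c \\
-d_1 & -d_2 & -d_3 & -e^c & 0
\end{pmatrix}
```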
In addition to the fermions, we need to break SU(5); this is achieved in the Georgi–Glashow model via a fundamental 5_H which contains the SM Higgs,
5_H = (T, H⁺, H⁰), with H⁺ and H⁰ the charged and neutral components of the SM Higgs doublet, respectively. Note that the colour-triplet components T are not SM particles and are thus a prediction of the Georgi–Glashow model.
The SM gauge fields can be embedded explicitly as well. For that we recall that a gauge field transforms as an adjoint, and thus can be written as A_μ = A_μ^a T^a, with T^a the generators. Now, if we restrict ourselves to generators with non-zero entries only in the upper 3 × 3 block, in the lower 2 × 2 block, or on the diagonal, we can identify
the upper-block generators with the colour gauge fields G_μ,
the lower-block generators with the weak fields W_μ, and
the diagonal generator proportional to diag(−1/3, −1/3, −1/3, 1/2, 1/2) with the hypercharge field B_μ (up to some normalization).
Using the embedding, we can explicitly check that the fermionic fields transform as they should.
This explicit embedding can be found in the literature and in the original paper by Georgi and Glashow.
Breaking SU(5)
SU(5) breaking occurs when a scalar field (which we will denote as Φ), analogous to the Higgs field and transforming in the adjoint of SU(5), acquires a vacuum expectation value (vev) proportional to the weak hypercharge generator,
⟨Φ⟩ ∝ Y = diag(−1/3, −1/3, −1/3, 1/2, 1/2).
When this occurs, SU(5) is spontaneously broken to the subgroup of SU(5) commuting with the group generated by Y.
Using the embedding from the previous section, we can explicitly check that the unbroken group is indeed the Standard Model group by noting that the SM generators commute with the vev, [T^a_SM, ⟨Φ⟩] = 0. Computation of similar commutators further shows that all other gauge fields acquire masses.
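The statement can be illustrated numerically; in the following sketch the generator embeddings (colour in the upper 3 × 3 block, as above) are illustrative choices:

```python
import numpy as np

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# Hypercharge-direction vev, <Phi> ~ diag(-1/3, -1/3, -1/3, 1/2, 1/2)
Y = np.diag([-1/3, -1/3, -1/3, 1/2, 1/2])

# An SU(3) generator embedded in the upper 3x3 block (Gell-Mann lambda_1 / 2)
T_colour = np.zeros((5, 5)); T_colour[0, 1] = T_colour[1, 0] = 0.5

# An SU(2) generator embedded in the lower 2x2 block (Pauli sigma_1 / 2)
T_weak = np.zeros((5, 5)); T_weak[3, 4] = T_weak[4, 3] = 0.5

# An off-diagonal "X/Y boson" generator mixing the two blocks
T_X = np.zeros((5, 5)); T_X[0, 3] = T_X[3, 0] = 0.5

print(np.allclose(comm(T_colour, Y), 0))  # True  -> gluons stay massless
print(np.allclose(comm(T_weak,   Y), 0))  # True  -> W bosons stay massless
print(np.allclose(comm(T_X,      Y), 0))  # False -> X/Y bosons acquire mass
```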
To be precise, the unbroken subgroup is actually [SU(3) × SU(2) × U(1)]/ℤ₆.
Under this unbroken subgroup, the adjoint 24 transforms as
24 → (8,1)0 ⊕ (1,3)0 ⊕ (1,1)0 ⊕ (3,2)−5/6 ⊕ (3̄,2)5/6
to yield the gauge bosons of the Standard Model plus the new X and Y bosons. See restricted representation.
The Standard Model's quarks and leptons fit neatly into representations of SU(5). Specifically, the left-handed fermions combine into 3 generations of 5̄ ⊕ 10 ⊕ 1. Under the unbroken subgroup these transform as
5̄ → (3̄,1)1/3 ⊕ (1,2)−1/2, 10 → (3,2)1/6 ⊕ (3̄,1)−2/3 ⊕ (1,1)1, and 1 → (1,1)0,
to yield precisely the left-handed fermionic content of the Standard Model, where in every generation (3̄,1)1/3, (3̄,1)−2/3, (1,1)1, and (1,1)0 correspond to the anti-down-type quark, anti-up-type quark, anti-down-type lepton, and anti-up-type lepton, respectively, while (3,2)1/6 and (1,2)−1/2 correspond to the quark and lepton doublets. Fermions transforming as 1 under SU(5) are now thought to be necessary because of the evidence for neutrino oscillations, unless a way is found to introduce an infinitesimal Majorana coupling for the left-handed neutrinos.
Since the relevant homotopy group of the vacuum manifold is π₂(SU(5)/[(SU(3) × SU(2) × U(1))/ℤ₆]) ≅ ℤ,
this model predicts 't Hooft–Polyakov monopoles.
Because the electromagnetic charge Q is a linear combination of an SU(2) generator with the U(1) hypercharge Y, these monopoles also have quantized magnetic charges, where by magnetic we here mean magnetic electromagnetic charges.
Minimal supersymmetric SU(5)
The minimal supersymmetric SU(5) model assigns a matter parity to the chiral superfields with the matter fields having odd parity and the Higgs having even parity to protect the electroweak Higgs from quadratic radiative mass corrections (the hierarchy problem). In the non-supersymmetric version the action is invariant under a similar symmetry because the matter fields are all fermionic and thus must appear in the action in pairs, while the Higgs fields are bosonic.
Chiral superfields
As complex representations:
Superpotential
A generic invariant renormalizable superpotential is a (complex) invariant cubic polynomial in the superfields. It is a linear combination of the following terms:
The first column is an abbreviation of the second column (neglecting proper normalization factors), where capital indices are SU(5) indices, and i and j are the generation indices.
The last two rows presuppose that the multiplicity of the sterile neutrino is not zero (i.e. that a sterile neutrino exists). Each of the two couplings just mentioned has coefficients which are symmetric in the generation indices i and j. The number of sterile neutrino generations need not be three, unless the SU(5) is embedded in a higher unification scheme such as SO(10).
Vacua
The vacua correspond to the mutual zeros of the F- and D-terms. Let us first look at the case where the VEVs of all the chiral fields are zero except for that of the adjoint Φ.
The Φ sector
The zeros correspond to finding the stationary points of the superpotential W(Φ) subject to the traceless constraint Tr Φ = 0, so ∂W/∂Φ = λ·1, where λ is a Lagrange multiplier.
Up to an SU(5) (unitary) transformation, the solutions are Φ = 0, Φ ∝ diag(1, 1, 1, 1, −4) and Φ ∝ diag(2, 2, 2, −3, −3).
The three cases are called case I, II, and III and they break the gauge symmetry into SU(5), [SU(4) × U(1)]/ℤ₄ and [SU(3) × SU(2) × U(1)]/ℤ₆ respectively (in each case the stabilizer of the VEV).
In other words, there are at least three different superselection sectors, which is typical for supersymmetric theories.
Only case III makes any phenomenological sense, and so we will focus on this case from now onwards.
It can be verified that this solution together with zero VEVs for all the other chiral multiplets is a zero of the F-terms and D-terms. The matter parity remains unbroken (right up to the TeV scale).
Decomposition
The gauge algebra 24 decomposes as (8,1)0 ⊕ (1,3)0 ⊕ (1,1)0 ⊕ (3,2)−5/6 ⊕ (3̄,2)5/6.
This 24 is a real representation, so the last two terms need explanation. Both (3,2)−5/6 and (3̄,2)5/6 are complex representations. However, the direct sum of both representations decomposes into two irreducible real representations and we only take half of the direct sum, i.e. one of the two real irreducible copies. The first three components are left unbroken. The adjoint Higgs also has a similar decomposition, except that it is complex. The Higgs mechanism causes one real half of the (3,2)−5/6 and (3̄,2)5/6 of the adjoint Higgs to be absorbed. The other real half acquires a mass coming from the D-terms. The other three components of the adjoint Higgs, (8,1)0, (1,3)0 and (1,1)0, acquire GUT-scale masses coming from self-pairings of the superpotential.
The sterile neutrinos, if any exist, would also acquire a GUT-scale Majorana mass coming from their superpotential coupling.
Because of matter parity, the matter representations and 10 remain chiral.
It is the Higgs fields 5_H and 5̄_H which are interesting.
The two relevant superpotential terms here are the coupling 5̄_H Φ 5_H and the mass term μ 5̄_H 5_H. Unless there happens to be some fine tuning, we would expect both the triplet terms and the doublet terms to pair up, leaving us with no light electroweak doublets. This is in complete disagreement with phenomenology. See doublet-triplet splitting problem for more details.
Fermion masses
Problems of the Georgi–Glashow model
Proton decay in SU(5)
Unification of the Standard Model via an SU(5) group has significant phenomenological implications. Most notable of these is proton decay, which is present in SU(5) with and without supersymmetry. This is allowed by the new vector bosons introduced from the adjoint representation of SU(5), which also contains the gauge bosons of the Standard Model forces. Since these new gauge bosons are in (3,2)−5/6 bifundamental representations, they violate baryon and lepton number. As a result, the new operators should cause protons to decay at a rate inversely proportional to the fourth power of the new bosons' masses. This process is called dimension 6 proton decay and is an issue for the model, since the proton is experimentally determined to have a lifetime greater than the age of the universe. This means that an SU(5) model is severely constrained by this process.
As well as these new gauge bosons, in SU(5) models the Higgs field is usually embedded in a 5 representation of the GUT group. The caveat of this is that since the Higgs field is an SU(2) doublet, the remaining part, an SU(3) triplet, must be some new field, usually called D or T. This new scalar would be able to generate proton decay as well and, assuming the most basic Higgs vacuum alignment, would be massless, allowing the process at very high rates.
While not an issue in the Georgi–Glashow model, a supersymmetrised SU(5) model would have additional proton decay operators due to the superpartners of the Standard Model fermions. The lack of detection of proton decay (in any form) brings into question the veracity of SU(5) GUTs of all types; however, while the models are highly constrained by this result, they are not in general ruled out.
Mechanism
In the lowest-order Feynman diagram corresponding to the simplest source of proton decay in SU(5), a left-handed and a right-handed up quark annihilate yielding an X+ boson which decays to a right-handed (or left-handed) positron and a left-handed (or right-handed) anti-down quark:
This process conserves weak isospin, weak hypercharge, and color. GUTs equate anti-color with having two colors, and SU(5) defines left-handed normal leptons as "white" and right-handed antileptons as "black". The first vertex only involves fermions of the 10 representation, while the second only involves fermions in the 5̄ (or 10), demonstrating the preservation of SU(5) symmetry.
Mass relations
Since SM states are regrouped into SU(5) representations, their Yukawa matrices have the relation Y_d = Y_e^T.
In particular this predicts m_e ≈ m_d, m_μ ≈ m_s and m_τ ≈ m_b at energies close to the scale of unification. This is however not realized in nature.
Doublet-triplet splitting
As mentioned in the above section, the colour triplet of the 5_H which contains the SM Higgs can mediate dimension 6 proton decay. Since protons seem to be quite stable, such a triplet has to acquire a very large mass in order to suppress the decay. This is however problematic, as can be seen by considering the scalar part of the Georgi–Glashow Lagrangian.
Here Φ denotes the adjoint used to break SU(5) to the SM (with vev ⟨Φ⟩), and 5_H the defining representation, which contains the SM Higgs doublet H and the colour triplet T that can induce proton decay. As mentioned, we require the triplet mass m_T to be near the GUT scale in order to sufficiently suppress proton decay. On the other hand, the doublet mass m_H is typically of order the electroweak scale in order to be consistent with observations. Looking at the scalar couplings it becomes clear that one has to be very precise in choosing the parameters: any two random parameters will not do, since then m_T and m_H could be of the same order!
This is known as the doublet–triplet (DT) splitting problem: in order to be consistent we have to 'split' the masses of H and T, but for that we need to fine-tune the parameters of the scalar potential. There are however some solutions to this problem which can work quite well in SUSY models.
A review of the DT splitting problem can be found in the literature.
Neutrino masses
Like the SM, the original Georgi–Glashow model does not include neutrino masses. However, since neutrino oscillation has been observed, such masses are required. The solutions to this problem follow the same ideas which have been applied to the SM: on the one hand, one can include a singlet fermion which then can generate either Dirac masses or Majorana masses. As in the SM, one can also implement the type-I seesaw mechanism, which then naturally generates light masses.
On the other hand, one can simply parametrize the ignorance about neutrinos using the dimension 5 Weinberg operator,
(Y_ν)ij (L_i H)(L_j H)/Λ,
with Y_ν the Yukawa matrix required for the mixing between flavours.
References
Grand Unified Theory
Supersymmetric quantum field theory | Georgi–Glashow model | Physics | 3,131 |
454,964 | https://en.wikipedia.org/wiki/T%20Pyxidis | T Pyxidis (T Pyx) is a recurrent nova and nova remnant in the constellation Pyxis. It is a binary star system whose distance from Earth is estimated at about 15,600 light-years (see below). It contains a Sun-like star and a white dwarf. Because of their close proximity and the larger mass of the white dwarf, the latter draws matter from the larger, less massive star. The influx of matter onto the white dwarf's surface causes periodic thermonuclear explosions to occur.
The usual apparent magnitude of this star system is 15.5, but eruptions with a maximal apparent magnitude of about 7.0 were observed in the years 1890, 1902, 1920, 1944, 1966 and 2011. Evidence seems to indicate that T Pyxidis may have increased in mass despite the nova eruptions and is now close to the Chandrasekhar limit, at which point it might explode as a supernova. When a white dwarf reaches this limit it will collapse under its own weight and cause a type Ia supernova.
Effect on Earth
Because of its relative proximity, some, in particular Edward Sion, astronomer and astrophysicist at Villanova University, and his team, contend that a type Ia supernova of T Pyxidis could have a significant impact on Earth. The received gamma radiation would equal the total (all spectra) radiation of approximately 1,000 solar flares, but the supernova would have to be far closer than current distance estimates to cause significant damage to the ozone layer, perhaps closer than 500 parsecs. The X-radiation that reaches Earth in such an event, however, would be less than the X-radiation of a single average solar flare.
However, Sion's calculations were challenged by Alex Filippenko of the University of California at Berkeley who said that Sion had possibly miscalculated the damage that could be caused by a T Pyxidis supernova. He had used data for a far more deadly gamma-ray burst (GRB) occurring 1 kiloparsec from Earth, not a supernova, and T Pyxidis certainly is not expected to produce a GRB. According to another expert, "[a] supernova would have to be 10 times closer [to Earth] to do the damage described." Mankind survived when the radiation from the Crab Nebula supernova, at a distance of about 6,500 light-years, reached Earth in the year 1054. A type Ia supernova at a distance of 3,300 light-years would have an apparent magnitude of around -9.3, about as bright as the brightest Iridium (satellite) flares.
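As a check on the quoted brightness, the standard distance modulus m = M + 5 log10(d / 10 pc) can be applied, taking M ≈ −19.3 as a typical peak absolute magnitude for a type Ia supernova (an assumed textbook value, not a figure from this article):

```python
import math

def apparent_magnitude(M, d_pc):
    """Distance modulus: m = M + 5*log10(d / 10 pc)."""
    return M + 5.0 * math.log10(d_pc / 10.0)

M_IA = -19.3        # assumed typical peak absolute magnitude, type Ia
LY_PER_PC = 3.2616  # light-years per parsec

for d_ly in (3300.0, 15600.0):
    d_pc = d_ly / LY_PER_PC
    print(f"{d_ly:7.0f} ly -> m = {apparent_magnitude(M_IA, d_pc):+.1f}")
# ~ -9.3 at 3,300 light-years, consistent with the figure quoted above;
# ~ -5.9 at the revised 15,600 light-year distance.
```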
Recent data indicate that his distance estimate was about five times too close. Astronomers used NASA's Hubble Space Telescope to observe the light emitted during the latest outburst in April 2011. The team also used the light echo to refine estimates of the nova's distance from Earth. The new distance is 15,600 light-years (4,780 pc) from Earth. Previous estimates were between 6,500 and 16,000 light-years (2,000 and 4,900 pc).
It has been reported that T Pyx would "soon" become a supernova. However, when Scientific American contacted Sion, it became apparent that "soon" was meant in astronomical terms: Sion said that "soon" in the press announcement meant "[a]t the accretion rate we derived, the white dwarf in T Pyxidis will reach the Chandrasekhar Limit in ten million years." By that time it will have moved far enough away from the Solar System to have little effect.
2011 outburst
Mike Linnolt detected T Pyx's first outburst in nearly 45 years on April 14, 2011, at magnitude 13. According to AAVSO observers, it reached magnitude 7.5 in the visual and V bands by April 27, and reached magnitude 6.8 by May 3.
X-ray source
T Pyxidis is a super soft X-ray source.
References
External links
AAVSO Variable Star Of The Month April, 2002: T Pyxidis PDF / HTML (17 July 2010)
AAVSO: Quick Look View of AAVSO Observations (get recent magnitude estimates for T Pyx)
Interview with Brad Schaefer about recurrent novae, and T Pyx (@19:40 into recording : 30 March 2009)
Variable star T PYXIDIS
Astronomers await a nova (Space.com 22 December 2006)
Sion, Edward; A Supernova Could Nuke Us. Big Think.
Explosive Nearby Star Could Threaten Earth
Pyxis
Recurrent novae
Nova remnants
Binary stars
Pyxidis, T | T Pyxidis | Astronomy | 952 |
900,733 | https://en.wikipedia.org/wiki/Plasma%20oscillation | Plasma oscillations, also known as Langmuir waves (after Irving Langmuir), are rapid oscillations of the electron density in conducting media such as plasmas or metals; in metals the corresponding frequencies lie in the ultraviolet region. The oscillations can be described as an instability in the dielectric function of a free electron gas. The frequency depends only weakly on the wavelength of the oscillation. The quasiparticle resulting from the quantization of these oscillations is the plasmon.
Langmuir waves were discovered by American physicists Irving Langmuir and Lewi Tonks in the 1920s. They are parallel in form to Jeans instability waves, which are caused by gravitational instabilities in a static medium.
Mechanism
Consider an electrically neutral plasma in equilibrium, consisting of a gas of positively charged ions and negatively charged electrons. If one displaces by a tiny amount an electron or a group of electrons with respect to the ions, the Coulomb force pulls the electrons back, acting as a restoring force.
'Cold' electrons
If the thermal motion of the electrons is ignored, it is possible to show that the charge density oscillates at the plasma frequency
ω_pe = √(n_e e² / (m* ε₀)) (SI units),
ω_pe = √(4π n_e e² / m*) (cgs units),
where n_e is the number density of electrons, e is the electric charge, m* is the effective mass of the electron, and ε₀ is the permittivity of free space. Note that the above formula is derived under the approximation that the ion mass is infinite. This is generally a good approximation, as the electrons are so much lighter than ions.
Proof using Maxwell's equations. Assume charge density oscillations ρ(t) = ρ₀ e^(−iωt). The continuity equation,
∂ρ/∂t + ∇·J = 0,
Gauss's law,
∇·E = ρ/ε₀,
and the conductivity relation,
J = σE,
hold. Taking the divergence of the conductivity relation and substituting the other two gives
∂ρ/∂t + (σ/ε₀)ρ = 0,
which, for the assumed oscillation, is true only if
1 + iσ/(ε₀ω) = 0.
But 1 + iσ/(ε₀ω) is also the dielectric constant ε(ω) of the free electron gas (see Drude model), and ε(ω) = 0 is the condition of transparency (satisfied from a certain plasma frequency upward); the same condition here also makes possible the propagation of density waves in the charge density.
This expression must be modified in the case of electron-positron plasmas, often encountered in astrophysics. Since the frequency is independent of the wavelength, these oscillations have an infinite phase velocity and zero group velocity.
Note that the plasma frequency ω_pe depends only on physical constants and the electron density n_e. The numeric expression for the angular plasma frequency is
ω_pe ≈ 5.64 × 10⁴ √(n_e [cm⁻³]) rad/s.
Metals are only transparent to light with a frequency higher than the metal's plasma frequency. For typical metals such as aluminium or silver, n_e is approximately 10²³ cm⁻³, which brings the plasma frequency into the ultraviolet region. This is why most metals reflect visible light and appear shiny.
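These statements can be checked numerically with the SI formula above; the electron density is the order-of-magnitude figure just quoted:

```python
import math

EPS0 = 8.854e-12   # F/m, vacuum permittivity
E    = 1.602e-19   # C, elementary charge
ME   = 9.109e-31   # kg, electron mass
C    = 2.998e8     # m/s, speed of light

def plasma_frequency(n_e_m3):
    """Angular plasma frequency omega_pe = sqrt(n e^2 / (eps0 m)), rad/s."""
    return math.sqrt(n_e_m3 * E**2 / (EPS0 * ME))

n_e = 1e23 * 1e6                    # 1e23 cm^-3 converted to m^-3
w_p = plasma_frequency(n_e)
lam = 2.0 * math.pi * C / w_p       # corresponding vacuum cutoff wavelength
print(f"omega_pe = {w_p:.2e} rad/s, cutoff wavelength = {lam * 1e9:.0f} nm")
# ~1.8e16 rad/s, ~106 nm: in the ultraviolet, so visible light is reflected.
```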
'Warm' electrons
When the effects of the electron thermal speed v_th = √(k_B T_e / m_e) are considered, the electron pressure acts as a restoring force, and the electric field and oscillations propagate with frequency ω and wavenumber k related by the longitudinal Langmuir wave dispersion relation:
ω² = ω_pe² + 3 k² v_th²,
called the Bohm–Gross dispersion relation. If the spatial scale is large compared to the Debye length, the oscillations are only weakly modified by the pressure term, but at small scales the pressure term dominates and the waves become dispersionless with a speed of √3 · v_th. For such waves, however, the electron thermal speed is comparable to the phase velocity, i.e., v_th ∼ ω/k,
so the plasma waves can accelerate electrons that are moving with speed nearly equal to the phase velocity of the wave. This process often leads to a form of collisionless damping, called Landau damping. Consequently, the large-k portion in the dispersion relation is difficult to observe and seldom of consequence.
In a bounded plasma, fringing electric fields can result in propagation of plasma oscillations, even when the electrons are cold.
In a metal or semiconductor, the effect of the ions' periodic potential must be taken into account. This is usually done by using the electrons' effective mass in place of m.
Plasma oscillations and the effect of the negative mass
Plasma oscillations may give rise to the effect of "negative mass". The mechanical model giving rise to the negative effective mass effect is depicted in Figure 1. A core with mass m₂ is connected internally through a spring with constant k₂ to a shell with mass m₁. The system is subjected to an external sinusoidal force F(t) = F̂ sin ωt. If we solve the equations of motion for the masses m₁ and m₂ and replace the entire system with a single effective mass m_eff, we obtain:
m_eff = m₁ + m₂ω₀²/(ω₀² − ω²), where ω₀ = √(k₂/m₂). When the frequency ω approaches ω₀ from above, the effective mass m_eff will be negative.
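A numeric sketch of this effective-mass expression (the parameter values are arbitrary illustrations):

```python
import math

def m_eff(omega, m1=1.0, m2=0.5, k2=100.0):
    """Effective mass of the core-shell oscillator: m1 + m2*w0^2/(w0^2 - w^2)."""
    w0 = math.sqrt(k2 / m2)      # internal resonance frequency, rad/s
    return m1 + m2 * w0**2 / (w0**2 - omega**2)

w0 = math.sqrt(100.0 / 0.5)      # ~14.14 rad/s for the defaults above
for w in (0.5 * w0, 1.05 * w0, 1.2 * w0, 3.0 * w0):
    print(f"omega/omega0 = {w / w0:4.2f} -> m_eff = {m_eff(w):+7.2f}")
# Just above the resonance (omega/omega0 = 1.05) the effective mass is
# large and negative; far above the resonance it approaches m1.
```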
The negative effective mass (density) also becomes possible based on electro-mechanical coupling exploiting plasma oscillations of a free electron gas (see Figure 2). The negative mass appears as a result of vibration of a metallic particle with a frequency ω which is close to the frequency of the plasma oscillations of the electron gas relative to the ionic lattice. The plasma oscillations are represented by an elastic spring with constant proportional to ω_p², where ω_p is the plasma frequency. Thus, a metallic particle vibrating at the external frequency ω is described by an effective mass
which is negative when the frequency ω approaches ω_p from above. Metamaterials exploiting the effect of negative mass in the vicinity of the plasma frequency have been reported.
See also
Electron wake
Plasmon
Relativistic quantum chemistry
Surface plasmon resonance
Upper hybrid oscillation, in particular for a discussion of the modification to the mode at propagation angles oblique to the magnetic field
Waves in plasmas
References
Further reading
Waves in plasmas
Plasmonics | Plasma oscillation | Physics,Chemistry,Materials_science | 1,133 |
13,183,237 | https://en.wikipedia.org/wiki/Thomas%20Anderson%20%28chemist%29 | Thomas Anderson (2 July 1819 – 2 November 1874) was a 19th-century Scottish chemist. In 1853 his work on alkaloids led him to discover the correct formula/composition for codeine. In 1868 he discovered pyridine and related organic compounds such as picoline through studies on the distillation of bone oil and other animal matter.
As well as his work on organic chemistry, Anderson made important contributions to agricultural chemistry, writing over 130 reports on soils, fertilisers and plant diseases. He kept abreast of all areas of science, and was able to advise his colleague Joseph Lister on Pasteur's germ theory and the use of carbolic acid as an antiseptic.
Biography
Born in Leith, Thomas Anderson graduated from the University of Edinburgh with a medical doctorate in 1841. Having developed an interest in chemistry during his medical studies, he then spent several years studying chemistry in Europe, including spells under Jöns Jakob Berzelius in Sweden and Justus von Liebig in Germany. Returning to Edinburgh, he worked at the University of Edinburgh and at the Highland and Agricultural Society of Scotland. In 1852, he was appointed Regius Professor of Chemistry at the University of Glasgow and remained in that post for the remainder of his career. In 1854, he became one of the editors of the Edinburgh New Philosophical Journal. In 1872, Anderson was awarded a Royal Medal from the Royal Society "for his investigations on the organic bases of Dippel's animal oil; on codeine; on the crystallized constituents of opium; on piperin and on papaverin; and for his researches in physiological and animal chemistry."
His later years were marred by a progressive neurological disease which may have been syphilis. He resigned his chair in early 1874, and died later that year in Chiswick.
He was succeeded by John Ferguson.
References
External links
1819 births
1874 deaths
19th-century Scottish chemists
Organic chemists
People educated at Edinburgh Academy
Alumni of the University of Edinburgh Medical School
British expatriates in Sweden
British expatriates in Germany
Academics of the University of Glasgow
Royal Medal winners
Regius Professors | Thomas Anderson (chemist) | Chemistry | 433 |
58,739 | https://en.wikipedia.org/wiki/Timeline%20of%20microscope%20technology | Timeline of microscope technology
c. 700 BC: The "Nimrud lens" of Assyrians manufacture, a rock crystal disk with a convex shape believed to be a burning or magnifying lens.
13th century: The increase in use of lenses in eyeglasses probably led to the widespread use of simple microscopes (single lens magnifying glasses) with limited magnification.
1590: earliest date of a claimed Hans Martens/Zacharias Janssen invention of the compound microscope (claim made in 1655).
After 1609: Galileo Galilei is described as being able to close focus his telescope to view small objects close up and/or to look through the wrong end in reverse to magnify small objects. A telescope used in this fashion is the same as a compound microscope, but historians debate whether Galileo was magnifying small objects or viewing nearby objects with his terrestrial telescope (convex objective/concave eyepiece) reversed.
1619: Earliest recorded description of a compound microscope, Dutch Ambassador Willem Boreel sees one in London in the possession of Dutch inventor Cornelis Drebbel, an instrument about eighteen inches long, two inches in diameter, and supported on three brass dolphins.
1621: Cornelis Drebbel presents, in London, a compound microscope with a convex objective and a convex eyepiece (a "Keplerian" microscope).
c.1622: Drebbel presents his invention in Rome.
1624: Galileo improves on a compound microscope he sees in Rome and presents his occhiolino to Prince Federico Cesi, founder of the Accademia dei Lincei (in English, The Linceans).
1625: Francesco Stelluti and Federico Cesi publish Apiarium, the first account of observations using a compound microscope
1625: Giovanni Faber of Bamberg (1574–1629) of the Linceans, after seeing Galileo's occhiolino, coins the word microscope by analogy with telescope.
1655: In an investigation by Willem Boreel, Dutch spectacle-maker Johannes Zachariassen claims his father, Zacharias Janssen, invented the compound microscope in 1590. Zachariassen's claimed dates are so early it is sometimes assumed, for the claim to be true, that his grandfather, Hans Martens, must have invented it. Findings are published by writer Pierre Borel. Discrepancies in Boreel's investigation and Zachariassen's testimony (including misrepresenting his date of birth and role in the invention) have led some historians to consider this claim dubious.
1661: Marcello Malpighi observed capillary structures in frog lungs.
1665: Robert Hooke publishes Micrographia, a collection of biological drawings. He coins the word cell for the structures he discovers in cork bark.
1674: Antonie van Leeuwenhoek improves on a simple microscope for viewing biological specimens (see Van Leeuwenhoek's microscopes).
1825: Joseph Jackson Lister develops combined lenses that cancelled spherical and chromatic aberration.
1846: Carl Zeiss founded Carl Zeiss AG, to mass-produce microscopes and other optical instruments.
1850s: John Leonard Riddell, Professor of Chemistry at Tulane University, invents the first practical binocular microscope.
1863: Henry Clifton Sorby develops a metallurgical microscope to observe structure of meteorites.
1860s: Ernst Abbe, a colleague of Carl Zeiss, discovers the Abbe sine condition, a breakthrough in microscope design, which until then was largely based on trial and error. The company of Carl Zeiss exploited this discovery and becomes the dominant microscope manufacturer of its era.
1928: Edward Hutchinson Synge publishes theory underlying the near-field scanning optical microscope
1931: Max Knoll and Ernst Ruska start to build the first electron microscope. It is a transmission electron microscope (TEM).
1936: Erwin Wilhelm Müller invents the field emission microscope.
1938: James Hillier builds another TEM.
1951: Erwin Wilhelm Müller invents the field ion microscope and is the first to see atoms.
1953: Frits Zernike, professor of theoretical physics, receives the Nobel Prize in Physics for his invention of the phase-contrast microscope.
1955: Georges Nomarski, professor of microscopy, published the theoretical basis of differential interference contrast microscopy.
1957: Marvin Minsky, a professor at MIT, invents the confocal microscope, an optical imaging technique for increasing optical resolution and contrast of a micrograph by means of using a spatial pinhole to block out-of-focus light in image formation. This technology is a predecessor to today's widely used confocal laser scanning microscope.
1967: Erwin Wilhelm Müller adds time-of-flight spectroscopy to the field ion microscope, making the first atom probe and allowing the chemical identification of each individual atom.
1981: Gerd Binnig and Heinrich Rohrer develop the scanning tunneling microscope (STM).
1986: Gerd Binnig, Quate, and Gerber invent the atomic force microscope (AFM).
1988: Alfred Cerezo, Terence Godfrey, and George D. W. Smith applied a position-sensitive detector to the atom probe, making it able to resolve materials in three dimensions with near-atomic resolution.
1988: Kingo Itaya invents the electrochemical scanning tunneling microscope.
1991: Kelvin probe force microscope invented.
2008: The scanning helium microscope is introduced.
References
Microscopy
Microscope | Timeline of microscope technology | Chemistry | 1,109 |
39,224,575 | https://en.wikipedia.org/wiki/George%20Michael%20%28computational%20physicist%29 | George Anthony Michael (February 16, 1926 – June 5, 2008) was an American computational physicist at Lawrence Livermore Laboratories, involved in the development of supercomputing. He was one of the founders of the annual ACM/IEEE Supercomputing Conference, first held in 1988. The George Michael Memorial Fellowship was established in his honor. George was the person primarily responsible for doing the interviews and gathering the materials for the web site Stories of the Development of Large Scale Scientific Computing at Lawrence Livermore National Laboratory.
References
1926 births
2008 deaths
20th-century American physicists
American computer scientists
Lawrence Livermore National Laboratory staff
Computational physicists | George Michael (computational physicist) | Physics | 130 |
167,741 | https://en.wikipedia.org/wiki/Hydrographic%20survey | Hydrographic survey is the science of measurement and description of features which affect maritime navigation, marine construction, dredging, offshore wind farms, offshore oil exploration and drilling and related activities. Surveys may also be conducted to determine the route of subsea cables such as telecommunications cables, cables associated with wind farms, and HVDC power cables. Strong emphasis is placed on soundings, shorelines, tides, currents, seabed and submerged obstructions that relate to the previously mentioned activities. The term hydrography is used synonymously to describe maritime cartography, which in the final stages of the hydrographic process turns the raw data collected through hydrographic survey into information usable by the end user.
Hydrography is collected under rules which vary depending on the acceptance authority. Traditionally conducted by ships with a sounding line or echo sounding, surveys are increasingly conducted with the aid of aircraft and sophisticated electronic sensor systems in shallow waters.
Offshore survey is a specific discipline of hydrographic survey primarily concerned with the description of the condition of the seabed and the condition of the subsea oilfield infrastructure that interacts with it.
Organizations
National and international offices
Hydrographic offices evolved from naval heritage and are usually found within national naval structures, for example Spain's Instituto Hidrográfico de la Marina. Coordination among those organizations and voluntary standardization of their products, with the goal of improving hydrography and safe navigation, is conducted by the International Hydrographic Organization (IHO). The IHO publishes Standards and Specifications followed by its Member States, as well as Memoranda of Understanding and Co-operative Agreements with hydrographic survey interests.
The product of such hydrography is most often seen on nautical charts published by the national agencies and required by the International Maritime Organization (IMO), the Safety of Life at Sea (SOLAS) convention and national regulations to be carried on vessels for safety purposes. Increasingly, those charts are provided and used in electronic form under IHO standards.
Non-national agencies
Governmental entities below the national level conduct or contract for hydrographic surveys for waters within their jurisdictions with both internal and contract assets. Such surveys commonly are conducted by national organizations or under their supervision or the standards they have approved, particularly when the use is for the purposes of chart making and distribution or the dredging of state-controlled waters.
In the United States, there is coordination with the National Hydrography Dataset in survey collection and publication. State environmental organizations publish hydrographic data relating to their mission.
Private organizations
Commercial entities also conduct large-scale hydrographic and geophysical surveying, particularly in the dredging, marine construction, oil exploration, and drilling industries. Industrial entities installing submarine communications or power cables require detailed surveys of cable routes prior to installation and increasingly use acoustic imaging equipment, previously found only in military applications, when conducting their surveys. Specialized companies exist that have both the equipment and expertise to contract with both commercial and governmental entities to perform such surveys.
Companies, universities, and investment groups will often fund hydrographic surveys of public waterways prior to developing areas adjacent those waterways. Survey firms are also contracted to survey in support of design and engineering firms that are under contract for large public projects. Private surveys are also conducted before dredging operations and after these operations are completed. Companies with large private slips, docks, or other waterfront installations have their facilities and the open water near their facilities surveyed regularly, as do islands in areas subject to variable erosion such as in the Maldives.
Methods
Lead lines and sounding poles
The history of hydrographic surveying dates almost as far back as that of sailing. For many centuries, a hydrographic survey required the use of lead lines – ropes or lines with depth markings attached to lead weights to make one end sink to the bottom when lowered over the side of a ship or boat – and sounding poles, which were poles with depth markings which could be thrust over the side until they touched bottom. In either case, the depths measured had to be read manually and recorded, as did the position of each measurement with regard to mapped reference points as determined by three-point sextant fixes. The process was labor-intensive and time-consuming and, although each individual depth measurement could be accurate, even a thorough survey as a practical matter could include only a limited number of sounding measurements relative to the area being surveyed, inevitably leaving gaps in coverage between soundings.
Wire-drag surveying
In 1904, wire-drag surveys were introduced into hydrography, and the United States Coast and Geodetic Survey′s Nicholas H. Heck played a prominent role in developing and perfecting the technique between 1906 and 1916. In the wire-drag method, a wire attached to two ships or boats and set at a certain depth by a system of weights and buoys was dragged between two points. If the wire encountered an obstruction, it would become taut and form a "V" shape. The location of the "V" revealed the position of submerged rocks, wrecks, and other obstructions, while the depth at which the wire was set showed the depth at which the obstruction was encountered. This method revolutionized hydrographic surveying, as it allowed a quicker, less laborious, and far more complete survey of an area than did the use of lead lines and sounding poles. From a navigational safety point of view, a wire-drag survey would not miss a hazard to navigation that projected above the drag wire depth.
Prior to the advent of sidescan sonar, wire-drag surveying was the only method for searching large areas for obstructions and lost vessels and aircraft. Between 1906 and 1916, Heck expanded the capability of wire-drag systems from a relatively limited area to sweeps covering channels in width. The wire-drag technique was a major contribution to hydrographic surveying during much of the rest of the 20th century. So valuable was wire-drag surveying in the United States that for decades the U.S. Coast and Geodetic Survey, and later the National Oceanic and Atmospheric Administration, fielded a pair of sister ships of identical design specifically to work together on such surveys. USC&GS Marindin and USC&GS Ogden conducted wire-drag surveys together from 1919 to 1942, USC&GS Hilgard (ASV 82) and USC&GS Wainwright (ASV 83) took over from 1942 to 1967, and USC&GS Rude (ASV 90) (later NOAAS Rude (S 590)) and USC&GS Heck (ASV 91) (later NOAAS Heck (S 591)) worked together on wire-drag operations from 1967.
The rise of new electronic technologies – sidescan sonar and multibeam swath systems – in the 1950s, 1960s and 1970s eventually made the wire-drag system obsolete. Sidescan sonar could create images of underwater obstructions with the same fidelity as aerial photography, while multibeam systems could generate depth data for 100 percent of the bottom in a surveyed area. These technologies allowed a single vessel to do what wire-drag surveying required two vessels to do, and wire-drag surveys finally came to an end in the early 1990s. Vessels were freed from working together on wire-drag surveys, and in the U.S. National Oceanic and Atmospheric Administration (NOAA), for example, Rude and Heck operated independently in their later years.
Single-beam echosounders
Single-beam echosounders and fathometers, which use sonar to measure the depth beneath a vessel, began to enter service in the 1930s. This greatly increased the speed of acquiring sounding data over that possible with lead lines and sounding poles by allowing information on depths beneath a vessel to be gathered in a series of lines spaced at a specified distance. However, the method shared the weakness of earlier approaches by lacking depth information for the areas between the strips of sea bottom the vessel sounded.
Multibeam Echosounders
A multibeam echosounder (MBES) is a type of sonar that is used to map the seabed. It emits acoustic waves in a fan shape beneath its transceiver. The time it takes for the sound waves to reflect off the seabed and return to the receiver is used to calculate the water depth. Unlike other sonars and echo sounders, MBES uses beamforming to extract directional information from the returning soundwaves, producing a swath of depth soundings from a single ping.
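The depth computation at the heart of this process can be sketched in a few lines. The following Python fragment is a simplified illustration only: it assumes a single, constant sound speed and a straight ray path, whereas production systems ray-trace each beam through a measured sound-velocity profile; the function name and parameters are illustrative, not from any particular vendor's API.

```python
import math

def beam_depth(two_way_time_s, sound_speed_mps, beam_angle_deg):
    """Convert one beam's two-way travel time into depth below the
    transducer and across-track distance, assuming a constant sound
    speed and a straight (unrefracted) ray path."""
    slant_range = sound_speed_mps * two_way_time_s / 2.0  # one-way distance
    theta = math.radians(beam_angle_deg)                  # angle off vertical
    depth = slant_range * math.cos(theta)
    across_track = slant_range * math.sin(theta)
    return depth, across_track

# Example: 40 ms two-way time at 1500 m/s, beam steered 45 degrees off nadir
print(beam_depth(0.040, 1500.0, 45.0))  # ~ (21.2 m down, 21.2 m across)
```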
Explicit inclusion of phraseology like "For all MBES surveys for LINZ, high resolution, geo-referenced backscatter intensity is to be logged and rendered as a survey deliverable." in a set of contract survey requirements is a clear indication that the wider hydrographic community is embracing the benefits that can be accrued by employing MBES technology and, in particular, is accepting as a fact that an MBES which provides acoustic backscatter data is a valuable tool of the trade.
The introduction of multispectral multibeam echosounders continues the trajectory of technological innovations providing the hydrographic surveying community with better tools for more rapidly acquiring better data for multiple uses. A multispectral multibeam echosounder is the culmination of many progressive advances in hydrography from the early days of acoustic soundings, when the primary concern about the strength of returning echoes from the bottom was whether or not they would be sufficiently large to be detected. The operating frequencies of the early acoustic sounders were primarily based on the properties of magnetostrictive and piezoelectric materials, whose physical dimensions could be modified by means of electrical current or voltage. Eventually it became apparent that, while the operating frequency of the early single vertical beam acoustic sounders had little or no bearing on the measured depths when the bottom was hard (composed primarily of sand, pebbles, cobbles, boulders, or rock), there was a noticeable frequency dependency of the measured depths when the bottom was soft (composed primarily of silt, mud or flocculent suspensions). It was observed that higher frequency single vertical beam echosounders could provide detectable echo amplitudes from high porosity sediments, even if those sediments appeared to be acoustically transparent at lower frequencies.
In the late 1960s, single-beam hydrographic surveys were conducted using widely spaced track lines, and the shallow (peak) soundings in the bottom data were retained in preference to deeper soundings in the sounding record. During that same period, early side scan sonar was introduced into the operational practices of shallow water hydrographic surveying. The frequencies of the early side scan sonars were a matter of engineering design expediency, and the most important aspect of the side scanning echoes was not the value of their amplitudes, but rather that the amplitudes were spatially variable. In fact, important information was deduced about the shape of the bottom and manmade items on the bottom based on the regions where there were absences of detectable echo amplitudes (shadows). In 1979, in hopes of a technological solution to the problems of surveying in "floating mud", the Director of the National Ocean Survey (NOS) established a NOS study team to conduct investigations to determine the functional specifications for a replacement shallow water depth sounder. The outcome of the study was a class of vertical-beam depth sounders which is still widely used. It simultaneously pinged at two acoustic frequencies, separated by more than 2 octaves, making depth and echo-amplitude measurements that were concurrent, both spatially and temporally, albeit at a single vertical grazing angle.
The first MBES generation was dedicated to mapping the seafloor in deep water. Those pioneering MBES made little, or no, explicit use of the amplitudes, as their objective was to obtain accurate measurements of the bathymetry (representing both the peaks and deeps). Furthermore, their technical characteristics did not make it easy to observe spatial variations in the echo amplitudes. Subsequent to the early MBES bathymetric surveys and at the time when single frequency side scan sonar had begun to produce high quality images of the seabed that were capable of providing a degree of discrimination between different types of sediments, the potential of the echo amplitudes from a MBES was recognized.
With Marty Klein's introduction of dual frequency (nominally 100 kHz and 500 kHz) side scan sonar, it was apparent that spatially and temporally coincident backscatter from any given seabed at those two widely separated acoustic frequencies, would likely provide two separate and unique images of that seascape. Admittedly, the along-track insonification and receiving beam patterns were different, and due to the absence of bathymetric data, the precise backscatter grazing angles were unknown. However, the overlapping sets of side scanning across-track grazing angles at the two frequencies were always the same.
Following a high-profile vessel grounding off Cape Cod, Massachusetts, in 1992, the emphasis for shallow water surveying migrated toward full bottom coverage surveys employing MBES with increasing operating frequencies to further improve the spatial resolution of the soundings. Given that side scan sonar, with its across-track fan-shaped swath of insonification, had successfully exploited the cross-track variation in echo amplitudes to achieve high quality images of the seabed, it seemed a natural progression that the fan-shaped across-track pattern of insonification associated with the new monotone higher frequency shallow water MBES might also be exploited for seabed imagery. Images acquired under the initial attempts at MBES bottom imaging were less than stellar, but improvements were forthcoming.
Side scan sonar parses the continual echo returns from a receive beam that is perfectly aligned with the insonification beam using time-after-transmit, a technique that is independent of water depth and the cross-track beam opening angle of the sonar receive transducer. The initial attempt at multibeam imagery employed multiple receive beams, which only partially overlapped the MBES fan-shaped insonification beam, to segment the continual echo returns into intervals that were dependent on water depth and receiver cross-track beam opening angle. Consequently, the segmented intervals were non-uniform in both their length of time and time-after-transmit. The backscatter from each ping in each of the beam-parsed segments was reduced to a single value and assigned to the same geographical coordinates as those assigned to that beam's measured sounding. In subsequent modifications to MBES bottom imaging, the echo sequence in each of the beam-parsed intervals was designated as a snippet. On each ping, each snippet from each beam was additionally parsed according to time-after-transmit. Each of the echo amplitude measurements made within a snippet from a particular beam was assigned a geographical position based on linear interpolation between positions assigned to the soundings measured, on that ping, in the two adjacent cross-track beams. The snippet modification to MBES imagery significantly improved the quality of the imagery by increasing the number of echo amplitude measurements available to be rendered as a pixel in the image and also by having a more uniform spatial distribution of the pixels in the image which represented an actual measured echo amplitude.
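The linear interpolation step described above can be sketched as follows. This is a minimal illustration of the idea only, not any manufacturer's algorithm; the function and its arguments are hypothetical.

```python
import numpy as np

def georeference_snippet(beam_pos, next_beam_pos, n_samples):
    """Assign a position to each amplitude sample in a snippet by
    linearly interpolating between the sounding positions of the two
    adjacent cross-track beams. Positions are (easting, northing)."""
    t = np.linspace(0.0, 1.0, n_samples)[:, np.newaxis]  # interpolation fractions
    beam_pos = np.asarray(beam_pos, dtype=float)
    next_beam_pos = np.asarray(next_beam_pos, dtype=float)
    return (1.0 - t) * beam_pos + t * next_beam_pos

# Example: positions for 8 snippet samples between two beam soundings
positions = georeference_snippet((1000.0, 5000.0), (1012.0, 5004.0), 8)
print(positions)
```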
The introduction of multispectral multibeam echosounders continued the progressive advances in hydrography. In particular, multispectral multibeam echosounders not only provide "multiple look" depth measurements of a seabed, they also provide multispectral backscatter data that are spatially and temporally coincident with those depth measurements. A multispectral multibeam echosounder directly computes a position of origin for each of the backscatter amplitudes in the output data set. Those positions are based on the backscatter measurements themselves and not on interpolation from some other derived data set. Consequently, multispectral multibeam imagery is sharper than previous multibeam imagery. The inherent precision of the bathymetric data from a multispectral multibeam echosounder is also a benefit to those users who may be attempting to employ the acoustic backscatter angular response function to discriminate between different sediment types. Multispectral multibeam echosounders reinforce the fact that spatially and temporally coincident backscatter, from any given seabed, at widely separated acoustic frequencies provides separate and unique images of the seascape.
Crowdsourcing
Crowdsourcing is also entering hydrographic surveying, with projects such as OpenSeaMap, TeamSurv and ARGUS. Here, volunteer vessels record position, depth, and time data using their standard navigation instruments, and the data is then post-processed to account for speed of sound, tidal, and other corrections. With this approach there is no need for a specific survey vessel or for professionally qualified surveyors to be on board, as the expertise is in the data processing that occurs once the data is uploaded to the server after the voyage. Apart from the obvious cost savings, this also gives a continuous survey of an area, but the drawbacks are the time needed to recruit observers and to build a high enough density and quality of data. Although sometimes accurate to 0.1–0.2 m, this approach cannot substitute for a rigorous systematic survey where one is required. Nevertheless, the results are often adequate for many requirements where high resolution, high accuracy surveys are not required, are unaffordable, or simply have not been done yet.
General Bathymetric Chart of the Oceans
Modern integrated hydrographic surveying
In suitable shallow-water areas lidar (light detection and ranging) may be used. Equipment can be installed on inflatable craft, such as Zodiacs, small craft, autonomous underwater vehicles (AUVs), unmanned underwater vehicles (UUVs), remotely operated vehicles (ROVs) or large ships, and can include sidescan, single-beam and multibeam equipment. At one time different data collection methods and standards were used in collecting hydrographic data for maritime safety and for scientific or engineering bathymetric charts, but increasingly, with the aid of improved collection techniques and computer processing, the data is collected under one standard and extracted for specific uses.
After data is collected, it has to undergo post-processing. A massive amount of data is collected during the typical hydrographic survey, often several soundings per square foot. Depending on the final use intended for the data (for example, navigation charts, digital terrain models, volume calculations for dredging, topography, or bathymetry), this data must be thinned out. It must also be corrected for errors (i.e., bad soundings) and for the effects of tides, heave, water level, salinity, and thermoclines (water temperature differences), since the velocity of sound in water varies with temperature and salinity and affects accuracy. Usually the surveyor has additional data collection equipment on site to measure and record the data required for correcting the soundings. The final output of charts can be created with a combination of specialty charting software or a computer-aided design (CAD) package, usually AutoCAD.
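For example, the sound-speed part of the correction can be approximated from temperature, salinity, and depth. The sketch below uses Medwin's simplified formula (valid roughly for 0–35 °C, 0–45 ppt, and 0–1000 m); this is an illustration only, as operational software typically applies more elaborate equations and full sound-velocity profiles.

```python
def sound_speed_medwin(temp_c, salinity_ppt, depth_m):
    """Approximate speed of sound in seawater (m/s) using Medwin's
    simplified formula. Inputs: temperature in deg C, salinity in
    parts per thousand, depth in metres."""
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# Example: 10 deg C, 35 ppt, 50 m depth -> roughly 1490 m/s
print(sound_speed_medwin(10.0, 35.0, 50.0))
```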
Although the accuracy of crowd-sourced surveying can rarely reach the standards of traditional methods, the algorithms used rely on a high data density to produce final results that are more accurate than single measurements. A comparison of crowd-sourced surveys with multibeam surveys indicates an accuracy of crowd-sourced surveys of around plus or minus 0.1 to 0.2 meter (about 4 to 8 inches).
See also
References
External links
International Hydrographic Organization
IHO – Download / OHI – Téléchargement
NGA – Products and Services Available to the Public
United Kingdom Hydrographic Office
Indian Naval Hydrographic Department
Australian Hydrographic Service (AHS)
Armada Esapñola – Instituto Hidrográfico de la Marina
NOAA, Office of Coast Survey, Survey Data
NOAA Marine Operations (Survey Fleet)
Hydro International (Professional journal for hydrography with technical and industry news articles.)
NOAA maintains a massive database of survey results, charts, and data on the NOAA site.
NOAA's Hydrographic Website
NOS Data Explorer portal
Hydrography
Surveying
Field surveys | Hydrographic survey | Engineering,Environmental_science | 4,048 |
9,351,532 | https://en.wikipedia.org/wiki/Froude%E2%80%93Krylov%20force | In fluid dynamics, the Froude–Krylov force—sometimes also called the Froude–Kriloff force—is a hydrodynamical force named after William Froude and Alexei Krylov. The Froude–Krylov force is the force introduced by the unsteady pressure field generated by undisturbed waves. Together with the diffraction force, the Froude–Krylov force makes up the total non-viscous forces acting on a floating body in regular waves. The diffraction force is due to the floating body disturbing the waves.
Formulas
The Froude–Krylov force can be calculated from

$$\vec{F}_{FK} = -\iint_{S_w} p\, \vec{n}\, \mathrm{d}s,$$

where
$\vec{F}_{FK}$ is the Froude–Krylov force,
$S_w$ is the wetted surface of the floating body,
$p$ is the pressure in the undisturbed waves and
$\vec{n}$ is the body's normal vector pointing into the water.
In the simplest case, the formula may be expressed as the product of the wetted surface area (A) of the floating body and the dynamic pressure acting from the waves on the body:

$$F = A \cdot p_{\mathrm{dyn}}.$$

The dynamic pressure, $p_{\mathrm{dyn}}$, close to the surface, is given by

$$p_{\mathrm{dyn}} = \rho g \frac{H}{2},$$

where
ρ is the sea water density (approx. 1030 kg/m³),
g is the acceleration due to the earth's gravity (9.81 m/s²),
H is the wave height from crest to trough.
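A minimal numeric sketch of this simplest case follows; the function name is illustrative, and the constants are the values quoted above.

```python
RHO_SEAWATER = 1030.0  # kg/m^3, sea water density quoted above
G = 9.81               # m/s^2, acceleration due to gravity

def froude_krylov_simple(wetted_area_m2, wave_height_m):
    """Simplest-case Froude-Krylov estimate: F = A * p_dyn, with the
    near-surface dynamic pressure p_dyn = rho * g * H / 2."""
    p_dyn = RHO_SEAWATER * G * wave_height_m / 2.0
    return wetted_area_m2 * p_dyn

# Example: 200 m^2 wetted surface in 2 m (crest-to-trough) waves
print(froude_krylov_simple(200.0, 2.0))  # ~ 2.0e6 N
```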
See also
Response Amplitude Operator
References
Shipbuilding
Naval architecture
Fluid dynamics | Froude–Krylov force | Chemistry,Engineering | 291 |
43,512,319 | https://en.wikipedia.org/wiki/Protein%20M | Protein M (locus MG281) is an immunoglobulin-binding protein originally found on the cell surface of the human pathogenic bacterium Mycoplasma genitalium. It is presumably a universal antibody-binding protein, as it is known to be reactive against all antibody types tested so far. It is capable of preventing the antigen-antibody interaction due to its high binding affinity to any antibody. The Scripps Research Institute announced its discovery in 2014. It was detected in the bacterium during an investigation of its role in patients with multiple myeloma, a cancer.
Homologous proteins are found in other Mycoplasma bacteria. Mycoplasma pneumoniae, another human pathogen, has a homolog termed IbpM (locus MPN400).
Discovery
Mycoplasma genitalium was discovered in 1980 from two male patients with non-gonococcal urethritis at St Mary's Hospital, Paddington, London. In 1983 it was identified as a new species. After several years of intense research, it was found to be the cause of sexually transmitted diseases, such as urethritis (inflammation of the urinary tract) in both men and women, and also cervicitis (inflammation of the cervix) and pelvic inflammation in women. However, the molecular nature of its pathogenicity remained unknown for three decades.
On 6 February 2014, The Scripps Research Institute announced the discovery of a novel protein, which they named Protein M, from the M. genitalium cell membrane. Scientists identified the protein during investigations on the origin of multiple myeloma, a type of B-cell carcinoma. To understand the long-term M. genitalium infection, Rajesh Grover, a senior staff scientist in the Lerner laboratory, tested antibodies from the blood samples of patients with multiple myeloma against different Mycoplasma species. He found that M. genitalium was particularly responsive to all types of antibodies he tested from 20 patients. The antibody reactivity was found to be due to an undiscovered protein that is chemically responsive to all types of human and non-human antibodies available. When they isolated and analysed the protein, they discovered that it was unique both in structure and biological functions. Its structure has no resemblance to any known protein listed in the Protein Data Bank.
Structure and properties
Protein M is about 50 kDa in size and composed of 556 amino acids. Contrary to the initial hypothesis that the antibody reactions could be an immune response to mass infection with the bacterium, the researchers found that Protein M evolved simply to bind, with high affinity, to any antibody it encounters. By this property the bacterium can effectively evade the immune system of the host. This makes the protein an ideal target for developing new drugs. Rajesh Grover estimated that the protein can bind to an average of 100,000,000 different kinds of antibodies circulating in human blood.
Unlike functionally similar proteins such as Protein A, Protein G, and Protein L, which all contain small, multiple immunoglobulin domains, Protein M has a large domain of 360 amino acid residues that binds primarily to the variable light chain of the immunoglobulin, as well as a binding site called the LRR-like motif. In addition, Protein M has a C-terminal domain of 115 amino acid residues that probably protrudes over the antibody binding site. It binds to an antibody at either the κ or λ light chain using hydrogen bonds and salt bridges formed by backbone atoms and conserved side chains, together with some conserved van der Waals contacts and other nonconserved interactions.
References
Bacterial proteins
Sexually transmitted diseases and infections
Immune system | Protein M | Biology | 765 |
25,911,755 | https://en.wikipedia.org/wiki/Airborne%20Launch%20Control%20Center | Airborne Launch Control Centers (ALCC—pronounced "Al-see") provide a survivable launch capability for the United States Air Force's LGM-30 Minuteman Intercontinental Ballistic Missile (ICBM) force by utilizing the onboard Airborne Launch Control System (ALCS), which is operated by an airborne missileer crew. Historically, from 1967–1998, the ALCC mission was performed by United States Air Force Boeing EC-135 command post aircraft. This included EC-135A, EC-135C, EC-135G, and EC-135L aircraft.
Today, the ALCC mission is performed by airborne missileers from Air Force Global Strike Command's (AFGSC) 625th Strategic Operations Squadron (STOS) and United States Strategic Command (USSTRATCOM). Since October 1, 1998, the ALCS has been located on board the United States Navy's E-6B Mercury. The ALCS crew is integrated into the battle staff of the USSTRATCOM "Looking Glass" Airborne Command Post (ABNCP) and is on alert around the clock.
Aircraft
The ALCS mission has been held by multiple aircraft during the last 50 years:
EC-135 – performed Looking Glass and ALCC mission for the Strategic Air Command
EC-135A (ALCC)
EC-135C (ABNCP and ALCC)
EC-135G (ALCC and ABNCP)
EC-135L Post Attack Command and Control System (PACCS) Radio Relay
E-6B Mercury – performs Looking Glass and ALCC mission today for USSTRATCOM
History
From 1967 to 1992, three dedicated Airborne Launch Control Centers (ALCC) were on ground or airborne alert around the clock providing ALCS coverage for five of the six Minuteman ICBM wings. These dedicated ALCCs were mostly EC-135A aircraft but could also have been EC-135C or EC-135G aircraft depending on availability. ALCC No. 1 was on ground alert at Ellsworth AFB, South Dakota, and during a wartime scenario would have taken off and orbited between the Minuteman Wings at Ellsworth AFB, South Dakota, and F.E. Warren AFB, Wyoming, providing ALCS assistance if needed. ALCCs No. 2 and No. 3 were routinely on forward deployed ground alert at Minot AFB, North Dakota. During a wartime scenario, ALCC No. 3 would have orbited between the Minuteman ICBM Wings at Minot AFB and Grand Forks AFB, both in North Dakota, providing ALCS assistance if needed. ALCC No. 2 was dedicated to orbiting near the Minuteman ICBM Wing at Malmstrom AFB, Montana, providing ALCS assistance if needed.
After 1992, with the end of the Cold War and the disbanding of the Strategic Air Command (SAC), ALCS remained on alert with the SAC and later US Strategic Command (USSTRATCOM) EC-135C Airborne Command Posts. On October 1, 1998, the Air Force's EC-135Cs ceased to perform USSTRATCOM Looking Glass operations and were subsequently retired. The Navy's E-6B Mercury took over USSTRATCOM's Looking Glass mission and the associated ALCC mission.
ALCC operations today
Today, at least one E-6B Looking Glass Airborne Command Post (ABNCP) is on alert around the clock performing the ALCC mission. It is postured with a full USSTRATCOM battlestaff and ALCS crew on board to perform the Looking Glass mission in the event the USSTRATCOM Global Operations Center (GOC) is incapacitated. The aircraft can take off quickly to avoid any threat. The ALCS crew on board still provides a survivable launch capability for the Air Force's Minuteman III ICBMs located at the three remaining missile wings at Malmstrom AFB, Montana; Minot AFB, North Dakota; and F.E. Warren AFB, Wyoming. Just as at its original inception, ALCS on alert today presents an adversary with the insurmountable task of trying to destroy the Minuteman ICBM force: even if the ground Launch Control Centers are destroyed, airborne missileers utilizing the ALCS can fly overhead and launch the Minuteman ICBM force.
See also
AN/DRC-8 Emergency Rocket Communications System
References
Missile launchers
Military radio systems of the United States
Military communications
United States nuclear command and control | Airborne Launch Control Center | Engineering | 909 |
64,599,419 | https://en.wikipedia.org/wiki/All%20the%20Fish%20in%20the%20Sea | All the Fish in the Sea: Maximum Sustainable Yield and the Failure of Fisheries Management is a 2011 book by Carmel Finley. The book argues that the policies for international fishing and whaling management were essentially locked in place by 1958, and that the United States played a large role in setting them. In the development of the international law covering fisheries, the US supported laws that would protect the US tuna and salmon fisheries while limiting the ability of other nations, and Japan in particular, to fish in US waters. The book thus ties fisheries management inseparably with Cold War politics.
In particular, Finley traces the development of the concept of maximum sustainable yield (MSY), arguing that MSY had no scientific basis and thus was a political and economic construct more than a scientific one. The "model did not represent the codification of quantitative, empirical evidence." Once instituted, instead of limiting fishing, MSY's assertion that underfishing wasted oceanic resources meant MSY "was not really a limit, but a goal to be reached," thus encouraging more fishing rather than less. The book engages the myth of the "Tragedy of the Commons" by demonstrating that governmental action and international policy led to overfishing, and not the self-interested actions of individual fishers.
Finley argues that to achieve a sustainable future for fisheries, "we need to change the focus of management from estimating harvest to maintaining the population structure of fish stocks and their ecosystems." The book is also important as part of a movement to understand the oceans as a place with a history, rather than an unchanging void around which human history happens.
References
2011 non-fiction books
History of science and technology
Environmental history
Environmental non-fiction books
Maritime history
Law of the sea
Fisheries law
University of Chicago Press books | All the Fish in the Sea | Technology | 360 |
23,193,471 | https://en.wikipedia.org/wiki/Dansyl%20amide | Dansyl amide is a fluorescent dye that forms in a reaction between dansyl chloride and ammonia. It is the simplest representative of the class of dansyl derivatized amines, which are widely used in biochemistry and chemistry as fluorescent labels. The dansyl amide moiety is also called a dansyl group, and it can be introduced into amino acids or other amines in a reaction with dansyl chloride. The dansyl group is highly fluorescent, and it has a very large Stokes shift. The excitation maximum (ca 350 nm) is essentially independent of the medium, whereas the emission maximum strongly depends on the solvent and varies from 520 to 550 nm.
References
Chemical tests
Reagents for organic chemistry | Dansyl amide | Chemistry | 146 |
75,504,801 | https://en.wikipedia.org/wiki/Receptor%20degrader | A receptor degrader binds to a receptor and induces its breakdown, causing down-regulation of signaling of that receptor. It is distinct from the mechanism of action of receptor antagonists and inverse agonists, which reduce receptor signaling but do not cause receptor breakdown. Examples include selective estrogen receptor degraders and androgen receptor degraders, both developed for hormone-sensitive cancers.
References
Receptor degraders | Receptor degrader | Chemistry | 86 |
1,328,116 | https://en.wikipedia.org/wiki/Neutrino%20oscillation | Neutrino oscillation is a quantum mechanical phenomenon in which a neutrino created with a specific lepton family number ("lepton flavor": electron, muon, or tau) can later be measured to have a different lepton family number. The probability of measuring a particular flavor for a neutrino varies between three known states, as it propagates through space.
First predicted by Bruno Pontecorvo in 1957, neutrino oscillation has since been observed by a multitude of experiments in several different contexts. Most notably, the existence of neutrino oscillation resolved the long-standing solar neutrino problem.
Neutrino oscillation is of great theoretical and experimental interest, as the precise properties of the process can shed light on several properties of the neutrino. In particular, it implies that the neutrino has a non-zero mass, which (apart from extensions such as Einstein–Cartan torsion) requires a modification to the Standard Model of particle physics. The experimental discovery of neutrino oscillation, and thus neutrino mass, by the Super-Kamiokande Observatory and the Sudbury Neutrino Observatory was recognized with the 2015 Nobel Prize in Physics.
Observations
A great deal of evidence for neutrino oscillation has been collected from many sources, over a wide range of neutrino energies and with many different detector technologies. The 2015 Nobel Prize in Physics was shared by Takaaki Kajita and Arthur B. McDonald for their early pioneering observations of these oscillations.
Neutrino oscillation is a function of the ratio L/E, where L is the distance traveled and E is the neutrino's energy. (Details are given below.) All available neutrino sources produce a range of energies, and oscillation is measured at a fixed distance for neutrinos of varying energy. The limiting factor in measurements is the accuracy with which the energy of each observed neutrino can be measured. Because current detectors have energy uncertainties of a few percent, it is satisfactory to know the distance to within 1%.
Solar neutrino oscillation
The first experiment that detected the effects of neutrino oscillation was Ray Davis' Homestake experiment in the late 1960s, in which he observed a deficit in the flux of solar neutrinos with respect to the prediction of the Standard Solar Model, using a chlorine-based detector. This gave rise to the solar neutrino problem. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, but neutrino oscillation was not conclusively identified as the source of the deficit until the Sudbury Neutrino Observatory provided clear evidence of neutrino flavor change in 2001.
Solar neutrinos have energies below 20 MeV. At energies above 5 MeV, solar neutrino oscillation actually takes place in the Sun through a resonance known as the MSW effect, a different process from the vacuum oscillation described later in this article.
Atmospheric neutrino oscillation
Following the theories that were proposed in the 1970s suggesting unification of electromagnetic, weak, and strong forces, a few experiments on proton decay followed in the 1980s. Large detectors such as IMB, MACRO, and Kamiokande II have observed a deficit in the ratio of the flux of muon to electron flavor atmospheric neutrinos (see muon decay). The Super-Kamiokande experiment provided a very precise measurement of neutrino oscillation in an energy range of hundreds of MeV to a few TeV, and with a baseline of the diameter of the Earth; the first experimental evidence for atmospheric neutrino oscillations was announced in 1998.
Reactor neutrino oscillation
Many experiments have searched for oscillation of electron anti-neutrinos produced at nuclear reactors. No oscillations were found until a detector was installed at a distance of 1–2 km. Such oscillations give the value of the parameter θ13. Neutrinos produced in nuclear reactors have energies similar to solar neutrinos, of around a few MeV. The baselines of these experiments have ranged from tens of meters to over 100 km (parameter θ12). Mikaelyan and Sinev proposed using two identical detectors to cancel systematic uncertainties in a reactor experiment to measure the parameter θ13.
In December 2011, the Double Chooz experiment found that θ13 ≠ 0. Then, in 2012, the Daya Bay experiment found that sin²(2θ13) = 0.092 ± 0.016 (stat.) ± 0.005 (syst.), with a significance of 5.2σ. These results have since been confirmed by RENO.
Beam neutrino oscillation
Neutrino beams produced at a particle accelerator offer the greatest control over the neutrinos being studied. Many experiments have taken place that study the same oscillations as in atmospheric neutrino oscillation using neutrinos with a few GeV of energy and several-hundred-km baselines. The MINOS, K2K, and Super-K experiments have all independently observed muon neutrino disappearance over such long baselines.
Data from the LSND experiment appear to be in conflict with the oscillation parameters measured in other experiments. Results from the MiniBooNE appeared in Spring 2007 and contradicted the results from LSND, although they could support the existence of a fourth neutrino type, the sterile neutrino.
In 2010, the INFN and CERN announced the observation of a tauon particle in a muon neutrino beam in the OPERA detector located at Gran Sasso, 730 km away from the source in Geneva.
T2K, using a neutrino beam directed through 295 km of earth and the Super-Kamiokande detector, measured a non-zero value for the parameter θ13 in a neutrino beam. NOνA, using the same beam as MINOS with a baseline of 810 km, is sensitive to the same parameter.
Theory
Neutrino oscillation arises from mixing between the flavor and mass eigenstates of neutrinos. That is, the three neutrino states that interact with the charged leptons in weak interactions are each a different superposition of the three (propagating) neutrino states of definite mass. Neutrinos are emitted and absorbed in weak processes in flavor eigenstates but travel as mass eigenstates.
As a neutrino superposition propagates through space, the quantum mechanical phases of the three neutrino mass states advance at slightly different rates, due to the slight differences in their respective masses. This results in a changing superposition mixture of mass eigenstates as the neutrino travels; but a different mixture of mass eigenstates corresponds to a different mixture of flavor states. For example, a neutrino born as an electron neutrino will be some mixture of electron, mu, and tau neutrino after traveling some distance. Since the quantum mechanical phase advances in a periodic fashion, after some distance the state will nearly return to the original mixture, and the neutrino will be again mostly electron neutrino. The electron flavor content of the neutrino will then continue to oscillate – as long as the quantum mechanical state maintains coherence. Since mass differences between neutrino flavors are small in comparison with long coherence lengths for neutrino oscillations, this microscopic quantum effect becomes observable over macroscopic distances.
In contrast, due to their larger masses, the charged leptons (electrons, muons, and tau leptons) have never been observed to oscillate. In nuclear beta decay, muon decay, pion decay, and kaon decay, when a neutrino and a charged lepton are emitted, the charged lepton is emitted in an incoherent mass eigenstate such as |e−⟩, because of its large mass. Weak-force couplings compel the simultaneously emitted neutrino to be in a "charged-lepton-centric" superposition such as |νe⟩, which is an eigenstate for a "flavor" that is fixed by the electron's mass eigenstate, and not in one of the neutrino's own mass eigenstates. Because the neutrino is in a coherent superposition that is not a mass eigenstate, the mixture that makes up that superposition oscillates significantly as it travels. No analogous mechanism exists in the Standard Model that would make charged leptons detectably oscillate. In the four decays mentioned above, where the charged lepton is emitted in a unique mass eigenstate, the charged lepton will not oscillate, as single mass eigenstates propagate without oscillation.
The case of (real) W boson decay is more complicated: W boson decay is sufficiently energetic to generate a charged lepton that is not in a mass eigenstate; however, the charged lepton would lose coherence, if it had any, over interatomic distances (0.1 nm) and would thus quickly cease any meaningful oscillation. More importantly, no mechanism in the Standard Model is capable of pinning down a charged lepton into a coherent state that is not a mass eigenstate, in the first place; instead, while the charged lepton from the W boson decay is not initially in a mass eigenstate, neither is it in any "neutrino-centric" eigenstate, nor in any other coherent state. It cannot meaningfully be said that such a featureless charged lepton oscillates or that it does not oscillate, as any "oscillation" transformation would just leave it the same generic state that it was before the oscillation. Therefore, detection of a charged lepton oscillation from W boson decay is infeasible on multiple levels.
Pontecorvo–Maki–Nakagawa–Sakata matrix
The idea of neutrino oscillation was first put forward in 1957 by Bruno Pontecorvo, who proposed that neutrino–antineutrino transitions may occur in analogy with neutral kaon mixing. Although such matter–antimatter oscillation had not been observed, this idea formed the conceptual foundation for the quantitative theory of neutrino flavor oscillation, which was first developed by Maki, Nakagawa, and Sakata in 1962 and further elaborated by Pontecorvo in 1967. One year later the solar neutrino deficit was first observed, and that was followed by the famous article by Gribov and Pontecorvo published in 1969 titled "Neutrino astronomy and lepton charge".
The concept of neutrino mixing is a natural outcome of gauge theories with massive neutrinos, and its structure can be characterized in general. In its simplest form it is expressed as a unitary transformation relating the flavor and mass eigenbasis and can be written as

$$|\nu_\alpha\rangle = \sum_i U_{\alpha i}^{*} |\nu_i\rangle, \qquad |\nu_i\rangle = \sum_\alpha U_{\alpha i} |\nu_\alpha\rangle,$$

where
|να⟩ is a neutrino with definite flavor α = e (electron), μ (muon) or τ (tauon),
|νi⟩ is a neutrino with definite mass mi, i = 1, 2, 3,
the superscript asterisk (*) represents a complex conjugate; for antineutrinos, the complex conjugate should be removed from the first equation and inserted into the second.
The symbol U represents the Pontecorvo–Maki–Nakagawa–Sakata matrix (also called the PMNS matrix, lepton mixing matrix, or sometimes simply the MNS matrix). It is the analogue of the CKM matrix describing the analogous mixing of quarks. If this matrix were the identity matrix, then the flavor eigenstates would be the same as the mass eigenstates. However, experiment shows that it is not.
When the standard three-neutrino theory is considered, the matrix is 3×3. If only two neutrinos are considered, a 2×2 matrix is used. If one or more sterile neutrinos are added (see later), it is 4×4 or larger. In the 3×3 form, it is given by

$$U = \begin{pmatrix} c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta} \\ -s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta} & s_{23} c_{13} \\ s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta} & -c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta} & c_{23} c_{13} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{i\alpha_{21}/2} & 0 \\ 0 & 0 & e^{i\alpha_{31}/2} \end{pmatrix},$$

where cij = cos θij and sij = sin θij. The phase factors α21 and α31 are physically meaningful only if neutrinos are Majorana particles—i.e. if the neutrino is identical to its antineutrino (whether or not they are is unknown)—and do not enter into oscillation phenomena regardless. If neutrinoless double beta decay occurs, these factors influence its rate. The phase factor δ is non-zero only if neutrino oscillation violates CP symmetry; this has not yet been observed experimentally. If experiment shows this 3×3 matrix to be not unitary, a sterile neutrino or some other new physics is required.
Propagation and interference
Since the |νi⟩ are mass eigenstates, their propagation can be described by plane wave solutions of the form

$$|\nu_i(t)\rangle = e^{-i(E_i t - \vec{p}_i \cdot \vec{x})} |\nu_i(0)\rangle,$$

where
quantities are expressed in natural units (c = 1, ħ = 1),
Ei is the energy of the mass-eigenstate i,
t is the time from the start of the propagation,
pi is the three-dimensional momentum,
x is the current position of the particle relative to its starting position.
In the ultrarelativistic limit, pi ≫ mi, we can approximate the energy as

$$E_i = \sqrt{p_i^2 + m_i^2} \simeq p_i + \frac{m_i^2}{2 p_i} \approx E + \frac{m_i^2}{2E},$$

where E is the energy of the wavepacket (particle) to be detected.
This limit applies to all practical (currently observed) neutrinos, since their masses are less than 1 eV and their energies are at least 1 MeV, so the Lorentz factor γ is greater than 10⁶ in all cases. Using also t ≈ L, where L is the distance traveled, and dropping the phase factors, the wavefunction becomes

$$|\nu_i(L)\rangle = e^{-i m_i^2 L / 2E} |\nu_i(0)\rangle.$$
Eigenstates with different masses propagate with different frequencies. The heavier ones oscillate faster compared to the lighter ones. Since the mass eigenstates are combinations of flavor eigenstates, this difference in frequencies causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference makes it possible to observe a neutrino created with a given flavor change its flavor during its propagation. The probability that a neutrino originally of flavor α will later be observed as having flavor β is

$$P_{\alpha\rightarrow\beta} = \left|\left\langle \nu_\beta \middle| \nu_\alpha(t)\right\rangle\right|^2 = \left|\sum_i U_{\alpha i}^{*} U_{\beta i}\, e^{-i m_i^2 L/2E}\right|^2.$$

This is more conveniently written as

$$P_{\alpha\rightarrow\beta} = \delta_{\alpha\beta} - 4\sum_{i>j} \operatorname{Re}\!\left(U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*}\right) \sin^2\!\frac{\Delta m_{ij}^2 L}{4E} + 2\sum_{i>j} \operatorname{Im}\!\left(U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*}\right) \sin\!\frac{\Delta m_{ij}^2 L}{2E},$$

where Δm²ij ≡ m²i − m²j.
The phase that is responsible for oscillation is often written as (with c and ħ restored)

$$\frac{\Delta m^2\, L\, c^3}{4 \hbar E} \approx 1.27 \times \frac{\Delta m^2}{\mathrm{eV}^2} \times \frac{L/\mathrm{km}}{E/\mathrm{GeV}},$$

where the factor 1.27 is unitless. In this form, it is convenient to plug in the oscillation parameters since:
The mass differences, Δm², are known to be on the order of 10⁻⁴ eV² = (10⁻² eV)²
Oscillation distances, L, in modern experiments are on the order of kilometers
Neutrino energies, E, in modern experiments are typically on the order of MeV or GeV.
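As a quick worked illustration with round, assumed numbers (not any particular experiment's values): for Δm² = 2.4×10⁻³ eV², L = 500 km and E = 1 GeV, the phase is 1.27 × 2.4×10⁻³ × 500 / 1 ≈ 1.52 radians, so the sin² of the phase is close to its maximum of 1, and an oscillation of nearly full depth would be visible at that baseline.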
If there is no CP violation (δ is zero), then the second sum is zero. Otherwise, the CP asymmetry can be given as

$$A_{CP}^{(\alpha\beta)} = P(\nu_\alpha \rightarrow \nu_\beta) - P(\bar{\nu}_\alpha \rightarrow \bar{\nu}_\beta) = 4 \sum_{i>j} \operatorname{Im}\!\left(U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*}\right) \sin\!\frac{\Delta m_{ij}^2 L}{2E}.$$

In terms of the Jarlskog invariant

$$J = c_{12} s_{12} c_{23} s_{23} c_{13}^2 s_{13} \sin\delta,$$

the CP asymmetry is expressed as

$$A_{CP}^{(\alpha\beta)} = \pm 16 J \sin\!\frac{\Delta m_{21}^2 L}{4E}\, \sin\!\frac{\Delta m_{32}^2 L}{4E}\, \sin\!\frac{\Delta m_{31}^2 L}{4E},$$

with the sign determined by the flavor pair (α, β).
Two-neutrino case
The above formula is correct for any number of neutrino generations. Writing it explicitly in terms of mixing angles is extremely cumbersome if there are more than two neutrinos that participate in mixing. Fortunately, there are several meaningful cases in which only two neutrinos participate significantly. In this case, it is sufficient to consider the mixing matrix

$$U = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$

Then the probability of a neutrino changing its flavor is

$$P_{\alpha\rightarrow\beta,\ \alpha\neq\beta} = \sin^2(2\theta)\, \sin^2\!\frac{\Delta m^2 L}{4E} \quad \text{(natural units)}.$$

Or, using SI units and the convention introduced above,

$$P_{\alpha\rightarrow\beta,\ \alpha\neq\beta} = \sin^2(2\theta)\, \sin^2\!\left(1.27\, \frac{\Delta m^2}{\mathrm{eV}^2}\, \frac{L/\mathrm{km}}{E/\mathrm{GeV}}\right).$$

This formula is often appropriate for discussing the transition νμ ↔ ντ in atmospheric mixing, since the electron neutrino plays almost no role in this case. It is also appropriate for the solar case of νe ↔ νx, where νx is a mix (superposition) of νμ and ντ. These approximations are possible because the mixing angle θ13 is very small and because two of the mass states are very close in mass compared to the third.
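As a minimal sketch (assuming the practical-units convention above, with Δm² in eV², L in km, and E in GeV), the two-flavor transition probability can be evaluated as follows; the input numbers in the example are illustrative, not fit results.

```python
import numpy as np

def two_flavor_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor transition probability in practical units:
    P = sin^2(2 theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * np.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Atmospheric-like, assumed numbers: near-maximal mixing,
# dm2 = 2.4e-3 eV^2, a 500 km baseline at 1 GeV
print(two_flavor_probability(1.0, 2.4e-3, 500.0, 1.0))  # ~ 1.0
```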
Classical analogue of neutrino oscillation
The basic physics behind neutrino oscillation can be found in any system of coupled harmonic oscillators. A simple example is a system of two pendulums connected by a weak spring (a spring with a small spring constant). The first pendulum is set in motion by the experimenter while the second begins at rest. Over time, the second pendulum begins to swing under the influence of the spring, while the first pendulum's amplitude decreases as it loses energy to the second. Eventually all of the system's energy is transferred to the second pendulum and the first is at rest. The process then reverses. The energy oscillates between the two pendulums repeatedly until it is lost to friction.
The behavior of this system can be understood by looking at its normal modes of oscillation. If the two pendulums are identical then one normal mode consists of both pendulums swinging in the same direction with a constant distance between them, while the other consists of the pendulums swinging in opposite (mirror image) directions. These normal modes have (slightly) different frequencies because the second involves the (weak) spring while the first does not. The initial state of the two-pendulum system is a combination of both normal modes. Over time, these normal modes drift out of phase, and this is seen as a transfer of motion from the first pendulum to the second.
The description of the system in terms of the two pendulums is analogous to the flavor basis of neutrinos. These are the parameters that are most easily produced and detected (in the case of neutrinos, by weak interactions involving the W boson). The description in terms of normal modes is analogous to the mass basis of neutrinos. These modes do not interact with each other when the system is free of outside influence.
When the pendulums are not identical the analysis is slightly more complicated. In the small-angle approximation, the potential energy of a single pendulum system is $\tfrac{1}{2}\tfrac{mg}{L}x^2$, where g is the standard gravity, L is the length of the pendulum, m is the mass of the pendulum, and x is the horizontal displacement of the pendulum. As an isolated system the pendulum is a harmonic oscillator with a frequency of $\sqrt{g/L}$. The potential energy of a spring is $\tfrac{1}{2}kx^2$, where k is the spring constant and x is the displacement. With a mass attached it oscillates with a period of $2\pi\sqrt{m/k}$. With two pendulums (labeled a and b) of equal mass but possibly unequal lengths and connected by a spring, the total potential energy is

$$V = \frac{1}{2}\frac{mg}{L_a}x_a^2 + \frac{1}{2}\frac{mg}{L_b}x_b^2 + \frac{1}{2}k\left(x_b - x_a\right)^2.$$

This is a quadratic form in xa and xb, which can also be written as a matrix product:

$$V = \frac{1}{2}\begin{pmatrix}x_a & x_b\end{pmatrix}\begin{pmatrix}\frac{mg}{L_a}+k & -k \\ -k & \frac{mg}{L_b}+k\end{pmatrix}\begin{pmatrix}x_a \\ x_b\end{pmatrix}.$$

The 2×2 matrix is real symmetric and so (by the spectral theorem) it is orthogonally diagonalizable. That is, there is an angle θ such that if we define

$$\begin{pmatrix}x_1 \\ x_2\end{pmatrix} = \begin{pmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix}\begin{pmatrix}x_a \\ x_b\end{pmatrix},$$

then

$$V = \frac{1}{2}\left(\lambda_1 x_1^2 + \lambda_2 x_2^2\right),$$

where λ1 and λ2 are the eigenvalues of the matrix. The variables x1 and x2 describe normal modes which oscillate with frequencies of $\sqrt{\lambda_1/m}$ and $\sqrt{\lambda_2/m}$. When the two pendulums are identical (La = Lb), θ is 45°.
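A short numerical sketch of this diagonalization, with illustrative, assumed values for the mass, lengths, and spring constant:

```python
import numpy as np

m, g, k = 1.0, 9.81, 0.5      # mass, gravity, weak spring constant (assumed)
L_a, L_b = 1.00, 1.05         # slightly unequal pendulum lengths (assumed)

# Stiffness matrix of the quadratic form V = (1/2) x^T K x
K = np.array([[m * g / L_a + k, -k],
              [-k, m * g / L_b + k]])

lam, vecs = np.linalg.eigh(K)          # eigenvalues and orthonormal eigenvectors
freqs = np.sqrt(lam / m)               # normal-mode angular frequencies
theta = np.degrees(np.arctan(vecs[1, 0] / vecs[0, 0]))  # mixing angle (up to sign)

print(freqs)   # two slightly different frequencies
print(theta)   # magnitude approaches 45 degrees as L_a -> L_b
```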
The angle θ is analogous to the Cabibbo angle (though that angle applies to quarks rather than neutrinos).
When the number of oscillators (particles) is increased to three, the orthogonal matrix can no longer be described by a single angle; instead, three are required (Euler angles). Furthermore, in the quantum case, the matrices may be complex. This requires the introduction of complex phases in addition to the rotation angles, which are associated with CP violation but do not influence the observable effects of neutrino oscillation.
Theory, graphically
Two neutrino probabilities in vacuum
In the approximation where only two neutrinos participate in the oscillation, the probability of oscillation follows a simple pattern:
The blue curve shows the probability of the original neutrino retaining its identity. The red curve shows the probability of conversion to the other neutrino. The maximum probability of conversion is equal to sin²(2θ). The frequency of the oscillation is controlled by Δm².
Three neutrino probabilities
If three neutrinos are considered, the probability for each neutrino to appear is somewhat complex. The graphs below show the probabilities for each flavor, with the plots in the left column showing a long range to display the slow "solar" oscillation, and the plots in the right column zoomed in, to display the fast "atmospheric" oscillation. The parameters used to create these graphs (see below) are consistent with current measurements, but since some parameters are still quite uncertain, some aspects of these plots are only qualitatively correct.
The illustrations were created using the following parameter values:
sin²(2θ13) = 0.10 (Determines the size of the small wiggles.)
sin²(2θ23) = 0.97
sin²(2θ12) = 0.861
δ = 0 (If the actual value of this phase is large, the probabilities will be somewhat distorted, and will be different for neutrinos and antineutrinos.)
Normal mass hierarchy: m1 ≤ m2 ≤ m3
Δm²21 = 7.59×10⁻⁵ eV²
Δm²32 ≈ Δm²31 = 2.32×10⁻³ eV²
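A sketch of how such probability curves can be computed, using the standard PMNS parametrization and the oscillation formula above. The mixing angles are recovered from the sin²(2θ) values listed above; the mass splittings are the illustrative values just given and should be treated as assumptions.

```python
import numpy as np

def pmns(th12, th23, th13, delta=0.0):
    """Standard-parametrization PMNS matrix (Majorana phases omitted,
    since they do not affect oscillation)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

def probability(alpha, beta, L_km, E_GeV, U, m2_ev2):
    """P(nu_alpha -> nu_beta) = |sum_i U*_ai U_bi exp(-i m_i^2 L/2E)|^2.
    Flavors are indexed 0, 1, 2 for e, mu, tau; the phase uses the
    practical-units factor 2 * 1.27 from the convention above."""
    phases = np.exp(-2j * 1.27 * m2_ev2 * L_km / E_GeV)
    return abs(np.sum(np.conj(U[alpha]) * U[beta] * phases)) ** 2

# Angles recovered from the sin^2(2 theta) values listed above
th12 = 0.5 * np.arcsin(np.sqrt(0.861))
th23 = 0.5 * np.arcsin(np.sqrt(0.97))
th13 = 0.5 * np.arcsin(np.sqrt(0.10))
U = pmns(th12, th23, th13, delta=0.0)

# Squared masses (eV^2) from the assumed splittings, with m1 set to 0
m2 = np.array([0.0, 7.59e-5, 7.59e-5 + 2.32e-3])

# Survival probability of a 1 GeV muon neutrino after 500 km
print(probability(1, 1, 500.0, 1.0, U, m2))
```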
Observed values of oscillation parameters
sin²(2θ13): PDG combination of the Daya Bay, RENO, and Double Chooz results.
tan²(θ12): this corresponds to θsol (solar), obtained from KamLAND, solar, reactor and accelerator data.
sin²(2θ23) at 90% confidence level, corresponding to θatm (atmospheric)
Δm²32 (normal mass hierarchy)
δ and the sign of Δm²32 are currently unknown.
Solar neutrino experiments combined with KamLAND have measured the so-called solar parameters Δm²sol and sin²θsol. Atmospheric neutrino experiments such as Super-Kamiokande, together with the K2K and MINOS long baseline accelerator neutrino experiments, have determined the so-called atmospheric parameters Δm²atm and sin²θatm. The last mixing angle, θ13, has been measured by the experiments Daya Bay, Double Chooz and RENO.
For atmospheric neutrinos the relevant difference of masses is about Δm² = 2.4×10⁻³ eV² and the typical energies are around 1 GeV; for these values the oscillations become visible for neutrinos traveling several hundred kilometres, which would be those neutrinos that reach the detector traveling through the earth, from below the horizon.
The mixing parameter θ13 is measured using electron anti-neutrinos from nuclear reactors. The rate of anti-neutrino interactions is measured in detectors sited near the reactors to determine the flux prior to any significant oscillations, and then it is measured in far detectors placed one to two kilometres from the reactors. The oscillation is observed as an apparent disappearance of electron anti-neutrinos in the far detectors (i.e. the interaction rate at the far site is lower than predicted from the observed rate at the near site).
From atmospheric and solar neutrino oscillation experiments, it is known that two mixing angles of the MNS matrix are large and the third is smaller. This is in sharp contrast to the CKM matrix, in which all three angles are small and hierarchically decreasing. The CP-violating phase of the MNS matrix is constrained, as of April 2020, to lie somewhere between −2 and −178 degrees, from the T2K experiment.
If the neutrino mass proves to be of Majorana type (making the neutrino its own antiparticle), it is then possible that the MNS matrix has more than one phase.
Since experiments observing neutrino oscillation measure the squared mass difference and not absolute mass, one might claim that the lightest neutrino mass is exactly zero, without contradicting observations. This is however regarded as unlikely by theorists.
Origins of neutrino mass
The question of how neutrino masses arise has not been answered conclusively. In the Standard Model of particle physics, fermions only have intrinsic mass because of interactions with the Higgs field (see Higgs boson). These interactions require both left- and right-handed versions of the fermion (see chirality). However, only left-handed neutrinos have been observed so far.
Neutrinos may have another source of mass through the Majorana mass term. This type of mass applies for electrically neutral particles since otherwise it would allow particles to turn into anti-particles, which would violate conservation of electric charge.
The smallest modification to the Standard Model, which only has left-handed neutrinos, is to allow these left-handed neutrinos to have Majorana masses. The problem with this is that the neutrino masses are surprisingly small compared to those of the rest of the known particles (at least 600,000 times smaller than the mass of an electron), which, while it does not invalidate the theory, is widely regarded as unsatisfactory, as this construction offers no insight into the origin of the neutrino mass scale.
The next simplest addition would be to add into the Standard Model right-handed neutrinos that interact with the left-handed neutrinos and the Higgs field in an analogous way to the rest of the fermions. These new neutrinos would interact with the other fermions solely in this way and hence would not be directly observable, so are not phenomenologically excluded. The problem of the disparity of the mass scales remains.
Seesaw mechanism
The most popular conjectured solution currently is the seesaw mechanism, where right-handed neutrinos with very large Majorana masses are added. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, which is proportional to the reciprocal of the heavy mass.
If it is assumed that the neutrinos interact with the Higgs field with approximately the same strengths as the charged fermions do, the heavy mass should be close to the GUT scale. Because the Standard Model has only one fundamental mass scale, all particle masses must arise in relation to this scale.
There are other varieties of seesaw and there is currently great interest in the so-called low-scale seesaw schemes, such as the inverse seesaw mechanism.
The addition of right-handed neutrinos has the effect of adding new mass scales, unrelated to the mass scale of the Standard Model, hence the observation of heavy right-handed neutrinos would reveal physics beyond the Standard Model. Right-handed neutrinos would help to explain the origin of matter through a mechanism known as leptogenesis.
Other sources
There are alternative ways to modify the standard model that are similar to the addition of heavy right-handed neutrinos (e.g., the addition of new scalars or fermions in triplet states) and other modifications that are less similar (e.g., neutrino masses from loop effects and/or from suppressed couplings). One example of the last type of model is provided by certain versions of supersymmetric extensions of the standard model of fundamental interactions, where R-parity is not a symmetry. There, the exchange of supersymmetric particles such as squarks and sleptons can break lepton number and lead to neutrino masses. These interactions are normally excluded from theories as they come from a class of interactions that lead to unacceptably rapid proton decay if they are all included. These models have little predictive power and are not able to provide a cold dark matter candidate.
Oscillations in the early universe
During the early universe when particle concentrations and temperatures were high, neutrino oscillations could have behaved differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics for this system is non-trivial and involves neutrino oscillations in a dense neutrino gas.
See also
MSW effect
Majoron
Neutral kaon mixing
Lorentz-violating neutrino oscillations
Neutral particle oscillation
Neutrino astronomy
Notes
References
Further reading
External links
Review Articles on arxiv.org
Neutrinos
Standard Model
Electroweak theory
Physics beyond the Standard Model | Neutrino oscillation | Physics | 5,778 |
5,334,646 | https://en.wikipedia.org/wiki/Minifloat | In computing, minifloats are floating-point values represented with very few bits. This reduced precision makes them ill-suited for general-purpose numerical calculations, but they are useful for special purposes such as:
Computer graphics, where iterations are small and precision has aesthetic effects.
Machine learning, which can be relatively insensitive to numeric precision. bfloat16 and fp8 are common formats.
Additionally, they are frequently encountered as a pedagogical tool in computer-science courses to demonstrate the properties and structures of floating-point arithmetic and IEEE 754 numbers.
Minifloats with 16 bits are half-precision numbers (opposed to single and double precision). There are also minifloats with 8 bits or even fewer.
Minifloats can be designed following the principles of the IEEE 754 standard. In this case they must obey the (not explicitly written) rules for the frontier between subnormal and normal numbers and must have special patterns for infinity and NaN. Normalized numbers are stored with a biased exponent. The new revision of the standard, IEEE 754-2008, has 16-bit binary minifloats.
Notation
A minifloat is usually described using a tuple of four numbers, (S, E, M, B):
S is the length of the sign field. It is usually either 0 or 1.
E is the length of the exponent field.
M is the length of the mantissa (significand) field.
B is the exponent bias.
A minifloat format denoted by (S, E, M, B) is, therefore, S + E + M bits long. The (S, E, M, B) notation can be converted to a (B, P, L, U) format as (2, M + 1, 1 − B, 2^E − 2 − B) (with IEEE use of exponents). For the 1.4.3 example below with bias 7, this gives an exponent range of −6 to 7, matching the least and greatest normalized numbers shown there.
Example 8-bit float (1.4.3)
A minifloat in 1 byte (8 bits) with 1 sign bit, 4 exponent bits and 3 significand bits (in short, a 1.4.3 minifloat) is demonstrated here. The exponent bias is defined as 7 to center the values around 1 to match other IEEE 754 floats, so (for most values) the actual multiplier for an exponent field e is 2^(e − 7). All IEEE 754 principles should be valid.
Numbers in a different base are marked with a subscript giving the base: for example, 101₂ = 5. The bit patterns have spaces to visualize their parts.
Representation of zero
Zero is represented as a zero exponent with a zero mantissa. The zero exponent means zero is a subnormal number with a leading "0." prefix, and with the zero mantissa all bits after the decimal point are zero, so this value is interpreted as 0.000₂ × 2^(1 − 7) = 0. Floating point numbers use a signed zero, so −0 is also available and is equal to positive 0.
0 0000 000 = 0
1 0000 000 = −0
Subnormal numbers
For subnormal numbers, the significand is extended with "0." and the stored exponent of 0 is treated as 1 (one higher), the same exponent as the least normalized numbers:
0 0000 001 = 0.001₂ × 2^(1 − 7) = 0.125 × 2^−6 = 0.001953125 (least subnormal number)
...
0 0000 111 = 0.111₂ × 2^(1 − 7) = 0.875 × 2^−6 = 0.013671875 (greatest subnormal number)
Normalized numbers
The significand is extended with "1.":
0 0001 000 = 1.000₂ × 2^(1 − 7) = 1 × 2^−6 = 0.015625 (least normalized number)
0 0001 001 = 1.001₂ × 2^(1 − 7) = 1.125 × 2^−6 = 0.017578125
...
0 0111 000 = 1.000₂ × 2^(7 − 7) = 1 × 2^0 = 1
0 0111 001 = 1.001₂ × 2^(7 − 7) = 1.125 × 2^0 = 1.125 (least value above 1)
...
0 1110 000 = 1.000₂ × 2^(14 − 7) = 1.000 × 2^7 = 128
0 1110 001 = 1.001₂ × 2^(14 − 7) = 1.125 × 2^7 = 144
...
0 1110 110 = 1.110₂ × 2^(14 − 7) = 1.750 × 2^7 = 224
0 1110 111 = 1.111₂ × 2^(14 − 7) = 1.875 × 2^7 = 240 (greatest normalized number)
Infinity
Infinity values have the highest exponent, with the mantissa set to zero. The sign bit can be either positive or negative.
0 1111 000 = +infinity
1 1111 000 = −infinity
Not a number
NaN values have the highest exponent, with a non-zero value for the mantissa. A float with a 1-bit sign and 3-bit mantissa has 2 × (2³ − 1) = 14 NaN values.
s 1111 mmm = NaN (if mmm ≠ 000)
Table of values
This is a chart of all possible values for this example 8-bit float.
There are only 242 different non-NaN values (if +0 and −0 are regarded as different), because 14 of the bit patterns represent NaNs.
Alternative bias values
At these small sizes, other bias values may be interesting; for instance, a bias of −2 will make the numbers 0–16 have the same bit representation as the integers 0–16, at the cost that no non-integer values can be represented.
0 0000 000 = 0.000₂ × 2^(1−(−2)) = 0.0 × 2^3 = 0 (subnormal number)
0 0000 001 = 0.001₂ × 2^(1−(−2)) = 0.125 × 2^3 = 1 (subnormal number)
0 0000 111 = 0.111₂ × 2^(1−(−2)) = 0.875 × 2^3 = 7 (subnormal number)
0 0001 000 = 1.000₂ × 2^(1−(−2)) = 1.000 × 2^3 = 8 (normalized number)
0 0001 111 = 1.111₂ × 2^(1−(−2)) = 1.875 × 2^3 = 15 (normalized number)
0 0010 000 = 1.000₂ × 2^(2−(−2)) = 1.000 × 2^4 = 16 (normalized number)
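Using the illustrative decode_143 sketch from above with this alternative bias confirms that the first seventeen bit patterns decode to the integers 0 through 16:

# Bias -2: bit patterns 0..16 decode to exactly the integers 0..16.
assert [decode_143(b, bias=-2) for b in range(17)] == [float(i) for i in range(17)]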
Different bit allocations
The above describes an example 8-bit float with 1 sign bit, 4 exponent bits and 3 significand bits, which balances range and precision. However, any bit allocation is possible. A format could give more of the bits to the exponent for more dynamic range at less precision, or more of the bits to the significand for more precision at less dynamic range. At the extreme, it is possible to allocate all the bits to the exponent, or all but one to the significand, leaving the exponent with only one bit. The exponent must be given at least one bit; otherwise it no longer makes sense as a float and simply becomes a signed number.
Here is a chart of all possible values for a different 8-bit float with 1 sign bit, 3 exponent bits and 4 significand bits. Having 1 more significand bit than exponent bits ensures that the precision remains at least 0.5 throughout the entire range.
Tables like the above can be generated for any combination of SEMB (sign, exponent, mantissa/significand, and bias) values using a script in Python or in GDScript.
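As a sketch of such a script, the following Python generator (an illustrative helper, not a standard library function) enumerates every encoding of an arbitrary (S, E, M, B) format under the IEEE conventions described above:

def minifloat_values(S, E, M, B):
    """Yield (bit pattern, value) for every encoding of an (S, E, M, B) minifloat."""
    for bits in range(2 ** (S + E + M)):
        sign = -1.0 if S and (bits >> (E + M)) & 1 else 1.0
        exponent = (bits >> M) & (2 ** E - 1)
        mantissa = bits & (2 ** M - 1)
        if exponent == 2 ** E - 1:                    # top exponent: infinity and NaN
            value = sign * float('inf') if mantissa == 0 else float('nan')
        elif exponent == 0:                           # zero and subnormal numbers
            value = sign * (mantissa / 2 ** M) * 2.0 ** (1 - B)
        else:                                         # normalized numbers
            value = sign * (1 + mantissa / 2 ** M) * 2.0 ** (exponent - B)
        yield bits, value

# Print the 1.4.3 table; (1, 3, 4, B) or the tiny formats below work the same way.
for bits, value in minifloat_values(1, 4, 3, 7):
    print(f"{bits:08b} = {value}")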
Arithmetic
Addition
The graphic demonstrates the addition of even smaller (1.3.2.3)-minifloats with 6 bits. This floating-point system follows the rules of IEEE 754 exactly. NaN as an operand always produces a NaN result. Inf − Inf and (−Inf) + Inf result in NaN too (green area). Inf can be augmented and decremented by finite values without change. Sums with finite operands can give an infinite result (e.g. 14.0 + 3.0 = +Inf, the cyan area; −Inf is the magenta area). The range of the finite operands is filled with the curves x + y = c, where c is always one of the representable float values (blue and red for positive and negative results respectively).
Subtraction, multiplication and division
The other arithmetic operations can be illustrated similarly. Common to all of them is the final rounding step, sketched below.
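Since every value of a tiny format can be enumerated, the arithmetic can be sketched by computing exactly in double precision and then rounding to the nearest representable value by brute force. This reuses the illustrative minifloat_values helper from above and simplifies IEEE 754 in two ways noted in the comments:

import math

def round_to_minifloat(x, S=1, E=4, M=3, B=7):
    """Round x to the nearest (S, E, M, B) minifloat value by exhaustive search.
    Simplifications: ties are not rounded to even, and overflow to infinity
    uses a crude doubling threshold instead of the exact IEEE rule."""
    if math.isnan(x):
        return float('nan')
    finite = [v for _, v in minifloat_values(S, E, M, B) if math.isfinite(v)]
    largest = max(finite)
    if x > 2 * largest:
        return float('inf')
    if x < -2 * largest:
        return float('-inf')
    return min(finite, key=lambda v: abs(v - x))

print(round_to_minifloat(1.1))   # -> 1.125, the nearest 1.4.3 value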
Other sizes
The Radeon R300 and R420 GPUs used an "fp24" floating-point format with 7 bits of exponent and 16 bits (+1 implicit) of mantissa.
"Full Precision" in Direct3D 9.0 is a proprietary 24-bit floating-point format. Microsoft's D3D9 (Shader Model 2.0) graphics API initially supported both FP24 (as in ATI's R300 chip) and FP32 (as in Nvidia's NV30 chip) as "Full Precision", as well as FP16 as "Partial Precision" for vertex and pixel shader calculations performed by the graphics hardware.
Khronos defines 10-bit and 11-bit float formats for use with Vulkan. Both formats have no sign bit and a 5-bit exponent. The 10-bit format has a 5-bit mantissa, and the 11-bit format has a 6-bit mantissa.
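A decoder for the 11-bit variant can be sketched the same way as the 8-bit example above; the layout is (0, 5, 6, B) in the earlier notation, with a half-precision-style bias of 15 (the bias value is an assumption based on the usual layout of these packed formats, not quoted from the Vulkan specification):

def decode_ufloat11(bits):
    """Decode an unsigned 11-bit float: 5 exponent bits, 6 mantissa bits, bias 15."""
    exponent, mantissa = bits >> 6, bits & 0x3F
    if exponent == 0x1F:                               # infinity or NaN
        return float('inf') if mantissa == 0 else float('nan')
    if exponent == 0:                                  # zero and subnormals
        return (mantissa / 64) * 2.0 ** -14
    return (1 + mantissa / 64) * 2.0 ** (exponent - 15)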
IEEE SA Working Group P3109 is currently working on a standard for 8-bit minifloats optimized for machine learning. The current draft defines not one format but a family of 7 formats, named "binary8pP", where "P" is a number from 1 to 7. These floats are designed to be compact and efficient, but they do not follow the same semantics as other IEEE floats and are missing features such as negative zero and multiple NaN values. Infinity is encoded with both the exponent and significand all ones, instead of the usual IEEE encoding in which the exponent is all ones and the significand is all zeroes.
4 bits and fewer
The smallest possible float size that follows all IEEE principles, including normalized numbers, subnormal numbers, signed zero, signed infinity, and multiple NaN values, is a 4-bit float with 1-bit sign, 2-bit exponent, and 1-bit mantissa. In the table below, the columns have different values for the sign and mantissa bits, and the rows are different values for the exponent bits.
If normalized numbers are not required, the size can be reduced to 3-bit by reducing the exponent down to 1.
In situations where the sign bit can be excluded, each of the above examples can be reduced by 1 bit further, keeping only the left half of the above tables. A 2-bit float with 1-bit exponent and 1-bit mantissa would only have 0, 1, Inf, NaN values.
If the mantissa is allowed to be 0-bit, a 1-bit float format would have a 1-bit exponent, and the only two values would be 0 and Inf. The exponent must be at least 1 bit or else it no longer makes sense as a float (it would just be a signed number).
4-bit floating-point numbers, without the four special IEEE values, have found use in accelerating large language models.
In embedded devices
Minifloats are also commonly used in embedded devices, especially on microcontrollers where floating point must be emulated in software. To speed up the computation, the mantissa typically occupies exactly half of the bits, so that the fields fall on a register or byte boundary and can be addressed without shifting.
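For example, with a hypothetical 16-bit minifloat whose 8-bit mantissa fills the low byte, the fields can be picked apart byte-wise; on a small microcontroller the two bytes can simply be read separately instead of shifting a 16-bit register (the 1.7.8 layout here is an illustrative assumption):

def split_16bit_minifloat(halfword):
    """Split a 16-bit minifloat laid out as 1 sign, 7 exponent, 8 mantissa bits."""
    mantissa = halfword & 0xFF   # low byte holds the whole mantissa
    high = halfword >> 8         # high byte holds sign and exponent together
    sign = high >> 7
    exponent = high & 0x7F
    return sign, exponent, mantissa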
See also
Fixed-point arithmetic
Half-precision floating-point format
bfloat16 floating-point format
G.711 A-Law
References
External links
OpenGL half float pixel
Floating point types
Computer arithmetic | Minifloat | Mathematics | 2,484 |
19,509,478 | https://en.wikipedia.org/wiki/Multibrot%20set | In mathematics, a Multibrot set is the set of values in the complex plane whose absolute value remains below some finite value throughout iterations by a member of the general monic univariate polynomial family of recursions
z_{n+1} = z_n^d + c,
where d ≥ 2. The name is a portmanteau of multiple and Mandelbrot set. The same construction applied to the Julia set gives the Multijulia set. The exponent d may be further generalized to negative and fractional values.
Examples
The case of
z_{n+1} = z_n^2 + c
is the classic Mandelbrot set from which the name is derived.
The sets for other values of d also show fractal images when they are plotted on the complex plane.
Each of the examples of various powers d shown below is plotted to the same scale. Values of c belonging to the set are black. Values of c that grow without bound under the recursion, and thus do not belong to the set, are plotted in different colours that appear as contours, depending on the number of recursions needed for a value to exceed a fixed magnitude in the Escape Time algorithm.
Positive powers
The example d = 2 is the original Mandelbrot set. The examples for d > 2 are often called multibrot sets. These sets include the origin and have fractal perimeters, with (d − 1)-fold rotational symmetry.
Negative powers
When d is negative the set appears to surround, but not include, the origin. In fact the set has no hole in its middle: only the origin itself is excluded, because 0 (that is, 0 + 0i) raised to a negative power is undefined rather than infinite, so the iteration diverges to an undefined value there rather than to infinity. This can be verified by plotting the Lyapunov exponent. There is interesting complex behaviour in the contours between the set and the origin, in a star-shaped area with rotational symmetry. The sets appear to have a circular perimeter, but this too is an artifact of the fixed maximum radius allowed by the Escape Time algorithm; the sets actually extend in all directions to infinity.
Fractional powers
Rendering along the exponent
An alternative method is to render the exponent along the vertical axis. This requires either fixing the real or the imaginary value, and rendering the remaining value along the horizontal axis. The resulting set rises vertically from the origin in a narrow column to infinity. Magnification reveals increasing complexity. The first prominent bump or spike is seen at an exponent of 2, the location of the traditional Mandelbrot set at its cross-section. The third image here renders on a plane that is fixed at a 45-degree angle between the real and imaginary axes.
Rendering images
All the above images are rendered using an Escape Time algorithm that identifies points outside the set in a simple way. Much greater fractal detail is revealed by plotting the Lyapunov exponent, as shown by the example below. The Lyapunov exponent is the error growth-rate of a given sequence. First calculate the iteration sequence with N iterations, then calculate the exponent as
λ = (1/N) Σ_{n=1}^{N} ln |dz_{n+1}/dz_n| = (1/N) Σ_{n=1}^{N} ln |d · z_n^(d−1)|
and if the exponent is negative the sequence is stable. The white pixels in the picture are the parameters c for which the exponent is positive, i.e. unstable. The colours show the periods of the cycles to which the orbits are attracted. All points coloured dark blue (outside) are attracted by a fixed point; all points in the middle (lighter blue) are attracted by a period-2 cycle; and so on.
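A direct way to compute this is to average the log-derivative of the map along the orbit. The Python sketch below starts the sum at z₁ = c, since the derivative at the critical point z₀ = 0 is zero and its logarithm is undefined (the function name and escape threshold are illustrative choices):

import math

def lyapunov_exponent(c, d=2, N=1000):
    """Estimate the Lyapunov exponent of z -> z**d + c along the critical orbit,
    averaging log|d * z**(d-1)|, the modulus of the map's derivative."""
    z = complex(c)                      # first iterate of z0 = 0
    total = 0.0
    for _ in range(N):
        if z == 0:
            return float('-inf')        # orbit hits the critical point: super-stable
        total += math.log(abs(d * z ** (d - 1)))
        z = z ** d + c
        if abs(z) > 1e6:
            return float('inf')         # orbit escapes: treat as unstable
    return total / N

print(lyapunov_exponent(-0.5))   # about -0.31: attracted to a fixed point, stable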
Pseudocode
ESCAPE TIME ALGORITHM
for each pixel on the screen do
x = x0 = x co-ordinate of pixel
y = y0 = y co-ordinate of pixel
iteration := 0
max_iteration := 1000
while x*x + y*y ≤ (2*2) and iteration < max_iteration do
/* INSERT CODE(S) FOR z^d FROM TABLE BELOW */
iteration := iteration + 1
if iteration = max_iteration then
colour := black
else
colour := iteration
plot(x0, y0, colour)
The complex value z has coordinates (x, y) on the complex plane and is raised to various powers inside the iteration loop by short fragments of real-arithmetic code, one per power (the original table of fragments is not reproduced here). Powers not covered by a single fragment can be obtained by concatenating fragments. A Python sketch of the loop follows.
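This sketch uses complex arithmetic for the z^d step directly; the comment shows the kind of real-arithmetic expansion a table fragment would perform for d = 2 (names and the default escape radius are illustrative):

def multibrot_escape_time(c, d=2, max_iteration=1000, radius=2.0):
    """Iterate z -> z**d + c from z = 0 and return how many iterations it takes
    for |z| to exceed radius, or max_iteration if the orbit stays bounded."""
    z = 0 + 0j
    for iteration in range(max_iteration):
        if abs(z) > radius:
            return iteration            # escaped: colour by iteration count
        # For d = 2 a table fragment would expand z**2 into real arithmetic:
        #   xtemp = x*x - y*y + c.real
        #   y = 2*x*y + c.imag
        #   x = xtemp
        z = z ** d + c
    return max_iteration                # bounded: plot the pixel black

print(multibrot_escape_time(-1))          # 1000: c = -1 stays bounded, in the set
print(multibrot_escape_time(0.5 + 0.5j))  # 5: escapes quickly, outside the set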
References
Complex dynamics
Fractals
Articles containing video clips
Articles with example pseudocode | Multibrot set | Mathematics | 916 |