Dataset schema: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items)
2,247,115
https://en.wikipedia.org/wiki/Iodine%20%28125I%29%20human%20albumin
Iodine (125I) human albumin (trade name Jeanatope) is human serum albumin iodinated with iodine-125, typically injected to aid in the determination of total blood and plasma volume. Iodine-131 iodinated albumin (trade name Volumex) is used for the same purposes.

Medical uses
Iodine (125I) human albumin is used to determine a person's blood volume. For this purpose, a defined amount of radioactivity in the form of this drug is injected into a vein, and blood samples are drawn from a different body location after five and fifteen minutes. From the radioactivity of these samples, the original radioactivity per unit of blood volume can be calculated; knowing the total amount of radioactivity injected, one can then calculate the total blood volume (see the code sketch below). The blood plasma volume can be calculated with a similar method; the main difference is that the drawn blood samples have to be centrifuged to separate the plasma from the blood cells.

Contraindications
The US Food and Drug Administration lists no contraindications for this drug.

Adverse effects
There is a theoretical possibility of allergic reactions after repeated use of this medication.

Pharmacokinetics
Iodine-125 is a radioactive isotope of iodine that decays by electron capture with a physical half-life of 60.14 days. The biological half-life of iodine (125I) human albumin in normal individuals has been reported to be approximately 14 days. Its radioactivity is excreted almost exclusively via the kidneys.
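A minimal sketch of the dilution arithmetic described under Medical uses. The numbers are made up, and the log-linear back-extrapolation of the sampled activity to the injection time is an assumption of this sketch, not something the article specifies:

    import math

    injected_activity = 2.5e6                  # counts/min injected (illustrative)
    samples = [(5.0, 480.0), (15.0, 450.0)]    # (minutes after injection, counts/min per mL)

    # Extrapolate the activity concentration back to the injection time t = 0.
    (t1, c1), (t2, c2) = samples
    slope = (math.log(c2) - math.log(c1)) / (t2 - t1)
    c0 = math.exp(math.log(c1) - slope * t1)   # counts/min per mL at t = 0

    # Total volume = total injected activity / activity per unit volume.
    blood_volume_mL = injected_activity / c0
    print(f"estimated blood volume: {blood_volume_mL:.0f} mL")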
Iodine (125I) human albumin
[ "Chemistry" ]
350
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
14,684,966
https://en.wikipedia.org/wiki/MacDowell%E2%80%93Mansouri%20action
The MacDowell–Mansouri action (named after S. W. MacDowell and Freydoon Mansouri) is an action that can be used to derive Einstein's field equations of general relativity. It can usefully be formulated in terms of Cartan geometry.

Further reading
Wise, D. (2010). "MacDowell-Mansouri gravity and Cartan geometry". Class. Quantum Grav. 27, 155010.
Reid, James A.; Wang, Charles H.-T. (2014). "Conformal holonomy in MacDowell-Mansouri gravity". J. Math. Phys. 55, 032501.
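For orientation, the action has the schematic form below. This is a sketch following the Cartan-geometric formulation surveyed in the Wise reference; the overall normalisation and the gauge group (SO(4,1) or SO(3,2), depending on the sign of the cosmological constant) vary between conventions:

    $S_{\mathrm{MM}} \propto \int_M \epsilon_{abcd}\, F^{ab} \wedge F^{cd}$,

where $F$ is the curvature 2-form of a Cartan connection that combines the spin connection and the coframe field. Expanding $F$ in those components yields the Palatini form of the Einstein-Hilbert action with a cosmological term, plus a topological (Gauss-Bonnet) term that does not affect the field equations.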
MacDowell–Mansouri action
[ "Physics" ]
146
[ "General relativity", "Relativity stubs", "Theory of relativity" ]
14,685,581
https://en.wikipedia.org/wiki/Hybrid%20physical%E2%80%93chemical%20vapor%20deposition
Hybrid physical–chemical vapor deposition (HPCVD) is a thin-film deposition technique that combines physical vapor deposition (PVD) with chemical vapor deposition (CVD). For magnesium diboride (MgB2) thin-film growth, for instance, the HPCVD process uses diborane (B2H6) as the boron precursor gas, but unlike conventional CVD, which only uses gaseous sources, heated bulk magnesium pellets (99.95% pure) are used as the Mg source in the deposition process. Since the process involves both chemical decomposition of a precursor gas and physical evaporation of bulk metal, it is named hybrid physical–chemical vapor deposition.

System configuration
The HPCVD system usually consists of a water-cooled reactor chamber, a gas inlet and flow control system, a pressure maintenance system, a temperature control system, and a gas exhaust and cleaning system. The main difference between HPCVD and other CVD systems is in the heating unit: in HPCVD, both the substrate and the solid metal source are heated by the heating module. The conventional HPCVD system usually has only one heater; the substrate and solid metal source sit on the same susceptor and are heated inductively or resistively at the same time. Above a certain temperature, the bulk metal source melts and generates a high vapor pressure in the vicinity of the substrate. The precursor gas is then introduced into the chamber and decomposes around the substrate at high temperature. The atoms from the decomposed precursor gas react with the metal vapor, forming a thin film on the substrate. The deposition ends when the precursor gas is switched off. The main drawback of the single-heater setup is that the metal source temperature and the substrate temperature cannot be controlled independently: whenever the substrate temperature is changed, the metal vapor pressure changes as well, limiting the range of accessible growth parameters. In the two-heater HPCVD arrangement, the metal source and substrate are heated by two separate heaters, which provides more flexible control of the growth parameters.

Magnesium diboride thin films by HPCVD
HPCVD has been the most effective technique for depositing magnesium diboride (MgB2) thin films. Other MgB2 deposition technologies either yield a reduced superconducting transition temperature and poor crystallinity, or require ex situ annealing in Mg vapor, and the surfaces of the resulting MgB2 films are rough and non-stoichiometric. In contrast, an HPCVD system can grow high-quality pure MgB2 films in situ with smooth surfaces, which are required to make reproducible, uniform Josephson junctions, the fundamental element of superconducting circuits.

Principle
The theoretical phase diagram of the Mg-B system shows that a high Mg vapor pressure is required for the thermodynamic phase stability of MgB2 at elevated temperature. MgB2 is a line compound: as long as the Mg/B ratio is above the stoichiometric 1:2, any extra Mg at elevated temperature stays in the gas phase and is evacuated. Also, once MgB2 is formed, it has to overcome a significant kinetic barrier to thermally decompose, so one does not have to be overly concerned about maintaining a high Mg vapor pressure during the cooling stage of the film deposition.

Pure films
During the growth of magnesium diboride thin films by HPCVD, the carrier gas is purified hydrogen (H2) at a pressure of about 100 Torr; this H2 environment prevents oxidation during the deposition. Bulk pure Mg pieces are placed next to the substrate on top of the susceptor. When the susceptor is heated to about 650 °C, the Mg pieces are heated as well, which generates a high Mg vapor pressure in the vicinity of the substrate. Diborane (B2H6) is used as the boron source: the MgB2 film starts to grow when B2H6 is introduced into the reactor chamber, the growth rate is controlled by the flow rate of the B2H6/H2 mixture, and growth stops when the precursor gas is switched off.

Carbon-alloyed films
To improve the performance of superconducting magnesium diboride thin films in a magnetic field, it is desirable to dope impurities into the films. The HPCVD technique is also an efficient method for growing carbon-doped (carbon-alloyed) MgB2 thin films. Carbon-alloyed MgB2 films can be grown in the same way as pure MgB2 films, except that a metalorganic magnesium precursor, bis(methylcyclopentadienyl)magnesium, is added to the carrier gas. Carbon-alloyed MgB2 thin films grown by HPCVD exhibit an extraordinarily high upper critical field (Hc2): Hc2 above 60 T at low temperatures is observed when the magnetic field is parallel to the ab-plane.

See also
Chemical vapor deposition
Physical vapor deposition
Hybrid physical–chemical vapor deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
1,042
[ "Thin film deposition", "Coatings", "Thin films", "Planes (geometry)", "Solid state engineering" ]
14,691,611
https://en.wikipedia.org/wiki/D-block%20contraction
The d-block contraction (sometimes called scandide contraction) is a term used in chemistry to describe the effect of full d orbitals on the period 4 elements. The elements in question are gallium, germanium, arsenic, selenium, bromine, and krypton; their electronic configurations include completely filled d orbitals (d10).

The d-block contraction is best illustrated by comparing properties of the group 13 elements, among which gallium can be seen to be anomalous. The most obvious effect is that the sum of the first three ionization potentials of gallium is higher than that of aluminium, whereas the trend down the group would be for it to be lower. By contrast, the sums of the first three ionization potentials of the elements B, Al, Sc, Y, and La (Sc, Y, and La having three valence electrons above a noble gas electron core) show a smooth reduction down the series. Another effect of the d-block contraction is that the Ga3+ ion is smaller than expected, being closer in size to Al3+. Care must be taken in interpreting the ionization potentials of indium and thallium, since other effects, e.g. the inert-pair effect, become increasingly important for the heavier members of the group.

The cause of the d-block contraction is the poor shielding of the nuclear charge by the electrons in the d orbitals: the outer valence electrons are more strongly attracted by the nucleus, causing the observed increase in ionization potentials. This can be illustrated semi-quantitatively with Slater's rules, as in the sketch below. The d-block contraction can be compared to the lanthanide contraction, which is caused by inadequate shielding of the nuclear charge by electrons occupying f orbitals.

See also
Periodic table
Electronegativity
Electron affinity
Effective nuclear charge
Electron configuration
Exchange interaction
Lanthanide contraction
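The sketch below is an illustration, not part of the article; Slater's rules are a crude empirical shielding model, and the electron counts used are standard textbook values:

    # Effective nuclear charge Z_eff = Z - S for the outermost p electron, using
    # Slater's rules: same (ns, np) group contributes 0.35 each, the (n-1) shell
    # (including d electrons) 0.85 each, deeper shells 1.00 each.
    def z_eff(Z: int, same_group: int, n_minus_1: int, deeper: int) -> float:
        return Z - (0.35 * same_group + 0.85 * n_minus_1 + 1.00 * deeper)

    # Al (Z=13), 3p electron: 2 other electrons in (3s,3p), 8 with n=2, 2 with n=1.
    print(z_eff(13, 2, 8, 2))    # 3.50
    # Ga (Z=31), 4p electron: 2 others in (4s,4p), 18 with n=3 (3s,3p and the
    # poorly shielding 3d10), 10 deeper.
    print(z_eff(31, 2, 18, 10))  # 5.00, a markedly higher Z_eff than for Al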
D-block contraction
[ "Physics", "Chemistry", "Materials_science" ]
396
[ "Atomic radius", "Condensed matter physics", "Chemical bonding", "Atoms", "Matter" ]
17,506,694
https://en.wikipedia.org/wiki/Wang%20and%20Landau%20algorithm
The Wang and Landau algorithm, proposed by Fugao Wang and David P. Landau, is a Monte Carlo method designed to estimate the density of states of a system. The method performs a non-Markovian random walk that builds up the density of states by quickly visiting the entire available energy spectrum. The Wang and Landau algorithm is an important method for obtaining the density of states required to perform a multicanonical simulation. It can be applied to any system characterized by a cost (or energy) function; for instance, it has been applied to the solution of numerical integrals and the folding of proteins. Wang–Landau sampling is related to the metadynamics algorithm.

Overview
The Wang and Landau algorithm is used to obtain an estimate of the density of states of a system characterized by a cost function. It uses a non-Markovian stochastic process which asymptotically converges to a multicanonical ensemble (i.e. to a Metropolis–Hastings algorithm whose sampling distribution is inverse to the density of states). The major consequence is that this sampling distribution leads to a simulation in which the energy barriers are invisible; the algorithm therefore visits all accessible states (favorable and less favorable) much faster than a Metropolis algorithm.

Algorithm
Consider a system defined on a phase space $\Omega$, with a cost function $E$ (e.g. the energy) bounded on a spectrum $E \in \Gamma = [E_{\min}, E_{\max}]$, which has an associated density of states $\rho(E)$ that is to be estimated. The estimator is written $\hat\rho(E) = e^{S(E)}$, where $S(E)$ is the microcanonical entropy. Because the Wang and Landau algorithm works on discrete spectra, the spectrum $\Gamma$ is divided into $N$ discrete values $E_i$ separated by $\Delta$, such that $N = (E_{\max} - E_{\min})/\Delta$. Given this discrete spectrum, the algorithm is initialized by: setting all entries of the microcanonical entropy to zero, $S(E_i) = 0$; initializing the modification factor $f = 1$; and initializing the system randomly, by putting it in a random configuration $\mathbf{x} \in \Omega$.

The algorithm then performs a multicanonical-ensemble simulation: a Metropolis–Hastings random walk in the phase space of the system with sampling distribution $P(\mathbf{x}) \propto e^{-S(E(\mathbf{x}))}$ and a proposal distribution $g(\mathbf{x} \to \mathbf{x}')$. A histogram $H(E)$ of visited energies is stored. As in the Metropolis–Hastings algorithm, a proposal-acceptance step is performed: a state $\mathbf{x}'$ is proposed according to the arbitrary proposal distribution $g$, and is accepted or refused with probability $A(\mathbf{x} \to \mathbf{x}') = \min\left(1,\; e^{S(E) - S(E')}\, \frac{g(\mathbf{x}' \to \mathbf{x})}{g(\mathbf{x} \to \mathbf{x}')}\right)$, where $E = E(\mathbf{x})$ and $E' = E(\mathbf{x}')$.

After each proposal-acceptance step, the system transits to some energy value $E_i$, the histogram entry $H(E_i)$ is incremented by one, and the following update is performed: $S(E_i) \leftarrow S(E_i) + f$. This is the crucial step of the algorithm, and it is what makes the Wang and Landau algorithm non-Markovian: the stochastic process now depends on the history of the process. The next time there is a proposal to a state with that particular energy $E_i$, the proposal is more likely to be refused; in this sense, the algorithm forces the system to visit the whole spectrum equally. The consequence is that the histogram $H(E)$ becomes flatter and flatter. However, this flatness depends on how well the calculated entropy approximates the exact entropy, which naturally depends on the value of $f$. To approximate the exact entropy (and thus the histogram's flatness) better and better, $f$ is decreased after $M$ proposal-acceptance steps: $f \leftarrow f/2$.
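A minimal, self-contained illustration of the update loop just described. This is a sketch under stated assumptions: a discrete one-dimensional walker with cost $E(x) = x^2$, a crude 95% flatness test, and illustrative parameter values, none of which come from the article:

    import math
    import random

    L = 20                                           # walker lives on x = -L..L
    energies = sorted({x * x for x in range(-L, L + 1)})
    index = {E: i for i, E in enumerate(energies)}   # energy value -> bin number

    entropy = [0.0] * len(energies)   # running estimate of S(E) = ln rho(E)
    hist = [0] * len(energies)        # histogram H(E) of visited energies
    f, f_min = 1.0, 1e-6              # modification factor and stopping threshold

    x = random.randint(-L, L)
    while f > f_min:
        xp = x + random.choice((-1, 1))              # symmetric proposal
        if -L <= xp <= L and random.random() < math.exp(
            entropy[index[x * x]] - entropy[index[xp * xp]]
        ):
            x = xp                                   # accept; otherwise stay put
        i = index[x * x]
        hist[i] += 1
        entropy[i] += f                              # S(E) <- S(E) + f
        if min(hist) > 0.95 * (sum(hist) / len(hist)):   # crude flatness test
            hist = [0] * len(hist)
            f *= 0.5                                 # refine the modification factor

    # entropy[] now estimates ln rho(E) up to an additive constant; for this toy
    # system the exact degeneracies are 1 for E = 0 and 2 for every E > 0.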
It was later shown that updating $f$ by constantly dividing by two can lead to saturation errors. A small modification of the Wang and Landau method that avoids this problem is to use a modification factor proportional to $1/t$, where $t$ is proportional to the number of steps of the simulation.

Test system
Suppose we want to obtain the density of states for the harmonic oscillator potential $E(x) = x^2$. The analytical DOS is given by $\rho(E) = \int \delta(E - x^2)\, dx$; performing the integral gives $\rho(E) \propto E^{-1/2}$. In general, the DOS for a multidimensional harmonic oscillator is given by some power of $E$, with an exponent that is a function of the dimension of the system. Hence, a simple harmonic oscillator potential can be used to test the accuracy of the Wang–Landau algorithm, because the analytic form of the density of states is already known: one simply compares the estimated density of states obtained by the Wang–Landau algorithm with the analytic result.

Sample code
The following is a sample of the Wang–Landau algorithm in Python, where we assume that a symmetric proposal distribution g is used. The code considers a "system" object which exposes the underlying system being studied; f, epsilon, H, and entropy are assumed to be initialized as described above.

    currentEnergy = system.randomConfiguration()  # a random initial configuration

    while f > epsilon:
        system.proposeConfiguration()             # a new configuration is proposed
        proposedEnergy = system.proposedEnergy()  # its energy is computed
        if random() < exp(entropy[currentEnergy] - entropy[proposedEnergy]):
            # if accepted, update the energy and the system:
            currentEnergy = proposedEnergy
            system.acceptProposedConfiguration()
        else:
            # if rejected:
            system.rejectProposedConfiguration()
        H[currentEnergy] += 1
        entropy[currentEnergy] += f
        if isFlat(H):  # isFlat tests whether the histogram is flat (e.g. 95% flatness)
            H[:] = 0
            f *= 0.5   # refine the f parameter

Wang and Landau molecular dynamics: Statistical Temperature Molecular Dynamics (STMD)
Molecular dynamics (MD) is usually preferable to Monte Carlo (MC), so it is desirable to have an MD algorithm incorporating the basic WL idea of flat energy sampling. That algorithm is Statistical Temperature Molecular Dynamics (STMD), developed by Jaegil Kim et al. at Boston University. An essential first step was made with the Statistical Temperature Monte Carlo (STMC) algorithm. WLMC requires an extensive increase in the number of energy bins with system size, caused by working directly with the density of states. STMC is instead centered on an intensive quantity, the statistical temperature $T_S(E) = (\partial S/\partial E)^{-1}$, where $E$ is the potential energy. Combined with this relation (setting $k_B = 1$), the WL rule for updating the density of states yields a corresponding rule for updating the discretized statistical temperature $\tilde T_j$, where $\Delta E$ is the energy bin size and the tilde denotes the running estimate. Here $f$ is defined as in WL: a factor greater than 1 that multiplies the estimate of the DOS for the i-th energy bin when the system visits an energy in that bin. The details are given in the reference. With an initial guess for $T_S(E)$, and its range restricted to lie between preset lower and upper bounds, the simulation proceeds as in WLMC, with significant numerical differences. An interpolation of the discretized $\tilde T_j$ gives a continuum expression of the estimated $T_S(E)$; integrating its inverse yields the estimated entropy, allowing the use of larger energy bins than in WL, and different values of $T_S$ are available within the same energy bin when evaluating the acceptance probability. When histogram fluctuations are less than 20% of the mean, the modification factor is reduced. STMC was compared with WL for the Ising model and the Lennard-Jones liquid: upon increasing the energy bin size, STMC obtains the same results over a considerable range, while the performance of WL deteriorates rapidly. STMD can also use smaller initial values for more rapid convergence.
In sum, STMC needs fewer steps than WLMC to obtain the same quality of results.

Now consider the main result, STMD. It is based on the observation that in a standard MD simulation at temperature $T_0$, with forces derived from the potential energy $U(\mathbf{x})$, where $\mathbf{x}$ denotes all the positions, the sampling weight for a configuration is $e^{-U(\mathbf{x})/T_0}$. Furthermore, if the forces are derived instead from a function $w(U(\mathbf{x}))$, the sampling weight is $e^{-w(U(\mathbf{x}))/T_0}$. For flat energy sampling, let the effective potential be $w(U) = T_0\, S(U)$ (entropic molecular dynamics). Then the weight is $e^{-S(U)}$; since the density of states is $e^{S(U)}$, their product gives flat energy sampling. The forces are calculated as

    $\mathbf{F} = -\nabla w(U) = -T_0\, S'(U)\, \nabla U = \frac{T_0}{T_S(U)}\, \mathbf{F}_0$,

where $\mathbf{F}_0$ denotes the usual force derived from the potential energy. Scaling the usual forces by the factor $T_0/T_S(U)$ thus produces flat energy sampling (a sketch follows below).

STMD starts with an ordinary MD algorithm at constant $T_0$ and $V$. The forces are scaled as indicated, and the statistical temperature is updated at every time step, using the same procedure as in STMC. As the simulation converges to flat energy sampling, the running estimate converges to the true $T_S(E)$. Technical details, including steps to speed convergence, are described in the references. In STMD, $T_0$ is called the kinetic temperature, as it controls the velocities as usual but does not enter the configurational sampling, which is unusual; thus STMD can probe low energies with fast particles. Any canonical average can be calculated with reweighting, but the statistical temperature $T_S(E)$ is immediately available with no additional analysis, which is extremely valuable for studying phase transitions. In finite nanosystems, $T_S(E)$ has a feature corresponding to every "subphase transition". For a sufficiently strong transition, an equal-area construction on an S-loop in $T_S(E)$ gives the transition temperature.

STMD has been refined by the BU group and applied to several systems by them and others. It was recognized by D. Stelter that, despite the emphasis on working with intensive quantities, the modification factor as originally defined is extensive; a rescaled version of it is intensive, and the procedure based on histogram flatness is then replaced by cutting the factor in half every fixed number of time steps. This simple change makes STMD entirely intensive and substantially improves performance for large systems. Furthermore, the final value of the intensive factor is a constant that determines the magnitude of error in the converged $T_S(E)$ and is independent of system size. STMD is implemented in LAMMPS as fix stmd.

STMD is particularly useful for phase transitions. Equilibrium information is impossible to obtain with a canonical simulation, as supercooling or superheating is necessary to cause the transition; an STMD run, however, obtains flat energy sampling with a natural progression of heating and cooling, without getting trapped in the low-energy or high-energy state. Most recently it has been applied to the fluid/gel transition in lipid-wrapped nanoparticles. Replica exchange STMD has also been presented by the BU group.
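A sketch of the force scaling at the heart of STMD. This is illustrative code assuming the running statistical-temperature estimate is available as a callable; none of the names come from a real MD package:

    import numpy as np

    def stmd_forces(ordinary_forces: np.ndarray, U: float, T0: float, T_S) -> np.ndarray:
        """Scale ordinary MD forces by T0 / T_S(U) to target flat energy sampling."""
        return (T0 / T_S(U)) * ordinary_forces

    # Toy usage: a constant statistical temperature just rescales the forces.
    f = stmd_forces(np.array([1.0, -2.0]), U=3.7, T0=300.0, T_S=lambda U: 600.0)
    print(f)  # [ 0.5 -1. ]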
Wang and Landau algorithm
[ "Physics" ]
2,034
[ "Computational physics" ]
17,507,355
https://en.wikipedia.org/wiki/Dining%20cryptographers%20problem
In cryptography, the dining cryptographers problem studies how to perform a secure multi-party computation of the boolean-XOR function. David Chaum first proposed this problem in the early 1980s and used it as an illustrative example to show that it was possible to send anonymous messages with unconditional sender and recipient untraceability. Anonymous communication networks based on this problem are often referred to as DC-nets (where DC stands for "dining cryptographers"). Despite the word dining, the dining cryptographers problem is unrelated to the dining philosophers problem.

Description
Three cryptographers gather around a table for dinner. The waiter informs them that the meal has been paid for by someone, who could be one of the cryptographers or the National Security Agency (NSA). The cryptographers respect each other's right to make an anonymous payment, but want to find out whether the NSA paid. So they decide to execute a two-stage protocol.

In the first stage, every two cryptographers establish a shared one-bit secret, say by tossing a coin behind a menu, in turn for each pair, so that only the two cryptographers involved see the outcome. Suppose, for example, that after the coin tossing, cryptographers A and B share a secret bit $s_{AB}$, A and C share $s_{AC}$, and B and C share $s_{BC}$.

In the second stage, each cryptographer publicly announces a bit, which is:
if they did not pay for the meal, the exclusive OR (XOR) of the two shared bits they hold with their two neighbours;
if they did pay for the meal, the opposite of that XOR.

Supposing none of the cryptographers paid, then A announces $s_{AB} \oplus s_{AC}$, B announces $s_{AB} \oplus s_{BC}$, and C announces $s_{AC} \oplus s_{BC}$. On the other hand, if A paid, she announces $\neg(s_{AB} \oplus s_{AC})$. The three public announcements combined reveal the answer to their question: one simply computes the XOR of the three announced bits. If the result is 0, none of the cryptographers paid (so the NSA must have paid the bill). Otherwise, one of the cryptographers paid, but their identity remains unknown to the other cryptographers. David Chaum coined the term dining cryptographers network, or DC-net, for this protocol. A small simulation of one round is sketched below.

Limitations
The DC-net protocol is simple and elegant. It has several limitations, however, some solutions to which have been explored in follow-up research (see the References section below).

Collision
If two cryptographers paid for the dinner, their messages will cancel each other out, and the final XOR result will be 0. This is called a collision and allows only one participant to transmit at a time using this protocol. More generally, a collision happens whenever an even number of participants send messages.

Disruption
Any malicious cryptographer who does not want the group to communicate successfully can jam the protocol so that the final XOR result is useless, simply by sending random bits instead of the correct XOR result. This problem occurs because the original protocol was designed without using any public-key technology and lacks reliable mechanisms to check whether participants honestly follow the protocol.

Complexity
The protocol requires pairwise shared secret keys between the participants, which may be problematic if there are many participants. Also, though the DC-net protocol is "unconditionally secure", it actually depends on the assumption that "unconditionally secure" channels already exist between pairs of the participants, which is not easy to achieve in practice.
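The three-cryptographer protocol generalizes directly to n participants with pairwise shared secrets; here is a minimal simulation of one round (illustrative code, assuming honest participants and at most one payer):

    import secrets
    from typing import Optional

    def dc_net_round(n: int, payer: Optional[int]) -> int:
        """One DC-net round; returns the XOR of all public announcements."""
        # Stage 1: every unordered pair of participants flips a shared secret coin.
        shared = {}
        for i in range(n):
            for j in range(i + 1, n):
                shared[(i, j)] = secrets.randbits(1)

        # Stage 2: each participant announces the XOR of all secrets they hold,
        # flipped if they are the payer.
        result = 0
        for i in range(n):
            bit = 0
            for (a, b), s in shared.items():
                if i in (a, b):
                    bit ^= s
            if i == payer:
                bit ^= 1
            result ^= bit  # each secret enters the total XOR twice and cancels
        return result

    assert dc_net_round(3, payer=1) == 1     # someone at the table paid
    assert dc_net_round(3, payer=None) == 0  # nobody did, so the NSA paid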
A related anonymous veto network algorithm computes the logical OR of several users' inputs, rather than the logical XOR as in DC-nets, which may be useful in applications to which a logical OR combining operation is naturally suited.

History
David Chaum first thought about this problem in the early 1980s. The basic underlying ideas were first outlined in a publication of his, and the journal version appeared in the very first issue of the Journal of Cryptology.

Generalizations
DC-nets are readily generalized to allow for transmissions of more than one bit per round, for groups larger than three participants, and for arbitrary "alphabets" other than the binary digits 0 and 1, as described below.

Transmissions of longer messages
To enable an anonymous sender to transmit more than one bit of information per DC-nets round, the group of cryptographers can simply repeat the protocol as many times as desired to create the desired number of bits of transmission bandwidth. These repetitions need not be performed serially. In practical DC-net systems, it is typical for pairs of participants to agree up front on a single shared "master" secret, using Diffie–Hellman key exchange for example. Each participant then locally feeds this shared master secret into a pseudorandom number generator, in order to produce as many shared "coin flips" as desired to allow an anonymous sender to transmit multiple bits of information.

Larger group sizes
The protocol can be generalized to a group of $n$ participants, each with a shared secret key in common with each other participant. In each round of the protocol, if a participant wants to transmit an untraceable message to the group, they invert their publicly announced bit. The participants can be visualized as a fully connected graph with the vertices representing the participants and the edges representing their shared secret keys.

Sparse secret sharing graphs
The protocol may be run with less than fully connected secret sharing graphs, which can improve the performance and scalability of practical DC-net implementations, at the potential risk of reducing anonymity if colluding participants can split the secret sharing graph into separate connected components. Consider, for example, an intuitively appealing but less secure generalization to $n$ participants using a ring topology, where each cryptographer sitting around a table shares a secret only with the cryptographers to their immediate left and right, and not with every other cryptographer. Such a topology is appealing because each cryptographer needs to coordinate only two coin flips per round, rather than $n-1$. However, if Adam and Charlie are actually NSA agents sitting immediately to the left and right of Bob, an innocent victim, and if Adam and Charlie secretly collude to reveal their secrets to each other, then they can determine with certainty whether or not Bob was the sender of a 1 bit in a DC-net run, regardless of how many participants there are in total. This is because the colluding participants Adam and Charlie effectively "split" the secret sharing graph into two separate disconnected components, one containing only Bob, the other containing all other honest participants.

Another compromise secret sharing DC-net topology, employed in the Dissent system for scalability, may be described as a client/server or user/trustee topology. In this variant, we assume there are two types of participants playing different roles: a potentially large number $n$ of users who desire anonymity, and a much smaller number $m$ of trustees whose role is to help the users obtain that anonymity. In this topology, each of the $n$ users shares a secret with each of the $m$ trustees, but users share no secrets directly with other users, and trustees share no secrets directly with other trustees, resulting in an $n \times m$ secret sharing matrix. If the number of trustees is small, then each user needs to manage only a few shared secrets, improving efficiency for users in the same way the ring topology does. However, as long as at least one trustee behaves honestly and does not leak his or her secrets or collude with other participants, then that honest trustee forms a "hub" connecting all honest users into a single fully connected component, regardless of which or how many other users and/or trustees might be dishonestly colluding. Users need not know or guess which trustee is honest; their security depends only on the existence of at least one honest, non-colluding trustee.

Alternate alphabets and combining operators
Though the simple DC-nets protocol uses binary digits as its transmission alphabet and uses the XOR operator to combine ciphertexts, the basic protocol generalizes to any alphabet and combining operator suitable for one-time pad encryption. This flexibility arises naturally from the fact that the secrets shared between the many pairs of participants are, in effect, merely one-time pads combined symmetrically within a single DC-net round. One useful alternate choice of DC-nets alphabet and combining operator is to use a finite group suitable for public-key cryptography as the alphabet, such as a Schnorr group or an elliptic curve, and to use the associated group operator as the DC-net combining operator. Such a choice of alphabet and operator makes it possible for clients to use zero-knowledge proof techniques to prove correctness properties about the DC-net ciphertexts they produce, such as that the participant is not "jamming" the transmission channel, without compromising the anonymity offered by the DC-net. This technique was first suggested by Golle and Juels, further developed by Franck, and later implemented in Verdict, a cryptographically verifiable implementation of the Dissent system.

Handling or avoiding collisions
The measure originally suggested by David Chaum to avoid collisions is to retransmit the message once a collision is detected, but the paper does not explain exactly how to arrange the retransmission. Dissent avoids the possibility of unintentional collisions by using a verifiable shuffle to establish a DC-nets transmission schedule, such that each participant knows exactly which bits in the schedule correspond to their own transmission slot, but does not know who owns the other transmission slots.

Countering disruption attacks
Herbivore divides a large anonymity network into smaller DC-net groups, enabling participants to evade disruption attempts by leaving a disrupted group and joining another group, until the participant finds a group free of disruptors. This evasion approach introduces the risk that an adversary who owns many nodes could selectively disrupt only the groups the adversary has not completely compromised, thereby "herding" participants toward groups that may be functional precisely because they are completely compromised.

Dissent implements several schemes to counter disruption. The original protocol used a verifiable cryptographic shuffle to form a DC-net transmission schedule and distribute "transmission assignments", allowing the correctness of subsequent DC-nets ciphertexts to be verified with a simple cryptographic hash check. This technique required a fresh verifiable shuffle before every DC-nets round, however, leading to high latencies. A later, more efficient scheme allows a series of DC-net rounds to proceed without intervening shuffles in the absence of disruption, but in response to a disruption event uses a shuffle to distribute anonymous accusations enabling a disruption victim to expose and prove the identity of the perpetrator. Finally, more recent versions support fully verifiable DC-nets, at substantial cost in computation efficiency due to the use of public-key cryptography in the DC-net, as well as a hybrid mode that uses efficient XOR-based DC-nets in the normal case and verifiable DC-nets only upon disruption, to distribute accusations more quickly than is feasible using verifiable shuffles.

References
Dining cryptographers problem
[ "Mathematics", "Engineering" ]
2,256
[ "Mathematical problems", "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
17,509,328
https://en.wikipedia.org/wiki/Discrete%20event%20dynamic%20system
In control engineering, a discrete-event dynamic system (DEDS) is a discrete-state, event-driven system whose state evolution depends entirely on the occurrence of asynchronous discrete events over time. Although similar to continuous-variable dynamic systems (CVDS), a DEDS consists solely of discrete state spaces and event-driven state transition mechanisms; a toy example is sketched below.

Topics in DEDS include:
Automata theory
Supervisory control theory
Petri net theory
Discrete event system specification
Boolean differential calculus
Markov chain
Queueing theory
Discrete-event simulation
Concurrent estimation
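A minimal illustration of the definition (a sketch, not from the article): a single-server queue whose discrete state, the queue length, changes only when an asynchronous event occurs:

    state = 0  # discrete state: number of jobs in the queue

    def handle(event: str) -> None:
        """Event-driven state transition: the state changes only at events."""
        global state
        if event == "arrival":
            state += 1
        elif event == "departure" and state > 0:
            state -= 1

    for e in ["arrival", "arrival", "departure", "arrival"]:
        handle(e)
        print(e, "->", state)  # the trajectory is piecewise constant between events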
Discrete event dynamic system
[ "Mathematics" ]
115
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
17,511,256
https://en.wikipedia.org/wiki/SYZ%20conjecture
The SYZ conjecture is an attempt to understand the mirror symmetry conjecture, an issue in theoretical physics and mathematics. The original conjecture was proposed in a paper by Strominger, Yau, and Zaslow entitled "Mirror Symmetry is T-duality". Along with the homological mirror symmetry conjecture, it is one of the most explored tools applied to understand mirror symmetry in mathematical terms. While homological mirror symmetry is based on homological algebra, the SYZ conjecture is a geometrical realization of mirror symmetry.

Formulation
In string theory, mirror symmetry relates type IIA and type IIB theories: it predicts that the effective field theories of type IIA and type IIB should be the same if the two theories are compactified on mirror-pair manifolds. The SYZ conjecture uses this fact to realize mirror symmetry. It starts from considering BPS states of type IIA theories compactified on X, especially 0-branes whose moduli space is X. It is known that all of the BPS states of type IIB theories compactified on Y are 3-branes; therefore, mirror symmetry will map 0-branes of type IIA theories into a subset of the 3-branes of type IIB theories. By considering supersymmetric conditions, it has been shown that these 3-branes should be special Lagrangian submanifolds. On the other hand, T-duality does the same transformation in this case, thus "mirror symmetry is T-duality".

Mathematical statement
The initial proposal of the SYZ conjecture by Strominger, Yau, and Zaslow was not given as a precise mathematical statement, and one part of the mathematical resolution of the SYZ conjecture is, in some sense, to correctly formulate the statement of the conjecture itself. There is no agreed-upon precise statement of the conjecture within the mathematical literature, but there is a general statement that is expected to be close to the correct formulation, which is presented here. This statement emphasizes the topological picture of mirror symmetry, but does not precisely characterise the relationship between the complex and symplectic structures of the mirror pairs, or make reference to the associated Riemannian metrics involved.

SYZ Conjecture: Every 6-dimensional Calabi–Yau manifold $X$ has a mirror 6-dimensional Calabi–Yau manifold $\check X$ such that there are continuous surjections $f: X \to B$ and $\check f: \check X \to B$ to a compact topological manifold $B$ of dimension 3, such that:
There exists a dense open subset $B_0 \subset B$ on which the maps $f$ and $\check f$ are fibrations by nonsingular special Lagrangian 3-tori. Furthermore, for every point $b \in B_0$, the torus fibres $f^{-1}(b)$ and $\check f^{-1}(b)$ should be dual to each other in some sense, analogous to the duality of Abelian varieties.
For each $b \in B \setminus B_0$, the fibres $f^{-1}(b)$ and $\check f^{-1}(b)$ should be singular 3-dimensional special Lagrangian submanifolds of $X$ and $\check X$ respectively.

The situation in which $B_0 = B$, so that there is no singular locus, is called the semi-flat limit of the SYZ conjecture, and is often used as a model situation to describe torus fibrations. The SYZ conjecture can be shown to hold in some simple cases of semi-flat limits, for example given by Abelian varieties and K3 surfaces which are fibred by elliptic curves. It is expected that the correct formulation of the SYZ conjecture will differ somewhat from the statement above. For example, the possible behaviour of the singular set $B \setminus B_0$ is not well understood, and this set could be quite large in comparison to $B$.
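In the semi-flat limit there is a standard explicit model of such a dual pair of fibrations (a sketch; conventions differ across the literature):

    $X = TB/\Lambda \xrightarrow{\;f\;} B, \qquad \check X = T^*B/\Lambda^* \xrightarrow{\;\check f\;} B$,

where $\Lambda \subset TB$ is a lattice of maximal rank and $\Lambda^* \subset T^*B$ is its dual lattice. The fibres $f^{-1}(b) = T_bB/\Lambda_b$ and $\check f^{-1}(b) = T_b^*B/\Lambda_b^*$ are then dual tori in the sense required by the conjecture.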
Mirror symmetry is also often phrased in terms of degenerating families of Calabi–Yau manifolds instead of a single Calabi–Yau, and one might expect the SYZ conjecture to be reformulated more precisely in this language.

Relation to the homological mirror symmetry conjecture
The SYZ mirror symmetry conjecture is one possible refinement of the original mirror symmetry conjecture relating Hodge numbers of mirror Calabi–Yau manifolds. The other is Kontsevich's homological mirror symmetry conjecture (HMS conjecture). These two conjectures encode the predictions of mirror symmetry in different ways: homological mirror symmetry in an algebraic way, and the SYZ conjecture in a geometric way. There should be a relationship between these three interpretations of mirror symmetry, but it is not yet known whether they should be equivalent or whether one proposal is stronger than another. Progress has been made toward showing, under certain assumptions, that homological mirror symmetry implies Hodge-theoretic mirror symmetry.

Nevertheless, in simple settings there are clear ways of relating the SYZ and HMS conjectures. The key feature of HMS is that the conjecture relates objects (either submanifolds or sheaves) on mirror geometric spaces, so the required input to try to understand or prove the HMS conjecture includes a mirror pair of geometric spaces. The SYZ conjecture predicts how these mirror pairs should arise, and so whenever an SYZ mirror pair is found, it is a good candidate on which to try to prove the HMS conjecture.

To relate the SYZ and HMS conjectures, it is convenient to work in the semi-flat limit. The important geometric feature of a pair of Lagrangian torus fibrations which encodes mirror symmetry is the duality of the torus fibres. Given a Lagrangian torus $T$, the dual torus is given by the Jacobian variety of $T$, denoted $\mathrm{Jac}(T)$. This is again a torus of the same dimension, and the duality is encoded in the fact that $\mathrm{Jac}(\mathrm{Jac}(T)) \cong T$, so $T$ and $\mathrm{Jac}(T)$ are indeed dual under this construction. The Jacobian variety has the important interpretation as the moduli space of line bundles on $T$. This duality, and the interpretation of the dual torus as a moduli space of sheaves on the original torus, is what allows one to interchange the data of submanifolds and subsheaves. There are two simple examples of this phenomenon:
If $p \in \check X$ is a point lying inside some fibre $\check f^{-1}(b)$ of the special Lagrangian torus fibration, then since $\check f^{-1}(b) = \mathrm{Jac}(f^{-1}(b))$, the point $p$ corresponds to a line bundle supported on the fibre $f^{-1}(b) \subset X$.
If one chooses a Lagrangian section $\sigma: B \to X$, so that $\sigma(B)$ is a Lagrangian submanifold of $X$, then precisely because $\sigma$ chooses one point in each torus fibre of the SYZ fibration, this Lagrangian section is mirror dual to a choice of line bundle structure supported on each torus fibre of the mirror manifold $\check X$, and consequently to a line bundle on the total space of $\check X$, the simplest example of a coherent sheaf appearing in the derived category of the mirror manifold.

If the mirror torus fibrations are not in the semi-flat limit, then special care must be taken when crossing the singular set of the base $B$. Another example of a Lagrangian submanifold is a torus fibre itself: if the entire torus is taken as the Lagrangian, with the added data of a flat unitary line bundle over it (as is often necessary in homological mirror symmetry), then in the dual torus this corresponds to the single point representing that line bundle. Taking the skyscraper sheaf supported on that point in the dual torus, one sees that torus fibres of the SYZ fibration get sent to skyscraper sheaves supported on points in the mirror torus fibre.

These two examples produce the most extreme kinds of coherent sheaf: locally free sheaves (of rank 1) and torsion sheaves supported on points. By more careful construction one can build up more complicated examples of coherent sheaves, analogous to building a coherent sheaf using the torsion filtration. As a simple example, a Lagrangian multisection (a union of k Lagrangian sections) should be mirror dual to a rank k vector bundle on the mirror manifold, but one must take care to account for instanton corrections by counting holomorphic discs bounded by the multisection, in the sense of Gromov–Witten theory. In this way enumerative geometry becomes important for understanding how mirror symmetry interchanges dual objects.

By combining the geometry of the mirror fibrations in the SYZ conjecture with a detailed understanding of enumerative invariants and the structure of the singular set of the base $B$, it is possible to use the geometry of the fibration to build the expected equivalence of categories from the Lagrangian submanifolds of $X$ to the coherent sheaves of $\check X$. By repeating this same discussion in reverse, using the duality of the torus fibrations, one can similarly understand coherent sheaves on $X$ in terms of Lagrangian submanifolds of $\check X$, and hope to obtain a complete understanding of how the HMS conjecture relates to the SYZ conjecture.
SYZ conjecture
[ "Physics", "Astronomy", "Mathematics" ]
1,780
[ "Astronomical hypotheses", "Mathematical structures", "Unsolved problems in mathematics", "Applied mathematics", "Conjectures", "Category theory", "Duality theories", "Geometry", "Applied mathematics stubs", "String theory", "Mathematical problems", "Symmetry" ]
17,511,413
https://en.wikipedia.org/wiki/George%20Zames
George Zames (January 7, 1934 – August 10, 1997) was a Polish-Canadian control theorist and professor at McGill University, Montreal, Quebec, Canada. Zames is known for his fundamental contributions to the theory of robust control, and is credited with the development of various well-known results such as the small-gain theorem, the passivity theorem, the circle criterion in input-output form, and, most famously, H-infinity methods.

Biography
Childhood
George Zames was born on January 7, 1934, in Łódź, Poland, to a Jewish family. Zames grew up in Warsaw; at the onset of World War II he and his family escaped the city and moved, through Lithuania and Siberia, to Kobe, Japan, and finally to the Anglo-French International Settlement in Shanghai. Zames later indicated that he and his family owed their lives to the transit visa provided by the Japanese consul to Lithuania, Chiune Sugihara. In Shanghai, Zames continued his schooling, and in 1948 the family emigrated to Canada.

Education
Zames entered McGill University at the age of 15 and received a B.Eng. degree in Engineering Physics. Graduating at the top of his class, Zames won an Athlone Fellowship to study in England and moved to Imperial College, where his advisors included Colin Cherry, Dennis Gabor, and John Hugh Westcott; he graduated in two years. In 1956, Zames entered the Massachusetts Institute of Technology to start his doctoral studies, and in 1960 he earned a Sc.D. for a thesis titled Nonlinear Operators for System Analysis. He was advised by Norbert Wiener and Yuk-Wing Lee.

Career
From 1960 to 1965, Zames held various teaching positions at MIT and Harvard University. In 1965, Zames received a Guggenheim Fellowship and moved to the NASA Electronics Research Center (ERC), where he founded the Office of Control Theory and Applications (OCTA). In 1969, it was announced that NASA ERC was to be closed, and in 1970 Zames joined the newly established research center of the Department of Transportation. In 1972, Zames spent a sabbatical at the Technion in Haifa, Israel, and in 1974 he returned to McGill University as a professor, eventually holding the MacDonald Chair of Electrical Engineering until his death in 1997.

Family
Zames was married to Eva, whom he met in Israel. They had two sons, Ethan and Jonathan. His cousin, the architect Israel Stein, with whom he grew up in Warsaw, survived the Holocaust and lives in Israel.

Research
Zames's research focused on imprecisely modelled systems using the input-output method, an approach distinct from the state-space representation that dominated control theory for several decades. At the core of much of his work is the objective of complexity reduction through organization: for the purposes of control design, gross qualitative properties such as robustness can be analyzed and predicted without depending on accurate models or syntheses. Mathematical analysis provides topological tools well suited for this purpose, such as compactness, contraction, and fixed-point methods. Furthermore, in control design, where there is much model uncertainty, it is often more important to be able to gauge qualitative behaviour (robustness, stability, existence of oscillations) than to compute exactly.

Legacy
The International Journal of Robust and Nonlinear Control published a special issue in George Zames's honour in 2000, including a complete list of his publications. Reviews of Zames's life and legacy were published by S. Mitter and A. Tannenbaum, by J. C. Willems, and in a volume resulting from a conference held to honor the occasion of Zames's 60th birthday.

Awards and honors
In 1984, the IEEE Control Systems Science and Engineering Award
In 1995, the Killam Prize
In 1996, the Rufus Oldenburger Medal from the American Society of Mechanical Engineers

External links
Obituary
Mathematics Genealogy Project profile
George Zames
[ "Engineering" ]
836
[ "Control engineering", "Control theorists" ]
17,514,050
https://en.wikipedia.org/wiki/Krascheninnikovia%20lanata
Krascheninnikovia lanata is a species of flowering plant currently placed in the family Amaranthaceae (previously, Chenopodiaceae), known by the common names winterfat, white sage, and wintersage. It is native to much of western North America, from central Western Canada, through the Western United States, to northern Mexico. The genus was named for Stepan Krasheninnikov, the early 18th-century Russian botanist and explorer of Siberia and Kamchatka.

Distribution and habitat
Winterfat grows across a wide range of elevations in a great variety of habitats, from grassland plains and xeric scrublands to the rain-shadow faces of montane locations. It is a halophyte that thrives in salty soils such as those on alkali flats, including those of the Great Basin, Central Valley, Great Plains, and Mojave Desert.

Description
Krascheninnikovia lanata is a small shrub that sends up erect stem branches and produces flat, lance-shaped leaves up to 3 centimeters long. The stems and gray foliage are covered in woolly white hairs that age to a reddish color; the woolly hairs start to develop in the late fall and gradually diminish through the winter season. The tops of the stem branches carry plentiful spike inflorescences from March to June. The shrub is generally monoecious, with each upright inflorescence holding mostly staminate flowers and a few pistillate flowers clustered near the bottom. The staminate flowers have large, woolly, leaflike bracts; the pistillate flowers have smaller bracts and develop tiny white fruits. The silky hairs on the fruits allow for wind dispersal.

Cultivation
Krascheninnikovia lanata is cultivated in the specialty plant nursery trade as an ornamental plant for xeriscape and wildlife gardens and for native plant natural landscapes. The light gray foliage and striking whitish wool can be distinctive features in garden designs, and the plant is especially valued for the fall and winter interest it provides. Small plants are easily transplanted, and the plants are very long-lived.

Uses
Winterfat is an important winter forage for livestock and wildlife because its evergreen leaves are high in protein, hence its common name.

Native American use
Winterfat was a traditional medicinal plant used by many Native American tribes that lived within its large North American range; these tribes used traditional plants to treat a wide variety of ailments and for other benefits. The Zuni people use a poultice of ground root, bound with a cotton cloth, to treat burns.

External links
Jepson Manual Treatment - Krascheninnikovia lanata (Winterfat)
USDA Plants Profile of Krascheninnikovia lanata, with numerous related web site links
U.S. Forest Service: Krascheninnikovia lanata Ecology
Native American Ethnobotany - 'Winterfat' (University of Michigan - Dearborn)
Krascheninnikovia lanata (Winterfat) - U.C. photo gallery
Krascheninnikovia lanata
[ "Chemistry" ]
721
[ "Halophytes", "Salts" ]
17,514,197
https://en.wikipedia.org/wiki/S%2A
S* (pronounced "S Star") is the short name of the S* Life Science Informatics Alliance, a collaboration between seven universities and the Karolinska Institutet of Sweden, and of its course, the S-Star Bioinformatics Online course. The goal is to provide course material for training in bioinformatics and genomics.

Member institutions
The following institutions are members of the S* Life Science Informatics Alliance:
Macquarie University, Sydney, Australia
University of Sydney (School of Molecular Bioscience), Australia (as of 2001)
Karolinska Institutet, Sweden (as of 2001)
University of Uppsala, Sweden (as of 2001)
National University of Singapore, Singapore (as of 2001)
University of the Western Cape, South Africa (as of 2001)
Stanford University, United States (as of 2001)
University of California, San Diego, United States, via the San Diego Supercomputer Center (as of 2002)

Further reading
https://www.learntechlib.org/p/100842
S*
[ "Chemistry", "Biology" ]
222
[ "Bioinformatics stubs", "Bioinformatics organizations", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics" ]
8,853,472
https://en.wikipedia.org/wiki/Ponderomotive%20energy
In strong-field laser physics, ponderomotive energy is the cycle-averaged quiver energy of a free electron in an electromagnetic field.

Equation
The ponderomotive energy is given by

    $U_p = \frac{e^2 E_a^2}{4 m \omega_0^2}$,

where $e$ is the electron charge, $E_a$ is the linearly polarised electric field amplitude, $\omega_0$ is the laser carrier frequency and $m$ is the electron mass. In terms of the laser intensity $I$, using $I = c \varepsilon_0 E_a^2 / 2$, it reads less simply

    $U_p = \frac{e^2 I}{2 c \varepsilon_0 m \omega_0^2}$,

where $\varepsilon_0$ is the vacuum permittivity. For the orders of magnitude typical of laser physics, this becomes

    $U_p\,[\mathrm{eV}] \approx 9.33 \times 10^{-14} \; I\,[\mathrm{W/cm^2}] \; \lambda^2\,[\mathrm{\mu m^2}]$,

where the laser wavelength is $\lambda = 2\pi c/\omega_0$ and $c$ is the speed of light. The units are electronvolts (eV), watts (W), centimeters (cm) and micrometers (μm). A helper function implementing this scaling is sketched below.

Atomic units
In atomic units, $e = m = 1$ and $\varepsilon_0 = 1/(4\pi)$, so that, if one uses the atomic unit of electric field, the ponderomotive energy is just

    $U_p = \frac{E_a^2}{4 \omega_0^2}$.

Derivation
The formula for the ponderomotive energy can be easily derived. A free particle of charge $q$ interacts with an electric field $E(t) = E_a \cos(\omega_0 t)$. The force on the charged particle is $F = qE$, so its acceleration is $a(t) = (q E_a / m) \cos(\omega_0 t)$. Because the electron executes harmonic motion, integrating gives the velocity $v(t) = (q E_a / m \omega_0) \sin(\omega_0 t)$ and the position $x(t) = -(q E_a / m \omega_0^2) \cos(\omega_0 t)$. For a particle experiencing such harmonic motion, the time-averaged kinetic energy is

    $\left\langle \tfrac{1}{2} m v^2 \right\rangle = \frac{q^2 E_a^2}{4 m \omega_0^2}$.

In laser physics, this is called the ponderomotive energy $U_p$.

See also
Ponderomotive force
Electric constant
Harmonic generation
List of laser articles
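A helper implementing the practical scaling formula above (a sketch; the function name and the example pulse parameters are illustrative):

    def ponderomotive_energy_eV(intensity_W_cm2: float, wavelength_um: float) -> float:
        """U_p in eV from intensity in W/cm^2 and wavelength in micrometers."""
        return 9.33e-14 * intensity_W_cm2 * wavelength_um ** 2

    # Example: an 800 nm pulse at 1e14 W/cm^2 gives U_p of about 6 eV.
    print(ponderomotive_energy_eV(1e14, 0.8))  # ~5.97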
Ponderomotive energy
[ "Physics", "Mathematics" ]
279
[ "Energy (physics)", "Wikipedia categories named after physical quantities", "Quantity", "Physical quantities" ]
8,853,878
https://en.wikipedia.org/wiki/IUCLID
IUCLID (International Uniform Chemical Information Database) is a software application to capture, store, maintain and exchange data on the intrinsic and hazard properties of chemical substances. Distributed free of charge, the software is especially useful to chemical industry companies and to government authorities. It is the key tool for the chemical industry to fulfil data submission obligations under REACH, the most important European Union legal document covering the production and use of chemical substances. The software is maintained by the European Chemicals Agency (ECHA). The latest version, version 6, was made available on 29 April 2016.

History
IUCLID versions 1 to 4
1993: First version of IUCLID for the European Existing Substances Regulation 793/93/EEC.
1999: IUCLID becomes the recommended tool for the OECD HPV Programme.
2000: IUCLID is the software prescribed in the EU Biocides legislation to notify existing active substances (Art. 4 of Commission Regulation (EC) No 1896/2000).
IUCLID 4 was used worldwide by about 500 organizations, including chemical industry companies, EU Member State Competent Authorities, the OECD Secretariat, the US EPA, the Japan METI, and third-party service providers.

IUCLID 5
In 2003, when it became clear that the REACH proposal would be adopted by the European Union, the European Commission decided to completely overhaul IUCLID 4 and to create a new version, IUCLID 5, to be used by chemical industry companies to fulfil their data submission obligations under REACH. Migration of data in the IUCLID 4 format was supported from IUCLID 5.1 onwards. IUCLID is also mentioned in article 111 of the REACH legislation as the format to be used for data collection and submission dossier preparation.

The following IUCLID 5 major versions have been released:
IUCLID 5.0: 12 June 2007
IUCLID 5.1: 16 January 2009
IUCLID 5.2: 15 February 2010
IUCLID 5.3: 24 February 2011
IUCLID 5.4: 5 June 2012
IUCLID 5.5: 2 April 2013
IUCLID 5.6: 16 April 2014

IUCLID 5 data format and exchange
Data that can be stored and maintained with IUCLID encompass information about:
the party running IUCLID (production sites, contact persons, etc.);
the chemical substances managed by the company, namely their identity, composition, and supporting analytical data;
reference information such as CAS number and other identifiers, and classification and labelling;
physical/chemical, toxicological, and eco-toxicological properties.

The OECD and the European Commission have agreed on a standard XML format (the OECD Harmonised Templates) in which these data are stored for easy data sharing. IUCLID 5 was the first application fully implementing this international reporting standard, which has been accepted by many national and international regulatory authorities. Numerous parties were involved in the creation and the review of the OECD Harmonised Templates, among them the Business and Industry Advisory Committee (BIAC) to the OECD, the European Chemical Industry Council (CEFIC) and other bodies and authorities. IUCLID 5 can be used to enter robust study summaries summarising toxicologically relevant endpoints; a Klimisch score is assigned as one field within each robust study summary.

Possible IUCLID 5 uses
Anyone can use a local IUCLID 5 installation to collect, store, maintain and exchange relevant data on chemical substances. In addition to dossier creation for REACH, IUCLID 5 data can be (re-)used for a large number of other purposes, owing to the compatibility of IUCLID 5 data with the OECD Harmonised Templates. The European Commission IUCLID project team and international authorities are in deliberation to further promote acceptance of IUCLID 5 data in non-REACH jurisdictions. Legislations and programmes under which IUCLID 5 data are accepted include:
the OECD Chemical Assessment Programme;
the US HPV Challenge Programme;
the Japan HPV Challenge Programme (provided the OECD guidance for SIDS dossiers is followed).

The IUCLID 5 data model also features biocides/pesticides elements: a dataset prepared for a substance under REACH can therefore be quickly complemented with data about possible biocidal or pesticidal properties and re-used for data reporting obligations under the EU Biocides regulation. The data are available and can be searched through the OECD eChemPortal.

Technology
IUCLID development and deployment
IUCLID 5 is a Java-based application, using the Hibernate framework for persistence. It features a Java Swing graphical user interface (GUI) and can be deployed on both single workstations and distributed environments. IUCLID 5 can be deployed in:
a 100% open-source system environment, using Tomcat as the web container and PostgreSQL as the database management system (DBMS), or
a commercial system environment, using the Oracle WebLogic Server from Oracle Corporation as the application server and/or Oracle as the DBMS.

.i5z files
IUCLID 5 exports and imports files in the I5Z format. Files may be exchanged between different IUCLID 5 installations, and dossiers may be uploaded to ECHA via REACH-IT. I5Z stands for "IUCLID 5 Zip", as the file uses Zip file compression.

IUCLID 5 system requirements
IUCLID can be deployed on any current PC; for optimal performance, RAM should not be less than 1 GB.

IUCLID 6
IUCLID 6 was made available on 24 June 2015 as a beta version so that large companies and other organisations could begin preparing their IT systems for the full release of IUCLID 6 in 2016. Individual users and SMEs could also download the beta version to get a preview and become familiar with the user interface. The first official version of IUCLID 6 was published on 29 April 2016.

See also
European Chemicals Agency
OECD
European Commission
REACH
Institute for Health and Consumer Protection
Joint Research Centre

External links
IUCLID 5 website
IUCLID 6 website
European Chemicals Agency
Institute for Health and Consumer Protection On-Line
Joint Research Centre On-Line
REACH Legislation Full text
The REACH CD-ROM, a practical guide for REACH
REACH Services Overview & Official REACH Texts
eChemPortal, a global portal to information on Chemical Substances
Data entry accelerator for IUCLID
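Since the article states that I5Z is Zip compression, the contents of an export can be listed with standard tools; a sketch (the file name is a made-up placeholder):

    import zipfile

    # List the documents bundled in an IUCLID 5 export archive.
    with zipfile.ZipFile("substance_dossier.i5z") as archive:
        for name in archive.namelist():
            print(name)  # XML documents, e.g. in OECD Harmonised Template format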
IUCLID
[ "Chemistry" ]
1,361
[ "Regulation of chemicals in the European Union", "Regulation of chemicals", "Computational chemistry", "nan", "Cheminformatics" ]
8,855,042
https://en.wikipedia.org/wiki/Building%20insulation%20material
Building insulation materials are the building materials that form the thermal envelope of a building or otherwise reduce heat transfer. Insulation may be categorized by its composition (natural or synthetic materials), form (batts, blankets, loose-fill, spray foam, and panels), structural contribution (insulating concrete forms, structured panels, and straw bales), functional mode (conductive, radiative, convective), resistance to heat transfer, environmental impacts, and more. Sometimes a thermally reflective surface called a radiant barrier is added to a material to reduce the transfer of heat through radiation as well as conduction. The choice of which material or combination of materials is used depends on a wide variety of factors. Some insulation materials carry health risks; some, such as asbestos fibers and urea-formaldehyde, are so hazardous that they are no longer allowed to be used, but they remain in place in some older buildings. Consideration of materials used Factors affecting the type and amount of insulation to use in a building include: Thermal conductivity Moisture sensitivity Compressive strength Ease of installation Durability – resistance to degradation from compression, moisture, decomposition, etc. Ease of replacement at end of life Cost effectiveness Toxicity Flammability Environmental impact and sustainability Considerations regarding building and climate: The average climate conditions in the geographical area in which the building is located The temperature at which the building is used Often a combination of materials is used to achieve an optimum solution, and there are products which combine different types of insulation into a single form. Spray foam Spray foam is a type of insulation that is sprayed in place through a gun. Polyurethane and isocyanate foams are applied as a two-component mixture that comes together at the tip of a gun, and forms an expanding foam. Cementitious foam is applied in a similar manner but does not expand. Spray foam insulation is sprayed onto concrete slabs, into wall cavities of an unfinished wall, against the interior side of sheathing, or through holes drilled in sheathing or drywall into the wall cavity of a finished wall. Advantages Blocks airflow by expanding and sealing off leaks, gaps and penetrations. (This can also keep out bugs or other vermin.) Can serve as a semi-permeable vapor barrier with a better permeability rating than plastic sheeting vapor barriers and consequently reduce the buildup of moisture, which can cause mold growth. Can fill wall cavities in finished walls without tearing the walls apart (as required with batts). Works well in tight spaces (like loose-fill, but superior). Provides acoustical insulation (like loose-fill, but superior). Expands while curing, filling bypasses, and providing excellent resistance to air infiltration (unlike batts and blankets, which can leave bypasses and air pockets; superior to some types of loose-fill, while wet-spray cellulose is comparable). Increases structural stability (unlike loose-fill, similar to wet-spray cellulose). Can be used in places where loose-fill cannot, such as between joists and rafters. When used between rafters, the spray foam can cover up the nails protruding from the underside of the sheathing, protecting your head. Can be applied in small quantities. Cementitious foam is fireproof. Disadvantages The cost can be high compared to traditional insulation. Most foams, with the exception of cementitious foams, release toxic fumes when they burn.
According to the US Environmental Protection Agency, there is insufficient data to accurately assess the potential for exposures to the toxic and environmentally harmful isocyanates which constitute 50% of the foam material. Depending on usage, building codes and environment, most foams require protection with a thermal barrier such as drywall on the interior of a house. For example, a 15-minute fire rating may be required. Can shrink slightly while curing if not applied on a substrate heated to the manufacturer's recommended temperature. Although CFCs are no longer used, some use HCFCs or HFCs as blowing agents. Both are potent greenhouse gases, and HCFCs have some ozone depletion potential. Many foam insulations are made from petrochemicals and may be a concern for those seeking to reduce the use of fossil fuels and oil. However, some foams are becoming available that are made from renewable or recycled sources. R-value will diminish slightly with age, though the degradation of R-value stops once an equilibrium with the environment is reached. Even after this process, the stabilized R-value is very high. Most foams require protection from sunlight and solvents. It is difficult to retrofit some foams to an existing building structure because of the chemicals and processes involved. If a protective mask or goggles are not worn, vision can be temporarily impaired (for 2–5 days). May require the HVAC system to have a source of fresh outside air, since the structure may not refresh inside air without it. Advantages of closed-cell over open-cell foams Open-cell foam is porous, allowing water vapor and liquid water to penetrate the insulation. Closed-cell foam is non-porous, and not moisture-penetrable, thereby effectively forming a semi-permeable vapor barrier. (N.B., vapor barriers are usually required by the building codes, regardless of the type of insulation used. Check with the local authorities to find out the requirements for your area.) Closed-cell foams are superior insulators. While open-cell foams typically have R-values of 3 to 4 per inch (RSI-0.53 to RSI-0.70 per inch), closed-cell foams can attain R-values of 5 to 8 per inch (RSI-0.88 to RSI-1.41 per inch). This is important if space is limited, because it allows a thinner layer of insulation to be used. For example, a 1-inch layer of closed-cell foam provides about the same insulation factor as 2 inches of open-cell foam. Closed-cell foam is very strong, and structurally reinforces the insulated surface. By contrast, open-cell foam is soft when cured, with little structural strength. Open-cell foam requires trimming after installation, and disposal of the waste material. Unlike open-cell foam, closed-cell foam rarely requires any trimming, with little or no waste. Advantages of open-cell over closed-cell foams Open-cell foams will allow timber to breathe. Open-cell foams are very effective as a sound barrier, having about twice the sound resistance in normal frequency ranges as closed-cell foam. Open-cell foams provide a better economic yield. Open-cell foams often have a low exothermic reaction temperature and will not harm coatings on electrical wiring, plumbing or other building components. Types Cementitious foam One example is AirKrete, at R-3.9 (RSI-0.69) per inch, with no restriction on depth of application. Non-hazardous. Being fireproof, it will not smoke at all upon direct contact with flame, and acts as a two-hour firewall in a normal stud-wall application, per ASTM E-814 testing (UL 1479).
Great for sound deadening; does not echo like other foams. Environmentally friendly. Non-expansive (good for existing homes where interior sheathing is in place). Fully sustainable: Consists of magnesium oxide cement and air, which is made from magnesium oxide extracted from seawater. Blown with air (no CFCs, HCFCs or other harmful blowing agents). Nontoxic, even during application. Does not shrink or settle. Zero VOC emission. Chemically inert (no known symptoms of exposure per MSDS). Insect resistant. Mold-proof. Insoluble in water. Disadvantages: Fragile at the low densities needed to achieve the quoted R-value and, like all foams, it is more expensive than conventional fiber insulations. In 2010, the Ontario Building Code Commission ruled that AirKrete did not conform to requirements for a specific application in the building code. Their ruling states "As the proposed insulation is not impermeable, it could allow water or moisture to enter the wall assembly, which could then cause damage or deterioration of the building elements." As of 2014-08-21, the domain airkretecanada.com appears to be abandoned. Polyisocyanurate Typically R-5.6 (RSI-0.99) or slightly better after stabilization – higher values (at least R-7, or RSI-1.23) in stabilized boards. Less flammable than polyurethane. Phenolic injection foam One example is Tripolymer, at R-5.1 per inch (ASTM C-177). Known for its air sealing abilities. Tripolymer can be installed in wall cavities that have fiberglass and cellulose in them. Non-hazardous. Not restricted by depth of application. Fire resistant – flame spread 5, smoke spread 0 (ASTM E-84) – will not smoke at all upon direct contact with flame and is a two-hour firewall in a normal stud-wall application per ASTM E-199. Great for sound deadening, STC 53 (ASTM E413-73); does not echo like other foams. Environmentally friendly. Non-expansive (good for existing homes where interior sheathing is in place). Fully sustainable: Consists of phenolic, a foaming agent, and air. Blown with air (no CFCs, HCFCs or other harmful blowing agents). Nontoxic, even during application. Does not shrink or settle. Zero VOC emission. Chemically inert (no known symptoms of exposure per MSDS). Insect resistant. Mold-proof. Insoluble in water. Disadvantages: Like all foams, it is more expensive than conventional fiber insulations when comparing only square-foot pricing; when price is compared per unit of R-value, the cost is about the same. Polystyrene (expanded polystyrene (EPS) and extruded polystyrene (XPS)) Closed-cell polyurethane White or yellow. May use a variety of blowing agents. Resistant to water wicking and water vapor. An example of a commercial closed-cell polyurethane product is Ecomate®, at R-8 per inch. Ecomate® is a trademarked foam blowing agent technology and family of polyurethanes which has a neutral impact on the environment (the worldwide patent was awarded to Foam Supplies Incorporated (FSI) in 2002). This is a new-generation eco-friendly foam blowing agent that is free of chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs), and is based on naturally occurring methyl methanoate (methyl formate). Open-cell (low density) polyurethane White or yellow. Expands to fill and seal the cavity, but expands slowly, preventing damage to the wall. Resistant to water wicking, but permeable to water vapor. Fire resistant. Some types of polyurethane insulation are pourable.
Here are two commercial open-cell, low-density polyurethane products: Icynene Icynene is a trademarked brand of isocyanate open-cell spray foam from Huntsman Building Solutions. The classic version has a thermal resistance (R-value) of 3.7 per inch and other versions have even higher values. The formula also includes a flame retardant. Icynene uses water for its spray application and the chemical expansion is caused by the carbon dioxide generated between the water and the isocyanate material. Icynene will expand up to 100 times its original size within the first 6 seconds of being applied. Icynene contains no ozone-depleting substances such as CFCs, HFCs or HCFCs. Icynene contains volatile organic compounds (VOCs). Icynene will not emit any harmful gases once cured. Icynene has a global warming potential of 1. Flammability is relatively low. Icynene maintains its efficiency with no loss of R-value for the life of the installation. Icynene is more expensive compared to traditional insulation methods. Any potential for harm is primarily during the installation phase and particularly for installers. The manufacture of Icynene involves many toxic petrochemicals. Sealection 500 spray foam Sealection 500 is rated at R-3.8 (RSI-0.67) per inch. It is a water-blown, low-density spray polyurethane foam that uses water in a chemical reaction to create carbon dioxide and steam, which expands the foam. Flame spread is 21 and smoke developed is 217, which makes it a Class I material (best fire rating). Disadvantages: It is an isocyanate-based foam. Insulating concrete forms Insulating concrete forms (ICFs) are stay-in-place formwork made from insulating materials to build energy-efficient, cast-in-place, reinforced concrete walls. Rigid panels Rigid panel insulation, also known as continuous insulation, can be made from foam plastics such as polyisocyanurate or polystyrene, or from fibrous materials such as fiberglass, rock and slag wool. Rigid panel continuous insulation is often used to provide a thermal break in the building envelope, thus reducing thermal bridging. Structural insulated panels Structural insulated panels (SIPs), also called stressed-skin walls, use the same concept as in foam-core external doors, but extend the concept to the entire house. They can be used for ceilings, floors, walls, and roofs. The panels usually consist of plywood, oriented strandboard, or drywall glued and sandwiched around a core consisting of expanded polystyrene, polyurethane, polyisocyanurate, compressed wheat straw, or epoxy. Epoxy is too expensive to use as an insulator on its own, but it has a high R-value (7 to 9), high strength, and good chemical and moisture resistance. SIPs come in various thicknesses. When building a house, they are glued together and secured with lumber. They provide the structural support, rather than the studs used in traditional framing. Advantages Strong. Able to bear loads, including external loads from precipitation and wind. Faster construction than a stick-built house. Less lumber required. Insulate acoustically. Impermeable to moisture. Prefabricated panels can be trucked to the construction site and assembled on site. Create a shell of solid insulation around the house, while reducing bypasses common with stick-frame construction. The result is an inherently energy-efficient house. Do not use formaldehyde, CFCs, or HCFCs in manufacturing. True R-values and lower energy costs. Disadvantages More expensive than other types of insulation. Thermal bridging at splines and lumber fastening points unless a thermally broken spline is used (insulated lumber).
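The per-inch R-values quoted throughout this article scale linearly with thickness and add in series through the layers of an assembly, with the U-value being the reciprocal of the total. The sketch below automates that arithmetic; it is a hedged illustration only, and the material names and per-inch figures in the dictionary are examples drawn from the values quoted above, not an authoritative database.

# Illustrative R-value arithmetic (imperial units: h·ft²·°F/BTU per inch).
R_PER_INCH = {
    "open-cell polyurethane": 3.7,     # typical open-cell figure cited above
    "closed-cell polyurethane": 6.5,   # mid-range of the R-5 to R-8 figure
    "fiberglass batt": 3.2,
    "cellulose loose fill": 3.6,
}

def assembly_r_value(layers):
    """Sum R-values for layers given as (material, thickness in inches)."""
    return sum(R_PER_INCH[material] * inches for material, inches in layers)

wall = [("closed-cell polyurethane", 1.0), ("fiberglass batt", 3.5)]
r_total = assembly_r_value(wall)
print(f"R-{r_total:.1f}, U = {1.0 / r_total:.3f}")

Consistent with the comparison made earlier, one inch of the closed-cell value used here (about R-6.5) roughly matches two inches of the open-cell value (about R-7.4).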
Fiberglass batts and blankets (glass wool) Batts are precut, whereas blankets are available in continuous rolls. Compressing the material reduces its effectiveness. Cutting it to accommodate electrical boxes and other obstructions allows air a free path to cross through the wall cavity. One can install batts in two layers across an unfinished attic floor, perpendicular to each other, for increased effectiveness at preventing heat bridging. Blankets can cover joists and studs as well as the space between them. Batts can be challenging and unpleasant to hang under floors between joists; straps, or staple cloth or wire mesh across joists, can hold them up. Gaps between batts (bypasses) can become sites of air infiltration or condensation (both of which reduce the effectiveness of the insulation) and require strict attention during the installation. By the same token, careful weatherization and installation of vapor barriers are required to ensure that the batts perform optimally. Air infiltration can also be reduced by adding a layer of cellulose loose-fill on top of the material. Types Rock and slag wool. Usually made from rock (basalt, diabase) or iron ore blast furnace slag. Some rock wool contains recycled glass. Nonflammable. Fiberglass. Made from molten glass, usually with 20% to 30% recycled industrial waste and post-consumer content. Nonflammable, except for the facing (if present). Sometimes, the manufacturer modifies the facing so that it is fire-resistant. Some fiberglass is unfaced, some is paper-faced with a thin layer of asphalt, and some is foil-faced. Paper-faced batts are vapor retarders, not vapor barriers. Foil-faced batts are vapor barriers. The vapor barrier must be installed toward the warm side. High-density fiberglass Plastic fiber, usually made from recycled plastic. Does not cause irritation like fiberglass, but more difficult to cut than fiberglass. Not used in the US. Flammable, but treated with fire retardant. Natural fiber Natural fiber insulations, treated as necessary with low-toxicity fire and insect retardants, are available in Europe: Natural fiber insulations can be used loose as granules or formed into flexible or semi-rigid panels and rigid panels using a binder (mostly synthetic, such as polyester, polyurethane or polyolefin). The binder material can be new or recycled. Examples include cork, cotton, recycled tissue/clothes, hemp, flax, coconut (coir), wool, lightweight wood fiber, cellulose, seaweed, etc. Similarly, many plant-based waste materials can be used as insulation, such as nut shells, corncobs, most straws including lavender straw, recycled wine bottle corks (granulated), etc. They usually have significantly less thermal performance than industrial products; this can be compensated for by increasing the thickness of the insulation layer. They may or may not require fire retardants or anti-insect/pest treatments. Clay coating is a nontoxic additive which often meets these requirements. Traditional clay-impregnated light straw insulation has been used for centuries in the northern climates of Europe. The clay coating gives the insulation a half-hour fire rating according to DIN (German) standards. An additional source of insulation derived from hemp is hempcrete, which consists of hemp hurds (shives) mixed with a lime binder. It has little structural strength but can provide racking strength and insulation with comparable or superior R-values depending on the ratio of hemp to binder.
Cork insulation board Cork was first used as a material around the 2nd century (c. 100–200 CE), but it was not until the 19th century that cork came into wide use, leading to major industrial production. Cork is harvested from the bark of cork oak trees, generally found in Portugal, Spain and other Mediterranean countries. When a tree reaches 20 to 35 years old, it can be harvested at 10-year intervals for more than 200 years. The bark has a lattice-like cellular structure filled with millions of air bubbles, giving it resilience, elasticity, thermal insulating, acoustic dampening, and shock absorbing properties. The material is sustainable, reusable and recyclable. There are two types of cork board: pure cork, which is preferable because of its natural bonding properties, and agglomerated cork. Pure cork board is made by processes of heating and steaming whereby cork granules are molded into a block. The natural resin of the cork acts as a bonding agent. An artificial bonding agent is required for the production of agglomerated cork. Cork is typically used for acoustic and thermal insulation within walls, floors, ceilings and facades. A natural fire retardant, thermal insulating cork board is also non-allergenic, simple to install and a considerably safer substitute for fiber- and plastic-based insulation. Notable challenges with cork include difficulty in maintenance and cleaning, especially if the material is exposed to heavy use, such as insulation for flooring. Minor damage to the cork surface can make the material more prone to staining. Sheep's wool insulation Sheep's wool insulation is a very efficient thermal insulator with a similar performance to fiberglass, approximately R13-R16 for a 4-inch-thick layer. Sheep's wool has no reduction in performance even when condensation is present, but its fire-retarding treatment can deteriorate with repeated exposure to moisture. It is made from the waste wool that the carpet and textile industries reject, and is available in both rolls and batts for both thermal and acoustic insulation of housing and commercial buildings. Wool is capable of absorbing as much as 40% of its own weight in condensation while remaining dry to the touch. As wool absorbs moisture it heats up and therefore reduces the risk of condensation. It has the unusual ability to absorb harmful gases such as formaldehyde, nitrogen dioxide and sulphur dioxide and lock them up permanently. Sheep's wool insulation has a long lifetime due to the natural crimp in the fibre; endurance testing has shown it has a life expectancy of over 100 years. Wood fiber Wood fiber insulation is available as loose fill, flexible batts and rigid panels for all thermal and sound insulation uses. It can be used as internal insulation: between studs, joists or ceiling rafters, under timber floors to reduce sound transmittance, against masonry walls; or externally: using a rain screen cladding or roofing, or directly plastered/rendered, over timber rafters or studs or masonry structures as external insulation to reduce thermal bridges. There are two manufacturing processes: a wet process, similar to that used in pulp mills, in which the fibers are softened and, under heat and pressure, the lignin in the fibres is used as the binder to create boards. The boards are limited to approximately 25 mm thickness; thicker boards are made by gluing (with modified starch or PVA wood glue). Additives such as latex or bitumen are added to increase water resistance.
a dry process where a synthetic binder such as PET (melt-bonded polyester), polyolefin or polyurethane is added and the boards/batts are pressed to different densities to make flexible batts or rigid boards. Cotton batts Cotton insulation is increasing in popularity as an environmentally preferable option for insulation. It has an R-value of around 3.7 (RSI-0.65), equivalent to the median value for fiberglass batts. The cotton is primarily recycled industrial scrap, providing a sustainability benefit. The batts do not use the toxic formaldehyde backing found in fiberglass, and the manufacture is nowhere near as energy intensive as the mining and production process required for fiberglass. Boric acid is used as a flame retardant. A small quantity of polyolefin is melted as an adhesive to bind the product together (and is preferable to formaldehyde adhesives). Installation is similar to fiberglass, without the need for a respirator but requiring some additional time to cut the material. Cotton insulation costs about 10-20% more than fiberglass insulation. As with any batt insulation, proper installation is important to ensure high energy efficiency. Advantages Equivalent R-value to typical fiberglass batts Recycled content, no formaldehyde or other toxic substances, and very low toxicity during manufacture (only from the polyolefin) May help qualify for LEED or similar environmental building certification programs Fibers do not cause itchiness, no cancer risk from airborne fibers Disadvantages Difficult to cut. Some installers may charge a slightly higher cost for installation as compared to other batts. This does not affect the effectiveness of the insulation, but may require choosing an installer more carefully, as any batt should be cut to fit the cavity well. Even with proper installation, batts do not completely seal the cavity against air movement (as with cellulose or expanding foam). Still requires a vapor retarder or barrier (unlike cellulose) May be hard to dry if a leak allows excessive moisture into the insulated cavity Loose-fill (including cellulose) Loose-fill materials can be blown into attics, finished wall cavities, and hard-to-reach areas. They are ideal for these tasks because they conform to spaces and fill in the nooks and crannies. They can also be sprayed in place, usually with water-based adhesives. Many types are made of recycled materials (cellulose, for example) and are relatively inexpensive. General procedure for retrofits in walls: Drill holes in the wall with a hole saw, taking firestops, plumbing pipes, and other obstructions into account. It may be desirable to drill two holes in each wall cavity/joist section, one at the bottom and a second at the top, for both verification and top-off. Pump loose fill into the wall cavity, gradually pulling the hose up as the cavity fills. Cap the holes in the wall. Advantages Cellulose insulation is environmentally preferable (80% recycled newspaper) and safe. It has a high recycled content and less risk to the installer than fiberglass (loose fill or batts). R-value 3.4 – 3.8 (RSI-0.60 – 0.67) per inch (imperial units) Loose fill insulation fills the wall cavity better than batts. Wet-spray applications typically seal even better than dry-spray. Class I fire safety rating No formaldehyde-based binders Not made from petrochemicals or highly toxic chemicals Disadvantages If the material is very heavy, its weight may cause ceilings to sag. Professional installers know how to avoid this, and typical sheetrock is fine when dense-packed.
Will settle over time, losing some of its effectiveness. Unscrupulous contractors may "fluff" insulation using fewer bags than optimal for a desired R-value. Dry-spray (but not wet-spray) cellulose can settle by 20% of its original volume. However, the expected settling is included in the stated R-value. The dense-pack dry installation reduces settling and increases R-value. R-values stated on packaging are based on laboratory conditions; air infiltration can significantly reduce effectiveness, particularly for fiberglass loose fill. Cellulose inhibits convection more effectively. In general, loose fill is seen as being better at reducing the presence of gaps in insulation than batts, as the cavity is sealed more carefully. Air infiltration through the insulating material itself is not studied well, but would be lower for wet-spray insulations such as wet-spray cellulose. May absorb moisture. Types Rock and slag wool, also known as mineral wool or mineral fiber. Made from rock (basalt, diabase), iron ore blast furnace slag, or recycled glass. Nonflammable. More resistant to airflow than fiberglass. Clumps and loses effectiveness when moist or wet, but does not absorb much moisture, and regains effectiveness once dried. Older mineral wool can contain asbestos, but normally this is in trace amounts. Cellulose insulation. Cellulose is denser and more resistant to air flow than fiberglass. Persistent moisture will weaken aluminium sulphate flame retardants in cellulose (which are sometimes used in the US). However, borate fire retardants (used primarily in Australia and commonly in the US) have been in use for more than 30 years and are not affected by moisture in any way. Dense-pack cellulose is highly resistant to air infiltration and is either installed into an open wall cavity using nets or temporary frames, or is retrofitted into finished walls. However, dense-pack cellulose blocks bypasses, but does not permanently seal them in the way a closed-cell spray foam would. Furthermore, as with batts and blankets, warm, moist air will still pass through, unless there is a continuous near-perfect vapor barrier. Wet-spray cellulose insulation is similar to loose-fill insulation, but is applied with a small quantity of water to help the cellulose bind to the inside of open wall cavities, and to make the cellulose more resistant to settling. Spray application provides even better protection against air infiltration and improves wall rigidity. It also allows application on sloped walls, attics, and similar spaces. Wet-spray is best for new construction, as the wall must be allowed to dry completely before sealing with drywall (a moisture meter is recommended). Moist-spray (also called stabilized) cellulose uses less water to speed up drying time. Fiberglass. Usually pink, yellow, or white. Loses effectiveness when moist or wet, but does not absorb much water. Nonflammable. See Health effects of fiberglass. Natural insulations such as granulated cork, hemp fibres and grains, all of which can be treated with low-toxicity fire and insect retardants Vermiculite. Generally gray or brown. Perlite. Generally white or yellow. Cotton, wool, hemp, corn cobs, straw dust and other harvested natural materials. Not common. Granulated cork. Cork is as good an insulator as foam. It does not absorb water as it consists of closed cells. Resists fire. Used in Europe. Most plant-based insulations such as wood chips, wood fiber, sawdust, redwood bark, hemlock fiber, balsa wood, hemp fiber, flax fiber, etc. are hygroscopic.
Wood absorbs water, which reduces its effectiveness as a thermal insulator. In the presence of moisture, wood is susceptible to mold, mildew, and rot. Careful design of wall, roof and floor systems as done in Europe avoid these problems which are due to poor design. Regulations US regulatory standards for cellulose insulation 16 CFR Part 1209 (Consumer Products Safety Commission, or CPSC) – covers settled density, corrosiveness, critical radiant flux, and smoldering combustion. ASTM Standard C-739 – loose-fill cellulose insulation – covers all factors of the CPSC regulation and five additional characteristics, R-value, starch content, moisture absorption, odor, and resistance to fungus growth. ASTM Standard C-1149 – Industry standard for self-supported spray-applied cellulose insulation for exposed or wall cavity application – covers density, R-value, surface burning, adhesive strength, smoldering combustion, fungi resistance, corrosion, moisture vapor absorption, odor, flame resistance permanency (no test exists for this characteristic), substrate deflection (for exposed application products), and air erosion (for exposed application products). 16 CFR Part 460 – (Federal Trade Commission regulation) commonly known as the "R-Value Rule," intended to eliminate misleading insulation marketing claims and ensure publication of accurate R-Value and coverage data. Aerogels Skylights, solariums and other special applications may use aerogels, a high-performance, low-density material. Silica aerogel has the lowest thermal conductivity of any known substance (short of a vacuum), and carbon aerogel absorbs infrared radiation (i.e., heat from sun rays) while still allowing daylight to enter. The combination of silica and carbon aerogel gives the best insulating properties of any known material, approximately twice the insulative protection of the next best insulative material, closed-cell foam. Straw bales The use of highly compressed straw bales as insulation, though uncommon, is gaining popularity in experimental building projects for the high R-value and low cost of a thick wall made of straw. "Research by Joe McCabe at the Univ. of Arizona found R-value for both wheat and rice bales was about R-2.4 (RSI-0.42) per inch with the grain, and R-3 (RSI-0.53) per inch across the grain. A 23" wide 3 string bale laid flat = R-54.7 (RSI-9.64), laid on edge (16" wide) = R-42.8 (RSI-7.54). For 2 string bales laid flat (18" wide) = R-42.8 (RSI-7.54), and on edge (14" wide) = R-32.1 (RSI-5.66)" (Steen et al.: The Straw Bale House, 1994). Using a straw bale in-fill sandwich roof greatly increases the R value. This compares very favorably with the R-19 (RSI-3.35) of a conventional 2 x 6 insulated wall. When using straw bales for construction, the bales must be tightly-packed and allowed to dry out sufficiently. Any air gaps or moisture can drastically reduce the insulating effectiveness. Reflective insulation and radiant barriers Reflective insulation and radiant barriers reduce the radiation of heat to or from the surface of a material. Radiant barriers will reflect radiant energy. A radiant barrier by itself will not affect heat conducted through the material by direct contact or heat transferred by moist air rising or convection. For this reason, trying to associate R-values with radiant barriers is difficult and inappropriate. The R-value test measures heat transfer through the material, not to or from its surface. There is no standard test designed to measure the reflection of radiated heat energy alone. 
Radiated heat is a significant means of heat transfer; the sun's heat arrives by radiating through space and not by conduction or convection. At night the same phenomenon runs in reverse: heat radiates away from the warm building toward the cold sky. Radiant barriers prevent radiant heat transfer equally in both directions. However, heat flow to and from surfaces also occurs via convection, which in some geometries is different in different directions. Reflective aluminum foil is the most common material used as a radiant barrier. It has no significant mass to absorb and retain heat. It also has very low emittance values ("E-values", typically 0.03 compared to 0.90 for most bulk insulation), which significantly reduces heat transfer by radiation. Types of radiant barriers Foil or reflective foil laminates (RFL). Foil-faced polyurethane or foil-faced polyisocyanurate panels. Foil-faced polystyrene. This laminated, high-density EPS is more flexible than rigid panels, works as a vapor barrier, and works as a thermal break. Uses include the underside of roof sheathing, ceilings, and walls. For best results, this should not be used as a cavity-fill type insulation. Foil-backed bubble pack. This is thin, more flexible than rigid panels, works as a vapor barrier, and resembles plastic bubble wrap with aluminum foil on both sides. Often used on cold pipes, cold ducts, and the underside of roof sheathing. Light-colored roof shingles and reflective paint. Often called cool roofs, these help to keep attics cooler in the summer and in hot climates. To maximize radiative cooling at night, they are often chosen to have high thermal emissivity, whereas their low emissivity (and hence high reflectance) in the solar spectrum rejects heat during the day. Metal roofs; e.g., aluminum or copper. Radiant barriers can function as vapor barriers and serve both purposes with one product. Materials with one shiny side (such as foil-faced polystyrene) must be positioned with the shiny side facing an air space to be effective. An aluminum foil radiant barrier can be placed either way: the shiny side is created by the rolling mill during the manufacturing process and does not affect the reflectivity of the foil material. As radiant barriers work by reflecting infra-red energy, the aluminum foil would work just the same if both sides were dull. Reflective insulation Insulation in general is a barrier material that resists or reduces the transfer of a substance (water, vapor, etc.) or of energy (sound, heat, electricity, etc.) from one side to the other. Thermal insulation is a barrier material that resists, blocks or reflects the transfer of heat energy by one or more of conduction, convection and radiation. Reflective insulation is thermal insulation that reflects radiant heat from one side to the other by virtue of its reflective, low-emittance surface. "Thermal insulation" is often misinterpreted as meaning only bulk (mass or batt) insulation, which in fact resists conductive heat transfer and carries a stated R-value. Materials that reflect radiant heat while having negligible R-value should also be classified as thermal insulation.
In this sense, reflective insulation and radiant barriers are one and the same. Advantages Very effective in warmer climates No change in thermal performance over time due to compaction, disintegration or moisture absorption Thin sheets take up less room than bulk insulation Can act as a vapor barrier Non-toxic/non-carcinogenic Will not mold or mildew Acts as a radon retarder, limiting radon penetration through the floor Disadvantages Must be combined with other types of insulation in very cold climates May result in an electrical safety hazard where the foil comes into contact with faulty electrical wiring Hazardous and discontinued insulation Certain forms of insulation used in the past are no longer used because of recognized health risks. Urea-formaldehyde foam (UFFI) and panels Urea-formaldehyde insulation releases poisonous formaldehyde gas, causing indoor air quality problems. The chemical bond between the urea and formaldehyde is weak, resulting in degradation of the foam cells and emission of toxic formaldehyde gas into the home over time. Furthermore, some manufacturers used excess formaldehyde to ensure chemical bonding of all of the urea. Any leftover formaldehyde would escape after the mixing. Most states outlawed it in the early 1980s after dangers to building occupants were discovered. However, emissions are highest when the urea-formaldehyde is new and decrease over time, so houses that have had urea-formaldehyde within their walls for years or decades do not require remediation. UFFI provides little mechanical strength, as the material is weak and brittle. Before its risks were recognized, it was used because it was a cheap, effective insulator with a high R-value, and its open-cell structure was a good acoustic insulator. Though it absorbed moisture easily, it regained effectiveness as an insulator when dried. Asbestos Asbestos is a mineral fiber, occurring in rock and soil, that has traditionally been used as an insulation material in many homes and buildings. It is fireproof, a good thermal and electrical insulator, and resistant to chemical attack and wear. It has also been found that asbestos can cause cancer when in friable form (that is, when likely to release fibers into the air – when broken, jagged, shredded, or scuffed). When found in the home, asbestos often resembles grayish-white corrugated cardboard coated with cloth or canvas, usually held in place around pipes and ducts with metal straps. Things that typically might contain asbestos: Boiler and furnace insulation. Heating duct wrapping. Pipe insulation ("lagging"). Ducting and transite pipes within slabs. Acoustic ceilings. Textured materials. Resilient flooring. Blown-in insulation. Roofing materials and felts. Health and safety issues Spray polyurethane foam (SPF) All polyurethane foams are composed of petrochemicals. Foam insulation often uses hazardous chemicals with high human toxicity, such as isocyanates, benzene and toluene. The foaming agents no longer use ozone-depleting substances. Personal protective equipment is required for all people in the area being sprayed to eliminate exposure to isocyanates, which constitute about 50% of the foam raw material. Fiberglass Fiberglass is the most common residential insulating material, and is usually applied as batts of insulation, pressed between studs. Health and safety issues include potential cancer risk from exposure to glass fibers, formaldehyde off-gassing from the backing/resin, use of petrochemicals in the resin, and the environmental health aspects of the production process.
Green building practices shun fiberglass insulation. The World Health Organization declared fiber glass insulation potentially carcinogenic (WHO, 1998). In October 2001, an international expert review by the International Agency for Research on Cancer (IARC) re-evaluated the 1988 IARC assessment of glass fibers and removed glass wools from its list of possible carcinogens by downgrading the classification of these fibers from Group 2B (possible carcinogen) to Group 3 (not classifiable as to carcinogenicity in humans). All fiber glass wools that are commonly used for thermal and acoustical insulation are included in this classification. IARC noted specifically: "Epidemiologic studies published during the 15 years since the previous IARC Monographs review of these fibers in 1988 provide no evidence of increased risks of lung cancer or mesothelioma (cancer of the lining of the body cavities) from occupational exposures during manufacture of these materials, and inadequate evidence overall of any cancer risk." The IARC downgrade is consistent with the conclusion reached by the US National Academy of Sciences, which in 2000 found "no significant association between fiber exposure and lung cancer or nonmalignant respiratory disease in the MVF [man-made vitreous fiber] manufacturing environment." However, manufacturers continue to provide cancer risk warning labels on their products, apparently as indemnification against claims. However, the literature should be considered carefully before determining that the risks should be disregarded. The OSHA chemical sampling page provides a summary of the risks, as does the NIOSH Pocket Guide. Miraflex is a new type of fiberglass batt that has curly fibers that are less itchy and create less dust. Fiberglass products factory-wrapped in plastic or fabric are also available. Fiberglass is energy intensive in manufacture. Fiberglass fibers are bound into batts using adhesive binders, which can slowly release formaldehyde over many years. The industry is mitigating this issue by switching to binder materials not containing formaldehyde; some manufacturers offer agriculturally based binder resins made from soybean oil. Formaldehyde-free batts and batts made with varying amounts of recycled glass (some approaching 50% post-consumer recycled content) are available. Loose-fill cellulose Cellulose is 100% natural and 75–85% of it is made from recycled newsprint. Health issues (if any) appear to be minor, and most concerns around the flame retardants and mold potential seem to be misrepresentations. Cellulose is classified by OSHA as a nuisance dust during installation, and the use of a dust mask is recommended. Cellulose is treated with a flame retardant and insect repellent, usually boric acid and sometimes borax, to resist insects and rodents. To humans, boric acid has a toxicity comparable to table salt. Mold has been seen as a potential concern. However, according to the Cellulose Manufacturer's Association, "One thing that has not contributed to mold problems is the growing popularity of cellulose insulation among knowledgeable home owners who are interested in sustainable building practices and energy conservation. Mycology experts (mycology is the study of mold) are often quoted as saying: "Mold grows on cellulose." They are referring to cellulose the generic material that forms the cell walls of all plants, not to cellulose insulation.
Unfortunately, all too often this statement is taken to mean that cellulose insulation is exceptionally susceptible to mold contamination. In fact, due to its favorable moisture control characteristics and other factors associated with the manufacturing process, relatively few cases of significant mold growth on cellulose insulation have been reported. All the widely publicized incidents of serious mold contamination of insulation have involved fiber insulation materials other than cellulose." Moisture is always a concern for homes, and the wet-spray application of cellulose may not be a good choice in particularly wet climates unless the insulation can be verified to be dry before drywall is added. In very wet climates the use of a moisture meter will ensure proper installation and eliminate any installation mold issues (almost any insulation that becomes and remains wet can in the future cause a mold issue). The dry-spray application is another option for very wet climates, allowing for a faster installation (though wet-spray cellulose has an even higher R-value and can increase wall rigidity). US Health and Safety Partnership Program In May 1999, the North American Insulation Manufacturers Association began implementing a comprehensive voluntary work practice partnership with the US Occupational Safety and Health Administration (OSHA). The program, known as the Health and Safety Partnership Program, or HSPP, promotes the safe handling and use of insulation materials and incorporates education and training for the manufacture, fabrication, installation and removal of fiber glass, rock wool and slag wool insulation products. (See health effects of fiberglass.) (For authoritative and definitive information on fiber glass and rock and slag wool insulation, as well as the HSPP, consult the North American Insulation Manufacturers Association (NAIMA) website.) See also Condensation Enovate Low-energy building Superinsulation Thermal mass Quadruple glazing Weatherization Notes References U.S. Environmental Protection Agency and the US Department of Energy's Office of Building Technologies. Loose-Fill Insulations, DOE/GO-10095-060, FS 140, Energy Efficiency and Renewable Energy Clearinghouse (EREC), May 1995. Insulation Fact Sheet, US Department of Energy, update to be published 1996. Also available from EREC. Lowe, Allen. "Insulation Update," The Southface Journal, 1995, No. 3. Southface Energy Institute, Atlanta, Georgia, US ICAA Directory of Professional Insulation Contractors, 1996, and A Plan to Stop Fluffing and Cheating of Loose-Fill Insulation in Attics, Insulation Contractors Association of America, 1321 Duke St., #303, Alexandria, VA 22314, (703)739-0356. US DOE Consumer Energy Information. Insulation Information for Nebraska Homeowners, NF 91–40. Article in Daily Freeman, Thursday, 8 September 2005, Kingston, New York, US TM 5-852-6 AFR 88–19, Volume 6 (Army Corps of Engineers publication). CenterPoint Energy Customer Relations. US DOE publication, Residential Insulation US DOE publication, Energy Efficient Windows US EPA publication on home sealing DOE/CE 2002 University of North Carolina at Chapel Hill Alaska Science Forum, May 7, 1981, Rigid Insulation, Article #484, by T. Neil Davis, provided as a public service by the Geophysical Institute, University of Alaska Fairbanks, in cooperation with the UAF research community.
Guide raisonné de la construction écologique (a guide to products and manufacturers of green building materials, mainly in France but also surrounding countries), Batir-Sain 2004 Insulators Building materials
Building insulation material
[ "Physics", "Engineering" ]
9,812
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
8,855,773
https://en.wikipedia.org/wiki/Allied%20technological%20cooperation%20during%20World%20War%20II
The Allies of World War II cooperated extensively in the development and manufacture of new and existing technologies to support military operations and intelligence gathering during the Second World War. There are various ways in which the Allies cooperated, including the American Lend-Lease scheme and hybrid weapons such as the Sherman Firefly, as well as the British Tube Alloys nuclear weapons research project, which was absorbed into the American-led Manhattan Project. Several technologies invented in Britain proved critical to the military and were widely manufactured by the Allies during the Second World War. Tizard Mission The origin of the cooperation stemmed from a 1940 visit by the Aeronautical Research Committee chairman Henry Tizard, during which Tizard arranged to transfer UK military technology to the US in the event that Hitler's planned invasion of the UK should succeed. Tizard led a British technical mission, known as the Tizard Mission, containing details and examples of British technological developments in fields such as radar, jet propulsion and also the early British research into the atomic bomb. One of the devices brought to the US by the mission, the cavity magnetron, was later described as "the most valuable cargo ever brought to our shores". Small arms Small arms began to be shared after the fall of France, most of the 'sharing' being one-sided, as America was not yet directly involved in the conflict and thus all the movement was from the United States to the United Kingdom. In the months following Operation Dynamo, as British manufacturers progressed in building replacements for the materiel lost by the British Army in France, the British government looked overseas for additional sources of equipment to assist in overcoming shortages and prepare for future offensives. The most extreme example of the shortages was found in the quickly improvised Local Defence Volunteers, later renamed the Home Guard, who were forced to train with broom handles and makeshift pikes using lengths of piping and old bayonets until weapons could be supplied. In addition to those produced in Britain, small arms and ammunition were obtained from Commonwealth countries and also purchased from U.S. manufacturers until they were supplied under Lend-Lease beginning in 1941. The weapons obtained from the United States included the Tommy gun, M1911A1 pistol and the M1917 revolver produced by Colt and Smith & Wesson, all primarily produced in .45 ACP. The Home Guard received the Browning .30 machine gun, the M1918 .30 BAR and the P17 .30 Enfield rifle. M1917 Enfield rifles chambered for .303 British were also provided by the U.S., while all .30-caliber U.S. rifles, BARs and machine guns were chambered for .30-06 Springfield. Later, the M1919 .30 machine gun and the M2HB .50 machine gun chambered in .50 BMG were provided by the U.S. for infantry and anti-aircraft use. Browning AN/M2 light machine guns in .303 British caliber were already in standard use on British aircraft beginning in the late 1930s. Britain supplied small arms to the USSR, and the 9mm Sten submachine gun was supplied to Soviet partisan troops. Artillery The British made use of many American towed artillery pieces during the war, such as the M2 105 mm howitzers, M1A1 75 mm pack howitzers, and 155 mm guns (Long Toms). These weapons were supplied under Lend-Lease or bought outright.
Tank/tank destroyer guns used by the British included the 37 mm M5/M6 gun (General Stuart and General Grant/Lee tanks), 75 mm M2 gun (General Grant/Lee), 75 mm M3 gun (General Grant/Lee and General Sherman), 76 mm gun M1 (General Sherman) and 3-inch gun M7 (3-inch GMC M10). The Americans in turn used a British artillery piece, the Ordnance QF 6-pounder 7 cwt anti-tank gun. The US realized at the start of the war that their own 37 mm gun M3 would soon be obsolete, and thus they produced a license-built version of the QF 6-pounder under the designation 57 mm gun M1. Both 76 mm and 75 mm guns were mounted on tanks sent to the Soviets by the US, while the British tanks sent were armed with both the Ordnance QF 2-pounder and the Ordnance QF 6-pounder. Another technology taken to the US, by Henry Tizard, for further development and mass production was the (radio-frequency) proximity fuze. It was five times as effective as contact or timed fuzes and was devastating in naval use against Japanese aircraft, and so effective against German ground troops that General George S. Patton said it "won the Battle of the Bulge for us." Tanks and other vehicles The medium tank M4 was used in all theatres of the Second World War. It had a versatile, reliable design and was easy to produce, thus huge numbers were made and provided to both Britain and the USSR by the United States under Lend-Lease. Despite official opinions, the medium tank M4 was well liked by some Soviet tankers, while others called it the best tank for peacetime service. When Britain received the tank, it was given the designation Sherman, as part of the UK practice of naming its US-built tanks after American Civil War generals. Both the British and the Soviets re-armed their M4s with their own tank guns. The Soviets re-armed a small number with the standard 76 mm F-34 tank gun, but so much 75 mm ammunition was supplied by the US that the conversions were not widespread. Unfortunately, the fairly short-barreled 75 mm gun most Shermans came equipped with did not offer very good armor penetration even with specialty ammunition, especially against the then-new Panther and Tiger. However, the British 76.2 mm (3-inch) Ordnance QF 17-pounder, one of the best anti-tank guns of the period, could be fitted in the Sherman's turret with modifications to the gun, a new gun mantlet, and a bustle welded to the turret rear; this modification was known as the Firefly. The combination of British and American weaponry proved desirable, although while the United States built a few 17-pounder Fireflies of its own, the type never went into American mass production and those vehicles did not see action. The US had its own 76 mm calibre long-barrel gun for the Sherman. While it was not as good as the 17-pounder, it still had a much better chance of successfully engaging German heavy tanks, especially at close range, offered consistent kill-power against more equally matched opponents at all ranges, and did not require major modification to fit, as the 17-pounder did. The Firefly thus remained a British variant of the Sherman. The M10 tank destroyer was also up-gunned with the 17-pounder, creating the M10C tank destroyer, sometimes known as "Achilles". This was used in accordance with British tactical doctrine for tank destroyers, in that they were considered self-propelled anti-tank guns rather than aggressive 'tank hunters'. Used in this fashion, it proved an effective weapon.
The British also used the Sherman hull for two other Sherman variants known as the Crab, a mine-flailing tank, and the DD ('Duplex Drive') Sherman, an amphibious tank. A flotation screen gave buoyancy and two propellers powered by the tank's engine gave propulsion in the water. On reaching land the screens could be dropped and the tank could fight in the normal manner. The DD, another key example of combining technologies, was used by both British and American forces during Operation Overlord. The DD had impressed US General Dwight D. Eisenhower during demonstrations and was readily accepted by the Americans. The Americans did not accept the Sherman Crab, which could have assisted combat engineers with clearing mines under fire, protected by armour. Armoured recovery vehicles (ARVs) were also converted from Shermans by the British, as well as the specialist BARV (Beach Armoured Recovery Vehicle), designed to push off stranded landing craft and salvage vehicles which would otherwise have been lost. The British supplied tanks to the USSR in the form of the Matilda, Valentine and Churchill infantry tanks. Soviet tank soldiers liked the Valentine for its reliability, cross-country performance and low silhouette. The Soviets' opinion of the Matilda and Churchill was less favourable as a result of their weak 40-mm guns (without HE shells) and inability to operate in harsh rasputitsa, winter and off-road conditions. Deliveries of M3 half-tracks from the US to the Soviet Union were a significant benefit to mechanized Red Army units. Soviet industry produced few armoured personnel carriers, so Lend-Lease American vehicles were in great demand for fast movement of troops in front-line conditions. While M3s had only limited protection, common trucks had no protection at all. In addition, a large part of the Red Army truck fleet was American Studebakers, which were highly regarded by Soviet drivers. After the war, Soviet designers paid a lot of attention to creating their own 6x6 army truck, and the Studebaker was the template for this development. In 1942, a T-34 and a KV-1 tank were sent by the Soviet Union to the US, where they were evaluated at the Aberdeen Proving Ground. Another T-34 was sent to the British. Aircraft Britain supplied Hawker Hurricanes to the Soviet Union early in the Great Patriotic War to help equip the Soviet Air Force against the then technologically superior Luftwaffe. RAF engineer Frank Whittle travelled to the US in 1942 to help General Electric start jet engine production. The American P-51 Mustang was originally designed to a British specification for use by the Royal Air Force and entered service with them in 1942, and later versions were built with a Rolls-Royce Merlin aero-engine, which was produced in the United States by Packard as the Packard Merlin. In addition to the British use of American aircraft, the US also made use of some Supermarine Spitfires, both in escorting USAAF 8th Air Force bombers in Europe and as the primary fighter of the 12th Air Force in North Africa. Bristol Beaufighters served as night fighters in the Mediterranean, and two squadrons of de Havilland Mosquitos equipped the 8th Air Force as its primary photo-reconnaissance and chaff-deployment aircraft. The United States supplied several aircraft types to both the Royal Navy and RAF: all three of the U.S. Navy's primary fighters during the war years, the Wildcat, Corsair (with the RN assisting the Americans in preparing the Corsair for U.S.
naval carrier service by 1944) and Hellcat, served with the RN's Fleet Air Arm, while the Royal Air Force used a wide range of USAAF types. A wide range of American aircraft designs also went to the Soviet Union's VVS air arm through Lend-Lease, primarily fighters like the P-39 and P-63 used for aerial combat, along with attack and medium bombers like the A-20 and the B-25, both bombers being well suited to the type of lower-altitude strike missions the Soviets had as a top priority. Radar The British demonstrated the cavity magnetron to American researchers at RCA and Bell Labs. It was 100 times as powerful as anything they had seen and enabled the development of airborne radar. Nuclear weapons By 1942, British nuclear weapons research had fallen behind that of the US and, unable to match American resources, the United Kingdom agreed to merge its work with the American effort. Around 20 British scientists and technical staff moved to America, along with their work, which had been carried out under the codename 'Tube Alloys'. The scientists joined the Manhattan Project at Los Alamos, New Mexico, where their work on uranium enrichment was instrumental in jump-starting the project. In addition, Britain was vital in sourcing raw materials for the project, both as the only source in the world of the nickel powder required to build gaseous diffusers and in providing uranium, from the mine in the Belgian Congo as well as by contracting a secondary supply from Sweden. Code-breaking technology Considerable information was transmitted from the UK to the US during and after WWII relating to code-breaking methods, the codes themselves, cryptanalyst visits, mechanical and digital devices for speeding code-breaking, and so on. When the Atlantic convoys of war material from the US to the UK came under serious threat from U-boats, considerable encouragement and practical help were given by the US to accelerate the development of code-breaking machines. Subsequent co-operation led to significant success in Australia and the Far East in breaking encrypted Japanese messages. Other technologies Other technologies developed by the British and shared with the Americans and other Allies include ASDIC (sonar), the Bailey bridge, the gyro gunsight, the jet engine, the Liberty ship, RDX, the Rhino tank, Torpex, the traveling-wave tube and the proximity fuze. Technologies developed by the Americans and shared with the British and Allies include the bazooka, the LVT, the DUKW and Fido (an acoustic torpedo). Canada and the U.S. independently developed and shared the walkie-talkie. Legacy The Tizard Mission was the foundation for cooperation in scientific research at institutions within and across the United States, United Kingdom and Canada. Many Norwegian scientists and technologists took part in British scientific research during the period when Germany occupied Norway between 1940 and 1945. This resulted in the Norwegian Defence Research Establishment, formed in 1946. After the war ended, the US ended all nuclear co-operation with Britain. However, the demonstration of the British hydrogen bomb and the launch of Sputnik 1 by the Soviet Union, both in 1957, resulted in the US resuming the wartime co-operation and led to a Mutual Defence Agreement between the two nations in 1958. Under this agreement, American technology was adapted for British nuclear weapons and various fissile materials were exchanged to resolve each other's specific shortages.
Cooperation between British intelligence agencies and the United States Intelligence Community in the post-war period became the cornerstone of Western intelligence gathering and the "Special Relationship" between the United Kingdom and the United States. Many military inventions during the war found civilian uses. See also British Purchasing Commission List of World War II electronic warfare equipment Operations research Radiation Laboratory Telecommunications Research Establishment References Military equipment of World War II United Kingdom–United States military relations Soviet Union–United States military relations Soviet Union–United Kingdom military relations Technological races Science and technology during World War II Allies of World War II
Allied technological cooperation during World War II
[ "Technology" ]
2,931
[ "Science and technology during World War II", "Science and technology by war" ]
8,855,979
https://en.wikipedia.org/wiki/Nmrpipe
NMRPipe is a nuclear magnetic resonance (NMR) data processing program. The project was preceded by other functionally similar programs but is, by and large, one of the most popular software packages for NMR data processing, in part because of its efficiency (it exploits Unix pipes) and its ease of use (a large amount of processing logic is embedded in its individual functions). NMRPipe consists of a series of "functions" which can be applied to a FID data file in any sequence by using Unix pipes. Each individual function in NMRPipe has a specific task and a set of arguments which can be passed to configure its behavior. See also Comparison of NMR software External links NmrPipe website nmrPipe on NMR wiki Nuclear magnetic resonance software Medical software
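A typical NMRPipe processing script chains several such functions together with pipes. The following is a minimal sketch of how a one-dimensional processing chain might be assembled and run from Python; SP (sine-bell apodization), ZF (zero filling), FT (Fourier transform) and PS (phase correction) are standard NMRPipe function names, but the specific flag values and the file names test.fid and test.ft1 are illustrative assumptions rather than a recommended protocol.

```python
import subprocess

# Each stage is one NMRPipe "function" (-fn) with its arguments.
# SP: sine-bell apodization, ZF: zero fill, FT: Fourier transform,
# PS: phase correction (-di deletes the imaginary part).
stages = [
    "nmrPipe -in test.fid",
    "nmrPipe -fn SP -off 0.5 -end 0.98 -pow 2 -c 0.5",
    "nmrPipe -fn ZF -auto",
    "nmrPipe -fn FT",
    "nmrPipe -fn PS -p0 0.0 -p1 0.0 -di",
]

# Join the stages with Unix pipes, exactly as in a hand-written
# NMRPipe shell script, and write the processed spectrum to disk.
pipeline = " | ".join(stages) + " -out test.ft1 -ov"
subprocess.run(pipeline, shell=True, check=True)
```

Because each function simply transforms the stream produced by the previous one, stages can be inserted, removed or reordered without touching the rest of the script, which is the source of the flexibility described above.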
Nmrpipe
[ "Chemistry", "Biology" ]
162
[ "Nuclear magnetic resonance", "Nuclear magnetic resonance software", "Medical software", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "Medical technology" ]
8,861,079
https://en.wikipedia.org/wiki/Euler%27s%20continued%20fraction%20formula
In the analytic theory of continued fractions, Euler's continued fraction formula is an identity connecting a certain very general infinite series with an infinite continued fraction. First published in 1748, it was at first regarded as a simple identity connecting a finite sum with a finite continued fraction in such a way that the extension to the infinite case was immediately apparent. Today it is more fully appreciated as a useful tool in analytic attacks on the general convergence problem for infinite continued fractions with complex elements. The original formula Euler derived the formula as connecting a finite sum of products with a finite continued fraction: $a_0 + a_0 a_1 + a_0 a_1 a_2 + \cdots + a_0 a_1 a_2 \cdots a_n = \cfrac{a_0}{1 - \cfrac{a_1}{1 + a_1 - \cfrac{a_2}{1 + a_2 - \cfrac{\ddots}{\ddots\, \cfrac{a_{n-1}}{1 + a_{n-1} - \cfrac{a_n}{1 + a_n}}}}}}.$ The identity is easily established by induction on n, and is therefore applicable in the limit: if the expression on the left is extended to represent a convergent infinite series, the expression on the right can also be extended to represent a convergent infinite continued fraction. Euler's formula If $r_i$ are complex numbers and x is defined by $x = 1 + \sum_{i=1}^{\infty} r_1 r_2 \cdots r_i,$ then this equality, which can be proved by induction, holds: $x = \cfrac{1}{1 - \cfrac{r_1}{1 + r_1 - \cfrac{r_2}{1 + r_2 - \cfrac{r_3}{1 + r_3 - \ddots}}}}.$ Here equality is to be understood as equivalence, in the sense that the n'th convergent of each continued fraction is equal to the n'th partial sum of the series shown above. So if the series shown is convergent – or uniformly convergent, when the $r_i$'s are functions of some complex variable z – then the continued fractions also converge, or converge uniformly. Proof by induction The finite identity is established by a double induction on the number of terms: the base case is verified directly, the induction step follows by applying the induction hypothesis to the innermost part of the sum and of the fraction, and a separate check handles the degenerate case in which a factor vanishes, where both sides are zero. As an example, the nested expression $a_0(1 + a_1(1 + a_2(1 + a_3))) = a_0 + a_0 a_1 + a_0 a_1 a_2 + a_0 a_1 a_2 a_3$ can be rearranged into a continued fraction of the form above. This can be applied to a sequence of any length, and will therefore also apply in the infinite case. Examples The exponential function The exponential function $e^x$ is an entire function with a power series expansion that converges uniformly on every bounded domain in the complex plane: $e^x = 1 + \sum_{n=1}^{\infty} \frac{x^n}{n!} = 1 + \sum_{i=1}^{\infty} \left( \prod_{j=1}^{i} \frac{x}{j} \right).$ The application of Euler's continued fraction formula is straightforward: $e^x = \cfrac{1}{1 - \cfrac{x}{1 + x - \cfrac{\frac{1}{2}x}{1 + \frac{1}{2}x - \cfrac{\frac{1}{3}x}{1 + \frac{1}{3}x - \ddots}}}}.$ Applying an equivalence transformation that consists of clearing the fractions, this example is simplified to $e^x = \cfrac{1}{1 - \cfrac{x}{1 + x - \cfrac{x}{2 + x - \cfrac{2x}{3 + x - \cfrac{3x}{4 + x - \ddots}}}}},$ and we can be certain that this continued fraction converges uniformly on every bounded domain in the complex plane because it is equivalent to the power series for $e^x$. The natural logarithm The Taylor series for the principal branch of the natural logarithm in the neighborhood of 1 is well known: $\log(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots.$ This series converges when |x| < 1 and can also be expressed as a sum of products: $\log(1 + x) = x \left( 1 + \left( \frac{-x}{2} \right) \left( 1 + \left( \frac{-2x}{3} \right) \left( 1 + \left( \frac{-3x}{4} \right) (1 + \cdots) \right) \right) \right).$ Applying Euler's continued fraction formula to this expression shows that $\log(1 + x) = \cfrac{x}{1 + \cfrac{\frac{x}{2}}{1 - \frac{x}{2} + \cfrac{\frac{2x}{3}}{1 - \frac{2x}{3} + \cfrac{\frac{3x}{4}}{1 - \frac{3x}{4} + \ddots}}}},$ and using an equivalence transformation to clear all the fractions results in $\log(1 + x) = \cfrac{x}{1 + \cfrac{1^2 x}{2 - x + \cfrac{2^2 x}{3 - 2x + \cfrac{3^2 x}{4 - 3x + \ddots}}}}.$ This continued fraction converges when |x| < 1 because it is equivalent to the series from which it was derived. The trigonometric functions The Taylor series of the sine function converges over the entire complex plane and can be expressed as the sum of products: $\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x \left( 1 + \left( \frac{-x^2}{2 \cdot 3} \right) \left( 1 + \left( \frac{-x^2}{4 \cdot 5} \right) \left( 1 + \left( \frac{-x^2}{6 \cdot 7} \right) (1 + \cdots) \right) \right) \right).$ 
Euler's continued fraction formula can then be applied: $\sin x = \cfrac{x}{1 + \cfrac{\frac{x^2}{2 \cdot 3}}{1 - \frac{x^2}{2 \cdot 3} + \cfrac{\frac{x^2}{4 \cdot 5}}{1 - \frac{x^2}{4 \cdot 5} + \ddots}}}.$ An equivalence transformation is used to clear the denominators: $\sin x = \cfrac{x}{1 + \cfrac{x^2}{2 \cdot 3 - x^2 + \cfrac{2 \cdot 3\, x^2}{4 \cdot 5 - x^2 + \cfrac{4 \cdot 5\, x^2}{6 \cdot 7 - x^2 + \ddots}}}}.$ The same argument can be applied to the cosine function: $\cos x = \cfrac{1}{1 + \cfrac{x^2}{1 \cdot 2 - x^2 + \cfrac{1 \cdot 2\, x^2}{3 \cdot 4 - x^2 + \cfrac{3 \cdot 4\, x^2}{5 \cdot 6 - x^2 + \ddots}}}}.$ The inverse trigonometric functions The inverse trigonometric functions can be represented as continued fractions in the same way; the continued fraction for the inverse tangent is straightforward: $\tan^{-1} x = \cfrac{x}{1 + \cfrac{(1x)^2}{3 - 1x^2 + \cfrac{(3x)^2}{5 - 3x^2 + \cfrac{(5x)^2}{7 - 5x^2 + \ddots}}}}.$ A continued fraction for $\pi$ We can use the previous example involving the inverse tangent to construct a continued fraction representation of π. We note that $\tan^{-1}(1) = \frac{\pi}{4},$ and setting x = 1 in the previous result, we obtain immediately $\pi = \cfrac{4}{1 + \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 + \cfrac{7^2}{2 + \ddots}}}}}.$ The hyperbolic functions Recalling the relationship between the hyperbolic functions and the trigonometric functions, $\sinh x = -i \sin(ix)$ and $\cosh x = \cos(ix),$ the following continued fractions are easily derived from the ones above: $\sinh x = \cfrac{x}{1 - \cfrac{x^2}{2 \cdot 3 + x^2 - \cfrac{2 \cdot 3\, x^2}{4 \cdot 5 + x^2 - \cfrac{4 \cdot 5\, x^2}{6 \cdot 7 + x^2 - \ddots}}}}, \qquad \cosh x = \cfrac{1}{1 - \cfrac{x^2}{1 \cdot 2 + x^2 - \cfrac{1 \cdot 2\, x^2}{3 \cdot 4 + x^2 - \ddots}}}.$ The inverse hyperbolic functions The inverse hyperbolic functions are related to the inverse trigonometric functions similar to how the hyperbolic functions are related to the trigonometric functions, e.g. $\tanh^{-1} x = -i \tan^{-1}(ix),$ and the corresponding continued fraction is easily derived: $\tanh^{-1} x = \cfrac{x}{1 - \cfrac{(1x)^2}{3 + 1x^2 - \cfrac{(3x)^2}{5 + 3x^2 - \cfrac{(5x)^2}{7 + 5x^2 - \ddots}}}}.$ See also Gauss's continued fraction Engel expansion List of topics named after Leonhard Euler References Continued fractions Leonhard Euler
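The equivalence between convergents and partial sums asserted above is easy to check numerically. A minimal sketch in Python, using the term ratios $r_i = x/i$ of the exponential series; euler_cf and partial_sum are helper names introduced here for illustration:

```python
import math

def euler_cf(r, n):
    """n-th convergent of 1/(1 - r1/(1+r1 - r2/(1+r2 - ... - rn/(1+rn))))."""
    tail = 0.0
    for i in range(n, 0, -1):      # evaluate from the innermost level outward
        tail = r(i) / (1.0 + r(i) - tail)
    return 1.0 / (1.0 - tail)

def partial_sum(r, n):
    """1 + r1 + r1*r2 + ... + r1*r2*...*rn."""
    total, prod = 1.0, 1.0
    for i in range(1, n + 1):
        prod *= r(i)
        total += prod
    return total

x = 1.0
r = lambda i: x / i                # ratios of successive terms of e^x
for n in (1, 2, 5, 10, 15):
    print(n, euler_cf(r, n), partial_sum(r, n))
print("exp(1) =", math.exp(1.0))
```

The two columns agree exactly (up to floating-point rounding) for every n, and both approach exp(1) as n grows, as the formula predicts.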
Euler's continued fraction formula
[ "Mathematics" ]
849
[ "Continued fractions", "Number theory" ]
8,861,618
https://en.wikipedia.org/wiki/Patterson%20power%20cell
The Patterson power cell is a cold fusion device invented by chemist James A. Patterson, which he claimed created 200 times more energy than it used. Patterson claimed the device neutralized radioactivity without emitting any harmful radiation. Cold fusion was the subject of an intense scientific controversy in 1989, before being discredited in the eyes of mainstream science. Physicist Robert L. Park describes the device as fringe science in his book Voodoo Science. Company formed In 1995, Clean Energy Technologies Inc. was formed to produce and promote the power cell. Claims and observations Patterson variously said it produced a hundred or two hundred times more power than it used. Representatives promoting the device at the Power-Gen '95 Conference said that an input of 1 watt would generate more than 1,000 watts of excess heat (waste heat). This supposedly happened as hydrogen or deuterium nuclei fuse together to produce heat through a form of low energy nuclear reaction. The by-products of nuclear fusion, e.g. a tritium nucleus and a proton or a 3He nucleus and a neutron, were not detected in any reliable way, leading experts to think that no such fusion was taking place. It was further claimed that if radioactive isotopes such as uranium were present, the cell enables the hydrogen nuclei to fuse with these isotopes, transforming them into stable elements and thus neutralizing the radioactivity. It was claimed that the transformation would be achieved without releasing any radiation to the environment and without expending any energy. A televised demonstration on June 11, 1997, on Good Morning America provided no proof for the claims. As of 2002, the neutralization of radioactive isotopes had only been achieved through intense neutron bombardment in a nuclear reactor or a large-scale high-energy particle accelerator, and at a large expense of energy. Patterson has carefully distanced himself from the work of Fleischmann and Pons and from the label of "cold fusion", due to the negative connotations associated with them since 1989. Ultimately, this effort was unsuccessful, and not only did it inherit the label of pathological science, but it managed to make cold fusion look a little more pathological in the public eye. Some cold fusion proponents view the cell as a confirmation of their work, while critics see it as "the fringe of the fringe of cold fusion research", since it attempts to commercialize cold fusion on top of bad science. In 2002, John R. Huizenga, professor of nuclear chemistry at the University of Rochester, who was head of a government panel convened in 1989 to investigate the cold fusion claims of Fleischmann and Pons, and who wrote a book about the controversy, said "I would be willing to bet there's nothing to it" when asked about the Patterson Power Cell. Replications George H. Miley is a professor of nuclear engineering and a cold fusion researcher who claims to have replicated the Patterson power cell. During the 2011 World Green Energy Symposium, Miley stated that his device continuously produces several hundred watts of power. Earlier results by Miley have not convinced researchers. On Good Morning America, Quintin Bowles, professor of mechanical engineering at the University of Missouri–Kansas City, claimed in 1996 to have successfully replicated the Patterson power cell. In the book Voodoo Science, Bowles is quoted as having stated: "It works, we just don't know how it works." A replication has been attempted at Earthtech, using a CETI supplied kit. 
They were not able to replicate the excess heat. References Further reading Bailey, Patrick and Fox, Hal (October 20, 1997). A review of the Patterson Power Cell. Retrieved November 19, 2011. An earlier version of this paper appears in: Energy Conversion Engineering Conference, 1997; Proceedings of the 32nd Intersociety Energy Conversion Engineering Conference. Publication Date: Jul 27 – Aug 1, 1997. Volume 4, pages 2289–2294. Meeting Date: July 27 – August 1, 1997. Location: Honolulu, HI, USA. Ask the experts, "What is the current scientific thinking on cold fusion? Is there any possible validity to this phenomenon?", Scientific American, October 21, 1999 (Patterson is mentioned on page 2). Retrieved December 5, 2007 Chemical equipment Electrochemistry Electrolysis Fringe physics Cold fusion
Patterson power cell
[ "Physics", "Chemistry", "Engineering" ]
877
[ "Nuclear physics", "Chemical equipment", "Electrochemistry", "Cold fusion", "nan", "Electrolysis", "Nuclear fusion" ]
1,595,681
https://en.wikipedia.org/wiki/Lie%20theory
In mathematics, the mathematician Sophus Lie initiated lines of study involving integration of differential equations, transformation groups, and contact of spheres that have come to be called Lie theory. For instance, the latter subject is Lie sphere geometry. This article addresses his approach to transformation groups, an area of mathematics that was worked out further by Wilhelm Killing and Élie Cartan. The foundation of Lie theory is the exponential map relating Lie algebras to Lie groups, which is called the Lie group–Lie algebra correspondence. The subject is part of differential geometry since Lie groups are differentiable manifolds. Lie groups evolve out of the identity (1) and the tangent vectors to one-parameter subgroups generate the Lie algebra. The structure of a Lie group is implicit in its algebra, and the structure of the Lie algebra is expressed by root systems and root data. Lie theory has been particularly useful in mathematical physics since it describes the standard transformation groups: the Galilean group, the Lorentz group, the Poincaré group and the conformal group of spacetime. Elementary Lie theory The one-parameter groups are the first instance of Lie theory. The compact case arises through Euler's formula in the complex plane. Other one-parameter groups occur in the split-complex number plane as the unit hyperbola, and in the dual number plane as the line of elements of the form $1 + y\varepsilon$. In these cases the Lie algebra parameters have names: angle, hyperbolic angle, and slope. These species of angle are useful for providing polar decompositions which describe sub-algebras of 2 × 2 real matrices. There is a classical 3-parameter Lie group and algebra pair: the quaternions of unit length, which can be identified with the 3-sphere. Its Lie algebra is the subspace of quaternion vectors. Since the commutator ij − ji = 2k, the Lie bracket in this algebra is twice the cross product of ordinary vector analysis. Another elementary 3-parameter example is given by the Heisenberg group and its Lie algebra. Standard treatments of Lie theory often begin with the classical groups. History and scope Early expressions of Lie theory are found in books composed by Sophus Lie with Friedrich Engel and Georg Scheffers from 1888 to 1896. In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations. According to historian Thomas W. Hawkins, it was Élie Cartan that made Lie theory what it is: While Lie had many fertile ideas, Cartan was primarily responsible for the extensions and applications of his theory that have made it a basic component of modern mathematics. It was he who, with some help from Weyl, developed the seminal, essentially algebraic ideas of Killing into the theory of the structure and representation of semisimple Lie algebras that plays such a fundamental role in present-day Lie theory. And although Lie envisioned applications of his theory to geometry, it was Cartan who actually created them, for example through his theories of symmetric and generalized spaces, including all the attendant apparatus (moving frames, exterior differential forms, etc.) 
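The exponential map that underlies the Lie group–Lie algebra correspondence can be made concrete numerically. A minimal sketch, assuming NumPy and SciPy are available: it exponentiates an element of the Lie algebra $\mathfrak{so}(3)$ (a real skew-symmetric matrix, a tangent vector at the identity) and checks that the result lands in the Lie group SO(3).

```python
import numpy as np
from scipy.linalg import expm

# An element of the Lie algebra so(3): a real skew-symmetric matrix,
# i.e. a tangent vector at the identity of the rotation group.
X = np.array([[ 0.0, -1.2,  0.5],
              [ 1.2,  0.0, -0.7],
              [-0.5,  0.7,  0.0]])

# The exponential map sends it into the Lie group SO(3).
R = expm(X)

# Group-membership checks: R is orthogonal with determinant 1.
print(np.allclose(R.T @ R, np.eye(3)))    # True
print(np.isclose(np.linalg.det(R), 1.0))  # True

# The one-parameter subgroup t -> exp(tX) evolves out of the identity:
print(np.allclose(expm(0.0 * X), np.eye(3)))  # exp(0) is the identity
```

The same recipe works for the other matrix groups mentioned above; only the defining condition on the algebra element changes.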
Lie's three theorems In his work on transformation groups, Sophus Lie proved three theorems relating the groups and algebras that bear his name. The first theorem exhibited the basis of an algebra through infinitesimal transformations. The second theorem exhibited structure constants of the algebra as the result of commutator products in the algebra. The third theorem showed these constants are anti-symmetric and satisfy the Jacobi identity. As Robert Gilmore wrote: Lie's three theorems provide a mechanism for constructing the Lie algebra associated with any Lie group. They also characterize the properties of a Lie algebra. ¶ The converses of Lie’s three theorems do the opposite: they supply a mechanism for associating a Lie group with any finite dimensional Lie algebra ... Taylor's theorem allows for the construction of a canonical analytic structure function φ(β,α) from the Lie algebra. ¶ These seven theorems – the three theorems of Lie and their converses, and Taylor's theorem – provide an essential equivalence between Lie groups and algebras. Aspects of Lie theory Lie theory is frequently built upon a study of the classical linear algebraic groups. Special branches include Weyl groups, Coxeter groups, and buildings. The classical subject has been extended to Groups of Lie type. In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris. See also Baker–Campbell–Hausdorff formula Glossary of Lie groups and Lie algebras List of Lie groups topics Lie group integrator Notes and references John A. Coleman (1989) "The Greatest Mathematical Paper of All Time", The Mathematical Intelligencer 11(3): 29–38. Further reading M.A. Akivis & B.A. Rosenfeld (1993) Élie Cartan (1869–1951), translated from Russian original by V.V. Goldberg, chapter 2: Lie groups and Lie algebras, American Mathematical Society . P. M. Cohn (1957) Lie Groups, Cambridge Tracts in Mathematical Physics. J. L. Coolidge (1940) A History of Geometrical Methods, pp 304–17, Oxford University Press (Dover Publications 2003). Robert Gilmore (2008) Lie groups, physics, and geometry: an introduction for physicists, engineers and chemists, Cambridge University Press . F. Reese Harvey (1990) Spinors and calibrations, Academic Press, . . Heldermann Verlag Journal of Lie Theory Differential equations History of mathematics
Lie theory
[ "Mathematics" ]
1,211
[ "Lie groups", "Mathematical structures", "Mathematical objects", "Differential equations", "Equations", "Algebraic structures" ]
1,595,817
https://en.wikipedia.org/wiki/Energy%20demand%20management
Energy demand management, also known as demand-side management (DSM) or demand-side response (DSR), is the modification of consumer demand for energy through various methods such as financial incentives and behavioral change through education. Usually, the goal of demand-side management is to encourage the consumer to use less energy during peak hours, or to move the time of energy use to off-peak times such as nighttime and weekends. Peak demand management does not necessarily decrease total energy consumption, but could be expected to reduce the need for investments in networks and/or power plants for meeting peak demands. An example is the use of energy storage units to store energy during off-peak hours and discharge them during peak hours. A newer application for DSM is to aid grid operators in balancing variable generation from wind and solar units, particularly when the timing and magnitude of energy demand does not coincide with the renewable generation. Generators brought on line during peak demand periods are often fossil fuel units. Minimizing their use reduces emissions of carbon dioxide and other pollutants. The term DSM was coined in the wake of the 1973 and 1979 energy crises. Governments of many countries mandated performance of various programs for demand management. An early example is the National Energy Conservation Policy Act of 1978 in the U.S., preceded by similar actions in California and Wisconsin. Demand-side management was introduced publicly by the Electric Power Research Institute (EPRI) in the 1980s. Nowadays, DSM technologies have become increasingly feasible due to the integration of information and communications technology with the power system, reflected in new terms such as integrated demand-side management (IDSM) and smart grid. Operation The American electric power industry originally relied heavily on foreign energy imports, whether in the form of consumable electricity or fossil fuels that were then used to produce electricity. During the time of the energy crises in the 1970s, the federal government passed the Public Utility Regulatory Policies Act (PURPA), hoping to reduce dependence on foreign oil and to promote energy efficiency and alternative energy sources. This act forced utilities to obtain the cheapest possible power from independent power producers, which in turn promoted renewables and encouraged utilities to reduce the amount of power they need, hence pushing forward agendas for energy efficiency and demand management. Electricity use can vary dramatically on short and medium time frames, depending on current weather patterns. Generally the wholesale electricity system adjusts to changing demand by dispatching more or less generation. However, during peak periods, the additional generation is usually supplied by less efficient ("peaking") sources. Unfortunately, the instantaneous financial and environmental cost of using these "peaking" sources is not necessarily reflected in the retail pricing system. In addition, the ability or willingness of electricity consumers to adjust to price signals by altering demand (elasticity of demand) may be low, particularly over short time frames. In many markets, consumers (particularly retail customers) do not face real-time pricing at all, but pay rates based on average annual costs or other constructed prices. 
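A toy calculation illustrates the peak-shifting idea. In the minimal sketch below, all load figures and time-of-use prices are invented for illustration; a fixed amount of deferrable load is moved from the most expensive hours into the cheapest ones, lowering the peak and the daily bill while leaving total consumption unchanged.

```python
# Hourly demand (MW) and time-of-use prices ($/MWh) for one day (invented).
demand = [30, 28, 27, 27, 29, 35, 45, 55, 60, 58, 57, 56,
          55, 54, 56, 60, 68, 75, 72, 65, 55, 45, 38, 33]
price  = [20, 20, 20, 20, 20, 30, 40, 60, 60, 50, 50, 50,
          50, 50, 50, 60, 90, 90, 90, 60, 40, 30, 20, 20]

deferrable = 10.0  # MW per peak hour that consumers agree to shift

# Move deferrable load out of the three most expensive hours...
peak_hours  = sorted(range(24), key=lambda h: price[h])[-3:]
# ...and into the three cheapest hours (e.g. overnight storage heating).
cheap_hours = sorted(range(24), key=lambda h: price[h])[:3]

shifted = demand[:]
for h in peak_hours:
    shifted[h] -= deferrable
for h in cheap_hours:
    shifted[h] += deferrable

cost = lambda load: sum(l * p for l, p in zip(load, price))
print("peak before/after:", max(demand), max(shifted))
print("daily cost before/after:", cost(demand), cost(shifted))
# Total energy is unchanged; only its timing, and hence the peak, moves.
```

Real programs implement the same arithmetic through tariffs, contracts or direct control rather than a central computation, but the effect on the load curve is the same.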
Energy demand management activities attempt to bring the electricity demand and supply closer to a perceived optimum, and help give electricity end users benefits for reducing their demand. In the modern system, the integrated approach to demand-side management is becoming increasingly common. IDSM automatically sends signals to end-use systems to shed load depending on system conditions. This allows for very precise tuning of demand to ensure that it matches supply at all times and reduces capital expenditures for the utility. Critical system conditions could be peak times or, in areas with high levels of variable renewable energy, times when demand must be adjusted upward to avoid over-generation or downward to help with ramping needs. In general, adjustments to demand can occur in various ways: through responses to price signals, such as permanent differential rates for evening and day times or occasional highly priced usage days; behavioral changes achieved through home area networks; automated controls, such as remotely controlled air-conditioners; or permanent load adjustments with energy-efficient appliances. Logical foundations Demand for any commodity can be modified by actions of market players and government (regulation and taxation). Energy demand management implies actions that influence demand for energy. DSM was originally adopted in electricity, but today it is applied widely to utilities including water and gas as well. Reducing energy demand is contrary to what both energy suppliers and governments have been doing during most of modern industrial history. Whereas real prices of various energy forms have been decreasing during most of the industrial era, due to economies of scale and technology, the expectation for the future is the opposite. Previously, it was not unreasonable to promote energy use, as more copious and cheaper energy sources could be anticipated in the future, or the supplier had installed excess capacity that would be made more profitable by increased consumption. In centrally planned economies subsidizing energy was one of the main economic development tools. Subsidies to the energy supply industry are still common in some countries. Contrary to the historical situation, energy prices and availability are expected to deteriorate. Governments and other public actors, if not the energy suppliers themselves, are tending to employ energy demand measures that will increase the efficiency of energy consumption. Types Energy efficiency: Using less power to perform the same tasks. This involves a permanent reduction of demand by using more efficient load-intensive appliances such as water heaters, refrigerators, or washing machines. Demand response: Any reactive or preventative method to reduce, flatten or shift demand. Historically, demand response programs have focused on peak reduction to defer the high cost of constructing generation capacity. However, demand response programs are now also being looked to for help in changing the net load shape (load minus solar and wind generation) to assist with the integration of variable renewable energy. Demand response includes all intentional modifications to consumption patterns of electricity of end user customers that are intended to alter the timing, level of instantaneous demand, or the total electricity consumption. 
Demand response refers to a wide range of actions which can be taken at the customer side of the electricity meter in response to particular conditions within the electricity system (such as peak period network congestion or high prices), including the aforementioned IDSM. Dynamic demand: Advance or delay appliance operating cycles by a few seconds to increase the diversity factor of the set of loads. The concept is that by monitoring the frequency of the power grid, as well as their own control parameters, individual, intermittent loads would switch on or off at optimal moments to balance the overall system load with generation, reducing critical power mismatches. As this switching would only advance or delay the appliance operating cycle by a few seconds, it would be unnoticeable to the end user. In the United States, in 1982, a (now-lapsed) patent for this idea was issued to power systems engineer Fred Schweppe. This type of dynamic demand control is frequently used for air-conditioners. One example of this is through the SmartAC program in California. Distributed energy resources: Distributed generation, also distributed energy, on-site generation (OSG) or district/decentralized energy, is electrical generation and storage performed by a variety of small, grid-connected devices referred to as distributed energy resources (DER). Conventional power stations, such as coal-fired, gas and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular and more flexible technologies that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance they are referred to as hybrid power systems. DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role for the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system, and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables collection of energy from many sources and may lower environmental impacts and improve security of supply. Scale Broadly, demand side management can be classified into four categories: national scale, utility scale, community scale, and individual household scale. National scale Energy efficiency improvement is one of the most important demand side management strategies. Efficiency improvements can be implemented nationally through legislation and standards in housing, building, appliances, transport, machines, etc. Utility scale During peak demand times, utilities are able to control storage water heaters, pool pumps and air conditioners in large areas to reduce peak demand, e.g. in Australia and Switzerland. One of the common technologies is ripple control: a high-frequency signal (e.g. 1000 Hz) is superimposed on the normal electricity waveform (50 or 60 Hz) to switch devices on or off. In more service-based economies, such as Australia, electricity network peak demand often occurs in the late afternoon to early evening (4pm to 8pm). Residential and commercial demand is the most significant part of these types of peak demand. 
Therefore, it makes great sense for utilities (electricity network distributors) to manage residential storage water heaters, pool pumps, and air conditioners. Community scale Other names can be neighborhood, precinct, or district. Community central heating systems have existed for many decades in regions with cold winters. Similarly, peak demand in summer-peak regions needs to be managed, e.g. in Texas and Florida in the U.S., and Queensland and New South Wales in Australia. Demand side management can be implemented at the community scale to reduce peak demand for heating or cooling. Another aspect is achieving net zero-energy buildings or communities. Managing energy, peak demand and bills at the community level may be more feasible and viable because of collective purchasing power, bargaining power, more options in energy efficiency or storage, and more flexibility and diversity in generating and consuming energy at different times, e.g. using PV to offset daytime consumption or for energy storage. Household scale In areas of Australia, more than 30% of households (as of 2016) have rooftop photovoltaic systems, which lets them use free energy from the sun to reduce energy imports from the grid. Further, demand side management can be helpful when a systematic approach is taken to the operation of photovoltaics, air conditioners, battery energy storage systems and storage water heaters, together with building performance and energy efficiency measures. Examples Queensland, Australia The utility companies in the state of Queensland, Australia have devices fitted onto certain household appliances such as air conditioners, or into household meters, to control water heaters, pool pumps, etc. These devices would allow energy companies to remotely cycle the use of these items during peak hours. Their plan also includes improving the efficiency of energy-using items and giving financial incentives to consumers who use electricity during off-peak hours, when it is less expensive for energy companies to produce. Another example is that, with demand side management, Southeast Queensland households can use electricity from rooftop photovoltaic systems to heat water. Toronto, Canada In 2008, Toronto Hydro, the monopoly energy distributor of Ontario, had over 40,000 people signed up to have remote devices attached to air conditioners which energy companies use to offset spikes in demand. Spokeswoman Tanya Bruckmueller says that this program can reduce demand by 40 megawatts during emergency situations. Indiana, US The Alcoa Warrick Operation is participating in MISO as a qualified demand response resource, which means it is providing demand response in terms of energy, spinning reserve, and regulation service. Brazil Demand-side management can apply to an electricity system based on thermal power plants or to systems where renewable energy, such as hydroelectricity, is predominant but with complementary thermal generation, as for instance in Brazil. In Brazil's case, although the generation of hydroelectric power corresponds to more than 80% of the total, to achieve a practical balance in the generation system, the energy generated by hydroelectric plants supplies the consumption below the peak demand. Peak generation is supplied by the use of fossil-fuel power plants. In 2008, Brazilian consumers paid more than U$1 billion for complementary thermoelectric generation not previously programmed. In Brazil, the consumer pays for all the investment to provide energy, even if a plant sits idle. 
For most fossil-fuel thermal plants, the consumers pay for the "fuels" and other operation costs only when these plants generate energy. The energy, per unit generated, is more expensive from thermal plants than from hydroelectric. Only a few of Brazil's thermoelectric plants use natural gas, so they pollute significantly more than hydroelectric plants. The power generated to meet the peak demand has higher costs—both investment and operating costs—and the pollution has a significant environmental cost and, potentially, a financial and social liability for its use. Thus, the expansion and the operation of the current system is not as efficient as it could be using demand side management. The consequence of this inefficiency is an increase in energy tariffs that is passed on to the consumers. Moreover, because electric energy is generated and consumed almost instantaneously, all the facilities, such as transmission lines and distribution networks, are built for peak consumption. During the non-peak periods their full capacity is not utilized. The reduction of peak consumption can benefit the efficiency of electric systems, like the Brazilian system, in various ways, such as deferring new investments in distribution and transmission networks, and reducing the need for complementary thermal power operation during peak periods, which can diminish both the payment for investment in new power plants that supply only the peak period and the environmental impact associated with greenhouse gas emissions. Issues Some people argue that demand-side management has been ineffective because it has often resulted in higher utility costs for consumers and less profit for utilities. One of the main goals of demand side management is to be able to charge the consumer based on the true price of the utilities at that time. If consumers could be charged less for using electricity during off-peak hours, and more during peak hours, then supply and demand would theoretically encourage the consumer to use less electricity during peak hours, thus achieving the main goal of demand side management. See also Alternative fuel Battery-to-grid Dynamic demand (electric power) Demand response Duck curve Energy conservation Energy intensity Energy storage as a service (ESaaS) Grid energy storage GridLAB-D List of energy storage projects Load profile Load management Time of Use Notes References Works cited External links Demand-Side Management Programme IEA Energy subsidies in the European Union: A brief overview Managing Energy Demand seminar Bern, Nov 4, 2009 UK Demand Side Response Market failure Electric power distribution Energy economics Demand management
Energy demand management
[ "Environmental_science" ]
3,002
[ "Energy economics", "Environmental social science" ]
1,596,497
https://en.wikipedia.org/wiki/Battle%20of%20the%20sexes%20%28game%20theory%29
In game theory, the battle of the sexes is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by R. Duncan Luce and Howard Raiffa in their classic book, Games and Decisions. Some authors prefer to avoid assigning sexes to the players and instead use Players 1 and 2, and some refer to the game as Bach or Stravinsky, using two concerts as the two events. The game description here follows Luce and Raiffa's original story. Imagine that a man and a woman hope to meet this evening, but have a choice between two events to attend: a prize fight and a ballet. The man would prefer to go to the prize fight. The woman would prefer the ballet. Both would prefer to go to the same event rather than different ones. If they cannot communicate, where should they go? The payoff matrix labeled "Battle of the Sexes (1)" shows the payoffs when the man chooses a row and the woman chooses a column. In each cell, the first number represents the man's payoff and the second number the woman's. This standard representation does not account for the additional harm that might come from not only going to different locations, but going to the wrong one as well (e.g. the man goes to the ballet while the woman goes to the prize fight, satisfying neither). To account for this, the game would be represented in "Battle of the Sexes (2)", where in the top right box, the players each have a payoff of 1 because they at least get to attend their favored events. Equilibrium analysis This game has two pure strategy Nash equilibria, one where both players go to the prize fight, and another where both go to the ballet. There is also a mixed strategy Nash equilibrium, in which the players randomize using specific probabilities. For the payoffs listed in Battle of the Sexes (1), in the mixed strategy equilibrium the man goes to the prize fight with probability 3/5 and the woman to the ballet with probability 3/5, so they end up together at the prize fight with probability 6/25 = (3/5)(2/5) and together at the ballet with probability 6/25 = (2/5)(3/5). Because a pure strategy is a degenerate case of a mixed strategy, the two pure strategy Nash equilibria are also part of the set of mixed strategy Nash equilibria. As a result, there are a total of three mixed strategy Nash equilibria in the Battle of the Sexes. This presents an interesting case for game theory since each of the Nash equilibria is deficient in some way. The two pure strategy Nash equilibria are unfair; one player consistently does better than the other. The mixed strategy Nash equilibrium is inefficient: the players will miscoordinate with probability 13/25, leaving each player with an expected return of 6/5 (less than the payoff of 2 from each's less favored pure strategy equilibrium). It remains unclear how expectations would form that would result in a particular equilibrium being played out. One possible resolution of the difficulty involves the use of a correlated equilibrium. In its simplest form, if the players of the game have access to a commonly observed randomizing device, then they might decide to correlate their strategies in the game based on the outcome of the device. For example, if the players could flip a coin before choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing ballet in the event of heads and prize fight in the event of tails. 
Notice that once the results of the coin flip are revealed neither player has any incentive to alter their proposed actions if they believe the other will not. The result is that perfect coordination is always achieved and, prior to the coin flip, the expected payoffs for the players are exactly equal. It remains true, however, that even if there is a correlating device, the Nash equilibria in which the players ignore it will remain; correlated equilibria require both the existence of a correlating device and the expectation that both players will use it to make their decision. Notes References Fudenberg, D. and Tirole, J. (1991) Game theory, MIT Press. (see Chapter 1, section 2.4) External links GameTheory.net Cooperative Solution with Nash Function by Elmer G. Wiens Non-cooperative games
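The equilibrium arithmetic above is easy to verify directly. A minimal sketch, assuming the conventional payoffs that reproduce the numbers quoted above: (3, 2) when both attend the prize fight, (2, 3) when both attend the ballet, and (0, 0) otherwise.

```python
from fractions import Fraction

# Payoffs (man, woman); the man picks the row, the woman the column.
# 0 = prize fight, 1 = ballet. Conventional values assumed.
payoff = {(0, 0): (3, 2), (0, 1): (0, 0),
          (1, 0): (0, 0), (1, 1): (2, 3)}

p = Fraction(3, 5)  # man attends the prize fight with probability 3/5
q = Fraction(3, 5)  # woman attends the ballet with probability 3/5

def expected(player):
    """Expected payoff for one player under the mixed strategies."""
    total = Fraction(0)
    for m in (0, 1):
        for w in (0, 1):
            pr = (p if m == 0 else 1 - p) * (q if w == 1 else 1 - q)
            total += pr * payoff[(m, w)][player]
    return total

print(expected(0), expected(1))   # 6/5 and 6/5, the inefficient mixed payoff
print(p * q + (1 - p) * (1 - q))  # 13/25, the miscoordination probability

# Correlated play on a fair coin: heads -> both ballet, tails -> both fight.
print(Fraction(1, 2) * 3 + Fraction(1, 2) * 2)  # 5/2 for each player
```

The correlated coin-flip strategy gives each player an expected 5/2, strictly better than the 6/5 of the mixed equilibrium and exactly halfway between the two pure equilibria.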
Battle of the sexes (game theory)
[ "Mathematics" ]
933
[ "Game theory", "Non-cooperative games" ]
1,596,638
https://en.wikipedia.org/wiki/Proof%20procedure
In logic, and in particular proof theory, a proof procedure for a given logic is a systematic method for producing proofs in some proof calculus of (provable) statements. Types of proof calculi used There are several types of proof calculi. The most popular are natural deduction, sequent calculi (i.e., Gentzen-type systems), Hilbert systems, and semantic tableaux or trees. A given proof procedure will target a specific proof calculus, but can often be reformulated so as to produce proofs in other proof styles. Completeness A proof procedure for a logic is complete if it produces a proof for each provable statement. The theorems of logical systems are typically recursively enumerable, which implies the existence of a complete but usually extremely inefficient proof procedure; however, a proof procedure is only of interest if it is reasonably efficient. Faced with an unprovable statement, a complete proof procedure may sometimes succeed in detecting and signalling its unprovability. In the general case, where provability is only a semidecidable property, this is not possible, and instead the procedure will diverge (not terminate). See also Automated theorem proving Proof complexity Deductive system References Willard Quine 1982 (1950). Methods of Logic. Harvard Univ. Press. Proof theory
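The completeness/inefficiency trade-off is easy to exhibit with a toy formal system standing in for a proof calculus. A minimal sketch, using Hofstadter's MIU string-rewriting system (chosen purely for illustration; it is not a logic): breadth-first enumeration of derivations will eventually reach every derivable string, but for an underivable string such as MU the search simply never answers, which models semidecidability. A step budget is added so the demonstration terminates.

```python
from collections import deque

def successors(s):
    """All strings derivable from s by one rule of the MIU system."""
    out = set()
    if s.endswith("I"):                 # rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):               # rule 2: Mx  -> Mxx
        out.add(s + s[1:])
    for i in range(len(s) - 2):         # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):         # rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def prove(target, axiom="MI", budget=100_000):
    """Complete but blindly inefficient proof search by enumeration."""
    seen, queue = {axiom}, deque([axiom])
    while queue and len(seen) < budget:
        s = queue.popleft()
        if s == target:
            return True                 # a derivation exists
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None                         # budget exhausted: no verdict

print(prove("MUI"))  # True: MI -> MII -> MIIII -> MUI
print(prove("MU"))   # None: in fact underivable, and the search cannot tell
```

Without the budget the second call would diverge, which is exactly the behavior described above for unprovable statements when provability is only semidecidable.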
Proof procedure
[ "Mathematics" ]
279
[ "Mathematical logic", "Proof theory" ]
1,598,759
https://en.wikipedia.org/wiki/Alternating%20series%20test
In mathematical analysis, the alternating series test is the method used to show that an alternating series is convergent when its terms (1) decrease in absolute value, and (2) approach zero in the limit. The test was used by Gottfried Leibniz and is sometimes known as Leibniz's test, Leibniz's rule, or the Leibniz criterion. The test is only sufficient, not necessary, so some convergent alternating series may fail the first part of the test. For a generalization, see Dirichlet's test. Formal statement Alternating series test A series of the form $\sum_{n=0}^{\infty} (-1)^{n} a_n = a_0 - a_1 + a_2 - a_3 + \cdots$, where either all $a_n$ are positive or all $a_n$ are negative, is called an alternating series. The alternating series test guarantees that an alternating series converges if the following two conditions are met: $|a_n|$ decreases monotonically, i.e., $|a_{n+1}| \le |a_n|$, and $\lim_{n \to \infty} a_n = 0$. Alternating series estimation theorem Moreover, let L denote the sum of the series; then the partial sum $S_k = \sum_{n=0}^{k} (-1)^{n} a_n$ approximates L with error bounded by the next omitted term: $\left| S_k - L \right| \le \left| S_k - S_{k+1} \right| = \left| a_{k+1} \right|.$ Proof Suppose we are given a series of the form $\sum_{n=1}^{\infty} (-1)^{n-1} a_n$, where $\lim_{n \to \infty} a_n = 0$ and $a_n \ge a_{n+1} > 0$ for all natural numbers n. (The case of all-negative terms follows by taking the negative.) Proof of the alternating series test We will prove that both the partial sums $S_{2m+1}$ with an odd number of terms, and $S_{2m}$ with an even number of terms, converge to the same number L. Thus the usual partial sum $S_k$ also converges to L. The odd partial sums decrease monotonically: $S_{2(m+1)+1} = S_{2m+1} - a_{2m+2} + a_{2m+3} \le S_{2m+1},$ while the even partial sums increase monotonically: $S_{2(m+1)} = S_{2m} + a_{2m+1} - a_{2m+2} \ge S_{2m},$ both because $a_n$ decreases monotonically with n. Moreover, since the $a_n$ are positive, $S_{2m+1} - S_{2m} = a_{2m+1} > 0$. Thus we can collect these facts to form the following suggestive inequality: $a_1 - a_2 = S_2 \le S_{2m} \le S_{2m+1} \le S_1 = a_1.$ Now, note that $a_1 - a_2$ is a lower bound of the monotonically decreasing sequence $S_{2m+1}$; the monotone convergence theorem then implies that this sequence converges as m approaches infinity. Similarly, the sequence of even partial sums converges too. Finally, they must converge to the same number because $\lim_{m \to \infty} \left( S_{2m+1} - S_{2m} \right) = \lim_{m \to \infty} a_{2m+1} = 0.$ Call the limit L; then the monotone convergence theorem also tells us the extra information that $S_{2m} \le L \le S_{2m+1}$ for any m. This means the partial sums of an alternating series also "alternate" above and below the final limit. More precisely, when there is an odd (even) number of terms, i.e. the last term is a plus (minus) term, then the partial sum is above (below) the final limit. This understanding leads immediately to an error bound of partial sums, shown below. Proof of the alternating series estimation theorem We would like to show $\left| S_k - L \right| \le a_{k+1}$ by splitting into two cases. When k = 2m+1, i.e. odd, then $\left| S_{2m+1} - L \right| = S_{2m+1} - L \le S_{2m+1} - S_{2m+2} = a_{2m+2}.$ When k = 2m, i.e. even, then $\left| S_{2m} - L \right| = L - S_{2m} \le S_{2m+1} - S_{2m} = a_{2m+1},$ as desired. Both cases rely essentially on the last inequality derived in the previous proof. Examples A typical example The alternating harmonic series $\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$ meets both conditions for the alternating series test and converges. An example to show monotonicity is needed All of the conditions in the test, namely convergence to zero and monotonicity, should be met in order for the conclusion to be true. For example, take a series such as $\frac{3}{1} - \frac{1}{1} + \frac{3}{2} - \frac{1}{2} + \frac{3}{3} - \frac{1}{3} + \cdots.$ The signs are alternating and the terms tend to zero. However, monotonicity is not present and we cannot apply the test. Actually, the series is divergent. Indeed, for the partial sum $S_{2m}$ we have $S_{2m} = \sum_{k=1}^{m} \left( \frac{3}{k} - \frac{1}{k} \right) = 2 \sum_{k=1}^{m} \frac{1}{k},$ which is twice the partial sum of the harmonic series, which is divergent. Hence the original series is divergent. The test is only sufficient, not necessary Leibniz test's monotonicity is not a necessary condition, so the test itself is only sufficient, but not necessary. (The second part of the test is a well-known necessary condition of convergence for all series.) 
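Returning to the typical example, the estimation theorem is easy to check numerically for the alternating harmonic series, whose sum is ln 2. A minimal sketch:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - ..., which converges to ln 2.
# The estimation theorem bounds the error of S_k by the first
# omitted term, 1/(k+1).
L = math.log(2)
s = 0.0
for k in range(1, 12):
    s += (-1) ** (k - 1) / k
    error = abs(s - L)
    bound = 1.0 / (k + 1)
    print(f"k={k:2d}  S_k={s:+.6f}  error={error:.6f}  bound={bound:.6f}",
          error <= bound)
```

Every line prints True, and the partial sums visibly alternate above and below ln 2, exactly as the proof shows.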
Nonmonotonic series that converge do exist; in fact, for every monotonic series it is possible to obtain an infinite number of nonmonotonic series that converge to the same sum by permuting its terms with permutations satisfying the condition in Agnew's theorem. See also Alternating series Dirichlet's test Notes References Konrad Knopp (1956) Infinite Sequences and Series, § 3.4, Dover Publications Konrad Knopp (1990) Theory and Application of Infinite Series, § 15, Dover Publications James Stewart, Daniel Clegg, Saleem Watson (2016) Single Variable Calculus: Early Transcendentals (Instructor's Edition) 9E, Cengage ISBN 978-0-357-02228-9 E. T. Whittaker & G. N. Watson (1963) A Course in Modern Analysis, 4th edition, §2.3, Cambridge University Press External links Jeff Cruzan. "Alternating series" Convergence tests Gottfried Wilhelm Leibniz
Alternating series test
[ "Mathematics" ]
951
[ "Theorems in mathematical analysis", "Convergence tests" ]
1,600,053
https://en.wikipedia.org/wiki/Self-replicating%20machine
A self-replicating machine is a type of autonomous robot that is capable of reproducing itself using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines) and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines, which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann's Self-Reproducing Automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell. A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. The concept, first proposed by von Neumann no later than the 1940s, has attracted a range of different approaches involving various types of technology. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific and also refers to a completely unrelated computer architecture that von Neumann proposed, and so its use is discouraged where accuracy is important. Von Neumann used the term universal constructor to describe such self-replicating machines. Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". Such discussions refer to collections of machine tools, and such collections have an ability to reproduce their own parts which is finite and low for one machine, and ascends to nearly 100% with collections of only about a dozen similarly made, but uniquely functioning machines, establishing what authors Freitas and Merkle refer to as matter or material closure. 
Energy closure is the next most difficult dimension to close, and control is the most difficult, noting that there are no other dimensions to the problem. In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here, and would have closure in each of the three dimensions. History The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction". In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made — notably by Lionel Penrose. Von Neumann's kinematic model A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that instructed it to retrieve parts from this "sea" using a manipulator, assemble them into a copy of itself, and then transfer the contents of its memory tape into the new duplicate. The machine was envisioned as consisting of as few as eight different types of components: four logic elements for sending and receiving stimuli and four mechanical elements for providing structural support and mobility. Although the model was qualitatively sound, von Neumann was evidently dissatisfied with it due to the difficulty of analyzing it with mathematical precision. He went on to instead develop an even more abstract self-replicator model based on cellular automata. 
His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American. Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically akin to biological organisms under natural selection. He asked what is the threshold of complexity that must be crossed for machines to be able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. Notably, his design implies that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell. Moore's artificial living plants In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw its energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines. Dyson's replicating systems The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken." Advanced Automation for Space Missions In 1980, inspired by a 1979 "New Directions Workshop" held at Wood's Hole, NASA conducted a joint summer study with ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories to develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing productive capacity and the design could be modified to build self-replicating probes to explore the galaxy. 
The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by its parent. Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy. A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included. A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins." A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining. Much of the design study was concerned with a simple, flexible chemical system for processing the ores, and the differences between the ratio of elements needed by the replicator, and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, needed to process regolith for aluminium. Chlorine is very rare in lunar regolith. Lackner-Wendt Auxon replicators In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal didn't include a complete analysis of the system's material requirements, but described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O2 and H2) using a high-temperature carbothermic process. This proposal was popularized in Discover magazine, featuring solar-powered desalination equipment used to irrigate the desert in which the system was based. They named their machines "Auxons", from the Greek word auxein which means "to grow". Recent work NIAC studies on self-replicating systems In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003. 
Four phase I grants were awarded: Hod Lipson (Cornell University), "Autonomous Self-Extending Machines for Accelerating Space Exploration"; Gregory Chirikjian (Johns Hopkins University), "Architecture for Unmanned Self-Replicating Lunar Factories"; Paul Todd (Space Hardware Optimization Technology Inc.), "Robotic Lunar Ecopoiesis"; and Tihamer Toth-Fejel (General Dynamics), "Modeling Kinematic Cellular Automata: An Approach to Self-Replication". The General Dynamics study concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata. Bootstrapping self-replicating factories in space In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to start self-replicating factories in space. They developed this concept on the basis of In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient and then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief Technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that you do not need to launch it from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that fully self-replicating industry can be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration. New York University artificial DNA tile motifs In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information. Self-replication of magnetic polymers In 2001, Jarle Breivik at the University of Oslo created a system of magnetic building blocks which, in response to temperature fluctuations, spontaneously form self-replicating polymers. Self-replication of neural circuits In 1968, Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977, Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding probabilities to it, Bertrand du Castel published in 2015 a self-replicating stochastic grammar and presented a mapping of that grammar to neural networks, thereby presenting a model for a self-replicating neural circuit. Harvard Wyss Institute On November 29, 2021, a team at the Harvard Wyss Institute built the first living robots that can reproduce. 
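A minimal programmatic analogue of these self-replicating grammars is a quine, a program whose output is exactly its own source (the concept also appears in the see-also list below). The two-line Python sketch that follows is an illustrative addition rather than anything from the cited work; comments are omitted because any comment would also have to be reproduced in the output for the property to hold.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The program carries a description of itself (the string s) together with a rule for expanding that description, loosely mirroring von Neumann's insight that a self-replicator needs both inherited information and machinery that copies and interprets it.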
Self-replicating spacecraft The idea of an automated spacecraft capable of constructing copies of itself was first proposed in the scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen or the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each. Prospects for implementation As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery comes close to that of human labour and the local manufacture of spare parts and other components becomes more economical than transporting them from elsewhere. As Samuel Butler pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal of all legislative consideration of regulation of such development, future development efforts may be limited to systems which lack either control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space). An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future. In fiction Authors who have used self-replicating machines in works of fiction include Philip K. Dick, Arthur C. Clarke, Karel Čapek (R.U.R.: Rossum's Universal Robots, 1920), John Sladek (The Reproductive System), Samuel Butler (Erewhon), Dennis E. Taylor and E. M. Forster (The Machine Stops, 1909). Other sources A number of patents have been granted for self-replicating machine concepts: "Self reproducing fundamental fabricating machines (F-Units)", inventor Charles M. Collins (Burke, Va.), August 1997; "Self reproducing fundamental fabricating machine system", inventor Charles M. Collins (Burke, Va.), June 1998, together with Collins' PCT patent WO 96/20453; and "Method and system for self-replicating manufacturing stations", inventors Ralph C. Merkle (Sunnyvale, Calif.), Eric G. Parker (Wylie, Tex.) and George D. Skidmore (Plano, Tex.), January 2003. Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation. In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. 
Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years. In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors) in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space. See also Autopoiesis Grey goo scenario Self-reconfiguring modular robot AI takeover 3D printing Computer virus Computer worm Ecophagy Existential risk from advanced artificial intelligence Astrochicken Lights out (manufacturing) Nanorobotics Spiegelman's Monster Self-replicating spacecraft RepRap project Self-reconfiguring and self-reproducing molecube robots Quine References Further reading M. Sipper, Fifty years of research on self-replication: An overview, Artificial Life, vol. 4, no. 3, pp. 237–257, Summer 1998. Freeman Dyson expanded upon von Neumann's automata theories, and advanced a biotechnology-inspired theory. See Astrochicken. The first technical design study of a self-replicating interstellar probe was published in a 1980 paper by Robert Freitas. Clanking replicators are also mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation. Article about a proposed clanking replicator system to be used for developing Earthly deserts in the October 1995 Discover Magazine, featuring forests of solar panels that powered desalination equipment to irrigate the land. In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego(tm) robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years. In 1998, Chris Phoenix suggested a general idea for a macroscale replicator on the sci.nanotech newsgroup, operating in a pool of ultraviolet-cured liquid plastic, selectively solidifying the plastic to form solid parts. Computation could be done by fluidic logic. Power for the process could be supplied by a pressurized source of the liquid. In 2001, Peter Ward mentioned an escaped clanking replicator destroying the human race in his book Future Evolution. In 2004, General Dynamics completed a study for NASA's Institute for Advanced Concepts. It concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata. In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. In 2005, Adrian Bowyer of the University of Bath started the RepRap project to develop a rapid prototyping machine that would be able to replicate itself, making such machines cheap enough for people to buy and use in their homes. The project is releasing material under the GNU GPL. In 2015, advances in graphene and silicene suggested that they could form the basis for a neural network with densities comparable to the human brain if integrated with silicon carbide-based nanoscale CPUs containing memristors. 
The power source might be solar or possibly radioisotope-based, given that new liquid-based compounds can generate substantial power from radioactive decay. Artificial life Robotics concepts Self-organization Reproduction Machines Thought experiments
Self-replicating machine
[ "Physics", "Mathematics", "Technology", "Engineering", "Biology" ]
4,930
[ "Self-organization", "Machines", "Behavior", "Self-replicating machines", "Reproduction", "Biological interactions", "Self-replication", "Physical systems", "Mechanical engineering", "Dynamical systems" ]
16,267,934
https://en.wikipedia.org/wiki/Actinorhizal%20plant
Actinorhizal plants are a group of angiosperms characterized by their ability to form a symbiosis with Frankia, a nitrogen-fixing genus of Actinomycetota. This association leads to the formation of nitrogen-fixing root nodules. Actinorhizal plants are distributed within three clades and are characterized by nitrogen fixation. They are distributed globally, and are pioneer species in nitrogen-poor environments. Their symbiotic relationships with Frankia evolved independently over time, and the symbiosis occurs in the root nodule infection site. Classification Actinorhizal plants are dicotyledons distributed within 3 orders, 8 families and 26 genera of the angiosperm clade. All nitrogen-fixing plants are classified under the "Nitrogen-Fixing Clade", which consists of the three actinorhizal plant orders as well as the order Fabales. The most well-known nitrogen-fixing plants are the legumes, but they are not classified as actinorhizal plants. The actinorhizal species are either trees or shrubs, except for those in the genus Datisca, which are herbs. Actinorhizal plants common in temperate regions include alder, bayberry, sweetfern, avens, mountain misery and coriaria. Some species in the family Elaeagnaceae, such as the sea-buckthorns, produce edible fruit. What characterizes an actinorhizal plant is the symbiotic relationship it forms with the bacterium Frankia, which infects the roots of the plant. This relationship is responsible for the nitrogen-fixing qualities of the plants, and is what makes them important to nitrogen-poor environments. Distribution and ecology Actinorhizal plants are found on all continents except Antarctica. Their ability to form nitrogen-fixing nodules confers a selective advantage in poor soils; actinorhizal plants are therefore pioneer species in habitats where available nitrogen is scarce, such as moraines, volcanic flows or sand dunes. Being among the first species to colonize these disturbed environments, actinorhizal shrubs and trees play a critical role, enriching the soil and enabling the establishment of other species in an ecological succession. Actinorhizal plants like alders are also common in riparian forests. They are also major contributors to nitrogen fixation in broad areas of the world, and are particularly important in temperate forests. The nitrogen fixation rates measured for some alder species are as high as 300 kg of N2/ha/year, close to the highest rate reported in legumes. Evolutionary origin No fossil record of nodules is available, but fossil pollen of plants similar to modern actinorhizal species has been found in sediments deposited 87 million years ago. The origin of the symbiotic association remains uncertain. The ability to associate with Frankia is a polyphyletic character and has probably evolved independently in different clades. Nevertheless, actinorhizal plants and legumes, the two major nitrogen-fixing groups of plants, share a relatively close ancestor, as they are all part of a clade within the rosids often called the nitrogen-fixing clade. This ancestor may have developed a "predisposition" to enter into symbiosis with nitrogen-fixing bacteria, and this led to the independent acquisition of symbiotic abilities by ancestors of the actinorhizal and legume species. The genetic program used to establish the symbiosis has probably recruited elements of the arbuscular mycorrhizal symbiosis, a much older and more widely distributed symbiotic association between plants and fungi. 
The symbiotic nodules As in legumes, nodulation is favored by nitrogen deprivation and is inhibited by high nitrogen concentrations. Depending on the plant species, two mechanisms of infection have been described: The first is observed in casuarinas or alders and is called root hair infection. In this case the infection begins with the intracellular penetration of a root hair by a Frankia hypha, and is followed by the formation of a primitive symbiotic organ known as a prenodule. The second mechanism of infection is called intercellular entry and is well described in Discaria species. In this case bacteria penetrate the root extracellularly, growing between epidermal cells and then between cortical cells. Later on, Frankia becomes intracellular, but no prenodule is formed. In both cases the infection leads to cell divisions in the pericycle and the formation of a new organ consisting of several lobes anatomically similar to a lateral root. Cortical cells of the nodule are invaded by Frankia filaments coming from the site of infection/the prenodule. Actinorhizal nodules generally have indeterminate growth; new cells are therefore continually produced at the apex and successively become infected. Mature cells of the nodule are filled with bacterial filaments that actively fix nitrogen. No equivalent of the rhizobial nod factors has been found, but several genes known to participate in the formation and functioning of legume nodules (coding for haemoglobin and other nodulins) are also found in actinorhizal plants, where they are thought to play similar roles. The lack of genetic tools in Frankia and in actinorhizal species was the main factor explaining the poor understanding of this symbiosis, but the recent sequencing of three Frankia genomes and the development of RNAi and genomic tools in actinorhizal species should lead to a far better understanding in the coming years. Notes References External links Frankia and Actinorhizal plant Website Biogeochemical cycle Cycle Nitrogen cycle Soil biology Symbiosis
Actinorhizal plant
[ "Chemistry", "Biology" ]
1,194
[ "Behavior", "Symbiosis", "Biological interactions", "Biogeochemical cycle", "Nitrogen cycle", "Biogeochemistry", "Soil biology", "Metabolism" ]
16,269,602
https://en.wikipedia.org/wiki/Abstract%20additive%20Schwarz%20method
In mathematics, the abstract additive Schwarz method, named after Hermann Schwarz, is an abstract version of the additive Schwarz method for boundary value problems on partial differential equations, formulated only in terms of linear algebra without reference to domains, subdomains, etc. Many, if not all, domain decomposition methods can be cast as an abstract additive Schwarz method, which is often the first and most convenient approach to their analysis. References Domain decomposition methods
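As an illustrative sketch of the abstract formulation: let A be a symmetric positive definite system matrix and let each R_i be a restriction operator selecting an (overlapping) subset of the unknowns; the one-level abstract additive Schwarz preconditioner is then M⁻¹ = Σ_i Rᵢᵀ (Rᵢ A Rᵢᵀ)⁻¹ Rᵢ. The NumPy sketch below assembles this operator densely for a small model problem; the matrix size and the two overlapping index sets are assumptions chosen for illustration, not anything prescribed by the method.

```python
import numpy as np

# Small SPD model problem: 1-D Laplacian (tridiagonal matrix).
n = 12
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Two overlapping index sets playing the role of "subdomains".
subsets = [np.arange(0, 7), np.arange(5, 12)]

M_inv = np.zeros((n, n))
for idx in subsets:
    # Restriction operator R_i: picks out the unknowns in this subset.
    R = np.zeros((len(idx), n))
    R[np.arange(len(idx)), idx] = 1.0
    A_i = R @ A @ R.T                       # local problem R_i A R_i^T
    M_inv += R.T @ np.linalg.inv(A_i) @ R   # prolong the local solve back

print("cond(A)      =", np.linalg.cond(A))
print("cond(M^-1 A) =", np.linalg.cond(M_inv @ A))
```

In practice the preconditioner is never formed explicitly; it is applied inside a Krylov iteration such as conjugate gradients, and the independent local solves are what make the approach attractive for parallel computation.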
Abstract additive Schwarz method
[ "Mathematics" ]
85
[ "Applied mathematics", "Applied mathematics stubs" ]
16,275,208
https://en.wikipedia.org/wiki/Atmospheric%20window
An atmospheric window is a region of the electromagnetic spectrum that can pass through the atmosphere of Earth. The optical, infrared and radio windows comprise the three main atmospheric windows. The windows provide direct channels for Earth's surface to receive electromagnetic energy from the Sun, and for thermal radiation from the surface to leave to space. Atmospheric windows are useful for astronomy, remote sensing, telecommunications and other science and technology applications. In the study of the greenhouse effect, the term atmospheric window may be limited to mean the infrared window, which is the primary escape route for a fraction of the thermal radiation emitted near the surface. In other fields of science and technology, such as radio astronomy and remote sensing, the term is used as a hypernym, covering the whole electromagnetic spectrum as in the present article. Role in Earth's energy budget Atmospheric windows, especially the optical and infrared, affect the distribution of energy flows and temperatures within Earth's energy balance. The windows are themselves dependent upon clouds, water vapor, trace greenhouse gases, and other components of the atmosphere. Out of an average 340 watts per square meter (W/m2) of solar irradiance at the top of the atmosphere, about 200 W/m2 reaches the surface via windows, mostly the optical and infrared. Also, out of about 340 W/m2 of reflected shortwave (105 W/m2) plus outgoing longwave radiation (235 W/m2), 80-100 W/m2 exits to space through the infrared window depending on cloudiness. About 40 W/m2 of this transmitted amount is emitted by the surface, while most of the remainder comes from lower regions of the atmosphere. In a complementary manner, the infrared window also transmits to the surface a portion of down-welling thermal radiation that is emitted within colder upper regions of the atmosphere. The "window" concept is useful to provide qualitative insight into some important features of atmospheric radiation transport. Full characterization of the absorption, emission, and scattering coefficients of the atmospheric medium is needed in order to perform a rigorous quantitative analysis (typically done with atmospheric radiative transfer codes). Application of the Beer-Lambert Law may yield sufficient quantitative estimates for wavelengths where the atmosphere is optically thin. Window properties are mostly encoded within the absorption profile. Other applications In astronomy Up until the 1940s, astronomers used optical telescopes to observe distant astronomical objects whose radiation reached the earth through the optical window. After that time, the development of radio telescopes gave rise to the more successful field of radio astronomy that is based on the analysis of observations made through the radio window. In telecommunications Communications satellites greatly depend on the atmospheric windows for the transmission and reception of signals: the satellite-ground links are established at frequencies that fall within the spectral bandwidth of atmospheric windows. Shortwave radio does the opposite, using frequencies that produce skywaves rather than those that escape through the radio windows. In remote sensing Both active (signal emitted by satellite or aircraft, reflection detected by sensor) and passive (reflection of sunlight detected by the sensor) remote sensing techniques work with wavelength ranges contained in the atmospheric windows. 
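As a minimal numerical illustration of the Beer-Lambert estimate mentioned above: for a path of optical depth τ the transmitted fraction is T = exp(−τ). The τ values in the sketch below are assumed placeholders rather than measured atmospheric data, so this is a toy calculation, not a radiative transfer computation.

```python
import math

# Beer-Lambert transmittance T = exp(-tau) for a few illustrative
# optical depths; the tau values are invented placeholders.
for tau in (0.01, 0.1, 1.0, 3.0):
    T = math.exp(-tau)
    print(f"optical depth {tau:4.2f} -> transmittance {T:.3f}")
```

A spectral window corresponds to wavelengths where τ is small (T close to 1); outside a window τ is large and essentially no radiation is transmitted directly.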
See also Optical window Infrared window Radio window Water window, for soft x-rays References Electromagnetic spectrum Atmosphere of Earth
Atmospheric window
[ "Physics" ]
643
[ "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
16,276,631
https://en.wikipedia.org/wiki/AM1%2A
AM1* is a semiempirical molecular orbital technique in computational chemistry. The method was developed by Timothy Clark and co-workers (at the Computer-Chemie-Centrum, Universität Erlangen-Nürnberg) and first published in 2003. AM1* is an extension of AM1 molecular orbital theory and uses AM1 parameters and theory unchanged for the elements H, C, N, O and F. Other elements, however, have been parameterized using an additional set of d-orbitals in the basis set and with two-center core–core parameters, rather than the Gaussian functions used to modify the core–core potential in AM1. Additionally, for transition metal–hydrogen interactions, a distance-dependent term is used to calculate core–core potentials rather than the constant term. AM1* parameters are now available for H, C, N, O, F, Al, Si, P, S, Cl, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Br, Zr, Mo, Pd, Ag, I and Au. AM1* is implemented in VAMP 10.0 and Materials Studio (Accelrys Software Inc.). References Semiempirical quantum chemistry methods
AM1*
[ "Chemistry" ]
263
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Computational chemistry", "Physical chemistry stubs", "Semiempirical quantum chemistry methods" ]
16,276,793
https://en.wikipedia.org/wiki/Cell-free%20protein%20array
Cell-free protein array technology produces protein microarrays by performing in vitro synthesis of the target proteins from their DNA templates. This method of synthesizing protein microarrays overcomes the many obstacles and challenges faced by traditional methods of protein array production that have prevented widespread adoption of protein microarrays in proteomics. Protein arrays made from this technology can be used for testing protein–protein interactions, as well as protein interactions with other cellular molecules such as DNA and lipids. Other applications include enzymatic inhibition assays and screenings of antibody specificity. Overview and background The runaway success of DNA microarrays has generated much enthusiasm for protein microarrays. However, protein microarrays have not quite taken off as expected, even with the necessary tools and know-how from DNA microarrays being in place and ready for adaptation. One major reason is that protein microarrays are much more laborious and technically challenging to construct than DNA microarrays. The traditional methods of producing protein arrays require the separate in vivo expression of hundreds or thousands of proteins, followed by separate purification and immobilization of the proteins on a solid surface. Cell-free protein array technology attempts to simplify protein microarray construction by bypassing the need to express the proteins in bacterial cells and the subsequent need to purify them. It takes advantage of available cell-free protein synthesis technology, which has demonstrated that protein synthesis can occur without an intact cell, as long as cell extracts containing the DNA template and the raw materials and machinery for transcription and translation are provided. Common sources of cell extracts used in cell-free protein array technology include wheat germ, Escherichia coli, and rabbit reticulocyte. Cell extracts from other sources such as hyperthermophiles, hybridomas, Xenopus oocytes, insect, mammalian and human cells have also been used. The target proteins are synthesized in situ on the protein microarray, directly from the DNA template, thus skipping many of the steps in traditional protein microarray production and their accompanying technical limitations. More importantly, the expression of the proteins can be done in parallel, meaning all the proteins can be expressed together in a single reaction. This ability to multiplex protein expression is a major time-saver in the production process. Methods of synthesis In situ methods In the in situ method, protein synthesis is carried out on a protein array surface that is pre-coated with a protein-capturing reagent or antibody. Once the newly synthesized proteins are released from the ribosome, the tag sequence that is also synthesized at the N- or C-terminus of each nascent protein will be bound by the capture reagent or antibody, thus immobilizing the proteins to form an array. Commonly used tags include polyhistidine (His)6 and glutathione S-transferase (GST). Various research groups have developed their own methods, each differing in their approach; these can be summarized into three main groups. Nucleic acid programmable protein array (NAPPA) NAPPA uses a DNA template that has already been immobilized onto the same protein capture surface. The DNA template is biotinylated and is bound to avidin that is pre-coated onto the protein capture surface. 
Newly synthesized proteins which are tagged with GST are then immobilized next to the template DNA by binding to the adjacent polyclonal anti-GST capture antibody that is also pre-coated onto the capture surface. The main drawback of this method lies in the extra and tedious preparation steps at the beginning of the process: (1) the cloning of cDNAs into an expression-ready vector; and (2) the need to biotinylate the plasmid DNA without interfering with transcription. Moreover, the resulting protein array is not ‘pure’ because the proteins are co-localized with their DNA templates and capture antibodies. Protein in situ array (PISA) Unlike NAPPA, PISA completely bypasses DNA immobilization, as the DNA template is added as a free molecule in the reaction mixture. In 2006, another group refined and miniaturized this method by using a multiple-spotting technique to spot the DNA template and the cell-free transcription and translation mixture onto a high-density protein microarray with up to 13,000 spots. This was made possible by an automated system used to accurately and sequentially supply the reagents for the transcription/translation reaction, which occurs in a small, sub-nanolitre droplet. In situ puromycin-capture This method is an adaptation of mRNA display technology. PCR DNA is first transcribed to mRNA, and a single-stranded DNA oligonucleotide modified with biotin at one end and puromycin at the other is then hybridized to the 3’-end of the mRNA. The mRNAs are then arrayed on a slide and immobilized by the binding of biotin to streptavidin that is pre-coated on the slide. Cell extract is then dispensed on the slide for in situ translation to take place. When the ribosome reaches the hybridized oligonucleotide, it stalls and incorporates the puromycin molecule into the nascent polypeptide chain, thereby attaching the newly synthesized protein to the microarray via the DNA oligonucleotide. A pure protein array is obtained after the mRNA is digested with RNase. The protein spots generated by this method are very sharply defined and can be produced at a high density. Nanowell array format Nanowell array formats are used to express individual proteins in small-volume reaction vessels or nanowells. This format is sometimes preferred because it avoids the need to immobilize the target protein, which might result in the potential loss of protein activity. The miniaturization of the array also conserves solution and precious compounds that might be used in screening assays. Moreover, the structural properties of individual wells help to prevent cross-contamination among chambers. In 2012 an improved NAPPA was published, which used a nanowell array to prevent diffusion. Here the DNA was immobilized in the well together with an anti-GST antibody. Cell-free expression mix was then added and the wells were closed with a lid. The nascent GST-tagged proteins bound to the well surface, enabling a NAPPA array with higher density and almost no cross-contamination. DNA array to protein array (DAPA) DNA array to protein array (DAPA) is a method developed in 2007 to repeatedly produce protein arrays by ‘printing’ them from a single DNA template array, on demand. It starts with the spotting and immobilization of an array of DNA templates onto a glass slide. The slide is then assembled face-to-face with a second slide pre-coated with a protein-capturing reagent, and a membrane soaked with cell extract is placed between the two slides for transcription and translation to take place. 
The newly synthesized His-tagged proteins are then immobilized onto the second slide to form the array. In the original publication, a protein microarray copy could be generated in 18 of 20 replications. Potentially the process can be repeated as often as needed, as long as the DNA is unharmed by DNases, degradation or mechanical abrasion. Advantages Many of the advantages of cell-free protein array technology address the limitations of the cell-based expression systems used in traditional methods of protein microarray production. Rapid and cost-effective The method avoids DNA cloning (with the exception of NAPPA) and can quickly convert genetic information into functional proteins by using PCR DNA. The reduced number of production steps and the ability to miniaturize the system save on reagent consumption and cut production costs. Improves protein availability Many proteins, including antibodies, are difficult to express in host cells due to problems with insolubility, disulfide bonds or host-cell toxicity. Cell-free protein arrays make many such proteins available for use in protein microarrays. Enables long-term storage Unlike DNA, which is a highly stable molecule, proteins are a heterogeneous class of molecules with different stabilities and physiochemical properties. Maintaining the proteins’ folding and function in an immobilized state over long periods of storage is a major challenge for protein microarrays. Cell-free methods provide the option of quickly obtaining protein microarrays on demand, thus eliminating any problems associated with long-term storage. Flexible The method is amenable to a range of different templates: PCR products, plasmids and mRNA. Additional components can be included during synthesis to adjust the environment for protein folding, disulfide bond formation, modification or protein activity. Limitation Post-translational modification in proteins generated by cell-free protein synthesis is still limited compared to the traditional methods, and may not be as biologically relevant. Applications Protein interactions: to screen for protein–protein interactions and protein interactions with other molecules such as metabolites, lipids, DNA and small molecules; enzyme inhibition assays: for high-throughput drug candidate screening and to discover novel enzymes for use in biotechnology; screening antibody specificity. References External links NAPPA PISA and DAPA Protein arrays resource page Molecular biology Microarrays
Cell-free protein array
[ "Chemistry", "Materials_science", "Biology" ]
1,919
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques", "Molecular biology", "Biochemistry" ]
16,277,487
https://en.wikipedia.org/wiki/Canter%20rhythm
Canter time, canter timing or canter rhythm is a two-beat regular rhythmic pattern of a musical instrument or in dance steps within 3/4 time music. The term is borrowed from the canter horse gait, which sounds three hoof beats followed by a pause, i.e., 3 accents in 4/4 time. In waltz dances it may mark the 1st and the 4th eighths of the measure, producing a 2/4 overlay beat over the 3/4 time. In other words, when a measure is cued as "one, two-and three", the canter rhythm marks "one" and "and". This rhythm is the basis of the Canter Waltz. In modern ballroom dancing, an example is the Canter Pivot in the Viennese Waltz. In Vals (a style of Tango), the canter rhythm is also known as medio galope (which actually means "canter" in Spanish) and may accent beats 1 and 2 of the measure. The Canter Waltz or Canter is a dance with waltz music characterized by the canter rhythm of steps. A 1922 dance manual describes it as follows: "The Canter Waltz has been revived and presents an opportunity to show the use of "direction" in the straight backward and forward series of walking steps. This dance is walking to waltz time but walking most quietly and gracefully. There are two steps to the three counts of music. Step forward on 1 and make the second step between the 2 and 3 count. Give the first step the accent, although the steps are almost of the same value. It may, perhaps, help the student practicing alone with the aid of the victrola to count "one-and two-and three-and", making the second step on the second "and", until able to do the step smoothly." See also Duple metre Triple metre Polyrhythm Syncopation References Rhythm and meter Waltz, Canter Waltz
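As a small illustrative sketch (the printed grid is an assumed presentation; the accent positions follow the description above), the following lays out one 3/4 bar as six eighth notes and marks the canter accents on the 1st and 4th eighths, showing the two-against-three feel:

```python
# One 3/4 bar subdivided into six eighth notes (indices 0-5).
# The canter rhythm accents the 1st and 4th eighths: "one" and the
# "and" after "two", i.e. two evenly spaced steps against three beats.
canter_accents = {0, 3}
for i in range(6):
    beat = i // 2 + 1                      # quarter-note beat (1..3)
    count = "and" if i % 2 else str(beat)  # spoken count for this eighth
    mark = "X" if i in canter_accents else "."
    print(f"eighth {i + 1}: count '{count}' {mark}")
```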
Canter rhythm
[ "Physics" ]
395
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
16,278,208
https://en.wikipedia.org/wiki/Polyhexanide
Polyhexanide (polyhexamethylene biguanide, PHMB) is a polymer used as a disinfectant and antiseptic. In dermatological use, it is spelled polihexanide (INN) and sold under various brand names. PHMB has been shown to be effective against Pseudomonas aeruginosa, Staphylococcus aureus, Escherichia coli, Candida albicans, Aspergillus brasiliensis, enterococci, and Klebsiella pneumoniae. Polihexanide, sold under the brand name Akantior, is a medication used for the treatment of Acanthamoeba keratitis. Products containing PHMB are used for inter-operative irrigation, pre- and post-surgery skin and mucous membrane disinfection, post-operative dressings, surgical and non-surgical wound dressings, surgical bath/hydrotherapy, chronic wounds like diabetic foot ulcer and burn wound management, routine antisepsis during minor incisions, catheterization, first aid, surface disinfection, and linen disinfection. PHMB eye drops have been used as a treatment for eyes affected by Acanthamoeba keratitis. It is sold as a swimming pool and spa disinfectant in place of chlorine or bromine based products under the name Baquacil. PHMB is also used as an ingredient in some contact lens cleaning products, cosmetics, personal deodorants and some veterinary products. It is also used to treat clothing (Purista), purportedly to prevent the development of unpleasant odors. The PHMB hydrochloride salt (solution) is used in the majority of formulations. Medical uses Polihexanide is indicated for the treatment of Acanthamoeba keratitis in people 12 years of age and older. Society and culture Legal status In May 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Akantior, intended for the treatment of Acanthamoeba keratitis, a severe, progressive and sight-threatening corneal infection characterized by intense pain and photophobia. Acanthamoeba keratitis is a rare disease primarily affecting contact lens wearers. The applicant for this medicinal product is SIFI SPA. Polihexanide was approved for medical use in the European Union in August 2024. Safety In 2011, polyhexamethylene biguanide was classified as a category 2 carcinogen by the European Chemicals Agency, but it is still allowed in cosmetics in small quantities if exposure by inhalation is impossible. Name controversy In some sources, particularly when listed as a cosmetics ingredient (INCI), the polymer is wrongly named polyaminopropyl biguanide. References Antiseptics and disinfectants Biguanides Polymers
Polyhexanide
[ "Chemistry", "Materials_science" ]
618
[ "Polymers", "Polymer chemistry" ]
7,359,494
https://en.wikipedia.org/wiki/Laser%20accelerometer
A laser accelerometer is an accelerometer that uses a laser to measure changes in velocity/direction. Mechanism It employs a frame with three orthogonal input axes and multiple proof masses. Each proof mass has a predetermined blanking surface. A flexible beam supports each proof mass. The flexible beam permits movement of the proof mass on its axis. A laser light source provides a light ray. The laser source has a transverse field characteristic with a central null intensity region. A mirror transmits a beam of light to a detector. The detector is positioned to be centered on the light ray and responds to the light's intensity to provide an intensity signal. The signal's magnitude is related to the intensity of the light ray. The proof mass blanking surface is centrally positioned within and normal to the light ray null intensity region to provide increased blanking of the light ray in response to transverse movement of the mass on the input axis. In response to acceleration in the direction of the input axis, the proof mass deflects the beam and moves the blanking surface in a direction transverse to the light ray to partially blank the light beam. A control responds to the intensity signal to apply a restoring force that returns the proof mass to a central position, and provides an output signal proportional to the restoring force. Applications Accelerometers are added to many devices, including (smart) watches, phones and vehicles of all kinds. Accelerometers oriented vertically function as gravimeters, useful for mining. Other applications include medical diagnostics and satellite measurements for climate change studies. Lasers Basic lasers operate with a frequency range (line width) of some 500 MHz. The range is widened by small temperature changes and vibrations, and by imperfections in the laser cavity. The line width of a specialised scientific laser approaches 1 mHz. History 2021 An accelerometer was announced that used infrared light to measure the change in distance between two micromirrors in a Fabry–Perot cavity. The proof mass is a single silicon crystal with a mass of 10–20 mg, suspended from the first mirror using flexible 1.5 μm-thick silicon nitride (Si3N4) beams. The suspension allows the proof mass to move freely, with nearly ideal translational motion. The second (concave) mirror acts as the fixed reference point. Light of a certain frequency resonates – bounces back and forth – between the two mirrors in the cavity, increasing its intensity, while other frequencies are discarded. Under acceleration, the proof mass displacement relative to the concave mirror changes the intensity of reflected light. The change in intensity is measured by a single-frequency laser that matches the cavity's resonant frequency. The device can sense displacements under 1 femtometre (10−15 m) and detect accelerations as low as 3.2 × 10−8 g, where g is the acceleration due to Earth's gravity, with uncertainty under 1%. 
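A rough order-of-magnitude sketch (assumed values, not the published design) shows why femtometre-scale displacement readout is needed: a spring-suspended proof mass with mechanical resonant frequency f0 is displaced by roughly x = a / (2πf0)² under a steady acceleration a.

```python
import math

# Quasi-static displacement of a spring-suspended proof mass:
# x = a / (2*pi*f0)**2. The resonant frequencies are assumptions
# chosen to match the displacement/acceleration scales quoted above.
g = 9.81                        # m/s^2
a = 3.2e-8 * g                  # smallest acceleration quoted above
for f0 in (1e3, 5e3, 10e3):     # assumed resonant frequencies in Hz
    x = a / (2 * math.pi * f0) ** 2
    print(f"f0 = {f0:6.0f} Hz -> displacement {x:.2e} m")
```

For these assumed parameters the displacement is of order a femtometre or below, consistent with the sub-femtometre sensing capability described above.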
An accelerometer was announced with a line width of 20 Hz. The SolsTiS accelerometer has a titanium-doped sapphire cavity that is shaped in a way to encourage a narrow line width and to rapidly dissipate waste heat. The device exploits the wave qualities of atoms. The laser is divided into multiple beams. One beam strikes a diffuse rubidium gas refrigerated to around 10−7 K. This temperature is achieved by using Doppler cooling with six beams to slow/cool the atoms. The atoms split into two quantum waves. A second pulse reverses the split, while a third allows them to interfere with each other, creating an interference pattern that reflects the acceleration the waves underwent while separated. Another laser pulse detects the interference patterns in the various atoms, which reflect the amount of acceleration. Military-grade laser accelerometers drift (accumulate errors) at the rate of kilometres a day. The new devices reduce drift to 2 km a month. See also List of laser articles References External links Laser applications Gravity Accelerometers Sensors
Laser accelerometer
[ "Physics", "Technology", "Engineering" ]
821
[ "Accelerometers", "Physical quantities", "Acceleration", "Measuring instruments", "Sensors" ]
7,360,758
https://en.wikipedia.org/wiki/Online%20Mendelian%20Inheritance%20in%20Animals
Online Mendelian Inheritance in Animals (OMIA) is an online database of genes, inherited disorders and traits in more than 550 animal species. It is modelled on, and is complementary to, Online Mendelian Inheritance in Man (OMIM). It aims to provide a publicly accessible catalogue of all animal phenes, excluding those in humans and mice, for which species-specific resources are already available (OMIM, MLC). Authored by Professor Frank Nicholas of the University of Sydney, with some contributions from colleagues, the database contains textual information and references as well as links to relevant PubMed and Gene records at the NCBI. OMIA is hosted by the University of Sydney, with an Entrez mirror located at the NCBI. See also Medical classification Online Mendelian Inheritance in Man (OMIM) References OMIA (Online Mendelian Inheritance in Animals): an enhanced platform and integration into the Entrez search interface at NCBI. Nucleic Acids Res. 2006 Jan 1;34(Database issue):D599-601. Online Mendelian Inheritance in Animals (OMIA): a comparative knowledgebase of genetic disorders and other familial traits in non-laboratory animals. Nucleic Acids Res. 2003 Jan 1;31(1):275-7. External links Online Mendelian Inheritance in Animals (OMIA) OMIA mirror at NCBI Biological databases Genetic animal diseases Diagnosis codes
Online Mendelian Inheritance in Animals
[ "Biology" ]
294
[ "Bioinformatics", "Biological databases" ]
7,361,378
https://en.wikipedia.org/wiki/Retrogression%20heat%20treatment
Retrogression heat treatment (RHT) is a heat treatment process that rapidly heat-treats age-hardenable aluminum alloys, mainly using induction heating. In the past, it was mainly applied to 6061 and 6063 aluminum alloys. RHT makes the forming of complex shapes possible without creating damage such as cracks; even hard tempers (for example -T6) can be formed easily after these alloys are subjected to RHT. References Materials science
Retrogression heat treatment
[ "Physics", "Materials_science", "Engineering" ]
97
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
7,362,029
https://en.wikipedia.org/wiki/Cellulin
Cellulin or cellulin granules are a type of polysaccharide found exclusively within the oomycetes of the order Leptomitales. Cellulin granules are composed of β-glucan and chitin. The experimentally determined composition of cellulin is 39% glucan (composed of beta-1,3- and beta-1,6-linked glucose units) and 60% chitin. Research β-cellulin is a possible treatment for repairing corneal cells. At concentrations of 0.2, 2 and 20 ng/mL, β-cellulin induced rapid repair of corneal epithelial stem cells. At these concentrations, β-cellulin promotes phosphorylation in the Erk1/2 signaling pathway during corneal repair in mice. Confirming this, inhibition of the Erk1/2 pathway by mutation slowed the repair of corneal cells in mice. Increasing the growth factors to 60 ng/mL of β-FGF and EGF, and to 30 ng/mL of activin A/β-cellulin, increased the production of insulin-producing cells; raising the concentrations of the growth factors further had no additional effect. This study may provide insight for developing a new way to treat type-1 diabetes, which currently can be treated only with injections of insulin. Despite being produced in large quantities by pancreatic islet cells, β-cellulin, an epidermal growth factor, appears to have little relevance in regulating insulin production. See also β-glucan Cellulose Chitin References Polysaccharides Water moulds
Cellulin
[ "Chemistry" ]
347
[ "Carbohydrates", "Polysaccharides" ]
7,362,920
https://en.wikipedia.org/wiki/Public%20analyst
Public Analysts are scientists in the British Isles whose principal task is to ensure the safety and correct description of food by testing for compliance with legislation. Most Public Analysts are also Agricultural Analysts who carry out similar work on animal feedingstuffs and fertilisers. Nowadays this includes checking that the food labelling is accurate. They also test drinking water, and may carry out chemical and biological tests on other consumer products. While much of the work is done by other scientists and technicians in the laboratory, the Public Analyst has legal responsibility for the accuracy of the work and the validity of any opinion expressed on the results reported. The UK-based Association of Public Analysts includes members with similar roles, if different titles, in other countries. History The office of Public Analyst was established by the Adulteration of Food and Drink Act 1860 (23 & 24 Vict. c. 84), the first three appointments being in London, Birmingham and Dublin. The first Scottish analyst was Henry Littlejohn in Edinburgh in 1862, who, with a strong medical background and brilliant mind, established many of the critical foundations of public analysis. The Sale of Food and Drugs Act 1875 (38 & 39 Vict. c. 63) made food analysis compulsory and the Sale of Food and Drugs Act 1899 (62 & 63 Vict. c. 51) extended its scope. Sampling officers generally operated through local public health or sanitary committees. By 1894 there were 99 public analysts overseeing 237 English and Welsh districts. The City of London Corporation had three food inspectors and a wharf and warehouse inspector in 1908. Bradford employed an inspector who made 756 visits to fish and chip shops in 1915. In the 1930s the staff in Birmingham comprised three qualified assistants, a clerk and a laboratory attendant. The Nuisances Removal Act for England 1855 (18 & 19 Vict. c. 121) and the Public Health Act 1875 (38 & 39 Vict. c. 55) gave authority for taking food samples "at all reasonable times". Inspectors, police constables and samplers were responsible for taking food samples, which were divided into three parts (for the vendor, the inspector and the analyst) and sealed into bottles. Food systems were engineered to allow inspection through portals, manholes and windows. Prosecution was not common, though fines and prison sentences were not unknown. Adulteration rates fell from 13.8% of samples in 1879 to 4.8% in 1930. Inspectors were empowered to follow milk to sources outside their formal jurisdiction in checking for infection with tuberculosis. Sanitary authorities were required to register all dairies and enforce cleanliness regulations. The Manchester Corporation (General Powers) Act 1899 (62 & 63 Vict. c. clxxxviii), as amended in 1904, contained what were known as 'milk clauses', which empowered officials to prosecute anyone who knowingly sold milk from cows with tuberculosis of the udder, to demand the isolation of infected cows and notification of any cow exhibiting signs of tuberculosis of the udder, and to inspect the cows and take samples from herds which supplied milk to the city. By 1910 these provisions had been copied by 67 boroughs and 24 urban districts. The Society of Public Analysts was established in 1874, later becoming the Society for Analytical Chemistry and joining with other societies to form the Royal Society of Chemistry in 1980. 
Since the separation of the UK and Ireland, the function of the Public Analyst operates under different legislation, but the term and general duties are the same. The original work was chemical testing, and this is still a major part, but nowadays microbiological examination of food is an important activity, particularly in Scotland, where Public Analyst laboratories also carry out a statutory Food Examiner role. UK The primary UK legislation is the Food Safety Act 1990. All local authorities are required to appoint a Public Analyst, although there have always been fewer Public Analysts and their laboratories than local authorities, most being shared by a number of local authorities. On the UK mainland there has always been a mixture of public sector and private sector laboratories. This remains the case today, but they all provide an equivalent service, and avoidance of conflicts of interest is ensured by the statutory terms of appointment. There is a statutory qualification requirement for Public Analysts, known as the Mastership in Chemical Analysis (MChemA), awarded by the Royal Society of Chemistry. This is a specialist postgraduate qualification by examination that verifies knowledge and understanding of food and its potential defects, interpretation of food law, and the application and interpretation of chemical analysis for food law enforcement. The Public Analysts’ laboratories must be third-party accredited to International Standard BS EN ISO/IEC 17025:2017. In the mid-1980s there were some 40 Public Analyst Laboratories in the UK with over 100 appointed Public Analysts. By 1993 that had reduced to 34 laboratories and around 80 Public Analysts, and by 2010 the number of Public Analyst Laboratories had reduced to 22 with only about 26 Public Analysts. As of 2022 there are 15 Public Analyst laboratories remaining in the UK. In part, the reduction in the number of laboratories over the decades has been due to rationalisation and benefits from economies of scale; however, to a larger extent it has arisen from a lack of adequate funding. Although some of the remaining laboratories are larger than many that no longer exist, the overall capacity of the system is now far less than it used to be. Enforcement of food law in the UK is done by local authorities, principally their environmental health officers and trading standards officers. Whilst these officers are empowered to take samples of food, the chemical analysis or microbiological examination, and the subsequent interpretation necessary to determine whether a food complies with legislation, are carried out by Public Analysts and Food Examiners respectively, scientists whose qualifications and experience are specified by regulations. Ireland Public Analyst Laboratories in Cork, Dublin and Galway provide an analytical service to the Food Safety Authority. Crown Dependencies There is one Public Analyst Laboratory in each of Guernsey, Isle of Man and Jersey serving the needs of these islands. Australia There is also one Public Analyst Laboratory in Australia. 
Practice The Public Analyst runs a laboratory which will: analyse food for composition (many foods have legally defined, customary or expected compositions), for additives (which must be legally permitted and within prescribed concentrations), for contamination (chemical or microbiological), to assess the accuracy of labelling, and to investigate whether complaints by the public are justified; interpret relevant law passed by the EU and UK or Ireland; and act as expert witness in prosecutions. In addition to their central role in relation to food law enforcement, Public Analysts provide expert scientific support to local authorities and the private sector in various other areas. For example, they: analyse drinking and bathing water (including swimming pools), industrial effluents, industrial process waters and other waters; investigate environmental products and processes, including assessing land contamination and examining building materials and fuels; advise on waste management; investigate and monitor air pollution; advise on consumer safety, in particular consumer products such as toys; monitor asbestos and other hazards; and carry out toxicological work to assist HM Coroners. Sampling Sampling is largely outside the control of the Public Analyst. Local authorities have a duty to check the safety of food and to provide adequate protection of the consumer. To achieve that, they devise sampling plans, seeking to balance their need to monitor food against limited resources and other demands on their budgets. A typical sampling plan for a local authority might include samples of the following: samples from a particular source (a supermarket, manufacturer, caterer or country); meat products, to check % meat, % fat, non-meat ingredients, additives or species; product marketing claims; undeclared ingredients in prepared foods; contaminated products; and the nutritional content of prepared meals. References See also Chartered Chemist Food Safety Act 1990 Environmental chemistry Food scientists Analytical chemistry Local government in the United Kingdom Royal Society of Chemistry Public health in the United Kingdom Food safety
Public analyst
[ "Chemistry", "Environmental_science" ]
1,566
[ "Environmental chemistry", "nan", "Royal Society of Chemistry" ]
7,364,243
https://en.wikipedia.org/wiki/Schlenk%20flask
A Schlenk flask, or Schlenk tube, is a reaction vessel typically used in air-sensitive chemistry, invented by Wilhelm Schlenk. It has a side arm fitted with a PTFE or ground glass stopcock, which allows the vessel to be evacuated or filled with gases (usually inert gases like nitrogen or argon). These flasks are often connected to Schlenk lines, which allow both operations to be done easily. Schlenk flasks and Schlenk tubes, like most laboratory glassware, are made from borosilicate glass such as Pyrex. Schlenk flasks are round-bottomed, while Schlenk tubes are elongated. They may be purchased off-the-shelf from laboratory suppliers or made from round-bottom flasks or glass tubing by a skilled glassblower. Evacuating a Schlenk flask Typically, before solvent or reagents are introduced into a Schlenk flask, the flask is dried and the atmosphere of the flask is exchanged with an inert gas. A common method of exchanging the atmosphere of the flask is to flush the flask out with an inert gas. The gas can be introduced through the sidearm of the flask, or via a wide-bore needle (attached to a gas line). The displaced atmosphere exits the flask through the neck portion of the flask. The needle method has the advantage that the needle can be placed at the bottom of the flask to better flush out the atmosphere of the flask. Flushing a flask out with an inert gas can be inefficient for large flasks and is impractical for complex apparatus. An alternative way to exchange the atmosphere of a Schlenk flask is to use one or more "vac-refill" cycles, typically using a vacuum-gas manifold, also known as a Schlenk line. This involves pumping the air out of the flask and replacing the resulting vacuum with an inert gas. For example, evacuation of the flask to 1 mmHg and then replenishing the atmosphere with inert gas leaves 0.13% of the original atmosphere (a factor of 1/760, taking atmospheric pressure as 760 mmHg). Two such vac-refill cycles leave 0.000173% (a factor of 1/760 squared). Most Schlenk lines easily and quickly achieve a vacuum of 1 mmHg (~1.3 mBar). Varieties When using Schlenk systems, including flasks, the use of grease is often necessary at stopcock valves and ground glass joints to provide a gas-tight seal and prevent glass pieces from fusing. In contrast, Teflon plug valves may have a trace of oil as a lubricant but generally no grease. In the following text any "connection" is assumed to be rendered mostly air-free through a series of vac-refill cycles. Standard Schlenk flask The standard Schlenk flask is a round bottom, pear-shaped, or tubular flask with a ground glass joint and a side arm. The side arm contains a valve, usually a greased stopcock, used to control the flask's exposure to a manifold or the atmosphere. This allows a material to be added to a flask through the ground glass joint, which is then capped with a septum. This operation can, for example, be done in a glove box. The flask can then be removed from the glove box and taken to a Schlenk line. Once connected to the Schlenk line, the inert gas and/or vacuum can be applied to the flask as required. While the flask is connected to the line under a positive pressure of inert gas, the septum can be replaced with other apparatus, for example a reflux condenser. Once the manipulations are complete, the contents can be vacuum dried and placed under a static vacuum by closing the side arm valve. These evacuated flasks can be taken back into a glove box for further manipulation or storage of the flasks' contents. 
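The vac-refill arithmetic generalizes: each cycle multiplies the remaining fraction of the original atmosphere by the ratio of the vacuum pressure to the refill pressure. A minimal sketch, assuming a 1 mmHg vacuum against a 760 mmHg refill as in the example above:

```python
# Residual fraction of the original atmosphere after n "vac-refill"
# cycles: (p_vac / p_atm) ** n, assuming complete mixing each cycle.
p_atm = 760.0   # mmHg, refill (atmospheric) pressure
p_vac = 1.0     # mmHg, typical Schlenk-line vacuum per the text
for n in (1, 2, 3):
    residual = (p_vac / p_atm) ** n
    print(f"{n} cycle(s): {residual:.2e} ({residual * 100:.6f}% remains)")
```

One cycle reproduces the 0.13% figure and two cycles the 0.000173% figure quoted above.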
Schlenk bomb A "bomb" flask is subclass of Schlenk flask which includes all flasks that have only one opening accessed by opening a Teflon plug valve. This design allows a Schlenk bomb to be sealed more completely than a standard Schlenk flask even if its septum or glass cap is wired on. Schlenk bombs include structurally sound shapes such as round bottoms and heavy walled tubes. Schlenk bombs are often used to conduct reactions at elevated pressures and temperatures as a closed system. In addition, all Schlenk bombs are designed to withstand the pressure differential created by the ante-chamber when pumping solvents into a glove box. In practice Schlenk bombs can perform many of the functions of a standard Schlenk flask. Even when the opening is used to fit a bomb to a manifold, the plug can still be removed to add or remove material from the bomb. In some situations, however, Schlenk bombs are less convenient than standard Schlenk flasks: they lack an accessible ground glass joint to attach additional apparatus; the opening provided by plug valves can be difficult to access with a spatula, and it can be much simpler to work with a septum designed to fit a ground glass joint than with a Teflon plug. The name "bomb" is often applied to containers used under pressure such as a bomb calorimeter. While glass does not equal the pressure rating and mechanical strength of most metal containers, it does have several advantages. Glass allows visual inspection of a reaction in progress, it is inert to a wide range of reaction conditions and substrates, it is generally more compatible with common laboratory glassware, and it is more easily cleaned and checked for cleanliness. Straus flask A Straus flask (often misspelled "Strauss") is subclass of "bomb" flask originally developed by Kontes Glass Company, commonly used for storing dried and degassed solvents. Straus flasks are sometimes referred to as solvent bombs — a name which applies to any Schlenk bomb dedicated to storing solvent. Straus flasks are mainly differentiated from other "bombs" by their neck structure. Two necks emerge from a round bottom flask, one larger than the other. The larger neck ends in a ground glass joint and is permanently partitioned by blown glass from direct access to the flask. The smaller neck includes the threading required for a teflon plug to be screwed in perpendicular to the flask. The two necks are joined through a glass tube. The ground glass joint can be connected to a manifold directly or through an adapter and hosing. Once connected, the plug valve can be partially opened to allow the solvent in the Straus flask to be vacuum transferred to other vessels. Or, once connected to the line, the neck can be placed under a positive pressure of inert gas and the plug valve can be fully removed. This allows direct access to the flask through a narrow glass tube now protected by a curtain of inert gas. The solvent can then be transferred through cannula to another flask. In contrast, other bomb flask plugs are not necessarily ideally situated to protect the atmosphere of the flask from the external atmosphere. Solvent pot Straus flasks are distinct from "solvent pots", which are flasks that contain a solvent as well as drying agents. Solvent pots are not usually bombs, or even Schlenk flasks in the classic sense. The most common configuration of a solvent pot is a simple round bottom flask attached to a 180° adapter fitted with some form of valve. 
The pot can be attached to a manifold and the contents distilled or vacuum transferred to other flasks free of soluble drying agents, water, oxygen or nitrogen. The term "solvent pot" can also refer to the flask containing the drying agents in a classic solvent still system. Due to fire risks, solvent stills have largely been replaced by solvent columns in which degassed solvent is forced through an insoluble drying agent before being collected. Solvent is usually collected from solvent columns through a needle connected to the column which pierces the septum of a flask or through a ground glass joint connected to the column, as in the case of a Straus flask. References Further reading Laboratory glassware Air-free techniques German inventions
Schlenk flask
[ "Chemistry", "Engineering" ]
1,764
[ "Vacuum systems", "Air-free techniques" ]
12,002,936
https://en.wikipedia.org/wiki/Displacement%E2%80%93length%20ratio
The displacement–length ratio (DLR or D/L ratio) is a calculation used to express how heavy a boat is relative to its waterline length. The ratio was first published by the American naval architect David W. Taylor in The Speed and Power of Ships (1910). It is calculated by dividing a boat's displacement in long tons (2,240 pounds) by the cube of one one-hundredth of the waterline length (in feet): DLR = displacement [long tons] / (0.01 × LWL [ft])^3. DLR can be used to compare the relative mass of various boats no matter what their length. A DLR less than 200 is indicative of a racing boat, while a DLR greater than 300 or so is indicative of a heavy cruising boat. See also Sail Area-Displacement ratio References Ship measurements Nautical terminology Engineering ratios Naval architecture
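The prose definition above translates directly into a one-line computation. The following Python sketch implements the ratio exactly as stated; the example boat's numbers are invented for illustration.

```python
# D/L ratio as defined above: displacement in long tons divided by the
# cube of one one-hundredth of the waterline length in feet.

def displacement_length_ratio(displacement_lb: float, lwl_ft: float) -> float:
    long_tons = displacement_lb / 2240.0          # 1 long ton = 2,240 lb
    return long_tons / (0.01 * lwl_ft) ** 3

# Hypothetical boat: 30 ft waterline, displacing 20,000 lb:
print(round(displacement_length_ratio(20_000, 30.0)))  # ~331 -> heavy cruiser
```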
Displacement–length ratio
[ "Mathematics", "Engineering" ]
143
[ "Naval architecture", "Metrics", "Engineering ratios", "Quantity", "Marine engineering" ]
12,004,602
https://en.wikipedia.org/wiki/Available%20name
In zoological nomenclature, an available name is a scientific name for a taxon of animals that has been published after 1757 and conforms to all the mandatory provisions of the International Code of Zoological Nomenclature for the establishment of a zoological name. In contrast, an unavailable name is a name that does not conform to the rules of that code and that therefore is not available for use as a valid name for a taxon. Such a name does not fulfil the requirements in Articles 10 through 20 of the Code, or is excluded under Article 1.3. Requirements For a name to be available, in addition to meeting certain criteria for publication, there are a number of general requirements it must fulfill: it must include a description or definition of the taxon, must use only the Latin alphabet, must be formulated within the binomial nomenclature framework, must be newly proposed (not a redescription under the same name of a taxon previously made available) and originally used as a valid name rather than as a synonym, must not be for a hybrid or hypothetical taxon, must not be for a taxon below the rank of subspecies, etc. In some rare cases, a name which does not meet these requirements may nevertheless be available, for historical reasons, as the criteria for availability have become more stringent with successive Code editions. For example, a name originally appearing along with an illustration but no formal description may be an available name, but only if the illustration was published prior to 1930 (under Article 12.2.7). All available names must refer to a type, even if one was not provided at the time the name was first proposed. For species-level names, the type is usually a single specimen (a holotype, lectotype, or neotype); for generic-level names, the type is a single species; for family-level names, the type is a single genus. This hierarchical system of typification provides a concrete empirical anchor for all zoological names. An available name is not necessarily a valid name, because an available name may be a homonym or subsequently be placed into synonymy. However, a valid name must always be an available one. Unavailable names Unavailable names include names that have not been published, such as "Oryzomys hypenemus" and "Ubirajara jubatus", names without an accompanying description (nomina nuda), such as the subgeneric name Micronectomys proposed for the Nicaraguan rice rat, names proposed with a rank below that of subspecies (infrasubspecific names), such as Sorex isodon princeps montanus for a form of the taiga shrew, and various other categories. Despite frequent assumptions to the contrary, an unavailable name is not necessarily a nomen nudum. A good example is the unavailable dinosaur name "Ubirajara jubatus", which was widely assumed to be a nomen nudum before a detailed analysis of its nomenclatural status. Contrast to botany Under the International Code of Nomenclature for algae, fungi, and plants, this term is not used. In botany, the corresponding term is validly published name. The botanical equivalent of zoology's term "valid name" is correct name. References Bibliography Hershkovitz, P. 1970. Supplementary notes on Neotropical Oryzomys dimidiatus and Oryzomys hammondi (Cricetinae). Journal of Mammalogy 51(4): 789-794. Hutterer, R. & Zaitsev, M.V. 2004. Cases of homonymy in some Palaearctic and Nearctic taxa of the genus Sorex L. (Mammalia: Soricidae). Mammal Study 29:89-91. 
International Commission on Zoological Nomenclature. 1999. International Code of Zoological Nomenclature, 4th edition. London: The International Trust for Zoological Nomenclature. Available online at https://web.archive.org/web/20090524144249/http://www.iczn.org/iczn/index.jsp. Accessed September 27, 2009. Zoological nomenclature
Available name
[ "Biology" ]
867
[ "Zoological nomenclature", "Biological nomenclature" ]
3,094,328
https://en.wikipedia.org/wiki/Tight%20binding
In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations. Introduction The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited. Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix elements, which would simply be called the bond energies by a chemist. In general there are a number of atomic energy levels and atomic orbitals involved in the model. This can lead to complicated band structures because the orbitals belong to different point-group representations. The reciprocal lattice and the Brillouin zone often belong to a different space group than the crystal of the solid. High-symmetry points in the Brillouin zone belong to different point-group representations. When simple systems like the lattices of elements or simple compounds are studied it is often not very difficult to calculate eigenstates in high-symmetry points analytically. So the tight-binding model can provide nice examples for those who want to learn more about group theory. The tight-binding model has a long history and has been applied in many ways and with many different purposes and different outcomes. The model doesn't stand on its own. Parts of the model can be filled in or extended by other kinds of calculations and models like the nearly-free electron model. The model itself, or parts of it, can serve as the basis for other calculations. In the study of conductive polymers, organic semiconductors and molecular electronics, for example, tight-binding-like models are applied in which the role of the atoms in the original concept is replaced by the molecular orbitals of conjugated systems and where the interatomic matrix elements are replaced by inter- or intramolecular hopping and tunneling parameters. 
These conductors nearly all have very anisotropic properties and sometimes are almost perfectly one-dimensional. Historical background By 1928, the idea of a molecular orbital had been advanced by Robert Mulliken, who was influenced considerably by the work of Friedrich Hund. The LCAO method for approximating molecular orbitals was introduced in 1928 by B. N. Finklestein and G. E. Horowitz, while the LCAO method for solids was developed by Felix Bloch, as part of his doctoral dissertation in 1928, concurrently with and independent of the LCAO-MO approach. A much simpler interpolation scheme for approximating the electronic band structure, especially for the d-bands of transition metals, is the parameterized tight-binding method conceived in 1954 by John Clarke Slater and George Fred Koster, sometimes referred to as the SK tight-binding method. With the SK tight-binding method, electronic band structure calculations on a solid need not be carried out with full rigor as in the original Bloch's theorem but, rather, first-principles calculations are carried out only at high-symmetry points and the band structure is interpolated over the remainder of the Brillouin zone between these points. In this approach, interactions between different atomic sites are considered as perturbations. There exist several kinds of interactions we must consider. The crystal Hamiltonian is only approximately a sum of atomic Hamiltonians located at different sites and atomic wave functions overlap adjacent atomic sites in the crystal, and so are not accurate representations of the exact wave function. There are further explanations in the next section with some mathematical expressions. In recent research on strongly correlated materials, the tight-binding approach is a basic approximation because highly localized electrons like 3-d transition metal electrons sometimes display strongly correlated behaviors. In this case, the role of electron-electron interaction must be considered using the many-body physics description. The tight-binding model is typically used for calculations of electronic band structure and band gaps in the static regime. However, in combination with other methods such as the random phase approximation (RPA) model, the dynamic response of systems may also be studied. In 2019, Bannwarth et al. introduced the GFN2-xTB method, primarily for the calculation of structures and non-covalent interaction energies. Mathematical formulation We introduce the atomic orbitals , which are eigenfunctions of the Hamiltonian of a single isolated atom. When the atom is placed in a crystal, this atomic wave function overlaps adjacent atomic sites, and so is not a true eigenfunction of the crystal Hamiltonian. The overlap is less when electrons are tightly bound, which is the source of the descriptor "tight-binding". Any corrections to the atomic potential required to obtain the true Hamiltonian of the system are assumed small: where denotes the atomic potential of one atom located at site in the crystal lattice. A solution to the time-independent single electron Schrödinger equation is then approximated as a linear combination of atomic orbitals : , where refers to the m-th atomic energy level. Translational symmetry and normalization The Bloch theorem states that the wave function in a crystal can change under translation only by a phase factor: where is the wave vector of the wave function. 
Consequently, the coefficients satisfy By substituting , we find (where in RHS we have replaced the dummy index with ) or Normalizing the wave function to unity: so the normalization sets as where are the atomic overlap integrals, which frequently are neglected resulting in and The tight binding Hamiltonian Using the tight binding form for the wave function, and assuming only the m-th atomic energy level is important for the m-th energy band, the Bloch energies are of the form Here in the last step it was assumed that the overlap integral is zero and thus . The energy then becomes where Em is the energy of the m-th atomic level, and , and are the tight binding matrix elements discussed below. The tight binding matrix elements The elements are the atomic energy shift due to the potential on neighboring atoms. This term is relatively small in most cases. If it is large it means that potentials on neighboring atoms have a large influence on the energy of the central atom. The next class of terms is the interatomic matrix element between the atomic orbitals m and l on adjacent atoms. It is also called the bond energy or two center integral and it is the dominant term in the tight binding model. The last class of terms denote the overlap integrals between the atomic orbitals m and l on adjacent atoms. These, too, are typically small; if not, then Pauli repulsion has a non-negligible influence on the energy of the central atom. Evaluation of the matrix elements As mentioned before, the values of the -matrix elements are not so large in comparison with the ionization energy because the potentials of neighboring atoms on the central atom are limited. If is not relatively small it means that the potential of the neighboring atom on the central atom is not small either. In that case it is an indication that the tight binding model is not a very good model for the description of the band structure for some reason. The interatomic distances can be too small or the charges on the atoms or ions in the lattice are wrong, for example. The interatomic matrix elements can be calculated directly if the atomic wave functions and the potentials are known in detail. Most often this is not the case. There are numerous ways to get parameters for these matrix elements. Parameters can be obtained from chemical bond energy data. Energies and eigenstates on some high symmetry points in the Brillouin zone can be evaluated, and the values of the integrals in the matrix elements can be matched with band structure data from other sources. The interatomic overlap matrix elements should be rather small or negligible. If they are large it is again an indication that the tight binding model is of limited value for some purposes. Large overlap is an indication of too short an interatomic distance, for example. In metals and transition metals the broad s-band or sp-band can be fitted better to an existing band structure calculation by the introduction of next-nearest-neighbor matrix elements and overlap integrals, but fits like that don't yield a very useful model for the electronic wave function of a metal. Broad bands in dense materials are better described by a nearly free electron model. The tight binding model works particularly well in cases where the band width is small and the electrons are strongly localized, like in the case of d-bands and f-bands. The model also gives good results in the case of open crystal structures, like diamond or silicon, where the number of neighbors is small. 
The model can easily be combined with a nearly free electron model in a hybrid NFE-TB model. Connection to Wannier functions Bloch functions describe the electronic states in a periodic crystal lattice. Bloch functions can be represented as a Fourier series where denotes an atomic site in a periodic crystal lattice, is the wave vector of the Bloch's function, is the electron position, is the band index, and the sum is over all atomic sites. The Bloch's function is an exact eigensolution for the wave function of an electron in a periodic crystal potential corresponding to an energy , and is spread over the entire crystal volume. Using the Fourier transform analysis, a spatially localized wave function for the m-th energy band can be constructed from multiple Bloch's functions: These real space wave functions are called Wannier functions, and are fairly closely localized to the atomic site . Of course, if we have exact Wannier functions, the exact Bloch functions can be derived using the inverse Fourier transform. However it is not easy to calculate directly either Bloch functions or Wannier functions. An approximate approach is necessary in the calculation of electronic structures of solids. If we consider the extreme case of isolated atoms, the Wannier function would become an isolated atomic orbital. That limit suggests the choice of an atomic wave function as an approximate form for the Wannier function, the so-called tight binding approximation. Second quantization Modern explanations of electronic structure like the t-J model and the Hubbard model are based on the tight-binding model. Tight binding can be understood by working under a second quantization formalism. Using the atomic orbital as a basis state, the second quantization Hamiltonian operator in the tight binding framework can be written as: , where the symbols denote, in turn, the creation and annihilation operators, the spin polarization, the hopping integral, the nearest-neighbor index, and the Hermitian conjugate of the other term(s). Here, the hopping integral corresponds to the transfer integral in the tight-binding model. Considering extreme cases of , it is impossible for an electron to hop into neighboring sites. This case is the isolated atomic system. If the hopping term is turned on (), electrons can stay in both sites, lowering their kinetic energy. In the strongly correlated electron system, it is necessary to consider the electron-electron interaction. This term can be written in This interaction Hamiltonian includes direct Coulomb interaction energy and exchange interaction energy between electrons. Several novel physical phenomena arise from this electron-electron interaction energy, such as metal-insulator transitions (MIT), high-temperature superconductivity, and several quantum phase transitions. Example: one-dimensional s-band Here the tight binding model is illustrated with an s-band model for a string of atoms with a single s-orbital in a straight line with spacing a and σ bonds between atomic sites. To find approximate eigenstates of the Hamiltonian, we can use a linear combination of the atomic orbitals where N = total number of sites and is a real parameter with . (This wave function is normalized to unity by the leading factor 1/√N provided overlap of atomic wave functions is ignored.) Assuming only nearest neighbor overlap, the only non-zero matrix elements of the Hamiltonian can be expressed as   The energy Ei is the ionization energy corresponding to the chosen atomic orbital and U is the energy shift of the orbital as a result of the potential of neighboring atoms. 
The elements, which are the Slater and Koster interatomic matrix elements, are the bond energies . In this one dimensional s-band model we only have -bonds between the s-orbitals with bond energy . The overlap between states on neighboring atoms is S. We can derive the energy of the state using the above equation:    where, for example, and Thus the energy of this state can be represented in the familiar form of the energy dispersion: . For the energy is and the state consists of a sum of all atomic orbitals. This state can be viewed as a chain of bonding orbitals. For the energy is and the state consists of a sum of atomic orbitals which are a factor out of phase. This state can be viewed as a chain of non-bonding orbitals. Finally for the energy is and the state consists of an alternating sum of atomic orbitals. This state can be viewed as a chain of anti-bonding orbitals. This example is readily extended to three dimensions, for example, to a body-centered cubic or face-centered cubic lattice by introducing the nearest neighbor vector locations in place of simply n a. Likewise, the method can be extended to multiple bands using multiple different atomic orbitals at each site. The general formulation above shows how these extensions can be accomplished. Table of interatomic matrix elements In 1954 J.C. Slater and G.F. Koster published, mainly for the calculation of transition metal d-bands, a table of interatomic matrix elements which can also be derived from the cubic harmonic orbitals straightforwardly. The table expresses the matrix elements as functions of LCAO two-centre bond integrals between two cubic harmonic orbitals, i and j, on adjacent atoms. The bond integrals are for example the , and for sigma, pi and delta bonds (Notice that these integrals should also depend on the distance between the atoms, i.e. are a function of , even though it is not explicitly stated every time.). The interatomic vector is expressed as where d is the distance between the atoms and l, m and n are the direction cosines to the neighboring atom. Not all interatomic matrix elements are listed explicitly. Matrix elements that are not listed in this table can be constructed by permutation of indices and cosine directions of other matrix elements in the table. Note that swapping orbital indices amounts to taking , i.e. . For example, . See also Electronic band structure Nearly-free electron model Bloch's theorems Kronig-Penney model Fermi surface Wannier function Hubbard model t-J model Effective mass Anderson's rule Dynamical theory of diffraction Solid state physics Linear combination of atomic orbitals molecular orbital method (LCAO) Holstein–Herring method Peierls substitution Hückel method References N. W. Ashcroft and N. D. Mermin, Solid State Physics (Thomson Learning, Toronto, 1976). Stephen Blundell Magnetism in Condensed Matter(Oxford, 2001). S.Maekawa et al. Physics of Transition Metal Oxides (Springer-Verlag Berlin Heidelberg, 2004). John Singleton Band Theory and Electronic Properties of Solids (Oxford, 2001). Further reading External links Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012, Tight-Binding Studio: A Technical Software Package to Find the Parameters of Tight-Binding Hamiltonian Electronic structure methods Electronic band structures
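The one-dimensional s-band example above has a simple closed form once the overlap S is neglected: E(k) = E0 + 2γ·cos(ka), with γ the nearest-neighbour bond energy (negative for σ-bonded s orbitals). The Python sketch below evaluates this dispersion across the first Brillouin zone; E0 and γ are illustrative values, not taken from the text.

```python
# Numerical sketch of the 1D s-band dispersion, overlap S neglected:
# E(k) = E0 + 2*gamma*cos(k*a). Parameter values are assumed.
import numpy as np

a = 1.0          # lattice spacing
E0 = 0.0         # E_i + U, the shifted atomic level (eV, assumed)
gamma = -1.0     # nearest-neighbour sigma-bond energy (eV, assumed)

k = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
E = E0 + 2.0 * gamma * np.cos(k * a)

print(E.min(), E.max())   # band runs from E0 - 2|gamma| to E0 + 2|gamma|
# k = 0 gives the bonding chain (lowest energy); k = pi/a gives the
# anti-bonding chain, exactly as described in the text above.
```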
Tight binding
[ "Physics", "Chemistry", "Materials_science" ]
3,471
[ "Electron", "Quantum chemistry", "Quantum mechanics", "Computational physics", "Electronic structure methods", "Electronic band structures", "Computational chemistry", "Condensed matter physics" ]
3,094,621
https://en.wikipedia.org/wiki/Charge%20conservation
In physics, charge conservation is the experimentally established principle that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved. Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between charge density and current density . This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons. Charged particles can be created and destroyed in elementary particle reactions. In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far. Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge. History Charge conservation was first proposed by British scientist William Watson in 1746 and American statesman and scientist Benjamin Franklin in 1747, although the first convincing proof was given by Michael Faraday in 1843. Formal statement of the law Mathematically, we can state the law of charge conservation as a continuity equation: where is the electric charge accumulation rate in a specific volume at time , is the amount of charge flowing into the volume and is the amount of charge flowing out of the volume; both amounts are regarded as generic functions of time. The integrated continuity equation between two time values reads: The general solution is obtained by fixing the initial condition time , leading to the integral equation: The condition corresponds to the absence of charge quantity change in the control volume: the system has reached a steady state. From the above condition, the following must hold true: therefore, and are equal (not necessarily constant) over time, and the overall charge inside the control volume does not change. This deduction can also be derived directly from the continuity equation, since at steady state holds, and implies . In electromagnetic field theory, vector calculus can be used to express the law in terms of charge density (in coulombs per cubic meter) and electric current density (in amperes per square meter). This is called the charge density continuity equation The term on the left is the rate of change of the charge density at a point. The term on the right is the divergence of the current density at the same point. The equation equates these two factors, which says that the only way for the charge density at a point to change is for a current of charge to flow into or out of the point. 
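As a numerical illustration of the accounting relationship above, the following Python sketch integrates the difference between an inflowing and an outflowing current to track the charge in a control volume; the current profiles are arbitrary assumptions chosen for demonstration.

```python
# Toy check of the integrated continuity equation: the charge in a
# control volume changes only by the net current through its boundary.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)        # time, seconds
dt = t[1] - t[0]
i_in = 2.0 + np.sin(t)                  # inflowing current, A (assumed)
i_out = 1.5 + 0.5 * np.cos(t)           # outflowing current, A (assumed)

q0 = 0.0                                # initial charge, coulombs
q = q0 + np.cumsum(i_in - i_out) * dt   # crude running Riemann integral

# If i_in equalled i_out at all times (steady state), q would stay at q0;
# here they differ, so the stored charge drifts:
print(f"charge after 10 s: {q[-1]:.3f} C")
```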
This statement is equivalent to a conservation of four-current. Mathematical derivation The net current into a volume is where is the boundary of oriented by outward-pointing normals, and is shorthand for , the outward pointing normal of the boundary. Here J is the current density (charge per unit area per unit time) at the surface of the volume. The vector points in the direction of the current. From the Divergence theorem this can be written Charge conservation requires that the net current into a volume must necessarily equal the net change in charge within the volume. The total charge q in volume V is the integral (sum) of the charge density in V. So, by the Leibniz integral rule Equating () and () gives Since this is true for every volume, we have in general Derivation from Maxwell's Laws The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the modified Ampere's law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: i.e., By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: In particular, in an isolated system the total charge is conserved. Connection to gauge invariance Charge conservation can also be understood as a consequence of symmetry through Noether's theorem, a central result in theoretical physics that asserts that each conservation law is associated with a symmetry of the underlying physics. The symmetry that is associated with charge conservation is the global gauge invariance of the electromagnetic field. This is related to the fact that the electric and magnetic fields are not changed by different choices of the value representing the zero point of electrostatic potential. However the full symmetry is more complicated, and also involves the vector potential. The full statement of gauge invariance is that the physics of an electromagnetic field are unchanged when the scalar and vector potential are shifted by the gradient of an arbitrary scalar field: In quantum mechanics the scalar field is equivalent to a phase shift in the wavefunction of the charged particle: so gauge invariance is equivalent to the well known fact that changes in the overall phase of a wavefunction are unobservable, and only changes in the magnitude of the wavefunction result in changes to the probability function. Gauge invariance is a very important, well established property of the electromagnetic field and has many testable consequences. The theoretical justification for charge conservation is greatly strengthened by being linked to this symmetry. For example, gauge invariance also requires that the photon be massless, so the good experimental evidence that the photon has zero mass is also strong evidence that charge is conserved. Gauge invariance also implies quantization of hypothetical magnetic charges. Even if gauge symmetry is exact, however, there might be apparent electric charge non-conservation if charge could leak from our normal 3-dimensional space into hidden extra dimensions. Experimental evidence Simple arguments rule out some types of charge nonconservation. For example, the magnitude of the elementary charge on positive and negative particles must be extremely close to equal, differing by no more than one part in 10^21 for the case of protons and electrons. 
Ordinary matter contains equal numbers of positive and negative particles, protons and electrons, in enormous quantities. If the elementary charge on the electron and proton were even slightly different, all matter would have a large electric charge and would be mutually repulsive. The best experimental tests of electric charge conservation are searches for particle decays that would be allowed if electric charge is not always conserved. No such decays have ever been seen. The best experimental test comes from searches for the energetic photon from an electron decaying into a neutrino and a single photon: but there are theoretical arguments that such single-photon decays will never occur even if charge is not conserved. Charge disappearance tests are sensitive to decays without energetic photons, other unusual charge violating processes such as an electron spontaneously changing into a positron, and to electric charge moving into other dimensions. The best experimental bounds on charge disappearance are: See also Capacitance Charge invariance Conservation Laws and Symmetry Introduction to gauge theory – includes further discussion of gauge invariance and charge conservation Kirchhoff's circuit laws – application of charge conservation to electric circuits Maxwell's equations Relative charge density Franklin's electrostatic machine Notes Further reading Electromagnetism Conservation laws
Charge conservation
[ "Physics" ]
1,576
[ "Physical phenomena", "Electromagnetism", "Equations of physics", "Conservation laws", "Fundamental interactions", "Symmetry", "Physics theorems" ]
3,096,030
https://en.wikipedia.org/wiki/Tolman%20surface%20brightness%20test
The Tolman surface brightness test is one out of six cosmological tests that were conceived in the 1930s to check the viability of and compare new cosmological models. Tolman's test compares the surface brightness of galaxies as a function of their redshift (measured as z). Such a comparison was first proposed in 1930 by Richard C. Tolman as a test of whether the universe is expanding or static. It is a unique test of cosmology, as it is independent of dark energy, dark matter and Hubble constant parameters, testing purely for whether cosmological redshift is caused by an expanding universe or not. In a simple (static and flat) universe, the light received from an object drops proportional to the square of its distance and the apparent area of the object also drops proportional to the square of the distance, so the surface brightness (light received per surface area) would be constant, independent of the distance. In an expanding universe, however, there are two effects that change this relation. First, the rate at which photons are received is reduced because each photon has to travel a little farther than the one before. Second, the energy of each photon observed is reduced by the redshift. At the same time, distant objects appear larger than they really are because the photons observed were emitted at a time when the object was closer. Adding these effects together, the surface brightness in a simple expanding universe (flat geometry and uniform expansion over the range of redshifts observed) should decrease with the fourth power of (1 + z). One of the earliest and most comprehensive studies was published in 1996, as observational requirements limited the practicality of the test until then. This test found consistency with an expanding universe. However, therein, the authors note that: A later paper reviewing this work removed the expansion cosmology it had assumed when calculating surface brightness, to make for a fair test, and found that the 1996 results, once the correction was made, did not rule out a static universe. To date, the most complex investigation of the relationship between surface brightness and redshift was carried out using the 10 m Keck telescope to measure nearly a thousand galaxies' redshifts and the 2.4 m Hubble Space Telescope to measure those galaxies' surface brightness. The exponent found is not 4 as expected in the simplest expanding model, but 2.6 or 3.4, depending on the frequency band. The authors summarize: Some subsequent work has pointed out that the analysis tested one possible static cosmology (analogous to Einstein–de Sitter), and that static models with different angular size-distance relationships can pass this test. The predicted difference between static and expanding models diverges dramatically towards higher redshifts; however, accounting for galaxy evolution becomes increasingly uncertain there. The broadest test done to date, out to z = 5, found results consistent with a static universe but was unable to rule out expansion, as it tested only a single model of galaxy size evolution. Static tired-light models remain in conflict with observations of supernovae, as these models do not predict cosmological time dilation. See also Source counts Tired light Time dilation Footnotes Physical cosmology
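The competing predictions are simple power laws, so they are easy to tabulate. The Python sketch below compares the exponent-4 dimming expected for simple expansion with the 2.6 exponent quoted above; recall that a simple static, flat universe would predict no (1 + z) surface-brightness dimming at all.

```python
# Tolman dimming: surface brightness relative to z = 0 scales as
# (1 + z)**-n, with n = 4 for a simple expanding universe; the Keck/HST
# study quoted above reported n = 2.6-3.4 depending on band.

def dimming(z: float, n: float) -> float:
    return (1.0 + z) ** (-n)

for z in (0.5, 1.0, 5.0):
    print(f"z={z}: n=4 -> {dimming(z, 4):.4f}, n=2.6 -> {dimming(z, 2.6):.4f}")
```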
Tolman surface brightness test
[ "Physics", "Astronomy" ]
660
[ "Astrophysics", "Theoretical physics", "Physical cosmology", "Astronomical sub-disciplines" ]
3,096,353
https://en.wikipedia.org/wiki/Zirconium%20alloys
Zirconium alloys are solid solutions of zirconium with other metals, a common subgroup having the trade mark Zircaloy. Zirconium has a very low absorption cross-section for thermal neutrons, high hardness, ductility and corrosion resistance. One of the main uses of zirconium alloys is in nuclear technology, as cladding of fuel rods in nuclear reactors, especially water reactors. A typical composition of nuclear-grade zirconium alloys is more than 95 weight percent zirconium and less than 2% of tin, niobium, iron, chromium, nickel and other metals, which are added to improve mechanical properties and corrosion resistance. The water cooling of reactor zirconium alloys elevates the requirement for their resistance to oxidation-related nodular corrosion. Furthermore, the oxidative reaction of zirconium with water releases hydrogen gas, which partly diffuses into the alloy and forms zirconium hydrides. The hydrides are less dense and are weaker mechanically than the alloy; their formation results in blistering and cracking of the cladding – a phenomenon known as hydrogen embrittlement. Production and properties Commercial non-nuclear grade zirconium typically contains 1–5% of hafnium, whose neutron absorption cross-section is 600 times that of zirconium. Hafnium must therefore be almost entirely removed (reduced to < 0.02% of the alloy) for reactor applications. Nuclear-grade zirconium alloys contain more than 95% Zr, and therefore most of their properties are similar to those of pure zirconium. The absorption cross section for thermal neutrons is 0.18 barn for zirconium, which is much lower than that for such common metals as iron (2.4 barn) and nickel (4.5 barn). The composition and the main applications of common reactor-grade alloys are summarized below. These alloys contain less than 0.3% of iron and chromium and 0.1–0.14% oxygen. *ZIRLO stands for zirconium low oxidation. Microstructure At temperatures below 1100 K, zirconium alloys have a hexagonal close-packed (HCP) crystal structure. The microstructure, revealed by chemical attack, shows needle-like grains typical of a Widmanstätten pattern. Upon annealing below the phase transition temperature (α-Zr to β-Zr), the grains are equiaxed with sizes varying from 3 to 5 μm. Development Zircaloy 1 was developed after zirconium was selected by Admiral H.G. Rickover as the structural material for high flux zone reactor components and cladding for fuel pellet tube bundles in prototype submarine reactors in the late 1940s. The choice was owing to a combination of strength, low neutron cross section and corrosion resistance. Zircaloy-2 was inadvertently developed by melting Zircaloy-1 in a crucible previously used for stainless steel. Newer alloys are Ni-free, including Zircaloy-4, ZIRLO and M5 (with 1% niobium). Oxidation of zirconium alloy Zirconium alloys readily react with oxygen, forming a nanometer-thin passivation layer. The corrosion resistance of the alloys may degrade significantly when some impurities (e.g. more than 40 ppm of carbon or more than 300 ppm of nitrogen) are present. Corrosion resistance of zirconium alloys is enhanced by intentional development of a thicker passivation layer of black lustrous zirconium oxide. Nitride coatings might also be used. Whereas there is no consensus on whether zirconium and its alloys have the same oxidation rate, Zircaloys 2 and 4 do behave very similarly in this respect. Oxidation occurs at the same rate in air or in water and proceeds under ambient conditions or in high vacuum. 
A sub-micrometer thin layer of zirconium dioxide rapidly forms on the surface and stops the further diffusion of oxygen to the bulk and hence the subsequent oxidation. The dependence of the oxidation rate R on temperature and pressure can be expressed as R = 13.9·P^(1/6)·exp(−1.47/(k_B·T)) The oxidation rate R is here expressed in grams per (cm^2·second); P is the pressure in atmospheres, so the factor P^(1/6) = 1 at ambient pressure; the activation energy is 1.47 eV; k_B is the Boltzmann constant (8.617×10^−5 eV/K) and T is the absolute temperature in kelvins. Thus the oxidation rate R is about 10^−20 g per m^2 per second at 0 °C, 6×10^−8 g·m^−2·s^−1 at 300 °C, 5.4 mg·m^−2·s^−1 at 700 °C and 300 mg·m^−2·s^−1 at 1000 °C. Whereas there is no clear threshold of oxidation, it becomes noticeable at macroscopic scales at temperatures of several hundred °C. Oxidation of zirconium by steam One disadvantage of metallic zirconium appears in the case of a loss-of-coolant accident in a nuclear reactor. Zirconium cladding rapidly reacts with water steam above . Oxidation of zirconium by water is accompanied by release of hydrogen gas. This oxidation is accelerated at high temperatures, e.g. inside a reactor core if the fuel assemblies are no longer completely covered by liquid water and insufficiently cooled. Metallic zirconium is then oxidized by the protons of water to form hydrogen gas according to the following redox reaction: Zr + 2 H2O → ZrO2 + 2 H2 Zirconium cladding undergoes the same oxidation on exposure to steam of deuterium oxide (D2O), which is frequently used as the moderator and coolant in pressurized heavy-water reactors such as the CANDU design: Zr + 2 D2O → ZrO2 + 2 D2 This exothermic reaction, although only occurring at high temperature, is similar to that of alkali metals (such as sodium or potassium) with water. It also closely resembles the anaerobic oxidation of iron by water (a reaction used at high temperature by Antoine Lavoisier to produce hydrogen for his experiments). This reaction was responsible for a small hydrogen explosion accident first observed inside the reactor building of Three Mile Island Nuclear Generating Station in 1979, which did not damage the containment building. This same reaction occurred in boiling water reactors 1, 2 and 3 of the Fukushima Daiichi Nuclear Power Plant (Japan) after reactor cooling was interrupted by the related earthquake and tsunami events during the disaster of March 11, 2011, leading to the Fukushima Daiichi nuclear disaster. Hydrogen gas was vented into the reactor maintenance halls, and the resulting explosive mixture of hydrogen with air detonated. The explosions severely damaged external buildings and at least one containment building. The reaction also occurred during the Chernobyl accident, when steam from the reactor began to escape. Many water-cooled reactor containment buildings have catalyst-based passive autocatalytic recombiner units installed to rapidly convert hydrogen and oxygen into water at room temperature before the explosive limit is reached. Formation of hydrides and hydrogen embrittlement In the above oxidation scenario, 5–20% of the released hydrogen diffuses into the zirconium alloy cladding, forming zirconium hydrides. The hydrogen production process also mechanically weakens the rod cladding because the hydrides have lower ductility and density than zirconium or its alloys, and thus blisters and cracks form upon hydrogen accumulation. This process is also known as hydrogen embrittlement. 
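The oxidation-rate expression quoted above can be evaluated directly. The Python sketch below reproduces the order of magnitude of the quoted figures (small discrepancies remain, presumably from rounding in the original source); the prefactor and activation energy are taken from the text, not re-derived.

```python
# Evaluating R = 13.9 * P**(1/6) * exp(-1.47 eV / (kB*T)) in g/(cm^2*s),
# then converting to g/(m^2*s). Constants are as quoted in the text above.
import math

KB_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def oxidation_rate_g_per_cm2_s(T_kelvin: float, P_atm: float = 1.0) -> float:
    return 13.9 * P_atm ** (1.0 / 6.0) * math.exp(-1.47 / (KB_EV_PER_K * T_kelvin))

for T_c in (0, 300, 700, 1000):
    r = oxidation_rate_g_per_cm2_s(T_c + 273.15) * 1.0e4  # per cm^2 -> per m^2
    print(f"{T_c:5d} degC: {r:.2e} g/(m^2*s)")
```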
It has been reported that the concentration of hydrogen within hydrides is also dependent on the nucleation site of the precipitates. In the case of a loss-of-coolant accident (LOCA) in a damaged nuclear reactor, hydrogen embrittlement accelerates the degradation of the zirconium alloy cladding of the fuel rods exposed to high temperature steam. Deformation Zirconium alloys are used in the nuclear industry as fuel rod cladding due to zirconium's high strength and low neutron absorption cross-section. The cladding can be subject to high strain rate loading conditions during forming and in the case of a reactor accident. In this context, the relationship between strain-rate-dependent mechanical properties, crystallographic texture and deformation modes, such as slip and deformation twinning, is of particular interest. Slip Zirconium has a hexagonal close-packed crystal structure (HCP) at room temperature, where 〈𝑎〉 prismatic slip has the lowest critical resolved shear stress. 〈𝑎〉 slip is orthogonal to the unit cell 〈𝑐〉 axis and, therefore, cannot accommodate deformation along 〈𝑐〉. To make up the five independent slip modes and allow arbitrary deformation in a polycrystal, secondary deformation systems such as twinning along pyramidal planes and 〈𝑐 + 𝑎〉 slip on either 1st order or 2nd order pyramidal planes play an important role in Zr polycrystal deformation. Therefore, the relative activity of deformation slip and twinning modes as a function of texture and strain rate is critical in understanding deformation behaviour. Anisotropic deformation during processing affects the texture of the final Zr part; understanding the relative predominance of deformation twinning and slip is important for texture control in processing and predicting likely failure modes in-service. The known deformation systems in Zr are shown in Figure 1. The preferred room temperature slip system with the lowest critical resolved shear stress (CRSS) in dilute Zr alloys is 〈𝑎〉 prismatic slip. The CRSS of 〈𝑎〉 prismatic slip increases with interstitial content, notably oxygen, carbon and nitrogen, and decreases with increasing temperature. 〈𝑎〉 basal slip in high purity single crystal Zr deformed at a low strain rate of 10^−4 s^−1 was only seen at temperatures above 550 °C. At room temperature, basal slip is seen to occur in small amounts as a secondary slip system to 〈𝑎〉 prismatic slip, and is promoted during high strain rate loading. In room-temperature deformation studies of Zr, 〈𝑎〉 basal slip is sometimes ignored and has been shown not to affect macroscopic stress-strain response at room temperature. However, single crystal room temperature microcantilever tests in commercial purity Zr show that 〈𝑎〉 basal slip has only 1.3 times higher CRSS than 〈𝑎〉 prismatic slip, which would imply significant activation in polycrystal deformation given a favourable stress state. 1st order 〈𝑐 + 𝑎〉 pyramidal slip has a 3.5 times higher CRSS than 〈𝑎〉 prismatic slip. Slip on 2nd-order pyramidal planes is rarely seen in Zr alloys, but 〈𝑐 + 𝑎〉 1st-order pyramidal slip is commonly observed. Jensen and Backofen observed localised shear bands with 〈𝑐 + 𝑎〉 dislocations on {112̅4} planes during 〈𝑐〉 axis loading, which led to ductile fracture at room temperature, but this is not the slip plane as 〈𝑐 + 𝑎〉 vectors do not lie in {112̅4} planes. Deformation twinning Deformation twinning produces a coordinated shear transformation in a crystalline material. 
Twin types can be classed as either contraction (C1, C2) or extension (T1, T2) twins, which accommodate strain either to contract or extend the 〈𝑐〉 axis of the hexagonal close-packed (HCP) unit cell. Twinning is crystallographically defined by its twin plane 𝑲𝟏, the mirror plane in the twin and parent material, and 𝜼𝟏, which is the twinning shear direction. Deformation twins in Zr are generally lenticular in shape, lengthening in the 𝜼𝟏 direction and thickening along the 𝑲𝟏 plane normal. The twin plane, shear direction, and shear plane form the basis vectors of an orthogonal set. The axis-angle misorientation relationship between the parent and twin is a rotation of angle 𝜉 about the shear plane's normal direction 𝑷. More generally, twinning can be described as a 180° rotation about an axis (𝜼𝟏 or 𝑲𝟏 normal direction), or a mirror reflection in a plane (𝑲𝟏 or 𝜼𝟏 normal plane). The predominant twin type in zirconium is 𝑲𝟏 = {101̅2} 𝜼𝟏 = 〈101̅1〉 (T1) twinning, and for this {101̅2}〈101̅1〉 twin, there is no distinction between the four transformations, as they are equivalent. Due to symmetry in the HCP crystal structure, six crystallographically equivalent twin variants exist for each type. Different twin variants of the same type in a grain cannot be distinguished by their axis-angle disorientation to the parent, which is the same for all variants of a twin type. Still, they can be distinguished using their absolute orientations with respect to the loading axis, and in some cases (depending on the sectioning plane), the twin boundary trace. The primary twin type formed in any sample depends on the strain state and rate, temperature and crystal orientation. In macroscopic samples, this is typically influenced strongly by the crystallographic texture, grain size, and competing deformation modes (i.e., dislocation slip), combined with the loading axis and direction. The T1 twin type dominates at room temperature and quasi-static strain rates. Twin types present at liquid nitrogen temperature are {112̅2}〈112̅3̅〉 (C1 twinning) and {101̅2}〈101̅1〉 (T1 twinning). Secondary twins of another type may form inside the primary twins as the crystal is reoriented with respect to the loading axis. The C2 compressive twin system {101̅1}〈1̅012〉 is only active at high temperatures, and is activated in preference to basal slip during deformation at 550 °C. Influence of loading conditions on deformation modes Kaschner and Gray observe that yield stress increases with increasing strain rate in the range of 0.001 s^−1 to 3500 s^−1, and that the strain rate sensitivity in the yield stress is higher when uniaxially compressing along texture components with predominantly prismatic planes than along those with predominantly basal planes. They conclude that the rate sensitivity of the flow stress is consistent with Peierls forces inhibiting dislocation motion in low-symmetry metals during slip-dominated deformation. This is valid in the early stages of room temperature deformation, which in Zr is usually slip-dominated. Samples compressed along texture components with predominantly prismatic planes yield at lower stresses than texture components with predominantly basal planes, consistent with the higher critical resolved shear stress for 〈𝑐 + 𝑎〉 pyramidal slip compared to 〈𝑎〉 prismatic slip. In a transmission electron microscopy study of room temperature deformed zirconium, McCabe et al. observed only 〈𝑎〉 dislocations in samples with prismatic texture, which were presumed to lie on prismatic planes. 
Both 〈𝑎〉 (prismatic) and 〈112̅3̅〉 〈𝑐 + 𝑎〉 ({101̅1} pyramidal) slip were observed in samples with basal texture at room temperature, but only 〈𝑎〉 dislocations were observed in the same sample at liquid nitrogen temperature. At quasi-static strain rates, McCabe et al. only observed T1 twinning in samples compressed along a plate direction with a prismatic texture component along the loading axis. They did not observe T1 twinning in samples compressed along basal textures to 25% strain. Kaschner and Gray observe that deformation at high strain rates (3000 s^−1) produces more twins than at quasi-static strain rates, but the twin types activated were not identified. Capolungo et al. studied twinning as a function of grain orientation within a sample. They calculated a global Schmid factor using the macroscopic applied stress direction. They found the resolved shear stress on any grain without considering local intergranular interactions, which may alter the stress state. They found that although the majority of twins occur in grains favourably oriented for twinning according to the global Schmid factor, around 30% of grains which were unfavourably oriented for twinning still contained twins. Likewise, the twins present were not always of the highest global Schmid factor variant, with only 60% twinning on the highest Schmid factor variant. This can be attributed to a strong dependence on the local stress conditions in grains or at grain boundaries, which is difficult to measure experimentally, particularly at high strain rates. Knezevic et al. fitted experimental data of high-purity polycrystalline Zr to a self-consistent viscoplastic model to study the rate and temperature sensitivity of the slip and twinning systems. They found that T1 twinning was the dominant deformation system at room temperature for strain rates between 10^−3 and 10^3 s^−1. Basal slip did not contribute to deformation below 400 °C. Twinning was found to be rate insensitive, and the rate sensitivity of slip could explain changes in twinning behaviour as a function of strain rate. T1 twinning occurs during both quasi-static and high-rate loading. T2 twinning occurs only at high-rate loading. Similar area fractions of T1 and T2 twinning are activated at a high strain rate, but T2 twinning carries more plastic deformation due to its higher twinning shear. T1 twins tend to thicken with incoherent boundary traces in preference to lengthening along the twinning plane, and in some cases nearly consume the entire parent grain. Several variants of T1 twins can nucleate in the same grain, and the twin tips are pinched at grain interiors. On the other hand, T2 twins preferentially lengthen instead of thicken, and tend to nucleate in parallel rows of the same variant extending from boundary to boundary. For commercially pure zirconium (CP-Zr) of 97.0% purity, basal, 〈𝑎〉 pyramidal, and 〈𝑐 + 𝑎〉 pyramidal slip systems dominate room temperature compression along the normal direction (ND) at both quasi-static and high strain rate loading, which is not seen in high-purity polycrystalline and single crystal Zr. In 〈𝑎〉 axis transverse direction (TD) deformation, 〈𝑎〉 prismatic and 〈𝑎〉 pyramidal slip systems are dominant. 〈𝑎〉 pyramidal and basal slip systems are more prevalent than currently reported in the literature, though this may be because conventional analysis routes do not easily identify 〈𝑎〉 pyramidal slip. Basal slip systems are promoted, and 〈𝑎〉 prismatic slip is suppressed, at high strain rate (HR) compared to quasi-static strain rate (QS) loading. 
This is independent of loading axis texture (ND/TD). Applications Zirconium alloys are corrosion resistant and biocompatible, and therefore can be used for body implants. In one particular application, a Zr-2.5Nb alloy is formed into a knee or hip implant and then oxidized to produce a hard ceramic surface for use in bearing against a polyethylene component. This oxidized zirconium alloy material provides the beneficial surface properties of a ceramic (reduced friction and increased abrasion resistance), while retaining the beneficial bulk properties of the underlying metal (manufacturability, fracture toughness, and ductility), providing a good solution for these medical implant applications. Zr702 and Zr705 are zirconium alloys known for their high corrosion resistance. Zr702 is a commercially pure grade, widely used for its high corrosion resistance and low neutron absorption, particularly in nuclear and chemical industries. Zr705, alloyed with 2-3% niobium, shows enhanced strength and crack resistance and is used for high-stress applications such as demanding chemical processing environments, and medical implants. Reduction of zirconium demand in Russia due to nuclear demilitarization after the end of the Cold War resulted in the exotic production of household zirconium items such as the vodka shot glass shown in the picture. References See also Google books search results for the dedicated conference named "Zirconium in the nuclear industry" Construction of the Fukushima nuclear power plants Google books search results Stith, Tai. Science, Submarines & Secrets: The Incredible Early Years of the Albany Research Center. United States, Owl Room Press ISBN 9781735136646. Zirconium alloys Nuclear materials
Zirconium alloys
[ "Physics", "Chemistry" ]
4,235
[ "Materials", "Nuclear materials", "Alloys", "Zirconium alloys", "Matter" ]
3,096,890
https://en.wikipedia.org/wiki/Minor%20actinide
A minor actinide is an actinide, other than uranium or plutonium, found in spent nuclear fuel. The minor actinides include neptunium (element 93), americium (element 95), curium (element 96), berkelium (element 97), californium (element 98), einsteinium (element 99), and fermium (element 100). The most important isotopes of these elements in spent nuclear fuel are neptunium-237, americium-241, americium-243, curium-242 through -248, and californium-249 through -252. Plutonium and the minor actinides will be responsible for the bulk of the radiotoxicity and heat generation of spent nuclear fuel in the long term (300 to 20,000 years in the future). The plutonium from a power reactor tends to have a greater amount of plutonium-241 than the plutonium generated by the lower burnup operations designed to create weapons-grade plutonium. Because the reactor-grade plutonium contains so much 241Pu, the presence of 241Am makes the plutonium less suitable for making a nuclear weapon. The ingrowth of americium in plutonium is one of the methods for identifying the origin of an unknown sample of plutonium and the time since it was last separated chemically from the americium. Americium is commonly used in industry as both an alpha particle source and as a low photon-energy gamma radiation source. For example, it is commonly used in smoke detectors. Americium can be formed by neutron capture of 239Pu and 240Pu, forming 241Pu which then beta decays to 241Am. In general, as the energy of the neutrons increases, the ratio of the fission cross section to the neutron capture cross section changes in favour of fission. Hence, if MOX is used in a thermal reactor such as a boiling water reactor (BWR) or pressurized water reactor (PWR) then more americium can be expected to be found in the spent fuel than in that from a fast neutron reactor. Some of the minor actinides have been found in fallout from bomb tests. See Actinides in the environment for details. References Nuclear materials
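The americium "clock" mentioned above can be illustrated with the two-member Bateman equation. The sketch below uses rounded half-lives and neglects every other production and loss channel, so it is an approximation, not a safeguards-grade calculation.

```python
import numpy as np

# Rounded half-lives in years: 241Pu (beta decay) -> 241Am.
T_PU241, T_AM241 = 14.3, 432.2
lp = np.log(2) / T_PU241   # decay constant of 241Pu
ld = np.log(2) / T_AM241   # decay constant of 241Am

def am241_fraction(t):
    """Fraction of the initial 241Pu atoms present as 241Am after t years,
    assuming the sample was americium-free at separation (Bateman chain)."""
    return lp / (ld - lp) * (np.exp(-lp * t) - np.exp(-ld * t))

for t in (1, 5, 14.3, 50):
    print(f"t = {t:5.1f} y : 241Am / initial 241Pu = {am241_fraction(t):.4f}")
```

Inverting this curve against a measured 241Am/241Pu ratio gives the time since the plutonium was last chemically separated, which is the dating method the article refers to.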
Minor actinide
[ "Physics" ]
472
[ "Materials", "Nuclear materials", "Matter" ]
3,096,911
https://en.wikipedia.org/wiki/Feist%E2%80%93Benary%20synthesis
The Feist–Benary synthesis is an organic reaction between α-halo ketones and β-dicarbonyl compounds to produce substituted furan compounds. This condensation reaction is catalyzed by amines such as ammonia and pyridine. The first step in the ring synthesis is related to the Knoevenagel condensation. In the second step the enolate displaces an alkyl halogen in a nucleophilic aliphatic substitution. Modifications In place of α-haloketones, propargyl sulfonium salts can be used to alkylate the diketone. Another modification is the enantioselective interrupted Feist–Benary reaction, in which a chiral auxiliary based on the cinchona alkaloid quinine, in the presence of proton sponge, gives the hydroxydihydrofuran. This type of alkaloid is also used in asymmetric synthesis in the AD-mix. The alkaloid is protonated throughout the reaction and transfers its chirality by interaction of the acidic ammonium hydrogen with the dicarbonyl group of ethyl bromopyruvate in a 5-membered transition state. Historic references References Oxygen heterocycle forming reactions Heterocycle forming reactions Name reactions
Feist–Benary synthesis
[ "Chemistry" ]
265
[ "Name reactions", "Ring forming reactions", "Heterocycle forming reactions", "Organic reactions" ]
3,098,397
https://en.wikipedia.org/wiki/Molecular%20laser%20isotope%20separation
Molecular laser isotope separation (MLIS) is a method of isotope separation in which specially tuned lasers are used to separate isotopes of uranium by selectively exciting, and then dissociating, uranium hexafluoride molecules. It is similar to AVLIS. Its main advantages over AVLIS are low energy consumption and the use of uranium hexafluoride instead of vaporized uranium. MLIS was conceived in 1971 at the Los Alamos National Laboratory. MLIS operates in a cascade setup, like the gaseous diffusion process. Unlike AVLIS, whose working medium is vaporized uranium, MLIS works with uranium hexafluoride, which requires a much lower temperature to vaporize. The UF6 gas is mixed with a suitable carrier gas (a noble gas together with some hydrogen) which allows the molecules to remain in the gaseous phase after being cooled by expansion through a supersonic de Laval nozzle. A scavenger gas (e.g. methane) is also included in the mixture to bind with the fluorine atoms after they are dissociated from the UF6 and inhibit their recombination with the enriched UF5 product. In the first stage, the expanded and cooled stream of UF6 is irradiated with an infrared laser operating at a wavelength of 16 μm, which selectively excites the 235UF6 molecules. The mix is then irradiated with another laser, either infrared or ultraviolet, whose photons are absorbed by the excited 235UF6, causing its photolysis to 235UF5 and fluorine. The resultant enriched UF5 forms a solid which is then separated from the gas by filtration or a cyclone separator. The precipitated UF5 is relatively enriched in 235UF5 and, after conversion back to UF6, is fed to the next stage of the cascade to be further enriched. The laser for the excitation is usually a carbon dioxide laser with its output wavelength shifted from 10.6 μm to 16 μm; the photolysis laser may be an excimer laser operating at 308 nm; however, infrared lasers are mostly used in existing implementations. The process is complex: many mixed UFx compounds are formed which contaminate the product and are difficult to remove. The United States, France, United Kingdom, Germany and South Africa have reported the termination of their MLIS programs; however, Japan still has a small-scale program in operation. The Commonwealth Scientific and Industrial Research Organisation in Australia has developed the SILEX pulsed laser separation process. GE, Cameco and Hitachi are currently involved in developing it for commercial use. See also Atomic vapor laser isotope separation Australian Atomic Energy Commission Calutron Nuclear fuel cycle Nuclear power References External links Laser isotope separation uranium enrichment Reed J. Jenson, O'Dean P. Judd, and J. Allan Sullivan, "Separating Isotopes with Lasers", Los Alamos Science vol. 4, 1982. Article in New York Times (August 20, 2011) regarding General Electric's plans to build a commercial laser enrichment facility in Wilmington, North Carolina, USA. Silex information Chemical processes Isotope separation Uranium
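Two quick calculations make the energetics and the cascade arrangement concrete. The photon energies follow directly from the wavelengths in the text; the per-stage enrichment factors in the cascade estimate are assumed round numbers for illustration, not published MLIS performance figures.

```python
import math

h, c, eV = 6.62607015e-34, 2.99792458e8, 1.602176634e-19

def photon_energy_ev(wavelength_m):
    """Photon energy in eV at a given vacuum wavelength."""
    return h * c / (wavelength_m * eV)

print(f"16 um excitation : {photon_energy_ev(16e-6):.3f} eV")   # ~0.08 eV vibrational quantum
print(f"308 nm photolysis: {photon_energy_ev(308e-9):.2f} eV")  # ~4 eV UV quantum

# Ideal-cascade stage estimate: N = ln(R_product / R_feed) / ln(beta),
# where R is the 235U/238U abundance ratio and beta an assumed stage factor.
r_feed = 0.00711 / (1 - 0.00711)   # natural uranium
r_prod = 0.05 / 0.95               # ~5% enriched reactor fuel
for beta in (1.3, 2.0, 5.0):
    n = math.log(r_prod / r_feed) / math.log(beta)
    print(f"stage factor beta = {beta:3.1f} -> ~{n:.1f} enriching stages")
```

The large per-stage separation factors achievable with laser selectivity are why a laser cascade needs only a handful of stages, compared with roughly a thousand for gaseous diffusion.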
Molecular laser isotope separation
[ "Chemistry" ]
630
[ "Chemical process engineering", "Chemical processes", "nan" ]
3,098,704
https://en.wikipedia.org/wiki/Chemical%20oxygen%20iodine%20laser
A chemical oxygen iodine laser (COIL) is a near-infrared chemical laser. As the beam is infrared, it cannot be seen with the naked eye. It is capable of output power scaling up to megawatts in continuous mode. Its output wavelength is 1315 nm, a transition wavelength of atomic iodine. Principles of operation The laser is fed with gaseous chlorine, molecular iodine, and an aqueous mixture of hydrogen peroxide and potassium hydroxide. The aqueous peroxide solution undergoes a chemical reaction with chlorine, producing heat, potassium chloride, and oxygen in an excited state, singlet delta oxygen. Spontaneous transition of excited oxygen to the triplet sigma ground state is forbidden, giving the excited oxygen a spontaneous lifetime of about 45 minutes. This allows the singlet oxygen to transfer its energy to the iodine atoms present in the gas stream; the atomic transition 2P3/2 to 2P1/2 in atomic iodine is nearly resonant with the singlet oxygen, so the energy transfer during the collision of the particles is rapid. The excited 2P1/2 iodine atoms then undergo stimulated emission and lase at 1.315 μm in the optical resonator region of the laser (in the laser transition the roles of the states are reversed, with 2P1/2 being the upper state). The laser operates at relatively low gas pressures, but the gas flow has to be near the speed of sound through the reaction zone; even supersonic-flow designs have been described. The low pressure and fast flow make removal of heat from the lasing medium easy, in comparison with high-power solid-state lasers. The reaction products are potassium chloride, water, and oxygen. Traces of chlorine and iodine are removed from the exhaust gases by a halogen scrubber. History and applications COIL was developed by the US Air Force in 1977, for military purposes. However, its properties make it useful for industrial processing as well; the beam is focusable and can be transferred by an optical fiber, as its wavelength is not absorbed much by fused silica but is well absorbed by metals, making it suitable for laser cutting and drilling. Rapid cutting of stainless steel and hastelloy with a fiber-coupled COIL has been demonstrated. In 1996, TRW Incorporated managed to produce a continuous beam of hundreds of kilowatts of power that lasted for several seconds. RADICL (Research Assessment, Device Improvement Chemical Laser) is a 20 kW COIL laser tested by the United States Air Force around 1998. COIL is a component of the United States' military airborne laser and advanced tactical laser programs. On February 11, 2010, this weapon was successfully used to shoot down a missile off the central California coast in a test conducted with a laser aboard a Boeing 747 that took off from the Point Mugu Naval Air Warfare Center (for more details, see Boeing YAL-1). Other iodine based lasers All gas-phase iodine laser (AGIL) is a similar construction using all-gas reagents, more suitable for aerospace applications. The ElectricOIL, or EOIL, offers the same iodine lasing species in an alternate gas-electric hybrid variant. See also Peresvet (laser weapon) List of laser articles References External links Popular Science: The Flying Laser Cannon Patent for the 'High energy airborne chemical oxygen iodine laser (COIL)' 'Laser jumbo' testing moves ahead Chemical lasers American inventions
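The near-resonance that makes the O2–I energy transfer fast can be checked with commonly quoted (approximate) term energies; these numbers come from general spectroscopy references, not from this article.

```python
# Commonly quoted term energies (approximate, cm^-1):
E_O2_A1DELTA = 7882.0   # O2(a1Delta) above the triplet sigma ground state
E_I_2P12     = 7603.0   # I(2P1/2) above the I(2P3/2) ground state

defect = E_O2_A1DELTA - E_I_2P12   # what a collision must thermalize
kT_300K = 208.5                    # k_B * 300 K expressed in cm^-1

print(f"energy defect    : {defect:.0f} cm^-1 ({defect / 8065.5:.3f} eV)")
print(f"defect / kT(300) : {defect / kT_300K:.2f}")   # of order 1 -> rapid transfer
print(f"laser wavelength : {1e7 / E_I_2P12:.0f} nm")  # ~1315 nm, as in the text
```

An energy defect of only a few kT is what "nearly resonant" means in practice: collisions can shuttle the excitation onto iodine without dumping a large quantum of energy into translation.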
Chemical oxygen iodine laser
[ "Chemistry" ]
706
[ "Chemical reaction engineering", "Chemical lasers" ]
3,099,755
https://en.wikipedia.org/wiki/Ostwald%E2%80%93Freundlich%20equation
The Ostwald–Freundlich equation governs boundaries between two phases; specifically, it relates the surface tension of the boundary to its curvature, the ambient temperature, and the vapor pressure or chemical potential in the two phases. The Ostwald–Freundlich equation for a droplet or particle with radius $r$ is: $$\ln\frac{p}{p_{\mathrm{eq}}} = \frac{2\gamma V_{\mathrm{atom}}}{k_B T\, r}$$ where $V_{\mathrm{atom}}$ = atomic volume, $k_B$ = Boltzmann constant, $\gamma$ = surface tension (J m−2), $p_{\mathrm{eq}}$ = equilibrium partial pressure (or chemical potential or concentration), $p$ = partial pressure (or chemical potential or concentration), $T$ = absolute temperature. One consequence of this relation is that small liquid droplets (i.e., particles with a high surface curvature) exhibit a higher effective vapor pressure, since the surface is larger in comparison to the volume. Another notable example of this relation is Ostwald ripening, in which surface tension causes small precipitates to dissolve and larger ones to grow. Ostwald ripening is thought to occur in the formation of orthoclase megacrysts in granites as a consequence of subsolidus growth. See rock microstructure for more. History In 1871, Lord Kelvin (William Thomson) obtained the following relation governing a liquid-vapor interface: $$p(r_1, r_2) = P - \frac{\gamma\,\rho_{\mathrm{vapor}}}{\rho_{\mathrm{liquid}} - \rho_{\mathrm{vapor}}}\left(\frac{1}{r_1} + \frac{1}{r_2}\right)$$ where: $p(r_1, r_2)$ = vapor pressure at a curved interface, $P$ = vapor pressure at a flat interface ($r_1 = r_2 = \infty$), $\gamma$ = surface tension, $\rho_{\mathrm{vapor}}$ = density of vapor, $\rho_{\mathrm{liquid}}$ = density of liquid, $r_1$, $r_2$ = radii of curvature along the principal sections of the curved interface. In his dissertation of 1885, Robert von Helmholtz (son of the German physicist Hermann von Helmholtz) derived the Ostwald–Freundlich equation and showed that Kelvin's equation could be transformed into the Ostwald–Freundlich equation. The German physical chemist Wilhelm Ostwald derived the equation apparently independently in 1900; however, his derivation contained a minor error which the German chemist Herbert Freundlich corrected in 1909. Derivation from Kelvin's equation According to Lord Kelvin's equation of 1871, $$p(r_1, r_2) = P - \frac{\gamma\,\rho_{\mathrm{vapor}}}{\rho_{\mathrm{liquid}} - \rho_{\mathrm{vapor}}}\left(\frac{1}{r_1} + \frac{1}{r_2}\right).$$ If the particle is assumed to be spherical, then $r_1 = r_2 = r$; hence, $$p(r) = P - \frac{2\gamma}{r}\,\frac{\rho_{\mathrm{vapor}}}{\rho_{\mathrm{liquid}} - \rho_{\mathrm{vapor}}}.$$ Note: Kelvin defined the surface tension as the work that was performed per unit area by the interface rather than on the interface; hence his term containing $\gamma$ has a minus sign. In what follows, the surface tension will be defined so that the term containing $\gamma$ has a plus sign. Since $\rho_{\mathrm{liquid}} \gg \rho_{\mathrm{vapor}}$, then $\rho_{\mathrm{liquid}} - \rho_{\mathrm{vapor}} \approx \rho_{\mathrm{liquid}}$; hence, $$p(r) \approx P + \frac{2\gamma}{r}\,\frac{\rho_{\mathrm{vapor}}}{\rho_{\mathrm{liquid}}}.$$ Assuming that the vapor obeys the ideal gas law, then $$\rho_{\mathrm{vapor}} = \frac{m}{V} = \frac{nM}{V} = \frac{PM}{RT},$$ where $m$ = mass of a volume $V$ of vapor, $M$ = molecular weight of vapor, $n$ = number of moles of vapor in volume $V$ of vapor, $N_A$ = Avogadro constant, $R$ = ideal gas constant. Since $\bar m = M/N_A$ is the mass of one molecule of vapor or liquid, then the volume of one molecule is $V_{\mathrm{atom}} = \bar m/\rho_{\mathrm{liquid}}$. Hence $$\frac{\rho_{\mathrm{vapor}}}{\rho_{\mathrm{liquid}}} = \frac{P\,V_{\mathrm{atom}}}{k_B T},$$ where $k_B = R/N_A$. Thus $$p(r) \approx P\left(1 + \frac{2\gamma V_{\mathrm{atom}}}{k_B T\, r}\right).$$ Since $p(r)/P = 1 + \frac{2\gamma V_{\mathrm{atom}}}{k_B T\, r}$, then $$\ln\frac{p(r)}{P} = \ln\left(1 + \frac{2\gamma V_{\mathrm{atom}}}{k_B T\, r}\right).$$ If $x \ll 1$, then $\ln(1 + x) \approx x$. Hence $$\ln\frac{p(r)}{P} \approx \frac{2\gamma V_{\mathrm{atom}}}{k_B T\, r},$$ which is the Ostwald–Freundlich equation. See also Köhler theory Kelvin equation References Thermodynamic equations Petrology Surface science
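A quick numerical check of the equation for water droplets; the property values are rounded room-temperature textbook figures, not taken from this article.

```python
import numpy as np

# Vapor-pressure enhancement p/p_eq = exp(2*gamma*V / (kB*T*r)) for water.
gamma = 0.072                      # surface tension of water, J/m^2
V = 0.018 / (1000 * 6.022e23)      # volume per molecule = M / (rho * N_A), m^3
kB, T = 1.380649e-23, 298.0

for r in (1e-9, 10e-9, 100e-9, 1e-6):
    ratio = np.exp(2 * gamma * V / (kB * T * r))
    print(f"r = {r * 1e9:7.1f} nm : p/p_eq = {ratio:.3f}")
```

The enhancement is negligible above a micrometre but becomes large below roughly 10 nm, which is why small droplets evaporate in favour of large ones during Ostwald ripening.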
Ostwald–Freundlich equation
[ "Physics", "Chemistry", "Materials_science" ]
587
[ "Thermodynamic equations", "Equations of physics", "Surface science", "Condensed matter physics", "Thermodynamics" ]
3,099,929
https://en.wikipedia.org/wiki/Hydrogen%20fluoride%20laser
The hydrogen fluoride laser is an infrared chemical laser. It is capable of delivering continuous output power in the megawatt range. Hydrogen fluoride lasers operate at wavelengths of 2.7–2.9 μm. This wavelength band is absorbed by the atmosphere, effectively attenuating the beam and reducing its reach, unless the laser is used in a vacuum environment. However, when deuterium is used instead of hydrogen, deuterium fluoride lases at a wavelength of about 3.8 μm. This makes the deuterium fluoride laser usable for terrestrial operations. Deuterium fluoride laser The deuterium fluoride laser resembles a rocket engine in its construction. In the combustion chamber, ethylene is burned in nitrogen trifluoride. This reaction produces free excited fluorine radicals. Just after the nozzle, a mixture of helium and hydrogen or deuterium gas is injected into the exhaust stream; the hydrogen or deuterium reacts with the fluorine radicals, producing excited molecules of deuterium fluoride or hydrogen fluoride. The excited molecules then undergo stimulated emission in the optical resonator region of the laser. Deuterium fluoride lasers have found military applications: the MIRACL laser, the Pulsed energy projectile anti-personnel weapon, and the Tactical High Energy Laser are of the deuterium fluoride type. Fusion An Argentine-American physicist and accused spy, Leonardo Mascheroni, has proposed the idea of using hydrogen fluoride lasers to produce nuclear fusion. References Chemical lasers
Hydrogen fluoride laser
[ "Chemistry" ]
320
[ "Chemical reaction engineering", "Chemical lasers" ]
3,100,090
https://en.wikipedia.org/wiki/Chemical%20laser
A chemical laser is a laser that obtains its energy from a chemical reaction. Chemical lasers can reach continuous wave output with power reaching megawatt levels. They are used in industry for cutting and drilling. Common examples of chemical lasers are the chemical oxygen iodine laser (COIL), the all gas-phase iodine laser (AGIL), and the hydrogen fluoride (HF) and deuterium fluoride (DF) lasers, all operating in the mid-infrared region. There is also a DF–CO2 laser (deuterium fluoride–carbon dioxide), which, like COIL, is a "transfer laser." The HF and DF lasers are unusual in that there are several molecular energy transitions with sufficient energy to cross the threshold required for lasing. Since the molecules do not collide frequently enough to re-distribute the energy, several of these laser modes operate either simultaneously or in extremely rapid succession, so that an HF or DF laser appears to operate simultaneously on several wavelengths unless a wavelength selection device is incorporated into the resonator. Origin of the CW chemical HF/DF laser The possibility of creating infrared lasers based on the vibrationally excited products of a chemical reaction was first proposed by John Polanyi in 1961. A pulsed chemical laser was demonstrated by Jerome V. V. Kasper and George C. Pimentel in 1965. First, chlorine (Cl2) was photodissociated into atoms, which then reacted with hydrogen, yielding hydrogen chloride (HCl) in an excited state suitable for a laser. Then hydrogen fluoride (HF) and deuterium fluoride (DF) were demonstrated. Pimentel went on to explore a DF-CO2 transfer laser. Although this work did not produce a purely chemical continuous wave laser, it paved the way by showing the viability of the chemical reaction as a pumping mechanism for a chemical laser. The continuous wave (CW) chemical HF laser was first demonstrated in 1969, and patented in 1972, by D. J. Spencer, T. A. Jacobs, H. Mirels and R. W. F. Gross at The Aerospace Corporation in El Segundo, California. This device used the mixing of adjacent streams of H2 and F, within an optical cavity, to create vibrationally excited HF that lased. The atomic fluorine was provided by dissociation of SF6 gas using a DC electrical discharge. Later work at US Army, US Air Force, and US Navy contractor organizations (e.g. TRW) used a chemical reaction to provide the atomic fluorine, a concept included in the patent disclosure of Spencer et al. The latter configuration obviated the need for electrical power and led to the development of high-power lasers for military applications. The analysis of HF laser performance is complicated by the need to simultaneously consider the fluid-dynamic mixing of adjacent supersonic streams, multiple non-equilibrium chemical reactions, and the interaction of the gain medium with the optical cavity. The researchers at The Aerospace Corporation developed the first exact analytic (flame sheet) solution, the first numerical computer code solution and the first simplified model describing CW HF chemical laser performance. Chemical lasers stimulated the use of wave-optics calculations for resonator analysis. This work was pioneered by E. A. Sziklas (Pratt & Whitney) and A. E. Siegman (Stanford University). 
Part I of their work dealt with the Hermite–Gaussian expansion and has received little use compared with Part II, which dealt with the fast Fourier transform method, now a standard tool at United Technologies Corporation, Lockheed Martin, SAIC, Boeing, tOSC, MZA (Wave Train), and OPCI. Most of these companies competed for contracts to build HF and DF lasers for DARPA, the US Air Force, the US Army, or the US Navy throughout the 1970s and 1980s. General Electric and Pratt & Whitney dropped out of the competition in the early 1980s, leaving the field to Rocketdyne (now part of Pratt & Whitney, although the laser organization remains today with Boeing) and TRW (now part of Northrop Grumman). Comprehensive chemical laser models were developed at SAIC by R. C. Wade, at TRW by C.-C. Shih, D. Bullock and M. E. Lainhart, and at Rocketdyne by D. A. Holmes and T. R. Waite. Of these, perhaps the most sophisticated was the CROQ code at TRW, outpacing the early work at The Aerospace Corporation. Performance The early analytical models, coupled with chemical rate studies, led to the design of efficient experimental CW HF laser devices at United Aircraft and The Aerospace Corporation. Power levels up to 10 kW were achieved. DF lasing was obtained by the substitution of D2 for H2. A group at United Aircraft Research Laboratories produced a re-circulating chemical laser, which did not rely on the continuous consumption of chemical reactants. The TRW Systems Group in Redondo Beach, California, subsequently received US Air Force contracts to build higher-power CW HF/DF lasers. Using a scaled-up version of an Aerospace Corporation design, TRW achieved 100 kW power levels. General Electric, Pratt & Whitney, and Rocketdyne built various chemical lasers on company funds in anticipation of receiving DoD contracts to build even larger lasers. Only Rocketdyne received contracts of sufficient value to continue competing with TRW. TRW produced the MIRACL device for the U.S. Navy that achieved megawatt power levels. The latter is believed to be the highest power continuous laser, of any type, developed to date (2007). TRW also produced a cylindrical chemical laser (the Alpha laser) for DARPA's Zenith Star program, which had the theoretical advantage of being scalable to even larger powers. However, by 1990, interest in chemical lasers had shifted toward shorter wavelengths, and the chemical oxygen iodine laser (COIL) gained the most interest, producing radiation at 1.315 μm. There is a further advantage that the COIL laser generally produces single-wavelength radiation, which is very helpful for forming a well focused beam. This type of COIL laser is used today in the ABL (Airborne Laser, the laser itself being built by Northrop Grumman) and in the ATL (Advanced Tactical Laser) produced by Boeing. Meanwhile, a lower-power HF laser was used for the THEL (Tactical High Energy Laser) built in the late 1990s for the Israeli Ministry of Defense in cooperation with the U.S. Army SMDC. It is the first fielded high energy laser to demonstrate effectiveness in fairly realistic tests against rockets and artillery. The MIRACL laser has demonstrated effectiveness against certain targets flown in front of it at White Sands Missile Range, but it is not configured for actual service as a fielded weapon. ABL was successful in shooting down several full-sized missiles from significant ranges, and ATL was successful in disabling moving land vehicles and other tactical targets. 
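The FFT method referred to above survives as the standard "angular spectrum" beam-propagation step in wave-optics codes. The sketch below is a minimal generic implementation, not code from any of the programs named; the grid size, wavelength and beam waist are arbitrary illustrative values.

```python
import numpy as np

def angular_spectrum_step(field, wavelength, dx, dz):
    """Propagate a sampled complex field through free space by distance dz
    using the FFT (angular spectrum) method. `field` is a square complex
    array with grid pitch dx; all lengths share the same units."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    kxy = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular spatial frequencies
    kx, ky = np.meshgrid(kxy, kxy)
    kz = np.sqrt(np.maximum(k**2 - kx**2 - ky**2, 0.0))  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Toy example: a Gaussian beam spreading over half a metre.
n, dx, wl = 256, 50e-6, 2.8e-6            # samples, 50 um pitch, HF-band wavelength
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (1.5e-3) ** 2)   # ~1.5 mm waist
u1 = angular_spectrum_step(u0, wl, dx, 0.5)
print(f"peak intensity falls to {abs(u1).max()**2:.3f} of 1.0 after 0.5 m")
```

A resonator analysis iterates steps like this one between mirror, aperture, and gain-sheet operators until the circulating field converges to the dominant mode.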
Despite the performance advantages of chemical lasers, the Department of Defense stopped all development of chemical laser systems with the termination of the Airborne Laser Testbed in 2012. The desire for a "renewable" power source, i.e. not having to supply unusual chemicals like fluorine, deuterium, basic hydrogen peroxide, or iodine, led the DoD to push for electrically pumped lasers such as diode-pumped alkali lasers (DPALs). An "Inside the Army" weekly report mentions a "Directed Energy Master Plan". References American inventions
Chemical laser
[ "Chemistry" ]
1,572
[ "Chemical reaction engineering", "Chemical lasers" ]
5,633,169
https://en.wikipedia.org/wiki/Protocrystalline
A protocrystalline phase is a distinct phase occurring during crystal growth, which evolves into a microcrystalline form. The term is typically associated with silicon films in optical applications such as solar cells. Applications Silicon solar cells Amorphous silicon (a-Si) is a popular solar cell material owing to its low cost and ease of production. Owing to its disordered structure (Urbach tail), its absorption extends to energies below the band gap, resulting in a wide-range spectral response; however, it has a relatively low solar cell efficiency. Protocrystalline Si (pc-Si:H) has a relatively low absorption near the band gap owing to its more ordered crystalline structure. Thus, protocrystalline and amorphous silicon can be combined in a tandem solar cell, where the top thin layer of a-Si:H absorbs short-wavelength light whereas the underlying protocrystalline silicon layer absorbs the longer wavelengths. See also Amorphous silicon Crystallite Multijunction Polycarbonate (PC) Polyethylene terephthalate (PET) References External links Crystallography Thin-film cells
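A Beer–Lambert sketch of the spectrum splitting in such a tandem stack; the absorption coefficients below are made-up illustrative magnitudes, not measured data for either material.

```python
import numpy as np

# Fraction of light a thin top a-Si:H layer removes, via the Beer-Lambert law.
d_top = 100e-7                     # 100 nm top layer, in cm
alpha = {"blue (450 nm)": 5e4,     # cm^-1, illustrative only
         "red  (650 nm)": 5e3}

for band, a in alpha.items():
    absorbed = 1.0 - np.exp(-a * d_top)
    print(f"{band}: top cell absorbs {absorbed:5.1%}; "
          f"{1 - absorbed:5.1%} reaches the pc-Si:H bottom cell")
```

Because the absorption coefficient falls steeply toward the band gap, a thin top layer preferentially harvests the blue end of the spectrum while passing the red to the more ordered bottom layer, which is the division of labour described above.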
Protocrystalline
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
242
[ "Materials science stubs", "Thin-film cells", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics", "Planes (geometry)", "Thin films" ]
5,634,341
https://en.wikipedia.org/wiki/Bioconversion%20of%20biomass%20to%20mixed%20alcohol%20fuels
The bioconversion of biomass to mixed alcohol fuels can be accomplished using the MixAlco process. Through bioconversion of biomass to a mixed alcohol fuel, more energy from the biomass will end up as liquid fuels than in converting biomass to ethanol by yeast fermentation. The process involves a biological/chemical method for converting any biodegradable material (e.g., urban wastes, such as municipal solid waste, biodegradable waste, and sewage sludge; agricultural residues such as corn stover, sugarcane bagasse, cotton gin trash, and manure) into useful chemicals, such as carboxylic acids (e.g., acetic, propionic, butyric acid), ketones (e.g., acetone, methyl ethyl ketone, diethyl ketone) and biofuels, such as a mixture of primary alcohols (e.g., ethanol, propanol, n-butanol) and/or a mixture of secondary alcohols (e.g., isopropanol, 2-butanol, 3-pentanol). Because of the many products that can be economically produced, this process is a true biorefinery. The process uses a mixed culture of naturally occurring microorganisms, found in natural habitats such as the rumen of cattle, termite guts, and marine and terrestrial swamps, to anaerobically digest biomass into a mixture of carboxylic acids produced during the acidogenic and acetogenic stages of anaerobic digestion, with the final, methanogenic stage inhibited. The more popular methods for production of ethanol and cellulosic ethanol use enzymes that must first be isolated and then added to the biomass to convert the starch or cellulose into simple sugars, followed by yeast fermentation into ethanol. This process needs no added enzymes, as the microorganisms make their own. As the microorganisms anaerobically digest the biomass and convert it into a mixture of carboxylic acids, the pH must be controlled. This is done by the addition of a buffering agent (e.g., ammonium bicarbonate, calcium carbonate), thus yielding a mixture of carboxylate salts. Methanogenesis, being the natural final stage of anaerobic digestion, is inhibited by the presence of the ammonium ions or by the addition of an inhibitor (e.g., iodoform). The resulting fermentation broth contains the produced carboxylate salts that must be dewatered. This is achieved efficiently by vapor-compression evaporation. Further chemical refining of the dewatered fermentation broth may then take place, depending on the final chemical or biofuel product desired. The condensed distilled water from the vapor-compression evaporation system is recycled back to the fermentation. On the other hand, if raw sewage or other waste water with high BOD in need of treatment is used as the water for the fermentation, the condensed distilled water from the evaporation can be recycled back to the city or to the original source of the high-BOD waste water. Thus, this process can also serve as a water treatment facility, while producing valuable chemicals or biofuels. Because the system uses a mixed culture of microorganisms and needs no enzyme addition, the fermentation requires no sterility or aseptic conditions, making this front step in the process more economical than in more popular methods for the production of cellulosic ethanol. These savings in the front end of the process, where volumes are large, allow flexibility for further chemical transformations after dewatering, where volumes are small. Carboxylic acids Carboxylic acids can be regenerated from the carboxylate salts using a process known as "acid springing". 
This process makes use of a high-molecular-weight tertiary amine (e.g., trioctylamine), which is switched with the cation (e.g., ammonium or calcium). The resulting amine carboxylate can then be thermally decomposed into the amine itself, which is recycled, and the corresponding carboxylic acid. In this way, theoretically, no chemicals are consumed or wastes produced during this step. Ketones There are two methods for making ketones. The first consists of thermally converting calcium carboxylate salts into the corresponding ketones. This was a common method for making acetone from calcium acetate during World War I. The other method for making ketones consists of passing the vaporized carboxylic acids over a catalytic bed of zirconium oxide. Alcohols Primary alcohols The undigested residue from the fermentation may be used in gasification to make hydrogen (H2). This H2 can then be used to hydrogenolyze the esters over a catalyst (e.g., copper chromite), which are produced by esterifying either the ammonium carboxylate salts (e.g., ammonium acetate, propionate, butyrate) or the carboxylic acids (e.g., acetic, propionic, butyric acid) with a high-molecular-weight alcohol (e.g., hexanol, heptanol). From the hydrogenolysis, the final products are the high-molecular-weight alcohol, which is recycled back to the esterification, and the corresponding primary alcohols (e.g., ethanol, propanol, butanol). Secondary alcohols The secondary alcohols (e.g., isopropanol, 2-butanol, 3-pentanol) are obtained by hydrogenating, over a catalyst (e.g., Raney nickel), the corresponding ketones (e.g., acetone, methyl ethyl ketone, diethyl ketone). Drop-in biofuels The primary or secondary alcohols obtained as described above may undergo conversion to drop-in biofuels, fuels which are compatible with current fossil fuel infrastructure such as biogasoline, green diesel and bio-jet fuel. This is done by subjecting the alcohols to dehydration followed by oligomerization using zeolite catalysts, in a manner similar to the methanol-to-gasoline process once used to produce gasoline from methanol in New Zealand. Acetic acid versus ethanol Cellulosic-ethanol manufacturing plants are bound to be net exporters of electricity because a large portion of the lignocellulosic biomass, namely lignin, remains undigested and must be burned, thus producing electricity for the plant and excess electricity for the grid. As the market grows and this technology becomes more widespread, coupling the liquid fuel and the electricity markets will become more and more difficult. Acetic acid, unlike ethanol, is biologically produced from simple sugars without the production of carbon dioxide: C6H12O6 → 2 CH3CH2OH + 2 CO2 C6H12O6 → 3 CH3COOH Because of this, on a mass basis, the yields will be higher than in ethanol fermentation. If the undigested residue (mostly lignin) is then used to produce hydrogen by gasification, it is ensured that more energy from the biomass will end up as liquid fuels rather than excess heat/electricity. 3 CH3COOH + 6 H2 → 3 CH3CH2OH + 3 H2O C6H12O6 (from cellulose) + 6 H2 (from lignin) → 3 CH3CH2OH + 3 H2O A more comprehensive description of the economics of each of the fuels is given on the pages alcohol fuel and ethanol fuel; more information about the economics of various systems can be found on the central page biofuel. 
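The mass-balance claim above is easy to verify from the two fermentation stoichiometries (the molar masses are standard values):

```python
# Theoretical mass yields of the two fermentations compared above.
M = {"glucose": 180.16, "ethanol": 46.07, "acetic acid": 60.05}

eth = 2 * M["ethanol"] / M["glucose"]       # C6H12O6 -> 2 EtOH + 2 CO2
ace = 3 * M["acetic acid"] / M["glucose"]   # C6H12O6 -> 3 AcOH

print(f"ethanol route: {eth:.1%} of sugar mass kept as fuel precursor")  # ~51%
print(f"acetate route: {ace:.1%} of sugar mass kept as fuel precursor")  # ~100%
```

The acetate route keeps essentially all of the sugar mass because no carbon leaves as CO2; hydrogen from lignin gasification then upgrades the acid to ethanol, as in the equations above.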
Stage of development The system has been in development since 1991, moving from the laboratory scale (10 g/day) to the pilot scale (200 lb/day) in 2001. A small demonstration-scale plant (5 ton/day) has been constructed and is in operation, and a 220 ton/day demonstration plant is expected in 2012. See also Anaerobic digestion Bioreactor Mechanical biological treatment References Anaerobic digestion Biodegradable waste management Alcohol fuels Waste treatment technology Biomass
Bioconversion of biomass to mixed alcohol fuels
[ "Chemistry", "Engineering" ]
1,762
[ "Water treatment", "Biodegradable waste management", "Biodegradation", "Anaerobic digestion", "Environmental engineering", "Water technology", "Waste treatment technology" ]
5,634,651
https://en.wikipedia.org/wiki/Vapor-compression%20evaporation
Vapor-compression evaporation is the evaporation method by which a blower, compressor or jet ejector is used to compress, and thus increase the pressure of, the vapor produced. Since the pressure increase of the vapor also generates an increase in the condensation temperature, the same vapor can serve as the heating medium for its "mother" liquid or solution being concentrated, from which the vapor was generated to begin with. If no compression were provided, the vapor would be at the same temperature as the boiling liquid/solution, and no heat transfer could take place. It is also sometimes called vapor compression distillation (VCD). If compression is performed by a mechanically driven compressor or blower, this evaporation process is usually referred to as MVR (mechanical vapor recompression). In the case of compression performed by high-pressure motive steam ejectors, the process is usually called thermocompression, steam compression or ejectocompression. MVR process Energy input In this case the energy input to the system lies in the pumping energy of the compressor. The theoretical energy consumption will be equal to E = Q (H2 − H1), where E is the total theoretical pumping energy, Q is the mass of vapors passing through the compressor, and H1, H2 are the total heat contents of a unit mass of vapors, respectively upstream and downstream of the compressor. In SI units, these are respectively measured in kJ, kg and kJ/kg. The actual energy input will be greater than the theoretical value and will depend on the efficiency of the system, which is usually between 30% and 60%. For example, suppose the theoretical energy input is 300 kJ and the efficiency is 30%. The actual energy input would be 300 × 100/30 = 1,000 kJ. In a large unit, the compression power is between 35 and 45 kW per metric ton of compressed vapors. Equipment for MVR evaporators The compressor is necessarily the core of the unit. Compressors used for this application are usually of the centrifugal type, or positive displacement units such as Roots blowers, similar to the (much smaller) Roots-type supercharger. Very large units (evaporation capacity 100 metric tons per hour or more) sometimes use axial-flow compressors. The compression work will deliver the steam superheated compared to the theoretical pressure/temperature equilibrium. For this reason, the vast majority of MVR units feature a desuperheater between the compressor and the main heat exchanger. Thermocompression Energy input The energy input is here given by the energy of a quantity of steam (motive steam) at a pressure higher than those of both the inlet and the outlet vapors. The quantity of compressed vapors is therefore higher than at the inlet: Qd = Qs + Qm, where Qd is the steam quantity at the ejector delivery, Qs the quantity at the ejector suction, and Qm the motive steam quantity. For this reason, a thermocompression evaporator often features a vapor condenser, due to the possible excess of steam relative to the steam required to evaporate the solution. The quantity Qm of motive steam per unit suction quantity is a function of both the motive ratio (motive steam pressure vs. suction pressure) and the compression ratio (delivery pressure vs. suction pressure). In principle, the higher the compression ratio and the lower the motive ratio, the higher the specific motive steam consumption will be, i.e., the less efficient the energy balance. 
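A worked version of the MVR energy relation and the 30–60% efficiency range introduced above; the enthalpy rise used here is an assumed illustrative figure, not a design value.

```python
# E = Q * (H2 - H1): theoretical compressor energy for an MVR step.
Q = 1000.0                 # kg of vapor (one metric ton)
H1, H2 = 2676.0, 2736.0    # kJ/kg up/downstream; the 60 kJ/kg rise is assumed

E_th = Q * (H2 - H1)       # theoretical input, kJ
for eff in (0.30, 0.45, 0.60):
    E_act = E_th / eff
    print(f"efficiency {eff:.0%}: {E_act:8.0f} kJ/t = {E_act / 3600:4.1f} kWh/t")
```

With a plausible enthalpy rise, the mid-range efficiency lands close to the 35–45 kW per metric ton quoted above for large units.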
Thermocompression equipment The heart of any thermocompression evaporator is clearly the steam ejector, exhaustively described in the relevant page. The size of the other pieces of equipment, such as the main heat exchanger, the vapor head, etc. (see evaporator for details), is governed by the evaporation process. Comparison These two compression-type evaporators have different fields of application, although they do sometimes overlap. An MVR unit will be preferable for a large unit, thanks to the reduced energy consumption. The largest single-body MVR evaporator built (1968, by Whiting Co., later Swenson Evaporator Co., Harvey, Ill., in Cirò Marina, Italy) was a salt crystallizer, evaporating approximately 400 metric tons per hour of water and featuring an axial-flow compressor (Brown Boveri, later ABB). This unit was transformed around 1990 to become the first effect of a multiple-effect evaporator. MVR evaporators with 10 tons or more evaporating capacity are common. The compression ratio in an MVR unit does not usually exceed 1.8. At a compression ratio of 1.8, if the evaporation is performed at atmospheric pressure (0.101 MPa), the condensation pressure after compression will be 0.101 × 1.8 = 0.1818 MPa. At this pressure, the condensation temperature of the water vapor at the heat exchanger will be about 390 K. Taking into account the boiling point elevation of the salt water we wish to evaporate (8 K for a saturated salt solution), this leaves a temperature difference of only about 9 K at the heat exchanger. A small ∆T leads to slow heat transfer, meaning that we will need a very large heating surface to transfer the required heat. Axial-flow and Roots compressors may reach slightly higher compression ratios. Thermocompression evaporators may reach higher compression ratios - at a cost. A compression ratio of 2 is possible (and sometimes more) but unless the motive steam is at a reasonably high pressure (say, 16 bar g - 250 psig - or more), the motive steam consumption will be in the range of 2 kg per kg of suction vapors. A higher compression ratio means a smaller heat exchanger, and a reduced investment cost. Moreover, a compressor is an expensive machine, while an ejector is much simpler and cheaper. In conclusion, MVR machines are used in large, energy-efficient units, while thermocompression units tend to be limited to small units, where energy consumption is not a big issue. Efficiency The efficiency and feasibility of this process depend on the efficiency of the compressing device (e.g., blower, compressor or steam ejector) and the heat transfer coefficient attained in the heat exchanger contacting the condensing vapor and the boiling "mother" solution/liquid. Theoretically, if the resulting condensate is subcooled, this process could allow full recovery of the latent heat of vaporization that would otherwise be lost if the vapor, rather than the condensate, was the final product; therefore, this method of evaporation is very energy efficient. The evaporation process may be solely driven by the mechanical work provided by the compressing device. Some uses Clean water production (Water for injection) A vapor-compression evaporator, like most evaporators, can make reasonably clean water from any water source. In a salt crystallizer, for example, a typical analysis of the resulting condensate shows a residual salt content not higher than 50 ppm or, in terms of electrical conductance, not higher than 10 μS/cm. 
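The ~390 K figure above can be checked with the Antoine equation for water; the constants are standard values from vapor-pressure tables for the range above 100 °C.

```python
import numpy as np

# Antoine equation for water above 100 C (T in deg C, P in mmHg):
# log10(P) = A - B / (C + T)
A, B, C = 8.14019, 1810.94, 244.485

def t_sat_celsius(p_mpa):
    """Saturation temperature of water at the given pressure."""
    p_mmhg = p_mpa * 1e6 / 133.322
    return B / (A - np.log10(p_mmhg)) - C

p_boil = 0.101                 # evaporation at atmospheric pressure, MPa
p_cond = p_boil * 1.8          # after a 1.8 compression ratio
t_cond = t_sat_celsius(p_cond)
t_brine = 100.0 + 8.0          # saturated-brine boiling point elevation from the text
print(f"condensation at {p_cond:.4f} MPa -> {t_cond:.1f} C ({t_cond + 273.15:.0f} K)")
print(f"driving delta-T ~ {t_cond - t_brine:.1f} K")
```

This reproduces both the roughly 390 K condensation temperature and the single-digit driving ∆T that pushes MVR designs toward large heating surfaces.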
Such low residual salt results in drinkable water, if the other sanitary requirements are fulfilled. While this cannot compete in the marketplace with reverse osmosis or demineralization, vapor compression chiefly differs from these in its ability to make clean water from saturated or even crystallizing brines with total dissolved solids (TDS) up to 650 g/L. The other two technologies can make clean water from sources no higher in TDS than approximately 35 g/L. For economic reasons evaporators are seldom operated on low-TDS water sources; those applications are filled by reverse osmosis. The already brackish water which enters a typical evaporator is concentrated further. The increased dissolved solids act to increase the boiling point well beyond that of pure water. Seawater with a TDS of approximately 30 g/L exhibits a boiling point elevation of less than 1 K, but a saturated sodium chloride solution at 360 g/L has a boiling point elevation of about 7 K. This boiling point elevation is a challenge for vapor-compression evaporation in that it increases the pressure ratio that the steam compressor must attain to effect vaporization. Since boiling point elevation determines the pressure ratio in the compressor, it is the main overall factor in operating costs. Steam-assisted gravity drainage The technology used today to extract bitumen from the Athabasca oil sands is the water-intensive steam-assisted gravity drainage (SAGD) method. In the late 1990s former nuclear engineer Bill Heins of General Electric Company's RCC Thermal Products conceived an evaporator technology called falling-film or mechanical vapor compression evaporation. In 1999 and 2002, Petro-Canada's MacKay River facility was the first to install GE SAGD zero-liquid discharge (ZLD) systems, using a combination of the new evaporative technology and a crystallizer system in which all the water was recycled and only solids were discharged off site. This new evaporative technology began to replace older water treatment techniques employed by SAGD facilities, which involved the use of warm lime softening to remove silica and magnesium and weak acid cation ion exchange to remove calcium. The vapor-compression evaporation process replaced the once-through steam generators (OTSG) traditionally used for steam production. OTSGs generally ran on natural gas, which by 2008 had become increasingly valuable. The water quality from evaporators is four times better, which is needed for the drum boilers. The evaporators, when coupled with standard drum boilers, produce steam which is more "reliable, less costly to operate, and less water-intensive." By 2008 about 85 per cent of SAGD facilities in the Alberta oil sands had adopted evaporative technology. "SAGD, unlike other thermal processes such as cyclic steam stimulation (CSS), requires 100 per cent quality steam." See also Cristiani compressed steam system Slingshot (water vapor distillation system) Vapor-compression refrigeration Vapor-compression desalination References Evaporators Chemical processes Unit operations Water treatment Water technology
Vapor-compression evaporation
[ "Chemistry", "Engineering", "Environmental_science" ]
2,131
[ "Unit operations", "Water treatment", "Chemical equipment", "Water pollution", "Chemical processes", "Environmental engineering", "Distillation", "Evaporators", "nan", "Water technology", "Chemical process engineering" ]
5,636,766
https://en.wikipedia.org/wiki/Fast-ion%20conductor
In materials science, fast ion conductors are solid conductors with highly mobile ions. These materials are important in the area of solid state ionics, and are also known as solid electrolytes and superionic conductors. These materials are useful in batteries and various sensors. Fast ion conductors are used primarily in solid oxide fuel cells. As solid electrolytes they allow the movement of ions without the need for a liquid or soft membrane separating the electrodes. The phenomenon relies on the hopping of ions through an otherwise rigid crystal structure. Mechanism Fast ion conductors are intermediate in nature between crystalline solids, which possess a regular structure with immobile ions, and liquid electrolytes, which have no regular structure and fully mobile ions. Solid electrolytes find use in all-solid-state supercapacitors, batteries, and fuel cells, and in various kinds of chemical sensors. Classification In solid electrolytes (glasses or crystals), the ionic conductivity σi can be any value, but it should be much larger than the electronic one. Usually, solids where σi is on the order of 0.0001 to 0.1 Ω−1 cm−1 (300 K) are called superionic conductors. Proton conductors Proton conductors are a special class of solid electrolytes, where hydrogen ions act as charge carriers. One notable example is superionic water. Superionic conductors Superionic conductors where σi is more than 0.1 Ω−1 cm−1 (300 K) and the activation energy for ion transport Ei is small (about 0.1 eV) are called advanced superionic conductors. The most famous example of an advanced superionic conductor solid electrolyte is RbAg4I5, where σi > 0.25 Ω−1 cm−1 and σe ~10−9 Ω−1 cm−1 at 300 K. The Hall (drift) ionic mobility in RbAg4I5 is about 2 cm2/(V·s) at room temperature. Solid-state ionic conductors can be classified systematically by their electronic (σe) and ionic (σi) conductivities. No clear examples have yet been described of fast ion conductors in the hypothetical classes beyond advanced superionic conductors. However, in the crystal structures of several superionic conductors, e.g. the minerals of the pearceite–polybasite group, large structural fragments with an activation energy of ion transport Ei < kBT (300 K) were discovered in 2006. Examples Zirconia-based materials A common solid electrolyte is yttria-stabilized zirconia, YSZ. This material is prepared by doping Y2O3 into ZrO2. Oxide ions typically migrate only slowly in solid Y2O3 and in ZrO2, but in YSZ the conductivity of oxide increases dramatically. These materials are used to allow oxygen to move through the solid in certain kinds of fuel cells. Zirconium dioxide can also be doped with calcium oxide to give an oxide conductor that is used in oxygen sensors in automobile controls. Upon doping with only a few percent, the diffusion constant of oxide increases by a factor of ~1000. Other conductive ceramics function as ion conductors. One example is NASICON (Na3Zr2Si2PO12), a sodium super-ionic conductor. beta-Alumina Another example of a popular fast ion conductor is the beta-alumina solid electrolyte. Unlike the usual forms of alumina, this modification has a layered structure with open galleries separated by pillars. Sodium ions (Na+) migrate through this material readily, since the oxide framework provides an ionophilic, non-reducible medium. This material is considered the sodium ion conductor for the sodium–sulfur battery. 
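The role the ~0.1 eV activation energy plays in this classification can be seen from the Arrhenius form of thermally activated conduction; the prefactor below is an assumed illustrative value, not data for any specific material.

```python
import numpy as np

kB = 8.617333e-5          # Boltzmann constant, eV/K

def sigma_ionic(T, sigma0, Ea):
    """Thermally activated ionic conductivity, sigma0 * exp(-Ea / kB T).
    (Some treatments put a 1/T factor in the prefactor; it does not
    change the qualitative picture sketched here.)"""
    return sigma0 * np.exp(-Ea / (kB * T))

# sigma0 = 10 ohm^-1 cm^-1 is an assumed illustrative prefactor.
for Ea in (0.1, 0.3, 1.0):
    s = sigma_ionic(300.0, 10.0, Ea)
    print(f"Ea = {Ea:.1f} eV -> sigma(300 K) ~ {s:.1e} ohm^-1 cm^-1")
```

Only when Ei falls to the ~0.1 eV scale does the room-temperature exponential stay large enough to lift σi above the 0.1 Ω−1 cm−1 threshold quoted above.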
Fluoride ion conductors Lanthanum trifluoride (LaF3) is conductive for F− ions and is used in some ion-selective electrodes. Beta-lead fluoride exhibits a continuous growth of conductivity on heating. This property was first discovered by Michael Faraday. Iodides A textbook example of a fast ion conductor is silver iodide (AgI). Upon heating the solid to 146 °C, this material adopts the alpha-polymorph. In this form, the iodide ions form a rigid cubic framework, and the Ag+ centers are molten. The electrical conductivity of the solid increases by 4000x. Similar behavior is observed for copper(I) iodide (CuI), rubidium silver iodide (RbAg4I5), and Ag2HgI4. Other Inorganic materials Silver sulfide, conductive for Ag+ ions, used in some ion-selective electrodes Lead(II) chloride, conductive for Cl− ions at higher temperatures Some perovskite ceramics – strontium titanate, strontium stannate – conductive for O2− ions Zr(HPO4)2·nH2O – conductive for H+ ions UO2HPO4·4H2O (hydrogen uranyl phosphate tetrahydrate) – conductive for H+ ions Cerium(IV) oxide – conductive for O2− ions Organic materials Many gels, such as polyacrylamides and agar, are fast ion conductors A salt dissolved in a polymer – e.g. lithium perchlorate in polyethylene oxide Polyelectrolytes and ionomers – e.g. Nafion, a H+ conductor History An important case of fast ionic conduction is that in a surface space-charge layer of ionic crystals. Such conduction was first predicted by Kurt Lehovec. As a space-charge layer has nanometer thickness, the effect is directly related to nanoionics (nanoionics-I). Lehovec's effect is used as a basis for developing nanomaterials for portable lithium batteries and fuel cells. See also Mixed conductor References Electric and magnetic fields in matter Electrochemical concepts
Fast-ion conductor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,247
[ "Materials science", "Electric and magnetic fields in matter", "Electrochemical concepts", "Electrochemistry", "Condensed matter physics" ]
11,025,540
https://en.wikipedia.org/wiki/Piezomagnetism
Piezomagnetism is a phenomenon observed in some antiferromagnetic and ferrimagnetic crystals. It is characterized by a linear coupling between the system's magnetic polarization and mechanical strain. In a piezomagnetic material, one may induce a spontaneous magnetic moment by applying mechanical stress, or a physical deformation by applying a magnetic field. Piezomagnetism differs from the related property of magnetostriction: if the applied magnetic field is reversed in direction, the strain produced by a piezomagnet changes sign, whereas a magnetostrictive strain does not. Additionally, a non-zero piezomagnetic moment can be produced by mechanical strain alone, at zero field, which is not true of magnetostriction. According to the Institute of Electrical and Electronics Engineers (IEEE): "Piezomagnetism is the linear magneto-mechanical effect analogous to the linear electromechanical effect of piezoelectricity. Similarly, magnetostriction and electrostriction are analogous second-order effects. These higher-order effects can be represented as effectively first-order when variations in the system parameters are small compared with the initial values of the parameters". The piezomagnetic effect is made possible by an absence of certain symmetry elements in a crystal structure; specifically, symmetry under time reversal forbids the property. The first experimental observation of piezomagnetism was made in 1960, in the fluorides of cobalt and manganese. The strongest piezomagnet known is uranium dioxide, with magnetoelastic memory switching at magnetic fields near 180,000 Oe at temperatures below 30 kelvin. References Magnetic ordering Transducers
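A minimal numerical sketch of the linear relation M_i = Λijk σjk implied by this definition; the tensor component below is an arbitrary toy value, since a real Λ is fixed by the crystal's magnetic point group.

```python
import numpy as np

# Linear piezomagnetic relation M_i = Lambda_ijk * sigma_jk (sketch).
# A single nonzero component is used purely to show the linearity.
Lam = np.zeros((3, 3, 3))
Lam[2, 0, 1] = Lam[2, 1, 0] = 1e-9   # arbitrary illustrative units

def induced_moment(stress):
    """Magnetic moment induced by a stress tensor (3x3, Pa)."""
    return np.einsum("ijk,jk->i", Lam, stress)

shear = np.zeros((3, 3)); shear[0, 1] = shear[1, 0] = 1e6   # xy shear stress
print(induced_moment(shear))    # moment along z
print(induced_moment(-shear))   # reversed stress -> reversed moment
```

Reversing the stress reverses the induced moment, which is the linearity (odd parity) that separates piezomagnetism from quadratic effects like magnetostriction.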
Piezomagnetism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
334
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
11,026,373
https://en.wikipedia.org/wiki/Hydraulic%20engine%20house%2C%20Bristol%20Harbour
The Hydraulic engine house is part of the "Underfall Yard" in Bristol Harbour in Bristol, England. The octagonal brick and terracotta chimney of the engine house dates from 1888, and is grade II* listed, as is the hydraulic engine house itself. It replaced the original pumping house, which is now The Pump House public house. It is built of red brick with a slate roof and originally contained two steam engines made by the Worthington Corporation. These were compound surface-condensing engines. They were replaced in 1907 by the current machines from Fullerton, Hodgart and Barclay of Paisley. It powered the docks' hydraulic system of cranes, bridges and locks until 2010. Water is pumped from the harbour to a header tank and then fed by gravity to the high-pressure pumps, where it is pressurised, raising the external hydraulic accumulator. This stores the hydraulic energy, ensuring a smooth delivery of pressure and meaning that the pumps need neither run continuously nor be capable of supplying the instantaneous peak demands. The working pressure is 750 lb per square inch (about 5.2 MPa). The external accumulator was added about 1954, when the original inside the building's tower became difficult to service (but it remains in place). The building originally contained a pair of steam-powered pumps; however, these were replaced with three electrically driven ones in 1907. The engine house provided the power for equipment such as the lock gates and cranes until 2010. The visitor centre in the hydraulic power house opened in time for Easter 2016. See also Grade II* listed buildings in Bristol References Engine houses Hydraulic accumulators Grade II* listed buildings in Bristol Infrastructure completed in 1888 Bristol Harbourside Grade II* listed industrial buildings
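A rough sense of what the accumulator stores, treating it as an ideal constant-pressure store; the displaced volume is an assumed illustrative figure, and only the 750 psi working pressure comes from the text.

```python
# Ideal isobaric energy stored by a weight-loaded hydraulic accumulator.
PSI_TO_PA = 6894.76
p = 750 * PSI_TO_PA            # working pressure from the text, ~5.17 MPa
V = 0.5                        # m^3 of pressurised water displaced (assumed)

E = p * V                      # J, energy = pressure x displaced volume
print(f"p = {p / 1e6:.2f} MPa; stored energy ~ {E / 1e6:.2f} MJ ({E / 3.6e6:.2f} kWh)")
```

Even a modest ram volume at this pressure buffers enough energy to cover short peaks in demand from the lock gates and cranes, which is why the pumps did not need to track the instantaneous load.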
Hydraulic engine house, Bristol Harbour
[ "Physics" ]
344
[ "Physical systems", "Hydraulic accumulators", "Hydraulics" ]
11,027,904
https://en.wikipedia.org/wiki/Epsilon%20calculus
In logic, Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The epsilon operator and epsilon substitution method are typically applied to a first-order predicate calculus, followed by a demonstration of consistency. The epsilon-extended calculus is further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on previously-shown consistency at earlier levels. Epsilon operator Hilbert notation For any formal language L, extend L by adding the epsilon operator to redefine quantification: (∃x) A(x) ≡ A(ϵx A) and (∀x) A(x) ≡ A(ϵx (¬A)). The intended interpretation of ϵx A is some x that satisfies A, if it exists. In other words, ϵx A returns some term t such that A(t) is true, otherwise it returns some default or arbitrary term. If more than one term can satisfy A, then any one of these terms (which make A true) can be chosen, non-deterministically. Equality is required to be defined under L, and the only rules required for L extended by the epsilon operator are modus ponens and the substitution of A(t) to replace A(x) for any term t. Bourbaki notation In tau-square notation from N. Bourbaki's Theory of Sets, the quantifiers are defined as follows: (∃x) A ⇔ (τx(A) | x) A and (∀x) A ⇔ ¬(∃x)(¬A), where A is a relation in L, x is a variable, and τx(A) juxtaposes a τ at the front of A, replaces all instances of x with □, and links them back to the τ. Then, letting Y be an assembly, (Y | x) A denotes the replacement of all variables x in A with Y. This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment, since they do not use the axiom of replacement. Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approximately 4.5 × 10¹², and for a later edition of Bourbaki that combined this notation with the Kuratowski definition of ordered pairs, this number grows to approximately 2.4 × 10⁵⁴. Modern approaches Hilbert's program for mathematics was to justify those formal systems as consistent in relation to constructive or semi-constructive systems. While Gödel's results on incompleteness mooted Hilbert's program to a great extent, modern researchers find the epsilon calculus to provide alternatives for approaching proofs of systemic consistency as described in the epsilon substitution method. Epsilon substitution method A theory to be checked for consistency is first embedded in an appropriate epsilon calculus. Second, a process is developed for re-writing quantified theorems to be expressed in terms of epsilon operations via the epsilon substitution method. Finally, the re-writing process must be shown to normalize, so that the re-written theorems satisfy the axioms of the theory. Notes References Systems of formal logic Proof theory
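A worked instance of the rewriting step described above, shown in LaTeX; the relation R is an arbitrary illustrative formula.

```latex
% Eliminating the quantifiers of  \forall x\,\exists y\, R(x,y)  with epsilon terms.
% Abbreviate the witness for the inner existential quantifier:
\[
  t(x) \;:=\; \epsilon y\, R(x,y),
  \qquad\text{so that}\qquad
  \exists y\, R(x,y) \;\equiv\; R\bigl(x,\, t(x)\bigr).
\]
% The universal quantifier is eliminated through the "most likely
% counterexample" term  c := \epsilon x\, \neg R(x, t(x)) :
\[
  \forall x\, \exists y\, R(x,y)
  \;\equiv\;
  R\bigl(c,\, t(c)\bigr),
  \qquad
  c \;:=\; \epsilon x\, \neg R\bigl(x,\, t(x)\bigr).
\]
```

The epsilon substitution method then searches for concrete values for such epsilon terms, which is the normalization step mentioned above.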
Epsilon calculus
[ "Mathematics" ]
643
[ "Mathematical logic", "Proof theory" ]
11,028,411
https://en.wikipedia.org/wiki/Jordan%20and%20Einstein%20frames
The Lagrangian in scalar-tensor theory can be expressed in the Jordan frame or in the Einstein frame, which are field variables that stress different aspects of the gravitational field equations and the evolution equations of the matter fields. In the Jordan frame the scalar field or some function of it multiplies the Ricci scalar in the Lagrangian and the matter is typically coupled minimally to the metric, whereas in the Einstein frame the Ricci scalar is not multiplied by the scalar field and the matter is coupled non-minimally. As a result, in the Einstein frame the field equations for the space-time metric resemble the Einstein equations, but test particles do not move on geodesics of the metric. On the other hand, in the Jordan frame test particles move on geodesics, but the field equations are very different from the Einstein equations. The causal structure in both frames is always equivalent and the frames can be transformed into each other as convenient for the given application. Christopher Hill and Graham Ross have shown that there exist "gravitational contact terms" in the Jordan frame, whereby the action is modified by graviton exchange. This modification leads back to the Einstein frame as the effective theory. Contact interactions arise in Feynman diagrams when a vertex contains a power of the exchanged momentum, $q^2$, which then cancels against the Feynman propagator, $1/q^2$, leading to a point-like interaction. This must be included as part of the effective action of the theory. When the contact term is included, results for amplitudes in the Jordan frame will be equivalent to those in the Einstein frame, and results of physical calculations in the Jordan frame that omit the contact terms will generally be incorrect. This implies that the Jordan frame action is misleading, and the Einstein frame is uniquely correct for fully representing the physics. Equations and physical interpretation If we perform the Weyl rescaling $\tilde g_{\mu\nu} = \Omega^2 g_{\mu\nu}$, then the Riemann and Ricci tensors are modified; in four dimensions the Ricci scalar, for example, becomes $$\tilde R = \Omega^{-2}\left(R - 6\,\Box\ln\Omega - 6\,g^{\mu\nu}\,\partial_\mu\ln\Omega\,\partial_\nu\ln\Omega\right).$$ As an example consider the transformation of a simple scalar-tensor action of the form $$S = \int d^4x\,\sqrt{-\tilde g}\;F(\Phi)\,\tilde R \;+\; S_m[\tilde g_{\mu\nu}, \psi],$$ with an arbitrary set of matter fields $\psi$ coupled minimally to the curved background $\tilde g_{\mu\nu}$. The tilde fields then correspond to quantities in the Jordan frame and the fields without the tilde correspond to fields in the Einstein frame. See that the matter action changes only in the rescaling of the metric. The Jordan and Einstein frames are constructed to render certain parts of the physical equations simpler, which also gives the frames and the fields appearing in them particular physical interpretations. For instance, in the Einstein frame, the equations for the gravitational field will be of the form $$G_{\mu\nu} = 8\pi G\left(T^{(\psi)}_{\mu\nu} + T^{(\Phi)}_{\mu\nu}\right).$$ I.e., they can be interpreted as the usual Einstein equations with particular sources on the right-hand side. Similarly, in the Newtonian limit one would recover the Poisson equation for the Newtonian potential with separate source terms. However, by transforming to the Einstein frame the matter fields are now coupled not only to the background but also to the field $\Phi$, which now acts as an effective potential. Specifically, an isolated test particle will experience a universal four-acceleration $$a^\mu = -\left(g^{\mu\nu} + u^\mu u^\nu\right)\partial_\nu\ln\Omega,$$ where $u^\mu$ is the particle four-velocity. I.e., no particle will be in free-fall in the Einstein frame. On the other hand, in the Jordan frame, all the matter fields are coupled minimally to $\tilde g_{\mu\nu}$ and isolated test particles will move on geodesics with respect to the metric $\tilde g_{\mu\nu}$. 
This means that if we were to reconstruct the Riemann curvature tensor by measurements of geodesic deviation, we would in fact obtain the curvature tensor in the Jordan frame. When, on the other hand, we infer the presence of matter sources from gravitational lensing using the usual relativistic theory, we obtain the distribution of the matter sources in the sense of the Einstein frame. Models Jordan-frame gravity has been used to model bouncing cosmological evolution that develops a type IV singularity. See also Albert Einstein Pascual Jordan References Valerio Faraoni, Edgard Gunzig, Pasquale Nardone, Conformal transformations in classical gravitational theories and in cosmology, Fundam. Cosm. Phys. 20 (1999): 121. Eanna E. Flanagan, The conformal frame freedom in theories of gravitation, Class. Q. Grav. 21 (2004): 3817. General relativity Tensors
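The transformation rules alluded to above are standard conformal-transformation identities. As a minimal sketch in LaTeX of the four-dimensional case (the overall signs depend on the curvature conventions adopted, so treat this as illustrative rather than as this article's own convention):

\begin{align}
  \tilde{g}_{\mu\nu} &= \Omega^{2}\, g_{\mu\nu}, \\
  \tilde{R} &= \Omega^{-2}\left( R \;-\; 6\,\Box \ln\Omega \;-\; 6\, g^{\mu\nu}\,\partial_{\mu}\ln\Omega\,\partial_{\nu}\ln\Omega \right).
\end{align}

A Jordan-frame term of the form f(Φ)R̃ can therefore be traded, after choosing Ω² = 1/f(Φ) and integrating by parts, for an Einstein-frame Ricci scalar plus a kinetic term for the scalar field, which is the mechanism behind the frame change described in the text.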
Jordan and Einstein frames
[ "Physics", "Engineering" ]
877
[ "General relativity", "Tensors", "Relativity stubs", "Theory of relativity" ]
14,695,652
https://en.wikipedia.org/wiki/Ceramography
Ceramography is the art and science of preparation, examination and evaluation of ceramic microstructures. Ceramography can be thought of as the metallography of ceramics. The microstructure is the structure level of approximately 0.1 to 100 μm, between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks and hardness microindentations. Most bulk mechanical, optical, thermal, electrical and magnetic properties are significantly affected by the microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the microstructure. Ceramography is part of the broader field of materialography, which includes all the microscopic techniques of material analysis, such as metallography, petrography and plastography. Ceramography is usually reserved for high-performance ceramics for industrial applications, such as 85–99.9% alumina (Al2O3) in Fig. 1, zirconia (ZrO2), silicon carbide (SiC), silicon nitride (Si3N4), and ceramic-matrix composites. It is seldom used on whiteware ceramics such as sanitaryware, wall tiles and dishware. History Ceramography evolved along with other branches of materialography and ceramic engineering. Alois de Widmanstätten of Austria etched a meteorite in 1808 to reveal proeutectoid ferrite bands that grew on prior austenite grain boundaries. Geologist Henry Clifton Sorby, the "father of metallography", applied petrographic techniques to the steel industry in the 1860s in Sheffield, England. French geologist Auguste Michel-Lévy devised a chart that correlated the optical properties of minerals to their transmitted color and thickness in the 1880s. Swedish metallurgist J.A. Brinell invented the first quantitative hardness scale in 1900. Smith and Sandland developed the first microindentation hardness test at Vickers Ltd. in London in 1922. Swiss-born microscopist A.I. Buehler started the first metallographic equipment manufacturer near Chicago in 1936. Frederick Knoop and colleagues at the National Bureau of Standards developed a less-penetrating (than Vickers) microindentation test in 1939. Struers A/S of Copenhagen introduced the electrolytic polisher to metallography in 1943. George Kehl of Columbia University wrote a book that was considered the bible of materialography until the 1980s. Kehl co-founded a group within the Atomic Energy Commission that became the International Metallographic Society in 1967. Preparation of ceramographic specimens The preparation of ceramic specimens for microstructural analysis consists of five broad steps: sawing, embedding, grinding, polishing and etching. The tools and consumables for ceramographic preparation are available worldwide from metallography equipment vendors and laboratory supply companies. Sawing Most ceramics are extremely hard and must be wet-sawed with a circular blade embedded with diamond particles. A metallography or lapidary saw equipped with a low-density diamond blade is usually suitable. The blade must be cooled by a continuous liquid spray. Embedding To facilitate further preparation, the sawed specimen is usually embedded (or mounted or encapsulated) in a plastic disc, 25, 32 or 38 mm in diameter. A thermosetting solid resin, activated by heat and compression, e.g. mineral-filled epoxy, is best for most applications. 
A castable (liquid) resin such as unfilled epoxy, acrylic or polyester may be used for porous refractory ceramics or microelectronic devices. The castable resins are also available with fluorescent dyes that aid in fluorescence microscopy. The left and right specimens in Fig. 3 were embedded in mineral-filled epoxy. The center refractory in Fig. 3 was embedded in castable, transparent acrylic. Grinding Grinding is abrasion of the surface of interest by abrasive particles, usually diamond, that are bonded to paper or a metal disc. Grinding erases saw marks, coarsely smooths the surface, and removes stock to a desired depth. A typical grinding sequence for ceramics is one minute on a 240-grit metal-bonded diamond wheel rotating at 240 rpm and lubricated by flowing water, followed by a similar treatment on a 400-grit wheel. The specimen is washed in an ultrasonic bath after each step. Polishing Polishing is abrasion by free abrasives that are suspended in a lubricant and can roll or slide between the specimen and paper. Polishing erases grinding marks and smooths the specimen to a mirror-like finish. Polishing on a bare metallic platen is called lapping. A typical polishing sequence for ceramics is 5–10 minutes each on 15-, 6- and 1-μm diamond paste or slurry on napless paper rotating at 240 rpm. The specimen is again washed in an ultrasonic bath after each step. The three sets of specimens in Fig. 3 have been sawed, embedded, ground and polished. Etching Etching reveals and delineates grain boundaries and other microstructural features that are not apparent on the as-polished surface. The two most common types of etching in ceramography are selective chemical corrosion, and a thermal treatment that causes relief. As an example, alumina can be chemically etched by immersion in boiling concentrated phosphoric acid for 30–60 s, or thermally etched in air in a furnace for 20–40 min. The plastic encapsulation must be removed before thermal etching. The alumina in Fig. 1 was thermally etched. Alternatively, non-cubic ceramics can be prepared as thin sections, also known as petrography, for examination by polarized transmitted light microscopy. In this technique, the specimen is sawed to ~1 mm thick, glued to a microscope slide, and ground or sawed (e.g., by microtome) to a thickness (x) approaching 30 μm. A cover slip is glued onto the exposed surface. The adhesives, such as epoxy or Canada balsam resin, must have approximately the same refractive index (η ≈ 1.54) as glass. Most ceramics have a very small absorption coefficient (α ≈ 0.5 cm⁻¹ for alumina in Fig. 2) in the Beer–Lambert law below, and can be viewed in transmitted light. Cubic ceramics, e.g. yttria-stabilized zirconia and spinel, have the same refractive index in all crystallographic directions and appear, therefore, black when the microscope's polarizer is 90° out of phase with its analyzer.

I = I₀ exp(−αx) (Beer–Lambert eqn)

Ceramographic specimens are electrical insulators in most cases, and must be coated with a conductive ~10-nm layer of metal or carbon for electron microscopy, after polishing and etching. Gold or Au-Pd alloy from a sputter coater or evaporative coater also improves the reflection of visible light from the polished surface under a microscope, by the Fresnel formula below. Bare alumina (η ≈ 1.77, k ≈ 10⁻⁶) has a negligible extinction coefficient and reflects only 8% of the incident light from the microscope, as in Fig. 1. Gold-coated (η ≈ 0.82, k ≈ 1.59 @ λ = 500 nm) alumina reflects 44% in air, 39% in immersion oil.

R = [(η₁ − η₂)² + k²] / [(η₁ + η₂)² + k²] (Fresnel eqn)
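The reflectance figures quoted above can be checked numerically from the Fresnel equation; a minimal Python sketch, assuming normal incidence and the refractive indices given in the text (the immersion-oil index of 1.515 is an assumption, not a value from the text):

import math

def reflectance(n1, eta, k):
    """Normal-incidence Fresnel reflectance between a transparent medium
    of index n1 and a specimen of complex index eta - i*k."""
    return ((n1 - eta) ** 2 + k ** 2) / ((n1 + eta) ** 2 + k ** 2)

# Bare alumina in air (eta = 1.77, k ~ 0): about 8 %, as quoted above
print(round(100 * reflectance(1.00, 1.77, 1e-6)))
# Gold-coated alumina in air at 500 nm (eta = 0.82, k = 1.59): about 44 %
print(round(100 * reflectance(1.00, 0.82, 1.59)))
# Same coating under immersion oil (n1 = 1.515 assumed): about 38-39 %
print(round(100 * reflectance(1.515, 0.82, 1.59)))
# Beer-Lambert transmission of a 30-um alumina thin section
# (alpha = 0.5 /cm, x = 30 um = 30e-4 cm): essentially transparent
print(round(math.exp(-0.5 * 30e-4), 4))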
Ceramographic analysis Ceramic microstructures are most often analyzed by reflected visible-light microscopy in brightfield. Darkfield is used in limited circumstances, e.g., to reveal cracks. Polarized transmitted light is used with thin sections, where the contrast between grains comes from birefringence. Very fine microstructures may require the higher magnification and resolution of a scanning electron microscope (SEM) or confocal laser scanning microscope (CLSM). The cathodoluminescence microscope (CLM) is useful for distinguishing phases of refractories. The transmission electron microscope (TEM) and scanning acoustic microscope (SAM) have specialty applications in ceramography. Ceramography is often done qualitatively, for comparison of the microstructure of a component to a standard for quality control or failure analysis purposes. Three common quantitative analyses of microstructures are grain size, second-phase content and porosity. Microstructures are measured by the principles of stereology, in which three-dimensional objects are evaluated in 2-D by projections or cross-sections. Microstructures exhibiting heterogeneous grain sizes, with certain grains growing very large, occur in diverse ceramic systems and this phenomenon is known as abnormal grain growth or AGG. The occurrence of AGG has consequences, positive or negative, on mechanical and chemical properties of ceramics and its identification is often the goal of ceramographic analysis. Grain size can be measured by the line-fraction or area-fraction methods of ASTM E112. In the line-fraction methods, a statistical grain size is calculated from the number of grains or grain boundaries intersecting a line of known length or circle of known circumference (a numerical sketch of this intercept calculation is given below). In the area-fraction method, the grain size is calculated from the number of grains inside a known area. In each case, the measurement is affected by secondary phases, porosity, preferred orientation, exponential distribution of sizes, and non-equiaxed grains. Image analysis can measure the shape factors of individual grains by ASTM E1382. Second-phase content and porosity are measured the same way in a microstructure, such as ASTM E562. Procedure E562 is a point-fraction method based on the stereological principle of point fraction = volume fraction, i.e., Pp = Vv. Second-phase content in ceramics, such as carbide whiskers in an oxide matrix, is usually expressed as a mass fraction. Volume fractions can be converted to mass fractions if the density of each phase is known. Image analysis can measure porosity, pore-size distribution and volume fractions of secondary phases by ASTM E1245. Porosity measurements do not require etching. Multi-phase microstructures do not require etching if the contrast between phases is adequate, as is usually the case. Grain size, porosity and second-phase content have all been correlated with ceramic properties such as mechanical strength σ by the Hall–Petch equation. Hardness, toughness, dielectric constant and many other properties are microstructure-dependent. Microindentation hardness and toughness The hardness of a material can be measured in many ways. The Knoop hardness test, a method of microindentation hardness, is the most reproducible for dense ceramics. The Vickers hardness test and superficial Rockwell scales (e.g., 45N) can also be used, but tend to cause more surface damage than Knoop. The Brinell test is suitable for ductile metals, but not ceramics. 
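The line-intercept grain-size calculation referenced above is easy to express numerically. A minimal Python sketch (the relation between mean intercept and ASTM grain size number is the one published in ASTM E112; the counts and line lengths in the example are illustrative assumptions, and the full standard adds sampling and statistical requirements not shown here):

import math

def mean_lineal_intercept(total_line_length_um, n_intercepts):
    """Mean lineal intercept: total test-line length divided by the
    number of grain-boundary intercepts (Heyn intercept method)."""
    return total_line_length_um / n_intercepts

def astm_grain_size_number(intercept_um):
    """ASTM grain size number from the mean intercept length in
    micrometres, using the E112 relation G = -6.6457*log10(l_mm) - 3.298."""
    return -6.6457 * math.log10(intercept_um / 1000.0) - 3.298

# Example: ten 500-um test lines crossing 120 grain boundaries in total
l = mean_lineal_intercept(10 * 500.0, 120)        # ~41.7 um
print(round(l, 1), "um  ->  G =", round(astm_grain_size_number(l), 2))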
In the Knoop test, a diamond indenter in the shape of an elongated pyramid is forced into a polished (but not etched) surface under a predetermined load, typically 500 or 1000 g. The load is held for some amount of time, say 10 s, and the indenter is retracted. The indention long diagonal (d, μm, in Fig. 4) is measured under a microscope, and the Knoop hardness (HK) is calculated from the load (P, g) and the square of the diagonal length in the equations below. The constants account for the projected area of the indenter and unit conversion factors.

HK = 14,229·P/d² (kgf/mm²) and HK = 139.54·P/d² (GPa)

Most oxide ceramics have a Knoop hardness in the range of 1000–1500 kgf/mm² (10–15 GPa), and many carbides are over 2000 (20 GPa). The method is specified in ASTM C849, C1326 & E384. Microindentation hardness is also called simply microhardness. The hardness of very small particles and thin films of ceramics, on the order of 100 nm, can be measured by nanoindentation methods that use a Berkovich indenter. The toughness of ceramics can be determined from a Vickers test under a load of 10–20 kg. Toughness is the ability of a material to resist crack propagation. Several calculations have been formulated from the load (P), elastic modulus (E), microindentation hardness (H), crack length (c in Fig. 5) and flexural strength (σ). Modulus of rupture (MOR) bars with a rectangular cross-section are indented in three places on a polished surface. The bars are loaded in 4-point bending with the polished, indented surface in tension, until fracture. The fracture normally originates at one of the indentions. The crack lengths are measured under a microscope. The toughness of most ceramics is 2–4 MPa·m½, but toughened zirconia is as much as 13, and cemented carbides are often over 20. The toughness-by-indention methods have been discredited recently and are being replaced by more rigorous methods that measure crack growth in a notched beam in bending. The two common formulations are based on the initial crack length and on the indention strength in bending, respectively. References Further reading and external links Expert Guide: Materialography/Metallography, QATM Academy, ATM Qness GmbH, 2022. Metallographic Preparation of Ceramic and Cermet Materials, Leco Met-Tips No. 19, 2008. Sample Preparation of Ceramic Material, Buehler Ltd., 1990. Structure, Volume 33, Struers A/S, 1998, p 3–20. Struers Metalog Guide S. Binkowski, R. Paul & M. Woydt, "Comparing Preparation Techniques Using Microstructural Images of Ceramic Materials," Structure, Vol 39, 2002, p 8–19. R.E. Chinn, Ceramography, ASM International and the American Ceramic Society, 2002. D.J. Clinton, A Guide to Polishing and Etching of Technical and Engineering Ceramics, The Institute of Ceramics, 1987. Digital Library of Ceramic Microstructures, University of Dayton, 2003. G. Elssner, H. Hoven, G. Kiessler & P. Wellner, translated by R. Wert, Ceramics and Ceramic Composites: Materialographic Preparation, Elsevier Science Inc., 1999. R.M. Fulrath & J.A. Pask, ed., Ceramic Microstructures: Their Analysis, Significance, and Production, Robert E. Krieger Publishing Co., 1968. K. Geels in collaboration with D.B. Fowler, W-U Kopp & M. Rückert, Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing, ASTM International, 2007. H. Insley & V.D. Fréchette, Microscopy of Ceramics and Cements, Academic Press Inc., 1955. W.E. Lee and W.M. Rainforth, Ceramic Microstructures: Property Control by Processing, Chapman & Hall, 1994. I.J. 
McColm, Ceramic Hardness, Plenum Press, 2000. Micrograph Center, ASM International, 2005. H. Mörtel, "Microstructural Analysis," Engineered Materials Handbook, Volume 4: Ceramics and Glasses, ASM International, 1991, p 570–579. G. Petzow, Metallographic Etching, 2nd Edition, ASM International, 1999. G.D. Quinn, "Indentation Hardness Testing of Ceramics," ASM Handbook, Volume 8: Mechanical Testing and Evaluation, ASM International, 2000, p 244–251. A.T. Santhanam, "Metallography of Cemented Carbides," ASM Handbook Volume 9: Metallography and Microstructures, ASM International, 2004, p 1057–1066. U. Täffner, V. Carle & U. Schäfer, "Preparation and Microstructural Analysis of High-Performance Ceramics," ASM Handbook Volume 9: Metallography and Microstructures, ASM International, 2004, p 1057–1066. D.C. Zipperian, Metallographic Handbook, PACE Technologies, 2011. Ceramic engineering Metallurgy Microscopy Materials science Materials testing
Ceramography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,517
[ "Applied and interdisciplinary physics", "Metallurgy", "Materials science", "Materials testing", "nan", "Microscopy", "Ceramic engineering" ]
14,695,796
https://en.wikipedia.org/wiki/SPI1
Transcription factor PU.1 is a protein that in humans is encoded by the SPI1 gene. Function This gene encodes an ETS-domain transcription factor that activates gene expression during myeloid and B-lymphoid cell development. The nuclear protein binds to a purine-rich sequence known as the PU-box found on enhancers of target genes, and regulates their expression in coordination with other transcription factors and cofactors. The protein can also regulate alternative splicing of target genes. Multiple transcript variants encoding different isoforms have been found for this gene. The PU.1 transcription factor is essential for hematopoiesis and cell fate decisions. PU.1 can physically interact with a variety of regulatory factors like SWI/SNF, TFIID, GATA-2, GATA-1 and c-Jun. The protein-protein interactions between these factors can regulate PU.1-dependent cell fate decisions. PU.1 can modulate the expression of 3000 genes in hematopoietic cells including cytokines. It is expressed in monocytes, granulocytes, B and NK cells but is absent in T cells, reticulocytes and megakaryocytes. Its transcription is regulated by various mechanisms. PU.1 is an essential regulator of the pro-fibrotic system. PU.1 expression is perturbed in fibrotic diseases, resulting in upregulation of fibrosis-associated gene sets in fibroblasts. Disruption of PU.1 in fibrotic fibroblasts causes them to revert from a pro-fibrotic to a resting state. PU.1 is highly expressed in extracellular matrix-producing fibrotic fibroblasts, while it is downregulated in inflammatory/ECM-degrading and resting fibroblasts. The majority of the cells expressing PU.1 in fibrotic conditions are fibroblasts, with a few infiltrating lymphocytes. PU.1 induces the polarization of resting and inflammatory fibroblasts into fibrotic fibroblasts. Structure The ETS domain is the DNA-binding module of PU.1 and other ETS-family transcription factors. Interactions SPI1 has been shown to interact with: FUS, GATA2, IRF4, and NONO. References Further reading External links Transcription factors
SPI1
[ "Chemistry", "Biology" ]
506
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,696,512
https://en.wikipedia.org/wiki/Glycine%20receptor%2C%20alpha%201
Glycine receptor subunit alpha-1 is a protein that in humans is encoded by the GLRA1 gene. Function The inhibitory glycine receptor mediates postsynaptic inhibition in the spinal cord and other regions of the central nervous system. It is a pentameric receptor that can be assembled from alpha subunits alone or from alpha and beta subunits; the GLRA1 gene encodes the alpha-1 subunit of the receptor. Clinical significance Mutations in the gene have been associated with hyperekplexia, a neurologic syndrome associated with an exaggerated startle reaction. See also Glycine receptor Stiff person syndrome Hyperekplexia References Further reading External links Ion channels
Glycine receptor, alpha 1
[ "Chemistry" ]
128
[ "Neurochemistry", "Ion channels" ]
14,698,621
https://en.wikipedia.org/wiki/Neuropilin%201
Neuropilin-1 is a protein that in humans is encoded by the NRP1 gene. In humans, the neuropilin 1 gene is located at 10p11.22. This is one of two human neuropilins. Function NRP1 is a membrane-bound coreceptor to a tyrosine kinase receptor for both vascular endothelial growth factor (for example, VEGFA) and semaphorin (for example, SEMA3A) family members. NRP1 plays versatile roles in angiogenesis, axon guidance, cell survival, migration, and invasion. Interactions Neuropilin 1 has been shown to interact with Vascular endothelial growth factor A. Role in COVID-19 Research has shown that neuropilin 1 facilitates entry of SARS-CoV-2 into cells, making it a possible target for future antiviral drugs. Implication in cancer Neuropilin 1 has been implicated in the vascularization and progression of cancers. NRP1 expression has been shown to be elevated in a number of human patient tumor samples, including brain, prostate, breast, colon, and lung cancers and NRP1 levels are positively correlated with metastasis. In prostate cancer NRP1 has been demonstrated to be an androgen-suppressed gene, upregulated during the adaptive response of prostate tumors to androgen-targeted therapies and a prognostic biomarker of clinical metastasis and lethal PCa. In vitro and in vivo mouse studies have shown membrane bound NRP1 to be proangiogenic and that NRP1 promotes the vascularization of prostate tumors. Elevated NRP1 expression is also correlated with the invasiveness of non-small cell lung cancer both in vitro and in vivo. Target for cancer therapies As a co-receptor for VEGF, NRP1 is a potential target for cancer therapies. A synthetic peptide, EG3287, was generated in 2005 and has been shown to block NRP1 activity. EG3287 has been shown to induce apoptosis in tumor cells with elevated NRP1 expression. A patent for EG3287 was filed in 2002 and approved in 2003. As of 2015 there were no clinical trials ongoing or completed for EG3287 as a human cancer therapy. Soluble NRP1 has the opposite effect of membrane bound NRP1 and has anti-VEGF activity. In vivo mouse studies have shown that injections of sNRP-1 inhibits progression of acute myeloid leukemia in mice. References Further reading Proteins
Neuropilin 1
[ "Chemistry" ]
542
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,698,685
https://en.wikipedia.org/wiki/Metal-induced%20gap%20states
In bulk semiconductor band structure calculations, it is assumed that the crystal lattice (which features a periodic potential due to the atomic structure) of the material is infinite. When the finite size of a crystal is taken into account, the wavefunctions of electrons are altered and states that are forbidden within the bulk semiconductor gap are allowed at the surface. Similarly, when a metal is deposited onto a semiconductor (by thermal evaporation, for example), the wavefunction of an electron in the semiconductor must match that of an electron in the metal at the interface. Since the Fermi levels of the two materials must match at the interface, there exist gap states that decay deeper into the semiconductor. Band-bending at the metal-semiconductor interface As mentioned above, when a metal is deposited onto a semiconductor, even when the metal film is as thin as a single atomic layer, the Fermi levels of the metal and semiconductor must match. This pins the Fermi level in the semiconductor to a position in the bulk gap. Shown to the right is a diagram of band-bending interfaces between two different metals (high and low work functions) and two different semiconductors (n-type and p-type). Volker Heine was one of the first to estimate the length of the tail end of metal electron states extending into the semiconductor's energy gap. He calculated the variation in surface state energy by matching wavefunctions of a free-electron metal to gapped states in an undoped semiconductor, showing that in most cases the position of the surface state energy is quite stable regardless of the metal used. Branching point It is somewhat crude to suggest that the metal-induced gap states (MIGS) are tail ends of metal states that leak into the semiconductor. Since the mid-gap states do exist within some depth of the semiconductor, they must be a mixture (a Fourier series) of valence and conduction band states from the bulk. The resulting positions of these states, as calculated by C. Tejedor, F. Flores and E. Louis, and J. Tersoff, must be closer to either the valence- or conduction- band thus acting as acceptor or donor dopants, respectively. The point that divides these two types of MIGS is called the branching point, E_B. Tersoff argued that E_B = (Ē_v + E_c,ind)/2, where Ē_v = E_v + Δ_so/3; Δ_so is the spin orbit splitting of E_v at the Γ point, and E_c,ind is the indirect conduction band minimum. Metal–semiconductor contact point barrier height In order for the Fermi levels to match at the interface, there must be charge transfer between the metal and semiconductor. The amount of charge transfer was formulated by Linus Pauling and later revised to take the form δq = 0.16|X_M − X_S| + 0.035|X_M − X_S|², where X_M and X_S are the electronegativities of the metal and semiconductor, respectively. The charge transfer produces a dipole at the interface and thus a potential barrier called the Schottky barrier height. In the same derivation of the branching point mentioned above, Tersoff derives the barrier height in terms of E_B together with a parameter adjustable for the specific metal, dependent mostly on its electronegativity. Tersoff showed that the experimentally measured barrier heights fit his theoretical model for Au in contact with 10 common semiconductors, including Si, Ge, GaP, and GaAs. Another derivation of the contact barrier height in terms of experimentally measurable parameters was worked out by Federico Garcia-Moliner and Fernando Flores, who considered the density of states and dipole contributions more rigorously. 
In this treatment, the barrier height depends on the charge densities of both materials, the density of surface states, the work function of the metal, the sum of dipole contributions (including dipole corrections to the jellium model), the semiconductor gap, and E_F − E_v in the semiconductor. The barrier height can thus be calculated by theoretically deriving or experimentally measuring each parameter. Garcia-Moliner and Flores also discuss two limits: the Bardeen limit, where the high density of interface states pins the Fermi level at that of the semiconductor regardless of the metal work function, and the Schottky limit, where the barrier height varies strongly with the characteristics of the metal, including the particular lattice structure. Applications When a bias voltage is applied across the interface of an n-type semiconductor and a metal, the Fermi level in the semiconductor is shifted with respect to the metal's and the band bending decreases. In effect, the capacitance across the depletion layer in the semiconductor is bias voltage dependent and, for uniform doping, goes as C ∝ (V_bi − V)^(−1/2), where V_bi is the built-in potential. This makes the metal/semiconductor junction useful in varactor devices used frequently in electronics. References Electronic band structures Semiconductor structures
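The bias dependence noted in the applications section can be sketched numerically; a minimal Python example for a uniformly doped, abrupt one-sided junction (the zero-bias capacitance and built-in potential below are illustrative assumptions, not values from the text):

import math

def depletion_capacitance(v_bias, c_j0=1e-12, v_bi=0.7):
    """Junction capacitance C(V) = C_j0 / sqrt(1 - V/V_bi), valid for a
    uniformly doped, abrupt one-sided junction with V < V_bi.
    v_bias is negative for reverse bias."""
    return c_j0 / math.sqrt(1.0 - v_bias / v_bi)

# Increasing reverse bias widens the depletion layer and lowers the
# capacitance -- the varactor effect used for voltage-controlled tuning.
for v in (0.0, -1.0, -3.0, -10.0):
    print(v, depletion_capacitance(v))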
Metal-induced gap states
[ "Physics", "Chemistry", "Materials_science" ]
912
[ "Electron", "Electronic band structures", "Condensed matter physics" ]
14,699,013
https://en.wikipedia.org/wiki/Dealkalization
Dealkalization is a process of surface modification applicable to glasses containing alkali ions, wherein a thin surface layer is created that has a lower concentration of alkali ions than is present in the underlying, bulk glass. This change in surface composition commonly alters the observed properties of the surface, most notably enhancing corrosion resistance. Many commercial glass products such as containers are made of soda-lime glass, and therefore have a substantial percentage of sodium ions in their internal structure. Since sodium is an alkali element, its selective removal from the surface results in a dealkalized surface. A classic example of dealkalization is the treatment of glass containers, where a special process is used to create a dealkalized inside surface that is more resistant to interactions with liquid products put inside the container. However, the term dealkalization may also be generally applied to any process where a glass surface forms a thin surface layer that is depleted of alkali ions relative to the bulk. A common example is the initial stages of glass corrosion or weathering, where alkali ions are leached from the surface region by interactions with water, forming a dealkalized surface layer. A dealkalized surface may have either no alkali remaining or may just have less than the bulk. In silicate glasses, dealkalized surfaces are also often considered "silica-rich" since the selective removal of alkali ions can be thought to leave behind a surface composed primarily of silica (SiO2). To be precise, dealkalization does not generally involve the outright removal of alkali from the glass, but rather its replacement with protons (H+) or hydronium ions (H3O+) in the structure through the process of ion-exchange. Treatment of glass containers Motivation For glass containers, the goal of surface dealkalization is to render the inside surface of the container more resistant to interactions with liquid products later put inside it. Since the treatment is directed primarily at changing the properties of the inside surface in contact with the product, it is also referred to as "internal treatment". The most common example of its use with containers is on bottles intended to hold alcoholic spirits. The reason for this is that some alcoholic spirits such as vodka and gin have an approximately neutral pH and a high alcohol content, but are not buffered in any way against changes in pH. If alkali is leached from the glass into the product, the pH will begin to rise (i.e. become more alkaline), can eventually reach a pH high enough that the solution begins to attack the glass itself quite effectively. By this mechanism, initially neutral alcohol products can achieve a pH where the glass container itself begins to slowly dissolve, leaving thin, siliceous glass flakes or particles in the fluid. Dealkalization treatment hinders this process by removing alkali from the inside surface. Not only does this mean less extractable alkali in the glass surface directly contacting the product, but it also creates a barrier for the diffusion of alkali from the underlying bulk glass into the product. The same logic applies in pharmaceutical glass items such as vials that are intended to hold medicinal products. While many of these items are composed of more durable borosilicate glass, they are also at times dealkalized in order to minimize the possibility of alkali leaching from the glass into the product. 
This action helps to avoid undesired changes in pH or ionic strength of the solution, which not only inhibits eventual attack of the glass as previously described, but can also be important in maintaining the efficacy or stability of sensitive product formulations. Dealkalization methods Dealkalizing glass containers is accomplished by exposing the glass surface to reactive sulfur- or fluorine-containing compounds during the manufacturing process. A rapid ion-exchange reaction proceeds that depletes the inside surface of alkali, and is performed when the glass is at high temperature, usually on the order of 500–650 °C or greater. Historically, sulfur-containing compounds were the first materials used to dealkalize glass containers. Dealkalization proceeds through the interdiffusion/ion-exchange of Na+ out of the glass and H+/H3O+ into the glass, along with the subsequent reaction of the sulfate species with available sodium at the surface to form sodium sulfate (Na2SO4). The latter is left behind as water-soluble crystalline deposits, or bloom, on the glass surface that must be rinsed away prior to filling. On manufacturing lines, one way in which this process was done was by flooding the annealing lehr with sulfur dioxide (SO2) or sulfur trioxide (SO3) gases—especially in the presence of water, which enhances the reaction. However, this practice fell into disfavor due to environmental and health concerns regarding SOx-type gases. An alternative method for sulfate treatment is with solid ammonium sulfate salt or aqueous solutions thereof. These materials are introduced inside the container after forming and decompose into gases in the annealing lehr, where the resulting sulfur-containing gas mixture carries out the dealkalization reaction. This method is purportedly safer than flooding the annealing lehr since the unreacted components in the gas mixture will tend not to escape to the atmosphere, but rather react with each other and recreate the original salt in the container that can later be rinsed away. Treatment with fluorine-containing compounds is typically accomplished through the injection of a fluorinated gas mixture (e.g. 1,1-difluoroethane mixed with air) into bottles at high temperatures. The gas can be delivered to the container either in the air used in the forming process (i.e. during the final blow of the container into its desired shape), or with a nozzle directing a stream of the gas down into the mouth of the bottle as it passes on a conveyor belt after forming but before annealing. The mixture gently combusts inside the bottle, creating an extremely small dose of hydrofluoric acid that reacts with the glass surface and serves to dealkalize it. The resultant surface is virtually free from any residues of the process. This treatment is also known as the Ball I.T. process (I.T. standing for internal treatment) as Ball Corporation held the patent and developed the first commercially available system implementing this process. Testing for dealkalization Routine tests for surface dealkalization in the glass container industry all generally aim to evaluate the amount of alkali extracted from the glass when it is rinsed with or exposed to purified water. For example, dealkalization can be quickly checked by introducing a small volume of distilled water to a freshly made bottle and rolling the bottle gently to pass the water completely over its inside surface. 
The pH of the rinse water is then measured; untreated containers will tend to yield a slightly alkaline pH in the 8-9 range due to extracted alkali, while dealkalized containers tend to yield a pH that remains approximately neutral. A much more thorough version of this test is outlined in various international and domestic testing standards for glass containers, all with comparable methodologies. These tests evaluate the hydrolytic stability of the containers under more severe conditions, wherein containers, filled close to capacity with purified water, are covered and then heat-cycled in an autoclave at 121 °C for 1 hour. After cooling to room temperature, the water is titrated with acid to evaluate the pH of the water, and therefore the equivalent amount of alkali extracted during the heat cycle. The alkali content of the rinse water can also be evaluated more directly by chemical analysis of the rinse water, as outlined in more recent versions of the European Pharmacopoeia. According to the Pharmacopoeia standards, internally treated or dealkalized soda-lime glass containers are designated as "Type II" containers, thus setting them apart from their untreated counterparts due to their improved resistance to product interactions (as opposed to "Type III", which is standard, untreated soda-lime glass, or "Type I", which is reserved for highly resistant borosilicate glass). While not routine, dealkalization can also be measured in a variety of other ways. Since dealkalized surfaces are more chemically durable, they are also more resistant to weathering reactions, and appropriate evaluation of this parameter can give indirect evidence of a previously dealkalized surface. It is also possible to evaluate dealkalization through the use of advanced, surface analytical techniques such as SIMS or XPS, which give direct measurements of glass surface composition. See also Corrosion of glasses Glass Glass container industry Soda-lime glass Surface science Glass disease References Glass coating and surface modification Glass chemistry Packaging Containers
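The titration step described above lends itself to a quick calculation; a minimal Python sketch converting titrant volume to extracted alkali expressed as Na2O equivalents (the acid concentration and volume in the example are illustrative assumptions, not values from the cited standards):

def extracted_na2o_mg(titrant_ml, acid_molarity):
    """Alkali extracted into the rinse water, expressed as mg of Na2O.
    Neutralization: Na2O + 2 HCl -> 2 NaCl + H2O, so each mole of HCl
    consumed corresponds to half a mole of Na2O (molar mass 61.98 g/mol)."""
    mol_hcl = titrant_ml / 1000.0 * acid_molarity
    return (mol_hcl / 2.0) * 61.98 * 1000.0

# Example: 0.85 mL of 0.01 M HCl needed to neutralize the rinse water
print(round(extracted_na2o_mg(0.85, 0.01), 3))   # ~0.263 mg Na2O equivalent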
Dealkalization
[ "Chemistry", "Materials_science", "Engineering" ]
1,791
[ "Glass engineering and science", "Glass chemistry", "Coatings", "Glass coating and surface modification" ]
14,700,395
https://en.wikipedia.org/wiki/EPH%20receptor%20A2
EPH receptor A2 (ephrin type-A receptor 2) is a protein that in humans is encoded by the EPHA2 gene. Function This gene belongs to the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into two groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. This gene encodes a protein that binds ephrin-A ligands. Clinical significance It may be implicated in BRAF mutated melanomas becoming resistant to BRAF-inhibitors and MEK inhibitors. It is also the receptor by which Kaposi's sarcoma-associated herpesvirus (KSHV) enters host cells; small molecule inhibitors of EphA2 have shown some ability to block KSHV entry into human cells. Interactions EPH receptor A2 has been shown to interact with: Ephrin A1 ACP1 Grb2, PIK3R1, and SHC1. It was also shown that doxazosin is a small molecule agonist of EPH receptor A2. See also Vasculogenic mimicry References Further reading External links Tyrosine kinase receptors
EPH receptor A2
[ "Chemistry" ]
316
[ "Tyrosine kinase receptors", "Signal transduction" ]
14,703,145
https://en.wikipedia.org/wiki/Aubin%E2%80%93Lions%20lemma
In mathematics, the Aubin–Lions lemma (or theorem) is a result in the theory of Sobolev spaces of Banach space-valued functions, which provides a compactness criterion that is useful in the study of nonlinear evolutionary partial differential equations. Typically, to prove the existence of solutions one first constructs approximate solutions (for example, by a Galerkin method or by mollification of the equation), then uses the compactness lemma to show that there is a convergent subsequence of approximate solutions whose limit is a solution. The result is named after the French mathematicians Jean-Pierre Aubin and Jacques-Louis Lions. In the original proof by Aubin, the spaces X0 and X1 in the statement of the lemma were assumed to be reflexive, but this assumption was removed by Simon, so the result is also referred to as the Aubin–Lions–Simon lemma. Statement of the lemma Let X0, X and X1 be three Banach spaces with X0 ⊆ X ⊆ X1. Suppose that X0 is compactly embedded in X and that X is continuously embedded in X1. For 1 ≤ p, q ≤ ∞ and T > 0, let

W = {u ∈ L^p([0, T]; X0) : du/dt ∈ L^q([0, T]; X1)}.

(i) If p < ∞, then the embedding of W into L^p([0, T]; X) is compact. (ii) If p = ∞ and q > 1, then the embedding of W into C([0, T]; X) is compact. See also Lions–Magenes lemma Notes References (Theorem II.5.16) (Sect.7.3) (Proposition III.1.3) Banach spaces Theorems in functional analysis Lemmas in analysis Measure theory
Aubin–Lions lemma
[ "Mathematics" ]
316
[ "Lemmas", "Theorems in mathematical analysis", "Theorems in functional analysis", "Lemmas in mathematical analysis" ]
14,703,193
https://en.wikipedia.org/wiki/Type%20inhabitation
In type theory, a branch of mathematical logic, in a given typed calculus, the type inhabitation problem for this calculus is the following problem: given a type τ and a typing environment Γ, does there exist a λ-term M such that Γ ⊢ M : τ? With an empty type environment, such an M is said to be an inhabitant of τ. Relationship to logic In the case of simply typed lambda calculus, a type has an inhabitant if and only if its corresponding proposition is a tautology of minimal implicative logic. Similarly, a System F type has an inhabitant if and only if its corresponding proposition is a tautology of intuitionistic second-order logic. Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types. Formal properties For most typed calculi, the type inhabitation problem is very hard. Richard Statman proved that for simply typed lambda calculus the type inhabitation problem is PSPACE-complete. For other calculi, like System F, the problem is even undecidable. See also Curry–Howard isomorphism References Lambda calculus Type theory
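Because inhabitation in the simply typed lambda calculus coincides with provability in minimal implicative logic, it can be decided by a goal-directed proof search with loop checking on (context, goal) pairs, which terminates because contexts only grow within a finite set of subformulas. A minimal Python sketch of this decision procedure (types are encoded as nested tuples; this naive search illustrates decidability, not an optimized PSPACE algorithm):

def inhabited(goal, ctx=frozenset(), seen=frozenset()):
    """Decide inhabitation of `goal` (an atom string, or ('->', a, b))
    in context `ctx`, for the implicational fragment only."""
    premises = []
    while isinstance(goal, tuple):          # peel A1 -> ... -> An -> p
        premises.append(goal[1])
        goal = goal[2]
    ctx = ctx | frozenset(premises)         # lambda-abstract the arguments
    key = (ctx, goal)
    if key in seen:                         # loop check along this branch
        return False
    seen = seen | {key}
    for t in ctx:                           # try to apply each hypothesis
        args, head = [], t
        while isinstance(head, tuple):
            args.append(head[1])
            head = head[2]
        if head == goal and all(inhabited(a, ctx, seen) for a in args):
            return True
    return False

arrow = lambda a, b: ('->', a, b)
print(inhabited(arrow('a', 'a')))                              # True: \x. x
print(inhabited(arrow('a', arrow(arrow('a', 'b'), 'b'))))      # True: \x f. f x
print(inhabited(arrow(arrow(arrow('a', 'b'), 'a'), 'a')))      # False: Peirce's law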
Type inhabitation
[ "Mathematics" ]
240
[ "Type theory", "Mathematical logic", "Mathematical structures", "Mathematical objects" ]
14,703,220
https://en.wikipedia.org/wiki/Pentosidine
Pentosidine is a biomarker for advanced glycation endproducts, or AGEs. It is a well characterized and easily detected member of this large class of compounds. Background AGEs are biochemicals formed continuously under normal circumstances, but more rapidly under a variety of stresses, especially oxidative stress and hyperglycemia. They serve as markers of stress and act as toxins themselves. Pentosidine is typical of the class, except that it fluoresces, which allows it to be seen and measured easily. Because it is well characterized, it is often studied to provide new insight into the biochemistry of AGE compounds in general. Biochemistry Derived from ribose, a pentose, pentosidine forms fluorescent cross-links between the arginine and lysine residues in collagen. It is formed in a reaction of the amino acids with the Maillard reaction products of ribose. Although it is present only in trace concentrations among tissue proteins, it is useful for assessing cumulative damage to proteins—advanced glycation endproducts—by non-enzymatic browning reactions with carbohydrates. Physiology In vivo, AGEs form pentosidine through sugar fragmentation. In patients with diabetes mellitus type 2, pentosidine correlates with the presence and severity of diabetic complications. References Biomolecules Guanidines Imidazopyridines Biomarkers Advanced glycation end-products
Pentosidine
[ "Chemistry", "Biology" ]
303
[ "Carbohydrates", "Biomarkers", "Natural products", "Biochemistry", "Guanidines", "Functional groups", "Organic compounds", "Senescence", "Biomolecules", "Molecular biology", "Structural biology", "Advanced glycation end-products" ]
14,703,713
https://en.wikipedia.org/wiki/Shock-capturing%20method
In computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. The computation of flow containing shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables such as pressure, temperature, density, and velocity across the shock. Method In shock-capturing methods, the governing equations of inviscid flows (i.e. Euler equations) are cast in conservation form and any shock waves or discontinuities are computed as part of the solution. Here, no special treatment is employed to take care of the shocks themselves, which is in contrast to the shock-fitting method, where shock waves are explicitly introduced in the solution using appropriate shock relations (Rankine–Hugoniot relations). The shock waves predicted by shock-capturing methods are generally not sharp and may be smeared over several grid elements. Also, classical shock-capturing methods have the disadvantage that unphysical oscillations (Gibbs phenomenon) may develop near strong shocks. Euler equations The Euler equations are the governing equations for inviscid flow. To implement shock-capturing methods, the conservation form of the Euler equations is used. For a flow without external heat transfer and work transfer (isoenergetic flow), the conservation form of the Euler equations in a Cartesian coordinate system can be written as

∂U/∂t + ∂F/∂x + ∂G/∂y + ∂H/∂z = 0

where the vectors U, F, G, and H are given by

U = (ρ, ρu, ρv, ρw, ρe_t)ᵀ
F = (ρu, ρu² + p, ρuv, ρuw, (ρe_t + p)u)ᵀ
G = (ρv, ρuv, ρv² + p, ρvw, (ρe_t + p)v)ᵀ
H = (ρw, ρuw, ρvw, ρw² + p, (ρe_t + p)w)ᵀ

where e_t is the total energy (internal energy + kinetic energy + potential energy) per unit mass. That is

e_t = e + (u² + v² + w²)/2 + gz.

The Euler equations may be integrated with any of the shock-capturing methods available to obtain the solution. Classical and modern shock capturing methods From a historical point of view, shock-capturing methods can be classified into two general categories: classical methods and modern shock capturing methods (also called high-resolution schemes). Modern shock-capturing methods are generally upwind biased in contrast to classical symmetric or central discretizations. Upwind-biased differencing schemes attempt to discretize hyperbolic partial differential equations by using differencing based on the direction of the flow. On the other hand, symmetric or central schemes do not consider any information about the direction of wave propagation. Regardless of the shock-capturing scheme used, a stable calculation in the presence of shock waves requires a certain amount of numerical dissipation, in order to avoid the formation of unphysical numerical oscillations. In the case of classical shock-capturing methods, numerical dissipation terms are usually linear and the same amount is uniformly applied at all grid points. Classical shock-capturing methods only exhibit accurate results in the case of smooth and weak shock solutions, but when strong shock waves are present in the solution, non-linear instabilities and oscillations may arise across discontinuities. Modern shock-capturing methods usually employ nonlinear numerical dissipation, where a feedback mechanism adjusts the amount of artificial dissipation added in accord with the features in the solution. Ideally, artificial numerical dissipation needs to be added only in the vicinity of shocks or other sharp features, and regions of smooth flow must be left unmodified. These schemes have proven to be stable and accurate even for problems containing strong shock waves. 
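The conservative-discretization idea can be illustrated on the simplest nonlinear model problem, the inviscid Burgers equation u_t + (u²/2)_x = 0. A minimal first-order shock-capturing sketch in Python using the Rusanov (local Lax–Friedrichs) flux — a classical scheme with simple, uniform numerical dissipation; the grid size, CFL number and Riemann initial data are illustrative choices:

import numpy as np

N, L = 400, 2.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)        # Riemann data: a right-moving shock

t, T, cfl = 0.0, 1.0, 0.9
while t < T:
    dt = min(cfl * dx / max(abs(u).max(), 1e-12), T - t)
    ue = np.concatenate(([u[0]], u, [u[-1]]))          # outflow ghost cells
    f = 0.5 * ue ** 2                                  # Burgers flux f(u) = u^2/2
    a = np.maximum(np.abs(ue[:-1]), np.abs(ue[1:]))    # local max wave speed
    # Rusanov numerical flux: central average plus dissipation scaled by a
    flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (ue[1:] - ue[:-1])
    u -= dt / dx * (flux[1:] - flux[:-1])              # conservative update
    t += dt

# Rankine-Hugoniot: shock speed = (uL + uR)/2 = 0.5, so at T = 1 the
# captured (smeared) shock should sit near x = 1.0
print(x[np.argmin(np.abs(u - 0.5))])

Because the update is in conservation form, the shock is captured at the correct Rankine–Hugoniot speed even though it is smeared over a few cells, which is exactly the behavior of classical shock-capturing methods described above.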
Some of the well-known classical shock-capturing methods include the MacCormack method (a predictor–corrector discretization scheme for the numerical solution of hyperbolic partial differential equations), the Lax–Wendroff method (a finite-difference scheme for hyperbolic partial differential equations), and the Beam–Warming method. Examples of modern shock-capturing schemes include higher-order total variation diminishing (TVD) schemes first proposed by Harten, the flux-corrected transport scheme introduced by Boris and Book, Monotonic Upstream-centered Schemes for Conservation Laws (MUSCL) based on the Godunov approach and introduced by van Leer, various essentially non-oscillatory schemes (ENO) proposed by Harten et al., and the piecewise parabolic method (PPM) proposed by Colella and Woodward. Another important class of high-resolution schemes belongs to the approximate Riemann solvers proposed by Roe and by Osher. The schemes proposed by Jameson and Baker, where linear numerical dissipation terms depend on nonlinear switch functions, fall in between the classical and modern shock-capturing methods. References Books Anderson, J. D., "Modern Compressible Flow with Historical Perspective", McGraw-Hill (2004). Hirsch, C., "Numerical Computation of Internal and External Flows", Vol. II, 2nd ed., Butterworth-Heinemann (2007). Laney, C. B., "Computational Gasdynamics", Cambridge Univ. Press (1998). LeVeque, R. J., "Numerical Methods for Conservation Laws", Birkhauser-Verlag (1992). Tannehill, J. C., Anderson, D. A., and Pletcher, R. H., "Computational Fluid Dynamics and Heat Transfer", 2nd ed., Taylor & Francis (1997). Toro, E. F., "Riemann Solvers and Numerical Methods for Fluid Dynamics", 2nd ed., Springer-Verlag (1999). Technical papers Boris, J. P. and Book, D. L., "Flux-Corrected Transport III. Minimal Error FCT Algorithms", J. Comput. Phys., 20, 397–431 (1976). Colella, P. and Woodward, P., "The Piecewise Parabolic Method (PPM) for Gasdynamical Simulations", J. Comput. Phys., 54, 174–201 (1984). Godunov, S. K., "A Difference Scheme for Numerical Computation of Discontinuous Solution of Hyperbolic Equations", Mat. Sbornik, 47, 271–306 (1959). Harten, A., "High Resolution Schemes for Hyperbolic Conservation Laws", J. Comput. Phys., 49, 357–393 (1983). Harten, A., Engquist, B., Osher, S., and Chakravarthy, S. R., "Uniformly High Order Accurate Essentially Non-Oscillatory Schemes III", J. Comput. Phys., 71, 231–303 (1987). Jameson, A. and Baker, T., "Solution of the Euler Equations for Complex Configurations", AIAA Paper, 83–1929 (1983). MacCormack, R. W., "The Effect of Viscosity in Hypervelocity Impact Cratering", AIAA Paper, 69–354 (1969). Roe, P. L., "Approximate Riemann Solvers, Parameter Vectors and Difference Schemes", J. Comput. Phys., 43, 357–372 (1981). Shu, C.-W., Osher, S., "Efficient Implementation of Essentially Non-Oscillatory Shock Capturing Schemes", J. Comput. Phys., 77, 439–471 (1988). van Leer, B., "Towards the Ultimate Conservative Difference Scheme V; A Second-Order Sequel to Godunov's Method", J. Comput. Phys., 32, 101–136 (1979). Computational fluid dynamics Numerical differential equations Aerodynamics
Shock-capturing method
[ "Physics", "Chemistry", "Engineering" ]
1,539
[ "Computational fluid dynamics", "Aerodynamics", "Computational physics", "Aerospace engineering", "Fluid dynamics" ]
348,029
https://en.wikipedia.org/wiki/Virasoro%20algebra
In mathematics, the Virasoro algebra is a complex Lie algebra and the unique nontrivial central extension of the Witt algebra. It is widely used in two-dimensional conformal field theory and in string theory. Structure The Virasoro algebra is spanned by generators L_n for n ∈ ℤ and the central charge c. These generators satisfy [c, L_n] = 0 and

[L_m, L_n] = (m − n)L_{m+n} + (c/12)(m³ − m)δ_{m+n,0}.

The factor of 1/12 is merely a matter of convention. For a derivation of the algebra as the unique central extension of the Witt algebra, see derivation of the Virasoro algebra. The Virasoro algebra has a presentation in terms of two generators (e.g. L_3 and L_{−2}) and six relations. The generators L_n with n > 0 are called annihilation modes, while those with n < 0 are creation modes. A basis of creation generators of the Virasoro algebra's universal enveloping algebra is the set

{L_{n1} L_{n2} ⋯ L_{nk} : n1 ≤ n2 ≤ ⋯ ≤ nk ≤ −1}.

For L = L_{n1} L_{n2} ⋯ L_{nk} in this set, let |L| = −(n1 + n2 + ⋯ + nk); then [L_0, L] = |L|L. Representation theory In any indecomposable representation of the Virasoro algebra, the central generator c of the algebra takes a constant value, also denoted c and called the representation's central charge. A vector v in a representation of the Virasoro algebra has conformal dimension (or conformal weight) h if it is an eigenvector of L_0 with eigenvalue h: L_0 v = hv. An L_0-eigenvector v is called a primary state (of dimension h) if it is annihilated by the annihilation modes, L_n v = 0 for n > 0. Highest weight representations A highest weight representation of the Virasoro algebra is a representation generated by a primary state v. A highest weight representation is spanned by the L_0-eigenstates Lv, with L running over the creation basis above. The conformal dimension of Lv is h + |L|, where |L| is called the level of Lv. Any state whose level is not zero is called a descendant state of v. For any c, h ∈ ℂ, the Verma module V(c, h) of central charge c and conformal dimension h is the representation whose basis is {Lv}, for a primary state v of dimension h. The Verma module is the largest possible highest weight representation. The Verma module is indecomposable, and for generic values of (c, h) it is also irreducible. When it is reducible, there exist other highest weight representations with these values of (c, h), called degenerate representations, which are quotients of the Verma module. In particular, the unique irreducible highest weight representation with these values of (c, h) is the quotient of the Verma module by its maximal submodule. A Verma module is irreducible if and only if it has no singular vectors. Singular vectors A singular vector or null vector of a highest weight representation is a state that is both descendant and primary. A sufficient condition for the Verma module V(c, h) to have a singular vector is h = h_{r,s}(c) for some positive integers r, s, where, writing c = 1 + 6(b + b⁻¹)²,

h_{r,s} = ¼[(b + b⁻¹)² − (rb + sb⁻¹)²].

Then the singular vector has level rs and conformal dimension h_{r,s} + rs. For rs ≤ 2, the singular vectors can be written as L_{r,s}v for the primary state v of V(c, h_{r,s}), with L_{1,1} = L_{−1}, L_{2,1} = L_{−1}² + b²L_{−2}, and L_{1,2} = L_{−1}² + b⁻²L_{−2}. Singular vectors for arbitrary rs may be computed using various algorithms, and their explicit expressions are known. For generic central charge, V(c, h) has a singular vector at level N if and only if h = h_{r,s} with rs = N. For the special values of the central charge that correspond to minimal models, there can also exist a second singular vector at a higher level, which is then a descendant of another singular vector at a lower level. The integers r, s that appear in h_{r,s} are called Kac indices. It can be useful to use non-integer Kac indices for parametrizing the conformal dimensions of Verma modules that do not have singular vectors, for example in the critical random cluster model. Shapovalov form For any c ∈ ℂ, the involution L_n → L_{−n} defines an automorphism σ of the Virasoro algebra and of its universal enveloping algebra. Then the Shapovalov form is the symmetric bilinear form on the Verma module V(c, h) such that ⟨Lv, L′v⟩ is the coefficient of v in σ(L)L′v. 
The inverse Shapovalov form is relevant to computing Virasoro conformal blocks, and can be determined in terms of singular vectors. The determinant of the Shapovalov form at a given level N is given by the Kac determinant formula,

A_N ∏_{r,s ≥ 1, rs ≤ N} (h − h_{r,s})^{p(N − rs)},

where p is the partition function, and A_N is a positive constant that does not depend on h or c. Hermitian form and unitarity If c and h are real, a highest weight representation with conformal dimension h has a unique Hermitian form such that the Hermitian adjoint of L_n is L_{−n} and the norm of the primary state is one. In the basis {Lv}, the Hermitian form on the Verma module has the same matrix as the Shapovalov form, now interpreted as a Gram matrix. The representation is called unitary if that Hermitian form is positive definite. Since any singular vector has zero norm, all unitary highest weight representations are irreducible. An irreducible highest weight representation is unitary if and only if either c ≥ 1 and h ≥ 0, or c takes one of the discrete values c = 1 − 6/(m(m + 1)) with m = 2, 3, 4, … and h takes one of the values h = [((m + 1)r − ms)² − 1]/[4m(m + 1)] with 1 ≤ s ≤ r ≤ m − 1. Daniel Friedan, Zongan Qiu, and Stephen Shenker showed that these conditions are necessary, and Peter Goddard, Adrian Kent, and David Olive used the coset construction or GKO construction (identifying unitary representations of the Virasoro algebra within tensor products of unitary representations of affine Kac–Moody algebras) to show that they are sufficient. Characters The character of a representation R of the Virasoro algebra is the function

χ_R(q) = Tr_R q^{L_0 − c/24}.

The character of the Verma module V(c, h) is

χ(q) = q^{h − c/24} / ∏_{n=1}^{∞} (1 − qⁿ) = q^{h + (1 − c)/24} / η(q),

where η is the Dedekind eta function. For any c and for h = h_{r,s}(c), the Verma module V(c, h_{r,s}) is reducible due to the existence of a singular vector at level rs. This singular vector generates a submodule, which is isomorphic to the Verma module V(c, h_{r,s} + rs). The quotient of V(c, h_{r,s}) by this submodule is irreducible if V(c, h_{r,s}) does not have other singular vectors, and its character is

χ(q) = (1 − q^{rs}) q^{h_{r,s} − c/24} / ∏_{n=1}^{∞} (1 − qⁿ).

Let c = 1 − 6(p − q)²/(pq) with p and q coprime integers, and 1 ≤ r ≤ q − 1 and 1 ≤ s ≤ p − 1. (Then (r, s) is in the Kac table of the corresponding minimal model). The Verma module V(c, h_{r,s}) has infinitely many singular vectors, and is therefore reducible with infinitely many submodules. This Verma module has an irreducible quotient by its largest nontrivial submodule. (The spectrums of minimal models are built from such irreducible representations.) The character of the irreducible quotient is given by the Rocha-Caridi formula

χ_{r,s}(q) = (1/η(q)) Σ_{k∈ℤ} [ q^{(2pqk + pr − qs)²/(4pq)} − q^{(2pqk + pr + qs)²/(4pq)} ].

This expression is an infinite sum because the two maximal submodules have a nontrivial intersection, which is itself a complicated submodule. Applications Conformal field theory In two dimensions, the algebra of local conformal transformations is made of two copies of the Witt algebra. It follows that the symmetry algebra of two-dimensional conformal field theory is the Virasoro algebra. Technically, the conformal bootstrap approach to two-dimensional CFT relies on Virasoro conformal blocks, special functions that include and generalize the characters of representations of the Virasoro algebra. String theory Since the Virasoro algebra comprises the generators of the conformal group of the worldsheet, the stress tensor in string theory obeys the commutation relations of (two copies of) the Virasoro algebra. This is because the conformal group decomposes into separate diffeomorphisms of the forward and back lightcones. Diffeomorphism invariance of the worldsheet implies additionally that the stress tensor vanishes. This is known as the Virasoro constraint, and in the quantum theory, cannot be applied to all the states in the theory, but rather only on the physical states (compare Gupta–Bleuler formalism). Generalizations Super Virasoro algebras There are two supersymmetric N = 1 extensions of the Virasoro algebra, called the Neveu–Schwarz algebra and the Ramond algebra. 
Their theory is similar to that of the Virasoro algebra, now involving Grassmann numbers. There are further extensions of these algebras with more supersymmetry, such as the N = 2 superconformal algebra. W-algebras W-algebras are associative algebras which contain the Virasoro algebra, and which play an important role in two-dimensional conformal field theory. Among W-algebras, the Virasoro algebra has the particularity of being a Lie algebra. Affine Lie algebras The Virasoro algebra is a subalgebra of the universal enveloping algebra of any affine Lie algebra, as shown by the Sugawara construction. In this sense, affine Lie algebras are extensions of the Virasoro algebra. Meromorphic vector fields on Riemann surfaces The Virasoro algebra is a central extension of the Lie algebra of meromorphic vector fields with two poles on a genus 0 Riemann surface. On a higher-genus compact Riemann surface, the Lie algebra of meromorphic vector fields with two poles also has a central extension, which is a generalization of the Virasoro algebra. This can be further generalized to supermanifolds. Vertex algebras and conformal algebras The Virasoro algebra also has vertex algebraic and conformal algebraic counterparts, which basically come from arranging all the basis elements into generating series and working with single objects. History The Witt algebra (the Virasoro algebra without the central extension) was discovered by É. Cartan (1909). Its analogues over finite fields were studied by E. Witt in about the 1930s. The central extension of the Witt algebra that gives the Virasoro algebra was first found (in characteristic p > 0) by R. E. Block (1966, page 381) and independently rediscovered (in characteristic 0) by I. M. Gelfand and Dmitry Fuchs (1969). The physicist Miguel Ángel Virasoro (1970) wrote down some operators generating the Virasoro algebra (later known as the Virasoro operators) while studying dual resonance models, though he did not find the central extension. The central extension giving the Virasoro algebra was rediscovered in physics shortly after by J. H. Weis, according to Brower and Thorn (1971, footnote on page 167). See also Conformal field theory Goddard–Thorn theorem Heisenberg algebra Lie conformal algebra Pohlmeyer charge Super Virasoro algebra W-algebra Witt algebra WZW model References Further reading V. G. Kac, A. K. Raina, Bombay lectures on highest weight representations, World Sci. (1987) . & correction: ibid. 13 (1987) 260. V. K. Dobrev, "Characters of the irreducible highest weight modules over the Virasoro and super-Virasoro algebras", Suppl. Rendiconti del Circolo Matematico di Palermo, Serie II, Numero 14 (1987) 25-42. Conformal field theory Lie algebras Mathematical physics
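The commutation relations given in the structure section can be verified to define a Lie algebra by checking the Jacobi identity directly on basis elements; a minimal Python sketch (the generators are abstract symbols, with the central element denoted 'c', and the sampled triples are illustrative):

from fractions import Fraction

def add(d, k, v):
    d[k] = d.get(k, 0) + v

def bracket(x, y):
    """Virasoro bracket, extended bilinearly to {basis: coeff} dicts.
    Basis symbols: ('L', n) for L_n, and 'c' for the central charge."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if a == 'c' or b == 'c':
                continue                    # c commutes with everything
            m, n = a[1], b[1]
            add(out, ('L', m + n), ca * cb * (m - n))
            if m + n == 0:                  # central term (c/12)(m^3 - m)
                add(out, 'c', ca * cb * Fraction(m ** 3 - m, 12))
    return {k: v for k, v in out.items() if v != 0}

def L(n):
    return {('L', n): Fraction(1)}

# Jacobi identity [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 on sample triples
for m, n, k in [(2, -1, -1), (3, -2, -1), (5, -5, 0), (4, -3, -1)]:
    x, y, z = L(m), L(n), L(k)
    total = {}
    for term in (bracket(x, bracket(y, z)),
                 bracket(y, bracket(z, x)),
                 bracket(z, bracket(x, y))):
        for key, v in term.items():
            add(total, key, v)
    assert all(v == 0 for v in total.values()), (m, n, k)
print("Jacobi identity holds on all sampled triples")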
Virasoro algebra
[ "Physics", "Mathematics" ]
2,142
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
348,044
https://en.wikipedia.org/wiki/Meta-system
A metasystem or meta-system is a "system about other systems", such as describing, generalizing, modelling, or analyzing the other system(s). It links the concepts of a system and meta. Control theory
Meta-system
[ "Mathematics" ]
50
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
348,302
https://en.wikipedia.org/wiki/Van%20der%20Pauw%20method
The van der Pauw method is a technique commonly used to measure the resistivity and the Hall coefficient of a sample. Its strength lies in its ability to accurately measure the properties of a sample of any arbitrary shape, as long as the sample is approximately two-dimensional (i.e. it is much thinner than it is wide), solid (no holes), and the electrodes are placed on its perimeter. The van der Pauw method employs a four-point probe placed around the perimeter of the sample, in contrast to the linear four-point probe: this allows the van der Pauw method to provide an average resistivity of the sample, whereas a linear array provides the resistivity in the sensing direction. This difference becomes important for anisotropic materials, which can be properly measured using the Montgomery method, an extension of the van der Pauw method (see, for instance, the references).

From the measurements made, the following properties of the material can be calculated:

The resistivity of the material
The doping type (i.e. whether it is a P-type or N-type material)
The sheet carrier density of the majority carrier (the number of majority carriers per unit area). From this the charge density and doping level can be found
The mobility of the majority carrier

The method was first propounded by Leo J. van der Pauw in 1958.

Conditions

There are five conditions that must be satisfied to use this technique:

1. The sample must have a flat shape of uniform thickness
2. The sample must not have any isolated holes
3. The sample must be homogeneous and isotropic
4. All four contacts must be located at the edges of the sample
5. The area of contact of any individual contact should be at least an order of magnitude smaller than the area of the entire sample.

The second condition can be weakened. The van der Pauw technique can also be applied to samples with one hole.

Sample preparation

In order to use the van der Pauw method, the sample thickness must be much less than the width and length of the sample. In order to reduce errors in the calculations, it is preferable that the sample be symmetrical. There must also be no isolated holes within the sample.

The measurements require that four ohmic contacts be placed on the sample. Certain conditions for their placement need to be met:

They must be as small as possible; any errors given by their non-zero size will be of the order D/L, where D is the average diameter of the contact and L is the distance between the contacts.
They must be as close as possible to the boundary of the sample.

In addition to this, any leads from the contacts should be constructed from the same batch of wire to minimise thermoelectric effects. For the same reason, all four contacts should be of the same material.

Measurement definitions

The contacts are numbered from 1 to 4 in a counter-clockwise order, beginning at the top-left contact.
The current I12 is a positive DC current injected into contact 1 and taken out of contact 2, and is measured in amperes (A).
The voltage V34 is a DC voltage measured between contacts 3 and 4 (i.e. V4 − V3) with no externally applied magnetic field, measured in volts (V).
The resistivity ρ is measured in ohm⋅metres (Ω⋅m).
The thickness of the sample t is measured in metres (m).
The sheet resistance RS is measured in ohms per square (Ω/sq or Ω/□).

Resistivity measurements

The average resistivity of a sample is given by ρ = RS⋅t, where the sheet resistance RS is determined as follows. For an anisotropic material, the individual resistivity components, e.g.
ρx or ρy, can be calculated using the Montgomery method.

Basic measurements

To make a measurement, a current is caused to flow along one edge of the sample (for instance, I12) and the voltage across the opposite edge (in this case, V34) is measured. From these two values, a resistance (for this example, R12,34) can be found using Ohm's law:

R12,34 = V34 / I12

In his paper, van der Pauw showed that the sheet resistance of samples with arbitrary shapes can be determined from two of these resistances – one measured along a vertical edge, such as R12,34, and a corresponding one measured along a horizontal edge, such as R23,41. The actual sheet resistance RS is related to these resistances by the van der Pauw formula

exp(−π·R12,34/RS) + exp(−π·R23,41/RS) = 1

Reciprocal measurements

The reciprocity theorem tells us that

R12,34 = R34,12 and R23,41 = R41,23

Therefore, it is possible to obtain a more precise value for the resistances R12,34 and R23,41 by making two additional measurements of their reciprocal values R34,12 and R41,23 and averaging the results.

We define

Rvertical = (R12,34 + R34,12) / 2
and
Rhorizontal = (R23,41 + R41,23) / 2

Then, the van der Pauw formula becomes

exp(−π·Rvertical/RS) + exp(−π·Rhorizontal/RS) = 1

Reversed polarity measurements

A further improvement in the accuracy of the resistance values can be obtained by repeating the resistance measurements after switching polarities of both the current source and the voltage meter. Since this is still measuring the same portion of the sample, just in the opposite direction, the values of Rvertical and Rhorizontal can still be calculated as the averages of the standard and reversed polarity measurements. The benefit of doing this is that any offset voltages, such as thermoelectric potentials due to the Seebeck effect, will be cancelled out. Combining these methods with the reciprocal measurements from above leads to the formulas for the resistances being

Rvertical = (R12,34 + R34,12 + R21,43 + R43,21) / 4
and
Rhorizontal = (R23,41 + R41,23 + R32,14 + R14,32) / 4

The van der Pauw formula takes the same form as in the previous section.

Measurement accuracy

Both of the above procedures check the repeatability of the measurements. If any of the reversed polarity measurements don't agree to a sufficient degree of accuracy (usually within 3%) with the corresponding standard polarity measurement, then there is probably a source of error somewhere in the setup, which should be investigated before continuing. The same principle applies to the reciprocal measurements – they should agree to a sufficient degree before they are used in any calculations.

Calculating sheet resistance

In general, the van der Pauw formula cannot be rearranged to give the sheet resistance RS in terms of known functions. The most notable exception to this is when Rvertical = R = Rhorizontal; in this scenario the sheet resistance is given by

RS = π·R / ln 2

The quotient π/ln 2 is known as the van der Pauw constant and has approximate value 4.53236. In most other scenarios, an iterative method is used to solve the van der Pauw formula numerically for RS. Typically the formula is considered to fail the preconditions for the Banach fixed point theorem, so methods based on it do not work. Instead, nested intervals converge slowly but steadily. Recently, however, it has been shown that an appropriate reformulation of the van der Pauw problem (e.g., by introducing a second van der Pauw formula) makes it fully solvable by the Banach fixed point method. Alternatively, a Newton–Raphson method converges relatively quickly: writing f(RS) = exp(−π·Rvertical/RS) + exp(−π·Rhorizontal/RS) − 1, the next approximation is calculated from the current one as RS′ = RS − f(RS)/f′(RS).
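A minimal numerical sketch of such a Newton–Raphson solver is given below (Python; the function name, tolerance and starting guess are illustrative choices, not part of van der Pauw's paper):

```python
import math

def sheet_resistance(r_vertical, r_horizontal, tol=1e-12, max_iter=100):
    """Solve exp(-pi*Rv/Rs) + exp(-pi*Rh/Rs) = 1 for Rs by Newton-Raphson."""
    # Starting guess: the exact solution for the symmetric case Rv == Rh.
    rs = math.pi * (r_vertical + r_horizontal) / (2.0 * math.log(2.0))
    for _ in range(max_iter):
        ev = math.exp(-math.pi * r_vertical / rs)
        eh = math.exp(-math.pi * r_horizontal / rs)
        f = ev + eh - 1.0
        # d/dRs of exp(-pi*R/Rs) is (pi*R/Rs**2) * exp(-pi*R/Rs)
        fprime = (math.pi / rs**2) * (r_vertical * ev + r_horizontal * eh)
        step = f / fprime
        rs -= step
        if abs(step) < tol * rs:
            return rs
    raise RuntimeError("Newton iteration did not converge")

# Example: a symmetric sample, where Rs should equal pi*R/ln(2)
print(sheet_resistance(10.0, 10.0))   # ~45.324 ohms per square
```

Starting from the symmetric-case solution is convenient because f is monotonic in RS, so the iteration settles quickly for physically sensible inputs.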
Hall measurements

Background

When a charged particle—such as an electron—is placed in a magnetic field, it experiences a Lorentz force proportional to the strength of the field and the velocity at which it is traveling through it. This force is strongest when the direction of motion is perpendicular to the direction of the magnetic field; in this case the force is

F = q·v·B

where q is the charge on the particle in coulombs, v the velocity it is traveling at (centimeters per second), and B the strength of the magnetic field (Wb/cm2). Note that centimeters are often used to measure length in the semiconductor industry, which is why they are used here instead of the SI units of meters.

When a current is applied to a piece of semiconducting material, this results in a steady flow of electrons through the material (as shown in parts (a) and (b) of the accompanying figure). The velocity the electrons are traveling at is (see electric current):

v = I / (n·A·q)

where n is the electron density, A is the cross-sectional area of the material and q the elementary charge (1.602×10−19 coulombs).

If an external magnetic field is then applied perpendicular to the direction of current flow, then the resulting Lorentz force will cause the electrons to accumulate at one edge of the sample (see part (c) of the figure). Combining the above two equations, and noting that q is the magnitude of the charge on an electron, results in a formula for the Lorentz force experienced by the electrons:

F = I·B / (n·A)

This accumulation will create an electric field across the material due to the uneven distribution of charge, as shown in part (d) of the figure. This in turn leads to a potential difference across the material, known as the Hall voltage VH. The current, however, continues to only flow along the material, which indicates that the force on the electrons due to the electric field balances the Lorentz force. Since the force on an electron from an electric field ε is F = q·ε, we can say that the strength of the electric field is therefore

ε = I·B / (q·n·A)

Finally, the magnitude of the Hall voltage is simply the strength of the electric field multiplied by the width w of the material; that is,

VH = ε·w = I·B / (q·n·t)

where t is the thickness of the material (the cross-sectional area being A = w·t). Since the sheet density ns is defined as the density of electrons multiplied by the thickness of the material (ns = n·t), we can define the Hall voltage in terms of the sheet density:

VH = I·B / (q·ns)
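The relation just derived, together with the mobility formula obtained under "Other calculations" below, is easy to wrap in a small helper. Here is a sketch in SI units (the function names and example numbers are invented for illustration; the article's cm-based units would need the conversion factors it describes):

```python
# Sketch: derive sheet density and majority-carrier mobility from Hall data.
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def sheet_density(current_a, field_t, hall_voltage_v):
    """ns = I*B / (q*|VH|), in carriers per square metre."""
    return current_a * field_t / (ELEMENTARY_CHARGE * abs(hall_voltage_v))

def hall_mobility(sheet_resistance_ohm_sq, ns_per_m2):
    """mu = 1 / (q * ns * Rs), in m^2/(V*s)."""
    return 1.0 / (ELEMENTARY_CHARGE * ns_per_m2 * sheet_resistance_ohm_sq)

ns = sheet_density(current_a=1e-3, field_t=0.5, hall_voltage_v=2.1e-3)
mu = hall_mobility(sheet_resistance_ohm_sq=45.3, ns_per_m2=ns)
print(f"ns = {ns:.3e} m^-2, mobility = {mu * 1e4:.0f} cm^2/(V*s)")
```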
Making the measurements

Two sets of measurements need to be made: one with a magnetic field in the positive z-direction as shown above, and one with it in the negative z-direction. From here on in, the voltages recorded with a positive field will have a subscript P (for example, V13, P = V3, P − V1, P) and those recorded with a negative field will have a subscript N (such as V13, N = V3, N − V1, N). For all of the measurements, the magnitude of the injected current should be kept the same; the magnitude of the magnetic field needs to be the same in both directions also.

First of all with a positive magnetic field, the current I24 is applied to the sample and the voltage V13, P is recorded; note that the voltages can be positive or negative. This is then repeated for I13 and V42, P.

As before, we can take advantage of the reciprocity theorem to provide a check on the accuracy of these measurements. If we reverse the direction of the currents (i.e. apply the current I42 and measure V31, P, and repeat for I31 and V24, P), then V13, P should be the same as V31, P to within a suitably small degree of error. Similarly, V42, P and V24, P should agree.

Having completed the measurements, a negative magnetic field is applied in place of the positive one, and the above procedure is repeated to obtain the voltage measurements V13, N, V42, N, V31, N and V24, N.

Calculations

Initially, the difference of the voltages for positive and negative magnetic fields is calculated:

V13 = V13, P − V13, N
V24 = V24, P − V24, N
V31 = V31, P − V31, N
V42 = V42, P − V42, N

The overall Hall voltage is then

VH = (V13 + V24 + V31 + V42) / 8

The polarity of this Hall voltage indicates the type of material the sample is made of; if it is positive, the material is P-type, and if it is negative, the material is N-type.

The formula given in the background can then be rearranged to show that the sheet density is

ns = I·B / (q·|VH|)

Note that the strength of the magnetic field B needs to be in units of Wb/cm2 if ns is in cm−2. For instance, if the strength is given in the commonly used units of teslas, it can be converted by multiplying it by 10−4.

Other calculations

Mobility

The resistivity of a semiconductor material can be shown to be

ρ = 1 / (q·(n·μn + p·μp))

where n and p are the concentration of electrons and holes in the material respectively, and μn and μp are the mobility of the electrons and holes respectively. Generally, the material is sufficiently doped so that there is a difference of many orders-of-magnitude between the two concentrations, allowing this equation to be simplified to

ρ = 1 / (q·nm·μm)

where nm and μm are the doping level and mobility of the majority carrier respectively.

If we then note that the sheet resistance RS is the resistivity divided by the thickness of the sample, and that the sheet density nS is the doping level multiplied by the thickness, we can divide the equation through by the thickness to get

RS = 1 / (q·nS·μm)

This can then be rearranged to give the majority carrier mobility in terms of the previously calculated sheet resistance and sheet density:

μm = 1 / (q·nS·RS)

Footnotes

References

Measuring Electrical Conductivity and Resistivity with the van der Pauw Technique

Electrical engineering
Hall effect
Van der Pauw method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,634
[ "Physical phenomena", "Hall effect", "Electric and magnetic fields in matter", "Electrical phenomena", "Electrical engineering", "Solid state engineering" ]
348,898
https://en.wikipedia.org/wiki/Fatigue%20%28material%29
In materials science, fatigue is the initiation and propagation of cracks in a material due to cyclic loading. Once a fatigue crack has initiated, it grows a small amount with each loading cycle, typically producing striations on some parts of the fracture surface. The crack will continue to grow until it reaches a critical size, which occurs when the stress intensity factor of the crack exceeds the fracture toughness of the material, producing rapid propagation and typically complete fracture of the structure. Fatigue has traditionally been associated with the failure of metal components which led to the term metal fatigue. In the nineteenth century, the sudden failing of metal railway axles was thought to be caused by the metal crystallising because of the brittle appearance of the fracture surface, but this has since been disproved. Most materials, such as composites, plastics and ceramics, seem to experience some sort of fatigue-related failure. To aid in predicting the fatigue life of a component, fatigue tests are carried out using coupons to measure the rate of crack growth by applying constant amplitude cyclic loading and averaging the measured growth of a crack over thousands of cycles. However, there are also a number of special cases that need to be considered where the rate of crack growth is significantly different compared to that obtained from constant amplitude testing, such as the reduced rate of growth that occurs for small loads near the threshold or after the application of an overload, and the increased rate of crack growth associated with short cracks or after the application of an underload. If the loads are above a certain threshold, microscopic cracks will begin to initiate at stress concentrations such as holes, persistent slip bands (PSBs), composite interfaces or grain boundaries in metals. The stress values that cause fatigue damage are typically much less than the yield strength of the material. Stages of fatigue Historically, fatigue has been separated into regions of high cycle fatigue that require more than 104 cycles to failure where stress is low and primarily elastic and low cycle fatigue where there is significant plasticity. Experiments have shown that low cycle fatigue is also crack growth. Fatigue failures, both for high and low cycles, all follow the same basic steps: crack initiation, crack growth stages I and II, and finally ultimate failure. To begin the process, cracks must nucleate within a material. This process can occur either at stress risers in metallic samples or at areas with a high void density in polymer samples. These cracks propagate slowly at first during stage I crack growth along crystallographic planes, where shear stresses are highest. Once the cracks reach a critical size they propagate quickly during stage II crack growth in a direction perpendicular to the applied force. These cracks can eventually lead to the ultimate failure of the material, often in a brittle catastrophic fashion. Crack initiation The formation of initial cracks preceding fatigue failure is a separate process consisting of four discrete steps in metallic samples. The material will develop cell structures and harden in response to the applied load. This causes the amplitude of the applied stress to increase given the new restraints on strain. These newly formed cell structures will eventually break down with the formation of persistent slip bands (PSBs). 
Slip in the material is localized at these PSBs, and the exaggerated slip can now serve as a stress concentrator for a crack to form. Nucleation and growth of a crack to a detectable size accounts for most of the cracking process. It is for this reason that cyclic fatigue failures seem to occur so suddenly where the bulk of the changes in the material are not visible without destructive testing. Even in normally ductile materials, fatigue failures will resemble sudden brittle failures. PSB-induced slip planes result in intrusions and extrusions along the surface of a material, often occurring in pairs. This slip is not a microstructural change within the material, but rather a propagation of dislocations within the material. Instead of a smooth interface, the intrusions and extrusions will cause the surface of the material to resemble the edge of a deck of cards, where not all cards are perfectly aligned. Slip-induced intrusions and extrusions create extremely fine surface structures on the material. With surface structure size inversely related to stress concentration factors, PSB-induced surface slip can cause fractures to initiate. These steps can also be bypassed entirely if the cracks form at a pre-existing stress concentrator such as from an inclusion in the material or from a geometric stress concentrator caused by a sharp internal corner or fillet. Crack growth Most of the fatigue life is generally consumed in the crack growth phase. The rate of growth is primarily driven by the range of cyclic loading although additional factors such as mean stress, environment, overloads and underloads can also affect the rate of growth. Crack growth may stop if the loads are small enough to fall below a critical threshold. Fatigue cracks can grow from material or manufacturing defects from as small as 10 μm. When the rate of growth becomes large enough, fatigue striations can be seen on the fracture surface. Striations mark the position of the crack tip and the width of each striation represents the growth from one loading cycle. Striations are a result of plasticity at the crack tip. When the stress intensity exceeds a critical value known as the fracture toughness, unsustainable fast fracture will occur, usually by a process of microvoid coalescence. Prior to final fracture, the fracture surface may contain a mixture of areas of fatigue and fast fracture. Acceleration and retardation The following effects change the rate of growth: Mean stress effect: Higher mean stress increases the rate of crack growth. Environment: Increased moisture increases the rate of crack growth. In the case of aluminium, cracks generally grow from the surface, where water vapour from the atmosphere is able to reach the tip of the crack and dissociate into atomic hydrogen which causes hydrogen embrittlement. Cracks growing internally are isolated from the atmosphere and grow in a vacuum where the rate of growth is typically an order of magnitude slower than a surface crack. Short crack effect: In 1975, Pearson observed that short cracks grow faster than expected. Possible reasons for the short crack effect include the presence of the T-stress, the tri-axial stress state at the crack tip, the lack of crack closure associated with short cracks and the large plastic zone in comparison to the crack length. In addition, long cracks typically experience a threshold which short cracks do not have. 
There are a number of criteria for short cracks: cracks are typically smaller than 1 mm, cracks are smaller than the material microstructure size such as the grain size, or crack length is small compared to the plastic zone. Underloads: Small numbers of underloads increase the rate of growth and may counteract the effect of overloads. Overloads: Initially overloads (> 1.5 the maximum load in a sequence) lead to a small increase in the rate of growth followed by a long reduction in the rate of growth. Characteristics of fatigue In metal alloys, and for the simplifying case when there are no macroscopic or microscopic discontinuities, the process starts with dislocation movements at the microscopic level, which eventually form persistent slip bands that become the nucleus of short cracks. Macroscopic and microscopic discontinuities (at the crystalline grain scale) as well as component design features which cause stress concentrations (holes, keyways, sharp changes of load direction etc.) are common locations at which the fatigue process begins. Fatigue is a process that has a degree of randomness (stochastic), often showing considerable scatter even in seemingly identical samples in well controlled environments. Fatigue is usually associated with tensile stresses but fatigue cracks have been reported due to compressive loads. The greater the applied stress range, the shorter the life. Fatigue life scatter tends to increase for longer fatigue lives. Damage is irreversible. Materials do not recover when rested. Fatigue life is influenced by a variety of factors, such as temperature, surface finish, metallurgical microstructure, presence of oxidizing or inert chemicals, residual stresses, scuffing contact (fretting), etc. Some materials (e.g., some steel and titanium alloys) exhibit a theoretical fatigue limit below which continued loading does not lead to fatigue failure. High cycle fatigue strength (about 104 to 108 cycles) can be described by stress-based parameters. A load-controlled servo-hydraulic test rig is commonly used in these tests, with frequencies of around 20–50 Hz. Other sorts of machines—like resonant magnetic machines—can also be used, to achieve frequencies up to 250 Hz. Low-cycle fatigue (loading that typically causes failure in less than 104 cycles) is associated with localized plastic behavior in metals; thus, a strain-based parameter should be used for fatigue life prediction in metals. Testing is conducted with constant strain amplitudes typically at 0.01–5 Hz. Timeline of research history 1837: Wilhelm Albert publishes the first article on fatigue. He devised a test machine for conveyor chains used in the Clausthal mines. 1839: Jean-Victor Poncelet describes metals as being 'tired' in his lectures at the military school at Metz. 1842: William John Macquorn Rankine recognises the importance of stress concentrations in his investigation of railroad axle failures. The Versailles train wreck was caused by fatigue failure of a locomotive axle. 1843: Joseph Glynn reports on the fatigue of an axle on a locomotive tender. He identifies the keyway as the crack origin. 1848: The Railway Inspectorate reports one of the first tyre failures, probably from a rivet hole in tread of railway carriage wheel. It was likely a fatigue failure. 
1849: Eaton Hodgkinson is granted a "small sum of money" to report to the UK Parliament on his work in "ascertaining by direct experiment, the effects of continued changes of load upon iron structures and to what extent they could be loaded without danger to their ultimate security". 1854: F. Braithwaite reports on common service fatigue failures and coins the term fatigue. 1860: Systematic fatigue testing undertaken by Sir William Fairbairn and August Wöhler. 1870: A. Wöhler summarises his work on railroad axles. He concludes that cyclic stress range is more important than peak stress and introduces the concept of endurance limit. 1903: Sir James Alfred Ewing demonstrates the origin of fatigue failure in microscopic cracks. 1910: O. H. Basquin proposes a log-log relationship for S-N curves, using Wöhler's test data. 1940: Sidney M. Cadwell publishes first rigorous study of fatigue in rubber. 1945: A. M. Miner popularises Palmgren's (1924) linear damage hypothesis as a practical design tool. 1952: W. Weibull An S-N curve model. 1954: The world's first commercial jetliner, the de Havilland Comet, suffers disaster as three planes break up in mid-air, causing de Havilland and all other manufacturers to redesign high altitude aircraft and in particular replace square apertures like windows with oval ones. 1954: L. F. Coffin and S. S. Manson explain fatigue crack-growth in terms of plastic strain in the tip of cracks. 1961: P. C. Paris proposes methods for predicting the rate of growth of individual fatigue cracks in the face of initial scepticism and popular defence of Miner's phenomenological approach. 1968: Tatsuo Endo and M. Matsuishi devise the rainflow-counting algorithm and enable the reliable application of Miner's rule to random loadings. 1970: Smith, Watson, and Topper developed a mean stress correction model, where the fatigue damage in a cycle is determined by the product of the maximum stress and strain amplitude. 1970: W. Elber elucidates the mechanisms and importance of crack closure in slowing the growth of a fatigue crack due to the wedging effect of plastic deformation left behind the tip of the crack. 1973: M. W. Brown and K. J. Miller observe that fatigue life under multiaxial conditions is governed by the experience of the plane receiving the most damage, and that both tension and shear loads on the critical plane must be considered. Predicting fatigue life The American Society for Testing and Materials defines fatigue life, Nf, as the number of stress cycles of a specified character that a specimen sustains before failure of a specified nature occurs. For some materials, notably steel and titanium, there is a theoretical value for stress amplitude below which the material will not fail for any number of cycles, called a fatigue limit or endurance limit. However, in practice, several bodies of work done at greater numbers of cycles suggest that fatigue limits do not exist for any metals. Engineers have used a number of methods to determine the fatigue life of a material: the stress-life method, the strain-life method, the crack growth method and probabilistic methods, which can be based on either life or crack growth methods. Whether using stress/strain-life approach or using crack growth approach, complex or variable amplitude loading is reduced to a series of fatigue equivalent simple cyclic loadings using a technique such as the rainflow-counting algorithm. 
Stress-life and strain-life methods

A mechanical part is often exposed to a complex, often random, sequence of loads, large and small. In order to assess the safe life of such a part using the fatigue damage or stress/strain-life methods the following series of steps is usually performed:

Complex loading is reduced to a series of simple cyclic loadings using a technique such as rainflow analysis;
A histogram of cyclic stress is created from the rainflow analysis to form a fatigue damage spectrum;
For each stress level, the degree of cumulative damage is calculated from the S-N curve; and
The effect of the individual contributions are combined using an algorithm such as Miner's rule.

Since S-N curves are typically generated for uniaxial loading, some equivalence rule is needed whenever the loading is multiaxial. For simple, proportional loading histories (lateral load in a constant ratio with the axial), Sines rule may be applied. For more complex situations, such as non-proportional loading, critical plane analysis must be applied.

Miner's rule

In 1945, Milton A. Miner popularised a rule that had first been proposed by Arvid Palmgren in 1924. The rule, variously called Miner's rule or the Palmgren–Miner linear damage hypothesis, states that where there are k different stress magnitudes in a spectrum, Si (1 ≤ i ≤ k), each contributing ni(Si) cycles, then if Ni(Si) is the number of cycles to failure of a constant stress reversal Si (determined by uni-axial fatigue tests), failure occurs when:

n1/N1 + n2/N2 + ⋯ + nk/Nk = C

Usually, for design purposes, C is assumed to be 1. This can be thought of as assessing what proportion of life is consumed by a linear combination of stress reversals at varying magnitudes.

Although Miner's rule may be a useful approximation in many circumstances, it has several major limitations:

It fails to recognize the probabilistic nature of fatigue and there is no simple way to relate life predicted by the rule with the characteristics of a probability distribution. Industry analysts often use design curves, adjusted to account for scatter, to calculate Ni(Si).
The sequence in which high vs. low stress cycles are applied to a sample in fact affects the fatigue life, for which Miner's rule does not account. In some circumstances, cycles of low stress followed by high stress cause more damage than would be predicted by the rule. It does not consider the effect of an overload or high stress which may result in a compressive residual stress that may retard crack growth. High stress followed by low stress may have less damage due to the presence of compressive residual stress (or localized plastic damage around the crack tip).

Stress-life (S-N) method

Materials fatigue performance is commonly characterized by an S-N curve, also known as a Wöhler curve. This is often plotted with the cyclic stress (S) against the cycles to failure (N) on a logarithmic scale. S-N curves are derived from tests on samples of the material to be characterized (often called coupons or specimens) where a regular sinusoidal stress is applied by a testing machine which also counts the number of cycles to failure. This process is sometimes known as coupon testing. For greater accuracy but lower generality component testing is used. Each coupon or component test generates a point on the plot though in some cases there is a runout where the time to failure exceeds that available for the test (see censoring). Analysis of fatigue data requires techniques from statistics, especially survival analysis and linear regression.
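To make the damage bookkeeping concrete, here is a minimal sketch that combines a Basquin-type S-N curve with Miner's rule (the curve coefficients and the load spectrum below are illustrative placeholders, not material data):

```python
# Sketch: cumulative fatigue damage via Miner's rule, with cycles-to-failure
# taken from a Basquin-type S-N curve S = a * N**b, inverted for N.
def cycles_to_failure(stress_mpa, a=900.0, b=-0.1):
    """Invert Basquin's relation S = a * N**b for N (a, b are example values)."""
    return (stress_mpa / a) ** (1.0 / b)

def miner_damage(load_spectrum):
    """load_spectrum: iterable of (stress_amplitude_MPa, applied_cycles)."""
    return sum(n / cycles_to_failure(s) for s, n in load_spectrum)

# A spectrum as it might come out of rainflow counting: (stress, cycle count)
spectrum = [(400.0, 2_000), (250.0, 50_000), (150.0, 1_000_000)]
print(f"damage = {miner_damage(spectrum):.2f}")  # failure predicted near C ~ 1
```

In practice the spectrum would come from rainflow counting of a measured load history, and Ni(Si) from fitted design curves adjusted for scatter rather than a two-parameter power law.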
The progression of the S-N curve can be influenced by many factors such as stress ratio (mean stress), loading frequency, temperature, corrosion, residual stresses, and the presence of notches. A constant fatigue life (CFL) diagram is useful for the study of the stress ratio effect. The Goodman line is a method used to estimate the influence of the mean stress on the fatigue strength. In the presence of a steady stress superimposed on the cyclic loading, the Goodman relation can be used to estimate a failure condition. It plots stress amplitude against mean stress with the fatigue limit and the ultimate tensile strength of the material as the two extremes. Alternative failure criteria include Soderberg and Gerber.

As coupons sampled from a homogeneous frame will display a variation in their number of cycles to failure, the S-N curve should more properly be a Stress-Cycle-Probability (S-N-P) curve to capture the probability of failure after a given number of cycles of a certain stress.

With body-centered cubic materials (bcc), the Wöhler curve often becomes a horizontal line with decreasing stress amplitude, i.e. there is a fatigue limit that can be assigned to these materials. With face-centered cubic metals (fcc), the Wöhler curve generally drops continuously, so that only a fatigue strength (at a given number of cycles) can be assigned to these materials.

Strain-life (ε-N) method

When strains are no longer elastic, such as in the presence of stress concentrations, the total strain can be used instead of stress as a similitude parameter. This is known as the strain-life method. The total strain amplitude is the sum of the elastic strain amplitude and the plastic strain amplitude and is given by

Δε/2 = Δεe/2 + Δεp/2.

Basquin's equation for the elastic strain amplitude is

Δεe/2 = (σf′/E)(2Nf)^b

where E is Young's modulus. The relation for high cycle fatigue can thus be expressed using the elastic strain amplitude, where σf′ is a parameter that scales with tensile strength obtained by fitting experimental data, Nf is the number of cycles to failure and b is the slope of the log-log curve, again determined by curve fitting.

In 1954, Coffin and Manson proposed that the fatigue life of a component was related to the plastic strain amplitude using

Δεp/2 = εf′(2Nf)^c.

Combining the elastic and plastic portions gives the total strain amplitude accounting for both low and high cycle fatigue

Δε/2 = (σf′/E)(2Nf)^b + εf′(2Nf)^c

where σf′ is the fatigue strength coefficient, b is the fatigue strength exponent, εf′ is the fatigue ductility coefficient, c is the fatigue ductility exponent, and Nf is the number of cycles to failure (2Nf being the number of reversals to failure).
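A short numerical sketch of the combined strain-life relation follows (the coefficients are illustrative placeholders, not fitted material constants):

```python
# Sketch: total strain amplitude vs. reversals from the combined
# Basquin / Coffin-Manson relation. Parameter values are illustrative only.
E = 210e3          # Young's modulus, MPa
sigma_f = 1000.0   # fatigue strength coefficient, MPa
b = -0.09          # fatigue strength exponent
eps_f = 0.6        # fatigue ductility coefficient
c = -0.6           # fatigue ductility exponent

def strain_amplitude(reversals):
    """Delta-eps/2 = (sigma_f'/E)(2N)^b + eps_f'(2N)^c, with reversals = 2N."""
    elastic = (sigma_f / E) * reversals ** b
    plastic = eps_f * reversals ** c
    return elastic + plastic

for two_n in (1e3, 1e5, 1e7):
    print(f"2N = {two_n:.0e}: strain amplitude = {strain_amplitude(two_n):.5f}")
```

At short lives the plastic (Coffin–Manson) term dominates; at long lives the elastic (Basquin) term does, which is the low-cycle/high-cycle split described above.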
Crack growth methods

An estimate of the fatigue life of a component can be made using a crack growth equation by summing up the width of each increment of crack growth for each loading cycle (a numerical sketch of this bookkeeping is given after the design-approaches list below). Safety or scatter factors are applied to the calculated life to account for any uncertainty and variability associated with fatigue. The rate of growth used in crack growth predictions is typically measured by applying thousands of constant amplitude cycles to a coupon and measuring the rate of growth from the change in compliance of the coupon or by measuring the growth of the crack on the surface of the coupon. Standard methods for measuring the rate of growth have been developed by ASTM International.

Crack growth equations such as the Paris–Erdoğan equation are used to predict the life of a component. They can be used to predict the growth of a crack from 10 μm to failure. For normal manufacturing finishes this may cover most of the fatigue life of a component where growth can start from the first cycle. The conditions at the crack tip of a component are usually related to the conditions of a test coupon using a characterising parameter such as the stress intensity, J-integral or crack tip opening displacement. All these techniques aim to match the crack tip conditions on the component to that of test coupons which give the rate of crack growth.

Additional models may be necessary to include retardation and acceleration effects associated with overloads or underloads in the loading sequence. In addition, small crack growth data may be needed to match the increased rate of growth seen with small cracks.

Typically, a cycle counting technique such as rainflow-cycle counting is used to extract the cycles from a complex sequence. This technique, along with others, has been shown to work with crack growth methods.

Crack growth methods have the advantage that they can predict the intermediate size of cracks. This information can be used to schedule inspections on a structure to ensure safety whereas strain/life methods only give a life until failure.

Dealing with fatigue

Design

Dependable design against fatigue-failure requires thorough education and supervised experience in structural engineering, mechanical engineering, or materials science. There are at least five principal approaches to life assurance for mechanical parts that display increasing degrees of sophistication:

Design to keep stress below threshold of fatigue limit (infinite lifetime concept);
Fail-safe, graceful degradation, and fault-tolerant design: Instruct the user to replace parts when they fail. Design in such a way that there is no single point of failure, and so that when any one part completely fails, it does not lead to catastrophic failure of the entire system.
Safe-life design: Design (conservatively) for a fixed life after which the user is instructed to replace the part with a new one (a so-called lifed part, finite lifetime concept, or "safe-life" design practice); planned obsolescence and disposable product are variants that design for a fixed life after which the user is instructed to replace the entire device;
Damage tolerance: An approach that ensures aircraft safety by assuming the presence of cracks or defects even in new aircraft. Crack growth calculations, periodic inspections and component repair or replacement can be used to ensure critical components that may contain cracks remain safe. Inspections usually use nondestructive testing to limit or monitor the size of possible cracks and require an accurate prediction of the rate of crack-growth between inspections. The designer sets some aircraft maintenance checks schedule frequent enough that parts are replaced while the crack is still in the "slow growth" phase. This is often referred to as damage tolerant design or "retirement-for-cause".
Risk management: Ensures the probability of failure remains below an acceptable level. This approach is typically used for aircraft where acceptable levels may be based on probability of failure during a single flight or taken over the lifetime of an aircraft. A component is assumed to have a crack with a probability distribution of crack sizes. This approach can consider variability in values such as crack growth rates, usage and critical crack size. It is also useful for considering damage at multiple locations that may interact to produce multi-site or widespread fatigue damage.
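Here is the promised sketch of a damage-tolerance style life estimate, integrating the Paris–Erdoğan equation between an initial and a critical crack size (all constants below are illustrative, not handbook values):

```python
# Sketch: life estimate by integrating the Paris-Erdogan equation
# da/dN = C * (dK)**m with dK = Y * d_sigma * sqrt(pi * a).
import math

def cycles_between(a_init_m, a_crit_m, dsigma_mpa, C=1e-11, m=3.0, Y=1.0,
                   steps=100_000):
    """Numerically integrate dN = da / (C * dK**m) from a_init to a_crit."""
    da = (a_crit_m - a_init_m) / steps
    cycles = 0.0
    a = a_init_m
    for _ in range(steps):
        dk = Y * dsigma_mpa * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        cycles += da / (C * dk ** m)
        a += da
    return cycles

# Grow a 0.1 mm crack to 10 mm under a 100 MPa stress range:
print(f"{cycles_between(1e-4, 1e-2, 100.0):,.0f} cycles")
```

A real damage-tolerance analysis would add geometry factors Y that vary with crack length, retardation models for overloads, small-crack corrections, and scatter factors before setting inspection intervals.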
Probability distributions that are common in data analysis and in design against fatigue include the log-normal distribution, extreme value distribution, Birnbaum–Saunders distribution, and Weibull distribution.

Testing

Fatigue testing can be used for components such as a coupon or a full-scale test article to determine:

the rate of crack growth and fatigue life
the location of critical regions
the degree of fail-safety when part of the structure fails
the origin and cause of the crack initiating defect from fractographic examination of the crack.

These tests may form part of the certification process such as for airworthiness certification.

Repair

Stop drill. Fatigue cracks that have begun to propagate can sometimes be stopped by drilling holes, called drill stops, at the tip of the crack. The possibility remains of a new crack starting in the side of the hole.
Blend. Small cracks can be blended away and the surface cold worked or shot peened.
Oversize holes. Holes with cracks growing from them can be drilled out to a larger hole to remove cracking and bushed to restore the original hole. Bushes can be cold-shrink interference fit bushes to induce beneficial compressive residual stresses. The oversized hole can also be cold worked by drawing an oversized mandrel through the hole.
Patch. Cracks may be repaired by installing a patch or repair fitting. Composite patches have been used to restore the strength of aircraft wings after cracks have been detected or to lower the stress prior to cracking in order to improve the fatigue life. Patches may restrict the ability to monitor fatigue cracks and may need to be removed and replaced for inspections.

Life improvement

Change material. Changes in the materials used in parts can also improve fatigue life. For example, parts can be made from better fatigue rated metals. Complete replacement and redesign of parts can also reduce if not eliminate fatigue problems. Thus helicopter rotor blades and propellers in metal are being replaced by composite equivalents. They are not only lighter, but also much more resistant to fatigue. They are more expensive, but the extra cost is amply repaid by their greater integrity, since loss of a rotor blade usually leads to total loss of the aircraft. A similar argument has been made for replacement of metal fuselages, wings and tails of aircraft.
Induce residual stresses. Peening a surface can reduce tensile stresses and create compressive residual stress, which prevents crack initiation. Forms of peening include: shot peening, using high-speed projectiles; high-frequency impact treatment (also called high-frequency mechanical impact), using a mechanical hammer; and laser peening, which uses high-energy laser pulses. Low plasticity burnishing can also be used to induce compressive stress in fillets, and cold work mandrels can be used for holes. Increases in fatigue life and strength are proportionally related to the depth of the compressive residual stresses imparted. Shot peening imparts compressive residual stresses approximately 0.005 inches (0.1 mm) deep, while laser peening can go 0.040 to 0.100 inches (1 to 2.5 mm) deep, or deeper.
Deep cryogenic treatment. The use of deep cryogenic treatment has been shown to increase resistance to fatigue failure. Springs used in industry, auto racing and firearms have been shown to last up to six times longer when treated. Heat checking, which is a form of thermal cyclic fatigue, has been greatly delayed.
Re-profiling.
Changing the shape of a stress concentration such as a hole or cutout may be used to extend the life of a component. Shape optimisation using numerical optimisation algorithms has been used to lower the stress concentration in wings and increase their life.

Fatigue of composites

Composite materials can offer excellent resistance to fatigue loading. In general, composites exhibit good fracture toughness and, unlike metals, increase fracture toughness with increasing strength. The critical damage size in composites is also greater than that for metals.

The primary mode of damage in a metal structure is cracking. For metal, cracks propagate in a relatively well-defined manner with respect to the applied stress, and the critical crack size and rate of crack propagation can be related to specimen data through analytical fracture mechanics. However, with composite structures, there is no single damage mode which dominates. Matrix cracking, delamination, debonding, voids, fiber fracture, and composite cracking can all occur separately and in combination, and the predominance of one or more is highly dependent on the laminate orientations and loading conditions. In addition, the unique joints and attachments used for composite structures often introduce modes of failure different from those typified by the laminate itself.

The composite damage propagates in a less regular manner and damage modes can change. Experience with composites indicates that the rate of damage propagation does not exhibit the two distinct regions of initiation and propagation seen in metals. In metals, crack initiation is followed by propagation at a significantly different rate, while this distinction appears to be much less apparent with composites. Fatigue cracks of composites may form in the matrix and propagate slowly, since the matrix carries such a small fraction of the applied stress, and the fibers in the wake of the crack experience fatigue damage. In many cases, the damage rate is accelerated by deleterious interactions with the environment like oxidation or corrosion of fibers.

Notable fatigue failures

Versailles train crash

Following King Louis-Philippe I's celebrations at the Palace of Versailles, a train returning to Paris crashed in May 1842 at Meudon after the leading locomotive broke an axle. The carriages behind piled into the wrecked engines and caught fire. At least 55 passengers were killed trapped in the locked carriages, including the explorer Jules Dumont d'Urville. This accident is known in France as the Meudon railway disaster. The accident was witnessed by the British locomotive engineer Joseph Locke and widely reported in Britain. It was discussed extensively by engineers, who sought an explanation.

The derailment had been the result of a broken locomotive axle. Rankine's investigation of broken axles in Britain highlighted the importance of stress concentration, and the mechanism of crack growth with repeated loading. His and other papers suggesting a crack growth mechanism through repeated stressing, however, were ignored, and fatigue failures occurred at an ever-increasing rate on the expanding railway system. Other spurious theories seemed to be more acceptable, such as the idea that the metal had somehow "crystallized". The notion was based on the crystalline appearance of the fast fracture region of the crack surface, but ignored the fact that the metal was already highly crystalline.
de Havilland Comet Two de Havilland Comet passenger jets broke up in mid-air and crashed within a few months of each other in 1954. As a result, systematic tests were conducted on a fuselage immersed and pressurised in a water tank. After the equivalent of 3,000 flights, investigators at the Royal Aircraft Establishment (RAE) were able to conclude that the crash had been due to failure of the pressure cabin at the forward Automatic Direction Finder window in the roof. This 'window' was in fact one of two apertures for the aerials of an electronic navigation system in which opaque fibreglass panels took the place of the window 'glass'. The failure was a result of metal fatigue caused by the repeated pressurisation and de-pressurisation of the aircraft cabin. Also, the supports around the windows were riveted, not bonded, as the original specifications for the aircraft had called for. The problem was exacerbated by the punch rivet construction technique employed. Unlike drill riveting, the imperfect nature of the hole created by punch riveting caused manufacturing defect cracks which may have caused the start of fatigue cracks around the rivet. The Comet's pressure cabin had been designed to a safety factor comfortably in excess of that required by British Civil Airworthiness Requirements (2.5 times the cabin proof test pressure as opposed to the requirement of 1.33 times and an ultimate load of 2.0 times the cabin pressure) and the accident caused a revision in the estimates of the safe loading strength requirements of airliner pressure cabins. In addition, it was discovered that the stresses around pressure cabin apertures were considerably higher than had been anticipated, especially around sharp-cornered cut-outs, such as windows. As a result, all future jet airliners would feature windows with rounded corners, greatly reducing the stress concentration. This was a noticeable distinguishing feature of all later models of the Comet. Investigators from the RAE told a public inquiry that the sharp corners near the Comets' window openings acted as initiation sites for cracks. The skin of the aircraft was also too thin, and cracks from manufacturing stresses were present at the corners. Alexander L. Kielland oil platform capsizing Alexander L. Kielland was a Norwegian semi-submersible drilling rig that capsized whilst working in the Ekofisk oil field in March 1980, killing 123 people. The capsizing was the worst disaster in Norwegian waters since World War II. The rig, located approximately 320 km east of Dundee, Scotland, was owned by the Stavanger Drilling Company of Norway and was on hire to the United States company Phillips Petroleum at the time of the disaster. In driving rain and mist, early in the evening of 27 March 1980 more than 200 men were off duty in the accommodation on Alexander L. Kielland. The wind was gusting to 40 knots with waves up to 12 m high. The rig had just been winched away from the Edda production platform. Minutes before 18:30 those on board felt a 'sharp crack' followed by 'some kind of trembling'. Suddenly the rig heeled over 30° and then stabilised. Five of the six anchor cables had broken, with one remaining cable preventing the rig from capsizing. The list continued to increase and at 18:53 the remaining anchor cable snapped and the rig turned upside down. 
A year later in March 1981, the investigative report concluded that the rig collapsed owing to a fatigue crack in one of its six bracings (bracing D-6), which connected the collapsed D-leg to the rest of the rig. This was traced to a small 6 mm fillet weld which joined a non-load-bearing flange plate to this D-6 bracing. This flange plate held a sonar device used during drilling operations. The poor profile of the fillet weld contributed to a reduction in its fatigue strength. Further, the investigation found considerable amounts of lamellar tearing in the flange plate and cold cracks in the butt weld. Cold cracks in the welds, increased stress concentrations due to the weakened flange plate, the poor weld profile, and cyclical stresses (which would be common in the North Sea), seemed to collectively play a role in the rig's collapse. Others The 1862 Hartley Colliery Disaster was caused by the fracture of a steam engine beam and killed 204 people. The 1919 Boston Great Molasses Flood has been attributed to a fatigue failure. The 1948 Northwest Airlines Flight 421 crash due to fatigue failure in a wing spar root The 1957 "Mt. Pinatubo", presidential plane of Philippine President Ramon Magsaysay, crashed due to engine failure caused by metal fatigue. The 1965 capsize of the UK's first offshore oil platform, the Sea Gem, was due to fatigue in part of the suspension system linking the hull to the legs. The 1968 Los Angeles Airways Flight 417 lost one of its main rotor blades due to fatigue failure. The 1968 MacRobertson Miller Airlines Flight 1750 lost a wing due to improper maintenance leading to fatigue failure. The 1969 F-111A crash due to a fatigue failure of the wing pivot fitting from a material defect resulted in the development of the damage-tolerant approach for fatigue design. The 1977 Dan-Air Boeing 707 crash caused by fatigue failure resulting in the loss of the right horizontal stabilizer. The 1979 American Airlines Flight 191 crashed after engine separation attributed to fatigue damage in the pylon structure holding the engine to the wing, caused by improper maintenance procedures. The 1980 LOT Flight 7 crashed due to fatigue in an engine turbine shaft resulting in engine disintegration leading to loss of control. The 1985 Japan Airlines Flight 123 crashed after the aircraft lost its vertical stabilizer due to faulty repairs on the rear bulkhead. The 1988 Aloha Airlines Flight 243 suffered an explosive decompression at after a fatigue failure. The 1989 United Airlines Flight 232 lost its tail engine due to fatigue failure in a fan disk hub. The 1992 El Al Flight 1862 lost both engines on its right-wing due to fatigue failure in the pylon mounting of the #3 Engine. The 1998 Eschede train disaster was caused by fatigue failure of a single composite wheel. The 2000 Hatfield rail crash was likely caused by rolling contact fatigue. The 2000 recall of 6.5 million Firestone tires on Ford Explorers originated from fatigue crack growth leading to separation of the tread from the tire. The 2002 China Airlines Flight 611 disintegrated in-flight due to fatigue failure. The 2005 Chalk's Ocean Airways Flight 101 lost its right wing due to fatigue failure brought about by inadequate maintenance practices. The 2009 Viareggio train derailment due to fatigue failure. The 2009 Sayano–Shushenskaya power station accident due to metal fatigue of turbine mountings. The 2017 Air France Flight 66 had in-flight engine failure due to cold dwell fatigue fracture in the fan hub. 
The 2023 Titan submersible implosion is thought to have occurred due to fatigue delamination of the carbon-fiber material used for the hull.

See also

Basquin's Law of Fatigue
Goodman diagram, a diagram by British mechanical engineer John Goodman
International Journal of Fatigue

References

Further reading

External links

Fatigue, Shawn M. Kelly
Application note on fatigue crack propagation in UHMWPE
fatigue test video, Karlsruhe University of Applied Sciences
Strain life method, G. Glinka
Fatigue from variable amplitude loading, A. Fatemi

Fracture mechanics
Materials degradation
Mechanical failure modes
Solid mechanics
Structural analysis
Fatigue (material)
[ "Physics", "Materials_science", "Technology", "Engineering" ]
7,627
[ "Structural engineering", "Solid mechanics", "Mechanical failure modes", "Fracture mechanics", "Structural analysis", "Technological failures", "Materials science", "Mechanics", "Mechanical engineering", "Aerospace engineering", "Materials degradation", "Mechanical failure" ]
349,735
https://en.wikipedia.org/wiki/Baryon%20number
In particle physics, the baryon number is a strictly conserved additive quantum number of a system. It is defined as

B = (nq − nq̄) / 3

where nq is the number of quarks, and nq̄ is the number of antiquarks. Baryons (three quarks) have a baryon number of +1, mesons (one quark, one antiquark) have a baryon number of 0, and antibaryons (three antiquarks) have a baryon number of −1. Exotic hadrons like pentaquarks (four quarks, one antiquark) and tetraquarks (two quarks, two antiquarks) are also classified as baryons and mesons depending on their baryon number.

Baryon number vs. quark number

Quarks carry not only electric charge, but also charges such as color charge and weak isospin. Because of a phenomenon known as color confinement, a hadron cannot have a net color charge; that is, the total color charge of a particle has to be zero ("white"). A quark can have one of three "colors", dubbed "red", "green", and "blue"; while an antiquark may be either "anti-red", "anti-green" or "anti-blue".

For normal hadrons, a white color can thus be achieved in one of three ways:

A quark of one color with an antiquark of the corresponding anticolor, giving a meson with baryon number 0,
Three quarks of different colors, giving a baryon with baryon number +1,
Three antiquarks of different anticolors, giving an antibaryon with baryon number −1.

The baryon number was defined long before the quark model was established, so rather than changing the definitions, particle physicists simply gave quarks one third the baryon number. Nowadays it might be more accurate to speak of the conservation of quark number.

In theory, exotic hadrons can be formed by adding pairs of quarks and antiquarks, provided that each pair has a matching color/anticolor. For example, a pentaquark (four quarks, one antiquark) could have the individual quark colors: red, green, blue, blue, and antiblue. In 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons (Λb).

Particles not formed of quarks

Particles without any quarks have a baryon number of zero. Such particles are

leptons – the electron, muon, tauon, and their corresponding neutrinos
vector bosons – the photon, W and Z bosons, gluons
scalar boson – the Higgs boson
second-order tensor boson – the hypothetical graviton

Conservation

The baryon number is conserved in all the interactions of the Standard Model, with one possible exception. The conservation is due to a global symmetry of the QCD Lagrangian. 'Conserved' means that the sum of the baryon number of all incoming particles is the same as the sum of the baryon numbers of all particles resulting from the reaction. The one exception is the hypothesized Adler–Bell–Jackiw anomaly in electroweak interactions; however, sphalerons are not all that common and could occur at high energy and temperature levels and can explain electroweak baryogenesis and leptogenesis. Electroweak sphalerons can only change the baryon and/or lepton number by 3 or multiples of 3 (collision of three baryons into three leptons/antileptons and vice versa). No experimental evidence of sphalerons has yet been observed.

The hypothetical concepts of grand unified theory (GUT) models and supersymmetry allow for the changing of a baryon into leptons and antiquarks (see B − L), thus violating the conservation of both baryon and lepton numbers. Proton decay would be an example of such a process taking place, but has never been observed.
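As a quick, concrete illustration of the counting rule defined at the top of this article, here is a tiny sketch (the string encoding of quark content is invented for the example, not a standard notation):

```python
# Sketch: baryon number B = (n_quarks - n_antiquarks) / 3 for a few hadrons.
# Quark content is written as a string: lowercase = quark, uppercase = antiquark.
from fractions import Fraction

def baryon_number(content: str) -> Fraction:
    n_q = sum(1 for ch in content if ch.islower())
    n_qbar = sum(1 for ch in content if ch.isupper())
    return Fraction(n_q - n_qbar, 3)

print(baryon_number("uud"))    # proton: 1
print(baryon_number("uD"))     # pi+ meson (u plus anti-d): 0
print(baryon_number("UUD"))    # antiproton: -1
print(baryon_number("uudcC"))  # pentaquark (uudc plus anti-c): 1
```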
The conservation of baryon number is not consistent with the physics of black hole evaporation via Hawking radiation. It is expected in general that quantum gravitational effects violate the conservation of all charges associated with global symmetries. The violation of conservation of baryon number led John Archibald Wheeler to speculate on a principle of mutability for all physical properties.

See also

Lepton number
Flavour (particle physics)
Isospin
Hypercharge
Proton decay
B − L

References

Baryons
Conservation laws
Nuclear physics
Quantum chromodynamics
Quarks
Standard Model
Flavour (particle physics)
Baryon number
[ "Physics" ]
977
[ "Standard Model", "Equations of physics", "Conservation laws", "Particle physics", "Nuclear physics", "Symmetry", "Physics theorems" ]
351,080
https://en.wikipedia.org/wiki/Transparency%20%28projection%29
A transparency, also known variously as a viewfoil or foil (from the French word "feuille" or sheet), or viewgraph, is a thin sheet of transparent flexible material, typically polyester (historically cellulose acetate), onto which figures can be drawn. These are then placed on an overhead projector for display to an audience. Many companies and small organizations use a system of projectors and transparencies in meetings and other groupings of people, though this system is being largely replaced by video projectors and interactive whiteboards.

Printing

Transparencies can be printed using a variety of technologies. In the 1960s and 70s the GAF OZALID "projecto-viewfoil" used a diazo process to make a clear sheet framed in cardboard and protected by a rice paper cover. In the 1980s, laser printers or copiers could make foil sheets using standard xerographic processes. Specialist transparencies are available for use with laser printers that are better able to handle the high temperatures present in the fuser unit. For inkjet printers, coated transparencies are available that can absorb and hold the liquid ink—although care must be taken to avoid excessive exposure to moisture, which can cause the transparency to become cloudy; they must also be loaded correctly into the printer as they are usually only coated on one side.

Uses

Uses for transparencies are as varied as the organizations that use them. Certain classes, such as those associated with mathematics or history and geography, use transparencies to illustrate a point or problem. Until the advent of LaTeX, math classes in particular used rolls of acetate to illustrate sufficiently long problems and to display mathematical symbols missing from common computer keyboards. Aerospace companies, like Boeing and Beechcraft, used transparencies for years in management meetings in order to brief engineers and relevant personnel about new aircraft designs and changes to existing designs, as well as bring up illustrated problems. Some churches and other religious organizations used them to show sermon outlines and illustrate certain topics such as Old Testament battles and Jewish artifacts during worship services, as well as outline business meetings.

Spatial light modulators (SLMs)

Many overhead projectors are used with a flat-panel LCD which, when used this way, is referred to as a spatial light modulator or SLM. Data projectors are often based on some form of SLM in a projection path. An LCD is a transmissive SLM, whereas other technologies such as Texas Instruments' DLP are reflective SLMs. Not all projectors use SLMs (e.g., some use devices that produce their own light rather than function as transparencies). An example of a non-SLM system is the organic light-emitting diode (OLED) display.

See also

Presentation slide
Projection panel
Reversal film

References

External links

Transparency (projection) – semanticscholar.org

Display technology
Office equipment
Presentation
Transparency (projection)
[ "Technology", "Engineering" ]
586
[ "Multimedia", "Electronic engineering", "Presentation", "Display technology" ]
351,088
https://en.wikipedia.org/wiki/Transparency%20%28telecommunication%29
In telecommunications, transparency can refer to: The property of an entity that allows another entity to pass through it without altering either of the entities. The property that allows a transmission system or channel to accept, at its input, unmodified user information, and deliver corresponding user information at its output, unchanged in form or information content. The user information may be changed internally within the transmission system, but it is restored to its original form prior to the output without the involvement of the user. The quality of a data communications system or device that uses a bit-oriented link protocol that does not depend on the bit sequence structure used by the data source. Some communication systems are not transparent. Non-transparent communication systems have one or both of the following problems: user data may be incorrectly interpreted as internal commands. For example, modems with a Time Independent Escape Sequence, and the 20th-century Signaling System No. 5 and R2 signalling telephone systems, occasionally incorrectly interpreted user data (from a "blue box") as commands. output "user data" may not always be the same as input user data. For example, many early email systems were not 8-bit clean; they seemed to transfer typical short text messages properly, but irreversibly converted "unusual" characters (control characters, "high ASCII" characters) into some other "usual" character. Many of these systems also changed user data in other irreversible ways – such as inserting linefeeds to make sure each line is less than some maximum length, and inserting a ">" at the beginning of every line that begins with "From ". Before 8BITMIME, a variety of binary-to-text encoding techniques were overlaid on top of such systems to restore transparency – to make sure that any possible file can be transferred so that the final output "user data" is actually identical to the original user data. References See also In-band signaling Out-of-band communication Telecommunications engineering
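The 8-bit-clean problem described above has a standard fix that is easy to demonstrate. The following Python sketch (illustrative only; it is not tied to any particular mail system) shows how a binary-to-text encoding such as base64 keeps arbitrary bytes intact across a channel that only passes printable 7-bit ASCII:

```python
import base64

# Arbitrary binary payload: control characters and "high ASCII" bytes that
# a non-8-bit-clean channel might mangle, plus a line a naive mail relay
# would prefix with ">".
payload = bytes([0x00, 0x07, 0x0A, 0x80, 0xFF]) + b"From the field"

# Encode into base64's printable-ASCII alphabet before transmission.
wire = base64.b64encode(payload)
assert all(32 <= b < 127 for b in wire)  # only printable 7-bit ASCII on the wire

# Decode on the receiving side: output user data equals input user data.
assert base64.b64decode(wire) == payload
```

The final assertion is the transparency property itself: whatever the channel does internally, the restored output is byte-for-byte identical to the original input.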
Transparency (telecommunication)
[ "Engineering" ]
409
[ "Electrical engineering", "Telecommunications engineering" ]
351,131
https://en.wikipedia.org/wiki/Virtual%20file%20system
A virtual file system (VFS) or virtual filesystem switch is an abstract layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference. It can be used to bridge the differences in Windows, classic Mac OS/macOS and Unix filesystems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing. A VFS specifies an interface (or a "contract") between the kernel and a concrete file system. Therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. The terms of the contract might change incompatibly from release to release, which would require that concrete file system support be recompiled, and possibly modified before recompilation, to allow it to work with a new release of the operating system; or the supplier of the operating system might make only backward-compatible changes to the contract, so that concrete file system support built for a given release of the operating system would work with future versions of the operating system. Implementations One of the first virtual file system mechanisms on Unix-like systems was introduced by Sun Microsystems in SunOS 2.0 in 1985. It allowed Unix system calls to access local UFS file systems and remote NFS file systems transparently. For this reason, Unix vendors who licensed the NFS code from Sun often copied the design of Sun's VFS. Other file systems could be plugged into it also: there was an implementation of the MS-DOS FAT file system developed at Sun that plugged into the SunOS VFS, although it wasn't shipped as a product until SunOS 4.1. The SunOS implementation was the basis of the VFS mechanism in System V Release 4. John Heidemann developed a stacking VFS under SunOS 4.0 for the experimental Ficus file system. This design provided for code reuse among file system types with differing but similar semantics (e.g., an encrypting file system could reuse all of the naming and storage-management code of a non-encrypting file system). Heidemann adapted this work for use in 4.4BSD as a part of his thesis research; descendants of this code underpin the file system implementations in modern BSD derivatives including macOS. Other Unix virtual file systems include the File System Switch in System V Release 3, the Generic File System in Ultrix, and the VFS in Linux. In OS/2 and Microsoft Windows, the virtual file system mechanism is called the Installable File System. The Filesystem in Userspace (FUSE) mechanism allows userland code to plug into the virtual file system mechanism in Linux, NetBSD, FreeBSD, OpenSolaris, and macOS. In Microsoft Windows, virtual filesystems can also be implemented through userland Shell namespace extensions; however, they do not support the lowest-level file system access application programming interfaces in Windows, so not all applications will be able to access file systems that are implemented as namespace extensions. KIO and GVfs/GIO provide similar mechanisms in the KDE and GNOME desktop environments (respectively), with similar limitations, although they can be made to use FUSE techniques and therefore integrate smoothly into the system. 
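The kernel-to-filesystem "contract" can be sketched in miniature. The following Python sketch is a toy model only (it is not the Sun, BSD, or Linux VFS API): an abstract interface plus one in-memory implementation, with client code that works against the contract alone:

```python
from abc import ABC, abstractmethod

class FileSystem(ABC):
    """Toy 'contract' that every concrete file system must fulfil."""

    @abstractmethod
    def read(self, path: str) -> bytes: ...

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class RamFS(FileSystem):
    """A concrete file system kept entirely in memory."""

    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def read(self, path: str) -> bytes:
        return self._files[path]

    def write(self, path: str, data: bytes) -> None:
        self._files[path] = data

def copy(src: FileSystem, dst: FileSystem, path: str) -> None:
    # Client code sees only the contract; it cannot tell (and need not care)
    # whether either end is local, remote, or in memory.
    dst.write(path, src.read(path))

a, b = RamFS(), RamFS()
a.write("/etc/motd", b"hello")
copy(a, b, "/etc/motd")
assert b.read("/etc/motd") == b"hello"
```

Adding support for a new file system type then means writing another subclass that fulfils the same interface, which is the point of the contract.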
Single-file virtual file systems Sometimes Virtual File System refers to a file or a group of files (not necessarily inside a concrete file system) that acts as a manageable container which should provide the functionality of a concrete file system through the usage of software. Examples of such containers are CBFS Storage, or a single-file virtual file system in an emulator or virtualizer such as PCTask, WinUAE, Oracle's VirtualBox, Microsoft's Virtual PC, or VMware. The primary benefit for this type of file system is that it is centralized and easy to remove. A single-file virtual file system may include all the basic features expected of any file system (virtual or otherwise), but access to the internal structure of these file systems is often limited to programs specifically written to make use of the single-file virtual file system (instead of implementation through a driver allowing universal access). Another major drawback is that performance is relatively low when compared to other virtual file systems. Low performance is mostly due to the cost of shuffling virtual files when data is written or deleted from the virtual file system; a minimal sketch of such a container appears below. Implementation of single-file virtual filesystems Direct examples of single-file virtual file systems include emulators, such as PCTask and WinUAE, which encapsulate not only the filesystem data but also the emulated disk layout. This makes it easy to treat an OS installation like any other piece of software—transferring it with removable media or over the network. PCTask The Amiga emulator PCTask emulated an Intel 8088-based PC clocked at 4.77 MHz (and later an 80486SX clocked at 25 MHz). Users of PCTask could create a file of large size on the Amiga filesystem, and this file would be virtually accessed from the emulator as if it were a real PC hard disk. The file could be formatted with the FAT16 filesystem to store normal MS-DOS or Windows files. WinUAE The UAE for Windows, WinUAE, allows large single files on Windows to be treated as Amiga file systems. In WinUAE this file is called a hardfile. UAE could also treat a directory on the host filesystem (Windows, Linux, macOS, AmigaOS) as an Amiga filesystem. See also 9P (protocol) a distributed file system protocol that maps directly to the VFS layer of Plan 9, making all file system access network-transparent Synthetic file system a hierarchical interface to non-file objects that appear as if they were regular files in the tree of a disk-based file system Notes Emulation on Amiga: comparison between PCX and PCTask, Amiga PC emulators, and an article explaining how PCTask works. Help About WinUAE (see Hardfile section). Help About WinUAE (see Add Directory section) References Linux kernel's Virtual File System The Linux VFS, Chapter 4 of Linux File Systems by Moshe Bar (McGraw-Hill, 2001). Chapter 12 of Understanding the Linux Kernel by Daniel P. Bovet, Marco Cesati (O'Reilly Media, 2005). The Linux VFS Model: Naming structure External links Anatomy of the Linux virtual file system switch Computer file systems Virtualization
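To make the single-file container idea above concrete, here is a minimal sketch of an invented toy format (it is not CBFS Storage, a UAE hardfile, or any real disk-image layout): a small index followed by the file data, packed into one host file:

```python
import json
import struct

def pack(container_path: str, files: dict) -> None:
    """Write {name: bytes} into a single host file: a JSON index, then the data."""
    index, blobs, offset = {}, [], 0
    for name, data in files.items():
        index[name] = (offset, len(data))  # where each virtual file starts
        blobs.append(data)
        offset += len(data)
    header = json.dumps(index).encode()
    with open(container_path, "wb") as f:
        f.write(struct.pack("<I", len(header)))  # 4-byte little-endian header size
        f.write(header)
        f.write(b"".join(blobs))

def read_file(container_path: str, name: str) -> bytes:
    """Random access to one virtual file without unpacking the others."""
    with open(container_path, "rb") as f:
        (hlen,) = struct.unpack("<I", f.read(4))
        index = json.loads(f.read(hlen))
        offset, size = index[name]
        f.seek(4 + hlen + offset)
        return f.read(size)

pack("toy.vfs", {"a.txt": b"alpha", "b.txt": b"beta"})
assert read_file("toy.vfs", "b.txt") == b"beta"
```

Note that growing or deleting a virtual file in such a format means rewriting everything stored after it, which is exactly the shuffling cost the article attributes to this design.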
Virtual file system
[ "Engineering" ]
1,409
[ "Computer networks engineering", "Virtualization" ]
4,229,687
https://en.wikipedia.org/wiki/Iron-56
Iron-56 (56Fe) is the most common isotope of iron. About 91.754% of all iron is iron-56. Of all nuclides, iron-56 has the lowest mass per nucleon. With 8.8 MeV binding energy per nucleon, iron-56 is one of the most tightly bound nuclei. The high nuclear binding energy for 56Fe represents the point where further nuclear reactions become energetically unfavorable. Because of this, it is among the heaviest elements formed in stellar nucleosynthesis reactions in massive stars. These reactions fuse lighter elements like magnesium, silicon, and sulfur to form heavier elements. Among the heavier elements formed is 56Ni, which subsequently decays to 56Co and then 56Fe. Relationship to nickel-62 Nickel-62, a relatively rare isotope of nickel, has a higher nuclear binding energy per nucleon; this is consistent with having a higher mass-per-nucleon because nickel-62 has a greater proportion of neutrons, which are slightly more massive than protons. (See the nickel-62 article for more). Light elements undergoing nuclear fusion and heavy elements undergoing nuclear fission release energy as their nucleons bind more tightly, so 62Ni might be expected to be common. However, during stellar nucleosynthesis the competition between photodisintegration and alpha capturing causes more 56Ni to be produced than 62Ni (56Fe is produced later in the star's ejection shell as 56Ni decays). Although nickel-62 has a higher binding energy per nucleon, the conversion of 28 atoms of nickel-62 into 31 atoms of iron-56 releases energy, roughly 10 MeV in total (see the back-of-the-envelope check below). As the universe ages, matter will slowly convert to ever more tightly bound nuclei, approaching 56Fe, ultimately leading to the formation of iron stars over ≈ 10^1500 years, assuming an expanding universe without proton decay. See also Isotopes of iron Iron star References Isotopes of iron
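That energy figure can be checked from tabulated atomic masses. A back-of-the-envelope Python check (the mass values are the commonly tabulated ones and should be treated as approximate):

```python
# Atomic masses in unified atomic mass units (u); commonly tabulated values,
# used here as assumptions of this illustration.
m_ni62 = 61.9283449   # nickel-62
m_fe56 = 55.9349363   # iron-56
U_TO_MEV = 931.494    # energy equivalent of 1 u

# 28 * 62 = 31 * 56 = 1736, so nucleon number is conserved in the conversion.
delta_m = 28 * m_ni62 - 31 * m_fe56   # ~0.0106 u of mass disappears
print(delta_m * U_TO_MEV)             # ~9.9 MeV released
```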
Iron-56
[ "Chemistry" ]
404
[ "Isotopes of iron", "Isotopes" ]
4,229,946
https://en.wikipedia.org/wiki/Soil%20contamination
Soil contamination, soil pollution, or land pollution as a part of land degradation is caused by the presence of xenobiotic (human-made) chemicals or other alteration in the natural soil environment. It is typically caused by industrial activity, agricultural chemicals or improper disposal of waste. The most common chemicals involved are petroleum hydrocarbons, polynuclear aromatic hydrocarbons (such as naphthalene and benzo(a)pyrene), solvents, pesticides, lead, and other heavy metals. Contamination is correlated with the degree of industrialization and the intensity of chemical usage. The concern over soil contamination stems primarily from health risks, from direct contact with the contaminated soil, vapour from the contaminants, or from secondary contamination of water supplies within and underlying the soil. Mapping of contaminated soil sites and the resulting clean-ups are time-consuming and expensive tasks, and require expertise in geology, hydrology, chemistry, computer modelling, and the use of GIS in environmental contamination, as well as an appreciation of the history of industrial chemistry. In North America and South-Western Europe the extent of contaminated land is best known, as many of the countries in these areas have a legal framework to identify and deal with this environmental problem. Developing countries tend to be less tightly regulated despite some of them having undergone significant industrialization. Causes Soil pollution can be caused by the following (non-exhaustive list): Microplastics Oil spills Mining and activities by other heavy industries (accidental spills may happen during such activities) Corrosion of underground storage tanks (including piping used to transmit the contents) Acid rain Intensive farming Agrochemicals, such as pesticides, herbicides and fertilizers Petrochemicals Industrial accidents Road debris Construction activities Exterior lead-based paints Drainage of contaminated surface water into the soil Ammunition, chemical agents, and other agents of war Waste disposal Oil and fuel dumping Nuclear wastes Direct discharge of industrial wastes to the soil Discharge of sewage Landfill and illegal dumping Coal ash Electronic waste Contamination from rocks containing large amounts of toxic elements Lead (Pb) from vehicle exhaust, and cadmium (Cd) and zinc (Zn) from tire wear Deposition of air pollutants from the incineration of fossil fuels The most common chemicals involved are petroleum hydrocarbons, solvents, pesticides, lead, and other heavy metals. Any activity that leads to other forms of soil degradation (erosion, compaction, etc.) may indirectly worsen the contamination effects in that soil remediation becomes more tedious. Historical deposition of coal ash used for residential, commercial, and industrial heating, as well as for industrial processes such as ore smelting, was a common source of contamination in areas that were industrialized before about 1960. Coal naturally concentrates lead and zinc during its formation, as well as other heavy metals to a lesser degree. When the coal is burned, most of these metals become concentrated in the ash (the principal exception being mercury). Coal ash and slag may contain sufficient lead to qualify as a "characteristic hazardous waste", defined in the US as containing more than 5 mg/L of extractable lead using the TCLP procedure.
In addition to lead, coal ash typically contains variable but significant concentrations of polynuclear aromatic hydrocarbons (PAHs; e.g., benzo(a)anthracene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, indeno(cd)pyrene, phenanthrene, anthracene, and others). These PAHs are known human carcinogens, and the acceptable concentrations of them in soil are typically around 1 mg/kg. Coal ash and slag can be recognised by the presence of off-white grains in soil, gray heterogeneous soil, or (coal slag) bubbly, vesicular pebble-sized grains. Treated sewage sludge, known in the industry as biosolids, has become controversial as a "fertilizer". As it is the byproduct of sewage treatment, it generally contains more contaminants such as organisms, pesticides, and heavy metals than other soil. In the European Union, the Urban Waste Water Treatment Directive allows sewage sludge to be sprayed onto land. The volume is expected to double to 185,000 tons of dry solids in 2005. The sludge has good agricultural properties due to its high nitrogen and phosphate content. In 1990/1991, 13% wet weight was sprayed onto 0.13% of the land; however, this is expected to rise 15-fold by 2005. Advocates say there is a need to control this so that pathogenic microorganisms do not get into water courses and to ensure that there is no accumulation of heavy metals in the top soil. Pesticides and herbicides A pesticide is a substance used to kill a pest. A pesticide may be a chemical substance, biological agent (such as a virus or bacteria), antimicrobial, disinfectant or device used against any pest. Pests include insects, plant pathogens, weeds, mollusks, birds, mammals, fish, nematodes (roundworms) and microbes that compete with humans for food, destroy property, spread or are a vector for disease or cause a nuisance. Although there are benefits to the use of pesticides, there are also drawbacks, such as potential toxicity to humans and other organisms. Herbicides are used to kill weeds, especially on pavements and railways. They are similar to auxins and most are biodegradable by soil bacteria. However, one group, the chlorophenoxy herbicides 2,4-D and 2,4,5-T, can carry the impurity dioxin, which is very toxic and can be fatal even in low concentrations. Another herbicide is paraquat. It is highly toxic, but it rapidly degrades in soil due to the action of bacteria and does not kill soil fauna. Insecticides are used to rid farms of pests which damage crops. The insects damage not only standing crops but also stored ones, and in the tropics it is reckoned that one third of the total production is lost during food storage. As with fungicides, the first insecticides used in the nineteenth century were inorganic, e.g. Paris Green and other compounds of arsenic. Nicotine has also been used since 1690. There are now two main groups of synthetic insecticides: 1. Organochlorines include DDT, aldrin, dieldrin and BHC. They are cheap to produce, potent and persistent. DDT was used on a massive scale from the 1930s, with a peak of 72,000 tonnes used in 1970. Then usage fell as the harmful environmental effects were realized. It was found worldwide in fish and birds and was even discovered in the snow in the Antarctic. It is only slightly soluble in water but is very soluble in the bloodstream. It affects the nervous and endocrine systems and causes the eggshells of birds to lack calcium, making them easily breakable.
It is thought to be responsible for the decline of the numbers of birds of prey like ospreys and peregrine falcons in the 1950s – they are now recovering. As well as increased concentration via the food chain, it is known to enter via permeable membranes, so fish get it through their gills. As it has low water solubility, it tends to stay at the water surface, so organisms that live there are most affected. DDT found in fish that formed part of the human food chain caused concern, but the levels found in the liver, kidney and brain tissues were less than 1 ppm and in fat were 10 ppm, which was below the level likely to cause harm. However, DDT was banned in the UK and the United States to stop the further buildup of it in the food chain. U.S. manufacturers continued to sell DDT to developing countries, who could not afford the expensive replacement chemicals and who did not have such stringent regulations governing the use of pesticides. 2. Organophosphates, e.g. parathion and methyl parathion; about 40 other insecticides in this group are available nationally. Parathion is highly toxic, methyl parathion less so, and malathion is generally considered safe as it has low toxicity and is rapidly broken down in the mammalian liver. This group works by preventing normal nerve transmission: cholinesterase is prevented from breaking down the transmitter substance acetylcholine, resulting in uncontrolled muscle movements. Agents of war The disposal of munitions, and a lack of care in the manufacture of munitions caused by the urgency of production, can contaminate soil for extended periods. There is little published evidence on this type of contamination, largely because of restrictions placed by the governments of many countries on the publication of material related to the war effort. However, mustard gas stored during World War II has contaminated some sites for up to 50 years, and the testing of anthrax as a potential biological weapon contaminated the whole island of Gruinard. Human health Exposure pathways Contaminated or polluted soil directly affects human health through direct contact with soil or via inhalation of soil contaminants that have vaporized; potentially greater threats are posed by the infiltration of soil contamination into groundwater aquifers used for human consumption, sometimes in areas apparently far removed from any apparent source of above-ground contamination. Toxic metals can also make their way up the food chain through plants that reside in soils containing high concentrations of heavy metals. This tends to result in the development of pollution-related diseases. Most exposure is accidental, and exposure can happen through: Ingesting dust or soil directly Ingesting food or vegetables grown in contaminated soil or with foods in contact with contaminants Skin contact with dust or soil Vapors from the soil Inhaling clouds of dust while working in soils or windy environments However, some studies estimate that 90% of exposure is through eating contaminated food. Consequences Health consequences from exposure to soil contamination vary greatly depending on pollutant type, the pathway of attack, and the vulnerability of the exposed population. Researchers suggest that pesticides and heavy metals in soil may harm cardiovascular health, including by causing inflammation and changes to the body's internal clock.
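The biomagnification effect running through the DDT discussion above lends itself to a small numerical sketch. Every number below is invented for illustration; real biomagnification factors vary widely by compound and ecosystem:

```python
# Toy biomagnification model: concentration multiplies by an assumed factor
# at each consuming rung of the food chain. All numbers are hypothetical.
water_ppm = 0.000003  # trace persistent-pesticide residue in water
food_chain = [
    ("plankton", 800),
    ("small fish", 10),
    ("large fish", 5),
    ("bird of prey", 20),
]

c = water_ppm
for organism, factor in food_chain:
    c *= factor
    print(f"{organism:>12}: {c:.4f} ppm")
# A vanishingly small water concentration can reach parts-per-million
# levels in top predators after a few trophic steps.
```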
Chronic exposure to chromium, lead, and other metals, petroleum, solvents, and many pesticide and herbicide formulations can be carcinogenic, can cause congenital disorders, or can cause other chronic health conditions. Industrial or human-made concentrations of naturally occurring substances, such as nitrate and ammonia associated with livestock manure from agricultural operations, have also been identified as health hazards in soil and groundwater. Chronic exposure to benzene at sufficient concentrations is known to be associated with a higher incidence of leukemia. Mercury and cyclodienes are known to induce higher incidences of kidney damage and some irreversible diseases. PCBs and cyclodienes are linked to liver toxicity. Organophosphates and carbamates can cause a chain of responses leading to neuromuscular blockage. Many chlorinated solvents induce liver changes, kidney changes, and depression of the central nervous system. There is an entire spectrum of further health effects, such as headache, nausea, fatigue, eye irritation and skin rash, for the chemicals cited above and others. At sufficient dosages a large number of soil contaminants can cause death by exposure via direct contact, inhalation, or ingestion of contaminants in groundwater contaminated through soil. The Scottish Government has commissioned the Institute of Occupational Medicine to undertake a review of methods to assess risk to human health from contaminated land. The overall aim of the project is to work up guidance that should be useful to Scottish local authorities in assessing whether sites represent a significant possibility of significant harm (SPOSH) to human health. It is envisaged that the output of the project will be a short document providing high-level guidance on health risk assessment with reference to existing published guidance and methodologies that have been identified as being particularly relevant and helpful. The project will examine how policy guidelines have been developed for determining the acceptability of risks to human health and propose an approach for assessing what constitutes unacceptable risk in line with the criteria for SPOSH as defined in the legislation and the Scottish Statutory Guidance. Ecosystem effects Not unexpectedly, soil contaminants can have significant deleterious consequences for ecosystems. Radical soil chemistry changes can arise from the presence of many hazardous chemicals, even at low concentrations of the contaminant species. These changes can manifest in the alteration of metabolism of endemic microorganisms and arthropods resident in a given soil environment. The result can be virtual eradication of some of the primary food chain, which in turn could have major consequences for predator or consumer species. Even if the chemical effect on lower life forms is small, the lower pyramid levels of the food chain may ingest alien chemicals, which normally become more concentrated for each consuming rung of the food chain. Many of these effects are now well known, such as the concentration of persistent DDT materials for avian consumers, leading to weakening of egg shells, increased chick mortality and potential extinction of species. Agricultural lands are also affected by certain types of soil contamination. Contaminants typically alter plant metabolism, often causing a reduction in crop yields. This has a secondary effect upon soil conservation, since the languishing crops cannot shield the Earth's soil from erosion.
Some of these chemical contaminants have long half-lives, and in other cases derivative chemicals are formed from the decay of primary soil contaminants. Potential effects of contaminants on soil functions Heavy metals and other soil contaminants can adversely affect the activity, species composition and abundance of soil microorganisms, thereby threatening soil functions such as biochemical cycling of carbon and nitrogen. However, soil contaminants can also become less bioavailable with time, and microorganisms and ecosystems can adapt to altered conditions. Soil properties such as pH, organic matter content and texture are very important and modify the mobility, bioavailability and toxicity of pollutants in contaminated soils. The same amount of contaminant can be toxic in one soil but totally harmless in another soil. This stresses the need for soil-specific risk assessments and measures. Cleanup options Cleanup or environmental remediation is analyzed by environmental scientists, who utilize field measurement of soil chemicals and also apply computer models (GIS in environmental contamination) for analyzing the transport and fate of soil chemicals. Various technologies have been developed for the remediation of oil-contaminated soil and sediments. There are several principal strategies for remediation: Excavate soil and take it to a disposal site away from ready pathways for human or sensitive ecosystem contact. This technique also applies to dredging of bay muds containing toxins. Aeration of soils at the contaminated site (with attendant risk of creating air pollution) Thermal remediation by introduction of heat to raise subsurface temperatures sufficiently high to volatilize chemical contaminants out of the soil for vapor extraction. Technologies include ISTD, electrical resistance heating (ERH), and ET-DSP. Bioremediation, involving microbial digestion of certain organic chemicals. Techniques used in bioremediation include landfarming, biostimulation and bioaugmentation of soil biota with commercially available microflora. Extraction of groundwater or soil vapor with an active electromechanical system, with subsequent stripping of the contaminants from the extract. Containment of the soil contaminants (such as by capping or paving over in place). Phytoremediation, or using plants (such as willow) to extract heavy metals. Mycoremediation, or using fungus to metabolize contaminants and accumulate heavy metals. Remediation of oil-contaminated sediments with self-collapsing air microbubbles. Surfactant leaching Interfacial solar evaporation to extract heavy metal ions from moist soil By country Various national standards for concentrations of particular contaminants include the United States EPA Region 9 Preliminary Remediation Goals (U.S. PRGs), the U.S. EPA Region 3 Risk Based Concentrations (U.S. EPA RBCs) and the National Environment Protection Council of Australia Guideline on Investigation Levels in Soil and Groundwater. People's Republic of China The immense and sustained growth of the People's Republic of China since the 1970s has exacted a price from the land in increased soil pollution. The Ministry of Ecology and Environment believes it to be a threat to the environment, to food safety and to sustainable agriculture.
According to a scientific sampling, 150 million mu (100,000 square kilometres) of China's cultivated land have been polluted, with contaminated water being used to irrigate a further 32.5 million mu (21,670 square kilometres) and another 2 million mu (1,300 square kilometres) covered or destroyed by solid waste. In total, the area accounts for one-tenth of China's cultivatable land, and is mostly in economically developed areas. An estimated 12 million tonnes of grain are contaminated by heavy metals every year, causing direct losses of 20 billion yuan ($2.57 billion USD). A recent survey shows that 19% of agricultural soils are contaminated with heavy metals and metalloids, and that concentrations of these heavy metals in the soil have increased dramatically. European Union According to data received from Member States, in the European Union the number of estimated potential contaminated sites is more than 2.5 million, and the number of identified contaminated sites is around 342 thousand. Municipal and industrial wastes contribute most to soil contamination (38%), followed by the industrial/commercial sector (34%). Mineral oil and heavy metals are the main contaminants, contributing around 60% to soil contamination. In terms of budget, the management of contaminated sites is estimated to cost around 6 billion Euros (€) annually. United Kingdom Generic guidance commonly used in the United Kingdom includes the Soil Guideline Values published by the Department for Environment, Food and Rural Affairs (DEFRA) and the Environment Agency. These are screening values that indicate the minimal acceptable level of a substance; above this level, there can be no assurance of the absence of significant risk of harm to human health. These have been derived using the Contaminated Land Exposure Assessment Model (CLEA UK). Certain input parameters such as Health Criteria Values, age and land use are fed into CLEA UK to obtain a probabilistic output. Guidance by the Inter Departmental Committee for the Redevelopment of Contaminated Land (ICRCL) has been formally withdrawn by DEFRA, for use as a prescriptive document to determine the potential need for remediation or further assessment. The CLEA model published by DEFRA and the Environment Agency (EA) in March 2002 sets a framework for the appropriate assessment of risks to human health from contaminated land, as required by Part IIA of the Environmental Protection Act 1990. As part of this framework, generic Soil Guideline Values (SGVs) have currently been derived for ten contaminants to be used as "intervention values". These values should not be considered as remedial targets but values above which further detailed assessment should be considered; see Dutch standards. Three sets of CLEA SGVs have been produced for three different land uses, namely residential (with and without plant uptake), allotments, and commercial/industrial. It is intended that the SGVs replace the former ICRCL values. The CLEA SGVs relate to assessing chronic (long-term) risks to human health and do not apply to the protection of ground workers during construction, or other potential receptors such as groundwater, buildings, plants or other ecosystems. The CLEA SGVs are not directly applicable to a site completely covered in hardstanding, as there is no direct exposure route to contaminated soils. To date, the first ten of fifty-five contaminant SGVs have been published, for the following: arsenic, cadmium, chromium, lead, inorganic mercury, nickel, selenium, ethyl benzene, phenol and toluene.
Draft SGVs for benzene, naphthalene and xylene have been produced, but their publication is on hold. Toxicological data (Tox) has been published for each of these contaminants as well as for benzo[a]pyrene, benzene, dioxins, furans and dioxin-like PCBs, naphthalene, vinyl chloride, 1,1,2,2 tetrachloroethane and 1,1,1,2 tetrachloroethane, 1,1,1 trichloroethane, tetrachloroethene, carbon tetrachloride, 1,2-dichloroethane, trichloroethene and xylene. The SGVs for ethyl benzene, phenol and toluene are dependent on the soil organic matter (SOM) content (which can be calculated from the total organic carbon (TOC) content). As an initial screen, the SGVs for 1% SOM are considered to be appropriate. Canada As of February 2021, there are more than 2,500 contaminated sites in Canada. One infamous contaminated site is located near a nickel-copper smelter in Sudbury, Ontario. A study investigating the heavy metal pollution in the vicinity of the smelter revealed that elevated levels of nickel and copper were found in the soil, with values going as high as 5,104 ppm Ni and 2,892 ppm Cu within a 1.1 km range of the smelter location. Other metals were also found in the soil, including iron, cobalt, and silver. Furthermore, upon examining the vegetation surrounding the smelter, it was evident that it too had been affected; the results show that the plants contained nickel, copper and aluminium as a result of soil contamination. India In March 2009, the issue of uranium poisoning in Punjab attracted press coverage. It was alleged to be caused by fly ash ponds of thermal power stations, which reportedly led to severe birth defects in children in the Faridkot and Bhatinda districts of Punjab. The news reports claimed the uranium levels were more than 60 times the maximum safe limit. In 2012, the Government of India confirmed that the ground water in the Malwa belt of Punjab has uranium metal that is 50% above the trace limits set by the United Nations' World Health Organization (WHO). Scientific studies, based on over 1000 samples from various sampling points, could not trace the source to fly ash or any sources from thermal power plants or industry as originally alleged. The study also revealed that the uranium concentration in ground water of Malwa district is not 60 times the WHO limits, but only 50% above the WHO limit in 3 locations. The highest concentration found in samples was less than those found naturally in ground waters currently used for human purposes elsewhere, such as Finland. Research is underway to identify natural or other sources for the uranium. See also Contamination control Dutch pollutant standards Environmental policy in China#Soil pollution GIS in environmental contamination Groundwater pollution Habitat destruction Index of waste management articles Land degradation Landfill List of solid waste treatment technologies List of waste management companies Litter Pesticide drift Plasticulture Plastic-eating organisms Remediation of contaminated sites with cement Triangle of death (Italy) Water pollution References Further reading External links Portal for soil and water management in Europe Independent information gateway originally funded by the European Commission for topics related to soil and water, including contaminated land, soil and water management.
European Soil Portal: Soil Contamination At EU-level, the issue of contaminated sites (local contamination) and contaminated land (diffuse contamination) has been considered by: European Soil Data Centre (ESDAC). Article on soil contamination in China Arsenic in groundwater Book on arsenic in groundwater by IAH's Netherlands Chapter and the Netherlands Hydrological Society Environmental chemistry Environmental issues with soil Pollution Soil chemistry
Soil contamination
[ "Chemistry", "Environmental_science" ]
4,932
[ "Environmental chemistry", "Soil chemistry", "Soil contamination", "nan", "Environmental soil science", "Environmental issues with soil" ]
4,230,480
https://en.wikipedia.org/wiki/Wideband%20materials
Wideband material refers to material that can convey signals, whether microwave, optical (light) or acoustic (sound), over a variety of wavelengths. These materials possess favourable attenuation and dielectric properties, and are excellent dielectrics for semiconductor gates. Examples of such materials include gallium nitride (GaN) and silicon carbide (SiC). SiC has been used extensively for several years in the creation of light-emitting devices; however, it performs poorly (providing limited brightness) because it has an indirect band gap. GaN has a wide band gap (~3.4 eV), which usually results in high energies for electrons in the conduction band. References External links UCSB.edu – Wideband Gap Semiconductors Materials science
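A band gap translates directly into a characteristic photon wavelength via E = hc/λ. A quick check in Python (using the standard constant hc ≈ 1239.84 eV·nm; the GaN value quoted above gives a near-ultraviolet result):

```python
# Convert a band gap in eV to the corresponding photon wavelength in nm,
# using hc ~= 1239.84 eV*nm.
def gap_to_wavelength_nm(e_gap_ev: float) -> float:
    return 1239.84 / e_gap_ev

print(gap_to_wavelength_nm(3.4))  # GaN: ~365 nm, in the near-ultraviolet
```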
Wideband materials
[ "Physics", "Materials_science", "Engineering" ]
150
[ "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "Condensed matter physics", "nan", "Condensed matter stubs" ]
4,230,598
https://en.wikipedia.org/wiki/Optical%20flat
An optical flat is an optical-grade piece of glass lapped and polished to be extremely flat on one or both sides, usually within a few tens of nanometres (billionths of a metre). They are used with a monochromatic light to determine the flatness (surface accuracy) of other surfaces (whether optical, metallic, ceramic, or otherwise), by means of wave interference. When an optical flat is placed on another surface and illuminated, the light waves reflect off both the bottom surface of the flat and the surface it is resting on. This causes a phenomenon similar to thin-film interference. The reflected waves interfere, creating a pattern of interference fringes visible as light and dark bands. The spacing between the fringes is smaller where the gap is changing more rapidly, indicating a departure from flatness in one of the two surfaces. This is comparable to the contour lines one would find on a map. A flat surface is indicated by a pattern of straight, parallel fringes with equal spacing, while other patterns indicate uneven surfaces. Two adjacent fringes indicate a difference in elevation of one-half wavelength of the light used, so by counting the fringes, differences in elevation of the surface can be measured to better than one micrometre. Usually only one of the two surfaces of an optical flat is made flat to the specified tolerance, and this surface is indicated by an arrow on the edge of the glass. Optical flats are sometimes given an optical coating and used as precision mirrors or optical windows for special purposes, such as in a Fabry–Pérot interferometer or laser cavity. Optical flats have uses in spectrophotometry as well. Flatness testing An optical flat is usually placed upon a flat surface to be tested. If the surface is clean and reflective enough, rainbow-colored bands of interference fringes will form when the test piece is illuminated with white light. However, if a monochromatic light is used to illuminate the work piece, such as helium, low-pressure sodium, or a laser, then a series of dark and light interference fringes will form. These interference fringes determine the flatness of the work piece, relative to the optical flat, to within a fraction of the wavelength of the light. If both surfaces are perfectly flat and parallel to each other, no interference fringes will form. However, there is usually some air trapped between the surfaces. If the surfaces are flat, but a tiny optical wedge of air exists between them, then straight, parallel interference fringes will form, indicating the angle of the wedge (i.e., more, thinner fringes indicate a steeper wedge, while fewer but wider fringes indicate less of a wedge). The shape of the fringes also indicates the shape of the test surface, because fringes with a bend, a contour, or rings indicate high and low points on the surface, such as rounded edges, hills or valleys, or convex and concave surfaces.
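The fringe-counting rule above is often applied as a simple formula: flatness deviation ≈ (fringe bend ÷ fringe spacing) × λ/2. A small Python sketch with hypothetical numbers (the 4 mm spacing and 1 mm bend below are invented for illustration):

```python
def flatness_deviation_nm(wavelength_nm: float, bend: float, spacing: float) -> float:
    """Deviation implied by a fringe that bows sideways by `bend` units where
    straight fringes are `spacing` units apart (any common unit will do).
    Each full fringe-to-fringe step corresponds to half a wavelength."""
    return (bend / spacing) * wavelength_nm / 2

# Helium-neon illumination (632.8 nm), fringes 4 mm apart, bowed by 1 mm:
print(flatness_deviation_nm(632.8, bend=1.0, spacing=4.0))  # ~79 nm, about lambda/8
```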
Typically, the surface will be cleaned using the "drag" method, in which a lint-free, scratch-free tissue is wetted, stretched, and dragged across the surface, pulling any impurities along with it. This process is usually performed dozens of times, ensuring that the surface is completely free of impurities. A new tissue will need to be used each time, to prevent recontamination of the surfaces from previously removed dust and oils. Testing is often done in a clean-room or another dust-free environment, keeping the dust from settling on the surfaces between cleaning and assembly. Sometimes, the surfaces may be assembled by sliding them together, helping to scrape off any dust that might happen to land on the flat. The testing is usually done in a temperature-controlled environment to prevent any distortions in the glass, and needs to be performed on a very stable work-surface. After testing, the flats are usually cleaned again and stored in a protective case, and are often kept in a temperature-controlled environment until used again. Lighting For the best test results, a monochromatic light, consisting of only a single wavelength, is used to illuminate the flats. To show the fringes properly, several factors need to be taken into account when setting up the light source, such as the angle of incidence between the light and the observer, the angular size of the light source in relation to the pupil of the eye, and the homogeneity of the light source when reflected off the glass. Many sources for monochromatic light can be used. Most lasers emit light of a very narrow bandwidth, and often provide a suitable light source. A helium–neon laser emits light at 632 nanometres (red), while a frequency-doubled Nd:YAG laser emits light at 532 nm (green). Various laser diodes and diode-pumped solid-state lasers emit light in red, yellow, green, blue or violet. Dye lasers can be tuned to emit nearly any color. However, lasers also experience a phenomenon called laser speckle, which shows up in the fringes. Several gas or metal-vapor lamps can also be used. When operated at low pressure and current, these lamps generally produce light in various spectral lines, with one or two lines being most predominant. Because these lines are very narrow, the lamps can be combined with narrow-bandwidth filters to isolate the strongest line. A helium-discharge lamp will produce a line at 587.6 nm (yellow), while a mercury-vapor lamp produces a line at 546.1 nm (yellowish green). Cadmium vapor produces a line at 643.8 nm (red), while low-pressure sodium produces a line at 589.3 nm (yellow). Of all the lights, low-pressure sodium is the only one that produces a single line, requiring no filter. The fringes only appear in the reflection of the light source, so the optical flat must be viewed from the exact angle of incidence that the light shines upon it. If viewed from a zero-degree angle (from directly above), the light must also be at a zero-degree angle. As the viewing angle changes, the lighting angle must also change. The light must be positioned so that its reflection can be seen covering the entire surface. Also, the angular size of the light source needs to be many times greater than the eye. For example, if an incandescent light is used, the fringes may only show up in the reflection of the filament. By moving the lamp much closer to the flat, the angular size becomes larger and the filament may appear to cover the entire flat, giving clearer readings.
Sometimes, a diffuser may be used, such as the powder coating inside frosted bulbs, to provide a homogeneous reflection off the glass. Typically, the measurements will be more accurate when the light source is as close to the flat as possible, but the eye is as far away as possible. How interference fringes form The diagram shows an optical flat resting on a surface to be tested. Unless the two surfaces are perfectly flat, there will be a small gap between them (shown), which will vary with the contour of the surface. Monochromatic light (red) shines through the glass flat and reflects from both the bottom surface of the optical flat and the top surface of the test piece, and the two reflected rays combine and superpose. However, the ray reflecting off the bottom surface travels a longer path. The additional path length is equal to twice the gap between the surfaces. In addition, the ray reflecting off the bottom surface undergoes a 180° phase reversal, while the internal reflection of the other ray from the underside of the optical flat causes no phase reversal. The brightness of the reflected light depends on the difference in the path length of the two rays: where the two reflected waves arrive in phase they reinforce, and the gap appears bright at that point (constructive interference); where they arrive half a wavelength out of phase they cancel, and the gap appears dark (destructive interference). If the gap between the surfaces is not constant, this interference results in a pattern of bright and dark lines or bands called "interference fringes" being observed on the surface. These are similar to contour lines on maps, revealing the height differences of the bottom test surface. The gap between the surfaces is constant along a fringe. The path length difference between two adjacent bright or dark fringes is one wavelength of the light, so the difference in the gap between the surfaces is one-half wavelength. Since the wavelength of light is so small, this technique can measure very small departures from flatness. For example, the wavelength of red light is about 700 nm, so the difference in height between two fringes is half that, or 350 nm, about 1/100 the diameter of a human hair. Mathematical derivation The variation in brightness of the reflected light as a function of gap width can be found by deriving the formula for the sum of the two reflected waves. Assume that the z-axis is oriented in the direction of the reflected rays. Assume for simplicity that the amplitude $A$ of the two reflected light rays is the same (this is almost never true, but the result of differences in amplitude is just a smaller contrast between light and dark fringes). The equation for the electric field of the sinusoidal light ray reflected from the top surface traveling along the z-axis is

$E_1(z,t) = A\cos\!\left(\frac{2\pi z}{\lambda} - \omega t\right)$

where $A$ is the peak amplitude, $\lambda$ is the wavelength, and $\omega$ is the angular frequency of the wave. The ray reflected from the bottom surface will be delayed by the additional path length and the 180° phase reversal at the reflection, causing a phase shift with respect to the top ray:

$E_2(z,t) = A\cos\!\left(\frac{2\pi z}{\lambda} - \omega t + \varphi\right)$

where $\varphi$ is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the electric fields of the two waves is

$E(z,t) = E_1 + E_2 = A\left[\cos\!\left(\frac{2\pi z}{\lambda} - \omega t\right) + \cos\!\left(\frac{2\pi z}{\lambda} - \omega t + \varphi\right)\right]$

Using the trigonometric identity for the sum of two cosines, $\cos a + \cos b = 2\cos\!\left(\frac{a-b}{2}\right)\cos\!\left(\frac{a+b}{2}\right)$, this can be written

$E(z,t) = 2A\cos\!\left(\frac{\varphi}{2}\right)\cos\!\left(\frac{2\pi z}{\lambda} - \omega t + \frac{\varphi}{2}\right)$

This represents a wave at the original wavelength whose amplitude is proportional to the cosine of $\varphi/2$, so the brightness of the reflected light is an oscillating, sinusoidal function of the gap width d.
The phase difference $\varphi$ is equal to the sum of the phase shift due to the path length difference 2d and the additional 180° phase shift at the reflection:

$\varphi = \frac{2\pi}{\lambda}(2d) + \pi$

so the electric field of the resulting wave will be

$E(z,t) = 2A\cos\!\left(\frac{2\pi d}{\lambda} + \frac{\pi}{2}\right)\cos\!\left(\frac{2\pi z}{\lambda} - \omega t + \frac{2\pi d}{\lambda} + \frac{\pi}{2}\right)$

This represents an oscillating wave whose magnitude varies sinusoidally between $2A$ and zero as $d$ increases. Constructive interference: The brightness will be maximum where $\cos(\varphi/2) = \pm 1$, which occurs when

$2d = \left(N - \tfrac{1}{2}\right)\lambda, \qquad N = 1, 2, 3, \ldots$

Destructive interference: The brightness will be zero (or in the more general case minimum) where $\cos(\varphi/2) = 0$, which occurs when

$2d = N\lambda, \qquad N = 0, 1, 2, \ldots$

Thus the bright and dark fringes alternate, with the separation between two adjacent bright or dark fringes representing a change in the gap length of one half wavelength (λ/2). Precision and errors Counterintuitively, the fringes do not exist within the gap or the flat itself. The interference fringes actually form when the light waves all converge at the eye or camera, forming the image. Because the image is the compilation of all converging wavefronts interfering with each other, the flatness of the test piece can only be measured relative to the flatness of the optical flat. Any deviations on the flat will be added to the deviations on the test surface. Therefore, a surface polished to a flatness of λ/4 cannot be effectively tested with a λ/4 flat, as it is not possible to determine where the errors lie, but its contours can be revealed by testing with more accurate surfaces like a λ/20 or λ/50 optical flat. This also means that both the lighting and viewing angle have an effect on the accuracy of the results. When lighted or viewed at an angle, the distance that the light must travel across the gap is longer than when viewed and illuminated straight on. Thus, as the angle of incidence becomes steeper, the fringes will also appear to move and change. A zero-degree angle of incidence is usually the most desirable angle, both for lighting and viewing. Unfortunately, this is usually impossible to achieve with the naked eye. Many interferometers use beamsplitters to obtain such an angle. Because the results are relative to the wavelength of the light, accuracy can also be increased by using light of shorter wavelengths, although the 632 nm line from a helium–neon laser is often used as the standard. No surface is ever completely flat. Therefore, any errors or irregularities that exist on the optical flat will affect the results of the test. Optical flats are extremely sensitive to temperature changes, which can cause temporary surface deviations resulting from uneven thermal expansion. The glass often experiences poor thermal conduction, taking a long time to reach thermal equilibrium. Merely handling the flats can transfer enough heat to offset the results, so glasses such as fused silica or borosilicate are used, which have very low coefficients of thermal expansion. The glass needs to be hard and very stable, and is usually very thick to prevent flexing. When measuring on the nanometre scale, the slightest bit of pressure can cause the glass to flex enough to distort the results. Therefore, a very flat and stable work-surface is also needed, on which the test can be performed, preventing both the flat and the test-piece from sagging under their combined weight. Often, a precision-ground surface plate is used as a work surface, providing a steady table-top for testing upon. To provide an even flatter surface, sometimes the test may be performed on top of another optical flat, with the test surface sandwiched in the middle.
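The derivation above boils down to a simple intensity law: with the 180° reflection shift included, the reflected intensity varies as sin²(2πd/λ), dark where the gap is a multiple of λ/2 and bright midway between. A short numerical check in Python:

```python
import math

def rel_intensity(d_nm: float, wavelength_nm: float = 632.8) -> float:
    """Relative reflected intensity for gap d, from I ~ sin^2(2*pi*d/lambda)."""
    return math.sin(2 * math.pi * d_nm / wavelength_nm) ** 2

# Gaps of 0, lambda/4, lambda/2 and 3*lambda/4 for helium-neon light:
for d in (0.0, 158.2, 316.4, 474.6):
    print(f"gap {d:6.1f} nm -> intensity {rel_intensity(d):.2f}")
# Output alternates dark, bright, dark, bright: adjacent like fringes
# differ by lambda/2 in gap width, as stated in the derivation.
```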
Absolute flatness Absolute flatness is the flatness of an object when measured against an absolute scale, in which the reference flat (standard) is completely free of irregularities. The flatness of any optical flat is relative to the flatness of the original standard that was used to calibrate it. Therefore, because both surfaces have some irregularities, there are few ways to know the true, absolute flatness of any optical flat. The only surface that can achieve nearly absolute flatness is a liquid surface, such as mercury, which can sometimes achieve flatness readings to within λ/100, equating to a deviation of only 6.32 nm (632 nm/100). However, liquid flats are very difficult to use and align properly, so they are typically only used when preparing a standard flat for calibrating other flats. The other method for determining absolute flatness is the "three-flat test." In this test, three flats of equal size and shape are tested against each other. By analyzing the patterns and their different phase shifts, the absolute contours of each surface can be extrapolated. This usually requires at least twelve individual tests, checking each flat against every other flat in at least two different orientations. To eliminate any errors, the flats sometimes may be tested while resting on edge, rather than lying flat, helping to prevent sagging. Wringing Wringing occurs when nearly all of the air becomes forced out from between the surfaces, causing the surfaces to lock together, partly through the vacuum between them. The flatter the surfaces, the better they will wring together, especially when the flatness extends all the way to the edges. If two surfaces are very flat, they may become wrung together so tightly that a lot of force may be needed to separate them. The interference fringes typically only form once the optical flat begins to wring to the testing surface. If the surfaces are clean and very flat, they will begin to wring almost immediately after the first contact. After wringing begins, as air is slowly forced out from between the surfaces, an optical wedge forms between the surfaces. The interference fringes form perpendicular to this wedge. As the air is forced out, the fringes will appear to move toward the thickest gap, spreading out and becoming wider but fewer. As the air is forced out, the vacuum holding the surfaces together becomes stronger. The optical flat should usually never be allowed to fully wring to the surface, otherwise it can be scratched or even broken when separating them. In some cases, if left for many hours, a block of wood may be needed to knock them loose. Testing flatness with an optical flat is typically done as soon as a viable interference pattern develops, and then the surfaces are separated before they can fully wring. Because the angle of the wedge is extremely shallow and the gap extremely small, wringing may take a few hours to complete. Sliding the flat in relation to the surface can speed up wringing, but trying to press the air out will have little effect. If the surfaces are insufficiently flat, if any oil films or impurities exist on the surface, or if slight dust particles land between the surfaces, they may not wring at all. Therefore, the surfaces must be very clean and free of debris to get an accurate measurement. Determining surface shape The fringes act very much like the lines on a topography map, where the fringes are always perpendicular to the wedge between the surfaces.
When wringing first begins, there is a large angle in the air wedge and the fringes will resemble grid topography-lines. If the fringes are straight, then the surface is flat. If the surfaces are allowed to fully wring and become parallel, the straight fringes will widen until only a dark fringe remains, and then they will disappear completely. If the surface is not flat, the grid lines will have some bends in them, indicating the topography of the surface. Straight fringes with bends in them may indicate a raised elevation or a depression. Straight fringes with a "V" shape in the middle indicate a ridge or valley running across the center, while straight fringes with curves near the ends indicate edges that are either rounded off or have a raised lip. If the surfaces are not completely flat, as wringing progresses the fringes will widen and continue to bend. When fully wrung, they will resemble contour topography-lines, indicating the deviations on the surface. Rounded fringes indicate gentle sloping or slightly cylindrical surfaces, while tight corners in the fringes indicate sharp angles in the surface. Small, round circles may indicate bumps or depressions, while concentric circles indicate a conical shape. Unevenly spaced concentric circles indicate a convex or concave surface. Before the surfaces fully wring, these fringes will be distorted due to the added angle of the air wedge, changing into the contours as the air is slowly pushed out. A single dark fringe has the same gap thickness along a line that runs the entire length of the fringe. The adjacent bright fringe indicates a thickness which is either 1/2 of the wavelength narrower or 1/2 of the wavelength wider. The thinner and closer the fringes are, the steeper the slope is, while wider fringes, spaced further apart, show a shallower slope. Unfortunately, it is impossible to tell whether the fringes are indicating an uphill or downhill slope from just a single view of the fringes alone, because the adjacent fringes can be going either way. A ring of concentric circles can indicate that the surface is either concave or convex, an effect similar to the hollow-mask illusion. There are three ways to test the surface for shape, but the most common is the "finger-pressure test." In this test, slight pressure is applied to the flat, to see which way the fringes move. The fringes will move away from the narrow end of the wedge. If the testing surface is concave, when pressure is applied to the center of the rings, the flat will flex a little and the fringes will appear to move inward. However, if the surface is convex, the flat will be in point-contact with the surface in that spot, so it will have no room to flex. Thus, the fringes will remain stationary, merely growing a little wider. If pressure is applied to the edge of the flat, something similar happens. If the surface is convex, the flat will rock a little, causing the fringes to move toward the finger. However, if the surface is concave, the flat will flex a little, and the fringes will move away from the finger toward the center. Although this is called a "finger" pressure test, a wooden stick or some other instrument is often used to avoid heating the glass (with the mere weight of a toothpick often being enough pressure). Another method involves exposing the flat to white light, allowing rainbow fringes to form, and then pressing in the center. If the surface is concave, there will be point-contact along the edge, and the outer fringe will turn dark.
If the surface is convex, there will be point-contact in the center, and the central fringe will turn dark. Much like the tempering colors of steel, the fringes will be slightly brownish on the narrower side of the fringe and blue on the wider side, so if the surface is concave the blue will be on the inside of the rings, but if convex the blue will be on the outside.

The third method involves moving the eye in relation to the flat. When moving the eye from a zero-degree angle of incidence to an oblique angle, the fringes will appear to move. If the testing surface is concave, the fringes will appear to move toward the center. If the surface is convex, the fringes will move away from the center.

To get a truly accurate reading of the surface, the test should usually be performed in at least two different directions. Because the fringes represent only part of a grid, a valley running across the surface may show up as only a slight bend in the fringes if it runs parallel to them. However, if the optical flat is rotated 90 degrees and retested, the fringes will run perpendicular to the valley, and it will show up as a row of V- or U-shaped contours in the fringes. By testing in more than one orientation, a better map of the surface can be made.

Long-term stability

With reasonable care and use, optical flats need to maintain their flatness over long periods of time. Therefore, hard glasses with low coefficients of thermal expansion, such as fused silica, are often used as the manufacturing material. However, a few laboratory measurements of room-temperature, fused-silica optical flats have shown a motion consistent with a material viscosity on the order of 10¹⁷–10¹⁸ Pa·s. This equates to a deviation of a few nanometres over the period of a decade. Because the flatness of an optical flat is relative to the flatness of the original test flat, the true (absolute) flatness at the time of manufacture can only be determined by performing an interferometer test using a liquid flat, or by performing a "three flat test", in which the interference patterns produced by three flats are computer-analyzed. A few tests that have been carried out have shown that a deviation sometimes occurs on the surface of fused silica. However, the tests show that the deformation may be sporadic, with only some of the flats deforming during the test period, some partially deforming, and others remaining the same. The cause of the deformation is unknown, and the change would never be visible to the human eye during a lifetime. (A λ/4 flat has a maximum surface deviation of about 158 nanometres, while a λ/20 flat has a maximum deviation of just over 30 nm.) This deformation has only been observed in fused silica, while soda-lime glass still shows a viscosity of 10⁴¹ Pa·s, which is many orders of magnitude higher.

See also
Newton's rings
Optical contact bonding
Gauge block, another type of component designed for flatness
Surface plate

References

Optical devices
Optical flat
[ "Materials_science", "Engineering" ]
4,936
[ "Glass engineering and science", "Optical devices" ]
4,231,961
https://en.wikipedia.org/wiki/Plasma%20etching
Plasma etching is a form of plasma processing used to fabricate integrated circuits. It involves a high-speed stream of glow discharge (plasma) of an appropriate gas mixture being shot (in pulses) at a sample. The plasma source, known as the etch species, can be either charged (ions) or neutral (atoms and radicals). During the process, the plasma generates volatile etch products at room temperature from the chemical reactions between the elements of the material etched and the reactive species generated by the plasma. Eventually the atoms of the shot element embed themselves at or just below the surface of the target, thus modifying the physical properties of the target.

Mechanisms

Plasma generation

A plasma is a highly energetic state in which many processes can occur. These processes are driven by electrons and atoms. To form the plasma, electrons have to be accelerated to gain energy. Highly energetic electrons then transfer energy to atoms through collisions. Three different processes can occur as a result of these collisions:
Excitation
Dissociation
Ionization

Different species are present in the plasma, such as electrons, ions, radicals, and neutral particles, and these species constantly interact with each other. Two processes occur during plasma etching:
generation of chemical species
interaction with the surrounding surfaces

Without a plasma, all of those processes would occur only at a higher temperature. There are different ways to change the plasma chemistry and obtain different kinds of plasma etching or plasma deposition. One way to form a plasma is by RF excitation from a power source at 13.56 MHz, a frequency allocated for this application in the ISM bands. The mode of operation of the plasma system changes if the operating pressure changes. It also differs for different structures of the reaction chamber. In the simple case, the electrode structure is symmetrical, and the sample is placed upon the grounded electrode.

Influences on the process

The key to developing successful complex etching processes is to find the appropriate gas etch chemistry that will form volatile products with the material to be etched. For some difficult materials (such as magnetic materials), the volatility can only be obtained when the wafer temperature is increased. The main factors that influence the plasma process are:
Electron source
Pressure
Gas species
Vacuum

Surface interaction

The reaction of the products depends on the likelihood of dissimilar atoms, photons, or radicals reacting to form chemical compounds. The temperature of the surface also affects the reaction of the products. Adsorption occurs when a substance gathers at the surface in a condensed layer of varying thickness (usually a thin, oxidized layer). Volatile products desorb into the plasma phase and help the plasma etching process as the material interacts with the sample's walls. If the products are not volatile, a thin film will form on the surface of the material. Different principles affect a sample's suitability for plasma etching:
Volatility
Adsorption
Chemical affinity
Ion bombardment
Sputtering

Plasma etching can change the surface contact angle, for example from hydrophilic to hydrophobic, or vice versa. Argon plasma etching has been reported to increase the contact angle from 52° to 68°, and oxygen plasma etching to reduce the contact angle from 52° to 19°, for CFRP composites used in bone plate applications.
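One basic plasma parameter behind these surface interactions, and behind the sheath-confinement criterion discussed later under Plasma confinement, is the Debye length. The snippet below is a back-of-the-envelope sketch using the standard Debye-length formula; the example temperature, density, and slot width are assumed values, not figures from any particular etcher.

```python
# Back-of-the-envelope check of Debye-sheath plasma confinement.
# lambda_D = sqrt(eps0 * k_B * T_e / (n_e * e^2)); example values are assumed.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C

def debye_length_m(electron_temp_eV: float, electron_density_m3: float) -> float:
    """Debye length in metres; k_B * T_e is expressed directly in joules via eV."""
    te_joules = electron_temp_eV * E
    return math.sqrt(EPS0 * te_joules / (electron_density_m3 * E**2))

def slot_confines_plasma(slot_width_m: float, electron_temp_eV: float,
                         electron_density_m3: float) -> bool:
    """Sheath closes the slot when the Debye length is at least half its width."""
    return debye_length_m(electron_temp_eV, electron_density_m3) >= slot_width_m / 2

if __name__ == "__main__":
    # Assumed processing-plasma values: T_e ~ 3 eV, n_e ~ 1e16 m^-3.
    ld = debye_length_m(3.0, 1e16)
    print(f"Debye length ~ {ld*1e6:.0f} micrometres")  # ~129 micrometres
    print(slot_confines_plasma(0.0002, 3.0, 1e16))     # True for a 0.2 mm slot
```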
Plasma etching has been reported to reduce the surface roughness of metals from hundreds of nanometers down to as low as 3 nm.

Types

Pressure influences the plasma etching process. For plasma etching to happen, the chamber has to be under low pressure, less than 100 Pa. In order to generate a low-pressure plasma, the gas first has to be ionized. The ionization happens by means of a glow discharge. These excitations are driven by an external source, which can deliver up to 30 kW at frequencies ranging from 50 Hz (DC), over 5–10 Hz (pulsed DC), to radio and microwave frequencies (MHz–GHz).

Microwave plasma etching

Microwave etching is performed with an excitation source in the microwave frequency range, i.e. between MHz and GHz.

Hydrogen plasma etching

One form of gas-based plasma etching is hydrogen plasma etching, which is carried out in a dedicated experimental apparatus.

Plasma etcher

A plasma etcher, or etching tool, is a tool used in the production of semiconductor devices. A plasma etcher produces a plasma from a process gas, typically oxygen or a fluorine-bearing gas, using a high-frequency electric field, typically 13.56 MHz. A silicon wafer is placed in the plasma etcher, and the air is evacuated from the process chamber using a system of vacuum pumps. Then a process gas is introduced at low pressure and is excited into a plasma through dielectric breakdown.

Plasma confinement

Industrial plasma etchers often feature plasma confinement to enable repeatable etch rates and precise spatial distributions in plasmas. One method of confining plasmas is by using the properties of the Debye sheath, a near-surface layer in plasmas similar to the double layer in other fluids. For example, if the Debye sheath length on a slotted quartz part is at least half the width of the slot, the sheath will close off the slot and confine the plasma, while still permitting uncharged particles to pass through the slot.

Applications

Plasma etching is currently used to process semiconducting materials for their use in the fabrication of electronics. Small features can be etched into the surface of the semiconducting material in order to be more efficient or to enhance certain properties when used in electronic devices. For example, plasma etching can be used to create deep trenches on the surface of silicon for use in microelectromechanical systems. This application suggests that plasma etching also has the potential to play a major role in the production of microelectronics. Similarly, research is currently being done on how the process can be adjusted to the nanometer scale.

Hydrogen plasma etching, in particular, has other interesting applications. When used in the process of etching semiconductors, hydrogen plasma etching has been shown to be effective in removing portions of native oxides found on the surface. Hydrogen plasma etching also tends to leave a clean and chemically balanced surface, which is ideal for a number of applications.

Oxygen plasma etching can be used for anisotropic deep-etching of diamond nanostructures by applying a high bias in an inductively coupled plasma/reactive ion etching (ICP/RIE) reactor. On the other hand, oxygen plasmas at 0 V bias can be used for the isotropic surface termination of C–H terminated diamond surfaces.

Integrated circuits

Plasma can be used to grow a silicon dioxide film on a silicon wafer (using an oxygen plasma), or can be used to remove silicon dioxide by using a fluorine-bearing gas.
When used in conjunction with photolithography, silicon dioxide can be selectively applied or removed to trace paths for circuits. For the formation of integrated circuits it is necessary to structure various layers. This can be done with a plasma etcher. Before etching, a photoresist is deposited on the surface, illuminated through a mask, and developed. The dry etch is then performed so that structured etching is achieved. After the process, the remaining photoresist has to be removed. This is also done in a special plasma etcher, called an asher. Dry etching allows reproducible, uniform etching of all materials used in silicon and III-V semiconductor technology. By using inductively coupled plasma/reactive ion etching (ICP/RIE), even the hardest materials, such as diamond, can be nanostructured. Plasma etchers are also used for de-layering integrated circuits in failure analysis.

Printed circuit boards

Plasma is used to etch printed circuit boards, including the de-smearing of vias.

See also
Plasma cleaning

References

External links
http://stage.iupac.org/publications/pac/pdf/1990/pdf/6209x1699.pdf

Plasma processing
Semiconductor device fabrication
Plasma etching
[ "Materials_science" ]
1,666
[ "Semiconductor device fabrication", "Microtechnology" ]
4,232,047
https://en.wikipedia.org/wiki/Attenuator%20%28electronics%29
An attenuator is a passive broadband electronic device that reduces the power of a signal without appreciably distorting its waveform. An attenuator is effectively the opposite of an amplifier, though the two work by different methods. While an amplifier provides gain, an attenuator provides loss, or a gain of less than unity. An attenuator is often referred to as a "pad" in audio electronics.

Construction and usage

Attenuators are usually passive devices made from simple voltage divider networks. Switching between different resistances forms adjustable stepped attenuators; continuously adjustable attenuators use potentiometers. For higher frequencies, precisely matched low voltage standing wave ratio (VSWR) resistance networks are used.

Fixed attenuators in circuits are used to lower voltage, dissipate power, and to improve impedance matching. In measuring signals, attenuator pads or adapters are used to lower the amplitude of the signal by a known amount to enable measurements, or to protect the measuring device from signal levels that might damage it. Attenuators are also used to 'match' impedance by lowering apparent SWR (standing wave ratio).

Attenuator circuits

Basic circuits used in attenuators are pi (π) pads and T pads. These may be required to be balanced or unbalanced networks, depending on whether the line geometry with which they are to be used is balanced or unbalanced. For instance, attenuators used with coaxial lines would take the unbalanced form, while attenuators for use with twisted pair are required to take the balanced form. Four fundamental attenuator circuit diagrams are given in the figures on the left. Since an attenuator circuit consists solely of passive resistor elements, it is both linear and reciprocal. If the circuit is also made symmetrical (this is usually the case, since it is usually required that the input and output impedances Z1 and Z2 are equal), then the input and output ports are not distinguished, but by convention the left and right sides of the circuits are referred to as input and output, respectively.

Various tables and calculators are available that provide a means of determining the appropriate resistor values for achieving particular loss values, such as that published by the NAB in 1960 for losses ranging from 1/2 to 40 dB, for use in 600 ohm circuits.

Attenuator characteristics

Key specifications for attenuators are:
Attenuation, expressed in decibels of relative power. A 3 dB pad reduces power to one half, 6 dB to one fourth, 10 dB to one tenth, 20 dB to one hundredth, 30 dB to one thousandth, and so on. When input and output impedances are the same, the voltage attenuation will be the square root of the power attenuation, so, for example, a 6 dB attenuator that reduces power to one fourth will reduce the voltage (and the current) by half.
Nominal impedance, for example 50 ohm
Frequency bandwidth, for example DC–18 GHz
Power dissipation, which depends on the mass and surface area of the resistance material as well as possible additional cooling fins
SWR, the standing wave ratio for the input and output ports
Accuracy
Repeatability

RF attenuators

Radio frequency attenuators are typically coaxial in structure, with precision connectors as ports and a coaxial, microstrip or thin-film internal structure. Above SHF, special waveguide structures are required. The flap attenuator is designed for use in waveguides to attenuate the signal. Important characteristics are: accuracy, low SWR, flat frequency response and repeatability.
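The decibel relationships above, together with the classic design formulas for symmetric pads, translate directly into code. The sketch below uses the standard textbook formulas for symmetric T- and π-pads (these are not taken from this article's omitted figures, and the resistor labels are illustrative): with K = 10^(dB/20) and design impedance Z, the series and shunt arms follow from K and Z.

```python
# Sketch of standard attenuator arithmetic (textbook formulas; labels illustrative).
def power_ratio(db: float) -> float:
    """Power attenuation factor for a pad of the given loss in dB."""
    return 10 ** (db / 10)

def voltage_ratio(db: float) -> float:
    """Voltage attenuation factor, valid when source and load impedances match."""
    return 10 ** (db / 20)

def symmetric_t_pad(z: float, db: float) -> tuple:
    """Series-arm and shunt-arm resistances of a symmetric T-pad of impedance z."""
    k = voltage_ratio(db)
    r_series = z * (k - 1) / (k + 1)
    r_shunt = 2 * z * k / (k * k - 1)
    return r_series, r_shunt

def symmetric_pi_pad(z: float, db: float) -> tuple:
    """Shunt-arm and series-arm resistances of a symmetric pi-pad of impedance z."""
    k = voltage_ratio(db)
    r_shunt = z * (k + 1) / (k - 1)
    r_series = z * (k * k - 1) / (2 * k)
    return r_shunt, r_series

if __name__ == "__main__":
    print(power_ratio(3.0))          # ~2.0: a 3 dB pad halves the power
    print(symmetric_t_pad(50, 6))    # ~(16.6, 66.9) ohms
    print(symmetric_pi_pad(50, 6))   # ~(150.5, 37.4) ohms
```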
The size and shape of the attenuator depend on its ability to dissipate power. RF attenuators are used as loads, as sources of known attenuation, and as protective dissipators of power when measuring RF signals.

Audio attenuators

A line-level attenuator in the preamp, or a power attenuator after the power amplifier, uses electrical resistance to reduce the amplitude of the signal that reaches the speaker, reducing the volume of the output. A line-level attenuator has lower power handling, such as a 1/2-watt potentiometer or voltage divider, and controls preamp-level signals, whereas a power attenuator has higher power-handling capability, such as 10 watts or more, and is used between the power amplifier and the speaker.

Power attenuator (guitar)
Guitar amplifier

Component values for resistive pads and attenuators

This section concerns pi-pads, T-pads and L-pads made entirely from resistors and terminated on each port with a purely real resistance. All impedances, currents, voltages and two-port parameters will be assumed to be purely real. For practical applications, this assumption is often close enough. The pad is designed for a particular load impedance, ZLoad, and a particular source impedance, ZS. The impedance seen looking into the input port will be ZS if the output port is terminated by ZLoad. The impedance seen looking into the output port will be ZLoad if the input port is terminated by ZS.

Reference figures for attenuator component calculation

The attenuator two-port is generally bidirectional. However, in this section it will be treated as though it were one-way. In general, either of the two figures applies, but the first figure (which depicts the source on the left) will be tacitly assumed most of the time. In the case of the L-pad, the second figure will be used if the load impedance is greater than the source impedance. Each resistor in each type of pad discussed is given a unique designation to decrease confusion. The L-pad component value calculation assumes that the design impedance for port 1 (on the left) is equal to or higher than the design impedance for port 2.

Terms used

Pad will include pi-pad, T-pad, L-pad, attenuator, and two-port.
Two-port will include pi-pad, T-pad, L-pad, attenuator, and two-port.
Input port will mean the input port of the two-port.
Output port will mean the output port of the two-port.
Symmetric means a case where the source and load have equal impedance.
Loss means the ratio of the power entering the input port of the pad divided by the power absorbed by the load.
Insertion loss means the ratio of the power that would be delivered to the load if the load were directly connected to the source, divided by the power absorbed by the load when connected through the pad.

Symbols used

Passive, resistive pads and attenuators are bidirectional two-ports, but in this section they will be treated as unidirectional.
ZS = the output impedance of the source.
ZLoad = the input impedance of the load.
Zin = the impedance seen looking into the input port when ZLoad is connected to the output port. Zin is a function of the load impedance.
Zout = the impedance seen looking into the output port when ZS is connected to the input port. Zout is a function of the source impedance.
VS = the source open-circuit (unloaded) voltage.
Vin = voltage applied to the input port by the source.
Vout = voltage applied to the load by the output port.
Iin = current entering the input port from the source.
Iout = current entering the load from the output port.
Pin = Vin Iin = power entering the input port from the source.
Pout = Vout Iout = power absorbed by the load from the output port.
Pdirect = the power that would be absorbed by the load if the load were connected directly to the source.
Lpad = 10 log10 (Pin / Pout), always. Further, if ZS = ZLoad, then Lpad = 20 log10 (Vin / Vout). Note that, as defined, Loss ≥ 0 dB.
Linsertion = 10 log10 (Pdirect / Pout). Further, if ZS = ZLoad, then Linsertion = Lpad.
Loss ≡ Lpad; that is, Loss is defined to be Lpad.

Symmetric T pad resistor calculation

See Valkenburg, p. 11-3.

Symmetric pi pad resistor calculation

See Valkenburg, p. 11-3.

L-pad for impedance matching resistor calculation

If a source and load are both resistive (i.e. Z1 and Z2 have zero or a very small imaginary part), then a resistive L-pad can be used to match them to each other. As shown, either side of the L-pad can be the source or load, but the Z1 side must be the side with the higher impedance. Large positive numbers mean the loss is large. The loss is a monotonic function of the impedance ratio: higher ratios require higher loss.

Converting T-pad to pi-pad

This is the Y-Δ transform.

Converting pi-pad to T-pad

This is the Δ-Y transform.

Conversion between two-ports and pads

T-pad to impedance parameters

The impedance parameters for a passive two-port are
V1 = Z11 I1 + Z12 I2
V2 = Z21 I1 + Z22 I2.
It is always possible to represent a resistive T-pad as a two-port. The representation is particularly simple using impedance parameters.

Impedance parameters to T-pad

The preceding equations are trivially invertible, but if the loss is not enough, some of the T-pad components will have negative resistances.

Impedance parameters to pi-pad

The preceding T-pad parameters can be algebraically converted to pi-pad parameters.

Pi-pad to admittance parameters

The admittance parameters for a passive two-port are
I1 = Y11 V1 + Y12 V2
I2 = Y21 V1 + Y22 V2.
It is always possible to represent a resistive pi-pad as a two-port. The representation is particularly simple using admittance parameters.

Admittance parameters to pi-pad

The preceding equations are trivially invertible, but if the loss is not enough, some of the pi-pad components will have negative resistances.

General case, determining impedance parameters from requirements

Because the pad is entirely made from resistors, it must have a certain minimum loss to match the source and load if they are not equal. The minimum loss is given by
Lmin = 10 log10 (2S − 1 + 2√(S(S − 1))) dB, where S = Z1/Z2 ≥ 1.
Although a passive matching two-port can have less loss, if it does it will not be convertible to a resistive attenuator pad. Once these parameters have been determined, they can be implemented as a T or pi pad as discussed above.

See also
RF and microwave variable attenuators
Optical attenuator

Notes

References

External links
Guitar amp power attenuator FAQ
Basic attenuator circuits
Explanation of attenuator types, impedance matching, and very useful calculator

Resistive components
Microwave technology
Audio engineering
Attenuator (electronics)
[ "Physics", "Engineering" ]
2,244
[ "Physical quantities", "Resistive components", "Electrical engineering", "Audio engineering", "Electrical resistance and conductance" ]
4,232,656
https://en.wikipedia.org/wiki/Trakhtenbrot%27s%20theorem
In logic, finite model theory, and computability theory, Trakhtenbrot's theorem (due to Boris Trakhtenbrot) states that the problem of validity in first-order logic on the class of all finite models is undecidable. In fact, the class of valid sentences over finite models is not recursively enumerable (though it is co-recursively enumerable). Trakhtenbrot's theorem implies that Gödel's completeness theorem (which is fundamental to first-order logic) does not hold in the finite case. It also seems counter-intuitive that being valid over all structures is 'easier' than being valid over just the finite ones.

The theorem was first published in 1950: "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes".

Mathematical formulation

We follow the formulation given by Ebbinghaus and Flum.

Theorem

Satisfiability for finite structures is not decidable in first-order logic. That is, the set {φ | φ is a sentence of first-order logic that is satisfied in some finite structure} is undecidable (Th. 7.2.1, p. 127, in Ebbinghaus and Flum).

Corollary

Let σ be a relational vocabulary with at least one binary relation symbol. The set of σ-sentences valid in all finite structures is not recursively enumerable.

Remarks

This implies that Gödel's completeness theorem fails in the finite case, since completeness implies recursive enumerability. It follows that there is no recursive function f such that: if φ has a finite model, then it has a model of size at most f(φ). In other words, there is no effective analogue of the Löwenheim–Skolem theorem in the finite.

Intuitive proof

This proof is taken from Chapter 10, sections 4 and 5, of Mathematical Logic by H.-D. Ebbinghaus.

As in the most common proof of Gödel's first incompleteness theorem through the undecidability of the halting problem, for each Turing machine M there is a corresponding arithmetical sentence φM, effectively derivable from M, such that φM is true if and only if M halts on the empty tape. Intuitively, φM asserts "there exists a natural number that is the Gödel code for the computation record of M on the empty tape that ends with halting".

If the machine M does halt in finitely many steps, then the complete computation record is also finite, and there is a finite initial segment of the natural numbers such that the arithmetical sentence φM is also true on this initial segment. Intuitively, this is because in this case, proving φM requires the arithmetic properties of only finitely many numbers.

If the machine M does not halt in finitely many steps, then φM is false in any finite model, since there is no finite computation record of M that ends with halting.

Thus, if M halts, φM is true in some finite model. If M does not halt, φM is false in all finite models. So, M does not halt if and only if ¬φM is true over all finite models. The set of machines that do not halt is not recursively enumerable, so the set of valid sentences over finite models is not recursively enumerable.

Alternative proof

In this section we exhibit a more rigorous proof from Libkin. Note in the above statement that the corollary also entails the theorem, and this is the direction we prove here.

Theorem

For every relational vocabulary τ with at least one binary relation symbol, it is undecidable whether a sentence φ of vocabulary τ is finitely satisfiable.

Proof

According to the previous lemma, we can in fact use finitely many binary relation symbols. The idea of the proof is similar to the proof of Fagin's theorem, and we encode Turing machines in first-order logic.
What we want to prove is that for every Turing machine M we can construct a sentence φM of vocabulary τ such that φM is finitely satisfiable if and only if M halts on the empty input, which is equivalent to the halting problem and therefore undecidable.

Let M = ⟨Q, Σ, Δ, δ, q0, Qa, Qr⟩ be a deterministic Turing machine with a single infinite tape, where Q is the set of states, Σ is the input alphabet, Δ is the tape alphabet, δ is the transition function, q0 is the initial state, and Qa and Qr are the sets of accepting and rejecting states. Since we are dealing with the problem of halting on an empty input, we may assume w.l.o.g. that Δ = {0,1} and that 0 represents a blank, while 1 represents some tape symbol.

We define τ so that we can represent computations:
τ := {<, min, T0(⋅,⋅), T1(⋅,⋅), (Hq(⋅,⋅))(q ∈ Q)}
where:
< is a linear order and min is a constant symbol for the minimal element with respect to < (our finite domain will be associated with an initial segment of the natural numbers).
T0 and T1 are tape predicates. Ti(s,t) indicates that position s at time t contains i, where i ∈ {0,1}.
The Hq's are head predicates. Hq(s,t) indicates that at time t the machine is in state q, and its head is in position s.

The sentence φM states that (i) <, min, the Ti's and the Hq's are interpreted as above, and (ii) that the machine eventually halts. The halting condition is equivalent to saying that Hq∗(s, t) holds for some s, t and q∗ ∈ Qa ∪ Qr, and that after that state the configuration of the machine does not change. The configurations of a halting machine (a non-halting computation is not finite) can be represented as a finite τ-structure which satisfies the sentence.

The sentence φM is: φ ≡ α ∧ β ∧ γ ∧ η ∧ ζ ∧ θ. We break it down by components:
α states that < is a linear order and that min is its minimal element.
γ defines the initial configuration of M: it is in state q0, the head is in the first position, and the tape contains only zeros: γ ≡ Hq0(min, min) ∧ ∀s T0(s, min).
η states that in every configuration of M, each tape cell contains exactly one element of Δ: η ≡ ∀s∀t(T0(s, t) ↔ ¬T1(s, t)).
β imposes a basic consistency condition on the predicates Hq: at any time the machine is in exactly one state.
ζ states that at some point M is in a halting state: ζ ≡ ∃s∃t ⋁(q ∈ Qa ∪ Qr) Hq(s, t).
θ consists of a conjunction of sentences stating that the Ti's and Hq's are well behaved with respect to the transitions of M. As an example, let δ(q,0) = (q',1, left), meaning that if M is in state q reading 0, then it writes 1, moves the head one position to the left, and goes into state q'. We represent this condition by the disjunction of θ0 and θ1. The sentence θ0 assures that the tape content in position s changes from 0 to 1, the state changes from q to q', the rest of the tape remains the same, and that the head moves to s−1 (i.e. one position to the left), assuming s is not the first position on the tape. If it is, then everything is handled by θ1: everything is the same, except the head does not move to the left but stays put. Here, s−1 and t+1 are first-order definable abbreviations for the predecessor and successor according to the ordering <.

If φM has a finite model, then such a model represents a computation of M that starts with the empty tape (i.e. a tape containing all zeros) and ends in a halting state.
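The computability fact exploited in the remainder of the proof is that, for a fixed finite structure, checking whether it satisfies a given sentence is mechanical; this is exactly why finite satisfiability is recursively enumerable (one simply enumerates all finite structures). The following is an illustrative sketch of that idea, not part of Libkin's proof: for a vocabulary with a single binary relation E, it enumerates all binary relations on small domains and model-checks the fixed sentence ∀x∃y E(x,y).

```python
# Illustrative sketch: finite satisfiability is semi-decidable because
# model checking over a fixed finite structure is decidable.
# Vocabulary: one binary relation E; example sentence: forall x exists y E(x, y).
from itertools import product

def satisfies_forall_exists(domain: range, edges: frozenset) -> bool:
    """Model check: does (domain, E) satisfy  forall x exists y E(x, y)?"""
    return all(any((x, y) in edges for y in domain) for x in domain)

def enumerate_finite_models(max_size: int):
    """Enumerate every structure (domain of size n, binary relation E), n <= max_size."""
    for n in range(1, max_size + 1):
        domain = range(n)
        pairs = list(product(domain, repeat=2))
        # Each subset of domain^2 is one candidate interpretation of E.
        for bits in product([0, 1], repeat=len(pairs)):
            yield domain, frozenset(p for p, b in zip(pairs, bits) if b)

def finitely_satisfiable(max_size: int) -> bool:
    """Semi-decision procedure, truncated at max_size for the demonstration."""
    return any(satisfies_forall_exists(d, e) for d, e in enumerate_finite_models(max_size))

if __name__ == "__main__":
    print(finitely_satisfiable(2))  # True: e.g. E = {(0, 0)} on a 1-element domain
```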
If M halts on the empty input, then the set of all configurations of the halting computation of M (coded with <, the Ti's and the Hq's) is a model of φM, which is finite, since the set of all configurations of a halting computation is finite. It follows that M halts on the empty input iff φM has a finite model. Since halting on the empty input is undecidable, the question of whether φM has a finite model (equivalently, whether φM is finitely satisfiable) is also undecidable (recursively enumerable, but not recursive). This concludes the proof.

Corollary

The set of finitely satisfiable sentences is recursively enumerable.

Proof

Enumerate all pairs (A, φ), where A is a finite structure and φ is a sentence with A ⊨ φ.

Corollary

For any vocabulary containing at least one binary relation symbol, the set of all finitely valid sentences is not recursively enumerable.

Proof

From the previous corollary, the set of finitely satisfiable sentences is recursively enumerable. Assume that the set of all finitely valid sentences is recursively enumerable. Since ¬φ is finitely valid iff φ is not finitely satisfiable, we conclude that the set of sentences which are not finitely satisfiable is recursively enumerable. If both a set A and its complement are recursively enumerable, then A is recursive. It follows that the set of finitely satisfiable sentences is recursive, which contradicts Trakhtenbrot's theorem.

References

Boolos, Burgess, Jeffrey. Computability and Logic, Cambridge University Press, 2002.
Simpson, S. "Theorems of Church and Trakhtenbrot". 2001.

Finite model theory
Computability theory
Undecidable problems
Trakhtenbrot's theorem
[ "Mathematics" ]
2,107
[ "Mathematical theorems", "Foundations of mathematics", "Mathematical logic", "Computational problems", "Finite model theory", "Undecidable problems", "Model theory", "Computability theory", "Mathematical problems", "Theorems in the foundations of mathematics" ]
4,233,727
https://en.wikipedia.org/wiki/Aurea%20Alexandrina
Aurea Alexandrina was an ancient opiate. It is called Aurea from the gold which enters its composition, and Alexandrina for the physician Nicolaus Myresus Alexandrinus, who invented it. It was considered to be a good preservative against colic and apoplexy. References Opioids Antidotes
Aurea Alexandrina
[ "Chemistry" ]
73
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
4,234,010
https://en.wikipedia.org/wiki/3D%20Systems
3D Systems Corporation is an American company based in Rock Hill, South Carolina, that engineers, manufactures, and sells 3D printers, 3D printing materials, 3D printed parts, and application engineering services. The company creates product concept models, precision and functional prototypes, master patterns for tooling, as well as production parts for direct digital manufacturing. It uses proprietary processes to fabricate physical objects using input from computer-aided design and manufacturing software, or 3D scanning and 3D sculpting devices. 3D Systems' technologies and services are used in the design, development, and production stages of many industries, including aerospace, automotive, healthcare, dental, entertainment, and durable goods. The company offers a range of professional- and production-grade 3D printers, as well as software, materials, and the online rapid part printing service on demand. It is notable within the 3D printing industry for developing stereolithography and the STL file format. Chuck Hull, CTO and former president, pioneered stereolithography and obtained a patent for the technology in 1986. As of 2020, 3D Systems employed over 2,400 people in 25 offices worldwide. History 3D Systems was founded in Valencia, California, by Chuck Hull, the inventor and patent-holder of the first stereolithography (SLA) rapid prototyping system. Prior to Hull's introduction of SLA rapid prototyping, concept models required extensive time and money to produce. The innovation of SLA reduced these resource expenditures while increasing the quality and accuracy of the resulting model. Early SLA systems were complex and costly, and required extensive redesigns before achieving commercial viability. Primary issues concerned hydrodynamic and chemical complications. In 1996, the introduction of solid-state lasers permitted Hull and his team to reformulate their materials. Engineers in transportation, healthcare, and consumer products helped fuel early phases of 3D Systems' rapid prototyping research and development. These industries remain key followers of 3D Systems' technology. In late 2001, 3D Systems began an acquisitions program that expanded the company's technology through ownership of software, materials, printers, and printable content, as well as access to the skills of engineers and designers. The rate of 3D Systems' acquisitions (16 in 2011) raised questions with regard to the task facing the company's management team. Other onlookers pointed to the encompassing scope of the acquisitions as indicating calculated steps by 3D Systems to consolidate the 3D printing industry under one roof and logo, and to become capable of servicing each link in the scan/create-to-print chain. In 2003, Hull was succeeded by Avi Reichental. Both Reichental and Hull are listed among the top twenty most influential people in rapid technologies by TCT Magazine. Hull remains an active member of 3D Systems' board and serves as the company's Chief Technology Officer and Executive Vice President. In 2005, 3D Systems relocated its headquarters to Rock Hill, South Carolina, citing a favorable business climate, a sustained lower cost of doing business, and significant investment and tax benefits as reasons for the move. In May 2011, 3D Systems transferred from Nasdaq (TDSC) to the New York Stock Exchange (DDD). In January 2012 3D Systems acquired Z Corporation for US$137 million. 
That same year, a Gray Wolf report predicted 3D Systems' rate of growth to be unsustainable, pointing to inflated impressions from acquisitions as a corporate misstatement of organic growth. 3D Systems responded to this article on November 19, 2012, claiming it to "contain materially false statements and erroneous conclusions that we believe defamed the company and its reputation and resulted in losses to our shareholders". In January 2014 it was announced that 3D Systems had acquired the Burbank, CA-based collectibles company Gentle Giant Studios, which designs, develops, and manufactures three-dimensional representations of characters from a variety of globally recognized franchises, including Marvel, Disney, AMC's The Walking Dead, Avatar, Harry Potter and Star Wars. In July 2014, 3D Systems announced the acquisition of the Israeli medical imaging company Simbionix. In September 2014, 3D Systems acquired the Leuven, Belgium-based LayerWise, a principal provider of direct metal 3D printing and manufacturing services spun off from KU Leuven. The terms of the acquisition were not disclosed by either company. In January 2015, 3D Systems acquired the 3D printer manufacturer botObjects, the first company to commercialize a full-color printer using the fused filament fabrication technique. botObjects was founded by Martin Warner (CEO) and Mike Duma (CTO). botObjects' proprietary 5-color CMYKW cartridge system was claimed to be able to generate color combinations and gradients by mixing primary printing colors. There was some skepticism about botObjects' claims. In April 2015, 3D Systems announced its acquisition of the Chinese Easyway Group, creating 3D Systems China. Easyway is a Chinese 3D printing sales and service provider, with key operations in Shanghai, Wuxi, Beijing, Guangdong, and Chongqing. In October 2015, Reichental stepped down as the president and CEO of 3D Systems, Inc. and was replaced on an interim basis by the company's chief legal officer, Andrew Johnson. Vyomesh Joshi (VJ) was appointed as president and CEO on April 4, 2016. On May 14, 2020, the 3D Systems board named Jeff Graves as president and CEO, effective May 26. He remains the CEO as of February 17, 2023.

Technology

3D Systems manufactures stereolithography (SLA), fused deposition modeling (FDM), selective laser sintering (SLS), color-jet printing (CJP), multi-jet printing (MJP), and direct metal printing (DMP, a version of SLS that uses metal powder) systems. Each technology uses digital 3D data to create parts through an additive layer-by-layer process. The systems vary in their materials, print capacities, and applications.

Color jet printing uses inkjet technology to deposit a liquid binder across a bed of powder. Powder is released and spread with a roller to form each new layer. This technology was originally developed by Z Corporation. Multi-jet printing refers to the process of depositing liquid photopolymers onto a build surface using inkjet technology. A high resolution is attainable, with a support material that can be easily removed in post-processing.

Products and patents

As part of 3D Systems' effort to consolidate 3D printing under one company, its products span a range of 3D printers and print products to target users of its technologies across industries. 3D Systems offers both professional and production printers. In addition to printers, 3D Systems offers content creation software, including reverse engineering software and organic 3D modeling software.
Following a razor and blades model, 3D Systems offers more than one hundred materials to be used with its printers, including waxes, rubber-like materials, metals, composites, plastics and nylons. 3D Systems is a closed-source company, using in-house technologies for product development and patents to protect their technologies from competitors. Critics of the closed-source model have blamed seemingly slow development and innovation in 3D printing not on a lack of technology, but on a lack of open information sharing within the industry, and supporters argue that the right to patents inspires and motivates higher-quality innovations, leading to a better and more impressive final product. In November 2012, 3D Systems filed a lawsuit against prosumer 3D printer company Formlabs and the Kickstarter crowdfunding website over Formlabs' attempt to fund a printer which it claimed infringed its patent on "Simultaneous multiple layer curing in stereolithography." The legal procedure lasted more than two years and was significant enough to be covered in a Netflix documentary about 3D printing, called "Print the Legend". 3D Systems has applied for patents for the following innovations and technologies: the rapid prototyping and manufacturing system and method; radiation-curable compositions useful in image projection systems; compensation of actinic radiation intensity profiles for 3D modelers; apparatus and methods for cooling laser-sintered parts; radiation-curable compositions useful in solid freeform fabrication systems; apparatus for 3D printing using imaged layers; compositions and methods for selective deposition modeling; edge smoothness with low-resolution projected images for use in solid imaging; an elevator and method for tilting a solid image build platform for reducing air entrapment and for build release; selective deposition modeling methods for improved support-object interface; region-based supports for parts produced by solid freeform fabrication; additive manufacturing methods for improved curl control and sidewall quality; support and build material and applications. Applications and industries 3D Systems' products and services are used across industries to assist, either in part or in full, the design, manufacture and/or marketing processes. 3D Systems' technologies and materials are used for prototyping and the production of functional end-use parts, in addition to fast, precise design communication. Current 3D Systems-reliant industries include automotive, aerospace and defense, architecture, dental and healthcare, consumer goods, and manufacturing. Examples of industry-specific applications include: Aerospace, for the manufacture and tooling of complex, durable and lighter-weight flight parts Architecture, for structure verification, design review, client concept communication, reverse structure engineering, and expedited scaled modeling Automotive, for design verification, difficult visualizations, and new engine development Defense, for lightweight flight and surveillance parts and the reduction of inventory with on-demand printing Dentistry, for restorations, molds and treatments. Invisalign orthodontics devices use 3D Systems' technologies. Education, for equation and geometry visualizations, art education, and design initiatives Entertainment, for the manufacture and prototyping of action figures, toys, games and game components; printing of sustainable guitars and basses, multifunction synthesizers, etc. 
Healthcare, for customized hearing aids and prosthetics, improved medicine delivery methods, respiratory devices, therapeutics, and flexible endoscopy and laparoscopy devices for improved procedures and recovery times
Manufacturing, for faster product development cycles, mold production, prototypes, and design troubleshooting

For industries such as aerospace and automotive, 3D Systems' technologies have reduced the time needed to incorporate design drafts and have enabled the production of more efficient parts of lighter weight. Because 3D printing builds layer-by-layer according to design, it does not need to accommodate the traditional manufacturing tools of subtractive methods, often resulting in lighter parts and more efficient geometries.

Operations

In 2007, the company consolidated its offices, operations, and research and development functions into a new global headquarters in Rock Hill, South Carolina, US. About half of the headquarters consists of research and development laboratories, along with a Rapid Manufacturing Center (RMC) where 3D Systems' rapid prototyping, rapid manufacturing and 3D printing systems are at work. With customers in 80 countries, 3D Systems has over 2,100 employees in 25 worldwide locations, including San Francisco, Leuven, France, Germany, Italy, Switzerland, South Korea, Brazil, the United Kingdom, China and Japan. The company has more than 359 U.S. and foreign patents. In 2019, the company consolidated resources within its On Demand domestic rapid printing service locations into Littleton, Seattle, Lawrenceburg, and Wilsonville. Restructuring and additions were made to the Lawrenceburg facility for future expansion and growth, nearly doubling its size.

Community involvement and partnerships

3D Systems is involved in a multi-year agreement with the Smithsonian Institution as part of an effort to strengthen collections' stewardship and increase collection accessibility through 3D representations. In 2012, 3D Systems began partnering with the Scholastic Art & Writing Awards in the Future New category, where three winners are awarded a $1,000 scholarship in addition to the prizes and recognition granted to winners by the Scholastic Awards, and contributed two production-grade 3D printers to the National Network for Manufacturing Innovation (NNMI), which aims to re-localize manufacturing and increase US manufacturing competitiveness. 3D Systems is also a corporate underwriter of the National Children's Oral Health Foundation (NCOHF), which delivers educational, preventative and treatment oral health services to children in at-risk populations. On February 18, 2014, Ekso Bionics debuted the first-ever 3D-printed hybrid exoskeleton in collaboration with 3D Systems.

See also
List of 3D printer manufacturers

References

External links

1986 establishments in California
Companies listed on the New York Stock Exchange
3D printer companies
Computer-aided design
Manufacturing companies based in South Carolina
Technology companies established in 1986
American companies established in 1986
Manufacturing companies established in 1986
Multinational companies headquartered in the United States
Technology companies of the United States
Fused filament fabrication
Rock Hill, South Carolina
3D Systems
[ "Engineering" ]
2,598
[ "Computer-aided design", "Design engineering" ]
4,234,672
https://en.wikipedia.org/wiki/Roland%20Fra%C3%AFss%C3%A9
Roland Fraïssé (; 12 March 1920 – 30 March 2008) was a French mathematical logician. Life Fraïssé received his doctoral degree from the University of Paris in 1953. In his thesis, Fraïssé used the back-and-forth method to determine whether two model-theoretic structures were elementarily equivalent. This method of determining elementary equivalence was later formulated as the Ehrenfeucht–Fraïssé game. Fraïssé worked primarily in relation theory. Another of his important works was the Fraïssé construction of a Fraïssé limit of finite structures. He also formulated Fraïssé's conjecture on order embeddings, and introduced the notion of compensor in the theory of posets. Most of his career was spent as Professor at the University of Provence in Marseille, France. Selected publications Sur quelques classifications des systèmes de relations, thesis, University of Paris, 1953; published in Publications Scientifiques de l'Université d'Alger, series A 1 (1954), 35–182. Cours de logique mathématique, Paris: Gauthier-Villars Éditeur, 1967; second edition, 3 vols., 1971–1975; tr. into English and ed. by David Louvish as Course of Mathematical Logic, 2 vols., Dordrecht: Reidel, 1973–1974. Theory of relations, tr. into English by P. Clote, Amsterdam: North-Holland, 1986; rev. ed. 2000. References French logicians Model theorists Academic staff of the University of Provence 20th-century French mathematicians 21st-century French mathematicians 1920 births 2008 deaths Mathematical logicians French male non-fiction writers 20th-century French philosophers 20th-century French male writers University of Paris alumni
Roland Fraïssé
[ "Mathematics" ]
355
[ "Model theorists", "Mathematical logic", "Model theory", "Mathematical logicians" ]
1,034,969
https://en.wikipedia.org/wiki/Levelling
Levelling or leveling (American English; see spelling differences) is a branch of surveying, the object of which is to establish, verify or measure the height of specified points relative to a datum. It is widely used in geodesy and cartography to measure vertical position with respect to a vertical datum, and in construction to measure height differences of construction artifacts.

Optical levelling

Optical levelling, also known as spirit levelling and differential levelling, employs an optical level, which consists of a precision telescope with crosshairs and stadia marks. The crosshairs are used to establish the level point on the target, and the stadia allow range-finding; stadia are usually at a ratio of 100:1, in which case one metre between the stadia marks on the level staff (or rod) represents 100 metres from the target. The complete unit is normally mounted on a tripod, and the telescope can freely rotate 360° in a horizontal plane. The surveyor adjusts the instrument's level by coarse adjustment of the tripod legs and fine adjustment using three precision levelling screws on the instrument to make the rotational plane horizontal. The surveyor does this with the use of a bull's eye level built into the instrument mount.

Procedure

The surveyor looks through the eyepiece of the telescope while an assistant holds a vertical level staff which is graduated in inches or centimetres. The level staff is placed vertically, using a level, with its foot on the point for which the level measurement is required. The telescope is rotated and focused until the level staff is plainly visible in the crosshairs. In the case of a high-accuracy manual level, the fine level adjustment is made by an altitude screw, using a high-accuracy bubble level fixed to the telescope. This can be viewed by a mirror whilst adjusting, or the ends of the bubble can be displayed within the telescope, which also allows assurance of the accurate level of the telescope whilst the sight is being taken. However, in the case of an automatic level, altitude adjustment is done automatically by a suspended prism due to gravity, as long as the coarse levelling is accurate within certain limits. When level, the staff graduation reading at the crosshairs is recorded, and an identifying mark or marker is placed where the level staff rested on the object or position being surveyed.

A typical procedure for a linear track of levels from a known datum is as follows. Set up the instrument near a point of known or assumed elevation. A rod or staff is held vertical on that point and the instrument is used manually or automatically to read the rod scale. This gives the height of the instrument above the starting (backsight) point and allows the height of the instrument (H.I.) above the datum to be computed. The rod is then held on an unknown point and a reading is taken in the same manner, allowing the elevation of the new (foresight) point to be computed. The difference between these two readings equals the change in elevation, which is why this method is also called differential levelling. The procedure is repeated until the destination point is reached. It is usual practice to perform either a complete loop back to the starting point or else close the traverse on a second point whose elevation is already known. The closure check guards against blunders in the operation, and allows residual error to be distributed in the most likely manner among the stations.
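The height-of-instrument bookkeeping in this procedure is easy to mechanize. The following sketch (with illustrative readings and function names, not a surveying standard) runs a short level line: each setup adds a backsight to the last known elevation to get the instrument height, then subtracts a foresight to carry the elevation forward, and the loop misclosure is checked at the end.

```python
# Sketch of differential-levelling bookkeeping: HI = elevation + backsight,
# next elevation = HI - foresight. Readings below are made-up example values.
def run_level_line(start_elevation: float, shots: list) -> float:
    """Carry elevation through a sequence of (backsight, foresight) setups."""
    elevation = start_elevation
    for backsight, foresight in shots:
        height_of_instrument = elevation + backsight
        elevation = height_of_instrument - foresight
    return elevation

if __name__ == "__main__":
    start = 100.000  # known benchmark elevation, metres
    # Out to the new point and back again (a closed loop), readings in metres:
    loop = [(1.215, 0.842), (1.008, 1.631), (0.900, 1.274), (1.560, 0.936)]
    end = run_level_line(start, loop)
    misclosure = end - start
    print(f"returned elevation: {end:.3f} m, misclosure: {misclosure:+.3f} m")
```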
Some instruments provide three crosshairs which allow stadia measurement of the foresight and backsight distances. These also allow use of the average of the three readings (3-wire levelling) as a check against blunders and for averaging out the error of interpolation between marks on the rod scale.

The two main types of levelling are single-levelling, as already described, and double-levelling (double-rodding). In double-levelling, a surveyor takes two foresights and two backsights and makes sure the difference between the foresights and the difference between the backsights are equal, thereby reducing the amount of error. Double-levelling costs twice as much as single-levelling.

Turning a level

When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is the term used for moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level, and the height difference is used to find the new elevation of the level gun. This is repeated until the series of measurements is completed.

The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet high, allowing the level to be set much higher than the base of the rod.

Trigonometric levelling

The other standard method of levelling in construction and surveying is called trigonometric levelling, which is preferred when levelling "out" to a number of points from one stationary point. This is done by using a total station, or any other instrument, to read the vertical (zenith) angle to the rod; the change in elevation is then calculated using trigonometric functions (see the example below). At greater distances (typically 1,000 feet and greater), the curvature of the Earth and the refraction of the instrument wave through the air must be taken into account in the measurements as well (see the section below).

For example, suppose an instrument at Point A reads a zenith angle of 88°15'22" (degrees, minutes, and seconds of arc) to a rod at Point B, with a slope distance of 305.50 feet. Not factoring in rod or instrument height, the elevation change is calculated thus: cos(88°15'22") × 305.5 ≈ 9.30 ft, meaning an elevation change of approximately 9.30 feet between Points A and B. So if Point A is at 1,000 feet of elevation, then Point B would be at approximately 1,009.30 feet of elevation. The reference line (0°) for zenith angles points straight up, going clockwise one complete revolution, so an angle reading of less than 90 degrees (horizontal) means looking uphill rather than downhill (and the opposite for angles greater than 90 degrees), and thus a gain in elevation.

Refraction and curvature

The curvature of the earth means that a line of sight that is horizontal at the instrument will be higher and higher above a spheroid at greater distances. The effect may be insignificant for some work at distances under 100 metres. The increase in height of a straight line with distance d is approximately
h = d²/(2R),
where R is the radius of the earth.
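The worked example and the curvature formula above can be checked numerically; the snippet below is a plain reimplementation of that arithmetic (the 6371 km earth radius is an assumed round value).

```python
# Reproduces the trigonometric-levelling example and the curvature correction.
import math

def elevation_change(zenith_deg: float, zenith_min: float, zenith_sec: float,
                     slope_distance: float) -> float:
    """Elevation change = cos(zenith angle) * slope distance (rod/instrument
    heights ignored, as in the example above)."""
    zenith = math.radians(zenith_deg + zenith_min / 60 + zenith_sec / 3600)
    return math.cos(zenith) * slope_distance

def curvature_rise_m(distance_m: float, earth_radius_m: float = 6_371_000.0) -> float:
    """Height of a straight horizontal sight line above the spheroid: d^2 / (2R)."""
    return distance_m ** 2 / (2 * earth_radius_m)

if __name__ == "__main__":
    print(round(elevation_change(88, 15, 22, 305.50), 2))  # ~9.30 (feet)
    print(round(curvature_rise_m(1000.0), 3))              # ~0.078 m over 1 km
```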
The line of sight is horizontal at the instrument, but it is not a straight line because of atmospheric refraction. The change of air density with elevation causes the line of sight to bend toward the earth. The combined correction for refraction and curvature is approximately
h ≈ 0.067 d² (h in metres, d in kilometres), or h ≈ 0.021 d² (h in feet, d in thousands of feet).
For precise work these effects need to be calculated and corrections applied. For most work it is sufficient to keep the foresight and backsight distances approximately equal so that the refraction and curvature effects cancel out. Refraction is generally the greatest source of error in levelling. For short level lines the effects of temperature and pressure are generally insignificant, but the effect of the temperature gradient dT/dh can lead to errors.

Levelling loops and gravity variations

Assuming error-free measurements, if the Earth's gravity field were completely regular and gravity constant, levelling loops would always close precisely:
Σ Δhi = 0
around a loop. In the real gravity field of the Earth, this happens only approximately; on small loops typical of engineering projects, the loop closure is negligible, but on larger loops covering regions or continents it is not. Instead of height differences, geopotential differences do close around loops:
Σ gi Δhi = 0,
where gi stands for gravity at the levelling interval i. For precise levelling networks on a national scale, the latter formula should always be used; geopotential differences should be used in all computations, producing geopotential values for the benchmarks of the network.

High-precision levelling, especially when conducted over long distances as used for the establishment and maintenance of vertical datums, is called geodetic levelling.

Instruments

Classical instruments

The dumpy level was developed by English civil engineer William Gravatt, while surveying the route of a proposed railway line from London to Dover. More compact and hence both more robust and easier to transport, it is commonly believed that dumpy levelling is less accurate than other types of levelling, but such is not the case. Dumpy levelling requires shorter and therefore more numerous sights, but this fault is compensated by the practice of making foresights and backsights equal.

Precise level designs were often used for large levelling projects where utmost accuracy was required. They differ from other levels in having a very precise spirit level tube and a micrometer adjustment to raise or lower the line of sight so that the crosshair can be made to coincide with a line on the rod scale and no interpolation is required.

Automatic level

Automatic levels make use of a compensator that ensures that the line of sight remains horizontal once the operator has roughly levelled the instrument (to within maybe 0.05 degree). The compensator consists of small prisms suspended from wires inside the level's chassis that are connected together in the shape of a pendulum. This allows only horizontal light rays to enter, even in cases where the telescope of the instrument is not perfectly plumb. The surveyor sets the instrument up quickly and does not have to re-level it carefully each time they sight on a rod on another point. It also reduces the effect of minor settling of the tripod to the actual amount of motion, instead of leveraging the tilt over the sight distance. Because the level of the instrument only needs to be adjusted once per setup, the surveyor can quickly and easily read as many side-shots as necessary between turns.
Three level screws are used to level the instrument, as opposed to the four screws historically found in dumpy levels. Laser level Laser levels project a beam which is visible and/or detectable by a sensor on the leveling rod. This style is widely used in construction work but not for more precise control work. An advantage is that one person can perform the levelling independently, whereas other types require one person at the instrument and one holding the rod. The sensor can be mounted on earth-moving machinery to allow automated grading. See also Astrogeodetic levelling Dynamic height Glossary of levelling terms Hydrostatic levelling Land levelling Orthometric height Physical geodesy Survey camp References External links USALandSurveyor Differential leveling video tutorials E-learning site with online exercises for differential levelling Differential levelling online calculation Civil engineering Geomatics engineering Surveying Vertical position
Levelling
[ "Physics", "Engineering" ]
2,304
[ "Vertical position", "Physical quantities", "Distance", "Construction", "Surveying", "Civil engineering" ]
1,035,507
https://en.wikipedia.org/wiki/Radiopharmacology
Radiopharmacology is radiochemistry applied to medicine and thus the pharmacology of radiopharmaceuticals (medicinal radiocompounds, that is, pharmaceutical drugs that are radioactive). Radiopharmaceuticals are used in the field of nuclear medicine as radioactive tracers in medical imaging and in therapy for many diseases (for example, brachytherapy). Many radiopharmaceuticals use technetium-99m (Tc-99m) which has many useful properties as a gamma-emitting tracer nuclide. In the book Technetium a total of 31 different radiopharmaceuticals based on Tc-99m are listed for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, kidneys, skeleton, blood and tumors. The term radioisotope, which in its general sense refers to any radioactive isotope (radionuclide), has historically been used to refer to all radiopharmaceuticals, and this usage remains common. Technically, however, many radiopharmaceuticals incorporate a radioactive tracer atom into a larger pharmaceutically-active molecule, which is localized in the body, after which the radionuclide tracer atom allows it to be easily detected with a gamma camera or similar gamma imaging device. An example is fludeoxyglucose in which fluorine-18 is incorporated into deoxyglucose. Some radioisotopes (for example gallium-67, gallium-68, and radioiodine) are used directly as soluble ionic salts, without further modification. This use relies on the chemical and biological properties of the radioisotope itself, to localize it within the body. History See nuclear medicine. Production Production of a radiopharmaceutical involves two processes: The production of the radionuclide on which the pharmaceutical is based. The preparation and packaging of the complete radiopharmaceutical. Radionuclides used in radiopharmaceuticals are mostly radioactive isotopes of elements with atomic numbers less than that of bismuth, that is, they are radioactive isotopes of elements that also have one or more stable isotopes. These may be roughly divided into two classes: Those with more neutrons in the nucleus than those required for stability are known as proton-deficient, and tend to be most easily produced in a nuclear reactor. The majority of radiopharmaceuticals are based on proton deficient isotopes, with technetium-99m being the most commonly used medical isotope, and therefore nuclear reactors are the prime source of medical radioisotopes. Those with fewer neutrons in the nucleus than those required for stability are known as neutron-deficient, and tend to be most easily produced using a proton accelerator such as a medical cyclotron. Practical use Because radiopharmeuticals require special licenses and handling techniques, they are often kept in local centers for medical radioisotope storage, often known as radiopharmacies. A radiopharmacist may dispense them from there, to local centers where they are handled at the practical medicine facility. Drug nomenclature for radiopharmaceuticals As with other pharmaceutical drugs, there is standardization of the drug nomenclature for radiopharmaceuticals, although various standards coexist. The International Nonproprietary Name (INN) gives the base drug name, followed by the radioisotope (as mass number, no space, element symbol) in parentheses with no superscript, followed by the ligand (if any). It is common to see square brackets and superscript superimposed onto the INN name, because chemical nomenclature (such as IUPAC nomenclature) uses those. 
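The INN composition rule just described can be sketched in a few lines of Python (the USP style, described next, differs). The function name and signature are our own invention for illustration, not an established API.

```python
def inn_name(base_name: str, mass_number: int, symbol: str, ligand: str = "") -> str:
    """INN style: base drug name, then the radioisotope in parentheses
    (mass number, no space, element symbol), then the ligand, if any."""
    name = f"{base_name} ({mass_number}{symbol})"
    return f"{name} {ligand}".strip()

print(inn_name("fludeoxyglucose", 18, "F"))  # -> fludeoxyglucose (18F)
```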
The United States Pharmacopeia (USP) name gives the base drug name, followed by the radioisotope (as element symbol, space, mass number) with no parentheses, no hyphen, and no superscript, followed by the ligand (if any). The USP style is not the INN style, despite their being described as one and the same in some publications (e.g., AMA, whose style for radiopharmaceuticals matches the USP style). The United States Pharmacopeial Convention is a sponsor organization of the USAN Council, and the USAN for a given drug is often the same as the USP name. See also Radioactive tracer Nuclear medicine References Further reading Notes for guidance on the clinical administration of radiopharmaceuticals and use of sealed radioactive sources. Administration of radioactive substances advisory committee. March 2006. Produced by the Health Protection Agency. Malabsorption. In: The Merck Manual of Geriatrics, chapter 111. Leukoscan summary of product characteristics (Tc99m-Sulesomab). Schwochau, Klaus. Technetium. Wiley-VCH (2000). External links National Isotope Development Center U.S. Government resources for isotopes - production, distribution, and information Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program sponsoring isotope production and production research and development Radiobiology Radiation therapy Medicinal chemistry Medicinal radiochemistry
Radiopharmacology
[ "Chemistry", "Biology" ]
1,103
[ "Medicinal radiochemistry", "Radiobiology", "Radiopharmaceuticals", "nan", "Medicinal chemistry", "Biochemistry", "Chemicals in medicine", "Radioactivity" ]
1,038,280
https://en.wikipedia.org/wiki/Climate%20engineering
Climate engineering (or geoengineering, climate intervention) is the intentional large-scale alteration of the planetary environment to counteract anthropogenic climate change. The term has been used as an umbrella term for both carbon dioxide removal and solar radiation modification when applied at a planetary scale. However, these two processes have very different characteristics, and are now often discussed separately. Carbon dioxide removal techniques remove carbon dioxide from the atmosphere, and are part of climate change mitigation. Solar radiation modification is the reflection of some sunlight (solar radiation) back to space to cool the earth. Some publications include passive radiative cooling as a climate engineering technology. The media also tends to use climate engineering for other technologies such as glacier stabilization, ocean liming, and iron fertilization of oceans. The latter would modify carbon sequestration processes that take place in oceans. Some types of climate engineering are highly controversial due to the large uncertainties around effectiveness, side effects and unforeseen consequences. Interventions at large scale run a greater risk of unintended disruptions of natural systems, resulting in a dilemma: such disruptions might be more damaging than the climate damage that they offset. However, the risks of such interventions must be seen in the context of the trajectory of climate change without them. The Union of Concerned Scientists warns that solar radiation modification could become an excuse to slow reductions in fossil fuel emissions and stall progress toward a low-carbon economy, as the technology does not address these root causes of climate change. Terminology Climate engineering (or geoengineering) has been used as an umbrella term for both carbon dioxide removal and solar radiation management, when applied at a planetary scale. However, these two methods have very different geophysical characteristics, which is why the Intergovernmental Panel on Climate Change no longer uses this term. This decision was communicated around 2018; see, for example, the Special Report on Global Warming of 1.5 °C. According to climate economist Gernot Wagner the term geoengineering is "largely an artefact and a result of the term's frequent use in popular discourse" and "so vague and all-encompassing as to have lost much meaning". Specific technologies that fall under the climate engineering umbrella term include: Carbon dioxide removal Biochar: a high-carbon, fine-grained residue that is produced via pyrolysis. Bioenergy with carbon capture and storage (BECCS): the process of extracting bioenergy from biomass and capturing and storing the carbon, thereby removing it from the atmosphere. Direct air capture and carbon storage: a process of capturing carbon dioxide directly from the ambient air (as opposed to capturing from point sources, such as a cement factory or biomass power plant) and generating a concentrated stream of CO2 for sequestration or utilization or production of carbon-neutral fuel and windgas. Enhanced weathering: a process that aims to accelerate natural weathering by spreading finely ground silicate rock, such as basalt, onto surfaces, which speeds up chemical reactions between rocks, water, and air. It also removes carbon dioxide (CO2) from the atmosphere, permanently storing it in solid carbonate minerals or ocean alkalinity. The latter also slows ocean acidification.
Solar Radiation Management Marine cloud brightening: a proposed technique that would make clouds brighter, reflecting a small fraction of incoming sunlight back into space in order to offset anthropogenic global warming. Mirrors in space (MIS): satellites designed to change the amount of solar radiation that impacts the Earth as a form of climate engineering. Since the idea was conceived by Hermann Oberth (in 1923, 1929, 1957 and 1978) and revisited in the 1980s, space mirrors have mainly been theorized as a way to deflect sunlight to counter global warming, and they were seriously considered in the 2000s. Stratospheric aerosol injection (SAI): a proposed method to introduce aerosols into the stratosphere to create a cooling effect via global dimming and increased albedo, an effect which occurs naturally after volcanic eruptions. The following methods are not termed climate engineering in the latest IPCC assessment report in 2022 but are included under this umbrella term by other publications on this topic: Passive daytime radiative cooling: this technology increases the Earth's solar reflectance and its thermal emittance in the atmospheric window. Ground-level albedo modification: a process of increasing Earth's albedo by altering things on the Earth's surface. Examples include planting light-colored plants to help reflect sunlight back into space. Glacier stabilization: proposals aiming to slow down or prevent sea level rise caused by the collapse of notable marine-terminating glaciers, such as Jakobshavn Glacier in Greenland or Thwaites Glacier and Pine Island Glacier in Antarctica. It may be possible to bolster some glaciers directly, but blocking the flow of ever-warming ocean water at a distance, allowing it more time to mix with the cooler water around the glacier, is likely to be far more effective. Ocean geoengineering (adding material such as lime or iron to the ocean to affect its ability to sequester carbon dioxide). Technologies Carbon dioxide removal Solar radiation modification Passive daytime radiative cooling Enhancing the solar reflectance and thermal emissivity of Earth in the atmospheric window through passive daytime radiative cooling has been proposed as an alternative or "third approach" to climate engineering that is "less intrusive" and more predictable or reversible than stratospheric aerosol injection. Ocean geoengineering Ocean geoengineering involves modifying the ocean to reduce the impacts of rising temperature. One approach is to add material such as lime or iron to the ocean to increase its ability to support marine life and/or sequester CO2. In 2021 the US National Academies of Sciences, Engineering, and Medicine (NASEM) requested $2.5 billion in funding for research over the following decade, specifically including field tests. Another idea is to reduce sea level rise by installing underwater "curtains" to protect Antarctic glaciers from warming waters, or by drilling holes in ice to pump out water and heat. Ocean liming Enriching seawater with calcium hydroxide (lime) has been reported to lower ocean acidity, which reduces pressure on marine life such as oysters, and to absorb CO2. The added lime raised the water's pH, capturing CO2 in the form of calcium bicarbonate or as carbonate deposited in mollusk shells. Lime is produced in volume for the cement industry. This was assessed in 2022 in an experiment in Apalachicola, Florida in an attempt to halt declining oyster populations. pH levels increased modestly, as CO2 was reduced by 70 ppm.
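The idealized chemistry behind ocean liming can be checked with a few lines of arithmetic. The sketch below assumes complete conversion of slaked lime to dissolved calcium bicarbonate, so it is an upper bound on uptake, not a prediction of real-world performance.

```python
# Idealized ocean liming stoichiometry: Ca(OH)2 + 2 CO2 -> Ca(HCO3)2,
# i.e., one formula unit of slaked lime converts two CO2 molecules into
# dissolved calcium bicarbonate.
M_LIME = 40.08 + 2 * (16.00 + 1.008)   # g/mol, Ca(OH)2 = 74.10
M_CO2 = 12.011 + 2 * 16.00             # g/mol, CO2 = 44.01

uptake = 2 * M_CO2 / M_LIME            # tonnes CO2 per tonne of lime
print(f"upper-bound uptake: {uptake:.2f} t CO2 per t Ca(OH)2")  # ~1.19
# In practice the CO2 released while producing the lime offsets a large
# part of this, as noted below.
```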
A 2014 experiment added sodium hydroxide (lye) to part of Australia's Great Barrier Reef. It raised pH levels to nearly preindustrial levels. However, producing alkaline materials typically releases large amounts of CO2, partially offsetting the sequestration. Alkaline additives become diluted and dispersed within one month, without durable effects, such that if necessary, the program could be ended without leaving long-term effects. Ocean sulfur cycle enhancement Enhancing the natural marine sulfur cycle by fertilizing a small portion of the ocean with iron (typically considered to be a greenhouse gas remediation method) may also increase the reflection of sunlight. Such fertilization, especially in the Southern Ocean, would enhance dimethyl sulfide production and consequently cloud reflectivity. This could potentially be used as regional SRM, to slow Antarctic ice from melting. Such techniques also tend to sequester carbon, but the enhancement of cloud albedo also appears to be a likely effect. Iron fertilization Submarine forest Another 2022 experiment attempted to sequester carbon using giant kelp planted off the Namibian coast. Whilst this approach has been called ocean geoengineering by the researchers, it is just another form of carbon dioxide removal via sequestration. Other terms used to describe this process are blue carbon management and marine geoengineering. Glacier stabilization Problems and risks Interventions at large scale run a greater risk of unintended disruptions of natural systems, alongside a greater potential for reducing the risks of warming. This raises the question of whether climate interventions might be more or less damaging than the climate damage that they offset. Matthew Watson, of the University of Bristol, led a £5m research study into the potential adverse effects of climate engineering and said in 2014, "We are sleepwalking to a disaster with climate change. Cutting emissions is undoubtedly the thing we should be focusing on but it seems to be failing. Although geoengineering is terrifying to many people, and I include myself in this, [its feasibility and safety] are questions that have to be answered". University of Oxford Professor Steve Rayner is also worried about the adverse effects of climate engineering, especially the potential for people to be too positive about its effects and stop trying to slow the actual problem of climate change. He notes, though, that there is a potential reason for doing climate engineering: "People decry doing [climate engineering] as a band aid, but band aids are useful when you are healing". Climate engineering may reduce the urgency of reducing carbon emissions, a form of moral hazard. Also, some approaches would have only temporary effects, which implies rapid rebound if they are not sustained. The Union of Concerned Scientists points to the concern that the use of climate engineering technology might become an excuse not to address the root causes of climate change. However, several public opinion surveys and focus groups reported either a desire to increase emission cuts in the presence of climate engineering, or no effect. Other modelling work suggests that the prospect of climate engineering may in fact increase the likelihood of emissions reduction. If climate engineering can alter the climate, then this raises the question of whether humans have the right to deliberately change the climate, and under what conditions.
For example, using climate engineering to stabilize temperatures is not the same as doing so to optimize the climate for some other purpose. Some religious traditions express views on the relationship between humans and their surroundings that either encourage explicit actions to affect the climate (as responsible stewardship) or discourage them (to avoid hubris). Society and culture Public perception A large 2018 study used an online survey to investigate public perceptions of six climate engineering methods in the United States, United Kingdom, Australia, and New Zealand. Public awareness of climate engineering was low; less than a fifth of respondents reported prior knowledge. Perceptions of the six climate engineering methods proposed (three from the carbon dioxide removal group and three from the solar radiation modification group) were largely negative and frequently associated with attributes like 'risky', 'artificial' and 'unknown effects'. Carbon dioxide removal methods were preferred over solar radiation modification. Public perceptions were remarkably stable, with only minor differences between the different countries in the surveys. Some environmental organizations (such as Friends of the Earth and Greenpeace) have been reluctant to endorse or oppose solar radiation modification, but are often more supportive of nature-based carbon dioxide removal projects, such as afforestation and peatland restoration. Research and projects Several organizations have investigated climate engineering with a view to evaluating its potential, including the US Congress, the US National Academy of Sciences, Engineering, and Medicine, the Royal Society, the UK Parliament, the Institution of Mechanical Engineers, and the Intergovernmental Panel on Climate Change. In 2009, the Royal Society in the UK reviewed a wide range of proposed climate engineering methods and evaluated them in terms of effectiveness, affordability, timeliness, and safety (assigning qualitative estimates in each assessment). The key recommendations of the report were that "Parties to the UNFCCC should make increased efforts towards mitigating and adapting to climate change, and in particular to agreeing to global emissions reductions", and that "[nothing] now known about geoengineering options gives any reason to diminish these efforts". Nonetheless, the report also recommended that "research and development of climate engineering options should be undertaken to investigate whether low-risk methods can be made available if it becomes necessary to reduce the rate of warming this century". In 2009, a review examined the scientific plausibility of proposed methods rather than practical considerations such as engineering feasibility or economic cost. The authors found that "[air] capture and storage shows the greatest potential, combined with afforestation, reforestation and bio-char production", and noted that "other suggestions that have received considerable media attention, in particular, "ocean pipes" appear to be ineffective". They concluded that "[climate] geoengineering is best considered as a potential complement to the mitigation of emissions, rather than as an alternative to it". The IMechE report examined a small subset of proposed methods (air capture, urban albedo and algal-based capture techniques), and its main conclusions in 2011 were that climate engineering should be researched and trialed at the small scale alongside a wider decarbonization of the economy.
In 2015, the US National Academy of Sciences, Engineering, and Medicine concluded a 21-month project to study the potential impacts, benefits, and costs of climate engineering. The differences between the two classes of climate engineering (carbon dioxide removal and solar radiation modification) "led the committee to evaluate the two types of approaches separately in companion reports, a distinction it hopes carries over to future scientific and policy discussions." The resulting study titled Climate Intervention was released in February 2015 and consists of two volumes: Reflecting Sunlight to Cool Earth and Carbon Dioxide Removal and Reliable Sequestration. In June 2023 the US government released a report that recommended conducting research on stratospheric aerosol injection and marine cloud brightening. As of 2024 the Coastal Atmospheric Aerosol Research and Engagement (CAARE) project was launching sea salt into the marine sky in an effort to increase cloud "brightness" (reflective capacity). The sea salt is launched from the USS Hornet Sea, Air & Space Museum (based on the project's regulatory filings). See also Arctic geoengineering Climate justice Earth systems engineering and management Land surface effects on climate List of geoengineering topics Weather modification References Engineering Emissions reduction Engineering disciplines Planetary engineering
Climate engineering
[ "Chemistry", "Engineering" ]
2,877
[ "Planetary engineering", "Emissions reduction", "Geoengineering", "nan", "Greenhouse gases" ]
614,085
https://en.wikipedia.org/wiki/Good%20manufacturing%20practice
Current good manufacturing practices (cGMP) are those conforming to the guidelines recommended by relevant agencies. Those agencies control the authorization and licensing of the manufacture and sale of food and beverages, cosmetics, pharmaceutical products, dietary supplements, and medical devices. These guidelines provide minimum requirements that a manufacturer must meet to assure that their products are consistently high in quality, from batch to batch, for their intended use. The rules that govern each industry may differ significantly; however, the main purpose of GMP is always to prevent harm from occurring to the end user. Additional tenets include ensuring the end product is free from contamination, that it is consistent in its manufacture, that its manufacture has been well documented, that personnel are well trained, and that the product has been checked for quality more than just at the end phase. GMP is typically ensured through the effective use of a quality management system (QMS). Good manufacturing practice, along with good agricultural practice, good laboratory practice and good clinical practice, are overseen by regulatory agencies in the United Kingdom, United States, Canada, various European countries, China, India and other countries. High-level details Good manufacturing practice guidelines provide guidance for manufacturing, testing, and quality assurance in order to ensure that a manufactured product is safe for human consumption or use. Many countries have legislated that manufacturers follow GMP procedures and create their own GMP guidelines that correspond with their legislation. All guidelines follow a few basic principles: Manufacturing facilities must maintain a clean and hygienic manufacturing area. Manufacturing facilities must maintain controlled environmental conditions in order to prevent cross-contamination from adulterants and allergens that may render the product unsafe for human consumption or use. Manufacturing processes must be clearly defined and controlled. All critical processes are validated to ensure consistency and compliance with specifications. Manufacturing processes must be controlled, and any changes to the process must be evaluated. Changes that affect the quality of the drug are validated as necessary. Instructions and procedures must be written in clear and unambiguous language using good documentation practices. Operators must be trained to carry out and document procedures. Records must be made, manually or electronically, during manufacture that demonstrate that all the steps required by the defined procedures and instructions were in fact taken and that the quantity and quality of the food or drug was as expected. Deviations must be investigated and documented. Records of manufacture (including distribution) that enable the complete history of a batch to be traced must be retained in a comprehensible and accessible form. Any distribution of products must minimize any risk to their quality. A system must be in place for recalling any batch from sale or supply. Complaints about marketed products must be examined, the causes of quality defects must be investigated, and appropriate measures must be taken with respect to the defective products and to prevent recurrence. Good manufacturing practice is recommended with the goal of safeguarding the health of consumers and patients as well as producing quality products. 
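The principle that every defined manufacturing step must be performed, documented, and checked before release can be illustrated with a toy batch-record structure. This is a deliberately minimal, hypothetical sketch (all names and fields are invented); a real electronic batch record system is far richer and must itself be validated.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StepRecord:
    name: str
    performed: bool = False
    documented_by: Optional[str] = None   # operator who signed the step
    deviation: Optional[str] = None       # description of any deviation

@dataclass
class BatchRecord:
    batch_id: str
    steps: List[StepRecord] = field(default_factory=list)

    def release_issues(self) -> List[str]:
        """List everything blocking release: steps not performed, steps not
        documented, and deviations that still require investigation."""
        issues = []
        for step in self.steps:
            if not step.performed:
                issues.append(f"{step.name}: not performed")
            if step.documented_by is None:
                issues.append(f"{step.name}: not documented")
            if step.deviation is not None:
                issues.append(f"{step.name}: deviation to investigate ({step.deviation})")
        return issues

batch = BatchRecord("LOT-001", [
    StepRecord("blend", performed=True, documented_by="op7"),
    StepRecord("granulate", performed=True),   # missing sign-off
])
print(batch.release_issues())  # ['granulate: not documented']
```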
In the United States, a food or drug may be deemed "adulterated" if it has passed all of the specifications tests but is found to be manufactured in a facility or condition which violates or does not comply with current good manufacturing guidelines. GMP standards are not prescriptive instructions on how to manufacture products. They are a series of performance-based requirements that must be met during manufacturing. When a company is setting up its quality program and manufacturing process, there may be many ways it can fulfill GMP requirements. It is the company's responsibility to determine the most effective and efficient quality process that meets both business and regulatory needs. Regulatory agencies have recently begun to look at more fundamental quality metrics of manufacturers than just compliance with basic GMP regulations. US-FDA has found that manufacturers who have implemented quality metrics programs gain a deeper insight into employee behaviors that impact product quality. In its Guidance for Industry "Data Integrity and Compliance With Drug CGMP", US-FDA states "it is the role of management with executive responsibility to create a quality culture where employees understand that data integrity is an organizational core value and employees are encouraged to identify and promptly report data integrity issues." Australia's Therapeutic Goods Administration has said that recent data integrity failures have raised questions about the role of quality culture in driving behaviors. In addition, non-governmental organizations such as the International Society for Pharmaceutical Engineering (ISPE) and the Parenteral Drug Association (PDA) have developed information and resources to help pharmaceutical companies better understand why quality culture is important and how to assess the current situation within a site or organization. Guideline versions GMP is enforced in the United States by the U.S. Food and Drug Administration (FDA), under Title 21 CFR. The regulations use the phrase "current good manufacturing practices" (CGMP) to describe these guidelines. Courts may theoretically hold that a product is adulterated even if there is no specific regulatory requirement that was violated, as long as the process was not performed according to industry standards. However, since June 2007, a different set of CGMP requirements has applied to all manufacturers of dietary supplements, with additional supporting guidance issued in 2010. Additionally, in the U.S., medical device manufacturers must follow what are called "quality system regulations", which are deliberately harmonized with ISO requirements, not necessarily CGMPs. The World Health Organization (WHO) version of GMP is used by pharmaceutical regulators and the pharmaceutical industry in over 100 countries worldwide, primarily in the developing world. The European Union's GMP (EU GMP) enforces similar requirements to WHO GMP, as does the FDA's version in the US. Similar GMPs are used in other countries, with Australia, Canada, Japan, Saudi Arabia, Singapore, the Philippines, Vietnam and others having highly developed/sophisticated GMP requirements. In the United Kingdom, the Medicines Act (1968) covers most aspects of GMP in what is commonly referred to as "The Orange Guide", so named because of the color of its cover; it is officially known as Rules and Guidance for Pharmaceutical Manufacturers and Distributors.
Since the 1999 publication of Good Manufacturing Practice for Active Pharmaceutical Ingredients by the International Conference on Harmonization (ICH), GMPs now apply in those countries and trade groupings that are signatories to ICH (the EU, Japan and the U.S.), and apply in other countries (e.g., Australia, Canada, Singapore) which adopt ICH guidelines for the manufacture and testing of active raw materials. Enforcement Within the European Union, GMP inspections are performed by national regulatory agencies. GMP inspections are performed in Canada by the Health Products and Food Branch Inspectorate; in the United Kingdom by the Medicines and Healthcare products Regulatory Agency (MHRA); in the Republic of Korea (South Korea) by the Ministry of Food and Drug Safety (MFDS); in Australia by the Therapeutic Goods Administration (TGA); in Bangladesh by the Directorate General of Drug Administration (DGDA); in South Africa by the Medicines Control Council (MCC); in Brazil by the National Health Surveillance Agency (ANVISA); in India by state Food and Drugs Administrations (FDA), reporting to the Central Drugs Standard Control Organization; in Pakistan by the Drug Regulatory Authority of Pakistan; in Nigeria by NAFDAC; and by similar national organizations worldwide. Each of the inspectorates carries out routine GMP inspections to ensure that drug products are produced safely and correctly. Additionally, many countries perform pre-approval inspections (PAI) for GMP compliance prior to the approval of a new drug for marketing. CGMP inspections Regulatory agencies (including the FDA in the U.S. and regulatory agencies in many European nations) are authorized to conduct unannounced inspections, though some are scheduled. FDA routine domestic inspections are usually unannounced, but must be conducted according to 704(a) of the Food, Drug and Cosmetic Act (21 USCS § 374), which requires that they are performed at a "reasonable time". Courts have held that any time the firm is open for business is a reasonable time for an inspection.
Report on Optimizing and Leaning GMP Batch Record Design Food safety Pharmaceutical industry Pharmaceuticals policy Good practice Life sciences industry
Good manufacturing practice
[ "Chemistry", "Biology" ]
1,966
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry" ]
614,192
https://en.wikipedia.org/wiki/Paschen%27s%20law
Paschen's law is an equation that gives the breakdown voltage, that is, the voltage necessary to start a discharge or electric arc, between two electrodes in a gas as a function of pressure and gap length. It is named after Friedrich Paschen who discovered it empirically in 1889. Paschen studied the breakdown voltage of various gases between parallel metal plates as the gas pressure and gap distance were varied: With a constant gap length, the voltage necessary to arc across the gap decreased as the pressure was reduced and then increased gradually, exceeding its original value. With a constant pressure, the voltage needed to cause an arc reduced as the gap size was reduced, but only to a point. As the gap was reduced further, the voltage required to cause an arc began to rise and again exceeded its original value. For a given gas, the voltage is a function only of the product of the pressure and gap length. The curve he found of voltage versus the pressure-gap length product is called Paschen's curve. He found an equation that fit these curves, which is now called Paschen's law. At higher pressures and gap lengths, the breakdown voltage is approximately proportional to the product of pressure and gap length, and the term Paschen's law is sometimes used to refer to this simpler relation. However, this is only roughly true, over a limited range of the curve. Paschen curve Early vacuum experimenters found a rather surprising behavior. An arc would sometimes take place in a long irregular path rather than at the minimal distance between the electrodes. For example, in air, at a pressure of one atmosphere, the distance for minimal breakdown voltage is about 7.5 μm. The voltage required to arc this distance is 327 V, which is insufficient to ignite the arcs for gaps that are either wider or narrower. For a 3.5 μm gap, the required voltage is 533 V, nearly twice as much. If 500 V were applied, it would not be sufficient to arc at the 2.85 μm distance, but would arc at a 7.5 μm distance. Paschen found that breakdown voltage was described by the equation V_B = Bpd / (ln(Apd) − ln[ln(1 + 1/γ_se)]), where V_B is the breakdown voltage in volts, p is the pressure in pascals, d is the gap distance in meters, γ_se is the secondary-electron-emission coefficient (the number of secondary electrons produced per incident positive ion), A is the saturation ionization in the gas at a particular E/p (electric field/pressure), and B is related to the excitation and ionization energies. The constants A and B interpolate the first Townsend coefficient α. They are determined experimentally and found to be roughly constant over a restricted range of E/p for any given gas. For example, for air with E/p in the range of 450 to 7500 V/(kPa·cm), A = 112.50 (kPa·cm)⁻¹ and B = 2737.50 V/(kPa·cm). The graph of this equation is the Paschen curve. By differentiating it with respect to pd and setting the derivative to zero, the minimal voltage can be found. This yields pd = (e/A)·ln(1 + 1/γ_se), where e ≈ 2.718 is Euler's number, and predicts the occurrence of a minimal breakdown voltage for pd = 7.5×10⁻⁶ m·atm. The composition of the gas determines both the minimal arc voltage and the distance at which it occurs. For argon, the minimal arc voltage is 137 V at a larger 12 μm. For sulfur dioxide, the minimal arc voltage is 457 V at only 4.4 μm.
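Using the air constants quoted above, the curve and its minimum are easy to evaluate numerically. In the sketch below the secondary-emission coefficient γ_se is an assumed, merely typical value (it depends strongly on the cathode surface); with it, the computed minimum lands near the quoted figures.

```python
import math

# Constants for air quoted above (valid for E/p ~ 450-7500 V/(kPa*cm)).
A = 112.50        # saturation ionization, 1/(kPa*cm)
B = 2737.50       # related to excitation/ionization energies, V/(kPa*cm)
gamma_se = 0.01   # assumed secondary-emission coefficient (surface-dependent)

def breakdown_voltage(pd):
    """Paschen breakdown voltage in volts for p*d given in kPa*cm."""
    return B * pd / (math.log(A * pd) - math.log(math.log(1 + 1 / gamma_se)))

# Paschen minimum: pd_min = (e/A) * ln(1 + 1/gamma_se)
pd_min = math.e / A * math.log(1 + 1 / gamma_se)
print(f"pd at minimum: {pd_min:.3f} kPa*cm")
print(f"minimum breakdown voltage: {breakdown_voltage(pd_min):.0f} V")
# With gamma_se = 0.01 this gives ~0.11 kPa*cm and ~305 V, in the same
# range as the measured 327 V minimum for air.
```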
Long gaps For air at standard conditions for temperature and pressure (STP), the voltage needed to arc a 1-metre gap is about 3.4 MV. The intensity of the electric field for this gap is therefore 3.4 MV/m. The electric field needed to arc across the minimal-voltage gap is much greater than what is necessary to arc a gap of one metre. At large gaps (or large pd) Paschen's law is known to fail. The Meek criterion for breakdown is usually used for large gaps. It takes into account non-uniformity in the electric field and formation of streamers due to the build-up of charge within the gap that can occur over long distances. For a 7.5 μm gap the arc voltage is 327 V, which is 43 MV/m. This is about 13 times greater than the field strength for the 1-metre gap. The phenomenon is well verified experimentally and is referred to as the Paschen minimum. The equation loses accuracy for gaps under about 10 μm in air at one atmosphere and incorrectly predicts an infinite arc voltage at a gap of about 2.7 μm. Breakdown voltage can also differ from the Paschen curve prediction for very small electrode gaps, when field emission from the cathode surface becomes important. Physical mechanism The mean free path of a molecule in a gas is the average distance between its collisions with other molecules. This is inversely proportional to the pressure of the gas, given constant temperature. In air at STP the mean free path of molecules is about 96 nm. Since electrons are much smaller, their average distance between collisions with molecules is about 5.6 times longer, or about 0.5 μm. This is a substantial fraction of the 7.5 μm spacing between the electrodes for minimal arc voltage. If the electron is in an electric field of 43 MV/m, it will be accelerated and acquire 21.5 eV of energy in 0.5 μm of travel in the direction of the field. The first ionization energy needed to dislodge an electron from a nitrogen molecule is about 15.6 eV. The accelerated electron will acquire more than enough energy to ionize a nitrogen molecule. This liberated electron will in turn be accelerated, which will lead to another collision. A chain reaction then leads to avalanche breakdown, and an arc takes place from the cascade of released electrons. More collisions will take place in the electron path between the electrodes in a higher-pressure gas. When the pressure–gap product is high, an electron will collide with many different gas molecules as it travels from the cathode to the anode. Each of the collisions randomizes the electron direction, so the electron is not always being accelerated by the electric field; sometimes it travels back towards the cathode and is decelerated by the field. Collisions reduce the electron's energy and make it more difficult for it to ionize a molecule. Energy losses from a greater number of collisions require larger voltages for the electrons to accumulate sufficient energy to ionize many gas molecules, which is required to produce an avalanche breakdown. On the left side of the Paschen minimum, the product is small. The electron mean free path can become long compared to the gap between the electrodes. In this case, the electrons might gain large amounts of energy, but have fewer ionizing collisions. A greater voltage is therefore required to assure ionization of enough gas molecules to start an avalanche.
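Before the formal derivation below, the figures from this mechanism discussion are easy to verify: an electron accelerated through a distance Δx in a field E gains, numerically, E·Δx electronvolts. A quick, purely illustrative check:

```python
mfp_molecule = 96e-9               # m, molecular mean free path in air at STP
mfp_electron = 5.6 * mfp_molecule  # m, electrons travel ~5.6 times farther
field = 43e6                       # V/m, field at the minimal-voltage gap

print(f"electron mean free path: {mfp_electron * 1e6:.2f} um")        # ~0.54 um
# An electron falling through field * distance volts gains that many eV:
print(f"energy gained per free path: {field * mfp_electron:.1f} eV")  # ~23 eV
# Using the text's rounded 0.5 um path: 43e6 * 0.5e-6 = 21.5 eV, comfortably
# above the ~15.6 eV needed to ionize a nitrogen molecule.
```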
Derivation Basics To calculate the breakthrough voltage, a homogeneous electrical field is assumed. This is the case in a parallel-plate capacitor setup. The electrodes may have the distance d. The cathode is located at the point x = 0. To get impact ionization, the electron energy must become greater than the ionization energy E_I of the gas atoms between the plates. Per length of path, a number α of ionizations will occur. α is known as the first Townsend coefficient, as it was introduced by Townsend. The increase of the electron current Γ_e can be described for the assumed setup as Γ_e(x = d) = Γ_e(x = 0)·e^(αd) (1). (So the number of free electrons at the anode is equal to the number of free electrons at the cathode multiplied by the impact-ionization factor. The larger d and/or α, the more free electrons are created.) The number of created electrons is ΔΓ_e = Γ_e(d) − Γ_e(0) = Γ_e(0)·(e^(αd) − 1) (2). Neglecting possible multiple ionizations of the same atom, the number of created ions is the same as the number of created electrons: Γ_i = Γ_e(0)·(e^(αd) − 1) (3). Γ_i is the ion current. To keep the discharge going on, free electrons must be created at the cathode surface. This is possible because the ions hitting the cathode release secondary electrons at the impact. (For very large applied voltages, field electron emission can also occur.) Without field emission, we can write Γ_e(x = 0) = γ·Γ_i (4), where γ is the mean number of generated secondary electrons per ion. This is also known as the second Townsend coefficient. Assuming that Γ_e(0) > 0, one gets the relation between the Townsend coefficients by putting (4) into (3) and transforming: αd = ln(1 + 1/γ) (5). Impact ionization What is the amount of α? The number of ionizations depends upon the probability that an electron hits a gas molecule. This probability P is the relation of the cross-sectional area of a collision between electron and ion, σ, in relation to the overall area A that is available for the electron to fly through: P = Nσ/A = x/λ (6). As expressed by the second part of the equation, it is also possible to express the probability as the relation of the path traveled by the electron, x, to the mean free path λ (the distance at which another collision occurs). N is the number of molecules which electrons can hit. It can be calculated using the equation of state of the ideal gas, pV = NkT (7) (p: pressure, V: volume, k: Boltzmann constant, T: temperature). The collision geometry gives the effective cross section as σ = π(r_I + r_e)². As the radius of an electron, r_e, can be neglected compared to the radius of an ion, r_I, it simplifies to σ = π·r_I². Using this relation, putting (7) into (6) and transforming to λ, one gets λ = kT/(σp) (8). The alteration of the current of not-yet-collided electrons at every point in the path x can be expressed as dΓ_e(x) = −Γ_e(x)·dx/λ (9). This differential equation can easily be solved: Γ_e(x) = Γ_e(0)·e^(−x/λ) (10). The probability that λ > x (that there was not yet a collision at the point x) is P(λ > x) = Γ_e(x)/Γ_e(0) = e^(−x/λ) (11). According to its definition, α is the number of ionizations per length of path, and thus the ratio of the probability that there was no collision within the ionization length λ_I to the mean free path of the electrons: α = P(λ > λ_I)/λ = (1/λ)·e^(−λ_I/λ) (12). It was hereby considered that the energy ΔE that a charged particle can gain between collisions depends on the electric field strength E and the charge Q, ΔE = λQE, so that the electron must travel λ_I = E_I/(QE) before gaining the ionization energy (13). Breakdown voltage For the parallel-plate capacitor we have E = U/d, where U is the applied voltage. As a single ionization was assumed, Q is the elementary charge q_e. We can now put (13) and (8) into (12) and get α = (σp/(kT))·exp(−(σE_I/(q_e·kT))·(pd/U)) (14). Putting this into (5) and transforming to U, we get the Paschen law for the breakdown voltage, which was first investigated by Paschen in 1889 and whose formula was first derived by Townsend: U = Bpd/(ln(Apd) − ln[ln(1 + 1/γ)]) (15), with A = σ/(kT) and B = σE_I/(q_e·kT). Plasma ignition Plasma ignition in the definition of Townsend (Townsend discharge) is a self-sustaining discharge, independent of an external source of free electrons. This means that electrons from the cathode can reach the anode in the distance d and ionize at least one atom on their way.
So according to the definition of α, this relation must be fulfilled: αd ≥ 1. If αd = 1 is used instead of (5), one gets for the breakdown voltage U = Bpd/ln(Apd). Conclusions, validity Paschen's law requires that: There are already free electrons at the cathode (Γ_e(x = 0) > 0) which can be accelerated to trigger impact ionization. Such so-called seed electrons can be created by ionization by natural radioactivity or cosmic rays. The creation of further free electrons is only achieved by impact ionization. Thus Paschen's law is not valid if there are external electron sources. This can, for example, be a light source creating secondary electrons by the photoelectric effect. This has to be considered in experiments. Each ionized atom leads to only one free electron. However, multiple ionizations always occur in practice. Free electrons at the cathode surface are created by the impacting ions. The problem is that the number of thereby created electrons strongly depends on the material of the cathode, its surface (roughness, impurities) and the environmental conditions (temperature, humidity etc.). The experimental, reproducible determination of the factor γ is therefore nearly impossible. The electrical field is homogeneous. Effects with different gases Different gases will have different mean free paths for molecules and electrons. This is because different molecules have different cross sections, that is, different effective diameters. Noble gases like helium and argon are monatomic, which makes them harder to ionize, and they tend to have smaller effective diameters. This gives them greater mean free paths. Ionization potentials differ between molecules, as does the speed at which they recapture electrons after they have been knocked out of orbit. All three effects change the number of collisions needed to cause an exponential growth in free electrons. These free electrons are necessary to cause an arc. See also Atmospheric pressure Breakdown voltage Dielectric strength Townsend discharge References External links Electrical breakdown limits for MEMS High Voltage Experimenter's Handbook Paschen's law calculator Breakdown Voltage vs. Pressure (Internet Archive, 16 April 2023) Electrical Breakdown of Low Pressure Gases Electrical Discharges Pressure Dependence of Plasma Structure in Microwave Gas Breakdown at 110GHz Electrical discharge in gases Electrochemistry Electrostatics Electrical breakdown Eponymous laws of physics Plasma physics equations
Paschen's law
[ "Physics", "Chemistry" ]
2,643
[ "Physical phenomena", "Electrical discharge in gases", "Equations of physics", "Plasma phenomena", "Electrochemistry", "Electrical phenomena", "Plasma physics equations", "Electrical breakdown", "Ions", "Matter" ]
614,297
https://en.wikipedia.org/wiki/Solvay%20process
The Solvay process or ammonia–soda process is the major industrial process for the production of sodium carbonate (soda ash, Na2CO3). The ammonia–soda process was developed into its modern form by the Belgian chemist Ernest Solvay during the 1860s. The ingredients for this are readily available and inexpensive: salt brine (from inland sources or from the sea) and limestone (from quarries). The worldwide production of soda ash in 2005 was estimated at 42 million tonnes, which is more than six kilograms (about 13 lb) per year for each person on Earth. Solvay-based chemical plants now produce roughly three-quarters of this supply, with the remainder being mined from natural deposits. This method superseded the Leblanc process. History The name "soda ash" is based on the principal historical method of obtaining alkali, which was by using water to extract it from the ashes of certain plants. Wood fires yielded potash and its predominant ingredient potassium carbonate (K2CO3), whereas the ashes from these special plants yielded "soda ash" and its predominant ingredient sodium carbonate (Na2CO3). The word "soda" (from the Middle Latin) originally referred to certain plants that grow in salt marshes; it was discovered that the ashes of these plants yielded the useful alkali soda ash. The cultivation of such plants reached a particularly high state of development in the 18th century in Spain, where the plants are named barrilla (or "barilla" in English). The ashes of kelp also yield soda ash and were the basis of an enormous 18th-century industry in Scotland. Alkali was also mined from dry lakebeds in Egypt. By the late 18th century these sources were insufficient to meet Europe's burgeoning demand for alkali for soap, textile, and glass industries. In 1791, the French physician Nicolas Leblanc developed a method to manufacture soda ash using salt, limestone, sulfuric acid, and coal. Although the Leblanc process came to dominate alkali production in the early 19th century, the expense of its inputs and its polluting byproducts (including hydrogen chloride gas) made it apparent that it was far from an ideal solution. It has been reported that in 1811 French physicist Augustin Jean Fresnel discovered that sodium bicarbonate precipitates when carbon dioxide is bubbled through ammonia-containing brines, which is the chemical reaction central to the Solvay process. The discovery was not published. As has been noted by Desmond Reilly, "The story of the evolution of the ammonium–soda process is an interesting example of the way in which a discovery can be made and then laid aside and not applied for a considerable time afterwards." Serious consideration of this reaction as the basis of an industrial process dates from the British patent issued in 1834 to H. G. Dyar and J. Hemming. There were several attempts to reduce this reaction to industrial practice, with varying success. In 1861, Belgian industrial chemist Ernest Solvay turned his attention to the problem; he was apparently largely unaware of the extensive earlier work. His solution was a gas absorption tower in which carbon dioxide bubbled up through a descending flow of brine. This, together with efficient recovery and recycling of the ammonia, proved effective. By 1864 Solvay and his brother Alfred had acquired financial backing and constructed a plant in Couillet, today a suburb of the Belgian town of Charleroi. The new process proved more economical and less polluting than the Leblanc method, and its use spread.
In 1874, the Solvays expanded their facilities with a new, larger plant at Nancy, France. In the same year, Ludwig Mond visited Solvay in Belgium and acquired rights to use the new technology. He and John Brunner formed the firm of Brunner, Mond & Co., and built a Solvay plant at Winnington, near Northwich, Cheshire, England. The facility began operating in 1874. Mond was instrumental in making the Solvay process a commercial success. He made several refinements between 1873 and 1880 that removed byproducts that could slow or halt the process. In 1884, the Solvay brothers licensed Americans William B. Cogswell and Rowland Hazard to produce soda ash in the US, and formed a joint venture (Solvay Process Company) to build and operate a plant in Solvay, New York. By the 1890s, Solvay-process plants produced the majority of the world's soda ash. In 1938 large deposits of the mineral trona were discovered near the Green River in Wyoming, from which sodium carbonate can be extracted more cheaply than it can be produced by the process. The original plant in Solvay, New York closed in 1986, replaced in the US by a factory in Green River. Throughout the rest of the world, the Solvay process remains the major source of soda ash. Chemistry The Solvay process results in soda ash (predominantly sodium carbonate (Na2CO3)) from brine (as a source of sodium chloride (NaCl)) and from limestone (as a source of calcium carbonate (CaCO3)). The overall process is: 2NaCl + CaCO3 -> Na2CO3 + CaCl2 The actual implementation of this global, overall reaction is intricate. A simplified description can be given using four different, interacting chemical reactions. In the first step in the process, carbon dioxide (CO2) passes through a concentrated aqueous solution of sodium chloride (table salt, NaCl) and ammonia (NH3). NaCl + CO2 + NH3 + H2O -> NaHCO3 + NH4Cl ---(I) In industrial practice, the reaction is carried out by passing concentrated brine (salt water) through two towers. In the first, ammonia bubbles up through the brine and is absorbed by it. In the second, carbon dioxide bubbles up through the ammoniated brine, and sodium bicarbonate (baking soda) precipitates out of the solution. Note that, in a basic solution, NaHCO3 is less water-soluble than sodium chloride. The ammonia (NH3) buffers the solution at a basic (high) pH; without the ammonia, a hydrochloric acid byproduct would render the solution acidic, and arrest the precipitation. Here, NH3 along with ammoniacal brine acts as a mother liquor. The necessary ammonia "catalyst" for reaction (I) is reclaimed in a later step, and relatively little ammonia is consumed. The carbon dioxide required for reaction (I) is produced by heating ("calcination") of the limestone at 950–1100 °C, and by calcination of the sodium bicarbonate (see below). The calcium carbonate (CaCO3) in the limestone is partially converted to quicklime (calcium oxide (CaO)) and carbon dioxide: CaCO3 -> CO2 + CaO ---(II) The sodium bicarbonate (NaHCO3) that precipitates out in reaction (I) is filtered out from the hot ammonium chloride (NH4Cl) solution, and the solution is then reacted with the quicklime (calcium oxide (CaO)) left over from heating the limestone in step (II). 2 NH4Cl + CaO -> 2 NH3 + CaCl2 + H2O ---(III) CaO forms a strongly basic solution. The ammonia from reaction (III) is recycled back to the initial brine solution of reaction (I). The sodium bicarbonate (NaHCO3) precipitate from reaction (I) is then converted to the final product, sodium carbonate (washing soda: Na2CO3), by calcination (160–230 °C), producing water and carbon dioxide as byproducts: 2 NaHCO3 -> Na2CO3 + H2O + CO2 ---(IV) The carbon dioxide from step (IV) is recovered for re-use in step (I). When properly designed and operated, a Solvay plant can reclaim almost all its ammonia, and consumes only small amounts of additional ammonia to make up for losses. The only major inputs to the Solvay process are salt, limestone and thermal energy, and its only major byproduct is calcium chloride, which is sometimes sold as road salt. After the invention of the Haber process and other new ammonia-producing processes in the 1910s and 1920s, the price of ammonia dropped, and there was less need to reclaim it. So in the modified Solvay process developed by the Chinese chemist Hou Debang in the 1930s, the first few steps are the same as in the Solvay process, but the CaCl2 byproduct is supplanted by ammonium chloride (NH4Cl). Instead of treating the remaining solution with lime, carbon dioxide and ammonia are pumped into the solution, then sodium chloride is added until the solution saturates at 40 °C. Next, the solution is cooled to 10 °C. Ammonium chloride precipitates and is removed by filtration, and the solution is recycled to produce more sodium carbonate. Hou's process eliminates the production of calcium chloride. The byproduct ammonium chloride can be refined, used as a fertilizer, and may have greater commercial value than CaCl2, thus reducing the extent of waste beds. Additional details of the industrial implementation of this process are available in the report prepared for the European Soda Ash Producer's Association.
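The overall reaction at the head of this section fixes the ideal raw-material requirements, which a short sketch can compute. This is simple stoichiometry, not plant data; real consumption figures are higher.

```python
# Ideal mass balance for the overall reaction 2 NaCl + CaCO3 -> Na2CO3 + CaCl2.
M = {"NaCl": 58.44, "CaCO3": 100.09, "Na2CO3": 105.99, "CaCl2": 110.98}  # g/mol

per_tonne_soda_ash = {
    "salt (NaCl) consumed":       2 * M["NaCl"] / M["Na2CO3"],
    "limestone (CaCO3) consumed": M["CaCO3"] / M["Na2CO3"],
    "CaCl2 byproduct":            M["CaCl2"] / M["Na2CO3"],
}
for item, tonnes in per_tonne_soda_ash.items():
    print(f"{item}: {tonnes:.2f} t per t of soda ash")
# ~1.10 t salt, ~0.94 t limestone, ~1.05 t calcium chloride; real plants
# consume more, since conversions and recoveries are incomplete.
```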
Byproducts and wastes The principal byproduct of the Solvay process is calcium chloride (CaCl2) in aqueous solution. The process has other wastes and byproducts as well. Not all of the limestone that is calcined is converted to quicklime and carbon dioxide (in reaction II); the residual calcium carbonate and other components of the limestone become wastes. In addition, the salt brine used by the process is usually purified to remove magnesium and calcium ions, typically to form carbonates (MgCO3, CaCO3); otherwise, these impurities would lead to scale in the various reaction vessels and towers. These carbonates are additional waste products. In inland plants, such as that in Solvay, New York, the byproducts have been deposited in "waste beds"; the weight of material deposited in these waste beds exceeded that of the soda ash produced by about 50%. These waste beds have led to water pollution, principally by calcium and chloride. The waste beds in Solvay, New York substantially increased the salinity in nearby Onondaga Lake, which used to be among the most polluted lakes in the U.S. and is a Superfund pollution site. As such waste beds age, they do begin to support plant communities, which have been the subject of several scientific studies. At seaside locations, such as those at Saurashtra, Gujarat, India, the CaCl2 solution may be discharged directly into the sea, apparently without substantial environmental harm (although small amounts of heavy metals in it may be a problem); the major concern is that the discharge location falls within the Marine National Park of the Gulf of Kutch, which serves as habitat for coral reef, seagrass and seaweed communities.
At Osborne, South Australia, a settling pond is now used to remove 99% of the CaCl2, as the former discharge was silting up the shipping channel. At Rosignano Solvay in Tuscany, Italy, the limestone waste produced by the Solvay factory has changed the landscape, producing the "Spiagge Bianche" ("White Beaches"). A report published in 1999 by the United Nations Environment Programme (UNEP) listed Spiagge Bianche among the priority pollution hot spots in the coastal areas of the Mediterranean Sea. Carbon sequestration and the Solvay process Variations in the Solvay process have been proposed for carbon sequestration. One idea is to react carbon dioxide, produced perhaps by the combustion of coal, to form solid carbonates (such as sodium bicarbonate) that could be permanently stored, thus avoiding carbon dioxide emission into the atmosphere. The Solvay process could be modified to give the overall reaction: 2 NaCl + CaCO3 + CO2 + H2O → 2 NaHCO3 + CaCl2 Variations in the Solvay process have been proposed to convert carbon dioxide emissions into sodium carbonates, but carbon sequestration by calcium or magnesium carbonates appears more promising. However, the amount of carbon dioxide which could be used for carbon sequestration with calcium or magnesium (when compared to the total amount of carbon dioxide exhausted by mankind) is very low. This is primarily because capturing carbon dioxide from controlled and concentrated emission sources, such as coal-fired power plants, is far more feasible than capturing it from non-concentrated small-scale sources such as small fires, vehicle exhaust, and human respiration. Moreover, variations on the Solvay process will most probably add an additional energy-consuming step, which will increase carbon dioxide emissions unless carbon-neutral energy sources like hydropower, nuclear energy, wind or solar power are used. See also Chloralkali process Hou's process, a production method similar to the Solvay process in which ammonia is not recycled References Further reading The minimum energy required to calcine limestone is about 1.8 GJ per tonne. External links European Soda Ash Producer's Association (ESAPA) Timeline of US plant at Solvay, New York Salt and the Chemical Revolution Process flow diagram of Solvay process Ammonia Chemical processes Belgian inventions
Solvay process
[ "Chemistry" ]
2,816
[ "Chemical process engineering", "Chemical processes", "nan" ]
614,700
https://en.wikipedia.org/wiki/Chandelier
A chandelier () is an ornamental lighting device, typically with spreading branched supports for multiple lights, designed to be hung from the ceiling. Chandeliers are often ornate, and they were originally designed to hold candles, but incandescent light bulbs are now commonly used, as well as fluorescent lamps and LEDs. A wide variety of materials ranging from wood and earthenware to silver and gold can be used to make chandeliers. Brass is one of the most popular, with Dutch or Flemish brass chandeliers being the best-known, but glass is the material most commonly associated with chandeliers. True glass chandeliers were first developed in Italy, England, France, and Bohemia in the 18th century. Classic glass and crystal chandeliers have arrays of hanging "crystal" prisms to illuminate a room with refracted light. Contemporary chandeliers may assume a more minimalist design, and they may illuminate a room with direct light from the lamps or be equipped with translucent glass shades covering each lamp. Chandeliers produced nowadays can assume a wide variety of styles that span modernized and traditional designs, or a combination of both. Although chandeliers have been called candelabras, chandeliers can be distinguished from candelabras, which are designed to stand on tables or the floor, while chandeliers are hung from the ceiling. They are also distinct from pendant lights, as they usually consist of multiple lamps and hang in branched frames, whereas pendant lights hang from a single cord and only contain one or two lamps with few decorative elements. Due to their size, they are often installed in large hallways and staircases, living rooms, lounges, and dining rooms, often as the focus of the room. Small chandeliers can be installed in smaller spaces such as bedrooms or small living spaces, while large chandeliers are typically installed in the grand rooms of buildings such as halls and lobbies, or in religious buildings such as churches, synagogues or mosques. Etymology The word chandelier was first recorded in English in its modern sense in 1736, borrowed from the French word that means a candleholder. It may have been derived from chandelle meaning "tallow candle", or chandelabre in Old French and candēlābrum in Latin, and ultimately from candēla meaning "candle". In the earlier periods, the term "candlestick", chandelier in France, could be used to refer to a candelabra, a hanging branched light, or a wall light or sconce. In English, "hanging candlesticks" or "branches" were used to mean lighting devices hanging from the ceiling until chandelier began to be used in the 18th century. In France, chandelier still means a candleholder, and what is called a chandelier in English is lustre in French, a term first used in the late 17th century. The French lustre, from the Italian, can also be used in English to mean a chandelier hung with crystals, or the glass pendant used to decorate such a chandelier. The use of words for indoor lighting objects can be confusing, and a number of terms like lustres, branches, chandeliers and candelabras were used interchangeably at various times, which can make the early appearance of these words misleading. Girandole was also once used to refer to all candelabra as well as chandeliers, although girandole now usually means an ornate branched candleholder that may be mounted on a wall, often with a mirror. Chandeliers may sometimes be called suspended lights, although not all suspended lights are necessarily chandeliers. 
History Precursors Hanging lighting devices, some described as chandeliers, have been known since ancient times, and circular ceramic lamps with multiple points for wicks or candles were used in the Roman period. The relevant Roman terms, however, can refer to a candlestick, floor lamp, candelabra, or chandelier. By the 4th century, terms such as phari and canthari were used, and such devices were often mentioned as gifts of the popes. In the Byzantine period, flat circular metallic structures suspended from chains that could hold oil lamps, known as polycandela (singular polycandelon), were commonly used throughout the eastern Mediterranean. First developed in late antiquity, polycandela were used in churches and synagogues, and took the shape of a bronze or iron frame holding a varying number of globular or conical glass beakers provided with a wick and filled with oil. They might be hung between columns, over the altar, or over the tombs of saints. Polycandela were also commonly used to furnish households up until the 8th century. Hanging lamps were commonly found in mosques in Islamic countries, while sanctuary lamps were found in churches. In Spain, which had significant Moorish influence, hanging farol lanterns made of pierced brass and bronze as well as glass were produced. A type of Spanish silver lampadario with an elongated central reservoir for oil may have developed into a form of chandelier that has a central baluster and branching arms. The early forms of hanging lighting devices in religious buildings could be of considerable size. Huge hanging lamps in Hagia Sophia were described by Paul the Silentiary in 563: "And beneath each chain he has caused to be fitted silver discs, hanging circle-wise in the air, round the space in the center of the church. Thus these discs, pendant from their lofty courses, form a coronet above the heads of men. They have been pierced too by the weapon of the skillful workman, in order that they may receive shafts of fire-wrought glass and hold light on high for men at night." In the late 8th century, Pope Adrian I was said to have presented St. Peter's Basilica with a chandelier that could hold 1,370 candles, while his successor Pope Leo III presented a golden corona decorated with jewels to the Basilica of St. Andrew. The Venerable Bede mentioned that it was customary to have two hanging lighting devices called phari in a major English church, one in the nave and one in the choir, which may have been large bronze hoops with lamps hung over the figure of a cross. Early chandeliers In the medieval period, circular crown-shaped hanging devices made of iron called the corona (couronne de lumière in France and corona de luz in Spain) were used in religious buildings in many European countries from the 9th century onward. The larger Romanesque or Gothic-style circular wheel chandeliers were also recorded in Germany, France, and the Netherlands in the 11th and 12th centuries. Four Romanesque wheel chandeliers survive in Germany, including the Azelin and Hezilo chandeliers in Hildesheim Cathedral and the Barbarossa Chandelier in Aachen Cathedral. These large structures may be considered the first true chandeliers. These chandeliers have prickets (vertical spikes for holding candles) and cups for oil and wicks. A hammered iron corona with floral decoration was recorded in St Paul's Cathedral in London in the 13th century. The iron chandeliers may have had polychrome paint as well as jewel and enamelwork decorations. 
Wooden cross-beam chandeliers were the early form of chandelier used in a domestic setting, and they were found in the households of the wealthy in the medieval period. The wooden cross beams were attached to a vertical wooden pillar, and on each of the four arms a candle could be placed. Some that could hold two candles in each arm were called "double candlesticks". While simple in design compared to later chandeliers, such wooden chandeliers were still found in the court of Charles VI of France in the 15th century, and a double candlestick was listed in the inventory of the estate of Henry VIII of England in the 16th century. In the medieval period, chandeliers might also be lighting devices that could be moved to different rooms. In later periods, the wood used in chandeliers might be carved and gilded. By the late Gothic period, more complex forms of chandeliers appeared. Chandeliers with many branches radiating out from a central stem, sometimes in tiers, were made by the 15th century, and these might be adorned with statuettes and foliated decorations. Chandeliers became popular decorative features in palaces and homes of nobility, clergy and merchants, and their high cost made chandeliers symbols of luxury and status. A diverse range of materials was also employed in the making of chandeliers. In Germany, a form of chandelier made of deer antlers and wooden sculpted figures, called lusterweibchen, is known to have been made since the 14th century. Ivory chandeliers in the palace of the king of Mutapa were depicted in a 17th-century description by Olfert Dapper. Porcelain introduced to Europe was also used to make chandeliers in the 18th century. Brass chandelier Many different metallic materials have been used to make chandeliers, including iron, pewter, bronze, or, more prestigiously, silver and even gold. Brass, however, has the warm appearance of gold while being considerably cheaper, and is also easy to work with; it therefore became a popular choice for making chandeliers. Brass or brass-like latten has been used to make chandeliers since the medieval period, and many were made with brass-type alloy from Dinant (now in Belgium; brass ware from the town was known as dinanderie) until the mid-15th century. The metal chandeliers may have a central support with curved or S-shaped arms attached, and at the end of each arm is a drip-pan and nozzle for holding a candle; by the 15th century, candle nozzles were used instead of prickets to hold the candles, since candle production techniques allowed for the production of identically sized candles. Many such brass chandeliers can be seen depicted in Dutch and Flemish paintings from the 15th to 17th centuries. These Dutch and Flemish chandeliers may be decorated with stylized floral embellishments as well as Gothic symbols, emblems and religious figures. Large numbers of brass chandeliers existed, but most of the early brass chandeliers did not survive the destruction of the Reformation. The Dutch brass chandeliers have distinctive features: a large brass sphere at the end of a central ball stem, and six curved low-swooping arms. The globe helps to keep the chandelier upright and reflect the light from candles, and the arms are curved downward to bring the candles to the level of the sphere to allow for maximum reflection. The arms of early brass chandeliers may also have drooped lower through use over time, as the brass used in the earlier period was softer due to lower zinc content. 
Many Dutch chandeliers were topped by a double-headed eagle by the 16th century. The features of Dutch brass chandeliers were widely copied in other countries, and this form is arguably the most successful and long-lasting of all types of chandeliers. Dutch brass chandeliers were popular across Europe, particularly in England, as well as in the United States. Variations of the Dutch brass chandelier were produced: for example, there might be multiple tiers of arms, the sphere might become elongated, or the arms might emerge from the globe itself. By the early 18th century, ornate cast ormolu forms with long, curved arms and many candles were in the homes of many in the growing merchant class. Glass and crystal chandeliers Chandeliers began to be decorated with carved rock crystal (quartz) of Italian origin, a highly expensive material, in the 16th century. The rock crystal pieces were hung from a metal frame as pendants or drops. The metal frame of French chandeliers might have a central stem onto which arms were attached; later, some formed a cage or "birdcage" without a central stem. Few, however, could afford these rock crystal chandeliers, as they were costly to produce. In the 17th century, multi-faceted crystals that could reflect light from the candles were used to decorate chandeliers, which were called chandeliers de cristal in France. The chandeliers produced in France in the 17th century were in the French Baroque style, and rococo in the 18th century. French rock crystal chandeliers found their finest expression under Louis XIV, as exemplified by the chandeliers at the Palace of Versailles. Rock crystal began to be replaced by cut glass in the late 17th century, and examples of chandeliers made with rock crystal as well as Bohemian glass can be found in the Palace of Versailles. Crystal chandeliers in the early period were literally made of crystals, but what are called crystal chandeliers now are almost always made of cut glass. Glass, although not crystalline in structure, continued to be called crystal after much clearer cut glass that resembled crystal was produced from the late 17th century. Quartz is nevertheless still more reflective than the best glass, and lead glass that is perfectly clear was not produced until 1816. Although France is believed to have produced lead glass in the late 17th century, France used imported glass for its chandeliers until the late 18th century, when high-quality glass was produced in the country. The origin of the glass chandelier is unclear, but some scholars believe that the first glass chandelier was made in 1673 in Orléans, France, where a simple iron rod was encased in multi-coloured glass with glass arms attached. By the turn of the 18th century, glass chandeliers were produced in France, England, Bohemia, and Venice. In Britain, lead glass was developed by George Ravenscroft in 1675, which allowed for the production of cheaper lead crystal that resembles rock crystal without the crisseling defects of other glass. It is also relatively soft compared to soda glass, allowing it to be cut or faceted without shattering. Lead glass also rings when struck, unlike soda glass, which has no resonance. The clearness and light-scattering properties of lead glass made it a popular addition to the form, and conventionally, lead glass may be the only glass that can be described as crystal. The first mention of a glass chandelier in an advertisement appeared in 1727 (as schandelier) in London. 
The design of the first English true glass chandelier was influenced by Dutch and Flemish brass chandeliers. These English chandeliers were made largely of glass, with the metal parts limited to the central stem, receiver plates and bowls. The metallic part might be silvered or silver-plated, and the silver plating inside the glass stem can create the illusion that the chandelier is made entirely of glass. A glass bowl at the bottom disguises the metal disc onto which the glass arms are attached. The early glass chandeliers were molded and uncut, often with solid rope-twist arms. Later, cuts to the arms were introduced to provide sparkle, and additional ornaments were added. Cut glass pendant drops were hung from the frame, initially only in small numbers, but in increasingly large numbers by 1770. By the 1800s, the decorative ornaments became so abundant that the underlying structure of the chandelier became obscured. The early chandeliers might follow a rococo style, and later a neo-classical style. A notable early producer of glass chandeliers was William Parker; Parker replaced the Dutch-influenced ball stem with a vase-shaped stem, as seen in the chandeliers in the Bath Assembly Rooms, which were the first datable neo-classical style chandeliers as well as the first chandeliers that were signed by the maker. Other designers of neo-classical chandeliers were Robert and James Adam. Neoclassical motifs in cast metal or carved and gilded wood were common elements in these chandeliers. Chandeliers made in this style also drew heavily on the aesthetic of ancient Greece and Rome, incorporating clean lines, classical proportions and mythological creatures. Bohemia, in the present-day Czech Republic, has been producing glass for centuries. Bohemian glass contains potash, which gives it a clear, colorless appearance that became renowned in Europe in the 18th century. Production of crystal chandeliers appeared in Bohemia and Germany in the early 18th century, with designs that followed what was popular in England and France, and many early chandeliers were copies of designs from London. Bohemia soon developed its own styles of chandeliers, the best-known of which is the Maria Theresa, named after the Empress of Austria. This type of chandelier does not have a central baluster; its distinctive feature is the curved flat metal arms placed between sections of molded glass joined together with glass rosettes. Some Bohemian chandeliers used wood instead of metal as the central stem due to the abundance of wood and wood carvers in the area. The Bohemian style was largely successful across Europe, and its biggest draw was the chance to obtain spectacular light refraction due to the facets and bevels of crystal prisms. Glass chandeliers became the dominant form of chandelier from about 1750 until at least 1900, and the Czech Republic remains a great producer of glass chandeliers today. Venice has been a center of glass production, particularly on the island of Murano. The Venetians created a form of soda-lime glass, clear like crystal, by adding manganese dioxide; they called it cristallo. This glass was typically used to make mirrors, but around 1700, Italian glass factories in Murano started creating new kinds of artistic chandeliers. Since Murano glass is hard and brittle, it is not suitable for cutting or faceting; however, it is lighter, and softer and more malleable when heated, and Venetian glassmakers relied upon these properties of their glass to create elaborate forms of chandelier. 
Typical features of a Murano chandelier are the intricate arabesques of leaves, flowers and fruits that would be enriched by colored glass, made possible by the specific type of glass used in Murano. Great skill and time were required to twist and shape a chandelier precisely. The ornate type of Murano chandelier is called ciocca (literally "bouquet of flowers") for the characteristic decorations of glazed polychrome flowers. The most sumptuous consisted of a metal frame covered with small elements in blown glass, transparent or colored, with decorations of flowers, fruits and leaves, while simpler models had arms made with unique pieces of glass. Their shape was inspired by an original architectural concept: the space on the inside is left almost empty, since decorations are spread all around the central support, distanced from it by the length of the arms. Huge Murano chandeliers were often used for interior lighting in theaters and rooms in important palaces. Despite periods of decline and revival, designs of Murano glass chandeliers have stayed relatively constant through time, and modern productions of these chandeliers may still be stylistically nearly identical to those made in the 18th or 19th centuries. Hollow glass arms, which could accommodate gas lines or electrical wiring, were produced in place of solid glass by the late 19th century. Chandeliers were also produced in other countries in the 18th century, including Russia and Sweden. Russian and Scandinavian chandeliers are similar in design, with a metal frame that is lighter and more decorative, gilded or finished with brass, and hung with small slender glass drops. Russian chandeliers may be accented with coloured glass. 19th century The 19th century was a period of great change and development; the Industrial Revolution and the growth of wealth from industry greatly increased the market for chandeliers, and new methods of lighting and better techniques of production emerged. Other countries such as the United States also started producing chandeliers; the first American chandelier is believed to date from 1804. New styles and more complex and elaborate chandeliers also appeared, and production of chandeliers reached a peak in the 19th century. France, which only started producing significant amounts of high-quality glass in the late 18th century, became renowned as a producer of the finest quality chandeliers. One of the best-known French manufacturers, Baccarat, started making chandeliers in 1824. In England, Perry & Co. produced a large quantity of chandeliers, while F. & C. Osler was known for producing spectacular chandeliers, the great proportion of which went to India, the richest market for chandeliers at that time. In 1843, Osler opened a branch in Calcutta to start production of chandeliers in India. In England, the imposition of the Glass Excise Act on all glass products in 1811 led to a new style of chandelier being created. Chandelier makers, in order to avoid paying the tax, reused broken glass pieces, cut into crystal icicles and strung together, and hung them from circular frames in the form of a tent or canopy above a hoop, with a bag below and/or tiered sheets that resembled waterfalls. A large number of crystals are used to make such chandeliers, and many may contain over 1,000 pieces of crystal. The central stem is hidden by the crystals. These forms of Regency-era chandeliers were popular all over Europe. In France, chandeliers of similar designs are described as Empire style. 
After the Glass Excise Act was repealed, chandeliers with glass arms became popular again, but they became larger, bolder and more heavily decorated. The largest English-made chandelier in the world (by Hancock Rixon & Dunt and probably F. & C. Osler) is in the Dolmabahçe Palace in Istanbul; it has 750 lamps and weighs 4.5 tons. In the 19th century, a variety of new methods for producing light that were brighter, cleaner or more convenient than candles began to be used. These included colza oil (the Argand lamp), kerosene/paraffin, and gas. Due to its brightness, gas was initially only used for public lighting; later it also appeared in homes. As gas lighting caught on, branched ceiling fixtures called gasoliers (a portmanteau of gas and chandelier) were produced, and many candle chandeliers were converted. Gasoliers may have only slight variations in decoration from chandeliers, but the arms were hollow to carry the gas to the burners. Examples of gasoliers were the extravagant chandeliers in the Royal Pavilion in Brighton, first installed in 1821. While popular, gas lighting was considered too bright and harsh on the eyes, lacking the pleasing quality of candlelight. Shades surrounding the gas light were then added to reduce the glare. Gas lighting was eventually replaced by electric light bulbs in the early 20th century. Electric lighting began to be introduced widely in the late 19th century. For a time, some chandeliers used both gas and electricity, with gas nozzles pointing upward while the light bulbs hung downward. As distribution of electricity widened and supplies became dependable, electric-only chandeliers became standard. Another portmanteau word, electrolier, was coined for these, but nowadays they are most commonly still called chandeliers even though no candles are used. Electrified glass chandeliers required electrical wiring, large areas of metal and light bulbs, but the results were often not aesthetically pleasing. A large number of light bulbs close together can also produce too much glare. Shades for the bulbs of these electroliers were therefore often added. Modern chandeliers At the turn of the 20th century, the chandelier still enjoyed the status it had in the previous century. Of the many lighting fixtures made that conformed to the popular contemporary styles of Art Nouveau, Art Deco and Modernism, few could properly be described as chandeliers. The popularity of chandeliers declined in the 20th century: a vast array of lighting choices became available, and chandeliers often did not fit the aesthetics of modern architecture and interior design. Light fittings of avant-garde form and material, however, started to be made around 1940. A wide variety of chandeliers of modern design appeared, ranging from the minimalist to the highly extravagant. Towards the end of the 20th century, the popularity of chandeliers revived. A number of glass artists who produced chandeliers, such as Dale Chihuly, emerged. Chandeliers were often used as decorative focal points for rooms, although some do not necessarily illuminate. Older styles of chandeliers continued to be produced in the 20th and 21st centuries, and older styles might also be revived, such as the Art Deco style of chandeliers. Incandescent light bulbs became the most common source of lighting for modern chandeliers in the 20th century, and a variety of electrical lights such as fluorescent lamps, halogen lamps and LED lamps are also used. Many antique chandeliers not designed for electrical wiring have also been adapted for electricity. 
Modern chandeliers produced in older styles and antique chandeliers wired for electricity usually use imitation candles, where incandescent or LED light bulbs are shaped like candle flames. These light bulbs may be dimmable to adjust the brightness. Some may use bulbs containing a shimmering gas discharge that mimics candle flame. Chandeliers around the world The biggest chandeliers in the world are now found in Islamic countries. The chandelier in the prayer hall of the Sultan Qaboos Grand Mosque in Muscat, Oman was the biggest when it was installed in 2001. It is high, has a diameter of , and weighs over eight tonnes (8,000 kg). It is lit by 1,122 halogen lamps and contains 600,000 pieces of crystal. The biggest chandelier in the Sheikh Zayed Grand Mosque in Abu Dhabi, with a diameter of 10 m, height of 15.5 m, weight of nearly 12 tonnes and lit with 15,500 LED lights, became the world's largest chandelier when it was installed in 2007. In 2010, a chandelier of modern design was built in the foyer of an office building in Doha, Qatar. This chandelier has a height of , width of , length of , and weight of 39,683 pounds (18 tonnes). It has 165,000 LED lights and 2,300 optical crystals, and it is considered the biggest interactive LED chandelier in the world. In 2022, a chandelier in height, in length and in width and weighing 16 tonnes was unveiled at the Assima Mall in Kuwait. In Egypt, the largest and heaviest chandelier in the world, weighing 24,300 kg (53,572 lb) with a diameter of 22 m (72.2 ft) in four levels, made by Asfour Crystal, was installed in the Grand Mosque of the Islamic Cultural Center in Cairo. Glossary of terms Adam style A neoclassical style of chandelier, light, airy and elegant, usually English. Arm The light-bearing part of a chandelier, also sometimes known as a branch. Arm plate The metal or wooden block placed on the stem, into which the arms slot. Bag A bag of crystal drops formed by strings hanging from a circular frame and looped back into the center underneath, associated especially with early American crystal and Regency-style crystal chandeliers. Baluster A turned wood or molded stem forming the axis of a chandelier, with alternating narrow and bulbous parts of varying widths. Bead A glass drop with a hole drilled in it. Bobèche A dish fitted just below the candle nozzle, designed to catch drips of wax. Also known as a drip pan. Branch Another name for the light-bearing part of a chandelier, also known as an arm. Cage An arrangement where the central stem supporting arms and decorations is replaced by a metal structure, leaving the center clear for candles and further embellishments. Also called a "bird cage". Candlebeam A cross made from two wooden beams with one or more cups and prickets at each end for securing candles. Candle nozzle The small cup into which the end of the candle is slotted. Canopy An inverted shallow dish at the top of a chandelier from which festoons of beads are often suspended, lending a flourish to the top of the fitting. Corona Another term for a crown-style chandelier. Crown A circular chandelier reminiscent of a crown, usually of gilded metal or brass, and often with upstanding decorative elements. Crystal Essentially a traditional marketing term for lead glass, whose chemical content gives it special qualities of clarity, resonance and softness, making it especially suitable for use in cut glass. 
Some chandeliers, as at the Palace of Versailles, are actually made of cut rock crystal (clear quartz), which cut glass essentially imitates. Drip pan The dish fitted just below the candle nozzle, designed to catch drips of wax. Also known as a bobèche. Drop A small piece of glass, usually cut into one of many shapes and drilled at one end so that it can be hung from the chandelier as a pendant with a brass pin. A chain drop is drilled at both ends so that a series can be hung together to form a necklace or festoon. Dutch Also known as Flemish, a style of brass chandelier with a bulbous baluster and arms curving down around a low-hung ball. Festoon An arrangement of glass drops or beads draped and hung across or down a glass chandelier, or sometimes a piece of solid glass shaped into a swag. Also known as a garland. Finial The final flourish at the very bottom of the stem. Some Venetian glass chandeliers have little finials hanging from glass rings on the arms. Hoop A circular metal support for arms, usually on a Regency-style or other chandelier with glass pieces. Also known as a ring. Montgolfière chandelier A chandelier with a rounded bottom, like an inverted hot air balloon, named after the Montgolfier brothers, the early French balloonists. Molded The process by which a pressed glass piece is shaped by being blown into a mold. Neoclassical style chandelier A glass chandelier featuring many delicate arms, spires and strings of ovals, rhomboids or octagons. Panikadilo Gothic candelabrum chandelier hung from the centers of Greek Orthodox cathedrals' domes. Pendeloque Specific pear- and drop-shaped versions of drops. Prism A straight, many-sided drop. Regency style chandelier A larger chandelier with a multitude of drops. Above a hoop rise strings of beads that diminish in size and attach at the top to form a canopy. A bag, with concentric rings of pointed glass, forms a waterfall beneath. The stem is usually completely hidden. Soda glass A type of glass used typically in Venetian glass chandeliers. Soda glass remains "plastic" for longer when heated, and can therefore be shaped into elegant curving leaves and flowers. It refracts light poorly and is normally fire-polished. Spire A tall spike, round in section or flat-sided, to which arms and decorative elements may be attached, made from wood, metal or glass. Tent A tent-shaped structure on the upper part of a glass chandelier where necklaces of drops attach at the top to a canopy and at the bottom to a larger ring. Venetian A glass from the island of Murano, Venice, but usually used to describe any chandelier in Venetian style. Waterfall or wedding cake Concentric rings of icicle drops suspended beneath the hoop or plate. Gallery See also Candelabra Ceiling rose Girandole Ljuskrona J. & L. Lobmeyr, the first company to make an electric chandelier Light fixture Sconce Wheel chandelier References Sources Katz, Cheryl and Jeffrey. Chandeliers. Rockport Publishers, 2001. Parissien, Steven. Regency Style. Phaidon, 1992. Light fixtures Glass art Ceilings Chandeliers
Chandelier
[ "Engineering" ]
6,543
[ "Structural engineering", "Ceilings" ]
614,723
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Plasma%20Physics
The Max Planck Institute for Plasma Physics (, IPP) is a physics institute investigating the physical foundations of a fusion power plant. The IPP is an institute of the Max Planck Society, part of the European Atomic Energy Community, and an associated member of the Helmholtz Association. The IPP has two sites: Garching near Munich (founded 1960) and Greifswald (founded 1994), both in Germany. It owns several large devices, namely the experimental tokamak ASDEX Upgrade (in operation since 1991), the experimental stellarator Wendelstein 7-X (in operation since 2016), a tandem accelerator, and a high heat flux test facility (GLADIS). Furthermore, it cooperates closely with the ITER, DEMO and JET projects. The International Helmholtz Graduate School for Plasma Physics partners with the Technical University of Munich (TUM) and the University of Greifswald. Associated partners are the Leibniz Institute for Plasma Science and Technology (INP) in Greifswald and the Leibniz Supercomputing Centre (LRZ) in Garching. External links References Fusion power Plasma physics facilities Physics research institutes Plasma Physics University of Greifswald Garching bei München Max Planck
Max Planck Institute for Plasma Physics
[ "Physics", "Chemistry" ]
252
[ "Plasma physics", "Fusion power", "Plasma physics stubs", "Plasma physics facilities", "Nuclear fusion" ]
614,763
https://en.wikipedia.org/wiki/Stark%20effect
The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field. It is the electric-field analogue of the Zeeman effect, where a spectral line is split into several components due to the presence of a magnetic field. Although initially coined for the static case, the term is also used in the wider context to describe the effect of time-dependent electric fields. In particular, the Stark effect is responsible for the pressure broadening (Stark broadening) of spectral lines by charged particles in plasmas. For most spectral lines, the Stark effect is, to high accuracy, either linear (proportional to the applied electric field) or quadratic. The Stark effect can be observed both for emission and absorption lines. The latter is sometimes called the inverse Stark effect, but this term is no longer used in the modern literature. History The effect is named after the German physicist Johannes Stark, who discovered it in 1913. It was independently discovered in the same year by the Italian physicist Antonino Lo Surdo. The discovery of this effect contributed importantly to the development of quantum theory, and Stark was awarded the Nobel Prize in Physics in 1919. Inspired by the magnetic Zeeman effect, and especially by Hendrik Lorentz's explanation of it, Woldemar Voigt performed classical mechanical calculations of quasi-elastically bound electrons in an electric field. By using experimental indices of refraction he gave an estimate of the Stark splittings. This estimate was a few orders of magnitude too low. Not deterred by this prediction, Stark undertook measurements on excited states of the hydrogen atom and succeeded in observing splittings. By the use of the Bohr–Sommerfeld ("old") quantum theory, Paul Epstein and Karl Schwarzschild were independently able to derive equations for the linear and quadratic Stark effect in hydrogen. Four years later, Hendrik Kramers derived formulas for intensities of spectral transitions. Kramers also included the effect of fine structure, with corrections for relativistic kinetic energy and coupling between electron spin and orbital motion. The first quantum mechanical treatment (in the framework of Werner Heisenberg's matrix mechanics) was by Wolfgang Pauli. Erwin Schrödinger discussed at length the Stark effect in his third paper on quantum theory (in which he introduced his perturbation theory), once in the manner of the 1916 work of Epstein (but generalized from the old to the new quantum theory) and once by his (first-order) perturbation approach. Finally, Epstein reconsidered the linear and quadratic Stark effect from the point of view of the new quantum theory. He derived equations for the line intensities, which were a decided improvement over Kramers's results obtained by the old quantum theory. While the first-order-perturbation (linear) Stark effect in hydrogen is in agreement with both the old Bohr–Sommerfeld model and the quantum-mechanical theory of the atom, higher-order corrections are not. Measurements of the Stark effect under high field strengths confirmed the correctness of the new quantum theory. Mechanism Overview Imagine an atom with occupied 2s and 2p electron states. In the Bohr model, these states are degenerate. However, in the presence of an external electric field, these electron orbitals will hybridize into eigenstates of the perturbed Hamiltonian (where each perturbed hybrid state can be written as a superposition of unperturbed states). 
Since the 2s and 2p states have opposite parity, these hybrid states will lack inversion symmetry and will possess a time-averaged electric dipole moment. If this dipole moment is aligned with the electric field, the energy of the state will shift down; if this dipole moment is anti-aligned with the electric field, the energy of the state will shift up. Thus, the Stark effect causes a splitting of the original degeneracy. Other things being equal, the effect of the electric field is greater for outer electron shells, because the electron is more distant from the nucleus, resulting in a larger electric dipole moment upon hybridization. Multipole expansion The Stark effect originates from the interaction between a charge distribution (atom or molecule) and an external electric field. The interaction energy of a continuous charge distribution ρ(r), confined within a finite volume V, with an external electrostatic potential φ(r) is Vint = ∫ ρ(r) φ(r) d³r, the integral taken over V. This expression is valid classically and quantum-mechanically alike. If the potential varies weakly over the charge distribution, the multipole expansion converges fast, so only the first few terms give an accurate approximation. Namely, keeping only the zero- and first-order terms, φ(r) ≈ φ(0) − r·F, where we introduced the electric field F = −∇φ(0) and assumed the origin 0 to be somewhere within V. Therefore, the interaction becomes Vint ≈ q φ(0) − μ·F, where q = ∫ ρ(r) d³r and μ = ∫ r ρ(r) d³r are, respectively, the total charge (zeroth moment) and the dipole moment of the charge distribution. Classical macroscopic objects are usually neutral or quasi-neutral (q = 0), so the first, monopole, term in the expression above is identically zero. This is also the case for a neutral atom or molecule. However, for an ion this is no longer true. Nevertheless, it is often justified to omit it in this case, too. Indeed, the Stark effect is observed in spectral lines, which are emitted when an electron "jumps" between two bound states. Since such a transition only alters the internal degrees of freedom of the radiator but not its charge, the effects of the monopole interaction on the initial and final states exactly cancel each other. Perturbation theory Turning now to quantum mechanics, an atom or a molecule can be thought of as a collection of point charges (electrons and nuclei), so that the second definition of the dipole applies. The interaction of an atom or molecule with a uniform external field is described by the operator Vint = −F·μ, where μ = Σj qj rj is now the electric dipole operator of the collection of point charges. This operator is used as a perturbation in first- and second-order perturbation theory to account for the first- and second-order Stark effect. First order Let the unperturbed atom or molecule be in a g-fold degenerate state with orthonormal zeroth-order state functions ψ₁⁰, …, ψg⁰. (Non-degeneracy is the special case g = 1.) According to perturbation theory the first-order energies are the eigenvalues of the g × g matrix with general element (Vint)kl = −F·⟨ψk⁰ | μ | ψl⁰⟩, with k, l = 1, …, g. If g = 1 (as is often the case for electronic states of molecules) the first-order energy becomes proportional to the expectation (average) value of the dipole operator μ: E(1) = −F·⟨ψ₁⁰ | μ | ψ₁⁰⟩. Since the electric dipole moment is a vector (tensor of the first rank), the diagonal elements of the perturbation matrix Vint vanish between states that have a definite parity. Atoms and molecules possessing inversion symmetry do not have a (permanent) dipole moment and hence do not show a linear Stark effect. 
In order to obtain a non-zero matrix Vint for systems with an inversion center, it is necessary that some of the unperturbed functions have opposite parity (obtain plus and minus under inversion), because only functions of opposite parity give non-vanishing matrix elements. Degenerate zeroth-order states of opposite parity occur for excited hydrogen-like (one-electron) atoms or Rydberg states. Neglecting fine-structure effects, such a state with the principal quantum number n is n²-fold degenerate and contains the states with ℓ = 0, 1, …, n − 1, where ℓ is the azimuthal (angular momentum) quantum number. For instance, the excited n = 4 level contains the 4s, 4p, 4d and 4f states. The one-electron states with even ℓ are even under parity, while those with odd ℓ are odd under parity. Hence hydrogen-like atoms with n > 1 show a first-order Stark effect. The first-order Stark effect occurs in rotational transitions of symmetric top molecules (but not for linear and asymmetric molecules). In first approximation a molecule may be seen as a rigid rotor. A symmetric top rigid rotor has the unperturbed eigenstates |JKM⟩, proportional to elements of the Wigner D-matrix, with 2(2J+1)-fold degenerate energy for |K| > 0 and (2J+1)-fold degenerate energy for K = 0. Here DJMK is an element of the Wigner D-matrix. The first-order perturbation matrix on the basis of the unperturbed rigid rotor functions is non-zero and can be diagonalized. This gives shifts and splittings in the rotational spectrum. Quantitative analysis of these Stark shifts yields the permanent electric dipole moment of the symmetric top molecule. Second order As stated, the quadratic Stark effect is described by second-order perturbation theory. The zeroth-order eigenproblem H⁰ψk⁰ = Ek⁰ψk⁰ is assumed to be solved. The perturbation theory gives E(2) = −(1/2) Σij αij Fi Fj, with the components of the polarizability tensor α defined by αij = 2 Σk≠0 ⟨0| μi |k⟩⟨k| μj |0⟩ / (Ek⁰ − E0⁰), where |0⟩ denotes the unperturbed state under consideration. The energy E(2) gives the quadratic Stark effect. Neglecting the hyperfine structure (which is often justified, unless extremely weak electric fields are considered), the polarizability tensor of atoms is isotropic, αij = α δij, so that E(2) = −(1/2) α F². For some molecules this expression is a reasonable approximation, too. For the ground state, α is always positive, i.e., the quadratic Stark shift is always negative. Problems The perturbative treatment of the Stark effect has some problems. In the presence of an electric field, states of atoms and molecules that were previously bound (square-integrable) become formally (non-square-integrable) resonances of finite width. These resonances may decay in finite time via field ionization. For low-lying states and not too strong fields, the decay times are so long, however, that for all practical purposes the system can be regarded as bound. For highly excited states and/or very strong fields, ionization may have to be accounted for. (See also the article on the Rydberg atom.) Applications The Stark effect is at the basis of the spectral shift measured for voltage-sensitive dyes used for imaging of the firing activity of neurons. See also Zeeman effect Autler–Townes effect Quantum-confined Stark effect Stark spectroscopy Inglis–Teller equation Electric field NMR Stark effect in semiconductor optics References Further reading (Early history of the Stark effect) (Chapter 17 provides a comprehensive treatment, as of 1935.) (Stark effect for atoms) (Stark effect for rotating molecules) Atomic physics Foundational quantum physics Physical phenomena Spectroscopy
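For a sense of scale, the sketch below evaluates the isotropic second-order shift E(2) = −(1/2)αF² discussed above for the hydrogen 1s state. The polarizability value 4.5 atomic units is the standard textbook result for hydrogen's ground state; the field strength used is an arbitrary illustrative choice, not a figure from the article.

```python
# Quadratic Stark shift E(2) = -1/2 * alpha * F^2, worked in atomic units
# for the hydrogen 1s state (alpha = 4.5 a.u. is the textbook value).
ALPHA_H_1S = 4.5                 # static polarizability, atomic units
FIELD_AU = 5.142206e11           # 1 atomic unit of electric field, in V/m
HARTREE_EV = 27.2114             # 1 hartree expressed in eV

def quadratic_stark_shift_ev(field_v_per_m, alpha_au=ALPHA_H_1S):
    """Second-order Stark shift in eV for a field given in V/m."""
    f_au = field_v_per_m / FIELD_AU          # convert the field to atomic units
    return -0.5 * alpha_au * f_au**2 * HARTREE_EV

# Example: a strong laboratory field of 1e8 V/m (illustrative choice)
print(f"{quadratic_stark_shift_ev(1e8):.3e} eV")
# -> about -2.3e-06 eV, tiny compared with optical transition energies,
#    which is why the quadratic effect is hard to resolve at modest fields.
```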
Stark effect
[ "Physics", "Chemistry" ]
2,110
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Foundational quantum physics", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Spectroscopy", " and optical physics" ]
615,108
https://en.wikipedia.org/wiki/Poynting%E2%80%93Robertson%20effect
The Poynting–Robertson effect, also known as Poynting–Robertson drag, named after John Henry Poynting and Howard P. Robertson, is a process by which solar radiation causes a dust grain orbiting a star to lose angular momentum relative to its orbit around the star. This is related to radiation pressure tangential to the grain's motion. This causes dust that is small enough to be affected by this drag, but too large to be blown away from the star by radiation pressure, to spiral slowly into the star. In the Solar System, this affects dust grains from about to in diameter. Larger dust is likely to collide with another object long before such drag can have an effect. Poynting initially gave a description of the effect in 1903, based on the luminiferous aether theory, which was superseded by the theories of relativity in 1905–1915. In 1937 Robertson described the effect in terms of general relativity. History Robertson considered dust motion in a beam of radiation emanating from a point source. A. W. Guess later considered the problem for a spherical source of radiation and found that for particles far from the source the resultant forces are in agreement with those concluded by Poynting. Cause The effect can be understood in two ways, depending on the reference frame chosen. From the perspective of the grain of dust circling a star (panel (a) of the figure), the star's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. The angle of aberration is extremely small, since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower than that. From the perspective of the star (panel (b) of the figure), the dust grain absorbs sunlight entirely in a radial direction, thus the grain's angular momentum is not affected by it. But the re-emission of photons, which is isotropic in the frame of the grain (a), is no longer isotropic in the frame of the star (b). This anisotropic emission causes the photons to carry away angular momentum from the dust grain. The Poynting–Robertson drag acts in the opposite direction to the dust grain's orbital motion, leading to a drop in the grain's angular momentum. While the dust grain thus spirals slowly into the star, its orbital speed increases continuously. The Poynting–Robertson force on a grain in a circular orbit is equal to F_PR = (v/c²) W = (r² L☉ / 4c²) √(GM☉ / R⁵), where v is the grain's velocity, c is the speed of light, W is the power of the incoming radiation, r the grain's radius, G is the universal gravitational constant, M☉ the Sun's mass, L☉ is the solar luminosity, and R the grain's orbital radius. Relation to other forces The Poynting–Robertson effect is more pronounced for smaller objects. Gravitational force varies with mass, which is proportional to r³ (where r is the radius of the dust), while the power it receives and radiates varies with surface area (proportional to r²). So for large objects the effect is negligible. The effect is also stronger closer to the Sun. Gravity varies as 1/R² (where R is the radius of the orbit), whereas the Poynting–Robertson force varies as 1/R^(5/2), so the effect also gets relatively stronger as the object approaches the Sun. This tends to reduce the eccentricity of the object's orbit in addition to dragging it in. In addition, as the size of the particle increases, the surface temperature is no longer approximately constant, and the radiation pressure is no longer isotropic in the particle's reference frame. 
If the particle rotates slowly, the radiation pressure may contribute to the change in angular momentum, either positively or negatively. Radiation pressure affects the effective force of gravity on the particle: it is felt more strongly by smaller particles, and blows very small particles away from the Sun. It is characterized by the dimensionless dust parameter β, the ratio of the force due to radiation pressure to the force of gravity on the particle: β = F_r / F_g = 3 L Q_PR / (16π GMc ρ s), where Q_PR is the Mie scattering coefficient, ρ is the density, and s is the size (the radius) of the dust grain. Impact of the effect on dust orbits Particles with β ≥ 0.5 have radiation pressure at least half as strong as gravity and will pass out of the Solar System on hyperbolic orbits if their initial velocities were Keplerian. For rocky dust particles, this corresponds to a diameter of less than 1 μm. Particles with β < 0.5 may spiral inwards or outwards, depending on their size and initial velocity vector; they tend to stay in eccentric orbits. Particles with β ≈ 0.04 take around 10,000 years to spiral into the Sun from a circular orbit at 1 AU. In this regime, inspiraling time and particle diameter are both roughly proportional to 1/β. If the initial grain velocity was not Keplerian, then a circular or any confined orbit is possible for β < 1. It has been theorized that the slowing down of the rotation of the Sun's outer layer may be caused by a similar effect. See also Differential Doppler effect Radiation pressure Yarkovsky effect Speed of gravity References Additional sources (Abstract of Philosophical Transactions paper) Orbital perturbations Doppler effects Cosmic dust Special relativity Radiation effects
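To put numbers on these relations, the sketch below evaluates β = 3LQ_PR/(16πGM☉cρs) for a single grain, then estimates the circular-orbit decay time using the commonly quoted result t ≈ cR²/(4βGM☉). The grain radius, density, and Q_PR = 1 are illustrative assumptions, not values taken from the article.

```python
import math

# Physical constants (SI)
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
L_SUN = 3.828e26       # solar luminosity, W
AU    = 1.496e11       # astronomical unit, m
YEAR  = 3.156e7        # one year, s

def beta(radius_m, density=3000.0, q_pr=1.0):
    """Radiation pressure / gravity ratio for a spherical grain.

    beta = 3 L Q_pr / (16 pi G M c rho s), as in the formula above.
    """
    return 3 * L_SUN * q_pr / (16 * math.pi * G * M_SUN * C * density * radius_m)

def pr_inspiral_time_years(orbit_radius_m, b):
    """Approximate P-R decay time from a circular orbit: t = c R^2 / (4 beta G M)."""
    return C * orbit_radius_m**2 / (4 * b * G * M_SUN) / YEAR

b = beta(10e-6)                        # a 10-micrometre rocky grain (assumed values)
print(f"beta = {b:.3f}")               # -> about 0.02
print(f"t = {pr_inspiral_time_years(AU, b):.0f} yr")   # -> roughly 20,000 yr from 1 AU
```

Both outputs are consistent with the scalings in the text: halving the grain radius doubles β and halves the inspiral time.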
Poynting–Robertson effect
[ "Physics", "Materials_science", "Astronomy", "Engineering" ]
1,075
[ "Physical phenomena", "Outer space", "Materials science", "Astrophysics", "Special relativity", "Radiation", "Condensed matter physics", "Theory of relativity", "Doppler effects", "Radiation effects", "Astronomical objects", "Cosmic dust" ]
615,385
https://en.wikipedia.org/wiki/Gate%20valve
A gate valve, also known as a sluice valve, is a valve that opens by lifting a barrier (gate) out of the path of the fluid. Gate valves require very little space along the pipe axis and hardly restrict the flow of fluid when the gate is fully opened. The gate faces can be parallel, but are most commonly wedge-shaped (in order to be able to apply pressure on the sealing surface). Typical use Gate valves are used to shut off the flow of liquids rather than for flow regulation, which is frequently done with a globe valve. When fully open, the typical gate valve has no obstruction in the flow path, resulting in very low flow resistance. The size of the open flow path generally varies in a nonlinear manner as the gate is moved. This means that the flow rate does not change evenly with stem travel. Depending on the construction, a partially open gate can vibrate from the fluid flow. Gate valves are mostly used with larger pipe diameters (from 2" to the largest pipelines), since they are less complex to construct than other types of valves in large sizes. At high pressures, friction can become a problem. As the gate is pushed against its guiding rail by the pressure of the medium, it becomes harder to operate the valve. Large gate valves are sometimes fitted with a bypass controlled by a smaller valve, to be able to reduce the pressure before operating the gate valve itself. Gate valves without an extra sealing ring on the gate or the seat are used in applications where minor leaking of the valve is not an issue, such as heating circuits or sewer pipes. Valve construction Common gate valves are actuated by a threaded stem that connects the actuator (e.g. handwheel or motor) to the gate. They are characterised as having either a rising or a nonrising stem, depending on which end of the stem is threaded. Rising stems are fixed to the gate and rise and lower together as the valve is operated, providing a visual indication of valve position. The actuator is attached to a nut that is rotated around the threaded stem to move it. Nonrising stem valves are fixed to, and rotate with, the actuator, and are threaded into the gate. They may have a pointer threaded onto the stem to indicate valve position, since the gate's motion is concealed inside the valve. Nonrising stems are used where vertical space is limited. Gate valves may have flanged ends, drilled according to pipeline-compatible flange dimensional standards. Gate valves are typically constructed from cast iron, cast carbon steel, ductile iron, gunmetal, stainless steel, alloy steels, and forged steels. All-metal gate valves are used in ultra-high vacuum chambers to isolate regions of the chamber. Bonnet Bonnets provide leakproof closure for the valve body. Gate valves may have a screw-in, union, or bolted bonnet. A screw-in bonnet is the simplest, offering a durable, pressure-tight seal. A union bonnet is suitable for applications requiring frequent inspection and cleaning. It also gives the body added strength. A bolted bonnet is used for larger valves and higher-pressure applications. Pressure seal bonnet Another type of bonnet construction in a gate valve is the pressure seal bonnet. This construction is adopted for valves for high-pressure service, typically in excess of 2250 psi (15 MPa). The unique feature of the pressure seal bonnet is that the bonnet ends in a downward-facing cup that fits inside the body of the valve. As the internal pressure in the valve increases, the sides of the cup are forced outward, improving the body-bonnet seal. 
Other constructions where the seal is provided by external clamping pressure tend to create leaks in the body-bonnet joint. Knife gate valve For plastic solids and high-viscosity slurries such as paper pulp, a specialty valve known as a knife gate valve is used to cut through the material to stop the flow. A knife gate valve is usually not wedge-shaped and has a tapered knife-like edge on its lower surface. Images See also Ball valve Blast gate Butterfly valve Control valve Diaphragm valve Globe valve Needle valve Process flow diagram Piping and instrumentation diagram References Plumbing valves Valves Articles containing video clips
Gate valve
[ "Physics", "Chemistry" ]
857
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
615,402
https://en.wikipedia.org/wiki/Ball%20valve
A ball valve is a flow control device which uses a hollow, perforated, pivoting ball to control fluid flowing through it. It is open when the hole through the middle of the ball is in line with the flow inlet, and closed when it is pivoted 90 degrees by the valve handle, blocking the flow. The handle lies flat in alignment with the flow when open, and is perpendicular to it when closed, making for easy visual confirmation of the valve's status. The quarter-turn to the shut position can be in either the clockwise or counter-clockwise direction. Ball valves are durable, performing well after many cycles, and reliable, closing securely even after long periods of disuse. These qualities make them an excellent choice for shutoff and control applications, where they are often preferred to gate and globe valves, but they lack the fine control of those alternatives in throttling applications. The ball valve's ease of operation, repair, and versatility lend it to extensive industrial use, supporting pressures up to and temperatures up to , depending on design and materials used. Sizes typically range from 0.2 to 48 in (5 to 1200 mm). Valve bodies are made of metal, plastic, or metal with a ceramic; floating balls are often chrome-plated for durability. One disadvantage of a ball valve is that when used for controlling water flow, it traps water in the center cavity while in the closed position. In the event of ambient temperatures falling below freezing point, the sides can crack due to the expansion associated with ice formation. Some means of insulation, or heat tape in this situation, will usually prevent damage. Another option for cold climates is the "freeze tolerant ball valve". This style of ball valve incorporates a freeze plug in the side, so in the event of a freeze-up the freeze plug ruptures, acting as a 'sacrificial' fail point and allowing an easier repair. Instead of replacing the whole valve, all that is required is the fitting of a new freeze plug. For cryogenic fluids, or product that may expand inside of the ball, there is a vent drilled into the upstream side of the valve. This is referred to as a vented ball. A ball valve should not be confused with a "ball-check valve", a type of check valve that uses a solid ball to prevent undesired backflow. Other types of quarter-turn valves include the butterfly valve, the plug valve and the freeze-proof ball valve. Types There are five general body styles of ball valves: single body, three-piece body, split body, top entry, and welded. The difference is based on how the pieces of the valve, especially the casing that contains the ball itself, are manufactured and assembled. The valve operation is the same in each case. One-piece bodies provide a very rigid construction, and in some versions the ball is removable from the valve without taking the entire valve out of the line; however, multi-piece bodies offer greater scope for ingenuity of design. In addition, there are different styles related to the bore of the ball mechanism itself. Depending on the working pressure, ball valves are categorized as low-pressure ball valves and high-pressure ball valves. In most industries, ball valves with working pressures higher than are considered high-pressure ball valves. Usually the maximum working pressure for high-pressure ball valves is and depends on the structure, sizes and sealing materials. The maximum working pressure of high-pressure ball valves can be up to . 
High-pressure ball valves are often used in hydraulic systems, so they are also known as hydraulic ball valves. Ball valves in sizes up to generally come in single-piece, two-piece or three-piece designs. One-piece ball valves are almost always reduced bore, are relatively inexpensive, and are generally replaced instead of repaired. Two-piece ball valves generally have a slightly reduced (or standard) bore, and can be either throw-away or repairable. The three-piece design allows the center part of the valve, containing the ball, stem and seats, to be easily removed from the pipeline. This facilitates efficient cleaning of deposited sediments, replacement of seats and gland packings, and polishing out of small scratches on the ball, all without removing the pipes from the valve body. The design concept of a three-piece valve is for it to be repairable. Each valve is heated to a certain degree, while the excess material is trimmed from the body. Full bore A full bore (sometimes full port) ball valve has an oversized ball so that the hole in the ball is the same size as the pipeline, resulting in lower friction loss. Flow is unrestricted, but the valve is larger and more expensive, so this is only used where free flow is required, for example in pipelines that require pigging. Reduced bore, or reduced port In reduced bore (sometimes reduced port) ball valves, flow through the valve is one pipe size smaller than the valve's pipe size, resulting in the flow area being smaller than the pipe. As the flow discharge remains constant and is equal to the area of flow (A) times velocity (V), the velocity increases with reduced area of flow. V port A V port ball valve has either a 'v'-shaped ball or a 'v'-shaped seat. This allows for linear and even equal-percentage flow characteristics. When the valve is in the closed position and opening is commenced, the small end of the 'v' is opened first, allowing stable flow control during this stage. This type of design requires a generally more robust construction, due to the higher velocities of the fluids, which might damage a standard valve. When machined correctly, these are excellent control valves, offering superior leakage performance. Cavity filler Many industries encounter problems with residues in the ball valve. Where the fluid is meant for human consumption, residues may also be a health hazard, and where the fluid changes from time to time, contamination of one fluid with another may occur. Residues arise because in the half-open position of the ball valve a gap is created between the ball bore and the body, in which fluid can be trapped. To avoid the fluid getting into this cavity, the cavity has to be plugged, which can be done by extending the seats in such a manner that they are always in contact with the ball. This type of ball valve is known as a cavity filler ball valve. There are a few types of ball valves related to the attachment and lateral movement of the ball: Trunnion, floating and actuated A trunnion ball valve has additional mechanical anchoring of the ball at the top and the bottom, suitable for larger and higher-pressure valves (generally above and ). A floating ball valve is one where the ball is not held in place by a trunnion. In normal operation, this will cause the ball to float downstream slightly, causing the seating mechanism to compress under the ball pressing against it. 
Materials of construction Body materials may include, but are not limited to: stainless steel, brass, bronze, chrome, titanium, PVC, CPVC, and PFA-lined metal. There are many different types of seats and seals used in ball valves as well. Valves are manufactured from a range of materials, each suited to particular applications by its chemical compatibility and its pressure and temperature ratings; commonly used body materials include brass, stainless steel, and bronze. Common seat and seal materials include: TMF (valve seat), Delrin, reinforced PTFE (RTFE), polychlorotrifluoroethylene (PCTFE; Kel-F), metal, nylon, PEEK, 50/50 (valve seat), virgin (unfilled) PTFE, ultra-high-molecular-weight polyethylene (UHMWPE), Graphoil, and Viton. See also Butterfly valve Control valve Gate valve Globe valve Hydrogen valve Needle valve Pinch valve Piston valve Plastic pressure pipe systems Thermal expansion References Plumbing valves Valves
Ball valve
[ "Physics", "Chemistry" ]
1,969
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
616,293
https://en.wikipedia.org/wiki/Soft%20matter
Soft matter or soft condensed matter is a type of matter that can be deformed or structurally altered by thermal or mechanical stress of a magnitude similar to that of thermal fluctuations. The science of soft matter is a subfield of condensed matter physics. Soft materials include liquids, colloids, polymers, foams, gels, granular materials, liquid crystals, flesh, and a number of biomaterials. These materials share an important common feature: their predominant physical behaviors occur at an energy scale comparable with room-temperature thermal energy (of order kT), and entropy is considered the dominant factor. At these temperatures, quantum aspects are generally unimportant. When soft materials interact favorably with surfaces, they become squashed without an external compressive force. Pierre-Gilles de Gennes, who has been called the "founding father of soft matter," received the Nobel Prize in Physics in 1991 for discovering that methods developed for studying order phenomena in simple systems can be generalized to the more complex cases found in soft matter, in particular to the behaviors of liquid crystals and polymers. History The current understanding of soft matter grew from Albert Einstein's work on Brownian motion, which recognized that a particle suspended in a fluid must have a thermal energy similar to that of the fluid itself (of order kT). This work built on established research into systems that would now be considered colloids. The crystalline optical properties of liquid crystals and their ability to flow were first described by Friedrich Reinitzer in 1888, and further characterized by Otto Lehmann in 1889. The experimental setup that Lehmann used to investigate the two melting points of cholesteryl benzoate was still in use in liquid crystal research as of 2019. In 1920, Hermann Staudinger, recipient of the 1953 Nobel Prize in Chemistry, was the first to suggest that polymers are formed through covalent bonds that link smaller molecules together. The idea of a macromolecule was unheard of at the time, the scientific consensus being that the recorded high molecular weights of compounds like natural rubber were instead due to particle aggregation. The use of hydrogel in the biomedical field was pioneered in 1960 by Drahoslav Lím and Otto Wichterle. Together, they postulated that the chemical stability, ease of deformation, and permeability of certain polymer networks in aqueous environments would have a significant impact on medicine, and they went on to invent the soft contact lens. These seemingly separate fields were dramatically influenced and brought together by Pierre-Gilles de Gennes. The work of de Gennes across different forms of soft matter was key to understanding its universality: material properties depend not on the chemistry of the underlying structure but on the mesoscopic structures that chemistry creates. He extended the understanding of phase changes in liquid crystals, introduced the idea of reptation to describe the relaxation of polymer systems, and successfully mapped polymer behavior to that of the Ising model. Distinctive physics Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents. Materials termed soft matter exhibit this property due to a shared propensity to self-organize into mesoscopic physical structures.
The assembly of the mesoscale structures that form the macroscale material is governed by low energies, and these low-energy associations allow for the thermal and mechanical deformation of the material. By way of contrast, in hard condensed matter physics it is often possible to predict the overall behavior of a material because the molecules are organized into a crystalline lattice, with no changes in the pattern at any mesoscopic scale. Unlike hard materials, where only small distortions occur from thermal or mechanical agitation, soft matter can undergo local rearrangements of its microscopic building blocks. A defining characteristic of soft matter is the mesoscopic scale of its physical structures. The structures are much larger than the microscopic scale (the arrangement of atoms and molecules), and yet much smaller than the macroscopic (overall) scale of the material. The properties and interactions of these mesoscopic structures may determine the macroscopic behavior of the material. The large number of constituents forming these mesoscopic structures, and the many degrees of freedom this entails, result in a general disorder between the large-scale structures. This disorder leads to the loss of the long-range order that is characteristic of hard matter. For example, the turbulent vortices that naturally occur within a flowing liquid are much smaller than the overall quantity of liquid and yet much larger than its individual molecules, and the emergence of these vortices controls the overall flowing behavior of the material. Likewise, the bubbles that compose a foam are mesoscopic because they individually consist of a vast number of molecules, and yet the foam itself consists of a great number of these bubbles, with the overall mechanical stiffness of the foam emerging from the combined interactions of the bubbles. Typical bond energies in soft matter structures are of a similar scale to thermal energies; the structures are therefore constantly affected by thermal fluctuations and undergo Brownian motion. The ease of deformation and the influence of low-energy interactions regularly result in slow dynamics of the mesoscopic structures, which allows some systems to remain out of equilibrium in metastable states. This characteristic can allow recovery of the initial state through an external stimulus, which is often exploited in research.
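To put this energy scale in numbers, a back-of-envelope estimate at room temperature (T ≈ 298 K; the comparison values below are standard order-of-magnitude figures, not taken from the sources above):

```latex
k_B T \approx \left(1.38\times10^{-23}\,\mathrm{J\,K^{-1}}\right)\left(298\,\mathrm{K}\right)
      \approx 4.1\times10^{-21}\,\mathrm{J}
      \approx 25\,\mathrm{meV}
      \approx 4.1\,\mathrm{pN\,nm}
```

For comparison, a covalent bond at a few eV is on the order of a hundred kT and effectively permanent at room temperature, whereas the weak associations that hold mesoscopic soft matter structures together, at roughly one to ten kT, are continually broken and re-formed by thermal fluctuations.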
Self-assembly is an inherent characteristic of soft matter systems: characteristic complex behavior and hierarchical structures arise spontaneously as a system evolves towards equilibrium. Self-assembly can be classified as static, when the resulting structure is due to a free energy minimum, or dynamic, when the system is caught in a metastable state. Dynamic self-assembly can be exploited in the functional design of soft materials by kinetically trapping these metastable states. Soft materials often exhibit both elastic and viscous responses to external stimuli, such as shear-induced flow or phase transitions. However, excessive external stimuli often result in nonlinear responses. Soft matter becomes highly deformed before crack propagation, which differs significantly from the general fracture mechanics formulation. Rheology, the study of deformation under stress, is often used to investigate the bulk properties of soft matter. Classes of soft matter Soft matter consists of a diverse range of interrelated systems and can be broadly categorized into certain classes. These classes are by no means distinct, as there are often overlaps between two or more groups. Polymers Polymers are large molecules composed of repeating subunits whose characteristics are governed by their environment and composition. They encompass synthetic plastics, natural fibers and rubbers, and biological proteins. Polymer research finds applications in nanotechnology, from materials science and drug delivery to protein crystallization. Foams Foams consist of a liquid or solid through which a gas has been dispersed to form cavities. This structure imparts a large surface-area-to-volume ratio on the system. Foams have found applications in insulation and textiles, and are undergoing active research in the biomedical fields of drug delivery and tissue engineering. Foams are also used in the automotive industry for water and dust sealing and for noise reduction. Gels Gels consist of solvent-insoluble 3D polymer scaffolds, covalently or physically cross-linked, that have a high solvent content. Research into functionalizing gels that are sensitive to mechanical and thermal stress, as well as to solvent choice, has given rise to diverse structures with characteristics such as shape memory or the ability to bind guest molecules selectively and reversibly. Colloids Colloids are non-soluble particles suspended in a medium, such as proteins in an aqueous solution. Research into colloids is primarily focused on understanding the organization of matter, the structures of colloids being large enough, relative to individual molecules, that they can be readily observed. Liquid crystals Liquid crystals can consist of proteins, small molecules, or polymers that can be manipulated to form cohesive order in a specific direction. They exhibit liquid-like behavior in that they can flow, yet they can attain close-to-crystalline alignment. One feature of liquid crystals is their ability to spontaneously break symmetry. Liquid crystals have found significant applications in optical devices such as liquid-crystal displays (LCDs). Biological membranes Biological membranes consist of individual phospholipid molecules that have self-assembled into a bilayer structure through non-covalent interactions. The localized, low energy associated with forming the membrane allows for elastic deformation of the large-scale structure. Experimental characterization Due to the importance of mesoscale structures in the overarching properties of soft matter, experimental work is primarily focused on the bulk properties of materials. Rheology is often used to investigate the physical changes of a material under stress. Biological systems, such as protein crystals, are often investigated through X-ray and neutron crystallography, while nuclear magnetic resonance spectroscopy can be used to understand the average structure and lipid mobility of membranes. Scattering Scattering techniques, such as wide-angle X-ray scattering, small-angle X-ray scattering, neutron scattering, and dynamic light scattering, can also be used to probe the average properties of the constituents. These methods can determine particle-size distribution, shape, crystallinity, and diffusion of the constituents in the system. There are limitations in applying scattering techniques to some systems, as they are best suited to isotropic and dilute samples.
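As an example of how a scattering measurement becomes a material property, dynamic light scattering yields a translational diffusion coefficient D, from which a hydrodynamic radius follows via the Stokes-Einstein relation. The sketch below assumes a dilute suspension of non-interacting spherical particles in water; the numerical input is illustrative, not a real measurement.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(diffusion_coeff, temperature=298.15, viscosity=8.9e-4):
    """Stokes-Einstein relation: r_h = k_B * T / (6 * pi * eta * D).

    diffusion_coeff -- measured diffusion coefficient D, in m^2/s
    temperature     -- absolute temperature, in K
    viscosity       -- solvent dynamic viscosity, in Pa*s (water near 25 C)
    """
    return K_B * temperature / (6.0 * math.pi * viscosity * diffusion_coeff)

if __name__ == "__main__":
    D = 4.4e-12  # m^2/s -- an illustrative DLS result, not real data
    r = hydrodynamic_radius(D)
    print(f"hydrodynamic radius ~ {r * 1e9:.1f} nm")
```

The relation only holds in the dilute, non-interacting limit, which is one reason scattering methods are best suited to dilute samples, as noted above.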
Computational Computational methods are often employed to model and understand soft matter systems, since they allow strict control over the composition and environment of the structures being investigated and can span microscopic to macroscopic length scales. Computational methods are limited, however, by how well they suit the system at hand, and must be regularly validated against experimental results to ensure accuracy. The use of informatics to predict soft matter properties is also a growing field in computer science, thanks to the large amount of data available for soft matter systems.
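To make the computational approach concrete, the following is a minimal sketch of overdamped Brownian dynamics for non-interacting particles, one of the simplest schemes used to simulate soft matter; it is a generic illustration, and the parameter values are arbitrary assumptions rather than a model of any specific system.

```python
# Minimal overdamped Brownian dynamics sketch for N non-interacting
# particles in 2D. Each Euler-Maruyama step displaces a particle by a
# random kick whose variance satisfies <dx^2> = 2 * D * dt per axis.
import numpy as np

def brownian_step(positions, diffusion_coeff, dt, rng):
    """Advance particle positions by one Euler-Maruyama step."""
    kicks = rng.normal(
        scale=np.sqrt(2.0 * diffusion_coeff * dt),
        size=positions.shape,
    )
    return positions + kicks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = np.zeros((1000, 2))        # 1000 particles starting at the origin
    D, dt, steps = 1.0, 1e-3, 1000   # illustrative units
    for _ in range(steps):
        pos = brownian_step(pos, D, dt, rng)
    # Mean squared displacement should approach 2 * d * D * t = 4 D t in 2D.
    msd = np.mean(np.sum(pos**2, axis=1))
    print(f"MSD = {msd:.3f}, expected ~ {4 * D * dt * steps:.3f}")
```

An interacting system would add a deterministic drift term (mobility times force) to each step; the purely diffusive version shown here already reproduces the expected linear growth of the mean squared displacement.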
Microscopy Optical microscopy can be used in the study of colloidal systems, but more advanced methods such as transmission electron microscopy (TEM) and atomic force microscopy (AFM) are often used to characterize forms of soft matter, owing to their applicability at the nanoscale. These imaging techniques are not universally appropriate to all classes of soft matter, and some systems may be better suited to one kind of analysis than another. For example, TEM has limited application to hydrogels because of the sample processing required, whereas fluorescence microscopy can be readily applied. Liquid crystals are often probed with polarized light microscopy to determine the ordering of the material under various conditions, such as temperature or electric field. Applications Soft materials are important in a wide range of technological applications, and each soft material can often be associated with multiple disciplines. Liquid crystals, for example, were originally discovered in the biological sciences, when the botanist and chemist Friedrich Reinitzer was investigating cholesterols; they have since found applications as liquid-crystal displays, liquid crystal tunable filters, and liquid crystal thermometers. Active liquid crystals are another example of soft materials, in which the constituent elements can self-propel. Polymers have found diverse applications, from the natural rubber in latex gloves to the vulcanized rubber in tires, and encompass a large range of soft matter with applications in materials science. An example is hydrogel: because hydrogels can undergo shear thinning, they are well suited to 3D printing, and owing to their stimuli-responsive behavior, 3D-printed hydrogels have found applications in fields as diverse as soft robotics, tissue engineering, and flexible electronics. Polymers also encompass biological molecules such as proteins, where insights from soft matter research have been applied to better understand topics like protein crystallization. Foams can occur naturally, as in the head on a beer, or be created intentionally, as in fire extinguishers. The range of physical properties available to foams has led to applications based on their viscosity: more rigid, self-supporting foams are used as insulation or cushions, while foams that flow are used in the cosmetics industry as shampoos or makeup. Foams have also found biomedical applications in tissue engineering, as scaffolds and biosensors. Historically, the problems considered in the early days of soft matter science were those pertaining to the biological sciences. As such, an important application of soft matter research is biophysics, a major goal of the discipline being the reduction of cell biology to the concepts of soft matter physics. Soft matter concepts are used to understand biologically relevant topics such as membrane mobility and the rheology of blood. See also Biological membranes Biomaterials Colloids Complex fluids Foams Fracture of soft materials Gels Granular materials Liquids Liquid crystals Microemulsions Polymers Protein dynamics Protein structure Surfactants Active matter Roughness References I. Hamley, Introduction to Soft Matter (2nd edition), J. Wiley, Chichester (2000). R. A. L. Jones, Soft Condensed Matter, Oxford University Press, Oxford (2002). T. A. Witten (with P. A. Pincus), Structured Fluids: Polymers, Colloids, Surfactants, Oxford (2004). M. Kleman and O. D. Lavrentovich, Soft Matter Physics: An Introduction, Springer (2003). M. Mitov, Sensitive Matter: Foams, Gels, Liquid Crystals and Other Miracles, Harvard University Press (2012). J. N. Israelachvili, Intermolecular and Surface Forces, Academic Press (2010). A. V. Zvelindovsky (editor), Nanostructured Soft Matter - Experiment, Theory, Simulation and Perspectives, Springer/Dordrecht (2007). M. Daoud and C. E. Williams (editors), Soft Matter Physics, Springer Verlag, Berlin (1999). Gerald H. Ristow, Pattern Formation in Granular Materials, Springer Tracts in Modern Physics, v. 161, Springer, Berlin (2000). P.-G. de Gennes, Soft Matter, Nobel Lecture, December 9, 1991. S. A. Safran, Statistical Thermodynamics of Surfaces, Interfaces and Membranes, Westview Press (2003). R. G. Larson, The Structure and Rheology of Complex Fluids, Oxford University Press (1999). Oleg Gang, Soft Matter and Biomaterials on the Nanoscale: The WSPC Reference on Functional Nanomaterials - Part I (In 4 Volumes), World Scientific Publisher (2020). External links Pierre-Gilles de Gennes' Nobel Lecture American Physical Society Topical Group on Soft Matter (GSOFT) Softbites - a blog run by graduate students and postdocs that makes soft matter more accessible through bite-sized posts that summarize current and classic soft matter research Softmatterworld.org Softmatterresources.com SklogWiki - a wiki dedicated to simple liquids, complex fluids, and soft condensed matter. Harvard School of Engineering and Applied Sciences Soft Matter Wiki - organizes, reviews, and summarizes academic papers on soft matter. Soft Matter Engineering - A group dedicated to Soft Matter Engineering at the University of Florida Google Scholar page on soft matter Condensed matter physics
Soft matter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,150
[ "Soft matter", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]