Dataset columns: id (int64, 39 to 79M); url (string, 32 to 168 characters); text (string, 7 to 145k characters); source (string, 2 to 105 characters); categories (list, 1 to 6 items); token_count (int64, 3 to 32.2k); subcategories (list, 0 to 27 items).
755,647
https://en.wikipedia.org/wiki/Semilattice
In mathematics, a join-semilattice (or upper semilattice) is a partially ordered set that has a join (a least upper bound) for any nonempty finite subset. Dually, a meet-semilattice (or lower semilattice) is a partially ordered set which has a meet (or greatest lower bound) for any nonempty finite subset. Every join-semilattice is a meet-semilattice in the inverse order and vice versa. Semilattices can also be defined algebraically: join and meet are associative, commutative, idempotent binary operations, and any such operation induces a partial order (and the respective inverse order) such that the result of the operation for any two elements is the least upper bound (or greatest lower bound) of the elements with respect to this partial order. A lattice is a partially ordered set that is both a meet- and join-semilattice with respect to the same partial order. Algebraically, a lattice is a set with two associative, commutative idempotent binary operations linked by corresponding absorption laws. Order-theoretic definition A set partially ordered by the binary relation is a meet-semilattice if For all elements and of , the greatest lower bound of the set exists. The greatest lower bound of the set is called the meet of and denoted Replacing "greatest lower bound" with "least upper bound" results in the dual concept of a join-semilattice. The least upper bound of is called the join of and , denoted . Meet and join are binary operations on A simple induction argument shows that the existence of all possible pairwise suprema (infima), as per the definition, implies the existence of all non-empty finite suprema (infima). A join-semilattice is bounded if it has a least element, the join of the empty set. Dually, a meet-semilattice is bounded if it has a greatest element, the meet of the empty set. Other properties may be assumed; see the article on completeness in order theory for more discussion on this subject. That article also discusses how we may rephrase the above definition in terms of the existence of suitable Galois connections between related posets — an approach of special interest for category theoretic investigations of the concept. Algebraic definition A meet-semilattice is an algebraic structure consisting of a set with a binary operation , called meet, such that for all members and of the following identities hold: Associativity Commutativity Idempotency A meet-semilattice is bounded if includes an identity element 1 such that for all in If the symbol , called join, replaces in the definition just given, the structure is called a join-semilattice. One can be ambivalent about the particular choice of symbol for the operation, and speak simply of semilattices. A semilattice is a commutative, idempotent semigroup; i.e., a commutative band. A bounded semilattice is an idempotent commutative monoid. A partial order is induced on a meet-semilattice by setting whenever . For a join-semilattice, the order is induced by setting whenever . In a bounded meet-semilattice, the identity 1 is the greatest element of Similarly, an identity element in a join semilattice is a least element. Connection between the two definitions An order theoretic meet-semilattice gives rise to a binary operation such that is an algebraic meet-semilattice. Conversely, the meet-semilattice gives rise to a binary relation that partially orders in the following way: for all elements and in if and only if The relation introduced in this way defines a partial ordering from which the binary operation may be recovered. 
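As a concrete illustration of the algebraic definition and the induced order (a minimal Python sketch, not part of the article): take the positive integers with gcd as the meet; the three identities hold, and the induced order x ≤ y iff x ∧ y = x is exactly divisibility.

```python
from math import gcd

# Meet-semilattice sketch: positive integers with gcd as the meet operation.
def meet(x: int, y: int) -> int:
    return gcd(x, y)

def leq(x: int, y: int) -> bool:
    # The induced partial order: x <= y  iff  x ∧ y == x (here: x divides y).
    return meet(x, y) == x

# Spot-check the semilattice identities on a few elements.
elems = [1, 2, 3, 4, 6, 12]
for a in elems:
    assert meet(a, a) == a                                      # idempotency
    for b in elems:
        assert meet(a, b) == meet(b, a)                         # commutativity
        for c in elems:
            assert meet(meet(a, b), c) == meet(a, meet(b, c))   # associativity

print(leq(4, 12), leq(4, 6))   # True False: 4 divides 12 but not 6
```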
Conversely, the order induced by the algebraically defined semilattice coincides with that induced by Hence the two definitions may be used interchangeably, depending on which one is more convenient for a particular purpose. A similar conclusion holds for join-semilattices and the dual ordering ≥. Examples Semilattices are employed to construct other order structures, or in conjunction with other completeness properties. A lattice is both a join- and a meet-semilattice. The interaction of these two semilattices via the absorption law is what truly distinguishes a lattice from a semilattice. The compact elements of an algebraic lattice, under the induced partial ordering, form a bounded join-semilattice. By induction on the number of elements, any non-empty finite meet semilattice has a least element and any non-empty finite join semilattice has a greatest element. (In neither case will the semilattice necessarily be bounded.) A totally ordered set is a distributive lattice, hence in particular a meet-semilattice and join-semilattice: any two distinct elements have a greater and lesser one, which are their meet and join. A well-ordered set is further a bounded join-semilattice, as the set as a whole has a least element, hence it is bounded. The natural numbers , with their usual order are a bounded join-semilattice, with least element 0, although they have no greatest element: they are the smallest infinite well-ordered set. Any single-rooted tree (with the single root as the least element) of height is a (generally unbounded) meet-semilattice. Consider for example the set of finite words over some alphabet, ordered by the prefix order. It has a least element (the empty word), which is an annihilator element of the meet operation, but no greatest (identity) element. A Scott domain is a meet-semilattice. Membership in any set can be taken as a model of a semilattice with base set because a semilattice captures the essence of set extensionality. Let denote & Two sets differing only in one or both of the: Order in which their members are listed; Multiplicity of one or more members, are in fact the same set. Commutativity and associativity of assure (1), idempotence, (2). This semilattice is the free semilattice over It is not bounded by because a set is not a member of itself. Classical extensional mereology defines a join-semilattice, with join read as binary fusion. This semilattice is bounded from above by the world individual. Given a set the collection of partitions of is a join-semilattice. In fact, the partial order is given by if such that and the join of two partitions is given by . This semilattice is bounded, with the least element being the singleton partition . Semilattice morphisms The above algebraic definition of a semilattice suggests a notion of morphism between two semilattices. Given two join-semilattices and , a homomorphism of (join-) semilattices is a function such that Hence is just a homomorphism of the two semigroups associated with each semilattice. If and both include a least element 0, then should also be a monoid homomorphism, i.e. we additionally require that In the order-theoretic formulation, these conditions just state that a homomorphism of join-semilattices is a function that preserves binary joins and least elements, if such there be. The obvious dual—replacing with and 0 with 1—transforms this definition of a join-semilattice homomorphism into its meet-semilattice equivalent. 
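A small sketch of a join-semilattice homomorphism, using toy structures chosen purely for illustration: the subsets of a set under union, mapped into the two-element join-semilattice ({False, True}, or) by membership of a fixed element.

```python
from itertools import combinations

# Bounded join-semilattice: all subsets of S under union; least element is the empty set.
S = {1, 2, 3}
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def join(a, b):
    return a | b

# Candidate homomorphism into ({False, True}, or): f(A) = "does A contain 2?"
def f(a):
    return 2 in a

# f(a ∨ b) == f(a) ∨ f(b) for all pairs, and the least element maps to the least element.
assert all(f(join(a, b)) == (f(a) or f(b)) for a in subsets for b in subsets)
assert f(frozenset()) is False
print("f is a homomorphism of bounded join-semilattices")
```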
Note that any semilattice homomorphism is necessarily monotone with respect to the associated ordering relation. For an explanation see the entry preservation of limits. Equivalence with algebraic lattices There is a well-known equivalence between the category of join-semilattices with zero with -homomorphisms and the category of algebraic lattices with compactness-preserving complete join-homomorphisms, as follows. With a join-semilattice with zero, we associate its ideal lattice . With a -homomorphism of -semilattices, we associate the map , that with any ideal of associates the ideal of generated by . This defines a functor . Conversely, with every algebraic lattice we associate the -semilattice of all compact elements of , and with every compactness-preserving complete join-homomorphism between algebraic lattices we associate the restriction . This defines a functor . The pair defines a category equivalence between and . Distributive semilattices Surprisingly, there is a notion of "distributivity" applicable to semilattices, even though distributivity conventionally requires the interaction of two binary operations. This notion requires but a single operation, and generalizes the distributivity condition for lattices. A join-semilattice is distributive if for all and with there exist and such that Distributive meet-semilattices are defined dually. These definitions are justified by the fact that any distributive join-semilattice in which binary meets exist is a distributive lattice. See the entry distributivity (order theory). A join-semilattice is distributive if and only if the lattice of its ideals (under inclusion) is distributive. Complete semilattices Nowadays, the term "complete semilattice" has no generally accepted meaning, and various mutually inconsistent definitions exist. If completeness is taken to require the existence of all infinite joins, or all infinite meets, whichever the case may be, as well as finite ones, this immediately leads to partial orders that are in fact complete lattices. For why the existence of all possible infinite joins entails the existence of all possible infinite meets (and vice versa), see the entry completeness (order theory). Nevertheless, the literature on occasion still takes complete join- or meet-semilattices to be complete lattices. In this case, "completeness" denotes a restriction on the scope of the homomorphisms. Specifically, a complete join-semilattice requires that the homomorphisms preserve all joins, but contrary to the situation we find for completeness properties, this does not require that homomorphisms preserve all meets. On the other hand, we can conclude that every such mapping is the lower adjoint of some Galois connection. The corresponding (unique) upper adjoint will then be a homomorphism of complete meet-semilattices. This gives rise to a number of useful categorical dualities between the categories of all complete semilattices with morphisms preserving all meets or joins, respectively. Another usage of "complete meet-semilattice" refers to a bounded complete cpo. A complete meet-semilattice in this sense is arguably the "most complete" meet-semilattice that is not necessarily a complete lattice. Indeed, a complete meet-semilattice has all non-empty meets (which is equivalent to being bounded complete) and all directed joins. If such a structure has also a greatest element (the meet of the empty set), it is also a complete lattice. Thus a complete semilattice turns out to be "a complete lattice possibly lacking a top". 
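To make the ideal lattice mentioned above concrete, here is a small sketch (the divisors of 12 under divisibility are an assumed toy example, not from the article) computing the ideal generated by a subset of a finite join-semilattice with zero, i.e. the smallest downward-closed, join-closed subset containing it. The collection of all such ideals, ordered by inclusion, is the ideal lattice.

```python
from math import lcm

# Finite join-semilattice with zero: divisors of 12 under divisibility; join = lcm, zero = 1.
elems = [1, 2, 3, 4, 6, 12]
def join(a, b): return lcm(a, b)
def leq(a, b):  return b % a == 0          # a <= b  iff  a divides b

def ideal_generated_by(xs):
    """Smallest subset containing xs that is downward closed and closed under binary joins."""
    ideal = {1} | set(xs)
    changed = True
    while changed:
        changed = False
        for a in list(ideal):              # close under binary joins
            for b in list(ideal):
                j = join(a, b)
                if j not in ideal:
                    ideal.add(j); changed = True
        for a in elems:                    # close downward
            if a not in ideal and any(leq(a, b) for b in ideal):
                ideal.add(a); changed = True
    return sorted(ideal)

print(ideal_generated_by([2, 3]))   # [1, 2, 3, 6]: the principal ideal below 2 ∨ 3 = 6
```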
This definition is of interest specifically in domain theory, where bounded complete algebraic cpos are studied as Scott domains. Hence Scott domains have been called algebraic semilattices. Cardinality-restricted notions of completeness for semilattices have been rarely considered in the literature. Free semilattices This section presupposes some knowledge of category theory. In various situations, free semilattices exist. For example, the forgetful functor from the category of join-semilattices (and their homomorphisms) to the category of sets (and functions) admits a left adjoint. Therefore, the free join-semilattice over a set is constructed by taking the collection of all non-empty finite subsets of ordered by subset inclusion. Clearly, can be embedded into by a mapping that takes any element in to the singleton set Then any function from a to a join-semilattice (more formally, to the underlying set of ) induces a unique homomorphism between the join-semilattices and such that Explicitly, is given by Now the obvious uniqueness of suffices to obtain the required adjunction—the morphism-part of the functor can be derived from general considerations (see adjoint functors). The case of free meet-semilattices is dual, using the opposite subset inclusion as an ordering. For join-semilattices with bottom, we just add the empty set to the above collection of subsets. In addition, semilattices often serve as generators for free objects within other categories. Notably, both the forgetful functors from the category of frames and frame-homomorphisms, and from the category of distributive lattices and lattice-homomorphisms, have a left adjoint. See also − generalization of join semilattice Notes References It is often the case that standard treatments of lattice theory define a semilattice, if that, and then say no more. See the references in the entries order theory and lattice theory. Moreover, there is no literature on semilattices of comparable magnitude to that on semigroups. External links Jipsen's algebra structures page: Semilattices. Lattice theory Algebraic structures
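Returning to the free join-semilattice described in the Free semilattices section above: a minimal sketch of its universal property, assuming an illustrative target semilattice (integers under max) and an arbitrary map f; the extension joins the images of the elements of each non-empty finite subset.

```python
from functools import reduce

# Free join-semilattice over S: non-empty finite subsets of S, with union as the join.
# Universal property (sketch): any map f from S into a join-semilattice L extends uniquely
# to a homomorphism fhat with fhat({x}) = f(x), namely fhat(A) = join of f(x) over x in A.
S = ["a", "b", "c"]

# Target join-semilattice L: integers under max (an arbitrary illustrative choice).
f = {"a": 1, "b": 5, "c": 3}
def join_L(x, y): return max(x, y)

def fhat(A):
    return reduce(join_L, (f[x] for x in A))

# Homomorphism check on a few non-empty subsets: fhat(A ∪ B) == fhat(A) ∨ fhat(B).
subsets = [{"a"}, {"b"}, {"a", "c"}, {"a", "b", "c"}]
assert all(fhat(A | B) == join_L(fhat(A), fhat(B)) for A in subsets for B in subsets)
print(fhat({"a", "c"}))   # 3
```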
Semilattice
[ "Mathematics" ]
2,886
[ "Mathematical structures", "Lattice theory", "Mathematical objects", "Fields of abstract algebra", "Algebraic structures", "Order theory" ]
12,704,641
https://en.wikipedia.org/wiki/Tits%20alternative
In mathematics, the Tits alternative, named after Jacques Tits, is an important theorem about the structure of finitely generated linear groups. Statement The theorem, proven by Tits, is stated as follows: every finitely generated linear group over a field is either virtually solvable (that is, it has a solvable subgroup of finite index) or it contains a non-abelian free subgroup. Consequences A linear group is not amenable if and only if it contains a non-abelian free group (thus the von Neumann conjecture, while not true in general, holds for linear groups). The Tits alternative is an important ingredient in the proof of Gromov's theorem on groups of polynomial growth. In fact the alternative essentially establishes the result for linear groups (it reduces it to the case of solvable groups, which can be dealt with by elementary means). Generalizations In geometric group theory, a group G is said to satisfy the Tits alternative if for every subgroup H of G either H is virtually solvable or H contains a nonabelian free subgroup (in some versions of the definition this condition is only required to be satisfied for all finitely generated subgroups of G). Examples of groups satisfying the Tits alternative which are either not linear, or at least not known to be linear, are: Hyperbolic groups; Mapping class groups; Out(Fn); Certain groups of birational transformations of algebraic surfaces. Examples of groups not satisfying the Tits alternative are: the Grigorchuk group; Thompson's group F. Proof The proof of the original Tits alternative proceeds by looking at the Zariski closure of G in GL_n(k). If it is solvable then the group is solvable. Otherwise one looks at the image of G in the Levi component. If it is noncompact then a ping-pong argument finishes the proof. If it is compact then either all eigenvalues of elements in the image of G are roots of unity, in which case the image is finite, or one can find an embedding of the field k into a local field in which one can apply the ping-pong strategy. Note that the proofs of all the generalisations above also rest on a ping-pong argument. References Infinite group theory Geometric group theory Theorems in group theory
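As a concrete illustration of the free-subgroup branch of the alternative (not part of Tits's proof): by Sanov's theorem the matrices [[1, 2], [0, 1]] and [[1, 0], [2, 1]] generate a free subgroup of SL(2, Z). The sketch below merely spot-checks that no short non-trivial freely reduced word in these generators equals the identity.

```python
import itertools

# Sanov's generators: a concrete instance of the "contains a non-abelian free subgroup" branch.
def mul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

gens = {"a": ((1, 2), (0, 1)), "A": ((1, -2), (0, 1)),     # A = a^-1
        "b": ((1, 0), (2, 1)), "B": ((1, 0), (-2, 1))}     # B = b^-1
inverse_of = {"a": "A", "A": "a", "b": "B", "B": "b"}
identity = ((1, 0), (0, 1))

# Freeness means no non-trivial freely reduced word equals the identity; check lengths <= 6.
for n in range(1, 7):
    for w in itertools.product("aAbB", repeat=n):
        if any(w[i + 1] == inverse_of[w[i]] for i in range(n - 1)):
            continue  # skip words that are not freely reduced
        m = identity
        for letter in w:
            m = mul(m, gens[letter])
        assert m != identity, f"unexpected relation: {''.join(w)}"
print("no non-trivial reduced word of length <= 6 equals the identity")
```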
Tits alternative
[ "Physics" ]
417
[ "Geometric group theory", "Group actions", "Symmetry" ]
12,706,742
https://en.wikipedia.org/wiki/Directive%2096/82/EC
Council Directive 96/82/EC of 9 December 1996 on the control of major-accident hazards involving dangerous substances (as amended) is a European Union law aimed at improving the safety of sites containing large quantities of dangerous substances. It is also known as the Seveso II Directive, after the Seveso disaster. It replaced the Seveso Directive and was in turn modified by the Seveso III directive (2012/18/EU). See also Seveso Directive Control of Major Accident Hazards Regulations 1999 External links Council Directive 96/82/EC of 9 December 1996 on the control of major-accident hazards involving dangerous substances Summaries of EU legislation > Environment > Civil protection > Major accidents involving dangerous substances European Commission page about the Seveso Directives Seveso III Directive (2012/18/EU) European Union directives 1996/82 1996 in law 1996 in the European Union Environmental law in the European Union Regulation of chemicals in the European Union Process safety Safety codes
Directive 96/82/EC
[ "Chemistry", "Engineering" ]
201
[ "Regulation of chemicals in the European Union", "Safety engineering", "Regulation of chemicals", "Process safety", "Chemical process engineering" ]
12,706,857
https://en.wikipedia.org/wiki/Aircraft%20specific%20energy
Aircraft-specific energy is a form of specific energy applied to aircraft and missile trajectory analysis. It represents the combined kinetic and potential energy of the vehicle at any given time. It is the total energy of the vehicle (relative to the Earth's surface) per unit weight of the vehicle. Being independent of the mass of the vehicle, it provides a powerful tool for the design of optimal trajectories. Aircraft-specific energy is very similar to specific orbital energy except that it is expressed as a positive quantity. A zero value of aircraft-specific energy represents an aircraft at rest on the Earth's surface, and the value increases as speed and altitude increase. As with other forms of specific energy, aircraft-specific energy is an intensive property and is represented in units of length since it is independent of the mass of the vehicle. That is, while the specific energy may be expressed in joules per kilogram (J/kg), the specific energy height may be expressed in meters by the formula h + v^2/(2g), where v is the airspeed of the aircraft, g is the acceleration due to gravity, and h is the altitude of the aircraft. Applications The field of trajectory optimization has made use of the concept since the 1950s in the form of energy analysis. In this approach, the specific energy is defined as one of the dynamic states of the problem and is the slowest-varying state. All other states, such as altitude and flight path angle, are approximated as infinitely fast compared to the specific energy dynamics. This assumption allows the solution of optimal trajectories in a relatively simple form. The specific energy is computed as the total energy (as defined above, relative to the Earth's surface) divided by the mass of the vehicle. It is a key element in the performance of aircraft and rockets. For a rocket flying vertically (in a vacuum), it is the apogee that the rocket would reach. Aircraft-specific energy is used extensively in the energy–maneuverability theory governing modern aircraft dogfighting tactics. The primary goal of air combat manoeuvring is to maintain an optimal aircraft-specific energy. Speed gives an aircraft the ability to outmaneuver adversaries, and altitude can be converted into speed, while also providing extended range for guided munitions (due to lower air density and therefore lower drag at any given velocity). Aircraft such as the F-16 Fighting Falcon were designed in accordance with energy-maneuverability theory, allowing the aircraft to gain aircraft-specific energy as quickly as possible. References Aerodynamics
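A minimal numerical sketch of the specific energy height formula above (the function name and example values are illustrative, not from the article):

```python
def specific_energy_height(v_mps: float, h_m: float, g: float = 9.80665) -> float:
    """Specific energy height E_s = h + v^2 / (2 g), in metres.

    v_mps: true airspeed in m/s, h_m: altitude in m (toy values below are illustrative).
    """
    return h_m + v_mps ** 2 / (2.0 * g)

# Example: an aircraft at 250 m/s and 10,000 m altitude.
print(round(specific_energy_height(250.0, 10_000.0), 1))  # approximately 13186.6 m
```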
Aircraft specific energy
[ "Chemistry", "Engineering" ]
527
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics stubs", "Fluid dynamics" ]
12,707,594
https://en.wikipedia.org/wiki/Magnetic-activated%20cell%20sorting
Magnetic-activated cell sorting (MACS) is a method for separating various cell populations depending on their surface antigens (CD molecules), invented by Miltenyi Biotec. The name MACS is a registered trademark of the company. The method was developed with Miltenyi Biotec's MACS system, which uses superparamagnetic nanoparticles and columns. The superparamagnetic nanoparticles are on the order of 100 nm. They are used to tag the targeted cells in order to capture them inside the column. The column is placed between permanent magnets so that when the magnetic particle-cell complex passes through it, the tagged cells can be captured. The column contains steel wool, which increases the magnetic field gradient to maximize separation efficiency when the column is placed between the permanent magnets. Magnetic-activated cell sorting is a commonly used method in areas like immunology, cancer research, neuroscience, and stem cell research. Miltenyi sells microbeads, which are magnetic nanoparticles conjugated to antibodies that can be used to target specific cells. In the assisted reproductive technology (ART) field, apoptotic spermatozoa (those programmed to die) are bound, via annexin V (a membrane apoptosis marker), by specific monoclonal antibodies conjugated to magnetic microspheres. A MACS column can then be used to separate the healthy spermatozoa from the apoptotic ones. Procedure The MACS method allows cells to be separated by using magnetic nanoparticles coated with antibodies against a particular surface antigen. This causes the cells expressing this antigen to attach to the magnetic nanoparticles. After incubating the beads and cells, the solution is transferred to a column in a strong magnetic field. In this step, the cells attached to the nanoparticles (expressing the antigen) stay on the column, while other cells (not expressing the antigen) flow through. With this method, the cells can be separated positively or negatively with respect to the particular antigen(s). Positive and negative selection With positive selection, the cells expressing the antigen(s) of interest, which attached to the magnetic column, are washed out into a separate vessel after removing the column from the magnetic field. This method is useful for the isolation of a particular cell type, for instance CD4 lymphocytes. Moreover, it enables early detection of sperm that have initiated apoptosis, even though they may still show adequate appearance and motility. A magnetically labelled annexin conjugate is added to the sperm. Inside normal cells, phosphatidylserine molecules are located on the cytoplasmic side of the cell membrane. In cells that have initiated the apoptotic process, however, phosphatidylserine instead faces the outer side of the cell membrane and binds to the annexin conjugate. Therefore, normal spermatozoa will pass through the column without binding to the labelled conjugate, while pro-apoptotic sperm will remain trapped; this amounts to a sperm selection process based on the magnetically labelled marker. Finally, this technique has shown its efficacy, even though its use remains limited. With negative selection, the antibody used is directed against surface antigen(s) known to be present on the cells that are not of interest.
After administration of the cells/magnetic nanoparticles solution onto the column the cells expressing these antigens bind to the column and the fraction that goes through is collected, as it contains almost no cells with these undesired antigens. Modifications Magnetic nanoparticles conjugated to an antibody against an antigen of interest are not always available, but there is a way to circumvent it. Since fluorophore-conjugated antibodies are much more prevalent, it is possible to use magnetic nanoparticles coated with anti-fluorochrome antibodies. They are incubated with the fluorescent-labelled antibodies against the antigen of interest and may thus serve for cell separation with respect to the antigen. See also Dynabeads Flow cytometry Clusters of differentiation Molecular biology References S. Miltenyi, W. Muller, W. Weichel, and A. Radbruch, “High Gradient Magnetic Cell Separation With MACS,” Cytometry, vol. 11, no. 2, pp. 231–238, 1990.
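A toy sketch of the positive/negative selection logic described above; this is not a laboratory protocol, and the cell names and antigens are illustrative placeholders:

```python
# Cells are retained in or pass through the column depending on whether they carry
# the bead-targeted antigen.
cells = [
    {"name": "T helper cell", "antigens": {"CD3", "CD4"}},
    {"name": "cytotoxic T cell", "antigens": {"CD3", "CD8"}},
    {"name": "B cell", "antigens": {"CD19"}},
]

def macs(cells, bead_antigen, mode):
    retained = [c for c in cells if bead_antigen in c["antigens"]]        # held by the magnet
    flow_through = [c for c in cells if bead_antigen not in c["antigens"]]
    # Positive selection keeps the labelled (retained) fraction; negative selection keeps
    # the unlabelled flow-through, leaving the cells of interest untouched by beads.
    return retained if mode == "positive" else flow_through

print([c["name"] for c in macs(cells, "CD4", "positive")])   # ['T helper cell']
print([c["name"] for c in macs(cells, "CD19", "negative")])  # ['T helper cell', 'cytotoxic T cell']
```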
Magnetic-activated cell sorting
[ "Chemistry", "Biology" ]
911
[ "Biochemistry", "Molecular biology" ]
12,711,268
https://en.wikipedia.org/wiki/Anti-symmetric%20operator
In quantum mechanics, a raising or lowering operator (collectively known as ladder operators) is an operator that increases or decreases the eigenvalue of another operator. In quantum mechanics, the raising operator is sometimes called the creation operator, and the lowering operator the annihilation operator. Well-known applications of ladder operators in quantum mechanics are in the formalisms of the quantum harmonic oscillator and angular momentum. Introduction Another type of operator in quantum field theory, discovered in the early 1970s, is known as the anti-symmetric operator. This operator, similar to spin in non-relativistic quantum mechanics is a ladder operator that can create two fermions of opposite spin out of a boson or a boson from two fermions. A Fermion, named after Enrico Fermi, is a particle with a half-integer spin, such as electrons and protons. This is a matter particle. A boson, named after S. N. Bose, is a particle with full integer spin, such as photons and W's. This is a force carrying particle. Spin First, we will review spin for non-relativistic quantum mechanics. Spin, an intrinsic property similar to angular momentum, is defined by a spin operator S that plays a role on a system similar to the operator L for orbital angular momentum. The operators and whose eigenvalues are and respectively. These formalisms also obey the usual commutation relations for angular momentum , , and . The raising and lowering operators, and , are defined as and respectively. These ladder operators act on the state in the following and respectively. The operators S_x and S_y can be determined using the ladder method. In the case of the spin 1/2 case (fermion), the operator acting on a state produces and . Likewise, the operator acting on a state produces and . The matrix representations of these operators are constructed as follows: Therefore, and can be represented by the matrix representations: Recalling the generalized uncertainty relation for two operators A and B, , we can immediately see that the uncertainty relation of the operators and are as follows: Therefore, like orbital angular momentum, we can only specify one coordinate at a time. We specify the operators and . Application in quantum field theory The creation of a particle and anti-particle from a boson is defined similarly but for infinite dimensions. Therefore, the Levi-Civita symbol for infinite dimensions is introduced. The commutation relations are simply carried over to infinite dimensions . is now equal to where n=∞. Its eigenvalue is . Defining the magnetic quantum number, angular momentum projected in the z direction, is more challenging than the simple state of spin. The problem becomes analogous to moment of inertia in classical mechanics and is generalizable to n dimensions. It is this property that allows for the creation and annihilation of bosons. Bosons Characterized by their spin, a bosonic field can be scalar fields, vector fields and even tensor fields. To illustrate, the electromagnetic field quantized is the photon field, which can be quantized using conventional methods of canonical or path integral quantization. This has led to the theory of quantum electrodynamics, arguably the most successful theory in physics. The graviton field is the quantized gravitational field. There is yet to be a theory that quantizes the gravitational field, but theories such as string theory can be thought of the gravitational field quantized. 
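Returning to the Spin section above, a minimal numerical sketch (with ħ = 1) of how the spin-1/2 operators are recovered from the ladder operators; the matrices are the standard ones, and the check is purely illustrative:

```python
import numpy as np

# Spin-1/2 ladder operators in the {|up>, |down>} basis, with hbar = 1:
# S+ |down> = |up>,  S- |up> = |down>.
S_plus  = np.array([[0, 1], [0, 0]], dtype=complex)
S_minus = np.array([[0, 0], [1, 0]], dtype=complex)

# Recover S_x and S_y from the ladder operators, as in the ladder method above.
S_x = (S_plus + S_minus) / 2
S_y = (S_plus - S_minus) / (2j)
S_z = np.diag([0.5, -0.5]).astype(complex)

# Check the angular-momentum commutation relation [S_x, S_y] = i S_z.
commutator = S_x @ S_y - S_y @ S_x
assert np.allclose(commutator, 1j * S_z)

print(2 * S_x)   # equals the Pauli matrix sigma_x
print(2 * S_y)   # equals the Pauli matrix sigma_y
```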
An example of a non-relativistic bosonic field is that describing cold bosonic atoms, such as Helium-4. Free bosonic fields obey commutation relations: , To illustrate, suppose we have a system of N bosons that occupy mutually orthogonal single-particle states , etc. Using the usual representation, we demonstrate the system by assigning a state to each particle and then imposing exchange symmetry. This wave equation can be represented using a second quantized approach, known as second quantization. The number of particles in each single-particle state is listed. The creation and annihilation operators, which add and subtract particles from multi-particle states. These creation and annihilation operators are very similar to those defined for the quantum harmonic oscillator, which added and subtracted energy quanta. However, these operators literally create and annihilate particles with a given quantum state. The bosonic annihilation operator and creation operator have the following effects: Like the creation and annihilation operators and also found in quantum field theory, the creation and annihilation operators and act on bosons in multi-particle states. While and allows us to determine whether a particle was created or destroyed in a system, the spin operators and allow us to determine how. A photon can become both a positron and electron and vice versa. Because of the anti-symmetric statistics, a particle of spin obeys the Pauli-Exclusion Rule. Two particles can exist in the same state if and only if the spin of the particle is opposite. Back to our example, the spin state of the particle is spin-1. Symmetric particles, or bosons, need not obey the Pauli-Exclusion Principle so therefore we can represent the spin state of the particle as follows: and The annihilation spin operator, as its name implies, annihilates a photon into both an electron and positron. Likewise, the creation spin operator creates a photon. The photon can be in either the first state or the second state in this example. If we apply the linear momentum operator Fermions Therefore, we define the operator and . In the case of the non-relativistic particle, if is applied to a fermion twice, the resulting eigenvalue is 0. Similarly, the eigenvalue is 0 when is applied to a fermion twice. This relation satisfies the Pauli Exclusion Principle. However, bosons are symmetric particles, which do not obey the Pauli Exclusion Principle. References Quantum mechanics Quantum field theory
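A minimal sketch of bosonic creation and annihilation operators on a truncated Fock space (the truncation size is an arbitrary choice for illustration):

```python
import numpy as np

# Bosonic operators on the truncated Fock space |0>, |1>, ..., |N-1>:
# a |n> = sqrt(n) |n-1>,   a_dag |n> = sqrt(n+1) |n+1>.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
a_dag = a.T                                   # creation operator
number = a_dag @ a                            # number operator, eigenvalues 0..N-1

# The canonical commutation relation [a, a_dag] = 1 holds except in the last row/column,
# which is an artifact of truncating the infinite-dimensional space.
comm = a @ a_dag - a_dag @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# Creating a particle in the vacuum: a_dag |0> = |1>.
vacuum = np.zeros(N); vacuum[0] = 1.0
print(a_dag @ vacuum)   # [0., 1., 0., ...]
```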
Anti-symmetric operator
[ "Physics" ]
1,233
[ "Quantum field theory", "Quantum operators", "Quantum mechanics" ]
36,603
https://en.wikipedia.org/wiki/Rotaxane
A rotaxane () is a mechanically interlocked molecular architecture consisting of a dumbbell-shaped molecule which is threaded through a macrocycle (see graphical representation). The two components of a rotaxane are kinetically trapped since the ends of the dumbbell (often called stoppers) are larger than the internal diameter of the ring and prevent dissociation (unthreading) of the components since this would require significant distortion of the covalent bonds. Much of the research concerning rotaxanes and other mechanically interlocked molecular architectures, such as catenanes, has been focused on their efficient synthesis or their utilization as artificial molecular machines. However, examples of rotaxane substructure have been found in naturally occurring peptides, including: cystine knot peptides, cyclotides or lasso-peptides such as microcin J25. Synthesis The earliest reported synthesis of a rotaxane, in 1967, relied on the statistical probability that if two halves of a dumbbell-shaped molecule were reacted in the presence of a macrocycle, some small percentage would connect through the ring. To obtain a reasonable quantity of rotaxane, the macrocycle was attached to a solid-phase support and treated with both halves of the dumbbell 70 times and then severed from the support to give a 6% yield. However, the synthesis of rotaxanes has advanced significantly and efficient yields can be obtained by preorganizing the components utilizing hydrogen bonding, metal coordination, hydrophobic forces, covalent bonds, or coulombic interactions. The three most common strategies to synthesize rotaxanes are "capping", "clipping", and "slipping", though others do exist. Recently, Leigh and co-workers described a new pathway to mechanically interlocked architectures involving a transition-metal center that can catalyse a reaction through the cavity of a macrocycle. Capping Synthesis via the capping method relies strongly upon a thermodynamically driven template effect; that is, the "thread" is held within the "macrocycle" by non-covalent interactions, for example rotaxinations with cyclodextrin macrocycles involve exploitation of the hydrophobic effect. This dynamic complex or pseudorotaxane is then converted to the rotaxane by reacting the ends of the threaded guest with large groups, preventing dissociation. Clipping The clipping method is similar to the capping reaction except that in this case the dumbbell-shaped molecule is complete and is bound to a partial macrocycle. The partial macrocycle then undergoes a ring-closing reaction around the dumbbell-shaped molecule, forming the rotaxane. Slipping The method of slipping is one which exploits the thermodynamic stability of the rotaxane. If the end groups of the dumbbell are of an appropriate size, it will be able to thread reversibly through the macrocycle at higher temperatures. By cooling the dynamic complex, it becomes kinetically trapped as a rotaxane at the lower temperature. Snapping Snapping involves two separate parts of the thread, both containing a bulky group. One part of the thread is first threaded through the macrocycle, forming a semi-rotaxane, and the open end is then closed off by the other part of the thread, forming the rotaxane.
"Active template" methodology Leigh and co-workers recently began to explore a strategy in which template ions could also play an active role in promoting the crucial final covalent bond forming reaction that captures the interlocked structure (i.e., the metal has a dual function, acting as a template for entwining the precursors and catalyzing covalent bond formation between the reactants). Potential applications Molecular machines Rotaxane-based molecular machines have been of initial interest for their potential use in molecular electronics as logic molecular switching elements and as molecular shuttles. These molecular machines are usually based on the movement of the macrocycle on the dumbbell. The macrocycle can rotate around the axis of the dumbbell like a wheel and axle or it can slide along its axis from one site to another. Controlling the position of the macrocycle allows the rotaxane to function as a molecular switch, with each possible location of the macrocycle corresponding to a different state. These rotaxane machines can be manipulated both by chemical and photochemical inputs. Rotaxane based systems have also been shown to function as molecular muscles. In 2009, there was a report of a "domino effect" from one extremity to the other in a Glycorotaxane Molecular Machine. In this case, the 4C1 or 1C4 chair-like conformation of the mannopyranoside stopper can be controlled, depending on the localization of the macrocycle. In 2012, unique pseudo-macrocycles consisting of double-lasso molecular machines (also called rotamacrocycles) were reported in Chem. Sci. These structures can be tightened or loosened depending on pH. A controllable jump rope movement was also observed in these new molecular machines. Ultrastable dyes Potential application as long-lasting dyes is based on the enhanced stability of the inner portion of the dumbbell-shaped molecule. Studies with cyclodextrin-protected rotaxane azo dyes established this characteristic. More reactive squaraine dyes have also been shown to have enhanced stability by preventing nucleophilic attack of the inner squaraine moiety. The enhanced stability of rotaxane dyes is attributed to the insulating effect of the macrocycle, which is able to block interactions with other molecules. Nanorecording In a nanorecording application, a certain rotaxane is deposited as a Langmuir–Blodgett film on ITO-coated glass. When a positive voltage is applied with the tip of a scanning tunneling microscope probe, the rotaxane rings in the tip area switch to a different part of the dumbbell and the resulting new conformation makes the molecules stick out 0.3 nanometer from the surface. This height difference is sufficient for a memory dot. It is not yet known how to erase such a nanorecording film. Nomenclature Accepted nomenclature is to designate the number of components of the rotaxane in brackets as a prefix. Therefore, the a rotaxane consisting of a single dumbbell-shaped axial molecule with a single macrocycle around its shaft is called a [2]rotaxane, and two cyanostar molecules around the central phosphate group of dialkylphosphate is a [3]rotaxane. See also Catenane Mechanically interlocked molecular architecture Molecular Borromean rings Molecular knots Polyrotaxane References Supramolecular chemistry Molecular electronics Organic semiconductors Molecular topology Articles containing video clips
Rotaxane
[ "Chemistry", "Materials_science", "Mathematics" ]
1,394
[ "Molecular physics", "Semiconductor materials", "Molecular electronics", "Supramolecular chemistry", "Molecular topology", "Topology", "nan", "Nanotechnology", "Organic semiconductors" ]
36,605
https://en.wikipedia.org/wiki/Molecular%20electronics
Molecular electronics is the study and application of molecular building blocks for the fabrication of electronic components. It is an interdisciplinary area that spans physics, chemistry, and materials science. It provides a potential means to extend Moore's Law beyond the foreseen limits of small-scale conventional silicon integrated circuits. Molecular scale electronics Molecular scale electronics, also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components. Because single molecules constitute the smallest stable structures possible, this miniaturization is the ultimate goal for shrinking electrical circuits. Conventional electronic devices are traditionally made from bulk materials. Bulk methods have inherent limits, and are growing increasingly demanding and costly. Thus, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom up) as opposed to carving them out of bulk material (top down). In single-molecule electronics, the bulk material is replaced by single molecules. The molecules used have properties that resemble traditional electronic components such as a wire, transistor, or rectifier. Single-molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular sized compounds are still very far from being realized. However, the continuous demand for more computing power, together with the inherent limits of the present day lithographic methods make the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways to obtain reliable and reproducible contacts between the molecular components and the bulk material of the electrodes. Molecular electronics operates at distances less than 100 nanometers. Miniaturization down to single molecules brings the scale down to a regime where quantum mechanics effects are important. In contrast to the case in conventional electronic components, where electrons can be filled in or drawn out more or less like a continuous flow of electric charge, the transfer of a single electron alters the system significantly. The significant amount of energy due to charging has to be taken into account when making calculations about the electronic properties of the setup and is highly sensitive to distances to conducting surfaces nearby. One of the biggest problems with measuring on single molecules is to establish reproducible electrical contact with only one molecule and doing so without shortcutting the electrodes. Because the current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (in the order of nanometers), alternative strategies are used. These include molecular-sized gaps called break junctions, in which a thin electrode is stretched until it breaks. One of the ways to overcome the gap size issue is by trapping molecular functionalized nanoparticles (internanoparticle spacing is matchable to the size of molecules), and later target the molecule by place exchange reaction. Another method is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate. 
Another popular way to anchor molecules to the electrodes is to make use of sulfur's high chemical affinity to gold; though useful, the anchoring is non-specific and thus anchors the molecules randomly to all gold surfaces, and the contact resistance is highly dependent on the precise atomic geometry around the site of anchoring and thereby inherently compromises the reproducibility of the connection. To circumvent the latter issue, experiments have shown that fullerenes could be a good candidate for use instead of sulfur because of the large conjugated π-system that can electrically contact many more atoms at once than a single atom of sulfur. The shift from metal electrodes to semiconductor electrodes allows for more tailored properties and thus for more interesting applications. There are some concepts for contacting organic molecules using semiconductor-only electrodes, for example by using indium arsenide nanowires with an embedded segment of the wider bandgap material indium phosphide used as an electronic barrier to be bridged by molecules. One of the biggest hindrances for single-molecule electronics to be commercially exploited is the lack of means to connect a molecular sized circuit to bulk electrodes in a way that gives reproducible results. Also problematic is that some measurements on single molecules are done at cryogenic temperatures, near absolute zero, which is very energy consuming. History The first time in history molecular electronics are mentioned was in 1956 by the German physicist Arthur Von Hippel, who suggested a bottom-up procedure of developing electronics from atoms and molecules rather than using prefabricated materials, an idea he named molecular engineering. However the first breakthrough in the field is considered by many the article by Aviram and Ratner in 1974. In this article named Molecular Rectifiers, they presented a theoretical calculation of transport through a modified charge-transfer molecule with donor acceptor groups that would allow transport only in one direction, essentially like a semiconductor diode. This was a breakthrough that inspired many years of research in the field of molecular electronics. Molecular materials for electronics The biggest advantage of conductive polymers is their processability, mainly by dispersion. Conductive polymers are not plastics, i.e., they are not thermoformable, yet they are organic polymers, like (insulating) polymers. They can offer high electrical conductivity but have different mechanical properties than other commercially used polymers. The electrical properties can be fine-tuned using the methods of organic synthesis and of advanced dispersion. The linear-backbone polymers such as polyacetylene, polypyrrole, and polyaniline are the main classes of conductive polymers. Poly(3-alkylthiophenes) are the archetypical materials for solar cells and transistors. Conducting polymers have backbones of contiguous sp2 hybridized carbon centers. One valence electron on each center resides in a pz orbital, which is orthogonal to the other three sigma-bonds. The electrons in these delocalized orbitals have high mobility when the material is doped by oxidation, which removes some of these delocalized electrons. Thus the conjugated p-orbitals form a one-dimensional electronic band, and the electrons within this band become mobile when it is emptied partly. Despite intensive research, the relationship between morphology, chain structure, and conductivity is poorly understood yet. 
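A rough tight-binding sketch of the one-dimensional band formed by the conjugated p-orbitals discussed above; the parameter values are illustrative assumptions, and the model ignores the dimerization (Peierls distortion) present in real polyacetylene:

```python
import numpy as np

# One p_z orbital per carbon with nearest-neighbour hopping t gives the dispersion
# E(k) = eps0 - 2 t cos(k a): a single band that can be partially emptied by doping.
eps0 = 0.0    # on-site orbital energy (eV), illustrative
t = 2.5       # nearest-neighbour hopping (eV), a typical order of magnitude for sp2 carbon
a = 1.4e-10   # lattice spacing (m), illustrative

k = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
E = eps0 - 2 * t * np.cos(k * a)

print(f"bandwidth = {E.max() - E.min():.1f} eV")   # 4|t| = 10.0 eV
```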
Due to their poor processability, conductive polymers have few large-scale applications. They have some promise in antistatic materials and have been built into commercial displays and batteries, but have been limited by production costs, material inconsistencies, toxicity, poor solubility in solvents, and inability to directly melt process. Nevertheless, conducting polymers are rapidly gaining traction in new uses, with increasingly processable materials offering better electrical and physical properties and lower costs. With the availability of stable and reproducible dispersions, poly(3,4-ethylenedioxythiophene) (PEDOT) and polyaniline have gained some large-scale applications. While PEDOT is mainly used in antistatic applications and as a transparent conductive layer in the form of PEDOT and polystyrene sulfonic acid (PSS, mixed form: PEDOT:PSS) dispersions, polyaniline is widely used to make printed circuit boards, in the final finish, to protect copper from corrosion and prevent loss of its solderability. Newer nanostructured forms of conducting polymers provide fresh impetus to this field, with their higher surface area and better dispersability. Recently, supramolecular chemistry has been introduced to the field, which provides new opportunities for developing the next generation of molecular electronics. For example, a two-orders-of-magnitude enhancement in current intensity was achieved by inserting cationic molecules into the cavity of pillar[5]arene. See also Comparison of software for molecular mechanics modeling Molecular conductance Molecular wires Organic semiconductor Single-molecule magnet Spin transition Unimolecular rectifier Nanoelectronics Molecular scale electronics Mark Ratner Mark Reed (physicist) James Tour Supramolecular chemistry Supramolecular electronics References Further reading External links Nanoelectronics Organic polymers Organic semiconductors Conductive polymers
Molecular electronics
[ "Chemistry", "Materials_science" ]
1,666
[ "Organic polymers", "Molecular physics", "Semiconductor materials", "Molecular electronics", "Organic compounds", "Nanoelectronics", "Nanotechnology", "Conductive polymers", "Organic semiconductors" ]
36,680
https://en.wikipedia.org/wiki/Combinatorial%20chemistry
Combinatorial chemistry comprises chemical synthetic methods that make it possible to prepare a large number (tens to thousands or even millions) of compounds in a single process. These compound libraries can be made as mixtures, sets of individual compounds or chemical structures generated by computer software. Combinatorial chemistry can be used for the synthesis of small molecules and for peptides. Strategies that allow identification of useful components of the libraries are also part of combinatorial chemistry. The methods used in combinatorial chemistry are applied outside chemistry, too. Introduction The basic principle of combinatorial chemistry is to prepare libraries of a very large number of compounds and identify those which are useful as potential drugs or agrochemicals. This relies on high-throughput screening which is capable of assessing the output at sufficient scale. Although combinatorial chemistry has only really been taken up by industry since the 1990s, its roots can be seen as far back as the 1960s when a researcher at Rockefeller University, Bruce Merrifield, started investigating the solid-phase synthesis of peptides. Synthesis of peptides in a combinatorial fashion quickly leads to large numbers of molecules. Using the twenty natural amino acids, for example, in a tripeptide creates 8,000 (20³) possibilities. Solid-phase methods for small molecules were later introduced, and Furka devised a "split and mix" approach. In its modern form, combinatorial chemistry has probably had its biggest impact in the pharmaceutical industry. Researchers attempting to optimize the activity profile of a compound create a 'library' of many different but related compounds. Advances in robotics have led to an industrial approach to combinatorial synthesis, enabling companies to routinely produce over 100,000 new and unique compounds per year. In order to handle the vast number of structural possibilities, researchers often create a 'virtual library', a computational enumeration of all possible structures of a given pharmacophore with all available reactants. Such a library can consist of thousands to millions of 'virtual' compounds. The researcher will select a subset of the 'virtual library' for actual synthesis, based upon various calculations and criteria (see ADME, computational chemistry, and QSAR). In 1996, at Parke-Davis Pharmaceutical Research, scientist Anthony Czarnik directed research and reported the first use of automation in synthesizing compound libraries. As the founding editor of the American Chemical Society's Journal of Combinatorial Chemistry, he also led research into RFID tags for targeted sorting in compound library synthesis. Polymers (peptides and oligonucleotides) Combinatorial split-mix (split and pool) synthesis Combinatorial split-mix (split and pool) synthesis is based on the solid-phase synthesis developed by Merrifield. If a combinatorial peptide library is synthesized using 20 amino acids (or other kinds of building blocks), the bead-form solid support is divided into 20 equal portions. This is followed by coupling a different amino acid to each portion. The third step is the mixing of all portions. These three steps comprise a cycle. Elongation of the peptide chains can be realized by simply repeating the steps of the cycle. The procedure is illustrated by the synthesis of a dipeptide library using the same three amino acids as building blocks in both cycles. Each component of this library contains two amino acids arranged in different orders.
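The library sizes quoted above (for example, 8,000 tripeptides from 20 amino acids) follow from a one-line count; a trivial sketch:

```python
# With b building blocks and n coupling cycles, a split-mix library contains b**n sequences.
def library_size(building_blocks: int, cycles: int) -> int:
    return building_blocks ** cycles

print(library_size(20, 3))                          # 8000 tripeptides from 20 amino acids
print([library_size(3, n) for n in range(1, 5)])    # [3, 9, 27, 81], as in the three-amino-acid example
```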
The amino acids used in couplings are represented by yellow, blue and red circles in the figure. Divergent arrows show dividing solid support resin (green circles) into equal portions, vertical arrows mean coupling and convergent arrows represent mixing and homogenizing the portions of the support. The figure shows that in the two synthetic cycles 9 dipeptides are formed. In the third and fourth cycles, 27 tripeptides and 81 tetrapeptides would form, respectively. The "split-mix synthesis" has several outstanding features: It is highly efficient. As the figure demonstrates the number of peptides formed in the synthetic process (3, 9, 27, 81) increases exponentially with the number of executed cycles. Using 20 amino acids in each synthetic cycle the number of formed peptides are: 400, 8,000, 160,000 and 3,200,000, respectively. This means that the number of peptides increases exponentially with the number of the executed cycles. All peptide sequences are formed in the process that can be deduced by a combination of the amino acids used in the cycles. Portioning of the support into equal samples assures formation of the components of the library in nearly equal molar quantities. Only a single peptide forms on each bead of the support. This is the consequence of using only one amino acid in the coupling steps. It is completely unknown, however, which is the peptide that occupies a selected bead. The split-mix method can be used for the synthesis of organic or any other kind of library that can be prepared from its building blocks in a stepwise process. In 1990 three groups described methods for preparing peptide libraries by biological methods and one year later Fodor et al. published a remarkable method for synthesis of peptide arrays on small glass slides. A "parallel synthesis" method was developed by Mario Geysen and his colleagues for preparation of peptide arrays. They synthesized 96 peptides on plastic rods (pins) coated at their ends with the solid support. The pins were immersed into the solution of reagents placed in the wells of a microtiter plate. The method is widely applied particularly by using automatic parallel synthesizers. Although the parallel method is much slower than the real combinatorial one, its advantage is that it is exactly known which peptide or other compound forms on each pin. Further procedures were developed to combine the advantages of both split-mix and parallel synthesis. In the method described by two groups the solid support was enclosed into permeable plastic capsules together with a radiofrequency tag that carried the code of the compound to be formed in the capsule. The procedure was carried out similar to the split-mix method. In the split step, however, the capsules were distributed among the reaction vessels according to the codes read from the radiofrequency tags of the capsules. A different method for the same purpose was developed by Furka et al. is named "string synthesis". In this method, the capsules carried no code. They are strung like the pearls in a necklace and placed into the reaction vessels in stringed form. The identity of the capsules, as well as their contents, are stored by their position occupied on the strings. After each coupling step, the capsules are redistributed among new strings according to definite rules. Small molecules In the drug discovery process, the synthesis and biological evaluation of small molecules of interest have typically been a long and laborious process. 
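A toy simulation of the split-mix cycle described above (the bead count and amino-acid labels are illustrative assumptions): after two cycles with three building blocks, every bead carries exactly one of the nine possible dipeptides.

```python
import random

# One-bead-one-compound: each bead ends up carrying a single sequence, while the pool
# as a whole contains every possible sequence.
def split_and_mix(n_beads, building_blocks, cycles, seed=0):
    rng = random.Random(seed)
    beads = ["" for _ in range(n_beads)]
    for _ in range(cycles):
        rng.shuffle(beads)                                                  # mix
        portions = [beads[i::len(building_blocks)] for i in range(len(building_blocks))]  # split
        beads = [seq + aa                                                   # couple one block per portion
                 for portion, aa in zip(portions, building_blocks)
                 for seq in portion]
    return beads

beads = split_and_mix(n_beads=900, building_blocks="XYZ", cycles=2)
print(len(set(beads)))   # 9 distinct dipeptides across the pool, one sequence per bead
```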
Combinatorial chemistry has emerged in recent decades as an approach to quickly and efficiently synthesize large numbers of potential small molecule drug candidates. In a typical synthesis, only a single target molecule is produced at the end of a synthetic scheme, with each step in a synthesis producing only a single product. In a combinatorial synthesis, when using only single starting material, it is possible to synthesize a large library of molecules using identical reaction conditions that can then be screened for their biological activity. This pool of products is then split into three equal portions containing each of the three products, and then each of the three individual pools is then reacted with another unit of reagent B, C, or D, producing 9 unique compounds from the previous 3. This process is then repeated until the desired number of building blocks is added, generating many compounds. When synthesizing a library of compounds by a multi-step synthesis, efficient reaction methods must be employed, and if traditional purification methods are used after each reaction step, yields and efficiency will suffer. Solid-phase synthesis offers potential solutions to obviate the need for typical quenching and purification steps often used in synthetic chemistry. In general, a starting molecule is adhered to a solid support (typically an insoluble polymer), then additional reactions are performed, and the final product is purified and then cleaved from the solid support. Since the molecules of interest are attached to a solid support, it is possible to reduce the purification after each reaction to a single filtration/wash step, eliminating the need for tedious liquid-liquid extraction and solvent evaporation steps that most synthetic chemistry involves. Furthermore, by using heterogeneous reactants, excess reagents can be used to drive sluggish reactions to completion, which can further improve yields. Excess reagents can simply be washed away without the need for additional purification steps such as chromatography. Over the years, a variety of methods have been developed to refine the use of solid-phase organic synthesis in combinatorial chemistry, including efforts to increase the ease of synthesis and purification, as well as non-traditional methods to characterize intermediate products. Although the majority of the examples described here will employ heterogeneous reaction media in every reaction step, Booth and Hodges provide an early example of using solid-supported reagents only during the purification step of traditional solution-phase syntheses. In their view, solution-phase chemistry offers the advantages of avoiding attachment and cleavage reactions necessary to anchor and remove molecules to resins as well as eliminating the need to recreate solid-phase analogues of established solution-phase reactions. The single purification step at the end of a synthesis allows one or more impurities to be removed, assuming the chemical structure of the offending impurity is known. While the use of solid-supported reagents greatly simplifies the synthesis of compounds, many combinatorial syntheses require multiple steps, each of which still requires some form of purification. Armstrong, et al. describe a one-pot method for generating combinatorial libraries, called multiple-component condensations (MCCs). 
In this scheme, three or more reagents react such that each reagent is incorporated into the final product in a single step, eliminating the need for a multi-step synthesis that involves many purification steps. In MCCs, there is no deconvolution required to determine which compounds are biologically active because each synthesis in an array has only a single product, thus the identity of the compound should be unequivocally known. In another array synthesis, Still generated a large library of oligopeptides by split synthesis. The drawback to making many thousands of compounds is that it is difficult to determine the structure of the formed compounds. Their solution is to use molecular tags, where a tiny amount (1 pmol/bead) of a dye is attached to the beads, and the identity of a certain bead can be determined by analyzing which tags are present on the bead. Despite how easy attaching tags makes identification of receptors, it would be quite impossible to individually screen each compound for its receptor binding ability, so a dye was attached to each receptor, such that only those receptors that bind to their substrate produce a color change. When many reactions need to be run in an array (such as the 96 reactions described in one of Armstrong's MCC arrays), some of the more tedious aspects of synthesis can be automated to improve efficiency. This work, the "DIVERSOMER method" was pioneered at Parke-Davis in the early 1990s to run up to 40 chemical reactions in parallel. These efforts led to the first commercially available equipment for combinatorial chemistry (Diversomer synthesizer which was sold by Chemglass) and the first use of liquid handling robotics within a chemistry labortory. This method uses a device that automates the resin loading and wash cycles, as well as the reaction cycle monitoring and purification, and demonstrates the feasibility of their method and apparatus by using it to synthesize a variety of molecule classes, such as hydantoins and benzodiazepines, running 8 or 40 individual reactions in parallel. This and several other pioneering efforts in combinatorial chemistry were featured as "classical" papers in the field in 1999. Oftentimes, it is not possible to use expensive equipment, and Schwabacher, et al. describe a simple method of combining parallel synthesis of library members and evaluation of entire libraries of compounds. In their method, a thread that is partitioned into different regions is wrapped around a cylinder, where a different reagent is then coupled to each region which bears only a single species. The thread is then re-divided and wrapped around a cylinder of a different size, and this process is then repeated. The beauty of this method is that the identity of each product can be known simply by its location along the thread, and the corresponding biological activity is identified by Fourier transformation of fluorescence signals. In most of the syntheses described here, it is necessary to attach and remove the starting reagent to/from a solid support. This can lead to the generation of a hydroxyl group, which can potentially affect the biological activity of a target compound. Ellman uses solid phase supports in a multi-step synthesis scheme to obtain 192 individual 1,4-benzodiazepine derivatives, which are well-known therapeutic agents. 
To eliminate the possibility of hydroxyl group interference, a novel method using silyl-aryl chemistry is used to link the molecules to the solid support; on cleavage from the support it leaves no trace of the linker. When a molecule is anchored to a solid support, intermediates cannot be isolated from one another without cleaving the molecule from the resin. Since many of the traditional characterization techniques used to track reaction progress and confirm product structure are solution-based, different techniques must be used. Gel-phase 13C NMR spectroscopy, MALDI mass spectrometry, and IR spectroscopy have been used to confirm structure and monitor the progress of solid-phase reactions. Gordon et al. describe several case studies that utilize imines and peptidyl phosphonates to generate combinatorial libraries of small molecules. To generate the imine library, an amino acid tethered to a resin is reacted in the presence of an aldehyde. The authors demonstrate the use of fast 13C gel-phase NMR spectroscopy and magic-angle-spinning 1H NMR spectroscopy to monitor the progress of reactions, and showed that most imines could be formed in as little as 10 minutes at room temperature when trimethyl orthoformate was used as the solvent. The formed imines were then derivatized to generate 4-thiazolidinones, β-lactams, and pyrrolidines. The use of solid-phase supports greatly simplifies the synthesis of large combinatorial libraries of compounds. This is done by anchoring a starting material to a solid support and then running subsequent reactions until a sufficiently large library is built, after which the products are cleaved from the support. The use of solid-phase purification has also been demonstrated in solution-phase synthesis schemes in conjunction with standard liquid-liquid extraction purification techniques. Deconvolution and screening Combinatorial libraries Combinatorial libraries are special multi-component mixtures of small-molecule chemical compounds that are synthesized in a single stepwise process. They differ from collections of individual compounds as well as from series of compounds prepared by parallel synthesis. It is an important feature that mixtures are used in their synthesis. The use of mixtures ensures the very high efficiency of the process. Both reactants can be mixtures, in which case the procedure is even more efficient. For practical reasons, however, it is advisable to use the split-mix method, in which one of the two mixtures is replaced by single building blocks (BBs). Mixtures are so central that there is no combinatorial library without the use of a mixture in its synthesis, and if a mixture is used in a process, a combinatorial library inevitably forms. Split-mix synthesis is usually realized on a solid support, but it is possible to apply it in solution, too. Since the structures of the components are unknown, deconvolution methods need to be used in screening. One of the most important features of combinatorial libraries is that the whole mixture can be screened in a single process. This makes these libraries very useful in pharmaceutical research. Partial libraries of full combinatorial libraries can also be synthesized; some of them can be used in deconvolution. Deconvolution of libraries cleaved from the solid support If the synthesized molecules of a combinatorial library are cleaved from the solid support, a soluble mixture forms. In such a solution, millions of different compounds may be found.
When this synthetic method was developed, it at first seemed impossible to identify the molecules and to find those with useful properties. Strategies for identifying the useful components have been developed, however, to solve the problem. All of these strategies are based on the synthesis and testing of partial libraries. An early iterative strategy was devised by Furka in 1982. The method was later independently published by Erb et al. under the name "recursive deconvolution". Recursive deconvolution The method is made understandable by the figure. A 27-member peptide library is synthesized from three amino acids. After the first (A) and second (B) cycles, samples are set aside before mixing. The products of the third cycle (C) are cleaved from the support before mixing and are then tested for activity. Suppose the group labeled with the + sign is active. All of its members have the red amino acid at the last coupling position (CP). Consequently, the active member also has the red amino acid at the last CP. The red amino acid is then coupled to the three samples set aside after the second cycle (B) to give samples D. After cleaving, the three E samples are formed. If, after testing, the sample marked with + is the active one, this shows that the blue amino acid occupies the second CP in the active component. Then, to the three A samples, first the blue and then the red amino acid is coupled (F), and the products are tested again after cleaving (G). If the + component proves to be active, the sequence of the active component is determined, as shown in H. (The bookkeeping behind this iterative narrowing-down is sketched in the code example below.) Positional scanning Positional scanning was introduced independently by Furka et al. and Pinilla et al. The method is based on the synthesis and testing of a series of sublibraries in which a certain sequence position is occupied by the same amino acid. The figure shows the nine sublibraries (B1-D3) of a full peptide trimer library (A) made from three amino acids. In each sublibrary there is a position which is occupied by the same amino acid in all components. In the synthesis of a sublibrary the support is not divided and only one amino acid is coupled to the whole sample. As a result, one position is really occupied by the same amino acid in all components. For example, in the B2 sublibrary position 2 is occupied by the "yellow" amino acid in all nine components. If this sublibrary gives a positive answer in a screening test, it means that position 2 in the active peptide is also occupied by the "yellow" amino acid. The amino acid sequence can be determined by testing all nine (or sometimes fewer) sublibraries. Omission libraries In omission libraries a certain amino acid is missing from all peptides of the mixture. The figure shows the full library and the three omission libraries; at the top the omitted amino acids are shown. If an omission library gives a negative test, the omitted amino acid is present in the active component. Deconvolution of tethered combinatorial libraries If the peptides are not cleaved from the solid support, we deal with a mixture of beads, each bead carrying a single peptide. Smith and his colleagues had shown earlier that peptides could be tested in tethered form, too. This approach was also used in screening peptide libraries. The tethered peptide library was tested with a dissolved target protein. The beads to which the protein attached were picked out, the protein was removed from the bead, and the tethered peptide was then identified by sequencing. A somewhat different approach was followed by Taylor and Morken.
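The elimination logic of recursive deconvolution can be mimicked in a few lines of code. The sketch below is only an illustration of the bookkeeping: the three building-block labels, the hidden "active" sequence, and the assay function are invented placeholders standing in for a real synthesis and screening experiment.

```python
from itertools import product

BLOCKS = ["red", "blue", "green"]        # illustrative amino-acid labels
ACTIVE = ("blue", "green", "red")        # hidden active sequence used by the demo assay

def assay(pool):
    """Stand-in for a biological screen: a pool tests positive if it contains the active peptide."""
    return ACTIVE in pool

def recursive_deconvolution(length=3):
    """Identify the active sequence by testing pools, position by position from the last coupling step."""
    identified = []                                  # residues fixed so far (from the last position backwards)
    for round_number in range(length):
        varied = length - 1 - round_number           # positions still left random in each pool
        for candidate in BLOCKS:
            # Pool of all sequences with `candidate` at the probed position and
            # the already-identified residues at the positions after it.
            pool = {prefix + (candidate,) + tuple(identified)
                    for prefix in product(BLOCKS, repeat=varied)}
            if assay(pool):
                identified.insert(0, candidate)      # this position is now known
                break
    return tuple(identified)

print(recursive_deconvolution())                     # -> ('blue', 'green', 'red')
```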
Taylor and Morken used infrared thermography to identify catalysts in non-peptide tethered libraries. The method is based on the heat that is evolved in beads containing a catalyst when the tethered library is immersed in a solution of a substrate. When the beads are examined through an infrared microscope, the catalyst-containing beads appear as bright spots and can be picked out. Encoded combinatorial libraries If we deal with a non-peptide organic library, it is not as simple to determine the identity of the contents of a bead as in the case of a peptide library. To circumvent this difficulty, methods have been developed to attach to the beads, in parallel with the synthesis of the library, molecules that encode the structure of the compound formed on the bead. Ohlmeyer and his colleagues published a binary encoding method. They used mixtures of 18 tagging molecules that, after cleavage from the beads, could be identified by electron-capture gas chromatography. Sarkar et al. described chiral oligomers of pentenoic amides (COPAs) that can be used to construct mass-encoded OBOC libraries. Kerr et al. introduced an innovative encoding method: an orthogonally protected, removable bifunctional linker was attached to the beads. One end of the linker was used to attach the non-natural building blocks of the library, while encoding amino acid triplets were linked to the other end. The building blocks were non-natural amino acids, and the series of their encoding amino acid triplets could be determined by Edman degradation. The important aspect of this kind of encoding was the possibility of cleaving the library members from the beads together with their attached encoding tags, forming a soluble library. The same approach was used by Nikolajev et al. for encoding with peptides. In 1992, Brenner and Lerner introduced DNA sequences to encode the beads of the solid support; this proved to be the most successful encoding method. Nielsen, Brenner and Janda also used the Kerr approach for implementing DNA encoding. In recent years there have been important advances in DNA sequencing. Next-generation techniques make it possible to sequence large numbers of samples in parallel, which is very important in the screening of DNA-encoded libraries. There was another innovation that contributed to the success of DNA encoding: in 2000, Halpin and Harbury omitted the solid support in the split-mix synthesis of DNA-encoded combinatorial libraries and replaced it with the encoding DNA oligomers. In solid-phase split-and-pool synthesis the number of components of a library cannot exceed the number of beads of the support. The novel approach of these authors eliminated this restraint and made it possible to prepare new compounds in practically unlimited numbers. The Danish company Nuevolution, for example, synthesized a DNA-encoded library containing 40 trillion components. DNA-encoded libraries are soluble, which makes it possible to apply efficient affinity binding in screening. Some authors use the acronym DEL for DNA-encoded combinatorial libraries, while others use DECL. The latter seems better, since this name clearly expresses the combinatorial nature of these libraries. Several types of DNA-encoded combinatorial libraries were introduced and described in the first decade of the present millennium, and these libraries are very successfully applied in drug research. DNA-templated synthesis of combinatorial libraries was described in 2001 by Gartner et al.
Dual-pharmacophore DNA-encoded combinatorial libraries were invented in 2004 by Melkko et al. Sequence-encoded routing was published by Halpin and Harbury in 2004. Single-pharmacophore DNA-encoded combinatorial libraries were introduced in 2008 by Mannocci et al. DNA-encoded combinatorial libraries formed using a yoctoliter-scale reactor were published by Hansen et al. in 2009. Details about their synthesis and application are found on the page DNA-encoded chemical library. The DNA-encoded soluble combinatorial libraries have drawbacks, too. First of all, the advantage coming from the use of a solid support is completely lost. In addition, the polyionic character of the DNA encoding chains limits the utility of non-aqueous solvents in the synthesis. For this reason many laboratories have chosen to develop DNA-compatible reactions for use in the synthesis of DECLs, and quite a few of the available ones have already been described. Materials science Materials science has applied the techniques of combinatorial chemistry to the discovery of new materials. This work was pioneered by P.G. Schultz et al. in the mid-nineties in the context of luminescent materials obtained by co-deposition of elements on a silicon substrate. His work was preceded by J. J. Hanak in 1970, but the computer and robotics tools were not available at the time for the method to spread. Work has been continued by several academic groups as well as by companies with large research and development programs (Symyx Technologies, GE, Dow Chemical etc.). The technique has been used extensively for catalysis, coatings, electronics, and many other fields. The application of appropriate informatics tools is critical to handle, administer, and store the vast volumes of data produced. New design-of-experiments methods have also been developed to efficiently address the large experimental spaces that can be tackled using combinatorial methods. Diversity-oriented libraries Even though combinatorial chemistry has been an essential part of early drug discovery for more than two decades, so far only one de novo combinatorial chemistry-synthesized chemical has been approved for clinical use by the FDA (sorafenib, a multikinase inhibitor indicated for advanced renal cancer). The poor success rate of the approach has been suggested to stem from the rather limited chemical space covered by the products of combinatorial chemistry. When comparing the properties of compounds in combinatorial chemistry libraries to those of approved drugs and natural products, Feher and Schmidt noted that combinatorial chemistry libraries suffer particularly from a lack of chirality and of structural rigidity, both of which are widely regarded as drug-like properties. Even though natural-product drug discovery has probably not been the most fashionable trend in the pharmaceutical industry in recent times, a large proportion of new chemical entities are still nature-derived compounds, and thus it has been suggested that the effectiveness of combinatorial chemistry could be improved by enhancing the chemical diversity of screening libraries. As chirality and rigidity are the two most important features distinguishing approved drugs and natural products from compounds in combinatorial chemistry libraries, these are the two issues emphasized in so-called diversity-oriented libraries, i.e. compound collections that aim at covering chemical space instead of just providing huge numbers of compounds.
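The encoded-library strategies described above all rely on the same bookkeeping: attach a machine-readable tag at every coupling step, then read the tag back to recover a compound's synthetic history. The toy sketch below illustrates the idea with one DNA-style codon per building block; the codon assignments and building-block labels are invented for illustration and do not correspond to any published tag set.

```python
# Toy illustration of tag-based encoding in a split-and-pool synthesis.
CODONS = {"B": "ACT", "C": "GGA", "D": "TTC"}            # building block -> invented tag codon
DECODE = {codon: block for block, codon in CODONS.items()}

def encode(history):
    """Concatenate one codon per coupling step, mimicking a DNA-encoded library tag."""
    return "".join(CODONS[block] for block in history)

def decode(tag, codon_length=3):
    """Read the tag back codon by codon to recover the synthesis history."""
    return [DECODE[tag[i:i + codon_length]] for i in range(0, len(tag), codon_length)]

history = ["C", "B", "D"]            # order in which building blocks were coupled
tag = encode(history)                # -> "GGAACTTTC"
assert decode(tag) == history        # "sequencing" the tag recovers the history
print(tag, decode(tag))
```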
Patent classification subclass In the 8th edition of the International Patent Classification (IPC), which entered into force on January 1, 2006, a special subclass has been created for patent applications and patents related to inventions in the domain of combinatorial chemistry: "C40B". See also Combinatorics Cheminformatics Combinatorial biology Drug discovery Dynamic combinatorial chemistry High-throughput screening Mathematical chemistry Molecular modeling References External links IUPAC's "Glossary of Terms Used in Combinatorial Chemistry" ACS Combinatorial Science (formerly Journal of Combinatorial Chemistry) Combinatorial Chemistry Review Molecular Diversity Combinatorial Chemistry and High Throughput Screening Combinatorial Chemistry: an Online Journal SmiLib - A free open-source software for combinatorial library enumeration GLARE - A free open-source software for combinatorial library design Cheminformatics Drug discovery Materials science Chemistry
Combinatorial chemistry
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering", "Biology" ]
5,709
[ "Combinatorial chemistry", "Discrete mathematics", "Applied and interdisciplinary physics", "Life sciences industry", "Drug discovery", "Materials science", "Combinatorics", "Computational chemistry", "nan", "Cheminformatics", "Medicinal chemistry" ]
37,207
https://en.wikipedia.org/wiki/Nuclear%20engineering
Nuclear engineering is the engineering discipline concerned with designing and applying systems that utilize the energy released by nuclear processes. The most prominent application of nuclear engineering is the generation of electricity. Worldwide, some 440 nuclear reactors in 32 countries generate about 10 percent of the world's electricity through nuclear fission. In the future, it is expected that nuclear fusion will add another nuclear means of generating energy. Both reactions make use of the nuclear binding energy released when atomic nucleons are either separated (fission) or brought together (fusion). The energy available is given by the binding energy curve, and the amount generated is much greater than that generated through chemical reactions. Fission of 1 gram of uranium yields as much energy as burning 3 tons of coal or 600 gallons of fuel oil, without adding carbon dioxide to the atmosphere. History Nuclear engineering was born in 1938, with the discovery of nuclear fission. The first artificial nuclear reactor, CP-1, was designed by a team of physicists who were concerned that Nazi Germany might also be seeking to build a bomb based on nuclear fission. (The earliest known natural nuclear fission reactor on Earth operated 1.7 billion years ago, at Oklo, Gabon, in Africa.) The second artificial nuclear reactor, the X-10 Graphite Reactor, was also a part of the Manhattan Project, as were the plutonium-producing reactors of the Hanford Engineer Works. The first nuclear bomb, code-named Gadget, was detonated in the Trinity nuclear test; the weapon was believed to have a yield equivalent to around 20 kilotons of TNT. The first nuclear reactor to generate electricity was Experimental Breeder Reactor I (EBR-I), which did so near Arco, Idaho, in 1951. EBR-I was a standalone facility, not connected to a grid, but a later Idaho research reactor in the BORAX series did briefly supply power to the town of Arco in 1955. The first commercial nuclear power plant, built to be connected to an electrical grid, was the Obninsk Nuclear Power Plant, which began operation in 1954. The second appears to be the Shippingport Atomic Power Station, which produced electricity in 1957. For a brief chronology, from the discovery of uranium to the current era, see Outline History of Nuclear Energy or History of Nuclear Power. See List of Commercial Nuclear Reactors for a comprehensive listing of nuclear power reactors, and the IAEA Power Reactor Information System (PRIS) for worldwide and country-level statistics on nuclear power generation. Sub-disciplines Nuclear engineers work in such areas as the following: Nuclear reactor design, which has evolved from the Generation I proof-of-concept reactors of the 1950s and 1960s to Generation II, Generation III, and Generation IV concepts Thermal hydraulics and heat transfer. In a typical nuclear power plant, heat generates steam that drives a steam turbine and a generator that produces electricity Materials science as it relates to nuclear power applications Managing the nuclear fuel cycle, in which fissile material is obtained, formed into fuel, removed when depleted, and safely stored or reprocessed Nuclear propulsion, mainly for military naval vessels, but there have been concepts for aircraft and missiles.
Nuclear power has been used in space since the 1960s Plasma physics, which is integral to the development of fusion power Weapons development and management Generation of radionuclides, which have applications in industry, medicine, and many other areas Nuclear waste management Health physics Nuclear medicine and Medical Physics Health and safety Instrumentation and control engineering Process engineering Project Management Quality engineering Reactor operations Nuclear security (detection of clandestine nuclear materials) Nuclear engineering even has a role in criminal investigation and agriculture. Many chemical, electrical, mechanical, and other types of engineers also work in the nuclear industry, as do many scientists and support staff. In the U.S., nearly 100,000 people work directly in the nuclear industry. Including secondary-sector jobs, the number of people supported by the U.S. nuclear industry is 475,000. Employment In the United States, nuclear engineers are employed as follows: Electric power generation 25% Federal government 18% Scientific research and development 15% Engineering services 5% Manufacturing 10% Other areas 27% Worldwide, job prospects for nuclear engineers are likely best in those countries that are active in or exploring nuclear technologies. Education Organizations that provide study and training in nuclear engineering include the following: Organizations American Nuclear Society Asian Network for Education in Nuclear Technology (ANENT) https://www.iaea.org/services/networks/anent Canadian Nuclear Association Chinese Nuclear Society International Atomic Energy Agency International Energy Agency (IEA) Japan Atomic Industrial Forum (JAIF) Korea Nuclear Energy Agency (KNEA) Latin American Network for Education in Nuclear Technology (LANENT) https://www.iaea.org/services/networks/lanent Minerals Council of Australia Nucleareurope Nuclear Institute Nuclear Energy Institute (NEI) Nuclear Industry Association of South Africa (NIASA) Nuclear Technology Education Consortium https://www.ntec.ac.uk/ OECD Nuclear Energy Agency (NEA) Regional Network for Education and Training in Nuclear Technology (STAR-NET) https://www.iaea.org/services/networks/star-net World Nuclear Association World Nuclear Transport Institute See also Atomic physics Chernobyl nuclear disaster Fukushima nuclear disaster International Nuclear Event Scale List of books about nuclear issues Lists of nuclear disasters and radioactive incidents List of nuclear reactors List of nuclear power stations Nuclear energy policy Nuclear fuel Nuclear criticality safety Nuclear material Nuclear physics Nuclear power Nuclear reactor technology Nuclear renaissance Safety engineering Thermal hydraulics Waste Isolation Pilot Plant References Further reading Ash, Milton, "Nuclear reactor kinetics", McGraw-Hill, (1965) Cravens, Gwyneth. Power to Save the World (2007) Gowing, Margaret. Britain and Atomic Energy, 1939–1945 (1964). Gowing, Margaret, and Lorna Arnold. Independence and Deterrence: Britain and Atomic Energy, Vol. I: Policy Making, 1945–52; Vol. II: Policy Execution, 1945–52 (London, 1974) Johnston, Sean F. "Creating a Canadian Profession: The Nuclear Engineer, 1940–68," Canadian Journal of History, Winter 2009, Vol. 44 Issue 3, pp 435–466 Johnston, Sean F. "Implanting a discipline: the academic trajectory of nuclear engineering in the USA and UK," Minerva, 47 (2009), pp.
51–73 External links Electric Generation from Commercial Nuclear Power Hacettepe University Department of Nuclear Engineering Nuclear Engineering International magazine Nuclear Safety Info Resources Nuclear Science and Engineering technical journal Science and Technology of Nuclear Installation Open-Access Journal Nuclear Engineers Engineering disciplines Nuclear technology
Nuclear engineering
[ "Physics", "Engineering" ]
1,344
[ "Nuclear technology", "nan", "Nuclear physics" ]
37,232
https://en.wikipedia.org/wiki/Fermat%27s%20principle
Fermat's principle, also known as the principle of least time, is the link between ray optics and wave optics. Fermat's principle states that the path taken by a ray between two given points is the path that can be traveled in the least time. First proposed by the French mathematician Pierre de Fermat in 1662, as a means of explaining the ordinary law of refraction of light (Fig.1), Fermat's principle was initially controversial because it seemed to ascribe knowledge and intent to nature. Not until the 19th century was it understood that nature's ability to test alternative paths is merely a fundamental property of waves. If points A and B are given, a wavefront expanding from A sweeps all possible ray paths radiating from A, whether they pass through B or not. If the wavefront reaches point B, it sweeps not only the ray path(s) from A to B, but also an infinitude of nearby paths with the same endpoints. Fermat's principle describes any ray that happens to reach point B; there is no implication that the ray "knew" the quickest path or "intended" to take that path. In its original "strong" form, Fermat's principle states that the path taken by a ray between two given points is the path that can be traveled in the least time. In order to be true in all cases, this statement must be weakened by replacing the "least" time with a time that is "stationary" with respect to variations of the path – so that a deviation in the path causes, at most, a second-order change in the traversal time. To put it loosely, a ray path is surrounded by close paths that can be traversed in very close times. It can be shown that this technical definition corresponds to more intuitive notions of a ray, such as a line of sight or the path of a narrow beam. For the purpose of comparing traversal times, the time from one point to the next nominated point is taken as if the first point were a point-source. Without this condition, the traversal time would be ambiguous; for example, if the propagation time from to were reckoned from an arbitrary wavefront W containing (Fig.2), that time could be made arbitrarily small by suitably angling the wavefront. Treating a point on the path as a source is the minimum requirement of Huygens' principle, and is part of the explanation of Fermat's principle. But it can also be shown that the geometric construction by which Huygens tried to apply his own principle (as distinct from the principle itself) is simply an invocation of Fermat's principle. Hence all the conclusions that Huygens drew from that construction – including, without limitation, the laws of rectilinear propagation of light, ordinary reflection, ordinary refraction, and the extraordinary refraction of "Iceland crystal" (calcite) – are also consequences of Fermat's principle. Derivation Sufficient conditions Let us suppose that: A disturbance propagates sequentially through a medium (a vacuum or some material, not necessarily homogeneous or isotropic), without action at a distance; During propagation, the influence of the disturbance at any intermediate point P upon surrounding points has a non-zero angular spread (as if P were a source), so that a disturbance originating from any point A arrives at any other point B via an infinitude of paths, by which B receives an infinitude of delayed versions of the disturbance at A; and These delayed versions of the disturbance will reinforce each other at B if they are synchronized within some tolerance. 
Then the various propagation paths from A to B will help each other, or interfere constructively, if their traversal times agree within the said tolerance. For a small tolerance (in the limiting case), the permissible range of variations of the path is maximized if the path is such that its traversal time is stationary with respect to the variations, so that a variation of the path causes at most a second-order change in the traversal time. The most obvious example of a stationarity in traversal time is a (local or global) minimum – that is, a path of least time, as in the "strong" form of Fermat's principle. But that condition is not essential to the argument. Having established that a path of stationary traversal time is reinforced by a maximally wide corridor of neighboring paths, we still need to explain how this reinforcement corresponds to intuitive notions of a ray. But, for brevity in the explanations, let us first define a ray path as a path of stationary traversal time. A ray as a signal path (line of sight) If the corridor of paths reinforcing a ray path from A to B is substantially obstructed, this will significantly alter the disturbance reaching B from A – unlike a similar-sized obstruction outside any such corridor, blocking paths that do not reinforce each other. The former obstruction will significantly disrupt the signal reaching B from A, while the latter will not; thus the ray path marks a signal path. If the signal is visible light, the former obstruction will significantly affect the appearance of an object at A as seen by an observer at B, while the latter will not; so the ray path marks a line of sight. In optical experiments, a line of sight is routinely assumed to be a ray path. A ray as an energy path (beam) If the corridor of paths reinforcing a ray path from A to B is substantially obstructed, this will significantly affect the energy reaching B from A – unlike a similar-sized obstruction outside any such corridor. Thus the ray path marks an energy path – as does a beam. Suppose that a wavefront expanding from point A passes point P, which lies on a ray path from point A to point B. By definition, all points on the wavefront have the same propagation time from A. Now let the wavefront be blocked except for a window, centered on P, and small enough to lie within the corridor of paths that reinforce the ray path from A to B. Then all points on the unobstructed portion of the wavefront will have, nearly enough, equal propagation times to B, but not to points in other directions, so that B will be in the direction of peak intensity of the beam admitted through the window. So the ray path marks the beam. And in optical experiments, a beam is routinely considered as a collection of rays or (if it is narrow) as an approximation to a ray (Fig.3). Analogies According to the "strong" form of Fermat's principle, the problem of finding the path of a light ray from point A in a medium of faster propagation, to point B in a medium of slower propagation (Fig.1), is analogous to the problem faced by a lifeguard in deciding where to enter the water in order to reach a drowning swimmer as soon as possible, given that the lifeguard can run faster than (s)he can swim. But that analogy falls short of explaining the behavior of the light, because the lifeguard can think about the problem (even if only for an instant) whereas the light presumably cannot. The discovery that ants are capable of similar calculations does not bridge the gap between the animate and the inanimate. 
In contrast, the above assumptions (1) to (3) hold for any wavelike disturbance and explain Fermat's principle in purely mechanistic terms, without any imputation of knowledge or purpose. The principle applies to waves in general, including (e.g.) sound waves in fluids and elastic waves in solids. In a modified form, it even works for matter waves: in quantum mechanics, the classical path of a particle is obtainable by applying Fermat's principle to the associated wave – except that, because the frequency may vary with the path, the stationarity is in the phase shift (or number of cycles) and not necessarily in the time. Fermat's principle is most familiar, however, in the case of visible light: it is the link between geometrical optics, which describes certain optical phenomena in terms of rays, and the wave theory of light, which explains the same phenomena on the hypothesis that light consists of waves. Equivalence to Huygens' construction In this article we distinguish between Huygens' principle, which states that every point crossed by a traveling wave becomes the source of a secondary wave, and Huygens' construction, which is described below. Let the surface be a wavefront at time , and let the surface be the same wavefront at the later time (Fig.4). Let be a general point on . Then, according to Huygens' construction, is the envelope (common tangent surface), on the forward side of , of all the secondary wavefronts each of which would expand in time from a point on , and if the secondary wavefront expanding from point in time touches the surface at point , then and lie on a ray. The construction may be repeated in order to find successive positions of the primary wavefront, and successive points on the ray. The ray direction given by this construction is the radial direction of the secondary wavefront, and may differ from the normal of the secondary wavefront (cf. Fig.2), and therefore from the normal of the primary wavefront at the point of tangency. Hence the ray velocity, in magnitude and direction, is the radial velocity of an infinitesimal secondary wavefront, and is generally a function of location and direction. Now let be a point on close to , and let be a point on close to . Then, by the construction,   the time taken for a secondary wavefront from to reach has at most a second-order dependence on the displacement , and the time taken for a secondary wavefront to reach from has at most a second-order dependence on the displacement . By (i), the ray path is a path of stationary traversal time from to ; and by (ii), it is a path of stationary traversal time from a point on to . So Huygens' construction implicitly defines a ray path as a path of stationary traversal time between successive positions of a wavefront, the time being reckoned from a point-source on the earlier wavefront. This conclusion remains valid if the secondary wavefronts are reflected or refracted by surfaces of discontinuity in the properties of the medium, provided that the comparison is restricted to the affected paths and the affected portions of the wavefronts. Fermat's principle, however, is conventionally expressed in point-to-point terms, not wavefront-to-wavefront terms. Accordingly, let us modify the example by supposing that the wavefront which becomes surface at time , and which becomes surface at the later time , is emitted from point at time . Let be a point on (as before), and a point on . And let , , , and be given, so that the problem is to find . 
If satisfies Huygens' construction, so that the secondary wavefront from is tangential to at , then is a path of stationary traversal time from to . Adding the fixed time from to , we find that is the path of stationary traversal time from to (possibly with a restricted domain of comparison, as noted above), in accordance with Fermat's principle. The argument works just as well in the converse direction, provided that has a well-defined tangent plane at . Thus Huygens' construction and Fermat's principle are geometrically equivalent. Through this equivalence, Fermat's principle sustains Huygens' construction and thence all the conclusions that Huygens was able to draw from that construction. In short, "The laws of geometrical optics may be derived from Fermat's principle". With the exception of the Fermat–Huygens principle itself, these laws are special cases in the sense that they depend on further assumptions about the media. Two of them are mentioned under the next heading. Special cases Isotropic media: rays normal to wavefronts In an isotropic medium, because the propagation speed is independent of direction, the secondary wavefronts that expand from points on a primary wavefront in a given infinitesimal time are spherical, so that their radii are normal to their common tangent surface at the points of tangency. But their radii mark the ray directions, and their common tangent surface is a general wavefront. Thus the rays are normal (orthogonal) to the wavefronts. Because much of the teaching of optics concentrates on isotropic media, treating anisotropic media as an optional topic, the assumption that the rays are normal to the wavefronts can become so pervasive that even Fermat's principle is explained under that assumption, although in fact Fermat's principle is more general. Homogeneous media: rectilinear propagation In a homogeneous medium (also called a uniform medium), all the secondary wavefronts that expand from a given primary wavefront in a given time are congruent and similarly oriented, so that their envelope may be considered as the envelope of a single secondary wavefront which preserves its orientation while its center (source) moves over . If is its center while is its point of tangency with , then moves parallel to , so that the plane tangential to at is parallel to the plane tangential to at . Let another (congruent and similarly orientated) secondary wavefront be centered on , moving with , and let it meet its envelope at point . Then, by the same reasoning, the plane tangential to at is parallel to the other two planes. Hence, due to the congruence and similar orientations, the ray directions and are the same (but not necessarily normal to the wavefronts, since the secondary wavefronts are not necessarily spherical). This construction can be repeated any number of times, giving a straight ray of any length. Thus a homogeneous medium admits rectilinear rays. Modern version Formulation in terms of refractive index Let a path extend from point to point . Let be the arc length measured along the path from , and let be the time taken to traverse that arc length at the ray speed (that is, at the radial speed of the local secondary wavefront, for each location and direction on the path). Then the traversal time of the entire path is (where and simply denote the endpoints and are not to be construed as values of or ). 
Writing the traversal time just described as $T=\int_A^B \frac{ds}{v_\mathrm{r}}$, where $s$ is arc length along the path and $v_\mathrm{r}$ is the local ray speed (this is equation (1)), the condition for a path $\Gamma$ from $A$ to $B$ to be a ray path is that the first-order change in $T$ due to a change in $\Gamma$ is zero; that is, $\delta T = \delta\!\int_A^B \frac{ds}{v_\mathrm{r}} = 0.$ Now let us define the optical length of a given path (optical path length, OPL) as the distance traversed by a ray in a homogeneous isotropic reference medium (e.g., a vacuum) in the same time that it takes to traverse the given path at the local ray velocity. Then, if $c$ denotes the propagation speed in the reference medium (e.g., the speed of light in vacuum), the optical length of a path traversed in time $dt$ is $c\,dt$, and the optical length of a path traversed in time $T$ is $cT$. So, multiplying equation (1) through by $c$, we obtain $S = \int_A^B \frac{c}{v_\mathrm{r}}\,ds = \int_A^B n_\mathrm{r}\,ds,$ where $n_\mathrm{r} = c/v_\mathrm{r}$ is the ray index – that is, the refractive index calculated on the ray velocity instead of the usual phase velocity (wave-normal velocity). For an infinitesimal path, we have $dS = n_\mathrm{r}\,ds$, indicating that the optical length is the physical length multiplied by the ray index: the OPL is a notional geometric quantity, from which time has been factored out. In terms of OPL, the condition for $\Gamma$ to be a ray path (Fermat's principle) becomes $\delta S = \delta\!\int_A^B n_\mathrm{r}\,ds = 0. \quad (2)$ This has the form of Maupertuis's principle in classical mechanics (for a single particle), with the ray index $n_\mathrm{r}$ in optics taking the role of momentum or velocity in mechanics. In an isotropic medium, for which the ray velocity is also the phase velocity, we may substitute the usual refractive index $n$ for $n_\mathrm{r}$. Relation to Hamilton's principle If $x$, $y$, $z$ are Cartesian coordinates and an overdot denotes differentiation with respect to the arc length $s$, Fermat's principle (2) may be written $\delta\!\int_A^B n_\mathrm{r}\sqrt{\dot{x}^2+\dot{y}^2+\dot{z}^2}\;ds = 0.$ In the case of an isotropic medium, we may replace $n_\mathrm{r}$ with the normal refractive index $n(x,y,z)$, which is simply a scalar field. If we then define the optical Lagrangian as $L(x,y,z,\dot{x},\dot{y},\dot{z}) = n(x,y,z)\sqrt{\dot{x}^2+\dot{y}^2+\dot{z}^2},$ Fermat's principle becomes $\delta\!\int_A^B L\,ds = 0.$ If the direction of propagation is always such that we can use $z$ instead of $s$ as the parameter of the path (and the overdot to denote differentiation w.r.t. $z$ instead of $s$), the optical Lagrangian can instead be written $L(x,y,\dot{x},\dot{y},z) = n(x,y,z)\sqrt{1+\dot{x}^2+\dot{y}^2},$ so that Fermat's principle becomes $\delta\!\int_A^B L\,dz = 0.$ This has the form of Hamilton's principle in classical mechanics, except that the time dimension is missing: the third spatial coordinate in optics takes the role of time in mechanics. The optical Lagrangian is the function which, when integrated w.r.t. the parameter of the path, yields the OPL; it is the foundation of Lagrangian and Hamiltonian optics. History If a ray follows a straight line, it obviously takes the path of least length. Hero of Alexandria, in his Catoptrics (1st century CE), showed that the ordinary law of reflection off a plane surface follows from the premise that the total length of the ray path is a minimum. Ibn al-Haytham, an 11th-century polymath, later extended this principle to refraction, giving an early version of Fermat's principle. Fermat vs. the Cartesians In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of a newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction. Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least resistance, and that different media offered different resistances. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed "resistance" as inversely proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction (a short derivation along these lines is sketched below), provided that light traveled more slowly in the optically denser medium.
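To make the final statement above concrete, here is a short derivation of the ordinary law of refraction from the least-time premise. It uses the usual textbook setup – a flat interface with speeds $v_1$ and $v_2$ on the two sides – and the symbols are chosen for this sketch only; they do not appear in the surrounding text.

```latex
% A ray goes from A=(0,a) in medium 1 (speed v_1) to B=(d,-b) in medium 2 (speed v_2),
% crossing the interface y=0 at the point (x,0). The traversal time is
T(x) = \frac{\sqrt{a^{2}+x^{2}}}{v_{1}} + \frac{\sqrt{b^{2}+(d-x)^{2}}}{v_{2}} .
% Setting dT/dx = 0 to find the least-time crossing point gives
\frac{x}{v_{1}\sqrt{a^{2}+x^{2}}} = \frac{d-x}{v_{2}\sqrt{b^{2}+(d-x)^{2}}},
% i.e. \sin\theta_{1}/v_{1} = \sin\theta_{2}/v_{2}. With n_{i} = c/v_{i} this is Snell's law:
n_{1}\sin\theta_{1} = n_{2}\sin\theta_{2},
% so the ray bends toward the normal in the medium where light travels more slowly.
```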
Fermat's solution was a landmark in that it unified the then-known laws of geometrical optics under a variational principle or action principle, setting the precedent for the principle of least action in classical mechanics and the corresponding principles in other fields (see History of variational principles in physics). It was the more notable because it used the method of adequality, which may be understood in retrospect as finding the point where the slope of an infinitesimally short chord is zero, without the intermediate step of finding a general expression for the slope (the derivative). It was also immediately controversial. The ordinary law of refraction was at that time attributed to René Descartes (d.1650), who had tried to explain it by supposing that light was a force that propagated instantaneously, or that light was analogous to a tennis ball that traveled faster in the denser medium, either premise being inconsistent with Fermat's.  Descartes' most prominent defender, Claude Clerselier, criticized Fermat for apparently ascribing knowledge and intent to nature, and for failing to explain why nature should prefer to economize on time rather than distance. Clerselier wrote in part: 1. The principle that you take as the basis of your demonstration, namely that nature always acts in the shortest and simplest ways, is merely a moral principle and not a physical one; it is not, and cannot be, the cause of any effect in nature .... For otherwise we would attribute knowledge to nature; but here, by "nature", we understand only this order and this law established in the world as it is, which acts without foresight, without choice, and by a necessary determination. 2. This same principle would make nature irresolute ... For I ask you ... when a ray of light must pass from a point in a rare medium to a point in a dense one, is there not reason for nature to hesitate if, by your principle, it must choose the straight line as soon as the bent one, since if the latter proves shorter in time, the former is shorter and simpler in length? Who will decide and who will pronounce? Fermat, being unaware of the mechanistic foundations of his own principle, was not well placed to defend it, except as a purely geometric and kinematic proposition.  The wave theory of light, first proposed by Robert Hooke in the year of Fermat's death, and rapidly improved by Ignace-Gaston Pardies and (especially) Christiaan Huygens, contained the necessary foundations; but the recognition of this fact was surprisingly slow. Huygens's oversight In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave; the sum of these secondary waves determines the form of the wave at any subsequent time. Huygens repeatedly referred to the envelope of his secondary wavefronts as the termination of the movement, meaning that the later wavefront was the outer boundary that the disturbance could reach in a given time, which was therefore the minimum time in which each point on the later wavefront could be reached. But he did not argue that the direction of minimum time was that from the secondary source to the point of tangency; instead, he deduced the ray direction from the extent of the common tangent surface corresponding to a given extent of the initial wavefront. 
His only endorsement of Fermat's principle was limited in scope: having derived the law of ordinary refraction, for which the rays are normal to the wavefronts, Huygens gave a geometric proof that a ray refracted according to this law takes the path of least time. He would hardly have thought this necessary if he had known that the principle of least time followed directly from the same common-tangent construction by which he had deduced not only the law of ordinary refraction, but also the laws of rectilinear propagation and ordinary reflection (which were also known to follow from Fermat's principle), and a previously unknown law of extraordinary refraction – the last by means of secondary wavefronts that were spheroidal rather than spherical, with the result that the rays were generally oblique to the wavefronts. It was as if Huygens had not noticed that his construction implied Fermat's principle, and even as if he thought he had found an exception to that principle. Manuscript evidence cited by Alan E.Shapiro tends to confirm that Huygens believed the principle of least time to be invalid "in double refraction, where the rays are not normal to the wave fronts". Shapiro further reports that the only three authorities who accepted "Huygens' principle" in the 17th and 18th centuries, namely Philippe de La Hire, Denis Papin, and Gottfried Wilhelm Leibniz, did so because it accounted for the extraordinary refraction of "Iceland crystal" (calcite) in the same manner as the previously known laws of geometrical optics. But, for the time being, the corresponding extension of Fermat's principle went unnoticed. Laplace, Young, Fresnel, and Lorentz On 30 January 1809, Pierre-Simon Laplace, reporting on the work of his protégé Étienne-Louis Malus, claimed that the extraordinary refraction of calcite could be explained under the corpuscular theory of light with the aid of Maupertuis's principle of least action: that the integral of speed with respect to distance was a minimum. The corpuscular speed that satisfied this principle was proportional to the reciprocal of the ray speed given by the radius of Huygens' spheroid. Laplace continued: According to Huygens, the velocity of the extraordinary ray, in the crystal, is simply expressed by the radius of the spheroid; consequently his hypothesis does not agree with the principle of the least action: but it is remarkable that it agrees with the principle of Fermat, which is, that light passes, from a given point without the crystal, to a given point within it, in the least possible time; for it is easy to see that this principle coincides with that of the least action, if we invert the expression of the velocity. Laplace's report was the subject of a wide-ranging rebuttal by Thomas Young, who wrote in part: The principle of Fermat, although it was assumed by that mathematician on hypothetical, or even imaginary grounds, is in fact a fundamental law with respect to undulatory motion, and is the basis of every determination in the Huygenian theory...  Mr. Laplace seems to be unacquainted with this most essential principle of one of the two theories which he compares; for he says, that "it is remarkable" that the Huygenian law of extraordinary refraction agrees with the principle of Fermat; which he would scarcely have observed, if he had been aware that the law was an immediate consequence of the principle. 
In fact Laplace was aware that Fermat's principle follows from Huygens' construction in the case of refraction from an isotropic medium to an anisotropic one; a geometric proof was contained in the long version of Laplace's report, printed in 1810. Young's claim was more general than Laplace's, and likewise upheld Fermat's principle even in the case of extraordinary refraction, in which the rays are generally not perpendicular to the wavefronts. Unfortunately, however, the omitted middle sentence of the quoted paragraph by Young began "The motion of every undulation must necessarily be in a direction perpendicular to its surface ..." (emphasis added), and was therefore bound to sow confusion rather than clarity. No such confusion subsists in Augustin-Jean Fresnel's "Second Memoir" on double refraction (Fresnel, 1827), which addresses Fermat's principle in several places (without naming Fermat), proceeding from the special case in which rays are normal to wavefronts, to the general case in which rays are paths of least time or stationary time. (In the following summary, page numbers refer to Alfred W.Hobson's translation.) For refraction of a plane wave at parallel incidence on one face of an anisotropic crystalline wedge (pp.291–2), in order to find the "first ray arrived" at an observation point beyond the other face of the wedge, it suffices to treat the rays outside the crystal as normal to the wavefronts, and within the crystal to consider only the parallel wavefronts (whatever the ray direction). So in this case, Fresnel does not attempt to trace the complete ray path. Next, Fresnel considers a ray refracted from a point-source M inside a crystal, through a point A on the surface, to an observation point B outside (pp.294–6). The surface passing through B and given by the "locus of the disturbances which arrive first" is, according to Huygens' construction, normal to "the ray AB of swiftest arrival". But this construction requires knowledge of the "surface of the wave" (that is, the secondary wavefront) within the crystal. Then he considers a plane wavefront propagating in a medium with non-spherical secondary wavefronts, oriented so that the ray path given by Huygens' construction – from the source of the secondary wavefront to its point of tangency with the subsequent primary wavefront – is not normal to the primary wavefronts (p.296). He shows that this path is nevertheless "the path of quickest arrival of the disturbance" from the earlier primary wavefront to the point of tangency. In a later heading (p.305) he declares that "The construction of Huygens, which determines the path of swiftest arrival" is applicable to secondary wavefronts of any shape. He then notes that when we apply Huygens' construction to refraction into a crystal with a two-sheeted secondary wavefront, and draw the lines from the two points of tangency to the center of the secondary wavefront, "we shall have the directions of the two paths of swiftest arrival, and consequently of the ordinary and of the extraordinary ray." Under the heading "Definition of the word Ray" (p.309), he concludes that this term must be applied to the line which joins the center of the secondary wave to a point on its surface, whatever the inclination of this line to the surface. 
As a "new consideration" (pp.310–11), he notes that if a plane wavefront is passed through a small hole centered on point E, then the direction ED of maximum intensity of the resulting beam will be that in which the secondary wave starting from E will "arrive there the first", and the secondary wavefronts from opposite sides of the hole (equidistant from E) will "arrive at D in the same time" as each other. This direction is not assumed to be normal to any wavefront. Thus Fresnel showed, even for anisotropic media, that the ray path given by Huygens' construction is the path of least time between successive positions of a plane or diverging wavefront, that the ray velocities are the radii of the secondary "wave surface" after unit time, and that a stationary traversal time accounts for the direction of maximum intensity of a beam. However, establishing the general equivalence between Huygens' construction and Fermat's principle would have required further consideration of Fermat's principle in point-to-point terms. Hendrik Lorentz, in a paper written in 1886 and republished in 1907, deduced the principle of least time in point-to-point form from Huygens' construction. But the essence of his argument was somewhat obscured by an apparent dependence on aether and aether drag. Lorentz's work was cited in 1959 by Adriaan J. de Witte, who then offered his own argument, which "although in essence the same, is believed to be more cogent and more general". De Witte's treatment is more original than that description might suggest, although limited to two dimensions; it uses calculus of variations to show that Huygens' construction and Fermat's principle lead to the same differential equation for the ray path, and that in the case of Fermat's principle, the converse holds. De Witte also noted that "The matter seems to have escaped treatment in textbooks." In popular culture The short story Story of Your Life by the speculative fiction writer Ted Chiang contains visual depictions of Fermat's Principle along with a discussion of its teleological dimension. Keith Devlin's The Math Instinct contains a chapter, "Elvis the Welsh Corgi Who Can Do Calculus" that discusses the calculus "embedded" in some animals as they solve the "least time" problem in actual situations. See also Notes References Bibliography M. Born and E. Wolf, 2002, Principles of Optics, 7th Ed., Cambridge, 1999 (reprinted with corrections, 2002). J. Chaves, 2016, Introduction to Nonimaging Optics, 2nd Ed., Boca Raton, FL: CRC Press, . O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, . A.J. de Witte, 1959, "Equivalence of Huygens' principle and Fermat's principle in ray geometry", American Journal of Physics, vol.27, no.5 (May 1959), pp.293–301, .  Erratum: In Fig.7(b), each instance of "ray" should be "normal" (noted in vol.27, no.6, p.387). E. Frankel, 1974, "The search for a corpuscular theory of double refraction: Malus, Laplace and the competition of 1808", Centaurus, vol.18, no.3 (September 1974), pp.223–245, . A. Fresnel, 1827, "Mémoire sur la double réfraction", Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol. (for 1824, printed 1827), pp.45–176; reprinted as "Second mémoire ..." in Oeuvres complètes d'Augustin Fresnel, vol.2 (Paris: Imprimerie Impériale, 1868), pp.479–596; translated by A.W. Hobson as "Memoir on double refraction", in R.Taylor (ed.), Scientific Memoirs, vol. (London: Taylor & Francis, 1852), pp.238–333. 
(Cited page numbers are from the translation.) C. Huygens, 1690, Traité de la Lumière (Leiden: Van der Aa), translated by S.P. Thompson as Treatise on Light, University of Chicago Press, 1912; Project Gutenberg, 2005. (Cited page numbers match the 1912 edition and the Gutenberg HTML edition.) P. Mihas, 2006, "Developing ideas of refraction, lenses and rainbow through the use of historical resources", Science & Education, vol.17, no.7 (August ), pp.751–777 (online 6 September 2006), . I. Newton, 1730, Opticks: or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light, 4th Ed. (London: William Innys, 1730; Project Gutenberg, 2010); republished with foreword by A. Einstein and Introduction by E.T. Whittaker (London: George Bell & Sons, 1931); reprinted with additional Preface by I.B. Cohen and Analytical Table of Contents by D.H.D. Roller,  Mineola, NY: Dover, 1952, 1979 (with revised preface), 2012. (Cited page numbers match the Gutenberg HTML edition and the Dover editions.) A.I. Sabra, 1981, Theories of Light: From Descartes to Newton (London: Oldbourne Book Co., 1967), reprinted Cambridge University Press, 1981, . A.E. Shapiro, 1973, "Kinematic optics: A study of the wave theory of light in the seventeenth century", Archive for History of Exact Sciences, vol.11, no.2/3 (June 1973), pp.134–266, . T. Young, 1809, Article in the Quarterly Review, vol.2, no.4 (November 1809), . A. Ziggelaar, 1980, "The sine law of refraction derived from the principle of Fermat – prior to Fermat? The theses of Wilhelm Boelmans S.J. in 1634", Centaurus, vol.24, no.1 (September 1980), pp.246–62, . Further reading . J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, , especially . . M.S. Mahoney (1994), The Mathematical Career of Pierre de Fermat, 1601–1665, 2nd Ed., Princeton University Press, . R. Marqués; F. Martín; M. Sorolla, 2008 (reprinted 2013), Metamaterials with Negative Parameters: Theory, Design, and Microwave Applications, Hoboken, NJ: Wiley, . J.B. Pendry and D.R. Smith (2004), "Reversing Light With Negative Refraction", Physics Today, , . Physical phenomena Waves Optics Optical phenomena Physical optics Geometrical optics Calculus of variations Principles History of physics
Fermat's principle
[ "Physics", "Chemistry" ]
7,506
[ "Physical phenomena", "Applied and interdisciplinary physics", "Optics", "Optical phenomena", "Waves", "Motion (physics)", " molecular", "Atomic", " and optical physics" ]
37,245
https://en.wikipedia.org/wiki/Radionuclide
A radionuclide (radioactive nuclide, radioisotope or radioactive isotope) is a nuclide that has excess numbers of either neutrons or protons, giving it excess nuclear energy, and making it unstable. This excess energy can be used in one of three ways: emitted from the nucleus as gamma radiation; transferred to one of its electrons to release it as a conversion electron; or used to create and emit a new particle (alpha particle or beta particle) from the nucleus. During those processes, the radionuclide is said to undergo radioactive decay. These emissions are considered ionizing radiation because they are energetic enough to liberate an electron from another atom. The radioactive decay can produce a stable nuclide or will sometimes produce a new unstable radionuclide which may undergo further decay. Radioactive decay is a random process at the level of single atoms: it is impossible to predict when one particular atom will decay. However, for a collection of atoms of a single nuclide the decay rate, and thus the half-life (t1/2) for that collection, can be calculated from their measured decay constants. The range of the half-lives of radioactive atoms has no known limits and spans a time range of over 55 orders of magnitude. Radionuclides occur naturally or are artificially produced in nuclear reactors, cyclotrons, particle accelerators or radionuclide generators. There are about 730 radionuclides with half-lives longer than 60 minutes (see list of nuclides). Thirty-two of those are primordial radionuclides that were created before the Earth was formed. At least another 60 radionuclides are detectable in nature, either as daughters of primordial radionuclides or as radionuclides produced through natural production on Earth by cosmic radiation. More than 2400 radionuclides have half-lives less than 60 minutes. Most of those are only produced artificially, and have very short half-lives. For comparison, there are about 251 stable nuclides. All chemical elements can exist as radionuclides. Even the lightest element, hydrogen, has a well-known radionuclide, tritium. Elements heavier than lead, and the elements technetium and promethium, exist only as radionuclides. Unplanned exposure to radionuclides generally has a harmful effect on living organisms including humans, although low levels of exposure occur naturally without harm. The degree of harm will depend on the nature and extent of the radiation produced, the amount and nature of exposure (close contact, inhalation or ingestion), and the biochemical properties of the element; with increased risk of cancer the most usual consequence. However, radionuclides with suitable properties are used in nuclear medicine for both diagnosis and treatment. An imaging tracer made with radionuclides is called a radioactive tracer. A pharmaceutical drug made with radionuclides is called a radiopharmaceutical. Origin Natural On Earth, naturally occurring radionuclides fall into three categories: primordial radionuclides, secondary radionuclides, and cosmogenic radionuclides. Radionuclides are produced in stellar nucleosynthesis and supernova explosions along with stable nuclides. Most decay quickly but can still be observed astronomically and can play a part in understanding astronomic processes. Primordial radionuclides, such as uranium and thorium, exist in the present time because their half-lives are so long (>100 million years) that they have not yet completely decayed. 
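The statement that primordial radionuclides survive only because their half-lives are comparable to or longer than the age of the Earth can be illustrated numerically with the decay law N(t) = N0 · 2^(−t/t½). The half-lives and the age of the Earth used below are approximate round figures quoted for illustration.

```python
# Fraction of a primordial nuclide remaining after roughly the age of the Earth.
EARTH_AGE_GYR = 4.5                      # approximate age of the Earth, billions of years

half_lives_gyr = {                       # approximate half-lives, billions of years
    "uranium-238": 4.47,
    "uranium-235": 0.70,
    "thorium-232": 14.0,
    "potassium-40": 1.25,
}

for nuclide, t_half in half_lives_gyr.items():
    remaining = 2 ** (-EARTH_AGE_GYR / t_half)   # decay law N/N0 = 2**(-t / t_half)
    print(f"{nuclide:>12}: about {remaining:.0%} of the original amount remains")
# uranium-238 ~50%, uranium-235 ~1%, thorium-232 ~80%, potassium-40 ~8%
```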
Some radionuclides have half-lives so long (many times the age of the universe) that decay has only recently been detected, and for most practical purposes they can be considered stable, most notably bismuth-209: detection of this decay meant that bismuth was no longer considered stable. It is possible that decay may be observed in other nuclides, adding to this list of primordial radionuclides. Secondary radionuclides are radiogenic isotopes derived from the decay of primordial radionuclides. They have shorter half-lives than primordial radionuclides. They arise in the decay chain of the primordial isotopes thorium-232, uranium-238, and uranium-235. Examples include the natural isotopes of polonium and radium. Cosmogenic isotopes, such as carbon-14, are present because they are continually being formed in the atmosphere due to cosmic rays. Many of these radionuclides exist only in trace amounts in nature, including all cosmogenic nuclides. Secondary radionuclides will occur in proportion to their half-lives, so short-lived ones will be very rare. For example, polonium can be found in uranium ores at about 0.1 mg per metric ton (1 part in 10^10). Further radionuclides may occur in nature in virtually undetectable amounts as a result of rare events such as spontaneous fission or uncommon cosmic ray interactions. Nuclear fission Radionuclides are produced as an unavoidable result of nuclear fission and thermonuclear explosions. The process of nuclear fission creates a wide range of fission products, most of which are radionuclides. Further radionuclides can be created from irradiation of the nuclear fuel (creating a range of actinides) and of the surrounding structures, yielding activation products. This complex mixture of radionuclides with different chemistries and radioactivity makes handling nuclear waste and dealing with nuclear fallout particularly problematic. Synthetic Synthetic radionuclides are deliberately synthesised using nuclear reactors, particle accelerators or radionuclide generators: As well as being extracted from nuclear waste, radioisotopes can be produced deliberately with nuclear reactors, exploiting the high flux of neutrons present. These neutrons activate elements placed within the reactor. A typical product from a nuclear reactor is iridium-192. The elements that have a large propensity to take up the neutrons in the reactor are said to have a high neutron cross-section. Particle accelerators such as cyclotrons accelerate particles to bombard a target to produce radionuclides. Cyclotrons accelerate protons at a target to produce positron-emitting radionuclides, e.g. fluorine-18. Radionuclide generators contain a parent radionuclide that decays to produce a radioactive daughter. The parent is usually produced in a nuclear reactor. A typical example is the technetium-99m generator used in nuclear medicine. The parent produced in the reactor is molybdenum-99. Uses Radionuclides are used in two major ways: either for their radiation alone (irradiation, nuclear batteries) or for the combination of chemical properties and their radiation (tracers, biopharmaceuticals). In biology, radionuclides of carbon can serve as radioactive tracers because they are chemically very similar to the nonradioactive nuclides, so most chemical, biological, and ecological processes treat them in a nearly identical way. One can then examine the result with a radiation detector, such as a Geiger counter, to determine where the provided atoms were incorporated.
For example, one might culture plants in an environment in which the carbon dioxide contained radioactive carbon; then the parts of the plant that incorporate atmospheric carbon would be radioactive. Radionuclides can be used to monitor processes such as DNA replication or amino acid transport. In physics and biology, radionuclide X-ray fluorescence spectrometry is used to determine the chemical composition of a compound. Radiation from a radionuclide source hits the sample and excites characteristic X-rays in the sample. This radiation is registered and the chemical composition of the sample can be determined from the analysis of the measured spectrum. By measuring the energy of the characteristic radiation lines, it is possible to determine the proton number of the chemical element that emits the radiation, and by measuring the number of emitted photons, it is possible to determine the concentration of individual chemical elements. In nuclear medicine, radioisotopes are used for diagnosis, treatment, and research. Radioactive chemical tracers emitting gamma rays or positrons can provide diagnostic information about internal anatomy and the functioning of specific organs, including the human brain. This is used in some forms of tomography: single-photon emission computed tomography and positron emission tomography (PET) scanning and Cherenkov luminescence imaging. Radioisotopes are also a method of treatment in hemopoietic forms of tumors; success in treating solid tumors has been limited. More powerful gamma sources sterilise syringes and other medical equipment. In food preservation, radiation is used to stop the sprouting of root crops after harvesting, to kill parasites and pests, and to control the ripening of stored fruit and vegetables. Food irradiation usually uses beta-decaying nuclides with strong gamma emissions like cobalt-60 or caesium-137. In industry, and in mining, radionuclides are used to examine welds, to detect leaks, to study the rate of wear, erosion and corrosion of metals, and for on-stream analysis of a wide range of minerals and fuels. In spacecraft, radionuclides are used to provide power and heat, notably through radioisotope thermoelectric generators (RTGs) and radioisotope heater units (RHUs). In astronomy and cosmology, radionuclides play a role in understanding stellar and planetary processes. In particle physics, radionuclides help discover new physics (physics beyond the Standard Model) by measuring the energy and momentum of their beta decay products (for example, neutrinoless double beta decay and the search for weakly interacting massive particles). In ecology, radionuclides are used to trace and analyze pollutants, to study the movement of surface water, and to measure water runoffs from rain and snow, as well as the flow rates of streams and rivers. In geology, archaeology, and paleontology, natural radionuclides are used to measure ages of rocks, minerals, and fossil materials. Examples The following table lists properties of selected radionuclides illustrating the range of properties and uses. Key: Z = atomic number; N = neutron number; DM = decay mode; DE = decay energy; EC = electron capture Household smoke detectors Radionuclides are present in many homes as they are used inside the most common household smoke detectors. The radionuclide used is americium-241, which is created by bombarding plutonium with neutrons in a nuclear reactor. It decays by emitting alpha particles and gamma radiation to become neptunium-237.
Smoke detectors use a very small quantity of 241Am (about 0.29 micrograms per smoke detector) in the form of americium dioxide. 241Am is used as it emits alpha particles which ionize the air in the detector's ionization chamber. A small electric voltage is applied to the ionized air which gives rise to a small electric current. In the presence of smoke, some of the ions are neutralized, thereby decreasing the current, which activates the detector's alarm. Impacts on organisms Radionuclides that find their way into the environment may cause harmful effects as radioactive contamination. They can also cause damage if they are excessively used during treatment or in other ways exposed to living beings, by radiation poisoning. Potential health damage from exposure to radionuclides depends on a number of factors, and "can damage the functions of healthy tissue/organs. Radiation exposure can produce effects ranging from skin redness and hair loss, to radiation burns and acute radiation syndrome. Prolonged exposure can lead to cells being damaged and in turn lead to cancer. Signs of cancerous cells might not show up until years, or even decades, after exposure." Summary table for classes of nuclides, stable and radioactive Following is a summary table for the list of 989 nuclides with half-lives greater than one hour. A total of 251 nuclides have never been observed to decay, and are classically considered stable. Of these, 90 are believed to be absolutely stable except to proton decay (which has never been observed), while the rest are "observationally stable" and theoretically can undergo radioactive decay with extremely long half-lives. The remaining tabulated radionuclides have half-lives longer than 1 hour, and are well-characterized (see list of nuclides for a complete tabulation). They include 30 nuclides with measured half-lives longer than the estimated age of the universe (13.8 billion years), and another four nuclides with half-lives long enough (> 100 million years) that they are radioactive primordial nuclides, and may be detected on Earth, having survived from their presence in interstellar dust since before the formation of the Solar System, about 4.6 billion years ago. Another 60+ short-lived nuclides can be detected naturally as daughters of longer-lived nuclides or cosmic-ray products. The remaining known nuclides are known solely from artificial nuclear transmutation. Numbers are not exact, and may change slightly in the future, as "stable nuclides" are observed to be radioactive with very long half-lives. This is a summary table for the 989 nuclides with half-lives longer than one hour (including those that are stable), given in list of nuclides. List of commercially available radionuclides This list covers common isotopes, most of which are available in very small quantities to the general public in most countries. Others that are not publicly accessible are traded commercially in industrial, medical, and scientific fields and are subject to government regulation. Gamma emission only Beta emission only Alpha emission only Multiple radiation emitters See also List of nuclides shows all radionuclides with half-life > 1 hour Hyperaccumulators table – 3 Radioactivity in biology Radiometric dating Radionuclide cisternogram Uses of radioactivity in oil and gas wells Notes References Further reading External links EPA – Radionuclides – EPA's Radiation Protection Program: Information. FDA – Radionuclides – FDA's Radiation Protection Program: Information. 
Interactive Chart of Nuclides – A chart of all nuclides National Isotope Development Center – U.S. Government source of radionuclides – production, research, development, distribution, and information The Live Chart of Nuclides – IAEA Radionuclides production simulator – IAEA Radioactivity Isotopes Nuclear physics Nuclear chemistry
Radionuclide
[ "Physics", "Chemistry" ]
3,056
[ "Nuclear chemistry", "Isotopes", "nan", "Nuclear physics", "Radioactivity" ]
37,257
https://en.wikipedia.org/wiki/Radioactive%20waste
Radioactive waste is a type of hazardous waste that contains radioactive material. It is a result of many activities, including nuclear medicine, nuclear research, nuclear power generation, nuclear decommissioning, rare-earth mining, and nuclear weapons reprocessing. The storage and disposal of radioactive waste is regulated by government agencies in order to protect human health and the environment. Radioactive waste is broadly classified into 3 categories: low-level waste (LLW), such as paper, rags, tools, clothing, which contain small amounts of mostly short-lived radioactivity; intermediate-level waste (ILW), which contains higher amounts of radioactivity and requires some shielding; and high-level waste (HLW), which is highly radioactive and hot due to decay heat, thus requiring cooling and shielding. In nuclear reprocessing plants, about 96% of spent nuclear fuel is recycled back into uranium-based and mixed-oxide (MOX) fuels. The residual 4% is minor actinides and fission products, the latter of which are a mixture of stable and quickly decaying (most likely already having decayed in the spent fuel pool) elements, medium lived fission products such as strontium-90 and caesium-137 and finally seven long-lived fission products with half lives in the hundreds of thousands to millions of years. The minor actinides meanwhile are heavy elements other than uranium and plutonium which are created by neutron capture. Their half lives range from years to millions of years and as alpha emitters they are particularly radiotoxic. While there are proposed – and to a much lesser extent current – uses of all those elements, commercial scale reprocessing using the PUREX-process disposes of them as waste together with the fission products. The waste is subsequently converted into a glass-like ceramic for storage in a deep geological repository. The time radioactive waste must be stored depends on the type of waste and radioactive isotopes it contains. Short-term approaches to radioactive waste storage have been segregation and storage on the surface or near-surface of the earth. Burial in a deep geological repository is a favored solution for long-term storage of high-level waste, while re-use and transmutation are favored solutions for reducing the HLW inventory. Boundaries to recycling of spent nuclear fuel are regulatory and economic as well as the issue of radioactive contamination if chemical separation processes cannot achieve a very high purity. Furthermore, elements may be present in both useful and troublesome isotopes, which would require costly and energy intensive isotope separation for their use – a currently uneconomic prospect. A summary of the amounts of radioactive waste and management approaches for most developed countries are presented and reviewed periodically as part of a joint convention of the International Atomic Energy Agency (IAEA). Nature and significance A quantity of radioactive waste typically consists of a number of radionuclides, which are unstable isotopes of elements that undergo decay and thereby emit ionizing radiation, which is harmful to humans and the environment. Different isotopes emit different types and levels of radiation, which last for different periods of time. Physics The radioactivity of all radioactive waste weakens with time. All radionuclides contained in the waste have a half-life—the time it takes for half of the atoms to decay into another nuclide. Eventually, all radioactive waste decays into non-radioactive elements (i.e., stable nuclides). 
Since radioactive decay follows an exponential law, the rate of decay of a given quantity of a nuclide is inversely proportional to its half-life. In other words, the radiation from a long-lived isotope like iodine-129 will be much less intense than that of a short-lived isotope like iodine-131: for equal numbers of atoms, the ratio of activities is the inverse ratio of the half-lives, about 15.7 million years for iodine-129 against 8 days for iodine-131, a factor of roughly 7 × 10^8. The two tables show some of the major radioisotopes, their half-lives, and their radiation yield as a proportion of the yield of fission of uranium-235. The energy and the type of the ionizing radiation emitted by a radioactive substance are also important factors in determining its threat to humans. The chemical properties of the radioactive element will determine how mobile the substance is and how likely it is to spread into the environment and contaminate humans. This is further complicated by the fact that many radioisotopes do not decay immediately to a stable state but rather to radioactive decay products within a decay chain before ultimately reaching a stable state. Pharmacokinetics Exposure to radioactive waste may cause health impacts due to ionizing radiation exposure. In humans, a dose of 1 sievert carries a 5.5% risk of developing cancer, and regulatory agencies assume the risk is linearly proportional to dose even for low doses. Ionizing radiation can cause deletions in chromosomes. If a developing organism such as a fetus is irradiated, it is possible a birth defect may be induced, but it is unlikely this defect will be in a gamete or a gamete-forming cell. The incidence of radiation-induced mutations in humans is small, as in most mammals, because of natural cellular-repair mechanisms, many just now coming to light. These mechanisms range from DNA, mRNA and protein repair, to internal lysosomic digestion of defective proteins, and even induced cell suicide—apoptosis. Depending on the decay mode and the pharmacokinetics of an element (how the body processes it and how quickly), the threat due to exposure to a given activity of a radioisotope will differ. For instance, iodine-131 is a short-lived beta and gamma emitter, but because it concentrates in the thyroid gland, it is more able to cause injury than caesium-137 which, being water soluble, is rapidly excreted through urine. In a similar way, the alpha emitting actinides and radium are considered very harmful as they tend to have long biological half-lives and their radiation has a high relative biological effectiveness, making it far more damaging to tissues per amount of energy deposited. Because of such differences, the rules determining biological injury differ widely according to the radioisotope, time of exposure, and sometimes also the nature of the chemical compound which contains the radioisotope. Sources Radioactive waste comes from a number of sources. In countries with nuclear power plants, nuclear armament, or nuclear fuel treatment plants, the majority of waste originates from the nuclear fuel cycle and nuclear weapons reprocessing. Other sources include medical and industrial wastes, as well as naturally occurring radioactive materials (NORM) that can be concentrated as a result of the processing or consumption of coal, oil, and gas, and some minerals, as discussed below. Nuclear fuel cycle Front end Waste from the front end of the nuclear fuel cycle is usually alpha-emitting waste from the extraction of uranium. It often contains radium and its decay products. Uranium dioxide (UO2) concentrate from mining is a thousand or so times as radioactive as the granite used in buildings.
It is refined from yellowcake (U3O8), then converted to uranium hexafluoride gas (UF6). As a gas, it undergoes enrichment to increase the U-235 content from 0.7% to about 4.4% (LEU). It is then turned into a hard ceramic oxide (UO2) for assembly as reactor fuel elements. The main by-product of enrichment is depleted uranium (DU), principally the U-238 isotope, with a U-235 content of ~0.3%. It is stored, either as UF6 or as U3O8. Some is used in applications where its extremely high density makes it valuable, such as anti-tank shells, and on at least one occasion even a sailboat keel. It is also used with plutonium for making mixed oxide fuel (MOX) and to dilute, or downblend, highly enriched uranium from weapons stockpiles which is now being redirected to become reactor fuel. Back end The back-end of the nuclear fuel cycle, mostly spent fuel rods, contains fission products that emit beta and gamma radiation, and actinides that emit alpha particles, such as uranium-234 (half-life 245 thousand years), neptunium-237 (2.144 million years), plutonium-238 (87.7 years) and americium-241 (432 years), and even sometimes some neutron emitters such as californium (half-life of 898 years for californium-251). These isotopes are formed in nuclear reactors. It is important to distinguish the processing of uranium to make fuel from the reprocessing of used fuel. Used fuel contains the highly radioactive products of fission (see high-level waste below). Many of these are neutron absorbers, called neutron poisons in this context. These eventually build up to a level where they absorb so many neutrons that the chain reaction stops, even with the control rods completely removed from a reactor. At that point, the fuel has to be replaced in the reactor with fresh fuel, even though there is still a substantial quantity of uranium-235 and plutonium present. In the United States, this used fuel is usually "stored", while in other countries such as Russia, the United Kingdom, France, Japan, and India, the fuel is reprocessed to remove the fission products, and the fuel can then be re-used. The fission products removed from the fuel are a concentrated form of high-level waste as are the chemicals used in the process. While most countries reprocess the fuel carrying out single plutonium cycles, India is planning multiple plutonium recycling schemes and Russia pursues a closed cycle. Fuel composition and long term radioactivity The use of different fuels in nuclear reactors results in different spent nuclear fuel (SNF) composition, with varying activity curves. The most abundant material is U-238, together with other uranium isotopes, other actinides, fission products, and activation products. Long-lived radioactive waste from the back end of the fuel cycle is especially relevant when designing a complete waste management plan for SNF. When looking at long-term radioactive decay, the actinides in the SNF have a significant influence due to their characteristically long half-lives. Depending on what a nuclear reactor is fueled with, the actinide composition in the SNF will be different. An example of this effect is the use of nuclear fuels with thorium. Th-232 is a fertile material that can undergo a neutron capture reaction and two beta minus decays, resulting in the production of fissile U-233. The SNF of a cycle with thorium will contain U-233. Its radioactive decay will strongly influence the long-term activity curve of the SNF for around a million years.
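Written out, the breeding sequence just described is the familiar thorium chain; the intermediate half-lives (roughly 22 minutes and 27 days) are standard values added here for orientation and are not quoted in this article:

\[
{}^{232}\mathrm{Th} \xrightarrow{(n,\gamma)} {}^{233}\mathrm{Th} \xrightarrow{\beta^-,\ t_{1/2}\approx 22\ \mathrm{min}} {}^{233}\mathrm{Pa} \xrightarrow{\beta^-,\ t_{1/2}\approx 27\ \mathrm{d}} {}^{233}\mathrm{U}
\]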
A comparison of the activity associated with U-233 for three different SNF types can be seen in the figure on the top right. The burnt fuels are thorium with reactor-grade plutonium (RGPu), thorium with weapons-grade plutonium (WGPu), and Mixed oxide fuel (MOX, no thorium). For RGPu and WGPu, the initial amount of U-233 and its decay for around a million years can be seen. This has an effect on the total activity curve of the three fuel types. The initial absence of U-233 and its daughter products in the MOX fuel results in a lower activity in region 3 of the figure at the bottom right, whereas for RGPu and WGPu the curve is maintained higher due to the presence of U-233 that has not fully decayed. Nuclear reprocessing can remove the actinides from the spent fuel so they can be used or destroyed. Proliferation concerns Since uranium and plutonium are nuclear weapons materials, there are proliferation concerns. Ordinarily (in spent nuclear fuel), plutonium is reactor-grade plutonium. In addition to plutonium-239, which is highly suitable for building nuclear weapons, it contains large amounts of undesirable contaminants: plutonium-240, plutonium-241, and plutonium-238. These isotopes are extremely difficult to separate, and more cost-effective ways of obtaining fissile material exist (e.g., uranium enrichment or dedicated plutonium production reactors). High-level waste is full of highly radioactive fission products, most of which are relatively short-lived. This is a concern since if the waste is stored, perhaps in deep geological storage, over many years the fission products decay, decreasing the radioactivity of the waste and making the plutonium easier to access. The undesirable contaminant Pu-240 decays faster than the Pu-239, and thus the quality of the bomb material increases with time (although its quantity decreases during that time as well). Thus, some have argued, as time passes, these deep storage areas have the potential to become "plutonium mines", from which material for nuclear weapons can be acquired with relatively little difficulty. Critics of the latter idea have pointed out that the difficulty of recovering useful material from sealed deep storage areas makes other methods preferable. Specifically, high radioactivity and heat (80 °C in surrounding rock) greatly increase the difficulty of mining a storage area, and the enrichment methods required have high capital costs. Pu-239 decays to U-235 which is suitable for weapons and which has a very long half-life (roughly 10^9 years). Thus plutonium may decay and leave uranium-235. However, modern reactors are only moderately enriched with U-235 relative to U-238, so the U-238 continues to serve as a denaturation agent for any U-235 produced by plutonium decay. One solution to this problem is to recycle the plutonium and use it as a fuel, e.g., in fast reactors. In pyrometallurgical fast reactors, the separated plutonium and uranium are contaminated by actinides and cannot be used for nuclear weapons. Nuclear weapons decommissioning Waste from nuclear weapons decommissioning is unlikely to contain much beta or gamma activity other than tritium and americium. It is more likely to contain alpha-emitting actinides such as Pu-239 which is a fissile material used in nuclear bombs, plus some material with much higher specific activities, such as Pu-238 or Po. In the past the neutron trigger for an atomic bomb tended to be beryllium and a high activity alpha emitter such as polonium; an alternative to polonium is Pu-238.
For reasons of national security, details of the design of modern nuclear bombs are normally not released to the open literature. Some designs might contain a radioisotope thermoelectric generator using Pu-238 to provide a long-lasting source of electrical power for the electronics in the device. It is likely that the fissile material of an old nuclear bomb, which is due for refitting, will contain decay products of the plutonium isotopes used in it. These are likely to include U-236 from Pu-240 impurities plus some U-235 from decay of the Pu-239; due to the relatively long half-life of these Pu isotopes, these wastes from radioactive decay of bomb core material would be very small, and in any case, far less dangerous (even in terms of simple radioactivity) than the Pu-239 itself. The beta decay of Pu-241 forms Am-241; the in-growth of americium is likely to be a greater problem than the decay of Pu-239 and Pu-240 as the americium is a gamma emitter (increasing external-exposure to workers) and is an alpha emitter which can cause the generation of heat. The plutonium could be separated from the americium by several different processes; these would include pyrochemical processes and aqueous/organic solvent extraction. A truncated PUREX type extraction process would be one possible method of making the separation. Naturally occurring uranium is not fissile because it contains 99.3% of U-238 and only 0.7% of U-235. Legacy waste Due to historic activities typically related to the radium industry, uranium mining, and military programs, numerous sites contain or are contaminated with radioactivity. In the United States alone, the Department of Energy (DOE) states there are "millions of gallons of radioactive waste" as well as "thousands of tons of spent nuclear fuel and material" and also "huge quantities of contaminated soil and water." Despite copious quantities of waste, in 2007, the DOE stated a goal of cleaning all presently contaminated sites successfully by 2025. The Fernald, Ohio site for example had "31 million pounds of uranium product", "2.5 billion pounds of waste", "2.75 million cubic yards of contaminated soil and debris", and a "223 acre portion of the underlying Great Miami Aquifer had uranium levels above drinking standards." The United States has at least 108 sites designated as areas that are contaminated and unusable, sometimes many thousands of acres. The DOE wishes to clean or mitigate many or all by 2025, using the recently developed method of geomelting, however the task can be difficult and it acknowledges that some may never be completely remediated. In just one of these 108 larger designations, Oak Ridge National Laboratory (ORNL), there were for example at least "167 known contaminant release sites" in one of the three subdivisions of the site. Some of the U.S. sites were smaller in nature, however, cleanup issues were simpler to address, and the DOE has successfully completed cleanup, or at least closure, of several sites. Medicine Radioactive medical waste tends to contain beta particle and gamma ray emitters. It can be divided into two main classes. In diagnostic nuclear medicine a number of short-lived gamma emitters such as technetium-99m are used. Many of these can be disposed of by leaving it to decay for a short time before disposal as normal waste. 
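As a rough illustration of this decay-in-storage practice, the surviving activity fraction can be estimated directly from the half-life. The sketch below assumes the commonly quoted half-life of about 6 hours for technetium-99m, a value not stated in this article:

def fraction_remaining(hours_stored, half_life_hours=6.0):
    """Fraction of the initial activity left after a storage period."""
    return 0.5 ** (hours_stored / half_life_hours)

# After ten half-lives (about 2.5 days for Tc-99m) less than 0.1% of the
# activity remains, which is why short-lived diagnostic isotopes can simply
# be held for a while and then disposed of as normal waste.
print(fraction_remaining(60))   # ~0.00098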
Other isotopes used in medicine, with half-lives in parentheses, include: Y-90, used for treating lymphoma (2.7 days) I-131, used for thyroid function tests and for treating thyroid cancer (8.0 days) Sr-89, used for treating bone cancer, intravenous injection (52 days) Ir-192, used for brachytherapy (74 days) Co-60, used for brachytherapy and external radiotherapy (5.3 years) Cs-137, used for brachytherapy and external radiotherapy (30 years) Tc-99, product of the decay of Technetium-99m (221,000 years) Industry Industrial source waste can contain alpha, beta, neutron or gamma emitters. Gamma emitters are used in radiography while neutron emitting sources are used in a range of applications, such as oil well logging. Naturally occurring radioactive material Substances containing natural radioactivity are known as NORM (naturally occurring radioactive material). After human processing that exposes or concentrates this natural radioactivity (such as mining bringing coal to the surface or burning it to produce concentrated ash), it becomes technologically enhanced naturally occurring radioactive material (TENORM). Much of this waste is alpha particle-emitting matter from the decay chains of uranium and thorium. The main source of radiation in the human body is potassium-40 (40K), typically 17 milligrams in the body at a time and 0.4 milligrams/day intake. Most rocks, especially granite, have a low level of radioactivity due to the potassium-40, thorium and uranium contained. Usually ranging from 1 millisievert (mSv) to 13 mSv annually depending on location, average radiation exposure from natural radioisotopes is 2.0 mSv per person a year worldwide. This makes up the majority of typical total dosage (with mean annual exposure from other sources amounting to 0.6 mSv from medical tests averaged over the whole populace, 0.4 mSv from cosmic rays, 0.005 mSv from the legacy of past atmospheric nuclear testing, 0.005 mSv occupational exposure, 0.002 mSv from the Chernobyl disaster, and 0.0002 mSv from the nuclear fuel cycle). TENORM is not regulated as restrictively as nuclear reactor waste, though there are no significant differences in the radiological risks of these materials. Coal Coal contains a small amount of radioactive uranium, barium, thorium, and potassium, but, in the case of pure coal, this is significantly less than the average concentration of those elements in the Earth's crust. The surrounding strata, if shale or mudstone, often contain slightly more than average and this may also be reflected in the ash content of 'dirty' coals. The more active ash minerals become concentrated in the fly ash precisely because they do not burn well. The radioactivity of fly ash is about the same as black shale and is less than phosphate rocks, but is more of a concern because a small amount of the fly ash ends up in the atmosphere where it can be inhaled. According to U.S. National Council on Radiation Protection and Measurements (NCRP) reports, population exposure from 1000-MWe power plants amounts to 490 person-rem/year for coal power plants, 100 times as great as nuclear power plants (4.8 person-rem/year). The exposure from the complete nuclear fuel cycle from mining to waste disposal is 136 person-rem/year; the corresponding value for coal use from mining to waste disposal is "probably unknown". Oil and gas Residues from the oil and gas industry often contain radium and its decay products. 
The sulfate scale from an oil well can be radium rich, while the water, oil, and gas from a well often contain radon. The radon decays to form solid radioisotopes which form coatings on the inside of pipework. In an oil processing plant, the area of the plant where propane is processed is often one of the more contaminated areas of the plant as radon has a similar boiling point to propane. Radioactive elements are an industrial problem in some oil wells where workers operating in direct contact with the crude oil and brine can be exposed to doses having negative health effects. Due to the relatively high concentration of these elements in the brine, its disposal is also a technological challenge. Since the 1980s, in the United States, the brine is however exempt from the dangerous waste regulations and can be disposed of regardless of radioactive or toxic substances content. Rare-earth mining Due to natural occurrence of radioactive elements such as thorium and radium in rare-earth ore, mining operations also result in production of waste and mineral deposits that are slightly radioactive. Classification Classification of radioactive waste varies by country. The IAEA, which publishes the Radioactive Waste Safety Standards (RADWASS), also plays a significant role. The proportion of various types of waste generated in the UK: 94% – low-level waste (LLW) ~6% – intermediate-level waste (ILW) <1% – high-level waste (HLW) Mill tailings Uranium tailings are waste by-product materials left over from the rough processing of uranium-bearing ore. They are not significantly radioactive. Mill tailings are sometimes referred to as 11(e)2 wastes, from the section of the US Atomic Energy Act of 1946 that defines them. Uranium mill tailings typically also contain chemically hazardous heavy metal such as lead and arsenic. Vast mounds of uranium mill tailings are left at many old mining sites, especially in Colorado, New Mexico, and Utah. Although mill tailings are not very radioactive, they have long half-lives. Mill tailings often contain radium, thorium and trace amounts of uranium. Low-level waste Low-level waste (LLW) is generated from hospitals and industry, as well as the nuclear fuel cycle. Low-level wastes include paper, rags, tools, clothing, filters, and other materials which contain small amounts of mostly short-lived radioactivity. Materials that originate from any region of an Active Area are commonly designated as LLW as a precautionary measure even if there is only a remote possibility of being contaminated with radioactive materials. Such LLW typically exhibits no higher radioactivity than one would expect from the same material disposed of in a non-active area, such as a normal office block. Example LLW includes wiping rags, mops, medical tubes, laboratory animal carcasses, and more. LLW makes up 94% of all radioactive waste volume in the UK. Most of it is disposed of in Cumbria, first in landfill style trenches, and now using grouted metal containers that are stacked in concrete vaults. A new site in the north of Scotland is the Dounreay site which is prepared to withstand a 4m tsunami. Some high-activity LLW requires shielding during handling and transport but most LLW is suitable for shallow land burial. To reduce its volume, it is often compacted or incinerated before disposal. Low-level waste is divided into four classes: class A, class B, class C, and Greater Than Class C (GTCC). 
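Purely as a restatement of the UK volume breakdown and the broad handling requirements given in this article (not an official classification tool), the three tiers can be summarised as a small lookup:

UK_WASTE_TIERS = {
    "LLW": {"share_of_volume": "94%", "handling": "little or no shielding; shallow land burial, often after compaction or incineration"},
    "ILW": {"share_of_volume": "~6%", "handling": "shielding required, but no cooling"},
    "HLW": {"share_of_volume": "<1%", "handling": "shielding and cooling required because of decay heat"},
}

for tier, info in UK_WASTE_TIERS.items():
    print(tier, info["share_of_volume"], "-", info["handling"])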
Intermediate-level waste Intermediate-level waste (ILW) contains higher amounts of radioactivity compared to low-level waste. It generally requires shielding, but not cooling. Intermediate-level wastes includes resins, chemical sludge and metal nuclear fuel cladding, as well as contaminated materials from reactor decommissioning. It may be solidified in concrete or bitumen or mixed with silica sand and vitrified for disposal. As a general rule, short-lived waste (mainly non-fuel materials from reactors) is buried in shallow repositories, while long-lived waste (from fuel and fuel reprocessing) is deposited in geological repository. Regulations in the United States do not define this category of waste; the term is used in Europe and elsewhere. ILW makes up 6% of all radioactive waste volume in the UK. High-level waste High-level waste (HLW) is produced by nuclear reactors and the reprocessing of nuclear fuel. The exact definition of HLW differs internationally. After a nuclear fuel rod serves one fuel cycle and is removed from the core, it is considered HLW. Spent fuel rods contain mostly uranium with fission products and transuranic elements generated in the reactor core. Spent fuel is highly radioactive and often hot. HLW accounts for over 95% of the total radioactivity produced in the process of nuclear electricity generation but it contributes to less than 1% of volume of all radioactive waste produced in the UK. Overall, the 60-year-long nuclear program in the UK up until 2019 produced 2150 m3 of HLW. The radioactive waste from spent fuel rods consists primarily of cesium-137 and strontium-90, but it may also include plutonium, which can be considered transuranic waste. The half-lives of these radioactive elements can differ quite extremely. Some elements, such as cesium-137 and strontium-90 have half-lives of approximately 30 years. Meanwhile, plutonium has a half-life that can stretch to as long as 24,000 years. The amount of HLW worldwide is increasing by about 12,000 tonnes per year. A 1000-megawatt nuclear power plant produces about 27 tonnes of spent nuclear fuel (unreprocessed) every year. For comparison, the amount of ash produced by coal power plants in the United States is estimated at 130,000,000 t per year and fly ash is estimated to release 100 times more radiation than an equivalent nuclear power plant. In 2010, it was estimated that about 250,000 t of nuclear HLW were stored globally. This does not include amounts that have escaped into the environment from accidents or tests. Japan is estimated to hold 17,000 t of HLW in storage in 2015. As of 2019, the United States has over 90,000 t of HLW. HLW have been shipped to other countries to be stored or reprocessed and, in some cases, shipped back as active fuel. The ongoing controversy over high-level radioactive waste disposal is a major constraint on nuclear power global expansion. Most scientists agree that the main proposed long-term solution is deep geological burial, either in a mine or a deep borehole. As of 2019, no dedicated civilian high-level nuclear waste site is operational as small amounts of HLW did not justify the investment in the past. Finland is in the advanced stage of the construction of the Onkalo spent nuclear fuel repository, which is planned to open in 2025 at 400–450 m depth. France is in the planning phase for a 500 m deep Cigeo facility in Bure. Sweden is planning a site in Forsmark. Canada plans a 680 m deep facility near Lake Huron in Ontario. 
The Republic of Korea plans to open a site around 2028. The site in Sweden enjoys 80% support from local residents as of 2020. The Morris Operation in Grundy County, Illinois, is currently the only de facto high-level radioactive waste storage site in the United States. Transuranic waste Transuranic waste (TRUW) as defined by U.S. regulations is, without regard to form or origin, waste that is contaminated with alpha-emitting transuranic radionuclides with half-lives greater than 20 years and concentrations greater than 100 nCi/g (3.7 MBq/kg), excluding high-level waste. Elements that have an atomic number greater than uranium are called transuranic ("beyond uranium"). Because of their long half-lives, TRUW is disposed of more cautiously than either low- or intermediate-level waste. In the United States, it arises mainly from nuclear weapons production, and consists of clothing, tools, rags, residues, debris, and other items contaminated with small amounts of radioactive elements (mainly plutonium). Under U.S. law, transuranic waste is further categorized into "contact-handled" (CH) and "remote-handled" (RH) on the basis of the radiation dose rate measured at the surface of the waste container. CH TRUW has a surface dose rate not greater than 200 mrem per hour (2 mSv/h), whereas RH TRUW has a surface dose rate of 200 mrem/h (2 mSv/h) or greater. CH TRUW does not have the very high radioactivity of high-level waste, nor its high heat generation, but RH TRUW can be highly radioactive, with surface dose rates up to 1,000,000 mrem/h (10,000 mSv/h). The United States currently disposes of TRUW generated from military facilities at the Waste Isolation Pilot Plant (WIPP) in a deep salt formation in New Mexico. Prevention A future way to reduce waste accumulation is to phase out current reactors in favor of Generation IV reactors, which output less waste per power generated. Fast reactors such as BN-800 in Russia are also able to consume MOX fuel that is manufactured from recycled spent fuel from traditional reactors. The UK's Nuclear Decommissioning Authority published a position paper in 2014 on the progress on approaches to the management of separated plutonium, which summarises the conclusions of the work that the NDA shared with the UK government. Management Of particular concern in nuclear waste management are two long-lived fission products, Tc-99 (half-life 220,000 years) and I-129 (half-life 15.7 million years), which dominate spent fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are Np-237 (half-life two million years) and Pu-239 (half-life 24,000 years). Nuclear waste requires sophisticated treatment and management to successfully isolate it from interacting with the biosphere. This usually necessitates treatment, followed by a long-term management strategy involving storage, disposal or transformation of the waste into a non-toxic form. Governments around the world are considering a range of waste management and disposal options, though there has been limited progress toward long-term waste management solutions. Several methods of disposal of radioactive waste have been investigated: Deep geological repository Dry cask storage Deep borehole disposal – not implemented. Rock melting – not implemented. Ocean disposal – used by the USSR, the United Kingdom, Switzerland, the United States, Belgium, France, the Netherlands, Japan, Sweden, Russia, Germany, Italy and South Korea (1954–1993). 
This is no longer permitted by international agreements. Disposal in ice sheets – rejected in Antarctic Treaty. Deep well injection – used by USSR and USA. Nuclear transmutation, using neutron capture to convert the unstable atoms to those with shorter half-lives. Nuclear reprocessing such as the PUREX process allows for reuse of some radioactive materials. Disposal in outer space – not implemented as too expensive. In the United States, waste management policy broke down with the ending of work on the incomplete Yucca Mountain Repository. At present there are 70 nuclear power plant sites where spent fuel is stored. A Blue Ribbon Commission was appointed by U.S. President Obama to look into future options for this and future waste. A deep geological repository seems to be favored. Ducrete, Saltcrete, and Synroc are methods for immobilizing nuclear waste. Initial treatment Vitrification Long-term storage of radioactive waste requires the stabilization of the waste into a form that will neither react nor degrade for extended periods. It is theorized that one way to do this might be through vitrification. Currently at Sellafield, the high-level waste (PUREX first cycle raffinate) is mixed with sugar and then calcined. Calcination involves passing the waste through a heated, rotating tube. The purposes of calcination are to evaporate the water from the waste and de-nitrate the fission products to assist the stability of the glass produced. The 'calcine' generated is fed continuously into an induction heated furnace with fragmented glass. The resulting glass is a new substance in which the waste products are bonded into the glass matrix when it solidifies. As a melt, this product is poured into stainless steel cylindrical containers ("cylinders") in a batch process. When cooled, the fluid solidifies ("vitrifies") into the glass. After being formed, the glass is highly resistant to water. After filling a cylinder, a seal is welded onto the cylinder head. The cylinder is then washed. After being inspected for external contamination, the steel cylinder is stored, usually in an underground repository. In this form, the waste products are expected to be immobilized for thousands of years. The glass inside a cylinder is usually a black glossy substance. All this work (in the United Kingdom) is done using hot cell systems. Sugar is added to control the ruthenium chemistry and to stop the formation of the volatile RuO4 containing radioactive ruthenium isotopes. In the West, the glass is normally a borosilicate glass (similar to Pyrex), while in the former Soviet Union it is normal to use a phosphate glass. The amount of fission products in the glass must be limited because some (palladium, the other Pt group metals, and tellurium) tend to form metallic phases which separate from the glass. Bulk vitrification uses electrodes to melt soil and wastes, which are then buried underground. In Germany, a vitrification plant is treating the waste from a small demonstration reprocessing plant which has since been closed. Phosphate ceramics Vitrification is not the only way to stabilize the waste into a form that will not react or degrade for extended periods. Immobilization via direct incorporation into a phosphate-based crystalline ceramic host is also used. The diverse chemistry of phosphate ceramics under various conditions demonstrates a versatile material that can withstand chemical, thermal, and radioactive degradation over time. 
The properties of phosphates, particularly ceramic phosphates, of stability over a wide pH range, low porosity, and minimization of secondary waste introduces possibilities for new waste immobilization techniques. Ion exchange It is common for medium active wastes in the nuclear industry to be treated with ion exchange or other means to concentrate the radioactivity into a small volume. The much less radioactive bulk (after treatment) is often then discharged. For instance, it is possible to use a ferric hydroxide floc to remove radioactive metals from aqueous mixtures. After the radioisotopes are absorbed onto the ferric hydroxide, the resulting sludge can be placed in a metal drum before being mixed with cement to form solid waste. In order to get better long-term performance (mechanical stability) from such forms, they may be made from a mixture of fly ash, or blast furnace slag, and portland cement, instead of normal concrete (made with portland cement, gravel and sand). Synroc The Australian Synroc (synthetic rock) is a more sophisticated way to immobilize such waste, and this process may eventually come into commercial use for civil wastes (it is currently being developed for U.S. military wastes). Synroc was invented by Ted Ringwood, a geochemist at the Australian National University. The Synroc contains pyrochlore and cryptomelane type minerals. The original form of Synroc (Synroc C) was designed for the liquid high-level waste (PUREX raffinate) from a light-water reactor. The main minerals in this Synroc are hollandite (BaAl2Ti6O16), zirconolite (CaZrTi2O7) and perovskite (CaTiO3). The zirconolite and perovskite are hosts for the actinides. The strontium and barium will be fixed in the perovskite. The caesium will be fixed in the hollandite. A Synroc waste treatment facility began construction in 2018 at ANSTO. Long-term management The time frame in question when dealing with radioactive waste ranges from 10,000 to 1,000,000 years, according to studies based on the effect of estimated radiation doses. Researchers suggest that forecasts of health detriment for such periods should be examined critically. Practical studies only consider up to 100 years as far as effective planning and cost evaluations are concerned. Long term behavior of radioactive wastes remains a subject for ongoing research projects in geoforecasting. Remediation Algae has shown selectivity for strontium in studies, where most plants used in bioremediation have not shown selectivity between calcium and strontium, often becoming saturated with calcium, which is present in greater quantities in nuclear waste. Strontium-90 with a half life around 30 years, is classified as high-level waste. Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium of S. spinosus, suggesting that it may be appropriate for use of nuclear wastewater. A study of the pond alga Closterium moniliferum using non-radioactive strontium found that varying the ratio of barium to strontium in water improved strontium selectivity. Above-ground disposal Dry cask storage typically involves taking waste from a spent fuel pool and sealing it (along with an inert gas) in a steel cylinder, which is placed in a concrete cylinder which acts as a radiation shield. It is a relatively inexpensive method which can be done at a central facility or adjacent to the source reactor. The waste can be easily retrieved for reprocessing. 
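How long waste must sit in dry casks or a repository is set by how quickly its activity actually falls. For the fission products with roughly 30-year half-lives mentioned above (strontium-90 and caesium-137), the time for the activity of a single nuclide to drop by a chosen factor is the half-life multiplied by the base-2 logarithm of that factor; a minimal, illustrative sketch:

import math

def years_to_fall_by(factor, half_life_years=30.0):
    """Years for the activity of one nuclide to drop by the given factor."""
    return half_life_years * math.log2(factor)

for factor in (10, 100, 1000):
    print(factor, round(years_to_fall_by(factor)))   # ~100, ~199, ~299 years

The 10,000-to-1,000,000-year horizons discussed in the long-term management paragraph above are therefore driven not by these two nuclides but by the much longer-lived actinides and fission products in the waste.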
Geologic disposal The process of selecting appropriate deep final repositories for high-level waste and spent fuel is now underway in several countries, with the first expected to be commissioned sometime after 2010. The basic concept is to locate a large, stable geologic formation and use mining technology to excavate a tunnel, or use large-bore tunnel boring machines (similar to those used to drill the Channel Tunnel from England to France) to drill a shaft below the surface where rooms or vaults can be excavated for disposal of high-level radioactive waste. The goal is to permanently isolate nuclear waste from the human environment. Many people remain uncomfortable with the immediate stewardship cessation of this disposal system, suggesting perpetual management and monitoring would be more prudent. Because some radioactive species have half-lives longer than one million years, even very low container leakage and radionuclide migration rates must be taken into account. Moreover, it may require more than one half-life until some nuclear materials lose enough radioactivity to cease being lethal to living things. A 1983 review of the Swedish radioactive waste disposal program by the National Academy of Sciences found that country's estimate of several hundred thousand years—perhaps up to one million years—being necessary for waste isolation "fully justified." The proposed land-based subductive waste disposal method disposes of nuclear waste in a subduction zone accessed from land and therefore is not prohibited by international agreement. This method has been described as the most viable means of disposing of radioactive waste, and as the state-of-the-art as of 2001 in nuclear waste disposal technology. Another approach termed Remix & Return would blend high-level waste with uranium mine and mill tailings down to the level of the original radioactivity of the uranium ore, then replace it in inactive uranium mines. This approach has the merits of providing jobs for miners who would double as disposal staff, and of facilitating a cradle-to-grave cycle for radioactive materials, but would be inappropriate for spent reactor fuel in the absence of reprocessing, due to the presence of highly toxic radioactive elements such as plutonium within it. Deep borehole disposal is the concept of disposing of high-level radioactive waste from nuclear reactors in extremely deep boreholes. Deep borehole disposal seeks to place the waste as much as several kilometres beneath the surface of the Earth and relies primarily on the immense natural geological barrier to confine the waste safely and permanently so that it should never pose a threat to the environment. The Earth's crust contains 120 trillion tons of thorium and 40 trillion tons of uranium (primarily at relatively trace concentrations of parts per million each, adding up over the crust's 3 × 10^19 ton mass), among other natural radioisotopes. Since the fraction of nuclides decaying per unit of time is inversely proportional to an isotope's half-life, the relative radioactivity of the lesser amount of human-produced radioisotopes (thousands of tons instead of trillions of tons) would diminish once the isotopes with far shorter half-lives than the bulk of natural radioisotopes decayed. In January 2013, Cumbria county council rejected UK central government proposals to start work on an underground storage dump for nuclear waste near the Lake District National Park.
"For any host community, there will be a substantial community benefits package and worth hundreds of millions of pounds" said Ed Davey, Energy Secretary, but nonetheless, the local elected body voted 7–3 against research continuing, after hearing evidence from independent geologists that "the fractured strata of the county was impossible to entrust with such dangerous material and a hazard lasting millennia." Horizontal drillhole disposal describes proposals to drill over one km vertically, and two km horizontally in the earth's crust, for the purpose of disposing of high-level waste forms such as spent nuclear fuel, Caesium-137, or Strontium-90. After the emplacement and the retrievability period, drillholes would be backfilled and sealed. A series of tests of the technology were carried out in November 2018 and then again publicly in January 2019 by a U.S. based private company. The test demonstrated the emplacement of a test-canister in a horizontal drillhole and retrieval of the same canister. There was no actual high-level waste used in the test. The European Commission Joint Research Centre report of 2021 (see above) concluded: Ocean floor disposal From 1946 through 1993, thirteen countries used ocean disposal or ocean dumping as a method to dispose of nuclear/radioactive waste with an approximation of 200,000 tons sourcing mainly from the medical, research and nuclear industry. Ocean floor disposal of radioactive waste has been suggested by the finding that deep waters in the North Atlantic Ocean do not present an exchange with shallow waters for about 140 years based on oxygen content data recorded over a period of 25 years. They include burial beneath a stable abyssal plain, burial in a subduction zone that would slowly carry the waste downward into the Earth's mantle, and burial beneath a remote natural or human-made island. While these approaches all have merit and would facilitate an international solution to the problem of disposal of radioactive waste, they would require an amendment of the Law of the Sea. Nuclear submarines have been lost and these vessels reactors must also be counted in the amount of radioactive waste deposited at sea. Article 1 (Definitions), 7., of the 1996 Protocol to the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter, (the London Dumping Convention) states: ""Sea" means all marine waters other than the internal waters of States, as well as the seabed and the subsoil thereof; it does not include sub-seabed repositories accessed only from land." Transmutation There have been proposals for reactors that consume nuclear waste and transmute it to other, less-harmful or shorter-lived, nuclear waste. In particular, the integral fast reactor was a proposed nuclear reactor with a nuclear fuel cycle that produced no transuranic waste and, in fact, could consume transuranic waste. It proceeded as far as large-scale tests but was eventually canceled by the U.S. Government. Another approach, considered safer but requiring more development, is to dedicate subcritical reactors to the transmutation of the left-over transuranic elements. An isotope that is found in nuclear waste and that represents a concern in terms of proliferation is Pu-239. The large stock of plutonium is a result of its production inside uranium-fueled reactors and of the reprocessing of weapons-grade plutonium during the weapons program. An option for getting rid of this plutonium is to use it as a fuel in a traditional light-water reactors (LWR). 
Several fuel types with differing plutonium destruction efficiencies are under study. Transmutation was banned in the United States in April 1977 by U. S. President Carter due to the danger of plutonium proliferation, but President Reagan rescinded the ban in 1981. Due to economic losses and risks, the construction of reprocessing plants during this time did not resume. Due to high energy demand, work on the method has continued in the European Union (EU). This has resulted in a practical nuclear research reactor called Myrrha in which transmutation is possible. Additionally, a new research program called ACTINET has been started in the EU to make transmutation possible on an industrial scale. According to U. S. President Bush's Global Nuclear Energy Partnership (GNEP) of 2007, the United States is actively promoting research on transmutation technologies needed to markedly reduce the problem of nuclear waste treatment. There have also been theoretical studies involving the use of fusion reactors as so-called "actinide burners" where a fusion reactor plasma such as in a tokamak, could be "doped" with a small amount of the "minor" transuranic atoms which would be transmuted (meaning fissioned in the actinide case) to lighter elements upon their successive bombardment by the very high energy neutrons produced by the fusion of deuterium and tritium in the reactor. A study at MIT found that only 2 or 3 fusion reactors with parameters similar to that of the International Thermonuclear Experimental Reactor (ITER) could transmute the entire annual minor actinide production from all of the light-water reactors presently operating in the United States fleet while simultaneously generating approximately 1 gigawatt of power from each reactor. 2018 Nobel Prize for Physics-winner Gérard Mourou has proposed using chirped pulse amplification to generate high-energy and low-duration laser pulses either to accelerate deuterons into a tritium target causing fusion events yielding fast neutrons, or accelerating protons for neutron spallation, with either method intended for transmutation of nuclear waste. Re-use Spent nuclear fuel contains abundant fertile uranium and traces of fissile materials. Methods such as the PUREX process can be used to remove useful actinides for the production of active nuclear fuel. Another option is to find applications for the isotopes in nuclear waste so as to re-use them. Already, caesium-137, strontium-90 and a few other isotopes are extracted for certain industrial applications such as food irradiation and radioisotope thermoelectric generators. While re-use does not eliminate the need to manage radioisotopes, it can reduce the quantity of waste produced. The Nuclear Assisted Hydrocarbon Production Method, Canadian patent application 2,659,302, is a method for the temporary or permanent storage of nuclear waste materials comprising the placing of waste materials into one or more repositories or boreholes constructed into an unconventional oil formation. The thermal flux of the waste materials fractures the formation and alters the chemical and/or physical properties of hydrocarbon material within the subterranean formation to allow removal of the altered material. A mixture of hydrocarbons, hydrogen, and/or other formation fluids is produced from the formation. The radioactivity of high-level radioactive waste affords proliferation resistance to plutonium placed in the periphery of the repository or the deepest portion of a borehole. 
Breeder reactors can run on U-238 and transuranic elements, which comprise the majority of spent fuel radioactivity in the 1,000–100,000-year time span. Space disposal Space disposal is attractive because it removes nuclear waste from the planet. It has significant disadvantages, such as the potential for catastrophic failure of a launch vehicle, which could spread radioactive material into the atmosphere and around the world. A large number of launches would be required because no individual rocket would be able to carry very much of the material relative to the total amount that needs to be disposed of. This makes the proposal economically impractical and increases the risk of one or more launch failures. To further complicate matters, international agreements on the regulation of such a program would need to be established. The cost and inadequate reliability of modern rocket launch systems for space disposal have been among the motives for interest in non-rocket spacelaunch systems such as mass drivers, space elevators, and other proposals. National management plans Sweden and Finland are furthest along in committing to a particular disposal technology, while many others reprocess spent fuel or contract with France or Great Britain to do it, taking back the resulting plutonium and high-level waste. "An increasing backlog of plutonium from reprocessing is developing in many countries... It is doubtful that reprocessing makes economic sense in the present environment of cheap uranium." In many European countries (e.g., Britain, Finland, the Netherlands, Sweden, and Switzerland) the risk or dose limit for a member of the public exposed to radiation from a future high-level nuclear waste facility is considerably more stringent than that suggested by the International Commission on Radiological Protection or proposed in the United States. European limits are often more stringent than the standard suggested in 1990 by the International Commission on Radiological Protection by a factor of 20, and more stringent by a factor of ten than the standard proposed by the U.S. Environmental Protection Agency (EPA) for the Yucca Mountain nuclear waste repository for the first 10,000 years after closure. The U.S. EPA's proposed standard for greater than 10,000 years is 250 times more permissive than the European limit. The U.S. EPA proposed a legal limit of a maximum of 3.5 millisieverts (350 millirem) per year to local individuals after 10,000 years, which would be up to several percent of the exposure currently received by some populations in the highest natural background regions on Earth, though the United States Department of Energy (DOE) predicted that the received dose would be much below that limit. Over a timeframe of thousands of years, after the most active short-lived radioisotopes have decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2000 feet of rock and soil in the United States (10 million km2) by approximately 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, but the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average. Mongolia After serious opposition to plans and negotiations among Mongolia, Japan, and the United States to build nuclear-waste facilities in Mongolia, Mongolia stopped all negotiations in September 2011. These negotiations had started after U.S. Deputy Secretary of Energy Daniel Poneman visited Mongolia in September 2010. 
Talks took place in Washington, D.C., between officials of Japan, the United States, and Mongolia in February 2011. After this, the United Arab Emirates (UAE), which wanted to buy nuclear fuel from Mongolia, joined in the negotiations. The talks were kept secret and, although the Mainichi Daily News reported on them in May, Mongolia officially denied the existence of these negotiations. Alarmed by this news, Mongolian citizens protested against the plans and demanded the government withdraw the plans and disclose information. The Mongolian President Tsakhiagiin Elbegdorj issued a presidential order on September 13 banning all negotiations with foreign governments or international organizations on nuclear-waste storage plans in Mongolia. The Mongolian government has accused the newspaper of distributing false claims around the world. After the presidential order, the Mongolian president fired the individual who was supposedly involved in these conversations. Illegal dumping Authorities in Italy are investigating a 'Ndrangheta mafia clan accused of trafficking and illegally dumping nuclear waste. According to a whistleblower, a manager of the Italian state energy research agency Enea paid the clan to get rid of 600 drums of toxic and radioactive waste from Italy, Switzerland, France, Germany, and the United States, with Somalia as the destination, where the waste was buried after local politicians were bought off. Former employees of Enea are suspected of paying the criminals to take waste off their hands in the 1980s and 1990s. Shipments to Somalia continued into the 1990s, while the 'Ndrangheta clan also blew up shiploads of waste, including radioactive hospital waste, sending them to the sea bed off the Calabrian coast. According to the environmental group Legambiente, former members of the 'Ndrangheta have said that they had been paid to sink ships with radioactive material over the previous 20 years. In 2008, Afghan authorities accused Pakistan of illegally dumping nuclear waste in the southern parts of Afghanistan when the Taliban were in power between 1996 and 2001. The Pakistani government denied the allegation. Accidents A few incidents have occurred when radioactive material was disposed of improperly, when shielding during transport was defective, or when it was simply abandoned or even stolen from a waste store. In the Soviet Union, waste stored in Lake Karachay was blown over the area during a dust storm after the lake had partly dried out. In Italy, several radioactive waste deposits let material flow into river water, thus contaminating water for domestic use. In France, in the summer of 2008, numerous incidents occurred: in one, at the Areva plant in Tricastin, it was reported that, during a draining operation, liquid containing untreated uranium overflowed out of a faulty tank and about 75 kg of the radioactive material seeped into the ground and, from there, into two rivers nearby; in another case, over 100 staff were contaminated with low doses of radiation. There are ongoing concerns about the deterioration of the nuclear waste site on the Enewetak Atoll of the Marshall Islands and a potential radioactive spill. Scavenging of abandoned radioactive material has been the cause of several other cases of radiation exposure, mostly in developing nations, which may have less regulation of dangerous substances (and sometimes less general education about radioactivity and its hazards) and a market for scavenged goods and scrap metal. 
The scavengers and those who buy the material are almost always unaware that the material is radioactive; it is selected for its aesthetics or scrap value. Irresponsibility on the part of the radioactive material's owners, usually a hospital, university, or the military, and the absence of regulation concerning radioactive waste, or a lack of enforcement of such regulations, have been significant factors in radiation exposures. For an example of an accident involving radioactive scrap originating from a hospital, see the Goiânia accident. Transportation accidents involving spent nuclear fuel from power plants are unlikely to have serious consequences due to the strength of the spent nuclear fuel shipping casks. On 15 December 2011, Japanese government spokesman Osamu Fujimura admitted that nuclear substances had been found in the waste of Japanese nuclear facilities. Although Japan committed itself in 1977 to inspections under its safeguards agreement with the IAEA, the reports were kept secret from the inspectors of the International Atomic Energy Agency. Japan did start discussions with the IAEA about the large quantities of enriched uranium and plutonium that were discovered in nuclear waste cleared away by Japanese nuclear operators. At the press conference, Fujimura said: "Based on investigations so far, most nuclear substances have been properly managed as waste, and from that perspective, there is no problem in safety management," but according to him, the matter was at that moment still being investigated. Associated hazard warning signs See also Ducrete Environmental remediation Human Interference Task Force List of global issues Lists of nuclear disasters and radioactive incidents Material unaccounted for Mixed waste (radioactive/hazardous) Microbial corrosion Nuclear decommissioning Personal protective equipment Radiation protection Radioactive contamination Radioactive scrap metal Radioecology Toxic waste Waste management UraMin References Cited sources External links Alsos Digital Library – Radioactive Waste (annotated bibliography) Euridice European Interest Group in charge of Hades URL operation (link) Ondraf/Niras, the waste management authority in Belgium (link) Critical Hour: Three Mile Island, The Nuclear Legacy, And National Security (PDF) Environmental Protection Agency – Yucca Mountain (documents) Grist.org – How to tell future generations about nuclear waste (article) International Atomic Energy Agency – Internet Directory of Nuclear Resources (links) Nuclear Files.org – Yucca Mountain (documents) Nuclear Regulatory Commission – Radioactive Waste (documents) Nuclear Regulatory Commission – Spent Fuel Heat Generation Calculation (guide) Radwaste Solutions (magazine) UNEP Earthwatch – Radioactive Waste (documents and links) World Nuclear Association – Radioactive Waste (briefing papers) Worries can't be buried as nuclear waste piles up, Los Angeles Times, January 21, 2008 Radioactivity Environmental impact of nuclear power
Radioactive waste
[ "Physics", "Chemistry", "Technology" ]
12,266
[ "Hazardous waste", "Environmental impact of nuclear power", "Radioactivity", "Nuclear physics", "Radioactive waste" ]
37,411
https://en.wikipedia.org/wiki/Alkaline%20earth%20metal
The alkaline earth metals are six chemical elements in group 2 of the periodic table. They are beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba), and radium (Ra). The elements have very similar properties: they are all shiny, silvery-white, somewhat reactive metals at standard temperature and pressure. Together with helium, these elements have in common an outer s orbital which is full—that is, this orbital contains its full complement of two electrons, which the alkaline earth metals readily lose to form cations with charge +2, and an oxidation state of +2. Helium is grouped with the noble gases and not with the alkaline earth metals, but it is theorized to have some similarities to beryllium when forced into bonding and has sometimes been suggested to belong to group 2. All the discovered alkaline earth metals occur in nature, although radium occurs only through the decay chain of uranium and thorium and not as a primordial element. There have been experiments, all unsuccessful, to synthesize element 120, the next potential member of the group. Characteristics Chemical As with other groups, the members of this family show patterns in their electronic configuration, especially the outermost shells, resulting in trends in chemical behavior: beryllium [He] 2s2, magnesium [Ne] 3s2, calcium [Ar] 4s2, strontium [Kr] 5s2, barium [Xe] 6s2, and radium [Rn] 7s2. Most of the chemistry has been observed only for the first five members of the group. The chemistry of radium is not well-established due to its radioactivity; thus, the presentation of its properties here is limited. The alkaline earth metals are all silver-colored and soft, and have relatively low densities, melting points, and boiling points. In chemical terms, all of the alkaline earth metals react with the halogens to form the alkaline earth metal halides, all of which are ionic crystalline compounds (except for beryllium chloride, beryllium bromide and beryllium iodide, which are covalent). All the alkaline earth metals except beryllium also react with water to form strongly alkaline hydroxides and, thus, should be handled with great care. The heavier alkaline earth metals react more vigorously than the lighter ones. The alkaline earth metals have the second-lowest first ionization energies in their respective periods of the periodic table because of their somewhat low effective nuclear charges and the ability to attain a full outer shell configuration by losing just two electrons. The second ionization energy of all of the alkaline earth metals is also somewhat low. Beryllium is an exception: it does not react with water or steam unless at very high temperatures, and its halides are covalent. If beryllium did form compounds with an ionization state of +2, it would polarize electron clouds that are near it very strongly and would cause extensive orbital overlap, since beryllium has a high charge density. All compounds that include beryllium have a covalent bond. Even the compound beryllium fluoride, which is the most ionic beryllium compound, has a low melting point and a low electrical conductivity when melted. All the alkaline earth metals have two electrons in their valence shell, so the energetically preferred state of achieving a filled electron shell is to lose two electrons to form doubly charged positive ions. 
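The ionization-energy argument above can be put in rough numbers. The sketch below uses approximate literature values in kJ/mol; they are supplied here for illustration and are not taken from this article.

```python
# Approximate first and second ionization energies in kJ/mol (rounded literature
# values; treat them as illustrative assumptions rather than article data).
ionization_energies = {
    # element: (IE1, IE2)
    "Be": (899, 1757),
    "Mg": (738, 1451),
    "Ca": (590, 1145),
    "Sr": (549, 1064),
    "Ba": (503, 965),
    "Ra": (509, 979),
}

for element, (ie1, ie2) in ionization_energies.items():
    # Total energy cost of removing both valence s electrons to give the M2+ cation.
    print(f"{element}: IE1 = {ie1}, IE2 = {ie2}, M -> M2+ costs {ie1 + ie2} kJ/mol")
# Both steps stay comparatively cheap and generally fall going down the group,
# which is why the +2 oxidation state dominates the group's chemistry.
```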
Compounds and reactions The alkaline earth metals all react with the halogens to form ionic halides, such as calcium chloride (CaCl2), as well as reacting with oxygen to form oxides such as strontium oxide (SrO). Calcium, strontium, and barium react with water to produce hydrogen gas and their respective hydroxides (magnesium also reacts, but much more slowly), and also undergo transmetalation reactions to exchange ligands.
Solubility-related constants for alkaline-earth-metal fluorides:
Metal   M2+ hydration (−MJ/mol)   "MF2" unit hydration (−MJ/mol)   MF2 lattice (−MJ/mol)   Solubility (mol/kL)
Be      2.455                     3.371                            3.526                   soluble
Mg      1.922                     2.838                            2.978                   1.2
Ca      1.577                     2.493                            2.651                   0.2
Sr      1.415                     2.331                            2.513                   0.8
Ba      1.361                     2.277                            2.373                   6
Physical and atomic Nuclear stability Isotopes of all six alkaline earth metals are present in the Earth's crust and the solar system at varying concentrations, dependent upon the nuclides' half-lives and, hence, their nuclear stabilities. The first five have one, three, five, four, and six stable (or observationally stable) isotopes respectively, for a total of 19 stable nuclides, as listed here: beryllium-9; magnesium-24, -25, -26; calcium-40, -42, -43, -44, -46; strontium-84, -86, -87, -88; barium-132, -134, -135, -136, -137, -138. Four of the isotopes in this list are predicted by radionuclide decay energetics to be only observationally stable and to decay with extremely long half-lives through double-beta decay, though no decays attributed definitively to these isotopes have yet been observed as of 2024. Radium has no stable or primordial isotopes. In addition to the stable species, calcium and barium each have one extremely long-lived and primordial radionuclide: calcium-48 and barium-130, with half-lives of roughly 6.5 × 10^19 and 1.6 × 10^21 years, respectively. Both are far longer than the current age of the universe (about 4.7 billion and 117 billion times longer, respectively), and less than one part per ten billion of either has decayed since the formation of the Earth. The two isotopes are stable for practical purposes. Apart from the 21 stable or nearly-stable isotopes, the six alkaline earth elements each possess a large number of known radioisotopes. None of the isotopes other than the aforementioned 21 are primordial: all have half-lives too short for even a single atom to have survived since the solar system's formation, after the seeding of heavy nuclei by nearby supernovae and collisions between neutron stars, and any present are derived from ongoing natural processes. Beryllium-7, beryllium-10, and calcium-41 are trace, as well as cosmogenic, nuclides, formed by the impact of cosmic rays with atmospheric or crustal atoms. The longest half-lives among them are 1.387 million years for beryllium-10, 99.4 thousand years for calcium-41, 1599 years for radium-226 (radium's longest-lived isotope), 28.90 years for strontium-90, 10.51 years for barium-133, and 5.75 years for radium-228. All others have half-lives of less than half a year, most significantly shorter. Calcium-48 and barium-130, the two primordial and non-stable isotopes, decay only through double beta emission and have extremely long half-lives, by virtue of the extremely low probability of both beta decays occurring at the same time. All isotopes of radium are highly radioactive and are primarily generated through the decay of heavier radionuclides. 
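As a rough check on the decay figures quoted above for calcium-48 and barium-130, the fraction of each that has decayed over the Earth's lifetime can be recomputed from the stated age-of-the-universe multiples. The universe age (about 13.8 billion years) and Earth age (about 4.5 billion years) used below are standard estimates assumed for this sketch, not values given in the article.

```python
# Hedged sketch: half-lives are back-derived from the multiples quoted above
# (4.7 and 117 billion times an assumed universe age of 13.8 billion years);
# the Earth's age is assumed to be 4.5 billion years. All values are approximate.
UNIVERSE_AGE_YR = 13.8e9
EARTH_AGE_YR = 4.5e9

half_lives = {
    "calcium-48": 4.7e9 * UNIVERSE_AGE_YR,   # roughly 6.5e19 years
    "barium-130": 117e9 * UNIVERSE_AGE_YR,   # roughly 1.6e21 years
}

for nuclide, t_half in half_lives.items():
    decayed_fraction = 1.0 - 2.0 ** (-EARTH_AGE_YR / t_half)
    print(f"{nuclide}: half-life ~ {t_half:.1e} yr, "
          f"fraction decayed since Earth formed ~ {decayed_fraction:.1e}")
# Both fractions come out below 1e-10, i.e. less than one part per ten billion.
```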
The longest-lived of them is radium-226, a member of the decay chain of uranium-238. Strontium-90 and barium-140 are common fission products of uranium in nuclear reactors, accounting for 5.73% and 6.31% of uranium-235's fission products respectively when bombarded by thermal neutrons. The two isotopes have half-lives each of 28.90 years and 12.7 days. Strontium-90 is produced in appreciable quantities in operating nuclear reactors running on uranium-235 or plutonium-239 fuel, and a minuscule secular equilibrium concentration is also present due to rare spontaneous fission decays in naturally occurring uranium. Calcium-48 is the lightest nuclide known to undergo double beta decay. Naturally occurring calcium and barium are very weakly radioactive: calcium contains about 0.1874% calcium-48, and barium contains about 0.1062% barium-130. On average, one double-beta decay of calcium-48 will occur per second for every 90 tons of natural calcium, or 230 tons of limestone (calcium carbonate). Through the same decay mechanism, one decay of barium-130 will occur per second for every 16,000 tons of natural barium, or 27,000 tons of baryte (barium sulfate). The longest lived isotope of radium is radium-226 with a half-life of 1600 years; it along with radium-223, -224, and -228 occur naturally in the decay chains of primordial thorium and uranium. Beryllium-8 is notable by its absence as it splits in half virtually instantaneously into two alpha particles whenever it is formed. The triple alpha process in stars can only occur at energies high enough for beryllium-8 to fuse with a third alpha particle before it can decay, forming carbon-12. This thermonuclear rate-limiting bottleneck is the reason most main sequence stars spend billions of years fusing hydrogen within their cores, and only rarely manage to fuse carbon before collapsing into a stellar remnant, and even then merely for a timescale of ~1000 years. The radioisotopes of alkaline earth metals tend to be "bone seekers" as they behave chemically similar to calcium, an integral component of hydroxyapatite in compact bone, and gradually accumulate in the human skeleton. The incorporated radionuclides inflict significant damage to the bone marrow over time through the emission of ionizing radiation, primarily alpha particles. This property is made use of in a positive manner in the radiotherapy of certain bone cancers, since the radionuclides' chemical properties causes them to preferentially target cancerous growths in bone matter, leaving the rest of the body relatively unharmed. Compared to their neighbors in the periodic table, alkaline earth metals tend to have a larger number of stable isotopes as they all possess an even number of protons, owing to their status as group 2 elements. Their isotopes are generally more stable due to nucleon pairing. This stability is further enhanced if the isotope also has an even number of neutrons, as both kinds of nucleons can then participate in pairing and contribute to nuclei stability. History Etymology The alkaline earth metals are named after their oxides, the alkaline earths, whose old-fashioned names were beryllia, magnesia, lime, strontia, and baria. These oxides are basic (alkaline) when combined with water. "Earth" was a term applied by early chemists to nonmetallic substances that are insoluble in water and resistant to heating—properties shared by these oxides. The realization that these earths were not elements but compounds is attributed to the chemist Antoine Lavoisier. 
In his Traité Élémentaire de Chimie (Elements of Chemistry) of 1789 he called them salt-forming earth elements. Later, he suggested that the alkaline earths might be metal oxides, but admitted that this was mere conjecture. In 1808, acting on Lavoisier's idea, Humphry Davy became the first to obtain samples of the metals by electrolysis of their molten earths, thus supporting Lavoisier's hypothesis and causing the group to be named the alkaline earth metals. Discovery The calcium compounds calcite and lime have been known and used since prehistoric times. The same is true for the beryllium compounds beryl and emerald. The other compounds of the alkaline earth metals were discovered starting in the early 15th century. The magnesium compound magnesium sulfate was first discovered in 1618 by a farmer at Epsom in England. Strontium carbonate was discovered in minerals in the Scottish village of Strontian in 1790. The last element is the least abundant: radioactive radium, which was extracted from uraninite in 1898. All elements except beryllium were isolated by electrolysis of molten compounds. Magnesium, calcium, and strontium were first produced by Humphry Davy in 1808, whereas beryllium was independently isolated by Friedrich Wöhler and Antoine Bussy in 1828 by reacting beryllium compounds with potassium. In 1910, radium was isolated as a pure metal by Curie and André-Louis Debierne also by electrolysis. Beryllium Beryl, a mineral that contains beryllium, has been known since the time of the Ptolemaic Kingdom in Egypt. Although it was originally thought that beryl was an aluminum silicate, beryl was later found to contain a then-unknown element when, in 1797, Louis-Nicolas Vauquelin dissolved aluminum hydroxide from beryl in an alkali. In 1828, Friedrich Wöhler and Antoine Bussy independently isolated this new element, beryllium, by the same method, which involved a reaction of beryllium chloride with metallic potassium; this reaction was not able to produce large ingots of beryllium. It was not until 1898, when Paul Lebeau performed an electrolysis of a mixture of beryllium fluoride and sodium fluoride, that large pure samples of beryllium were produced. Magnesium Magnesium was first produced by Humphry Davy in England in 1808 using electrolysis of a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was magnium, but the name magnesium is now used. Calcium Lime has been used as a material for building since 7000 to 14,000 BCE, and kilns used for lime have been dated to 2,500 BCE in Khafaja, Mesopotamia. Calcium as a material has been known since at least the first century, as the ancient Romans were known to have used calcium oxide by preparing it from lime. Calcium sulfate has been known to be able to set broken bones since the tenth century. Calcium itself, however, was not isolated until 1808, when Humphry Davy, in England, used electrolysis on a mixture of lime and mercuric oxide, after hearing that Jöns Jakob Berzelius had prepared a calcium amalgam from the electrolysis of lime in mercury. Strontium In 1790, physician Adair Crawford discovered ores with distinctive properties, which were named strontites in 1793 by Thomas Charles Hope, a chemistry professor at the University of Glasgow, who confirmed Crawford's discovery. Strontium was eventually isolated in 1808 by Humphry Davy by electrolysis of a mixture of strontium chloride and mercuric oxide. 
The discovery was announced by Davy on 30 June 1808 at a lecture to the Royal Society. Barium Barite, a mineral containing barium, was first recognized as containing a new element in 1774 by Carl Scheele, although he was able to isolate only barium oxide. Barium oxide was isolated again two years later by Johan Gottlieb Gahn. Later in the 18th century, William Withering noticed a heavy mineral in the Cumberland lead mines, which are now known to contain barium. Barium itself was finally isolated in 1808 when Humphry Davy used electrolysis with molten salts, and Davy named the element barium, after baryta. Later, Robert Bunsen and Augustus Matthiessen isolated pure barium by electrolysis of a mixture of barium chloride and ammonium chloride. Radium While studying uraninite, on 21 December 1898, Marie and Pierre Curie discovered that, even after uranium had decayed, the material created was still radioactive. The material behaved somewhat similarly to barium compounds, although some properties, such as the color of the flame test and spectral lines, were much different. They announced the discovery of a new element on 26 December 1898 to the French Academy of Sciences. Radium was named in 1899 from the word radius, meaning ray, as radium emitted power in the form of rays. Occurrence Beryllium occurs in the Earth's crust at a concentration of two to six parts per million (ppm), much of which is in soils, where it has a concentration of six ppm. Beryllium is one of the rarest elements in seawater, even rarer than elements such as scandium, with a concentration of 0.2 parts per trillion. However, in freshwater, beryllium is somewhat more common, with a concentration of 0.1 parts per billion. Magnesium and calcium are very common in the Earth's crust, being respectively the fifth and eighth most abundant elements. None of the alkaline earth metals are found in their elemental state. Common magnesium-containing minerals are carnallite, magnesite, and dolomite. Common calcium-containing minerals are chalk, limestone, gypsum, and anhydrite. Strontium is the 15th most abundant element in the Earth's crust. The principal minerals are celestite and strontianite. Barium is slightly less common, much of it in the mineral barite. Radium, being a decay product of uranium, is found in all uranium-bearing ores. Due to its relatively short half-life, radium from the Earth's early history has decayed, and present-day samples have all come from the much slower decay of uranium. Production Most beryllium is extracted from beryllium hydroxide. One production method is sintering, done by mixing beryl, sodium fluorosilicate, and soda at high temperatures to form sodium fluoroberyllate, aluminum oxide, and silicon dioxide. A solution of sodium fluoroberyllate and sodium hydroxide in water is then used to form beryllium hydroxide by precipitation. Alternatively, in the melt method, powdered beryl is heated to high temperature, cooled with water, then heated again slightly in sulfuric acid, eventually yielding beryllium hydroxide. The beryllium hydroxide from either method then produces beryllium fluoride and beryllium chloride through a somewhat long process. Electrolysis or heating of these compounds can then produce beryllium. In general, strontium carbonate is extracted from the mineral celestite through two methods: by leaching the celestite with sodium carbonate, or in a more complicated way involving coal. 
To produce barium, barite (impure barium sulfate) is converted to barium sulfide by carbothermic reduction (such as with coke). The sulfide is water-soluble and easily reacted to form pure barium sulfate, used for commercial pigments, or other compounds, such as barium nitrate. These in turn are calcined into barium oxide, which eventually yields pure barium after reduction with aluminum. The most important supplier of barium is China, which produces more than 50% of world supply. Applications Beryllium is used mainly in military applications, but non-military uses exist. In electronics, beryllium is used as a p-type dopant in some semiconductors, and beryllium oxide is used as a high-strength electrical insulator and heat conductor. Beryllium alloys are used for mechanical parts when stiffness, light weight, and dimensional stability are required over a wide temperature range. Beryllium-9 is used in small-scale neutron sources that use the reaction , the reaction used by James Chadwick when he discovered the neutron. Its low atomic weight and low neutron absorption cross-section would make beryllium suitable as a neutron moderator, but its high price and the readily available alternatives such as water, heavy water and nuclear graphite have limited this to niche applications. In the FLiBe eutectic used in molten salt reactors, beryllium's role as a moderator is more incidental than the desired property leading to its use. Magnesium has many uses. It offers advantages over other structural materials such as aluminum, but magnesium's usage is hindered by its flammability. Magnesium is often alloyed with aluminum, zinc and manganese to increase its strength and corrosion resistance. Magnesium has many other industrial applications, such as its role in the production of iron and steel, and in the Kroll process for production of titanium. Calcium is used as a reducing agent in the separation of other metals such as uranium from ore. It is a major component of many alloys, especially aluminum and copper alloys, and is also used to deoxidize alloys. Calcium has roles in the making of cheese, mortars, and cement. Strontium and barium have fewer applications than the lighter alkaline earth metals. Strontium carbonate is used in the manufacturing of red fireworks. Pure strontium is used in the study of neurotransmitter release in neurons. Radioactive strontium-90 finds some use in RTGs, which utilize its decay heat. Barium is used in vacuum tubes as a getter to remove gases. Barium sulfate has many uses in the petroleum industry, and other industries. Radium has many former applications based on its radioactivity, but its use is no longer common because of the adverse health effects and long half-life. Radium was frequently used in luminous paints, although this use was stopped after it sickened workers. The nuclear quackery that alleged health benefits of radium formerly led to its addition to drinking water, toothpaste, and many other products. Radium is no longer used even when its radioactive properties are desired because its long half-life makes safe disposal challenging. For example, in brachytherapy, shorter-lived alternatives such as iridium-192 are usually used instead. Representative reactions of alkaline earth metals Reaction with halogens Ca + Cl2 → CaCl2 Anhydrous calcium chloride is a hygroscopic substance that is used as a desiccant. Exposed to air, it will absorb water vapour from the air, forming a solution. This property is known as deliquescence. 
Reaction with oxygen Ca + 1/2O2 → CaO Mg + 1/2O2 → MgO Reaction with sulfur Ca + 1/8S8 → CaS Reaction with carbon With carbon, they form acetylides directly. Beryllium forms carbide. 2Be + C → Be2C CaO + 3C → CaC2 + CO (at 2500 °C in furnace) CaC2 + 2H2O → Ca(OH)2 + C2H2 Mg2C3 + 4H2O → 2Mg(OH)2 + C3H4 Reaction with nitrogen Only Be and Mg form nitrides directly. 3Be + N2 → Be3N2 3Mg + N2 → Mg3N2 Reaction with hydrogen Alkaline earth metals react with hydrogen to generate saline hydride that are unstable in water. Ca + H2 → CaH2 Reaction with water Ca, Sr, and Ba readily react with water to form hydroxide and hydrogen gas. Be and Mg are passivated by an impervious layer of oxide. However, amalgamated magnesium will react with water vapor. Mg + H2O → MgO + H2 Reaction with acidic oxides Alkaline earth metals reduce the nonmetal from its oxide. 2Mg + SiO2 → 2MgO + Si 2Mg + CO2 → 2MgO + C (in solid carbon dioxide) Reaction with acids Mg + 2HCl → MgCl2 + H2 Be + 2HCl → BeCl2 + H2 Reaction with bases Be exhibits amphoteric properties. It dissolves in concentrated sodium hydroxide. Be + NaOH + 2H2O → Na[Be(OH)3] + H2 Reaction with alkyl halides Magnesium reacts with alkyl halides via an insertion reaction to generate Grignard reagents. RX + Mg → RMgX (in anhydrous ether) Identification of alkaline earth cations The flame test The table below presents the colors observed when the flame of a Bunsen burner is exposed to salts of alkaline earth metals. Be and Mg do not impart colour to the flame due to their small size. In solution Mg2+ Disodium phosphate is a very selective reagent for magnesium ions and, in the presence of ammonium salts and ammonia, forms a white precipitate of ammonium magnesium phosphate. Mg2+ + NH3 + Na2HPO4 → (NH4)MgPO4 + 2Na+ Ca2+ Ca2+ forms a white precipitate with ammonium oxalate. Calcium oxalate is insoluble in water, but is soluble in mineral acids. Ca2+ + (COO)2(NH4)2 → (COO)2Ca + NH4+ Sr2+ Strontium ions precipitate with soluble sulfate salts. Sr2+ + Na2SO4 → SrSO4 + 2Na+ All ions of alkaline earth metals form white precipitate with ammonium carbonate in the presence of ammonium chloride and ammonia. Compounds of alkaline earth metals Oxides The alkaline earth metal oxides are formed from the thermal decomposition of the corresponding carbonates. CaCO3 → CaO + CO2 (at approx. 900°C) In laboratory, they are obtained from hydroxides: Mg(OH)2 → MgO + H2O or nitrates: Ca(NO3)2 → CaO + 2NO2 + 1/2O2 The oxides exhibit basic character: they turn phenolphthalein red and litmus, blue. They react with water to form hydroxides in an exothermic reaction. CaO + H2O → Ca(OH)2 + Q Calcium oxide reacts with carbon to form acetylide. CaO + 3C → CaC2 + CO (at 2500°C) CaC2 + N2 → CaCN2 + C CaCN2 + H2SO4 → CaSO4 + H2N—CN H2N—CN + H2O → (H2N)2CO (urea) CaCN2 + 2H2O → CaCO3 + NH3 Hydroxides They are generated from the corresponding oxides on reaction with water. They exhibit basic character: they turn phenolphthalein pink and litmus, blue. Beryllium hydroxide is an exception as it exhibits amphoteric character. Be(OH)2 + 2HCl → BeCl2 + 2 H2O Be(OH)2 + NaOH → Na[Be(OH)3] Salts Ca and Mg are found in nature in many compounds such as dolomite, aragonite, magnesite (carbonate rocks). Calcium and magnesium ions are found in hard water. Hard water represents a multifold issue. It is of great interest to remove these ions, thus softening the water. This procedure can be done using reagents such as calcium hydroxide, sodium carbonate or sodium phosphate. 
A more common method is to use ion-exchange aluminosilicates or ion-exchange resins that trap Ca2+ and Mg2+ and liberate Na+ instead: Na2O·Al2O3·6SiO2 + Ca2+ → CaO·Al2O3·6SiO2 + 2Na+ Biological role and precautions Magnesium and calcium are ubiquitous and essential to all known living organisms. They are involved in more than one role, with, for example, magnesium or calcium ion pumps playing a role in some cellular processes, magnesium functioning as the active center in some enzymes, and calcium salts taking a structural role, most notably in bones. Strontium plays an important role in marine aquatic life, especially hard corals, which use strontium to build their exoskeletons. It and barium have some uses in medicine, for example "barium meals" in radiographic imaging, whilst strontium compounds are employed in some toothpastes. Excessive amounts of strontium-90 are toxic due to its radioactivity and strontium-90 mimics calcium (i.e. Behaves as a "bone seeker") where it bio-accumulates with a significant biological half life. While the bones themselves have higher radiation tolerance than other tissues, the rapidly dividing bone marrow does not and can thus be significantly harmed by Sr-90. The effect of ionizing radiation on bone marrow is also the reason why acute radiation syndrome can have anemia-like symptoms and why donation of red blood cells can increase survivability. Beryllium and radium, however, are toxic. Beryllium's low aqueous solubility means it is rarely available to biological systems; it has no known role in living organisms and, when encountered by them, is usually highly toxic. Radium has a low availability and is highly radioactive, making it toxic to life. Extensions The next alkaline earth metal after radium is thought to be element 120, although this may not be true due to relativistic effects. The synthesis of element 120 was first attempted in March 2007, when a team at the Flerov Laboratory of Nuclear Reactions in Dubna bombarded plutonium-244 with iron-58 ions; however, no atoms were produced, leading to a limit of 400 fb for the cross-section at the energy studied. In April 2007, a team at the GSI attempted to create element 120 by bombarding uranium-238 with nickel-64, although no atoms were detected, leading to a limit of 1.6 pb for the reaction. Synthesis was again attempted at higher sensitivities, although no atoms were detected. Other reactions have been tried, although all have been met with failure. The chemistry of element 120 is predicted to be closer to that of calcium or strontium instead of barium or radium. This noticeably contrasts with periodic trends, which would predict element 120 to be more reactive than barium and radium. This lowered reactivity is due to the expected energies of element 120's valence electrons, increasing element 120's ionization energy and decreasing the metallic and ionic radii. The next alkaline earth metal after element 120 has not been definitely predicted. Although a simple extrapolation using the Aufbau principle would suggest that element 170 is a congener of 120, relativistic effects may render such an extrapolation invalid. The next element with properties similar to the alkaline earth metals has been predicted to be element 166, though due to overlapping orbitals and lower energy gap below the 9s subshell, element 166 may instead be placed in group 12, below copernicium. 
See also Alkaline earth octacarbonyl complexes Explanatory notes References Bibliography Further reading Group 2 – Alkaline Earth Metals, Royal Chemistry Society. Hogan, C. Michael. 2010. "Calcium". A. Jorgensen, C. Cleveland, eds. Encyclopedia of Earth. National Council for Science and the Environment. Maguire, Michael E. "Alkaline Earth Metals". Chemistry: Foundations and Applications. Ed. J. J. Lagowski. Vol. 1. New York: Macmillan Reference USA, 2004. 33–34. 4 vols. Gale Virtual Reference Library. Thomson Gale. Petrucci R.H., Harwood W.S., and Herring F.G., General Chemistry (8th edition, Prentice-Hall, 2002) Silberberg, M.S., Chemistry: The Molecular Nature of Matter and Change (3rd edition, McGraw-Hill, 2009) Groups (periodic table) Periodic table
Alkaline earth metal
[ "Chemistry" ]
6,776
[ "Periodic table", "Groups (periodic table)" ]
37,427
https://en.wikipedia.org/wiki/Le%20Chatelier%27s%20principle
In chemistry, Le Chatelier's principle (pronounced or ) is a principle used to predict the effect of a change in conditions on chemical equilibrium. Other names include Chatelier's principle, Braun–Le Chatelier principle, Le Chatelier–Braun principle or the equilibrium law. The principle is named after French chemist Henry Louis Le Chatelier who enunciated the principle in 1884 by extending the reasoning from the Van 't Hoff relation of how temperature variations changes the equilibrium to the variations of pressure and what's now called chemical potential, and sometimes also credited to Karl Ferdinand Braun, who discovered it independently in 1887. It can be defined as: In scenarios outside thermodynamic equilibrium, there can arise phenomena in contradiction to an over-general statement of Le Chatelier's principle. Le Chatelier's principle is sometimes alluded to in discussions of topics other than thermodynamics. Thermodynamic statement Le Chatelier–Braun principle analyzes the qualitative behaviour of a thermodynamic system when a particular one of its externally controlled state variables, say changes by an amount the 'driving change', causing a change the 'response of prime interest', in its conjugate state variable all other externally controlled state variables remaining constant. The response illustrates 'moderation' in ways evident in two related thermodynamic equilibria. Obviously, one of has to be intensive, the other extensive. Also as a necessary part of the scenario, there is some particular auxiliary 'moderating' state variable , with its conjugate state variable For this to be of interest, the 'moderating' variable must undergo a change or in some part of the experimental protocol; this can be either by imposition of a change , or with the holding of constant, written For the principle to hold with full generality, must be extensive or intensive accordingly as is so. Obviously, to give this scenario physical meaning, the 'driving' variable and the 'moderating' variable must be subject to separate independent experimental controls and measurements. Explicit statement The principle can be stated in two ways, formally different, but substantially equivalent, and, in a sense, mutually 'reciprocal'. The two ways illustrate the Maxwell relations, and the stability of thermodynamic equilibrium according to the second law of thermodynamics, evident as the spread of energy amongst the state variables of the system in response to an imposed change. The two ways of statement differ in their experimental protocols. They share an index protocol (denoted that may be described as 'changed driver, moderation permitted'. Along with the driver change it imposes a constant with and allows the uncontrolled 'moderating' variable response along with the 'index' response of interest The two ways of statement differ in their respective compared protocols. One form of compared protocol posits 'changed driver, no moderation' (denoted The other form of compared protocol posits 'fixed driver, imposed moderation' (denoted ) Forced 'driver' change, free or fixed 'moderation' This way compares with to compare the effects of the imposed the change with and without moderation. The protocol prevents 'moderation' by enforcing that through an adjustment and it observes the 'no-moderation' response Provided that the observed response is indeed that then the principle states that . 
In other words, change in the 'moderating' state variable moderates the effect of the driving change in on the responding conjugate variable Forcedly changed or fixed 'driver', respectively free or forced 'moderation' This way also uses two experimental protocols, and , to compare the index effect with the effect of 'moderation' alone. The index protocol is executed first; the response of prime interest, is observed, and the response of the 'moderating' variable is also measured. With that knowledge, then the fixed driver, imposed moderation protocol enforces that with the driving variable held fixed; the protocol also, through an adjustment imposes a change (learnt from the just previous measurement) in the 'moderating' variable, and measures the change Provided that the 'moderated' response is indeed that then the principle states that the signs of and are opposite. Again, in other words, change in the 'moderating' state variable opposes the effect of the driving change in on the responding conjugate variable Other statements The duration of adjustment depends on the strength of the negative feedback to the initial shock. The principle is typically used to describe closed negative-feedback systems, but applies, in general, to thermodynamically closed and isolated systems in nature, since the second law of thermodynamics ensures that the disequilibrium caused by an instantaneous shock is eventually followed by a new equilibrium. While well rooted in chemical equilibrium, Le Chatelier's principle can also be used in describing mechanical systems in that a system put under stress will respond in such a way as to reduce or minimize that stress. Moreover, the response will generally be via the mechanism that most easily relieves that stress. Shear pins and other such sacrificial devices are design elements that protect systems against stress applied in undesired manners to relieve it so as to prevent more extensive damage to the entire system, a practical engineering application of Le Chatelier's principle. Chemistry Effect of change in concentration Changing the concentration of a chemical will shift the equilibrium to the side that would counter that change in concentration. The chemical system will attempt to partly oppose the change affected to the original state of equilibrium. In turn, the rate of reaction, extent, and yield of products will be altered corresponding to the impact on the system. This can be illustrated by the equilibrium of carbon monoxide and hydrogen gas, reacting to form methanol. CO + 2 H2 ⇌ CH3OH Suppose we were to increase the concentration of CO in the system. Using Le Chatelier's principle, we can predict that the concentration of methanol will increase, decreasing the total change in CO. If we are to add a species to the overall reaction, the reaction will favor the side opposing the addition of the species. Likewise, the subtraction of a species would cause the reaction to "fill the gap" and favor the side where the species was reduced. This observation is supported by the collision theory. As the concentration of CO is increased, the frequency of successful collisions of that reactant would increase also, allowing for an increase in forward reaction, and generation of the product. Even if the desired product is not thermodynamically favored, the end-product can be obtained if it is continuously removed from the solution. 
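A small numerical sketch of the shift just described for the carbon monoxide/hydrogen/methanol equilibrium: the concentrations and the equilibrium constant below are made-up illustrative values, not data from this article or from any real system.

```python
# Hypothetical equilibrium state for CO + 2 H2 <=> CH3OH (concentrations in mol/L).
co, h2, ch3oh = 0.50, 1.00, 0.25
K = ch3oh / (co * h2**2)          # equilibrium constant implied by the assumed state
print(f"K = {K:.2f}")

# Disturb the equilibrium by injecting more CO.
co += 0.50
Q = ch3oh / (co * h2**2)          # reaction quotient immediately after the addition
print(f"Q = {Q:.2f} after adding CO")

# Q < K, so the forward reaction (making more CH3OH while consuming CO and H2)
# proceeds until Q rises back to K -- the shift Le Chatelier's principle predicts.
print("forward shift" if Q < K else "reverse shift")
```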
The effect of a change in concentration is often exploited synthetically for condensation reactions (i.e., reactions that extrude water) that are equilibrium processes (e.g., formation of an ester from carboxylic acid and alcohol or an imine from an amine and aldehyde). This can be achieved by physically sequestering water, by adding desiccants like anhydrous magnesium sulfate or molecular sieves, or by continuous removal of water by distillation, often facilitated by a Dean-Stark apparatus. Effect of change in temperature The effect of changing the temperature in the equilibrium can be made clear by 1) incorporating heat as either a reactant or a product, and 2) assuming that an increase in temperature increases the heat content of a system. When the reaction is exothermic (ΔH is negative and energy is released), heat is included as a product, and when the reaction is endothermic (ΔH is positive and energy is consumed), heat is included as a reactant. Hence, whether increasing or decreasing the temperature would favor the forward or the reverse reaction can be determined by applying the same principle as with concentration changes. Take, for example, the reversible reaction of nitrogen gas with hydrogen gas to form ammonia: N2(g) + 3 H2(g) ⇌ 2 NH3(g)    ΔH = −92 kJ mol−1 Because this reaction is exothermic, it produces heat: N2(g) + 3 H2(g) ⇌ 2 NH3(g) + heat If the temperature were increased, the heat content of the system would increase, so the system would consume some of that heat by shifting the equilibrium to the left, thereby producing less ammonia. More ammonia would be produced if the reaction were run at a lower temperature, but a lower temperature also lowers the rate of the process, so, in practice (the Haber process) the temperature is set at a compromise value that allows ammonia to be made at a reasonable rate with an equilibrium concentration that is not too unfavorable. In exothermic reactions, an increase in temperature decreases the equilibrium constant, K, whereas in endothermic reactions, an increase in temperature increases K. Le Chatelier's principle applied to changes in concentration or pressure can be understood by giving K a constant value. The effect of temperature on equilibria, however, involves a change in the equilibrium constant. The dependence of K on temperature is determined by the sign of ΔH. The theoretical basis of this dependence is given by the Van 't Hoff equation. Effect of change in pressure The equilibrium concentrations of the products and reactants do not directly depend on the total pressure of the system. They may depend on the partial pressure of the products and reactants, but if the number of moles of gaseous reactants is equal to the number of moles of gaseous products, pressure has no effect on equilibrium. Changing total pressure by adding an inert gas at constant volume does not affect the equilibrium concentrations (see ). Changing total pressure by changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations (see below). Effect of change in volume Changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations. With a pressure increase due to a decrease in volume, the side of the equilibrium with fewer moles is more favorable and with a pressure decrease due to an increase in volume, the side with more moles is more favorable. 
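The mole-counting rule stated above can be made concrete with a short sketch: scaling every partial pressure by a common factor (for example, doubling them by halving the volume) multiplies the reaction quotient Qp by that factor raised to the power Δn, the change in the number of moles of gas. The hydrogen iodide reaction used for the Δn = 0 case is a standard textbook example, not one drawn from this article.

```python
# Sketch of how compressing or expanding a gas-phase equilibrium shifts it.
# Scaling every partial pressure by `factor` multiplies the reaction quotient Qp
# by factor**delta_n, where delta_n = moles of gaseous products
# minus moles of gaseous reactants.

def qp_change(delta_n: int, factor: float) -> float:
    """Multiplicative change in Qp when all partial pressures are scaled by `factor`."""
    return factor ** delta_n

# Halve the volume -> all partial pressures double (factor = 2).
# N2 + 3 H2 <=> 2 NH3: delta_n = 2 - 4 = -2
print(qp_change(-2, 2.0))  # 0.25: Qp falls below K, so the equilibrium shifts
                           # toward NH3, the side with fewer moles of gas.

# H2 + I2 <=> 2 HI (assumed textbook example): delta_n = 2 - 2 = 0
print(qp_change(0, 2.0))   # 1.0: Qp is unchanged, so there is no shift.
```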
There is no effect on a reaction where the number of moles of gas is the same on each side of the chemical equation. Consider the reaction of nitrogen gas with hydrogen gas to form ammonia: N2(g) + 3 H2(g) ⇌ 2 NH3(g)    ΔH = −92 kJ mol−1 Note that there are four moles of gas on the left-hand side and two moles of gas on the right-hand side. When the volume of the system is changed, the partial pressures of the gases change. If we were to decrease pressure by increasing volume, the equilibrium of the above reaction would shift to the left, because the reactant side has a greater number of moles than does the product side. The system tries to counteract the decrease in partial pressure of gas molecules by shifting to the side that exerts greater pressure. Similarly, if we were to increase pressure by decreasing volume, the equilibrium shifts to the right, counteracting the pressure increase by shifting to the side with fewer moles of gas, which exerts less pressure. If the volume is increased, then because there are more moles of gas on the reactant side, this change is felt more strongly in the denominator of the equilibrium constant expression, causing a shift in equilibrium. Effect of adding an inert gas An inert gas (or noble gas), such as helium, is one that does not react with other elements or compounds. Adding an inert gas into a gas-phase equilibrium at constant volume does not result in a shift. This is because the addition of a non-reactive gas does not change the equilibrium equation, as the inert gas appears on both sides of the chemical reaction equation. For example, if A and B react to form C and D, but X does not participate in the reaction: aA + bB + xX ⇌ cC + dD + xX. While it is true that the total pressure of the system increases, the total pressure does not have any effect on the equilibrium constant; rather, it is a change in partial pressures that will cause a shift in the equilibrium. If, however, the volume is allowed to increase in the process, the partial pressures of all gases would be decreased, resulting in a shift towards the side with the greater number of moles of gas. The shift will never occur toward the side with fewer moles of gas. It is also known as Le Chatelier's postulate. Effect of a catalyst A catalyst increases the rate of a reaction without being consumed in the reaction. The use of a catalyst does not affect the position and composition of the equilibrium of a reaction, because both the forward and backward reactions are sped up by the same factor. For example, consider the Haber process for the synthesis of ammonia (NH3): N2 + 3 H2 ⇌ 2 NH3 In the above reaction, iron (Fe) and molybdenum (Mo) will function as catalysts if present. They will accelerate any reactions, but they do not affect the state of the equilibrium. General statements Thermodynamic equilibrium processes Le Chatelier's principle refers to states of thermodynamic equilibrium. The latter are stable against perturbations that satisfy certain criteria; this is essential to the definition of thermodynamic equilibrium. An alternative statement is that changes in the temperature, pressure, volume, or concentration of a system will result in predictable and opposing changes in the system in order to achieve a new equilibrium state. 
For this, a state of thermodynamic equilibrium is most conveniently described through a fundamental relation that specifies a cardinal function of state, of the energy kind, or of the entropy kind, as a function of state variables chosen to fit the thermodynamic operations through which a perturbation is to be applied. In theory and, nearly, in some practical scenarios, a body can be in a stationary state with zero macroscopic flows and rates of chemical reaction (for example, when no suitable catalyst is present), yet not in thermodynamic equilibrium, because it is metastable or unstable; then Le Chatelier's principle does not necessarily apply. Non-equilibrium processes A simple body or a complex thermodynamic system can also be in a stationary state with non-zero rates of flow and chemical reaction; sometimes the word "equilibrium" is used in reference to such a state, though by definition it is not a thermodynamic equilibrium state. Sometimes, it is proposed to consider Le Chatelier's principle for such states. For this exercise, rates of flow and of chemical reaction must be considered. Such rates are not supplied by equilibrium thermodynamics. For such states, there are no simple statements that echo Le Chatelier's principle. Prigogine and Defay demonstrate that such a scenario may exhibit moderation, or may exhibit a measured amount of anti-moderation, though not a run-away anti-moderation that goes to completion. The example analysed by Prigogine and Defay is the Haber process. This situation is clarified by considering two basic methods of analysis of a process. One is the classical approach of Gibbs, the other uses the near- or local equilibrium approach of De Donder. The Gibbs approach requires thermodynamic equilibrium. The Gibbs approach is reliable within its proper scope, thermodynamic equilibrium, though of course it does not cover non-equilibrium scenarios. The De Donder approach can cover equilibrium scenarios, but also covers non-equilibrium scenarios in which there is only local thermodynamic equilibrium, and not thermodynamic equilibrium proper. The De Donder approach allows state variables called extents of reaction to be independent variables, though in the Gibbs approach, such variables are not independent. Thermodynamic non-equilibrium scenarios can contradict an over-general statement of Le Chatelier's Principle. Related system concepts It is common to treat the principle as a more general observation of systems, such as or, "roughly stated": The concept of systemic maintenance of a stable steady state despite perturbations has a variety of names, and has been studied in a variety of contexts, chiefly in the natural sciences. In chemistry, the principle is used to manipulate the outcomes of reversible reactions, often to increase their yield. In pharmacology, the binding of ligands to receptors may shift the equilibrium according to Le Chatelier's principle, thereby explaining the diverse phenomena of receptor activation and desensitization. In biology, the concept of homeostasis is different from Le Chatelier's principle, in that homoeostasis is generally maintained by processes of active character, as distinct from the passive or dissipative character of the processes described by Le Chatelier's principle in thermodynamics. In economics, even further from thermodynamics, allusion to the principle is sometimes regarded as helping explain the price equilibrium of efficient economic systems. 
In some dynamic systems, the end-state cannot be determined from the shock or perturbation. Economics In economics, a similar concept also named after Le Chatelier was introduced by American economist Paul Samuelson in 1947. There the generalized Le Chatelier principle is for a maximum condition of economic equilibrium: Where all unknowns of a function are independently variable, auxiliary constraints—"just-binding" in leaving initial equilibrium unchanged—reduce the response to a parameter change. Thus, factor-demand and commodity-supply elasticities are hypothesized to be lower in the short run than in the long run because of the fixed-cost constraint in the short run. Since the change of the value of an objective function in a neighbourhood of the maximum position is described by the envelope theorem, Le Chatelier's principle can be shown to be a corollary thereof. See also Homeostasis Common-ion effect Response reactions References Bibliography of cited sources Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, . Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, translated by D.H. Everett, Longmans, Green & Co, London. External links YouTube video of Le Chatelier's principle and pressure Equilibrium chemistry Homeostasis
Le Chatelier's principle
[ "Chemistry", "Biology" ]
3,952
[ "Equilibrium chemistry", "Homeostasis" ]
37,461
https://en.wikipedia.org/wiki/State%20of%20matter
In physics, a state of matter is one of the distinct forms in which matter can exist. Four states of matter are observable in everyday life: solid, liquid, gas, and plasma. Many intermediate states are known to exist, such as liquid crystal, and some states only exist under extreme conditions, such as Bose–Einstein condensates and Fermionic condensates (in extreme cold), neutron-degenerate matter (in extreme density), and quark–gluon plasma (at extremely high energy). Historically, the distinction is based on qualitative differences in properties. Matter in the solid state maintains a fixed volume (assuming no change in temperature or air pressure) and shape, with component particles (atoms, molecules or ions) close together and fixed into place. Matter in the liquid state maintains a fixed volume (assuming no change in temperature or air pressure), but has a variable shape that adapts to fit its container. Its particles are still close together but move freely. Matter in the gaseous state has both variable volume and shape, adapting both to fit its container. Its particles are neither close together nor fixed in place. Matter in the plasma state has variable volume and shape, and contains neutral atoms as well as a significant number of ions and electrons, both of which can move around freely. The term phase is sometimes used as a synonym for state of matter, but it is possible for a single compound to form different phases that are in the same state of matter. For example, ice is the solid state of water, but there are multiple phases of ice with different crystal structures, which are formed at different pressures and temperatures. Four classical states Solid In a solid, constituent particles (ions, atoms, or molecules) are closely packed together. The forces between particles are so strong that the particles cannot move freely but can only vibrate. As a result, a solid has a stable, definite shape, and a definite volume. Solids can only change their shape by an outside force, as when broken or cut. In crystalline solids, the particles (atoms, molecules, or ions) are packed in a regularly ordered, repeating pattern. There are various different crystal structures, and the same substance can have more than one structure (or solid phase). For example, iron has a body-centred cubic structure at temperatures below , and a face-centred cubic structure between 912 and . Ice has fifteen known crystal structures, or fifteen solid phases, which exist at various temperatures and pressures. Glasses and other non-crystalline, amorphous solids without long-range order are not thermal equilibrium ground states; therefore they are described below as nonclassical states of matter. Solids can be transformed into liquids by melting, and liquids can be transformed into solids by freezing. Solids can also change directly into gases through the process of sublimation, and gases can likewise change directly into solids through deposition. Liquid A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a (nearly) constant volume independent of pressure. The volume is definite if the temperature and pressure are constant. When a solid is heated above its melting point, it becomes liquid, given that the pressure is higher than the triple point of the substance. Intermolecular (or interatomic or interionic) forces are still important, but the molecules have enough energy to move relative to each other and the structure is mobile. 
This means that the shape of a liquid is not definite but is determined by its container. The volume is usually greater than that of the corresponding solid, the best-known exception being water, H2O. The highest temperature at which a given liquid can exist is its critical temperature. Gas A gas is a compressible fluid. Not only will a gas conform to the shape of its container but it will also expand to fill the container. In a gas, the molecules have enough kinetic energy so that the effect of intermolecular forces is small (or zero for an ideal gas), and the typical distance between neighboring molecules is much greater than the molecular size. A gas has no definite shape or volume, but occupies the entire container in which it is confined. A liquid may be converted to a gas by heating at constant pressure to the boiling point, or else by reducing the pressure at constant temperature. At temperatures below its critical temperature, a gas is also called a vapor, and can be liquefied by compression alone without cooling. A vapor can exist in equilibrium with a liquid (or solid), in which case the gas pressure equals the vapor pressure of the liquid (or solid). A supercritical fluid (SCF) is a gas whose temperature and pressure are above the critical temperature and critical pressure respectively. In this state, the distinction between liquid and gas disappears. A supercritical fluid has the physical properties of a gas, but its high density confers solvent properties in some cases, which leads to useful applications. For example, supercritical carbon dioxide is used to extract caffeine in the manufacture of decaffeinated coffee. Plasma A gas is usually converted to a plasma in one of two ways, either from a huge voltage difference between two points, or by exposing it to extremely high temperatures. Heating matter to high temperatures causes electrons to leave the atoms, resulting in the presence of free electrons. This creates a so-called partially ionised plasma. At very high temperatures, such as those present in stars, it is assumed that essentially all electrons are "free", and that a very high-energy plasma is essentially bare nuclei swimming in a sea of electrons. This forms the so-called fully ionised plasma. The plasma state is often misunderstood, and although not freely existing under normal conditions on Earth, it is quite commonly generated by lightning, electric sparks, fluorescent lights, neon lights, or plasma televisions. The Sun's corona, some types of flame, and stars are all examples of illuminated matter in the plasma state. Plasma is by far the most abundant of the four fundamental states, since 99% of all ordinary matter in the universe is plasma, composing all stars. Phase transitions A state of matter is also characterized by phase transitions. A phase transition indicates a change in structure and can be recognized by an abrupt change in properties. A distinct state of matter can be defined as any set of states distinguished from any other set of states by a phase transition. Water can be said to have several distinct solid states. The appearance of superconductivity is associated with a phase transition, so there are superconductive states. Likewise, ferromagnetic states are demarcated by phase transitions and have distinctive properties. When the change of state occurs in stages, the intermediate steps are called mesophases. Such phases have been exploited by the introduction of liquid crystal technology. 
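The progression of a substance through the classical states as temperature rises, described in the passage that follows, can be summarised in a few lines of code. The Python sketch below is illustrative only and is not part of the article; the function name is arbitrary, the water figures (melting at 0 °C, boiling at 100 °C) assume a pressure of 1 atm, and the model deliberately ignores pressure dependence, sublimation, supercritical behaviour, and ionisation to plasma.

```python
# Minimal sketch: classify the classical state of a substance at a fixed pressure
# from its melting and boiling points. Real phase behaviour also depends on
# pressure, as the surrounding text on phase transitions explains.
def classical_state(temp_c: float, melting_c: float, boiling_c: float) -> str:
    """Return 'solid', 'liquid', or 'gas' for a temperature in degrees Celsius."""
    if temp_c < melting_c:
        return "solid"
    if temp_c < boiling_c:
        return "liquid"
    return "gas"

# Water at 1 atm: melting point 0 C, boiling point 100 C.
for t in (-40, 25, 150):
    print(t, classical_state(t, melting_c=0.0, boiling_c=100.0))
# prints: -40 solid, 25 liquid, 150 gas
```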
The state or phase of a given set of matter can change depending on pressure and temperature conditions, transitioning to other phases as these conditions change to favor their existence; for example, solid transitions to liquid with an increase in temperature. Near absolute zero, a substance exists as a solid. As heat is added to this substance, it melts into a liquid at its melting point, boils into a gas at its boiling point, and if heated high enough would enter a plasma state in which the electrons are so energized that they leave their parent atoms. Forms of matter that are not composed of molecules and are organized by different forces can also be considered different states of matter. Superfluids (like Fermionic condensate) and the quark–gluon plasma are examples. In a chemical equation, the state of matter of the chemicals may be shown as (s) for solid, (l) for liquid, and (g) for gas. An aqueous solution is denoted (aq). Matter in the plasma state is seldom used (if at all) in chemical equations, so there is no standard symbol to denote it. In the rare equations in which plasma is used, it is symbolized as (p). Non-classical states Glass Glass is a non-crystalline or amorphous solid material that exhibits a glass transition when heated towards the liquid state. Glasses can be made of quite different classes of materials: inorganic networks (such as window glass, made of silicate plus additives), metallic alloys, ionic melts, aqueous solutions, molecular liquids, and polymers. Thermodynamically, a glass is in a metastable state with respect to its crystalline counterpart. The conversion rate, however, is practically zero. Crystals with some degree of disorder A plastic crystal is a molecular solid with long-range positional order but with constituent molecules retaining rotational freedom; in an orientational glass this degree of freedom is frozen in a quenched disordered state. Similarly, in a spin glass magnetic disorder is frozen. Liquid crystal states Liquid crystal states have properties intermediate between mobile liquids and ordered solids. Generally, they are able to flow like a liquid while exhibiting long-range order. For example, the nematic phase consists of long rod-like molecules such as para-azoxyanisole, which is nematic over a limited temperature range. In this state the molecules flow as in a liquid, but they all point in the same direction (within each domain) and cannot rotate freely. Like a crystalline solid, but unlike a liquid, liquid crystals react to polarized light. Other types of liquid crystals are described in the main article on these states. Several types have technological importance, for example, in liquid crystal displays. Microphase separation Copolymers can undergo microphase separation to form a diverse array of periodic nanostructures, as in the styrene-butadiene-styrene block copolymer. Microphase separation can be understood by analogy to the phase separation between oil and water. Due to chemical incompatibility between the blocks, block copolymers undergo a similar phase separation. However, because the blocks are covalently bonded to each other, they cannot demix macroscopically as water and oil can, and so instead the blocks form nanometre-sized structures. Depending on the relative lengths of each block and the overall block topology of the polymer, many morphologies can be obtained, each of which is its own phase of matter. Ionic liquids also display microphase separation. 
The anion and cation are not necessarily compatible and would demix otherwise, but electric charge attraction prevents them from separating. Their anions and cations appear to diffuse within compartmentalized layers or micelles instead of freely as in a uniform liquid. Magnetically ordered states Transition metal atoms often have magnetic moments due to the net spin of electrons that remain unpaired and do not form chemical bonds. In some solids the magnetic moments on different atoms are ordered and can form a ferromagnet, an antiferromagnet or a ferrimagnet. In a ferromagnet—for instance, solid iron—the magnetic moment on each atom is aligned in the same direction (within a magnetic domain). If the domains are also aligned, the solid is a permanent magnet, which is magnetic even in the absence of an external magnetic field. The magnetization disappears when the magnet is heated to the Curie point, which for iron is 770 °C. An antiferromagnet has two networks of equal and opposite magnetic moments, which cancel each other out so that the net magnetization is zero. For example, in nickel(II) oxide (NiO), half the nickel atoms have moments aligned in one direction and half in the opposite direction. In a ferrimagnet, the two networks of magnetic moments are opposite but unequal, so that cancellation is incomplete and there is a non-zero net magnetization. An example is magnetite (Fe3O4), which contains Fe2+ and Fe3+ ions with different magnetic moments. A quantum spin liquid (QSL) is a disordered state in a system of interacting quantum spins which preserves its disorder to very low temperatures, unlike other disordered states. It is not a liquid in a physical sense, but a solid whose magnetic order is inherently disordered. The name "liquid" is due to an analogy with the molecular disorder in a conventional liquid. A QSL is neither a ferromagnet, where magnetic domains are parallel, nor an antiferromagnet, where the magnetic domains are antiparallel; instead, the magnetic domains are randomly oriented. This can be realized e.g. by geometrically frustrated magnetic moments that cannot point uniformly parallel or antiparallel. When cooling down and settling to a state, the domain must "choose" an orientation, but if the possible states are similar in energy, one will be chosen randomly. Consequently, despite strong short-range order, there is no long-range magnetic order. Superfluids and condensates Superconductor Superconductors are materials which have zero electrical resistivity, and therefore perfect conductivity. This is a distinct physical state which exists at low temperature, and the resistivity increases discontinuously to a finite value at a sharply defined transition temperature for each superconductor. A superconductor also excludes all magnetic fields from its interior, a phenomenon known as the Meissner effect or perfect diamagnetism. Superconducting magnets are used as electromagnets in magnetic resonance imaging machines. The phenomenon of superconductivity was discovered in 1911, and for 75 years was only known in some metals and metallic alloys at temperatures below 30 K. In 1986 so-called high-temperature superconductivity was discovered in certain ceramic oxides, and has now been observed at temperatures as high as 164 K. Superfluid Close to absolute zero, some liquids form a second liquid state described as superfluid because it has zero viscosity (or infinite fluidity; i.e., flowing without friction). 
This was discovered in 1937 for helium, which forms a superfluid below the lambda temperature of 2.17 K. In this state it will attempt to "climb" out of its container. It also has infinite thermal conductivity so that no temperature gradient can form in a superfluid. Placing a superfluid in a spinning container will result in quantized vortices. These properties are explained by the theory that the common isotope helium-4 forms a Bose–Einstein condensate (see next section) in the superfluid state. More recently, fermionic condensate superfluids have been formed at even lower temperatures by the rare isotope helium-3 and by lithium-6. Bose–Einstein condensate In 1924, Albert Einstein and Satyendra Nath Bose predicted the "Bose–Einstein condensate" (BEC), sometimes referred to as the fifth state of matter. In a BEC, matter stops behaving as independent particles, and collapses into a single quantum state that can be described with a single, uniform wavefunction. In the gas phase, the Bose–Einstein condensate remained an unverified theoretical prediction for many years. In 1995, the research groups of Eric Cornell and Carl Wieman, of JILA at the University of Colorado at Boulder, produced the first such condensate experimentally. A Bose–Einstein condensate is "colder" than a solid. It may occur when atoms have very similar (or the same) quantum levels, at temperatures very close to absolute zero (−273.15 °C). Fermionic condensate A fermionic condensate is similar to the Bose–Einstein condensate but composed of fermions. The Pauli exclusion principle prevents fermions from entering the same quantum state, but a pair of fermions can behave as a boson, and multiple such pairs can then enter the same quantum state without restriction. High-energy states Degenerate matter Under extremely high pressure, as in the cores of dead stars, ordinary matter undergoes a transition to a series of exotic states of matter collectively known as degenerate matter, which are supported mainly by quantum mechanical effects. In physics, "degenerate" refers to two states that have the same energy and are thus interchangeable. Degenerate matter is supported by the Pauli exclusion principle, which prevents two fermionic particles from occupying the same quantum state. Unlike regular plasma, degenerate plasma expands little when heated, because there are simply no momentum states left. Consequently, degenerate stars collapse into very high densities. More massive degenerate stars are smaller, because the gravitational force increases, but pressure does not increase proportionally. Electron-degenerate matter is found inside white dwarf stars. Electrons remain bound to atoms but are able to transfer to adjacent atoms. Neutron-degenerate matter is found in neutron stars. Vast gravitational pressure compresses atoms so strongly that the electrons are forced to combine with protons via inverse beta-decay, resulting in a superdense conglomeration of neutrons. Normally free neutrons outside an atomic nucleus will decay with a half-life of approximately 10 minutes, but in a neutron star, the decay is overtaken by inverse decay. Cold degenerate matter is also present in planets such as Jupiter and in the even more massive brown dwarfs, which are expected to have a core with metallic hydrogen. Because of the degeneracy, more massive brown dwarfs are not significantly larger. In metals, the electrons can be modeled as a degenerate gas moving in a lattice of non-degenerate positive ions. 
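The free-electron picture mentioned in the last sentence can be made quantitative with the standard Fermi-energy formula E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m_e). The short Python sketch below is illustrative and not taken from the article; the conduction-electron density used for copper (about 8.5 x 10^28 per cubic metre) is an assumed textbook value.

```python
# Illustrative sketch: free-electron Fermi energy of a degenerate electron gas,
# E_F = hbar^2 * (3 * pi^2 * n)^(2/3) / (2 * m_e), evaluated for a typical metal.
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J s
M_E = 9.109_383_7015e-31   # electron mass, kg
EV = 1.602_176_634e-19     # joules per electronvolt

def fermi_energy_ev(n_per_m3: float) -> float:
    """Fermi energy in eV for a free-electron gas of number density n (per m^3)."""
    e_f = HBAR**2 * (3 * math.pi**2 * n_per_m3) ** (2 / 3) / (2 * M_E)
    return e_f / EV

# Assumed conduction-electron density of copper, roughly 8.5e28 per m^3.
print(f"E_F ~ {fermi_energy_ev(8.5e28):.1f} eV")  # about 7 eV, i.e. a Fermi temperature near 80,000 K
```

The large Fermi energy compared with thermal energies at room temperature is why the electron gas in a metal behaves as degenerate, consistent with the description above.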
Quark matter In regular cold matter, quarks, fundamental particles of nuclear matter, are confined by the strong force into hadrons that consist of 2–4 quarks, such as protons and neutrons. Quark matter or quantum chromodynamical (QCD) matter is a group of phases where the strong force is overcome and quarks are deconfined and free to move. Quark matter phases occur at extremely high densities or temperatures, and there are no known ways to produce them in equilibrium in the laboratory; in ordinary conditions, any quark matter formed immediately undergoes radioactive decay. Strange matter is a type of quark matter that is suspected to exist inside some neutron stars close to the Tolman–Oppenheimer–Volkoff limit (approximately 2–3 solar masses), although there is no direct evidence of its existence. In strange matter, part of the energy available manifests as strange quarks, a heavier analogue of the common down quark. It may be stable at lower energy states once formed, although this is not known. Quark–gluon plasma is a very high-temperature phase in which quarks become free and able to move independently, rather than being perpetually bound into particles, in a sea of gluons, subatomic particles that transmit the strong force that binds quarks together. This is analogous to the liberation of electrons from atoms in a plasma. This state is briefly attainable in extremely high-energy heavy ion collisions in particle accelerators, and allows scientists to observe the properties of individual quarks. Theories predicting the existence of quark–gluon plasma were developed in the late 1970s and early 1980s, and it was detected for the first time in the laboratory at CERN in the year 2000. Unlike plasma, which flows like a gas, interactions within QGP are strong and it flows like a liquid. At high densities but relatively low temperatures, quarks are theorized to form a quark liquid whose nature is presently unknown. It forms a distinct color-flavor locked (CFL) phase at even higher densities. This phase is superconductive for color charge. These phases may occur in neutron stars but they are presently theoretical. Color-glass condensate Color-glass condensate is a type of matter theorized to exist in atomic nuclei traveling near the speed of light. According to Einstein's theory of relativity, a high-energy nucleus appears length contracted, or compressed, along its direction of motion. As a result, the gluons inside the nucleus appear to a stationary observer as a "gluonic wall" traveling near the speed of light. At very high energies, the density of the gluons in this wall is seen to increase greatly. Unlike the quark–gluon plasma produced in the collision of such walls, the color-glass condensate describes the walls themselves, and is an intrinsic property of the particles that can only be observed under high-energy conditions such as those at RHIC and possibly at the Large Hadron Collider as well. Very high energy states Various theories predict new states of matter at very high energies. An unknown state has created the baryon asymmetry in the universe, but little is known about it. In string theory, a Hagedorn temperature is predicted for superstrings at about 10^30 K, where superstrings are copiously produced. At the Planck temperature (10^32 K), gravity becomes a significant force between individual particles. No current theory can describe these states and they cannot be produced with any foreseeable experiment. 
However, these states are important in cosmology because the universe may have passed through these states in the Big Bang. Other proposed states Supersolid A supersolid is a spatially ordered material (that is, a solid or crystal) with superfluid properties. Similar to a superfluid, a supersolid is able to move without friction but retains a rigid shape. Although a supersolid is a solid, it exhibits so many characteristic properties different from other solids that many argue it is another state of matter. String-net liquid In a string-net liquid, atoms have apparently unstable arrangement, like a liquid, but are still consistent in overall pattern, like a solid. When in a normal solid state, the atoms of matter align themselves in a grid pattern, so that the spin of any electron is the opposite of the spin of all electrons touching it. But in a string-net liquid, atoms are arranged in some pattern that requires some electrons to have neighbors with the same spin. This gives rise to curious properties, as well as supporting some unusual proposals about the fundamental conditions of the universe itself. Superglass A superglass is a phase of matter characterized, at the same time, by superfluidity and a frozen amorphous structure. Chain-melted state Metals, like potassium, in the chain-melted state appear to be in the liquid and solid state at the same time. This is a result of being subjected to high temperature and pressure, leading to the chains in the potassium to dissolve into liquid while the crystals remain solid. Quantum Hall state A quantum Hall state gives rise to quantized Hall voltage measured in the direction perpendicular to the current flow. A quantum spin Hall state is a theoretical phase that may pave the way for the development of electronic devices that dissipate less energy and generate less heat. This is a derivation of the Quantum Hall state of matter. Photonic matter Photonic matter is a phenomenon where photons interacting with a gas develop apparent mass, and can interact with each other, even forming photonic "molecules". The source of mass is the gas, which is massive. This is in contrast to photons moving in empty space, which have no rest mass, and cannot interact. See also Hidden states of matter Classical element Condensed matter physics Cooling curve Supercooling Superheating List of states of matter Notes and references External links 2005-06-22, MIT News: MIT physicists create new form of matter Citat: "... They have become the first to create a new type of matter, a gas of atoms that shows high-temperature superfluidity." 2003-10-10, Science Daily: Metallic Phase For Bosons Implies New State Of Matter 2004-01-15, ScienceDaily: Probable Discovery Of A New, Supersolid, Phase Of Matter Citat: "...We apparently have observed, for the first time, a solid material with the characteristics of a superfluid...but because all its particles are in the identical quantum state, it remains a solid even though its component particles are continually flowing..." 2004-01-29, ScienceDaily: NIST/University Of Colorado Scientists Create New Form Of Matter: A Fermionic Condensate Short videos demonstrating of States of Matter, solids, liquids and gases by Prof. J M Murrell, University of Sussex Condensed matter physics Engineering thermodynamics
State of matter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,048
[ "Engineering thermodynamics", "Phases of matter", "Materials science", "Thermodynamics", "Condensed matter physics", "Mechanical engineering", "Matter" ]
37,515
https://en.wikipedia.org/wiki/Shaped%20charge
A shaped charge is an explosive charge shaped to focus the effect of the explosive's energy. Different types of shaped charges are used for various purposes such as cutting and forming metal, initiating nuclear weapons, penetrating armor, or perforating wells in the oil and gas industry. A typical modern shaped charge, with a metal liner on the charge cavity, can penetrate armor steel to a depth of seven or more times the diameter of the charge (charge diameters, CD), though depths of 10 CD and above have been achieved. Contrary to a misconception, possibly resulting from the acronym for high-explosive anti-tank, HEAT, the shaped charge does not depend in any way on heating or melting for its effectiveness; that is, the jet from a shaped charge does not melt its way through armor, as its effect is purely kinetic in nature – however the process creates significant heat and often has a significant secondary incendiary effect after penetration. Munroe effect The Munroe or Neumann effect is the focusing of blast energy by a hollow or void cut on a surface of an explosive. The earliest mention of hollow charges were mentioned in 1792. Franz Xaver von Baader (1765–1841) was a German mining engineer at that time; in a mining journal, he advocated a conical space at the forward end of a blasting charge to increase the explosive's effect and thereby save powder. The idea was adopted, for a time, in Norway and in the mines of the Harz mountains of Germany, although the only available explosive at the time was gunpowder, which is not a high explosive and hence incapable of producing the shock wave that the shaped-charge effect requires. The first true hollow charge effect was achieved in 1883, by Max von Foerster (1845–1905), chief of the nitrocellulose factory of Wolff & Co. in Walsrode, Germany. By 1886, Gustav Bloem of Düsseldorf, Germany, had filed for hemispherical cavity metal detonators to concentrate the effect of the explosion in an axial direction. The Munroe effect is named after Charles E. Munroe, who discovered it in 1888. As a civilian chemist working at the U.S. Naval Torpedo Station at Newport, Rhode Island, he noticed that when a block of explosive guncotton with the manufacturer's name stamped into it was detonated next to a metal plate, the lettering was cut into the plate. Conversely, if letters were raised in relief above the surface of the explosive, then the letters on the plate would also be raised above its surface. In 1894, Munroe constructed his first crude shaped charge: Among the experiments made ... was one upon a safe twenty-nine inches cube, with walls four inches and three quarters thick, made up of plates of iron and steel ... When a hollow charge of dynamite nine pounds and a half in weight and untamped was detonated on it, a hole three inches in diameter was blown clear through the wall ... The hollow cartridge was made by tying the sticks of dynamite around a tin can, the open mouth of the latter being placed downward. Although Munroe's experiment with the shaped charge was widely publicized in 1900 in Popular Science Monthly, the importance of the tin can "liner" of the hollow charge remained unrecognized for another 44 years. Part of that 1900 article was reprinted in the February 1945 issue of Popular Science, describing how shaped-charge warheads worked. It was this article that at last revealed to the general public how the United States Army bazooka actually worked against armored vehicles during WWII. 
In 1910, Egon Neumann of Germany discovered that a block of TNT, which would normally dent a steel plate, punched a hole through it if the explosive had a conical indentation. The military usefulness of Munroe's and Neumann's work was unappreciated for a long time. Between the world wars, academics in several countries Myron Yakovlevich Sukharevskii (Мирон Яковлевич Сухаревский) in the Soviet Union, William H. Payment and Donald Whitley Woodhead in Britain, and Robert Williams Wood in the U.S. recognized that projectiles could form during explosions. In 1932 Franz Rudolf Thomanek, a student of physics at Vienna's Technische Hochschule, conceived an anti-tank round that was based on the hollow charge effect. When the Austrian government showed no interest in pursuing the idea, Thomanek moved to Berlin's Technische Hochschule, where he continued his studies under the ballistics expert Carl Julius Cranz. There in 1935, he and Hellmuth von Huttern developed a prototype anti-tank round. Although the weapon's performance proved disappointing, Thomanek continued his developmental work, collaborating with Hubert Schardin at the Waffeninstitut der Luftwaffe (Air Force Weapons Institute) in Braunschweig. By 1937, Schardin believed that hollow-charge effects were due to the interactions of shock waves. It was during the testing of this idea that, on February 4, 1938, Thomanek conceived the shaped-charge explosive (or Hohlladungs-Auskleidungseffekt (hollow-charge liner effect)). (It was Gustav Adolf Thomer who in 1938 first visualized, by flash radiography, the metallic jet produced by a shaped-charge explosion.) Meanwhile, Henry Hans Mohaupt, a chemical engineer in Switzerland, had independently developed a shaped-charge munition in 1935, which was demonstrated to the Swiss, French, British, and U.S. militaries. During World War II, shaped-charge munitions were developed by Germany (Panzerschreck, Panzerfaust, Panzerwurfmine, Mistel), Britain (No. 68 AT grenade, PIAT, Beehive cratering charge), the Soviet Union (RPG-43, RPG-6), the U.S. (M9 rifle grenade, bazooka), and Italy (Effetto Pronto Speciale shells for various artillery pieces). The development of shaped charges revolutionized anti-tank warfare. Tanks faced a serious vulnerability from a weapon that could be carried by an infantryman or aircraft. One of the earliest uses of shaped charges was by German glider-borne troops against the Belgian Fort Eben-Emael in 1940. These demolition charges – developed by Dr. Wuelfken of the German Ordnance Office – were unlined explosive charges and did not produce a metal jet like the modern HEAT warheads. Due to the lack of metal liner they shook the turrets but they did not destroy them, and other airborne troops were forced to climb on the turrets and smash the gun barrels. Applications Modern military The common term in military terminology for shaped-charge warheads is high-explosive anti-tank (HEAT) warhead. HEAT warheads are frequently used in anti-tank guided missiles, unguided rockets, gun-fired projectiles (both spun (spin stabilized) and unspun), rifle grenades, land mines, bomblets, torpedoes, and various other weapons. Protection During World War II, the precision of the charge's construction and its detonation mode were both inferior to modern warheads. This lower precision caused the jet to curve and to break up at an earlier time and hence at a shorter distance. 
The resulting dispersion decreased the penetration depth for a given cone diameter and also shortened the optimum standoff distance. Since the charges were less effective at larger standoffs, side and turret skirts (known as Schürzen) fitted to some German tanks to protect against ordinary anti-tank rifles were fortuitously found to give the jet room to disperse and hence also reduce HEAT penetration. The use of add-on spaced armor skirts on armored vehicles may have the opposite effect and actually increase the penetration of some shaped-charge warheads. Due to constraints in the length of the projectile/missile, the built-in stand-off on many warheads is less than the optimum distance. In such cases, the skirting effectively increases the distance between the armor and the target, and the warhead detonates closer to its optimum standoff. Skirting should not be confused with cage armor which is primarily used to damage the fusing system of RPG-7 projectiles, but can also cause a HEAT projectile to pitch up or down on impact, lengthening the penetration path for the shaped charge's penetration stream. If the nose probe strikes one of the cage armor slats, the warhead will function as normal. Non-military In non-military applications shaped charges are used in explosive demolition of buildings and structures, in particular for cutting through metal piles, columns and beams and for boring holes. In steelmaking, small shaped charges are often used to pierce taps that have become plugged with slag. They are also used in quarrying, breaking up ice, breaking log jams, felling trees, and drilling post holes. Shaped charges are used most extensively in the petroleum and natural gas industries, in particular in the completion of oil and gas wells, in which they are detonated to perforate the metal casing of the well at intervals to admit the influx of oil and gas. Another use in the industry is to put out oil and gas fires by depriving the fire of oxygen. A shaped charge was used on the Hayabusa2 mission on asteroid 162173 Ryugu. The spacecraft dropped the explosive device onto the asteroid and detonated it with the spacecraft behind cover. The detonation dug a crater about 10 meters wide, to provide access to a pristine sample of the asteroid. Function A typical device consists of a solid cylinder of explosive with a metal-lined conical hollow in one end and a central detonator, array of detonators, or detonation wave guide at the other end. Explosive energy is released directly away from (normal to) the surface of an explosive, so shaping the explosive will concentrate the explosive energy in the void. If the hollow is properly shaped, usually conically, the enormous pressure generated by the detonation of the explosive drives the liner in the hollow cavity inward to collapse upon its central axis. The resulting collision forms and projects a high-velocity jet of metal particles forward along the axis. Most of the jet material originates from the innermost part of the liner, a layer of about 10% to 20% of the thickness. The rest of the liner forms a slower-moving slug of material, which, because of its appearance, is sometimes called a "carrot". Because of the variation along the liner in its collapse velocity, the jet's velocity also varies along its length, decreasing from the front. This variation in jet velocity stretches it and eventually leads to its break-up into particles. Over time, the particles tend to fall out of alignment, which reduces the depth of penetration at long standoffs. 
At the apex of the cone, which forms the very front of the jet, the liner does not have time to be fully accelerated before it forms its part of the jet. This results in its small part of jet being projected at a lower velocity than jet formed later behind it. As a result, the initial parts of the jet coalesce to form a pronounced wider tip portion. Most of the jet travels at hypersonic speed. The tip moves at 7 to 14 km/s, the jet tail at a lower velocity (1 to 3 km/s), and the slug at a still lower velocity (less than 1 km/s). The exact velocities depend on the charge's configuration and confinement, explosive type, materials used, and the explosive-initiation mode. At typical velocities, the penetration process generates such enormous pressures that it may be considered hydrodynamic; to a good approximation, the jet and armor may be treated as inviscid, compressible fluids (see, for example,), with their material strengths ignored. A recent technique using magnetic diffusion analysis showed that the temperature of the outer 50% by volume of a copper jet tip while in flight was between 1100K and 1200K, much closer to the melting point of copper (1358 K) than previously assumed. This temperature is consistent with a hydrodynamic calculation that simulated the entire experiment. In comparison, two-color radiometry measurements from the late 1970s indicate lower temperatures for various shaped-charge liner material, cone construction and type of explosive filler. A Comp-B loaded shaped charge with a copper liner and pointed cone apex had a jet tip temperature ranging from 668 K to 863 K over a five shot sampling. Octol-loaded charges with a rounded cone apex generally had higher surface temperatures with an average of 810 K, and the temperature of a tin-lead liner with Comp-B fill averaged 842 K. While the tin-lead jet was determined to be liquid, the copper jets are well below the melting point of copper. However, these temperatures are not completely consistent with evidence that soft recovered copper jet particles show signs of melting at the core while the outer portion remains solid and cannot be equated with bulk temperature. The location of the charge relative to its target is critical for optimum penetration for two reasons. If the charge is detonated too close there is not enough time for the jet to fully develop. But the jet disintegrates and disperses after a relatively short distance, usually well under two meters. At such standoffs, it breaks into particles which tend to tumble and drift off the axis of penetration, so that the successive particles tend to widen rather than deepen the hole. At very long standoffs, velocity is lost to air drag, further degrading penetration. The key to the effectiveness of the hollow charge is its diameter. As the penetration continues through the target, the width of the hole decreases leading to a characteristic "fist to finger" action, where the size of the eventual "finger" is based on the size of the original "fist". In general, shaped charges can penetrate a steel plate as thick as 150% to 700% of their diameter, depending on the charge quality. The figure is for basic steel plate, not for the composite armor, reactive armor, or other types of modern armor. Liner The most common shape of the liner is conical, with an internal apex angle of 40 to 90 degrees. Different apex angles yield different distributions of jet mass and velocity. 
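A rough numerical illustration of the hydrodynamic treatment described above: when the jet and target are idealized as fluids and material strength is neglected, the classical density-law estimate of penetration depth is P ≈ L × sqrt(rho_jet / rho_target), with L the effective jet length. The article does not state this formula explicitly; the Python sketch below is an illustrative, assumption-laden approximation with hypothetical input values, not a design tool, and it ignores jet break-up and standoff effects.

```python
# Illustrative sketch of the idealized hydrodynamic (density-law) penetration
# estimate alluded to above: depth ~ jet_length * sqrt(jet_density / target_density).
import math

def hydrodynamic_penetration(jet_length_mm: float, rho_jet: float, rho_target: float) -> float:
    """Rough penetration depth (mm) for an idealized, fully stretched jet."""
    return jet_length_mm * math.sqrt(rho_jet / rho_target)

# Hypothetical numbers: a copper jet (8960 kg/m^3) against steel (7850 kg/m^3).
print(round(hydrodynamic_penetration(400.0, 8960.0, 7850.0)))  # roughly 427 mm
```

Because copper and steel have similar densities, the idealized estimate is close to the jet length itself; real penetrations depend strongly on standoff and jet quality, as the surrounding text explains.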
Small apex angles can result in jet bifurcation, or even in the failure of the jet to form at all; this is attributed to the collapse velocity being above a certain threshold, normally slightly higher than the liner material's bulk sound speed. Other widely used shapes include hemispheres, tulips, trumpets, ellipses, and bi-conics; the various shapes yield jets with different velocity and mass distributions. Liners have been made from many materials, including various metals and glass. The deepest penetrations are achieved with a dense, ductile metal, and a very common choice has been copper. For some modern anti-armor weapons, molybdenum and pseudo-alloys of tungsten filler and copper binder (9:1, thus density is ≈18 Mg/m3) have been adopted. Nearly every common metallic element has been tried, including aluminum, tungsten, tantalum, depleted uranium, lead, tin, cadmium, cobalt, magnesium, titanium, zinc, zirconium, molybdenum, beryllium, nickel, silver, and even gold and platinum. The selection of the material depends on the target to be penetrated; for example, aluminum has been found advantageous for concrete targets. In early antitank weapons, copper was used as a liner material. Later, in the 1970s, it was found tantalum is superior to copper, due to its much higher density and very high ductility at high strain rates. Other high-density metals and alloys tend to have drawbacks in terms of price, toxicity, radioactivity, or lack of ductility. For the deepest penetrations, pure metals yield the best results, because they display the greatest ductility, which delays the breakup of the jet into particles as it stretches. In charges for oil well completion, however, it is essential that a solid slug or "carrot" not be formed, since it would plug the hole just penetrated and interfere with the influx of oil. In the petroleum industry, therefore, liners are generally fabricated by powder metallurgy, often of pseudo-alloys which, if unsintered, yield jets that are composed mainly of dispersed fine metal particles. Unsintered cold pressed liners, however, are not waterproof and tend to be brittle, which makes them easy to damage during handling. Bimetallic liners, usually zinc-lined copper, can be used; during jet formation the zinc layer vaporizes and a slug is not formed; the disadvantage is an increased cost and dependency of jet formation on the quality of bonding the two layers. Low-melting-point (below 500 °C) solder- or braze-like alloys (e.g., Sn50Pb50, Zn97.6Pb1.6, or pure metals like lead, zinc, or cadmium) can be used; these melt before reaching the well casing, and the molten metal does not obstruct the hole. Other alloys, binary eutectics (e.g. Pb88.8Sb11.1, Sn61.9Pd38.1, or Ag71.9Cu28.1), form a metal-matrix composite material with ductile matrix with brittle dendrites; such materials reduce slug formation but are difficult to shape. A metal-matrix composite with discrete inclusions of low-melting material is another option; the inclusions either melt before the jet reaches the well casing, weakening the material, or serve as crack nucleation sites, and the slug breaks up on impact. The dispersion of the second phase can be achieved also with castable alloys (e.g., copper) with a low-melting-point metal insoluble in copper, such as bismuth, 1–5% lithium, or up to 50% (usually 15–30%) lead; the size of inclusions can be adjusted by thermal treatment. Non-homogeneous distribution of the inclusions can also be achieved. 
Other additives can modify the alloy properties; tin (4–8%), nickel (up to 30% and often together with tin), up to 8% aluminium, phosphorus (forming brittle phosphides) or 1–5% silicon form brittle inclusions serving as crack initiation sites. Up to 30% zinc can be added to lower the material cost and to form additional brittle phases. Oxide glass liners produce jets of low density, therefore yielding less penetration depth. Double-layer liners, with one layer of a less dense but pyrophoric metal (e.g. aluminum or magnesium), can be used to enhance incendiary effects following the armor-piercing action; explosive welding can be used for making those, as then the metal-metal interface is homogeneous, does not contain significant amount of intermetallics, and does not have adverse effects to the formation of the jet. The penetration depth is proportional to the maximum length of the jet, which is a product of the jet tip velocity and time to particulation. The jet tip velocity depends on bulk sound velocity in the liner material, the time to particulation is dependent on the ductility of the material. The maximum achievable jet velocity is roughly 2.34 times the sound velocity in the material. The speed can reach 10 km/s, peaking some 40 microseconds after detonation; the cone tip is subjected to acceleration of about 25 million g. The jet tail reaches about 2–5 km/s. The pressure between the jet tip and the target can reach one terapascal. The immense pressure makes the metal flow like a liquid, though x-ray diffraction has shown the metal stays solid; one of the theories explaining this behavior proposes molten core and solid sheath of the jet. The best materials are face-centered cubic metals, as they are the most ductile, but even graphite and zero-ductility ceramic cones show significant penetration. Explosive charge For optimal penetration, a high explosive with a high detonation velocity and pressure is normally chosen. The most common explosive used in high performance anti-armor warheads is HMX (octogen), although never in its pure form, as it would be too sensitive. It is normally compounded with a few percent of some type of plastic binder, such as in the polymer-bonded explosive (PBX) LX-14, or with another less-sensitive explosive, such as TNT, with which it forms Octol. Other common high-performance explosives are RDX-based compositions, again either as PBXs or mixtures with TNT (to form Composition B and the Cyclotols) or wax (Cyclonites). Some explosives incorporate powdered aluminum to increase their blast and detonation temperature, but this addition generally results in decreased performance of the shaped charge. There has been research into using the very high-performance but sensitive explosive CL-20 in shaped-charge warheads, but, at present, due to its sensitivity, this has been in the form of the PBX composite LX-19 (CL-20 and Estane binder). Other features A 'waveshaper' is a body (typically a disc or cylindrical block) of an inert material (typically solid or foamed plastic, but sometimes metal, perhaps hollow) inserted within the explosive for the purpose of changing the path of the detonation wave. The effect is to modify the collapse of the cone and resulting jet formation, with the intent of increasing penetration performance. Waveshapers are often used to save space; a shorter charge with a waveshaper can achieve the same performance as a longer charge without a waveshaper. 
Given that the space of possible waveshapes is infinite, machine learning methods have been developed to engineer more optimal waveshapers that can enhance the performance of a shaped charge via computational design. Another useful design feature is sub-calibration, the use of a liner having a smaller diameter (caliber) than the explosive charge. In an ordinary charge, the explosive near the base of the cone is so thin that it is unable to accelerate the adjacent liner to sufficient velocity to form an effective jet. In a sub-calibrated charge, this part of the device is effectively cut off, resulting in a shorter charge with the same performance. Variants There are several forms of shaped charge. Linear shaped charges A linear shaped charge (LSC) has a lining with V-shaped profile and varying length. The lining is surrounded with explosive, the explosive then encased within a suitable material that serves to protect the explosive and to confine (tamp) it on detonation. "At detonation, the focusing of the explosive high pressure wave as it becomes incident to the side wall causes the metal liner of the LSC to collapse–creating the cutting force." The detonation projects into the lining, to form a continuous, knife-like (planar) jet. The jet cuts any material in its path, to a depth depending on the size and materials used in the charge. Generally, the jet penetrates around 1 to 1.2 times the charge width. For the cutting of complex geometries, there are also flexible versions of the linear shaped charge, these with a lead or high-density foam sheathing and a ductile/flexible lining material, which also is often lead. LSCs are commonly used in the cutting of rolled steel joists (RSJ) and other structural targets, such as in the controlled demolition of buildings. LSCs are also used to separate the stages of multistage rockets, and destroy them when they go errant. Explosively formed penetrator The explosively formed penetrator (EFP) is also known as the self-forging fragment (SFF), explosively formed projectile (EFP), self-forging projectile (SEFOP), plate charge, and Misnay-Schardin (MS) charge. An EFP uses the action of the explosive's detonation wave (and to a lesser extent the propulsive effect of its detonation products) to project and deform a plate or dish of ductile metal (such as copper, iron, or tantalum) into a compact high-velocity projectile, commonly called the slug. This slug is projected toward the target at about two kilometers per second. The chief advantage of the EFP over a conventional (e.g., conical) shaped charge is its effectiveness at very great standoffs, equal to hundreds of times the charge's diameter (perhaps a hundred meters for a practical device). The EFP is relatively unaffected by first-generation reactive armor and can travel up to perhaps 1000 charge diameters (CD)s before its velocity becomes ineffective at penetrating armor due to aerodynamic drag, or successfully hitting the target becomes a problem. The impact of a ball or slug EFP normally causes a large-diameter but relatively shallow hole, of, at most, a couple of CDs. If the EFP perforates the armor, spalling and extensive behind armor effects (BAE, also called behind armor damage, BAD) will occur. The BAE is mainly caused by the high-temperature and high-velocity armor and slug fragments being injected into the interior space and the blast overpressure caused by this debris. 
More modern EFP warhead versions, through the use of advanced initiation modes, can also produce long-rods (stretched slugs), multi-slugs and finned rod/slug projectiles. The long-rods are able to penetrate a much greater depth of armor, at some loss to BAE, multi-slugs are better at defeating light or area targets and the finned projectiles are much more accurate. The use of this warhead type is mainly restricted to lightly armored areas of main battle tanks (MBT) such as the top, belly and rear armored areas. It is well suited for the attack of other less heavily protected armored fighting vehicles (AFV) and in the breaching of material targets (buildings, bunkers, bridge supports, etc.). The newer rod projectiles may be effective against the more heavily armored areas of MBTs. Weapons using the EFP principle have already been used in combat; the "smart" submunitions in the CBU-97 cluster bomb used by the US Air Force and Navy in the 2003 Iraq war employed this principle, and the US Army is reportedly experimenting with precision-guided artillery shells under Project SADARM (Seek And Destroy ARMor). There are also various other projectile (BONUS, DM 642) and rocket submunitions (Motiv-3M, DM 642) and mines (MIFF, TMRP-6) that use EFP principle. Examples of EFP warheads are US patents 5038683 and US6606951. Tandem warhead Some modern anti-tank rockets (RPG-27, RPG-29) and missiles (TOW-2, TOW-2A, Eryx, HOT, MILAN) use a tandem warhead shaped charge, consisting of two separate shaped charges, one in front of the other, typically with some distance between them. TOW-2A was the first to use tandem warheads in the mid-1980s, an aspect of the weapon which the US Army had to reveal under news media and Congressional pressure resulting from the concern that NATO antitank missiles were ineffective against Soviet tanks that were fitted with the new ERA boxes. The Army revealed that a 40 mm precursor shaped-charge warhead was fitted on the tip of the TOW-2 and TOW-2A collapsible probe. Usually, the front charge is somewhat smaller than the rear one, as it is intended primarily to disrupt ERA boxes or tiles. Examples of tandem warheads are US patents 7363862 and US 5561261. The US Hellfire antiarmor missile is one of the few that have accomplished the complex engineering feat of having two shaped charges of the same diameter stacked in one warhead. Recently, a Russian arms firm revealed a 125mm tank cannon round with two same diameter shaped charges one behind the other, but with the back one offset so its penetration stream will not interfere with the front shaped charge's penetration stream. The reasoning behind both the Hellfire and the Russian 125 mm munitions having tandem same diameter warheads is not to increase penetration, but to increase the beyond-armour effect. Voitenko compressor In 1964 a Soviet scientist proposed that a shaped charge originally developed for piercing thick steel armor be adapted to the task of accelerating shock waves. The resulting device, looking a little like a wind tunnel, is called a Voitenko compressor. The Voitenko compressor initially separates a test gas from a shaped charge with a malleable steel plate. When the shaped charge detonates, most of its energy is focused on the steel plate, driving it forward and pushing the test gas ahead of it. Ames Laboratory translated this idea into a self-destroying shock tube. A 66-pound shaped charge accelerated the gas in a 3-cm glass-walled tube 2 meters in length. 
The velocity of the resulting shock wave was 220,000 feet per second (67 km/s). The apparatus exposed to the detonation was completely destroyed, but not before useful data was extracted. In a typical Voitenko compressor, a shaped charge accelerates hydrogen gas which in turn accelerates a thin disk up to about 40 km/s. A slight modification to the Voitenko compressor concept is a super-compressed detonation, a device that uses a compressible liquid or solid fuel in the steel compression chamber instead of a traditional gas mixture. A further extension of this technology is the explosive diamond anvil cell, utilizing multiple opposed shaped-charge jets projected at a single steel encapsulated fuel, such as hydrogen. The fuels used in these devices, along with the secondary combustion reactions and long blast impulse, produce similar conditions to those encountered in fuel-air and thermobaric explosives. Nuclear shaped charges The proposed Project Orion nuclear propulsion system would have required the development of nuclear shaped charges for reaction acceleration of spacecraft. Shaped-charge effects driven by nuclear explosions have been discussed speculatively, but are not known to have been produced in fact. For example, the early nuclear weapons designer Ted Taylor was quoted as saying, in the context of shaped charges, "A one-kiloton fission device, shaped properly, could make a hole in diameter a thousand feet (305 m) into solid rock." Also, a nuclear driven explosively formed penetrator was apparently proposed for terminal ballistic missile defense in the 1960s. See also Explosive lens High-explosive squash head M150 Penetration Augmented Munition List of established military terms Glossary of firearms terms References Further reading Fundamentals of Shaped Charges, W.P. Walters, J.A. Zukas, John Wiley & Sons Inc., June 1989, . Tactical Missile Warheads, Joseph Carleone (ed.), Progress in Astronautics and Aeronautics Series (V-155), Published by AIAA, 1993, . External links 1945 Popular Science article that at last revealed secrets of shaped-charge weapons; article also includes reprints of 1900 Popular Science drawings of Professor Munroe's experiments with crude shaped charges Elements of Fission Weapon Design Shaped bombs magnify Iraq attacks Shaped Charges Pierce the Toughest Targets The development of the first Hollow charges by the Germans in WWII Use of shaped charges and protection against them in WWII Ammunition Anti-tank weapons Explosives engineering Explosives
Shaped charge
[ "Chemistry", "Engineering" ]
6,585
[ "Explosives engineering", "Explosives", "Explosions" ]
37,637
https://en.wikipedia.org/wiki/Nucleophile
In chemistry, a nucleophile is a chemical species that forms bonds by donating an electron pair. All molecules and ions with a free pair of electrons or at least one pi bond can act as nucleophiles. Because nucleophiles donate electrons, they are Lewis bases. Nucleophilic describes the affinity of a nucleophile to bond with positively charged atomic nuclei. Nucleophilicity, sometimes referred to as nucleophile strength, refers to a substance's nucleophilic character and is often used to compare the affinity of atoms. Neutral nucleophilic reactions with solvents such as alcohols and water are named solvolysis. Nucleophiles may take part in nucleophilic substitution, whereby a nucleophile becomes attracted to a full or partial positive charge, and in nucleophilic addition. Nucleophilicity is closely related to basicity. The difference between the two is that basicity is a thermodynamic property (i.e. it relates to an equilibrium state), whereas nucleophilicity is a kinetic property, which relates to the rates of certain chemical reactions. History and Etymology The terms nucleophile and electrophile were introduced by Christopher Kelk Ingold in 1933, replacing the terms anionoid and cationoid proposed earlier by A. J. Lapworth in 1925. The word nucleophile is derived from nucleus and the Greek word φιλος, philos, meaning friend. Properties In general, in a row across the periodic table, the more basic the ion (the higher the pKa of the conjugate acid) the more reactive it is as a nucleophile. Within a series of nucleophiles with the same attacking element (e.g. oxygen), the order of nucleophilicity will follow basicity. Sulfur is in general a better nucleophile than oxygen. Nucleophilicity Many schemes attempting to quantify relative nucleophilic strength have been devised. The following empirical data have been obtained by measuring reaction rates for many reactions involving many nucleophiles and electrophiles. Nucleophiles displaying the so-called alpha effect are usually omitted in this type of treatment. Swain–Scott equation The first such attempt is found in the Swain–Scott equation derived in 1953, log10(k/k0) = s·n. This free-energy relationship relates the pseudo-first-order reaction rate constant (in water at 25 °C), k, of a reaction, normalized to the reaction rate, k0, of a standard reaction with water as the nucleophile, to a nucleophilic constant n for a given nucleophile and a substrate constant s that depends on the sensitivity of a substrate to nucleophilic attack (defined as 1 for methyl bromide). This treatment results in the following values for typical nucleophilic anions: acetate 2.7, chloride 3.0, azide 4.0, hydroxide 4.2, aniline 4.5, iodide 5.0, and thiosulfate 6.4. Typical substrate constants are 0.66 for ethyl tosylate, 0.77 for β-propiolactone, 1.00 for 2,3-epoxypropanol, 0.87 for benzyl chloride, and 1.43 for benzoyl chloride. The equation predicts that, in a nucleophilic displacement on benzyl chloride, the azide anion reacts 3000 times faster than water. Ritchie equation The Ritchie equation, derived in 1972, is another free-energy relationship, log10(k/k0) = N+, where N+ is the nucleophile-dependent parameter and k0 is the reaction rate constant for water. In this equation, a substrate-dependent parameter like s in the Swain–Scott equation is absent. The equation states that two nucleophiles react with the same relative reactivity regardless of the nature of the electrophile, which is in violation of the reactivity–selectivity principle. 
For this reason, this equation is also called the constant selectivity relationship. In the original publication the data were obtained by reactions of selected nucleophiles with selected electrophilic carbocations such as tropylium or diazonium cations, or ions based on malachite green. Many other reaction types have since been described. Typical Ritchie N+ values (in methanol) are: 0.5 for methanol, 5.9 for the cyanide anion, 7.5 for the methoxide anion, 8.5 for the azide anion, and 10.7 for the thiophenol anion. The values for the relative cation reactivities are −0.4 for the malachite green cation, +2.6 for the benzenediazonium cation, and +4.5 for the tropylium cation. Mayr–Patz equation In the Mayr–Patz equation (1994), log k = s(N + E), the second-order reaction rate constant k at 20 °C for a reaction is related to a nucleophilicity parameter N, an electrophilicity parameter E, and a nucleophile-dependent slope parameter s. The constant s is defined as 1 with 2-methyl-1-pentene as the nucleophile. Many of the constants have been derived from reactions of so-called benzhydrylium ions as the electrophiles with a diverse collection of π-nucleophiles. Typical E values are +6.2 for R = chlorine, +5.90 for R = hydrogen, 0 for R = methoxy and −7.02 for R = dimethylamine. Typical N values with s in parentheses are −4.47 (1.32) for electrophilic aromatic substitution to toluene (1), −0.41 (1.12) for electrophilic addition to 1-phenyl-2-propene (2), 0.96 (1) for addition to 2-methyl-1-pentene (3), −0.13 (1.21) for reaction with triphenylallylsilane (4), 3.61 (1.11) for reaction with 2-methylfuran (5), +7.48 (0.89) for reaction with isobutenyltributylstannane (6), and +13.36 (0.81) for reaction with the enamine 7. The range of organic reactions also includes SN2 reactions: with E = −9.15 for the S-methyldibenzothiophenium ion, typical nucleophile values N (s) are 15.63 (0.64) for piperidine, 10.49 (0.68) for methoxide, and 5.20 (0.89) for water. In short, nucleophilicities towards sp2 or sp3 centers follow the same pattern. Unified equation In an effort to unify the equations described above, the Mayr equation is rewritten as log k = sE·sN(N + E), with sE the electrophile-dependent slope parameter and sN the nucleophile-dependent slope parameter. This equation can be rewritten in several ways: with sE = 1 for carbocations it is equal to the original Mayr–Patz equation of 1994, log k = sN(N + E); with sN = 0.6 for most n-nucleophiles it becomes log k = 0.6·sE·(N + E), which corresponds to the original Scott–Swain equation; and with sE = 1 for carbocations and sN = 0.6 it becomes log k = 0.6·(N + E), which corresponds to the original Ritchie equation. Types Examples of nucleophiles are anions such as Cl−, or compounds with a lone pair of electrons such as NH3 (ammonia) and PR3. In the example below, the oxygen of the hydroxide ion donates an electron pair to form a new chemical bond with the carbon at the end of the bromopropane molecule. The bond between the carbon and the bromine then undergoes heterolytic fission, with the bromine atom taking both bonding electrons and becoming the bromide ion (Br−). An SN2 reaction occurs by backside attack: the hydroxide ion attacks the carbon atom from the other side, exactly opposite the bromine. Because of this backside attack, SN2 reactions result in an inversion of the configuration of the electrophile. If the electrophile is chiral, it typically maintains its chirality, though the SN2 product's absolute configuration is flipped as compared to that of the original electrophile. 
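The linear free-energy relationships above lend themselves to quick numerical checks. The short Python sketch below uses only parameter values quoted in this article; the function names are illustrative and not part of any published code. It reproduces the claim that azide displaces roughly 3000 times faster than water on benzyl chloride, and evaluates the Mayr–Patz expression for a sample nucleophile–electrophile pair.

```python
def swain_scott_ratio(s: float, n: float) -> float:
    """Rate relative to water (for which n = 0), from log10(k/k0) = s * n."""
    return 10 ** (s * n)

def mayr_patz_log_k(s: float, N: float, E: float) -> float:
    """log10 of the second-order rate constant at 20 °C, from log k = s(N + E)."""
    return s * (N + E)

# Azide (n = 4.0) attacking benzyl chloride (s = 0.87):
print(swain_scott_ratio(0.87, 4.0))    # ~3.0e3, i.e. ~3000 times faster than water

# 2-methyl-1-pentene (N = 0.96, s = 1) with a benzhydrylium ion of E = +5.90:
log_k = mayr_patz_log_k(1.0, 0.96, 5.90)
print(log_k, 10 ** log_k)              # log k ≈ 6.86, so k ≈ 7e6 in the units of the correlation
```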
Ambident nucleophile An ambident nucleophile is one that can attack from two or more places, resulting in two or more products. For example, the thiocyanate ion (SCN−) may attack from either the sulfur or the nitrogen. For this reason, the SN2 reaction of an alkyl halide with SCN− often leads to a mixture of an alkyl thiocyanate (R-SCN) and an alkyl isothiocyanate (R-NCS). Similar considerations apply in the Kolbe nitrile synthesis. Halogens While the halogens are not nucleophilic in their diatomic form (e.g. I2 is not a nucleophile), their anions are good nucleophiles. In polar, protic solvents, F− is the weakest nucleophile and I− the strongest; this order is reversed in polar, aprotic solvents. Carbon Carbon nucleophiles are often organometallic reagents such as those found in the Grignard reaction, Blaise reaction, Reformatsky reaction, and Barbier reaction, or reactions involving organolithium reagents and acetylides. These reagents are often used to perform nucleophilic additions. Enols are also carbon nucleophiles. The formation of an enol is catalyzed by acid or base. Enols are ambident nucleophiles, but are, in general, nucleophilic at the alpha carbon atom. Enols are commonly used in condensation reactions, including the Claisen condensation and the aldol condensation reactions. Oxygen Examples of oxygen nucleophiles are water (H2O), hydroxide anion, alcohols, alkoxide anions, hydrogen peroxide, and carboxylate anions. Nucleophilic attack does not take place during intermolecular hydrogen bonding. Sulfur Of sulfur nucleophiles, hydrogen sulfide and its salts, thiols (RSH), thiolate anions (RS−), anions of thiolcarboxylic acids (RC(O)-S−), and anions of dithiocarbonates (RO-C(S)-S−) and dithiocarbamates (R2N-C(S)-S−) are used most often. In general, sulfur is very nucleophilic because of its large size, which makes it readily polarizable, and its lone pairs of electrons are readily accessible. Nitrogen Nitrogen nucleophiles include ammonia, azide, amines, nitrites, hydroxylamine, hydrazine, carbazide, phenylhydrazine, semicarbazide, and amide. Metal centers Although metal centers (e.g., Li+, Zn2+, Sc3+, etc.) are most commonly cationic and electrophilic (Lewis acidic) in nature, certain metal centers (particularly ones in a low oxidation state and/or carrying a negative charge) are among the strongest recorded nucleophiles and are sometimes referred to as "supernucleophiles." For instance, using methyl iodide as the reference electrophile, Ph3Sn– is about 10000 times more nucleophilic than I–, while the Co(I) form of vitamin B12 (vitamin B12s) is about 10^7 times more nucleophilic. Other supernucleophilic metal centers include low-oxidation-state carbonyl metalate anions (e.g., CpFe(CO)2–). Examples The following table shows the nucleophilicity of some molecules with methanol as the solvent: See also References Physical organic chemistry
Nucleophile
[ "Chemistry" ]
2,672
[ "Physical organic chemistry" ]
37,649
https://en.wikipedia.org/wiki/Amplitude
The amplitude of a periodic variable is a measure of its change in a single period (such as time or spatial period). The amplitude of a non-periodic signal is its magnitude compared with a reference value. There are various definitions of amplitude (see below), which are all functions of the magnitude of the differences between the variable's extreme values. In older texts, the phase of a periodic function is sometimes called the amplitude. Definitions Peak amplitude and semi-amplitude For symmetric periodic waves, like sine waves or triangle waves, peak amplitude and semi-amplitude are the same. Peak amplitude In audio system measurements, telecommunications and others where the measurand is a signal that swings above and below a reference value but is not sinusoidal, peak amplitude is often used. If the reference is zero, this is the maximum absolute value of the signal; if the reference is a mean value (DC component), the peak amplitude is the maximum absolute value of the difference from that reference. Semi-amplitude Semi-amplitude means half of the peak-to-peak amplitude. The majority of scientific literature employs the term amplitude or peak amplitude to mean semi-amplitude. It is the most widely used measure of orbital wobble in astronomy, and the measurement of small radial-velocity semi-amplitudes of nearby stars is important in the search for exoplanets (see Doppler spectroscopy). Ambiguity In general, the use of peak amplitude is simple and unambiguous only for symmetric periodic waves, like a sine wave, a square wave, or a triangle wave. For an asymmetric wave (periodic pulses in one direction, for example), the peak amplitude becomes ambiguous. This is because the value is different depending on whether the maximum positive signal is measured relative to the mean, the maximum negative signal is measured relative to the mean, or the maximum positive signal is measured relative to the maximum negative signal (the peak-to-peak amplitude) and then divided by two (the semi-amplitude). In electrical engineering, the usual solution to this ambiguity is to measure the amplitude from a defined reference potential (such as ground or 0 V). Strictly speaking, this is no longer amplitude since there is the possibility that a constant (DC component) is included in the measurement. Peak-to-peak amplitude Peak-to-peak amplitude (abbreviated p–p or PtP or PtoP) is the change between peak (highest amplitude value) and trough (lowest amplitude value, which can be negative). With appropriate circuitry, peak-to-peak amplitudes of electric oscillations can be measured by meters or by viewing the waveform on an oscilloscope. Peak-to-peak is a straightforward measurement on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule. This remains a common way of specifying amplitude, but sometimes other measures of amplitude are more appropriate. Root mean square amplitude Root mean square (RMS) amplitude is used especially in electrical engineering: the RMS is defined as the square root of the mean over time of the square of the vertical distance of the graph from the rest state; i.e. the RMS of the AC waveform (with no DC component). For complicated waveforms, especially non-repeating signals like noise, the RMS amplitude is usually used because it is both unambiguous and has physical significance. 
For example, the average power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude (and not, in general, to the square of the peak amplitude). For alternating current electric power, the universal practice is to specify RMS values of a sinusoidal waveform. One property of root mean square voltages and currents is that they produce the same heating effect as a direct current in a given resistance. The peak-to-peak value is used, for example, when choosing rectifiers for power supplies, or when estimating the maximum voltage that insulation must withstand. Some common voltmeters are calibrated for RMS amplitude, but respond to the average value of a rectified waveform. Many digital voltmeters and all moving coil meters are in this category. The RMS calibration is only correct for a sine wave input since the ratio between peak, average and RMS values is dependent on waveform. If the wave shape being measured is greatly different from a sine wave, the relationship between RMS and average value changes. True RMS-responding meters were used in radio frequency measurements, where instruments measured the heating effect in a resistor to measure a current. The advent of microprocessor-controlled meters capable of calculating RMS by sampling the waveform has made true RMS measurement commonplace. Pulse amplitude In telecommunications, pulse amplitude is the magnitude of a pulse parameter, such as the voltage level, current level, field intensity, or power level. Pulse amplitude is measured with respect to a specified reference and therefore should be modified by qualifiers, such as average, instantaneous, peak, or root-mean-square. Pulse amplitude also applies to the amplitude of frequency- and phase-modulated waveform envelopes. Formal representation In the simple wave equation x = A sin(ω(t − K)) + b, A is the amplitude (or peak amplitude), x is the oscillating variable, ω is the angular frequency, t is time, and K and b are arbitrary constants representing time and displacement offsets respectively. Units The units of the amplitude depend on the type of wave, but are always in the same units as the oscillating variable. A more general representation of the wave equation is more complex, but the role of amplitude remains analogous to this simple case. For waves on a string, or in a medium such as water, the amplitude is a displacement. The amplitude of sound waves and audio signals (which relates to the volume) conventionally refers to the amplitude of the air pressure in the wave, but sometimes the amplitude of the displacement (movements of the air or the diaphragm of a speaker) is described. The logarithm of the amplitude squared is usually quoted in dB, so a null amplitude corresponds to −∞ dB. Loudness is related to amplitude and intensity and is one of the most salient qualities of a sound, although in general sounds it can be recognized independently of amplitude. The square of the amplitude is proportional to the intensity of the wave. For electromagnetic radiation, the amplitude of a photon corresponds to the changes in the electric field of the wave. However, radio signals may be carried by electromagnetic radiation; the intensity of the radiation (amplitude modulation) or the frequency of the radiation (frequency modulation) is oscillated and then the individual oscillations are varied (modulated) to produce the signal. 
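As a concrete illustration of these definitions, the Python sketch below (standard library only; the signal parameters and sample count are arbitrary choices for the example) samples the simple sinusoid above and computes its peak amplitude, peak-to-peak amplitude, semi-amplitude, and RMS amplitude. For a pure sine of amplitude A the results approach A, 2A, A and A/√2 respectively.

```python
import math

A, omega, K, b = 2.0, 2 * math.pi * 50, 0.0, 0.5    # arbitrary example values
num_samples = 10_000
period = 2 * math.pi / omega

# Sample x(t) = A sin(omega (t - K)) + b over one full period
xs = [A * math.sin(omega * (i * period / num_samples - K)) + b
      for i in range(num_samples)]

rest = sum(xs) / len(xs)                       # mean value (DC component / rest state)
peak = max(abs(x - rest) for x in xs)          # peak amplitude about the mean
peak_to_peak = max(xs) - min(xs)               # peak-to-peak amplitude
semi = peak_to_peak / 2                        # semi-amplitude
rms = math.sqrt(sum((x - rest) ** 2 for x in xs) / len(xs))  # RMS amplitude

print(peak, peak_to_peak, semi, rms)           # ≈ 2.0, 4.0, 2.0, 1.414 (= A / sqrt(2))
```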
Amplitude envelopes Amplitude envelope refers to the changes in the amplitude of a sound over time, and is an influential property as it affects perception of timbre. A flat tone has a steady-state amplitude that remains constant over time, which can be represented by a scalar. Other sounds can have percussive amplitude envelopes featuring an abrupt onset followed by an immediate exponential decay. Percussive amplitude envelopes are characteristic of various impact sounds (two wine glasses clinking together, hitting a drum, slamming a door, etc.), where the amplitude is transient and must be represented as either a continuous function or a discrete vector. Percussive amplitude envelopes model many common sounds that have a transient loudness attack, decay, sustain, and release. Amplitude normalization With waveforms containing many overtones, complex transient timbres can be achieved by assigning each overtone to its own distinct transient amplitude envelope. Unfortunately, this has the effect of modulating the loudness of the sound as well. It makes more sense to treat loudness and harmonic quality as parameters controlled independently of each other. To do so, harmonic amplitude envelopes are frame-by-frame normalized to become amplitude-proportion envelopes, where at each time frame all the harmonic amplitudes add up to 100% (or 1). This way, the main loudness-controlling envelope can be cleanly controlled. In sound recognition, max-amplitude normalization can be used to help align the key harmonic features of two similar sounds, allowing similar timbres to be recognized independent of loudness. See also Body thermal amplitude Complex amplitude Pitch (music) Wave height Waves and their properties: Crest factor Envelope Frequency Wavelength Notes Physical quantities Sound Wave mechanics
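A minimal sketch of the frame-by-frame normalization described above (pure Python; the frame values are made-up illustrative data, and the helper name is not from any particular library): each frame of harmonic amplitudes is rescaled so its entries sum to 1, separating the harmonic mix from the overall loudness.

```python
# Each inner list is one time frame of harmonic amplitudes (illustrative numbers).
harmonic_frames = [
    [0.8, 0.4, 0.2, 0.1],
    [0.6, 0.5, 0.3, 0.1],
    [0.2, 0.2, 0.1, 0.0],
]

def to_proportion_envelopes(frames):
    """Rescale every frame so its harmonic amplitudes sum to 1 (skip silent frames)."""
    normalized = []
    for frame in frames:
        total = sum(frame)
        normalized.append([a / total for a in frame] if total > 0 else list(frame))
    return normalized

proportions = to_proportion_envelopes(harmonic_frames)
loudness_envelope = [sum(frame) for frame in harmonic_frames]  # overall loudness, kept separate
print(proportions[0])       # [0.533..., 0.266..., 0.133..., 0.066...] - sums to 1
print(loudness_envelope)    # [1.5, 1.5, 0.5]
```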
Amplitude
[ "Physics", "Mathematics" ]
1,717
[ "Physical phenomena", "Physical quantities", "Quantity", "Classical mechanics", "Waves", "Wave mechanics", "Physical properties" ]
37,706
https://en.wikipedia.org/wiki/Covalent%20radius
The covalent radius, rcov, is a measure of the size of an atom that forms part of one covalent bond. It is usually measured either in picometres (pm) or angstroms (Å), with 1 Å = 100 pm. In principle, the sum of the two covalent radii should equal the covalent bond length between two atoms, R(AB) = r(A) + r(B). Moreover, different radii can be introduced for single, double and triple bonds (r1, r2 and r3 below), in a purely operational sense. These relationships are certainly not exact because the size of an atom is not constant but depends on its chemical environment. For heteroatomic A–B bonds, ionic terms may enter. Often the polar covalent bonds are shorter than would be expected based on the sum of covalent radii. Tabulated values of covalent radii are either average or idealized values, which nevertheless show a certain transferability between different situations, which makes them useful. The bond lengths R(AB) are measured by X-ray diffraction (more rarely, neutron diffraction on molecular crystals). Rotational spectroscopy can also give extremely accurate values of bond lengths. For homonuclear A–A bonds, Linus Pauling took the covalent radius to be half the single-bond length in the element, e.g. R(H–H, in H2) = 74.14 pm so rcov(H) = 37.07 pm: in practice, it is usual to obtain an average value from a variety of covalent compounds, although the difference is usually small. Sanderson has published a recent set of non-polar covalent radii for the main-group elements, but the availability of large collections of bond lengths, which are more transferable, from the Cambridge Crystallographic Database has rendered covalent radii obsolete in many situations. Average radii The values in the table below are based on a statistical analysis of more than 228,000 experimental bond lengths from the Cambridge Structural Database. For carbon, values are given for the different hybridisations of the orbitals. Radius for multiple bonds A different approach is to make a self-consistent fit for all elements in a smaller set of molecules. This was done separately for single, double, and triple bonds up to superheavy elements. Both experimental and computational data were used. The single-bond results are often similar to those of Cordero et al. When they are different, the coordination numbers used can be different. This is notably the case for most (d and f) transition metals. Normally one expects that r1 > r2 > r3. Deviations may occur for weak multiple bonds, if the differences of the ligand are larger than the differences of R in the data used. Note that elements up to atomic number 118 (oganesson) have now been experimentally produced and that there are chemical studies on an increasing number of them. The same, self-consistent approach was used to fit tetrahedral covalent radii for 30 elements in 48 crystals with subpicometer accuracy. See also Atomic radii of the elements (data page) Ionization energy Electron affinity Electron configuration Periodic table References Chemical properties Chemical bonding Atomic radius
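As an illustration of the additivity relation R(AB) = r(A) + r(B) discussed above, the short Python sketch below uses a few approximate single-bond covalent radii (rounded literature values, quoted here only for illustration) to estimate bond lengths and compare them with typical experimental values.

```python
# Approximate single-bond covalent radii in picometres (rounded, illustrative values).
r_cov = {"H": 31, "C": 76, "N": 71, "O": 66, "Cl": 102}

def predicted_length(a: str, b: str) -> int:
    """Estimate the A-B bond length as the sum of the two covalent radii."""
    return r_cov[a] + r_cov[b]

# Predicted vs. typical experimental bond lengths (pm):
print("C-H :", predicted_length("C", "H"), "vs ~109 observed")
print("O-H :", predicted_length("O", "H"), "vs ~96 observed")
print("C-Cl:", predicted_length("C", "Cl"), "vs ~177 observed")
```

The small discrepancies reflect the point made above: atomic size is not strictly constant but depends on the chemical environment, and polar bonds in particular tend to be shorter than the simple sum of radii.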
Covalent radius
[ "Physics", "Chemistry", "Materials_science" ]
684
[ "Atomic radius", "Condensed matter physics", "nan", "Chemical bonding", "Atoms", "Matter" ]
37,710
https://en.wikipedia.org/wiki/Enthalpy%20of%20vaporization
In thermodynamics, the enthalpy of vaporization (symbol ΔHvap), also known as the (latent) heat of vaporization or heat of evaporation, is the amount of energy (enthalpy) that must be added to a liquid substance to transform a quantity of that substance into a gas. The enthalpy of vaporization is a function of the pressure and temperature at which the transformation (vaporization or evaporation) takes place. The enthalpy of vaporization is often quoted for the normal boiling temperature of the substance. Although tabulated values are usually corrected to 298 K, that correction is often smaller than the uncertainty in the measured value. The heat of vaporization is temperature-dependent, though a constant heat of vaporization can be assumed for small temperature ranges and for reduced temperatures Tr ≪ 1. The heat of vaporization diminishes with increasing temperature and it vanishes completely at a certain point called the critical temperature (Tc). Above the critical temperature, the liquid and vapor phases are indistinguishable, and the substance is called a supercritical fluid. Units Values are usually quoted in J/mol, or kJ/mol (molar enthalpy of vaporization), although kJ/kg, or J/g (specific heat of vaporization), and older units like kcal/mol, cal/g and Btu/lb are sometimes still used among others. Enthalpy of condensation The enthalpy of condensation (or heat of condensation) is by definition equal to the enthalpy of vaporization with the opposite sign: enthalpy changes of vaporization are always positive (heat is absorbed by the substance), whereas enthalpy changes of condensation are always negative (heat is released by the substance). Thermodynamic background The enthalpy of vaporization can be written as ΔvH = ΔvU + pΔvV. It is equal to the increased internal energy of the vapor phase compared with the liquid phase, plus the work done against ambient pressure. The increase in the internal energy can be viewed as the energy required to overcome the intermolecular interactions in the liquid (or solid, in the case of sublimation). Hence helium has a particularly low enthalpy of vaporization, 0.0845 kJ/mol, as the van der Waals forces between helium atoms are particularly weak. On the other hand, the molecules in liquid water are held together by relatively strong hydrogen bonds, and its enthalpy of vaporization, 40.65 kJ/mol, is more than five times the energy required to heat the same quantity of water from 0 °C to 100 °C (cp = 75.3 J/K·mol). Care must be taken, however, when using enthalpies of vaporization to measure the strength of intermolecular forces, as these forces may persist to an extent in the gas phase (as is the case with hydrogen fluoride), and so the calculated value of the bond strength will be too low. This is particularly true of metals, which often form covalently bonded molecules in the gas phase: in these cases, the enthalpy of atomization must be used to obtain a true value of the bond energy. An alternative description is to view the enthalpy of condensation as the heat which must be released to the surroundings to compensate for the drop in entropy when a gas condenses to a liquid. As the liquid and gas are in equilibrium at the boiling point (Tb), ΔvG = 0, which leads to ΔvS = ΔvH / Tb. As neither entropy nor enthalpy vary greatly with temperature, it is normal to use the tabulated standard values without any correction for the difference in temperature from 298 K. 
A correction must be made if the pressure is different from 100 kPa, as the entropy of an ideal gas is proportional to the logarithm of its pressure. The entropies of liquids vary little with pressure, as the coefficient of thermal expansion of a liquid is small. These two definitions are equivalent: the boiling point is the temperature at which the increased entropy of the gas phase overcomes the intermolecular forces. As a given quantity of matter always has a higher entropy in the gas phase than in a condensed phase (ΔvS is always positive), and from ΔvG = ΔvH − TΔvS, the Gibbs free energy change falls with increasing temperature: gases are favored at higher temperatures, as is observed in practice. Vaporization enthalpy of electrolyte solutions Estimation of the enthalpy of vaporization of electrolyte solutions can be carried out simply using equations based on chemical thermodynamic models, such as the Pitzer model or the TCPC model. Selected values Elements The vaporization of metals is a key step in metal vapor synthesis, which exploits the increased reactivity of metal atoms or small particles relative to the bulk elements. Other common substances Enthalpies of vaporization of common substances, measured at their respective standard boiling points: See also Clausius–Clapeyron relation Shimansky equation, describes the temperature dependence of the heat of vaporization Enthalpy of fusion, specific heat of melting Enthalpy of sublimation Joback method, estimation of the heat of vaporization at the normal boiling point from molecular structures Latent heat References CODATA Key Values for Thermodynamics NIST Chemistry WebBook Enthalpy
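A small numerical sketch of the relations above (Python, standard library only; the water data are the figures quoted in this article): it computes the entropy of vaporization of water at its normal boiling point from ΔvS = ΔvH / Tb and checks the claim that vaporizing water takes more than five times the energy needed to heat it from 0 °C to 100 °C.

```python
# Water data quoted above
dH_vap = 40650.0      # J/mol, enthalpy of vaporization at the normal boiling point
T_b = 373.15          # K, normal boiling point
c_p = 75.3            # J/(K·mol), molar heat capacity of liquid water

# Entropy of vaporization from ΔvS = ΔvH / Tb (since ΔvG = 0 at the boiling point)
dS_vap = dH_vap / T_b
print(f"dS_vap ≈ {dS_vap:.1f} J/(K·mol)")        # ≈ 108.9 J/(K·mol)

# Energy to heat one mole of liquid water from 0 °C to 100 °C, assuming constant c_p
heating = c_p * 100.0
print(f"heating 0-100 °C ≈ {heating:.0f} J/mol")  # ≈ 7530 J/mol
print(f"ratio ≈ {dH_vap / heating:.1f}")          # ≈ 5.4, i.e. more than five times
```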
Enthalpy of vaporization
[ "Physics", "Chemistry", "Mathematics" ]
1,105
[ "Enthalpy", "Quantity", "Physical quantities", "Thermodynamic properties" ]
37,808
https://en.wikipedia.org/wiki/Catalase
Catalase is a common enzyme found in nearly all living organisms exposed to oxygen (such as bacteria, plants, and animals) which catalyzes the decomposition of hydrogen peroxide to water and oxygen. It is a very important enzyme in protecting the cell from oxidative damage by reactive oxygen species (ROS). Catalase has one of the highest turnover numbers of all enzymes; one catalase molecule can convert millions of hydrogen peroxide molecules to water and oxygen each second. Catalase is a tetramer of four polypeptide chains, each over 500 amino acids long. It contains four iron-containing heme groups that allow the enzyme to react with hydrogen peroxide. The optimum pH for human catalase is approximately 7, with a fairly broad maximum: the rate of reaction does not change appreciably between pH 6.8 and 7.5. The pH optimum for other catalases varies between 4 and 11 depending on the species. The optimum temperature also varies by species. Structure Human catalase forms a tetramer composed of four subunits, each of which can be conceptually divided into four domains. The extensive core of each subunit is generated by an eight-stranded antiparallel β-barrel (β1-8), with nearest neighbor connectivity capped by β-barrel loops on one side and α9 loops on the other. A helical domain at one face of the β-barrel is composed of four C-terminal helices (α16, α17, α18, and α19) and four helices derived from residues between β4 and β5 (α4, α5, α6, and α7). Alternative splicing may result in different protein variants. History Catalase was first noticed in 1818 by Louis Jacques Thénard, who discovered hydrogen peroxide (H2O2). Thénard suggested its breakdown was caused by an unknown substance. In 1900, Oscar Loew was the first to give it the name catalase, and found it in many plants and animals. In 1937, catalase from beef liver was crystallized by James B. Sumner and Alexander Dounce, and the molecular weight was measured in 1938. The amino acid sequence of bovine catalase was determined in 1969, and the three-dimensional structure in 1981. Function Molecular mechanism While the complete mechanism of catalase is not currently known, the reaction is believed to occur in two stages: H2O2 + Fe(III)-E → H2O + O=Fe(IV)-E(.+) H2O2 + O=Fe(IV)-E(.+) → H2O + Fe(III)-E + O2 Here Fe(III)-E represents the iron center of the heme group attached to the enzyme. Fe(IV)-E(.+) is a mesomeric form of Fe(V)-E, meaning the iron is not completely oxidized to +V, but receives some stabilising electron density from the heme ligand, which is then shown as a radical cation (.+). As hydrogen peroxide enters the active site, it interacts with the amino acids Asn148 (asparagine at position 148) and His75, causing a proton (hydrogen ion) to transfer between the oxygen atoms. The free oxygen atom coordinates, freeing the newly formed water molecule and Fe(IV)=O. Fe(IV)=O reacts with a second hydrogen peroxide molecule to reform Fe(III)-E and produce water and oxygen. The reactivity of the iron center may be improved by the presence of the phenolate ligand of Tyr358 in the fifth coordination position, which can assist in the oxidation of the Fe(III) to Fe(IV). The efficiency of the reaction may also be improved by the interactions of His75 and Asn148 with reaction intermediates. The decomposition of hydrogen peroxide by catalase proceeds according to first-order kinetics, the rate being proportional to the hydrogen peroxide concentration. 
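Because the decomposition follows first-order kinetics in hydrogen peroxide, the concentration decays exponentially with a fixed effective rate constant. The Python sketch below illustrates this; the rate constant and starting concentration are arbitrary illustrative numbers, not measured values for any particular catalase.

```python
import math

k = 0.5      # effective first-order rate constant in 1/s (illustrative value)
c0 = 10.0    # initial H2O2 concentration in mmol/L (illustrative value)

# First-order kinetics: d[H2O2]/dt = -k [H2O2]  =>  [H2O2](t) = c0 * exp(-k t)
for t in range(0, 11, 2):                  # time in seconds
    c = c0 * math.exp(-k * t)
    print(f"t = {t:2d} s   [H2O2] = {c:6.3f} mmol/L")
```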
Catalase can also catalyze the oxidation, by hydrogen peroxide, of various metabolites and toxins, including formaldehyde, formic acid, phenols, acetaldehyde and alcohols. It does so according to the following reaction: H2O2 + H2R → 2H2O + R The exact mechanism of this reaction is not known. Any heavy metal ion (such as copper cations in copper(II) sulfate) can act as a noncompetitive inhibitor of catalase. However, "Copper deficiency can lead to a reduction in catalase activity in tissues, such as heart and liver." Furthermore, the poison cyanide is a noncompetitive inhibitor of catalase at high concentrations of hydrogen peroxide. Arsenate acts as an activator. Three-dimensional protein structures of the peroxidated catalase intermediates are available at the Protein Data Bank. Cellular role Hydrogen peroxide is a harmful byproduct of many normal metabolic processes; to prevent damage to cells and tissues, it must be quickly converted into other, less dangerous substances. To this end, catalase is frequently used by cells to rapidly catalyze the decomposition of hydrogen peroxide into less-reactive gaseous oxygen and water molecules. Mice genetically engineered to lack catalase are initially phenotypically normal. However, catalase deficiency in mice may increase the likelihood of developing obesity, fatty liver, and type 2 diabetes. Some humans have very low levels of catalase (acatalasia), yet show few ill effects. The increased oxidative stress that occurs with aging in mice is alleviated by over-expression of catalase. Over-expressing mice do not exhibit the age-associated loss of spermatozoa, testicular germ and Sertoli cells seen in wild-type mice. Oxidative stress in wild-type mice ordinarily induces oxidative DNA damage (measured as 8-oxodG) in sperm with aging, but this damage is significantly reduced in aged catalase over-expressing mice. Furthermore, these over-expressing mice show no decrease in the age-dependent number of pups per litter. Overexpression of catalase targeted to mitochondria extends the lifespan of mice. In eukaryotes, catalase is usually located in a cellular organelle called the peroxisome. Peroxisomes in plant cells are involved in photorespiration (the use of oxygen and production of carbon dioxide) and symbiotic nitrogen fixation (the breaking apart of diatomic nitrogen (N2) to reactive nitrogen atoms). Hydrogen peroxide is used as a potent antimicrobial agent when cells are infected with a pathogen. Catalase-positive pathogens, such as Mycobacterium tuberculosis, Legionella pneumophila, and Campylobacter jejuni, make catalase to deactivate the peroxide radicals, thus allowing them to survive unharmed within the host. Like alcohol dehydrogenase, catalase converts ethanol to acetaldehyde, but it is unlikely that this reaction is physiologically significant. Distribution among organisms The large majority of known organisms use catalase in every organ, with particularly high concentrations occurring in the liver in mammals. Catalase is found primarily in peroxisomes and the cytosol of erythrocytes (and sometimes in mitochondria). Almost all aerobic microorganisms use catalase. It is also present in some anaerobic microorganisms, such as Methanosarcina barkeri. Catalase is also universal among plants and occurs in most fungi. One unique use of catalase occurs in the bombardier beetle. This beetle has two sets of liquids that are stored separately in two paired glands. 
The larger of the pair, the storage chamber or reservoir, contains hydroquinones and hydrogen peroxide, while the smaller, the reaction chamber, contains catalases and peroxidases. To activate the noxious spray, the beetle mixes the contents of the two compartments, causing oxygen to be liberated from hydrogen peroxide. The oxygen oxidizes the hydroquinones and also acts as the propellant. The oxidation reaction is very exothermic (ΔH = −202.8 kJ/mol) and rapidly heats the mixture to the boiling point. Long-lived queens of the termite Reticulitermes speratus have significantly lower oxidative damage to their DNA than non-reproductive individuals (workers and soldiers). Queens have more than two times higher catalase activity and seven times higher expression levels of the catalase gene RsCAT1 than workers. It appears that the efficient antioxidant capability of termite queens can partly explain how they attain longer life. Catalase enzymes from various species have vastly differing optimum temperatures. Poikilothermic animals typically have catalases with optimum temperatures in the range of 15-25 °C, while mammalian or avian catalases might have optimum temperatures above 35 °C, and catalases from plants vary depending on their growth habit. In contrast, catalase isolated from the hyperthermophile archaeon Pyrobaculum calidifontis has a temperature optimum of 90 °C. Clinical significance and application Catalase is used in the food industry for removing hydrogen peroxide from milk prior to cheese production. Another use is in food wrappers, where it prevents food from oxidizing. Catalase is also used in the textile industry, removing hydrogen peroxide from fabrics to make sure the material is peroxide-free. A minor use is in contact lens hygiene – a few lens-cleaning products disinfect the lens using a hydrogen peroxide solution; a solution containing catalase is then used to decompose the hydrogen peroxide before the lens is used again. Bacterial identification (catalase test) The catalase test is one of the three main tests used by microbiologists to identify species of bacteria. If the bacteria possess catalase (i.e., are catalase-positive), bubbles of oxygen are observed when a small amount of bacterial isolate is added to hydrogen peroxide. The catalase test is done by placing a drop of hydrogen peroxide on a microscope slide. An applicator stick is touched to the colony, and the tip is then smeared onto the hydrogen peroxide drop. If the mixture produces bubbles or froth, the organism is said to be 'catalase-positive'. Staphylococci and Micrococci are catalase-positive. Other catalase-positive organisms include Listeria, Corynebacterium diphtheriae, Burkholderia cepacia, Nocardia, the family Enterobacteriaceae (Citrobacter, E. coli, Enterobacter, Klebsiella, Shigella, Yersinia, Proteus, Salmonella, Serratia), Pseudomonas, Mycobacterium tuberculosis, Aspergillus, Cryptococcus, and Rhodococcus equi. If not, the organism is 'catalase-negative'. Streptococcus and Enterococcus spp. are catalase-negative. While the catalase test alone cannot identify a particular organism, it can aid identification when combined with other tests such as antibiotic resistance. The presence of catalase in bacterial cells depends on both the growth condition and the medium used to grow the cells. Capillary tubes may also be used. A small sample of bacteria is collected on the end of the capillary tube, without blocking the tube, to avoid false negative results. 
The opposite end is then dipped into hydrogen peroxide, which is drawn into the tube through capillary action, and turned upside down, so that the bacterial sample points downwards. The hand holding the tube is then tapped on the bench, moving the hydrogen peroxide down until it touches the bacteria. If bubbles form on contact, this indicates a positive catalase result. This test can detect catalase-positive bacteria at concentrations above about 10^5 cells/mL, and is simple to use. Bacterial virulence Neutrophils and other phagocytes use peroxide to kill bacteria. The enzyme NADPH oxidase generates superoxide within the phagosome, which is converted via hydrogen peroxide to other oxidising substances like hypochlorous acid, which kill phagocytosed pathogens. In individuals with chronic granulomatous disease (CGD), phagocytic peroxide production is impaired due to a defective NADPH oxidase system. Normal cellular metabolism will still produce a small amount of peroxide, and this peroxide can be used to produce hypochlorous acid to eradicate the bacterial infection. However, if individuals with CGD are infected with catalase-positive bacteria, the bacterial catalase can destroy the excess peroxide before it can be used to produce other oxidising substances. In these individuals the pathogen survives and becomes a chronic infection. This chronic infection is typically surrounded by macrophages in an attempt to isolate the infection. This wall of macrophages surrounding a pathogen is called a granuloma. Many bacteria are catalase-positive, but some are better catalase-producers than others. Some catalase-positive bacteria and fungi include: Nocardia, Pseudomonas, Listeria, Aspergillus, Candida, E. coli, Staphylococcus, Serratia, B. cepacia and H. pylori. Acatalasia Acatalasia is a condition caused by homozygous mutations in CAT, resulting in a lack of catalase. Symptoms are mild and include oral ulcers. A heterozygous CAT mutation results in lower, but still present, catalase levels. Gray hair Low levels of catalase may play a role in the graying process of human hair. Hydrogen peroxide is naturally produced by the body and broken down by catalase. Hydrogen peroxide can accumulate in hair follicles, and if catalase levels decline, this buildup can cause oxidative stress and graying. These low levels of catalase are associated with old age. Hydrogen peroxide interferes with the production of melanin, the pigment that gives hair its color. Interactions Catalase has been shown to interact with the ABL2 and Abl genes. Infection with the murine leukemia virus causes catalase activity to decline in the lungs, heart and kidneys of mice. Conversely, dietary fish oil increased catalase activity in the heart and kidneys of mice. Methods for determining catalase activity In 1870, Schoenn discovered a formation of yellow color from the interaction of hydrogen peroxide with molybdate; then, from the middle of the 20th century, this reaction began to be used for colorimetric determination of unreacted hydrogen peroxide in the catalase activity assay. The reaction became widely used after publications by Korolyuk et al. (1988) and Goth (1991). The first paper describes a serum catalase assay with no buffer in the reaction medium; the latter describes a procedure based on phosphate buffer as the reaction medium. Since phosphate ion reacts with ammonium molybdate, the use of MOPS buffer as a reaction medium is more appropriate. 
Direct UV measurement of the decrease in the concentration of hydrogen peroxide is also widely used after the publications by Beers & Sizer and Aebi. See also Enzyme kinetics Glutathione peroxidase Peroxidase Superoxide dismutase References External links EC 1.11.1 Antioxidants Hemoproteins Enzymes Catalysis Copper enzymes
Catalase
[ "Chemistry" ]
3,317
[ "Catalysis", "Chemical kinetics" ]
37,831
https://en.wikipedia.org/wiki/Hybrid-propellant%20rocket
A hybrid-propellant rocket is a rocket with a rocket motor that uses rocket propellants in two different phases: one solid and the other either gas or liquid. The hybrid rocket concept can be traced back to the early 1930s. Hybrid rockets avoid some of the disadvantages of solid rockets like the dangers of propellant handling, while also avoiding some disadvantages of liquid rockets like their mechanical complexity. Because it is difficult for the fuel and oxidizer to be mixed intimately (being different states of matter), hybrid rockets tend to fail more benignly than liquids or solids. Like liquid rocket engines, hybrid rocket motors can be shut down easily and the thrust is throttleable. The theoretical specific impulse (Isp) performance of hybrids is generally higher than that of solid motors and lower than that of liquid engines. Isp as high as 400 s has been measured in a hybrid rocket using metalized fuels. Hybrid systems are more complex than solid ones, but they avoid significant hazards of manufacturing, shipping and handling solid rocket motors by storing the oxidizer and the fuel separately. History The first work on hybrid rockets was performed in the early 1930s at the Soviet Group for the Study of Reactive Motion. Mikhail Klavdievich Tikhonravov, who would later supervise the design of Sputnik I and the Luna programme, was responsible for the first hybrid propelled rocket launch, the GIRD-9, on 17 August 1933, which reached an altitude of . Further work was carried out in the late 1930s at IG Farben in Germany and concurrently at the California Rocket Society in the United States. Leonid Andrussow, working in Germany, theorized hybrid-propellant rockets. O. Lutz, W. Noeggerath, and Andrussow tested a hybrid rocket motor using coal and gaseous N2O as the propellants. Oberth also worked on a hybrid rocket motor using LOX as the oxidizer and graphite as the fuel. The high heat of sublimation of carbon prevented these rocket motors from operating efficiently, as it resulted in a negligible burning rate. In the 1940s, the California Pacific Rocket Society used LOX in combination with several different fuel types, including wood, wax, and rubber. The most successful of these tests was with the rubber fuel, which is still the dominant fuel in use today. In June 1951, a LOX / rubber rocket was flown to an altitude of . Two major efforts occurred in the 1950s. One of these efforts was by G. Moore and K. Berman at General Electric. The duo used 90% high test peroxide (HTP, or H2O2) and polyethylene (PE) in a rod and tube grain design. They drew several significant conclusions from their work. The fuel grain had uniform burning. Grain cracks did not affect combustion, as they do with solid rocket motors. No hard starts were observed (a hard start is a pressure spike seen close to the time of ignition, typical of liquid rocket engines). The fuel surface acted as a flame holder, which encouraged stable combustion. The oxidizer could be throttled with one valve, and a high oxidizer to fuel ratio helped simplify combustion. The negative observations were the low burning rates and the fact that the thermal instability of peroxide was problematic for safety reasons. Another effort that occurred in the 1950s was the development of a reverse hybrid. In a standard hybrid rocket motor, the solid material is the fuel. In a reverse hybrid rocket motor, the oxidizer is solid. William Avery of the Applied Physics Laboratory used jet fuel and ammonium nitrate, selected for their low cost. 
His O/F ratio was 0.035, which was 200 times smaller than the ratio used by Moore and Berman. In 1953, the Pacific Rocket Society (est. 1943) was developing the XDF-23, a hybrid rocket designed by Jim Nuding, using LOX and a rubber polymer called "Thiokol". They had already tried other fuels in prior iterations including cotton, paraffin wax and wood. The XDF name itself comes from "experimental Douglas fir", after one of the first units. In the 1960s, European organizations also began work on hybrid rockets. ONERA, based in France, and Volvo Flygmotor, based in Sweden, developed sounding rockets using hybrid rocket motor technology. The ONERA group focused on a hypergolic rocket motor, using nitric acid and an amine fuel, developing the LEX sounding rocket. The company flew eight rockets: once in April 1964, three times in June 1965, and four times in 1967. The maximum altitude the flights achieved was over . The Volvo Flygmotor group also used a hypergolic propellant combination. They also used nitric acid for their oxidizer, but used Tagaform (polybutadiene with an aromatic amine) as their fuel. Their flight was in 1969, lofting a payload to . Meanwhile, in the United States, United Technologies Center (Chemical Systems Division) and Beech Aircraft were working on a supersonic target drone, known as Sandpiper. It used MON-25 (mixed 25% NO, 75% N2O4) as the oxidizer and polymethyl methacrylate (PMM) and Mg for the fuel. The drone flew six times in 1968, for more than 300 seconds and to an altitude greater than . The second iteration of the rocket, known as the HAST, had IRFNA-PB/PMM for its propellants and was throttleable over a 10/1 range. HAST could carry a heavier payload than the Sandpiper. Another iteration, which used the same propellant combination as the HAST, was developed by Chemical Systems Division and Teledyne Aircraft. Development for this program ended in the mid-1980s. Chemical Systems Division also worked on a propellant combination of lithium and FLOx (mixed F2 and O2). This was an efficient hypergolic rocket that was throttleable. The vacuum specific impulse was 380 seconds at 93% combustion efficiency. American Rocket Company (AMROC) developed the largest hybrid rockets ever created in the late 1980s and early 1990s. The first version of their engine, fired at the Air Force Phillips Laboratory, produced of thrust for 70 seconds with a propellant combination of LOX and hydroxyl-terminated polybutadiene (HTPB) rubber. The second version of the motor, known as the H-250F, produced more than of thrust. Korey Kline of Environmental Aeroscience Corporation (eAc) first fired a gaseous oxygen and rubber hybrid in 1982 at Lucerne Dry Lake, CA, after discussions on the technology with Bill Wood, formerly with Westinghouse. The first SpaceShipOne hybrid tests were successfully conducted by Kline and eAc at Mojave, CA. In 1994, the U.S. Air Force Academy flew a hybrid sounding rocket to an altitude of . The rocket used HTPB and LOX for its propellant, and reached a peak thrust of and had a thrust duration of 16 seconds. Basic concepts In its simplest form, a hybrid rocket consists of a pressure vessel (tank) containing the liquid oxidizer, the combustion chamber containing the solid propellant, and a mechanical device separating the two. When thrust is desired, a suitable ignition source is introduced in the combustion chamber and the valve is opened. The liquid oxidiser (or gas) flows into the combustion chamber where it is vaporized and then reacted with the solid propellant. 
Combustion occurs in a boundary layer diffusion flame adjacent to the surface of the solid propellant. Generally, the liquid propellant is the oxidizer and the solid propellant is the fuel, because solid oxidizers are extremely dangerous and lower performing than liquid oxidizers. Furthermore, using a solid fuel such as hydroxyl-terminated polybutadiene (HTPB) or paraffin wax allows for the incorporation of high-energy fuel additives such as aluminium, lithium, or metal hydrides. Combustion The governing equation for hybrid rocket combustion shows that the regression rate is dependent on the oxidizer mass flux rate, which means the rate that the fuel will burn is proportional to the amount of oxidizer flowing through the port. This differs from a solid rocket motor, in which the regression rate is proportional to the chamber pressure of the motor. The relation is commonly written as r = a·Gox^n, where r is the regression rate, a is the regression rate coefficient (incorporating the grain length), Gox is the oxidizer mass flux rate, and n is the regression rate exponent. As the motor burns, the increase in diameter of the fuel port results in an increased fuel mass flow rate. This phenomenon makes the oxidizer to fuel ratio (O/F) shift during the burn. The increased fuel mass flow rate can be compensated for by also increasing the oxidizer mass flow rate. In addition to the O/F varying as a function of time, it also varies based on the position down the fuel grain. The closer the position is to the top of the fuel grain, the higher the O/F ratio. Since the O/F varies down the port, a point called the stoichiometric point may exist at some point down the grain. Properties Hybrid rocket motors exhibit some obvious as well as some subtle advantages over liquid-fuel rockets and solid-fuel rockets. A brief summary of some of these is given below: Advantages compared with liquid rockets Mechanically simpler – requires only a single liquid propellant resulting in less plumbing, fewer valves, and simpler operations. Denser fuel – fuels in the solid phase generally have higher density than those in the liquid phase, reducing overall system volume. Metal additives – reactive metals such as aluminium, magnesium, lithium or beryllium can be easily included in the fuel grain increasing specific impulse (Isp), density, or both. Combustion instabilities – Hybrid rockets do not typically exhibit the high-frequency combustion instabilities that plague liquid rockets, because the solid fuel grain breaks up acoustic waves that would otherwise reflect in an open liquid-engine combustion chamber. Propellant pressurization – One of the most difficult portions of a liquid rocket system to design is the turbopump. Turbopump design is complex, as it has to precisely and efficiently pump and keep separated two fluids of different properties in precise ratios at very high volumetric flow rates, often at cryogenic temperatures and with highly volatile chemicals, while combusting those same fluids in order to power itself. Hybrids have far less fluid to move and can often be pressurized by a blow-down system (which would be prohibitively heavy in a liquid rocket) or self-pressurized oxidizers (such as N2O). Cooling – Liquid rockets often depend on one of the propellants, typically the fuel, to cool the combustion chamber and nozzle due to the very high heat fluxes and vulnerability of the metal walls to oxidation and stress cracking. Hybrid rockets have combustion chambers that are lined with the solid propellant, which shields them from the product gases. 
Their nozzles are often graphite or coated in ablative materials, similar to solid rocket motors. The design, construction, and testing of liquid cooling flows is complex, making the system more prone to failure. Advantages compared with solid rockets Higher theoretical Isp – possible due to the limits of known solid oxidizers compared to commonly used liquid oxidizers. Less explosion hazard – Propellant grain is more tolerant of processing errors such as cracks since the burn rate is dependent on oxidizer mass flux rate. Propellant grain cannot be ignited by stray electrical charge and is very insensitive to auto-igniting due to heat. Hybrid rocket motors can be transported to the launch site with the oxidizer and fuel stored separately, improving safety. Fewer handling and storage issues – Ingredients in solid rockets are often incompatible chemically and thermally. Repeated changes in temperature can cause distortion of the grain. Antioxidants and coatings are used to keep the grain from breaking down or decomposing. More controllable – Stop/restart and throttling are all easily incorporated into most designs. Solid rockets rarely can be shut down easily and almost never have throttling or restart capabilities. Disadvantages of hybrid rockets Hybrid rockets also exhibit some disadvantages when compared with liquid and solid rockets. These include: Oxidizer-to-fuel ratio shift ("O/F shift") – with a constant oxidizer flow-rate, the ratio of fuel production rate to oxidizer flow rate will change as a grain regresses. This leads to off-peak operation from a chemical performance point of view. However, for a well-designed hybrid, O/F shift has a very small impact on performance because Isp is insensitive to O/F shift near the peak. Poor regression characteristics often drive multi-port fuel grains. Multi-port fuel grains have poor volumetric efficiency and, often, structural deficiencies. High regression rate liquefying fuels developed in the late 1990s offer a potential solution to this problem. Compared with liquid-based propulsion, re-fueling a partially or totally depleted hybrid rocket would present significant challenges, as the solid propellant cannot simply be pumped into a fuel tank. This may or may not be an issue, depending upon how the rocket is planned to be used. In general, much less development work has been completed with hybrids than liquids or solids, and it is likely that some of these disadvantages could be rectified through further investment in research and development. One problem in designing large hybrid orbital rockets is that turbopumps become necessary to achieve high flow rates and pressurization of the oxidizer. This turbopump must be powered by something. In a traditional liquid-propellant rocket, the turbopump uses the same fuel and oxidizer as the rocket, since they are both liquid and can be fed to the pre-burner. But in a hybrid, the fuel is solid and cannot be fed to a turbopump's engine. Some hybrids use an oxidizer that can also be used as a monopropellant, such as hydrogen peroxide, and so a turbopump can run on it alone. However, hydrogen peroxide is significantly less efficient than liquid oxygen, which cannot be used alone to run a turbopump. Another fuel would be needed, requiring its own tank and decreasing rocket performance. Fuel Common fuel choices A reverse-hybrid rocket, which is not very common, is one where the engine uses a solid oxidizer and a liquid fuel. Some liquid fuel options are kerosene, hydrazine, and LH2. 
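The regression-rate law r = a·Gox^n given in the Combustion section, together with the port geometry, determines how the O/F ratio shifts during a burn. The Python sketch below steps a cylindrical single-port grain through time at constant oxidizer flow; the values of a, n, fuel density and geometry are made-up illustrative numbers, not data for any particular motor or fuel.

```python
import math

# Illustrative constants (not data for any real motor)
a, n = 2.0e-5, 0.62          # regression-rate coefficient and exponent (SI units)
rho_fuel = 920.0             # solid fuel density, kg/m^3
port_radius = 0.03           # initial port radius, m
grain_length = 1.0           # grain length, m
mdot_ox = 1.5                # constant oxidizer mass flow, kg/s

dt = 1.0                     # time step, s
for t in range(0, 11):
    port_area = math.pi * port_radius ** 2
    G_ox = mdot_ox / port_area                 # oxidizer mass flux, kg/(m^2 s)
    r_dot = a * G_ox ** n                      # regression rate, m/s
    burn_area = 2 * math.pi * port_radius * grain_length
    mdot_fuel = rho_fuel * r_dot * burn_area   # fuel mass flow, kg/s
    of_ratio = mdot_ox / mdot_fuel
    print(f"t={t:2d}s  Gox={G_ox:7.1f}  rdot={r_dot*1000:5.2f} mm/s  O/F={of_ratio:4.2f}")
    port_radius += r_dot * dt                  # port opens up as fuel burns
```

As the port radius grows, the oxidizer flux and regression rate fall while the burning surface grows, so the printed O/F drifts over the burn, which is the O/F shift discussed in the list of disadvantages above.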
Common fuels for a typical hybrid rocket engine include polymers such as acrylics, polyethylene (PE), cross-linked rubber, such as HTPB, or liquefying fuels such as paraffin wax. Plexiglass was a common fuel, since the combustion could be seen through the transparent combustion chamber. Hydroxyl-terminated polybutadiene (HTPB) synthetic rubber is currently the most popular fuel for hybrid rocket engines, due to its energy and to how safe it is to handle. Tests have been performed in which HTPB was soaked in liquid oxygen, and it still did not become explosive. These fuels are generally not as dense as those used in solid rocket motors, so they are often doped with aluminum to increase the density and therefore the rocket performance. Grain manufacturing methods Cast Hybrid rocket fuel grains can be manufactured via casting techniques, since they are typically a plastic or a rubber. Complex geometries, which are driven by the need for higher fuel mass flow rates, make casting fuel grains for hybrid rockets expensive and time-consuming, due in part to equipment costs. On a larger scale, cast grains must be supported by internal webbing, so that large chunks of fuel do not impact or even potentially block the nozzle. Grain defects are also an issue in larger grains. Traditional fuels that are cast are hydroxyl-terminated polybutadiene (HTPB) and paraffin waxes. Additive manufacturing Additive manufacturing is currently being used to create grain structures that would otherwise not be possible to manufacture. Helical ports have been shown to increase fuel regression rates while also increasing volumetric efficiency. An example of material used for a hybrid rocket fuel is acrylonitrile butadiene styrene (ABS). The printed material is also typically enhanced with additives to improve rocket performance. Recent work at the University of Tennessee Knoxville has shown that, due to the increased surface area, the use of powdered fuels (e.g. graphite, coal, aluminum) encased in a 3D-printed ABS matrix can significantly increase the fuel burn rate and thrust level as compared to traditional polymer grains. Oxidizer Common oxidizer choices Common oxidizers include gaseous or liquid oxygen, nitrous oxide, and hydrogen peroxide. For a reverse hybrid, oxidizers such as frozen oxygen and ammonium perchlorate are used. Proper oxidizer vaporization is important for the rocket to perform efficiently. Improper vaporization can lead to very large regression rate differences at the head end of the motor when compared to the aft end. One method is to use a hot gas generator to heat the oxidizer in a pre-combustion chamber. Another method is to use an oxidizer that can also be used as a monopropellant. A good example is hydrogen peroxide, which can be catalytically decomposed over a silver bed into hot oxygen and steam. A third method is to inject a propellant that is hypergolic with the oxidizer into the flow. Some of the oxidizer will decompose, heating up the rest of the oxidizer in the flow. Hybrid safety Generally, well designed and carefully constructed hybrids are very safe. The primary hazards associated with hybrids are: Pressure vessel failures – Chamber insulation failure may allow hot combustion gases near the chamber walls, leading to a "burn-through" in which the vessel ruptures. 
Blow back – For oxidizers that decompose exothermically, such as nitrous oxide or hydrogen peroxide, flame or hot gases from the combustion chamber can propagate back through the injector, vaporising the oxidizer and mixing it with hot fuel-rich gases, leading to a tank explosion. Blow-back requires gases to flow back through the injector due to insufficient pressure drop, which can occur during periods of unstable combustion. Blow-back is inherent to specific oxidizers and is not possible with oxidizers such as oxygen or nitrogen tetroxide, unless fuel is present in the oxidizer tank. Hard starts – An excess of oxidizer in the combustion chamber prior to ignition, particularly for monopropellants such as nitrous oxide, can result in a temporary over-pressure or "spike" at ignition. Because the fuel in a hybrid does not contain an oxidizer, it will not combust explosively on its own. For this reason, hybrids are classified as having no TNT equivalent explosive power. In contrast, solid rockets often have TNT equivalencies similar in magnitude to the mass of the propellant grain. Liquid-fuel rockets typically have a TNT equivalence calculated based on the amount of fuel and oxidizer which could realistically intimately combine before igniting explosively; this is often taken to be 10–20% of the total propellant mass. For hybrids, even filling the combustion chamber with oxidizer prior to ignition will not generally create an explosion with the solid fuel; the explosive equivalence is often quoted as 0%. Organizations working on hybrids Commercial companies In 1998 SpaceDev acquired all of the intellectual property, designs, and test results generated by over 200 hybrid rocket motor firings by the American Rocket Company over its eight-year life. SpaceShipOne, the first private crewed spacecraft, was powered by SpaceDev's hybrid rocket motor burning HTPB with nitrous oxide. However, nitrous oxide was the prime substance responsible for the explosion that killed three in the development of the successor of SpaceShipOne at Scaled Composites in 2007. The Virgin Galactic SpaceShipTwo follow-on commercial suborbital spaceplane uses a scaled-up hybrid motor. SpaceDev was developing the SpaceDev Streaker, an expendable small launch vehicle, and SpaceDev Dream Chaser, capable of both suborbital and orbital human space flight. Both Streaker and Dream Chaser use hybrid rocket motors that burn nitrous oxide and the synthetic HTPB rubber. SpaceDev was acquired by Sierra Nevada Corporation in 2009, becoming its Space Systems division, which continues to develop Dream Chaser for NASA's Commercial Crew Development contract. Sierra Nevada also developed RocketMotorTwo, the hybrid engine for SpaceShipTwo. On October 31, 2014, when SpaceShipTwo was lost, initial speculation had suggested that its hybrid engine had in fact exploded and killed one test pilot and seriously injured the other. However, investigation data now indicates that an early deployment of the SpaceShipTwo feather system was the cause of the aerodynamic breakup of the vehicle. U.S. Rockets manufactured and deployed hybrids using self-pressurizing nitrous oxide (N2O) and hydroxyl-terminated polybutadiene (HTPB), as well as mixed high-test peroxide (HTP) and HTPB. The 86% high-test peroxide (H2O2) and HTPB and aluminum hybrids developed by U.S. Rockets produced a sea-level delivered specific impulse (Isp) of 240 s, well above the typical 180 s of N2O-HTPB hybrids. 
In addition to that, they were self-starting, restartable, had considerably lower combustion instability making them suitable for fragile or crewed missions such as Bloodhound SSC, SpaceShipTwo or SpaceShipThree. The company had successfully tested and deployed both pressure fed and pump fed versions of the latter HTP-HTPB style. Deliverables to date have ranged from diameter, and developed units up to diameter. The vendor claimed scalability to over diameter with regression rates approaching solids, according to literature distributed at the November 2013 Defense Advanced Research Projects Agency (DARPA) meeting for XS-1. U.S. Rockets is no longer manufacturing large-scale rockets. Gilmour Space Technologies began testing Hybrid rocket engines in 2015 with both N2O and HP with HDPE and HDPE+wax blends. For 2016 testing includes a HP/PE engine. The company is planning to use hybrids for both sounding and orbital rockets. Orbital Technologies Corporation (Orbitec) has been involved in some U.S. government-funded research on hybrid rockets including the "Vortex Hybrid" concept. Environmental Aeroscience Corporation (eAc) was incorporated in 1994 to develop hybrid rocket propulsion systems. It was included in the design competition for the SpaceShipOne motor but lost the contract to SpaceDev. Environmental Aeroscience Corporation still supplied parts to SpaceDev for the oxidizer fill, vent, and dump system. Rocket Lab formerly sold hybrid sounding rockets and related technology. The Reaction Research Society (RRS), although known primarily for their work with liquid rocket propulsion, has a long history of research and development with hybrid rocket propulsion. Copenhagen Suborbitals, a Danish rocket group, has designed and test-fired several hybrids using N2O at first and currently LOX. Their fuel is epoxy, paraffin wax, or polyurethane. The group eventually moved away from hybrids because of thrust instabilities, and now uses a motor similar to that of the V-2 rocket. TiSPACE is a Taiwanese company which is developing a family of hybrid-propellant rockets. bluShift Aerospace in Brunswick, Maine, won a NASA SBIR grant to develop a modular hybrid rocket engine for its proprietary bio-derived fuel in June 2019. Having completed the grant bluShift has launched its first sounding rocket using the technology. Vaya Space based out of Cocoa, Florida, is expected to launch its hybrid fuel rocket Dauntless in 2023. Reaction Dynamics based out Saint-Jean-sur-Richelieu, Quebec, began developing a hybrid rocket engine in 2017 capable of producing 21.6 kN of thrust. Their Aurora rocket will use nine engines on the first stage and one engine on the second stage and will be capable of delivering a payload of 50–150 kg to LEO. In May 2022, Reaction Dynamics announced they were partnering with Maritime Launch Services to launch the Aurora rocket from their launch site currently under construction in Canso, Nova Scotia, beginning with suborbital test flights in Summer, 2023 with a target of 2024 for the first orbital launch. In 2017 DeltaV Uzay Teknolojileri A.Ş. was founded by Savunma Sanayi Teknolojileri A.Ş (SSTEK), a state company of Turkey, for hybrid-propellant-rocket research. The company CEO Arif Karabeyoglu is former Consulting Professor of Stanford University in the area of rocket propulsion and combustion. 
According to the company web site, DeltaV achieved many firsts in hybrid-propellant-rocket technology, including the first paraffin/LOX dual-fuel rocket launch, the highest specific impulses for a hybrid-propellant rocket, the first sounding rocket to reach 100 km altitude, the first orbital hybrid-propellant-rocket design, and the first orbital firing of a hybrid-propellant rocket. Universities Space Propulsion Group was founded in 1999 by Arif Karabeyoglu, Brian Cantwell, and others from Stanford University to develop high regression-rate liquefying hybrid rocket fuels. They have successfully fired large motors using the technology and are currently developing a larger motor to be initially fired in 2010. Stanford University is the institution where liquid-layer combustion theory for hybrid rockets was developed. The SPaSE group at Stanford is currently working with NASA Ames Research Center developing the Peregrine sounding rocket, which will be capable of reaching 100 km altitude. Engineering challenges include various types of combustion instabilities. Although the proposed motor was test fired in 2013, the Peregrine program eventually switched to a standard solid rocket for its 2016 debut. The University of Tennessee Knoxville has carried out hybrid rocket research since 1999, working in collaboration with NASA Marshall Space Flight Center and private industry. This work has included the integration of a water-cooled calorimeter nozzle, one of the first 3D-printed hot-section components successfully used in a rocket motor. Other work at the university has focused on the use of helical oxidizer injection, bio-derived fuels and powdered fuels encased in a 3D-printed ABS matrix, including the successful launch of a coal-fired hybrid at the 2019 Spaceport America Cup. At the Delft University of Technology, the student team Delft Aerospace Rocket Engineering (DARE) is very active in the design and building of hybrid rockets. In October 2015, DARE broke the European student altitude record with the Stratos II+ sounding rocket. Stratos II+ was propelled by the DHX-200 hybrid rocket engine, using a nitrous oxide oxidizer and a fuel blend of paraffin, sorbitol and aluminium powder. On July 26, 2018, DARE attempted to launch the Stratos III hybrid rocket. This rocket used the same fuel/oxidizer combination as its predecessor, but with an increased total impulse of around 360 kNs. At the time of development, this was the most powerful hybrid rocket engine ever developed by a student team in terms of total impulse. The Stratos III vehicle was lost 20 seconds into the flight. Florida Institute of Technology has successfully tested and evaluated hybrid technologies with their Panther Project. The WARR student team at the Technical University of Munich has been developing hybrid engines and rockets since the early 1970s, using acids, oxygen, or nitrous oxide in combination with polyethylene or HTPB. The development includes test stand engines as well as airborne versions, like the first German hybrid rocket, Barbarella. They are currently working on a hybrid rocket with liquid oxygen as its oxidizer, aiming to break the European altitude record for amateur rockets. They are also working with Rocket Crafters and testing their hybrid rockets. Boston University's student-run "Rocket Propulsion Group", which in the past has launched only solid motor rockets, is attempting to design and build a single-stage hybrid sounding rocket to launch into sub-orbital space by July 2015. 
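The liquefying (paraffin-type) fuels developed by the Stanford-affiliated groups above are attractive mainly because of their higher regression rates. The following is a minimal Python sketch of the empirical regression-rate law commonly used in preliminary hybrid motor sizing, rdot = a·Gox^n; the coefficients, port geometry and mass flow are illustrative assumptions, not figures for any motor described above.

# Empirical hybrid regression-rate law: rdot = a * Gox**n
# (a, n and the port geometry below are illustrative assumptions only)
import math

a = 1.3e-4           # regression coefficient, m/s per (kg/m^2/s)^n (assumed)
n = 0.5              # mass-flux exponent (assumed; typical range ~0.4-0.8)
rho_fuel = 920.0     # approximate paraffin density, kg/m^3

mdot_ox = 2.0        # oxidizer mass flow, kg/s (assumed)
port_radius = 0.05   # m (assumed)
port_length = 0.60   # m (assumed)

Gox = mdot_ox / (math.pi * port_radius**2)   # oxidizer mass flux, kg/(m^2 s)
rdot = a * Gox**n                            # fuel regression rate, m/s
burn_area = 2.0 * math.pi * port_radius * port_length
mdot_fuel = rho_fuel * burn_area * rdot      # fuel mass flow, kg/s

print(f"Gox = {Gox:.1f} kg/m^2/s, rdot = {rdot*1000:.2f} mm/s")
print(f"fuel flow = {mdot_fuel:.3f} kg/s, O/F = {mdot_ox/mdot_fuel:.2f}")

With these assumed values the sketch gives a regression rate of roughly 2 mm/s, in the range often cited for liquefying fuels and well above typical polymer-fuel rates.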
Brigham Young University (BYU), the University of Utah, and Utah State University launched a student-designed rocket called Unity IV in 1995 which burned the solid fuel hydroxyl-terminated polybutadiene (HTPB) with an oxidizer of gaseous oxygen, and in 2003 launched a larger version which burned HTPB with nitrous oxide. The University of Brasilia's (UnB) Hybrid Rocket Team initiated their endeavors in 1999 within the Faculty of Technology, marking the pioneering institution in the Southern Hemisphere to engage with hybrid rockets. Over time, the team has achieved notable milestones, encompassing the creation of various sounding rockets and hybrid rocket engines. Presently, the team is known as the Chemical Propulsion Laboratory (CPL) and is situated at Campus UnB Gama. CPL has made significant strides in the advancement of critical hybrid engine technologies. This includes the development of a modular 1 kN hybrid rocket engine for the SARA platform, an innovative methane-oxygen gas-torch ignition system, an efficient oxidizer feed system, precision flow control valves, and thrust vector control mechanisms tailored for hybrid engines. Additionally, they've achieved a breakthrough with a 3D-printed, actively cooled hybrid rocket engine. Furthermore, the Laboratory is actively engaged in diverse areas of research and development, with current projects spanning the formulation of hybrid engine fuels using paraffin wax and N2O, numerical simulations, optimization techniques, and rocket design. CPL collaborates extensively with governmental agencies, private investors, and other educational institutions, including FAPDF, FAPESP, CNPq, and AEB. A notable collaborative effort includes the Capital Rocket Team (CRT), a group of students from UnB, who are currently partnering with CPL to develop hybrid sounding rockets. In a remarkable achievement, CRT clinched the top spot in the 2022 Latin American Space Challenge (LASC). University of California, Los Angeles's student-run "Rocket Project at UCLA" launches hybrid propulsion rockets using nitrous oxide as an oxidizer and HTPB as the fuel. They are currently in the development process of their fifth student-built hybrid rocket engine. University of Toronto's student-run "University of Toronto Aerospace Team", designs and builds hybrid engine powered rockets. They are currently constructing a new engine testing facility at the University of Toronto Institute for Aerospace Studies, and are working towards breaking the Canadian amateur rocketry altitude record with their new rocket, Defiance MKIII, currently under rigorous testing. Defiance MK III's engine, QUASAR, is a Nitrous-Paraffin hybrid engine, capable of producing 7 kN of thrust for a period of 9 seconds. In 2016, Pakistan's DHA Suffa University successfully developed Raheel-1, hybrid rocket engines in 1 kN class, using paraffin wax and liquid oxygen, thereby becoming the first university run rocket research program in the country. In India, Birla Institute of Technology, Mesra Space engineering and rocketry department has been working on Hybrid Projects with various fuels and oxidizers. Pars Rocketry Group from Istanbul Technical University has designed and built the first hybrid rocket engine of Turkey, the rocket engine extensively tested in May 2015. A United Kingdom-based team (laffin-gas) is using four N2O hybrid rockets in a drag-racing style car. Each rocket has an outer diameter of 150 mm and is 1.4 m long. They use a fuel grain of high-density wound paper soaked in cooking oil. 
The N2O supply is provided by nitrogen-pressurised piston accumulators, which provide a higher rate of delivery than N2O gas alone and also provide damping of any reverse shock. In Italy, one of the leading centers for research in hybrid-propellant rockets is CISAS (Center of Studies and Activities for Space) "G. Colombo", University of Padua. The activities cover all stages of development: from theoretical analysis of the combustion process to numerical simulation using CFD codes, and finally to ground tests of small-scale and large-scale rockets (up to 20 kN, N2O-paraffin wax based motors). One of these engines flew successfully in 2009. Since 2014, the research group has focused on the use of high-test peroxide as oxidizer, in partnership with "Technology for Propulsion and Innovation", a University of Padua spin-off company. In Taiwan, hybrid rocket system developments began in 2009 through R&D projects of NSPO with two university teams. Both teams employed a nitrous oxide/HTPB propellant system with different improvement schemes. Several hybrid rockets have been successfully launched by the NCKU and NCTU teams so far, reaching altitudes of 10–20 km. Their plans include attempting a 100–200 km altitude launch to test nanosatellites, and developing orbital launch capabilities for nanosatellites in the long run. A sub-scale N2O/PE dual-vortical-flow (DVF) hybrid engine hot-fire test in 2014 delivered an average Isp of 280 s, which indicates that the system reached around 97% combustion efficiency. In Germany, the University of Stuttgart's student team HyEnd is the current world record holder for the highest-flying student-built hybrid rocket with their HEROS rockets. In Bangladesh, Amateur Experimental Rocketry Dhaka, supported by the American International University Bangladesh, has also tested the country's first hybrid rocket engine and is now working towards larger paraffin/nitrous oxide based prototypes. The Aerospace Team of TU Graz, Austria, is also developing a hybrid-propellant rocket. The Polish student team PWr in Space at Wrocław University of Science and Technology has developed three hybrid rockets: R2 "Setka", R3 "Dziewięćdziesiątka dziewiątka" and, the most powerful of all, R4 "Lynx", which was successfully tested at their test stand. Many other universities, such as Embry-Riddle Aeronautical University, the University of Washington, Purdue University, the University of Michigan at Ann Arbor, the University of Arkansas at Little Rock, Hendrix College, the University of Illinois, Portland State University, University of KwaZulu-Natal, Texas A&M University, Aarhus University, Rice University, and AGH University of Science and Technology have hybrid motor test stands that allow for student research with hybrid rockets. High power rocketry There are a number of hybrid rocket motor systems available for amateur/hobbyist use in high-powered model rocketry. These include the popular HyperTek systems and a number of 'Urbanski-Colburn Valved' (U/C) systems such as RATTWorks, Contrail Rockets, and Propulsion Polymers. All of these systems use nitrous oxide as the oxidizer and a plastic fuel (such as polyvinyl chloride (PVC) or polypropylene) or a polymer-based fuel such as HTPB. This reduces the cost per flight compared to solid rocket motors, although there is generally more ground support equipment required with hybrids. 
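Specific impulse figures such as the 280 s quoted for the Taiwanese DVF test can be turned into a rough performance estimate with the ideal (Tsiolkovsky) rocket equation. The short Python sketch below does this conversion; the wet and dry masses are assumed purely for illustration and do not describe any vehicle mentioned above.

# Convert specific impulse to effective exhaust velocity and apply the
# ideal (Tsiolkovsky) rocket equation.  The masses below are assumptions.
import math

g0 = 9.80665          # standard gravity, m/s^2
isp = 280.0           # s, e.g. the average value quoted for the DVF test
ve = g0 * isp         # effective exhaust velocity, ~2.75 km/s

m_wet = 120.0         # kg, assumed liftoff mass of a small sounding rocket
m_dry = 60.0          # kg, assumed burnout mass
delta_v = ve * math.log(m_wet / m_dry)   # ideal velocity change (no drag or gravity losses)

print(f"ve = {ve:.0f} m/s, ideal delta-v = {delta_v:.0f} m/s")
# ve ~ 2746 m/s; ideal delta-v ~ 1900 m/s for a mass ratio of 2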
In popular culture An October 26, 2005 episode of the television show MythBusters entitled "Confederate Rocket" featured a hybrid rocket motor using liquid nitrous oxide and paraffin wax. The myth purported that during the American Civil War, the Confederate Army was able to construct a rocket of this type. The myth was revisited in a later episode entitled Salami Rocket, using hollowed out dry salami as the solid fuel. In the February 18, 2007, episode of Top Gear, a Reliant Robin was used by Richard Hammond and James May in an attempt to modify a normal K-reg Robin into a reusable Space Shuttle. Steve Holland, a professional radio-controlled aircraft pilot, helped Hammond to work out how to land a Robin safely. The craft was built by senior members of the United Kingdom Rocketry Association (UKRA) and achieved a successful launch, flew for several seconds into the air and managed to successfully jettison the solid-fuel rocket boosters on time. This was the largest rocket launched by a non-government organisation in Europe. It used motors by Contrail Rockets giving a maximum thrust of 8 tonnes. However, the car failed to separate from the large external fuel tank due to faulty explosive bolts between the Robin and the external tank, and the Robin subsequently crashed into the ground and seemed to have exploded soon after. This explosion was added for dramatic effect as neither Reliant Robins nor hybrid rocket motors explode in the way depicted. See also Spacecraft propulsion Rocket propulsion technologies (disambiguation) References Further reading External links Rocket propulsion Rocket engines by propellant Rocketry
Hybrid-propellant rocket
[ "Engineering" ]
7,603
[ "Rocketry", "Aerospace engineering" ]
37,838
https://en.wikipedia.org/wiki/Hall-effect%20thruster
In spacecraft propulsion, a Hall-effect thruster (HET) is a type of ion thruster in which the propellant is accelerated by an electric field. Hall-effect thrusters (based on the discovery by Edwin Hall) are sometimes referred to as Hall thrusters or Hall-current thrusters. Hall-effect thrusters use a magnetic field to limit the electrons' axial motion and then use them to ionize propellant, efficiently accelerate the ions to produce thrust, and neutralize the ions in the plume. The Hall-effect thruster is classed as a moderate specific impulse (1,600s) space propulsion technology and has benefited from considerable theoretical and experimental research since the 1960s. Hall thrusters operate on a variety of propellants, the most common being xenon and krypton. Other propellants of interest include argon, bismuth, iodine, magnesium, zinc and adamantane. Hall thrusters are able to accelerate their exhaust to speeds between 10 and 80 km/s (1,000–8,000 s specific impulse), with most models operating between 15 and 30 km/s. The thrust produced depends on the power level. Devices operating at 1.35 kW produce about 83 mN of thrust. High-power models have demonstrated up to 5.4 N in the laboratory. Power levels up to 100 kW have been demonstrated for xenon Hall thrusters. , Hall-effect thrusters ranged in input power levels from 1.35 to 10 kilowatts and had exhaust velocities of 10–50 kilometers per second, with thrust of 40–600 millinewtons and efficiency in the range of 45–60 percent. The applications of Hall-effect thrusters include control of the orientation and position of orbiting satellites and use as a main propulsion engine for medium-size robotic space vehicles. History Hall thrusters were studied independently in the United States and the Soviet Union. They were first described publicly in the US in the early 1960s. However, the Hall thruster was first developed into an efficient propulsion device in the Soviet Union. In the US, scientists focused on developing gridded ion thrusters. Soviet designs Two types of Hall thrusters were developed in the Soviet Union: thrusters with wide acceleration zone, SPT (; , Stationary Plasma Thruster) at Design Bureau Fakel thrusters with narrow acceleration zone, DAS (; , Thruster with Anode Layer), at the Central Research Institute for Machine Building (TsNIIMASH). The SPT design was largely the work of A. I. Morozov. The first SPT to operate in space, an SPT-50 aboard a Soviet Meteor spacecraft, was launched December 1971. They were mainly used for satellite stabilization in north–south and in east–west directions. Since then until the late 1990s 118 SPT engines completed their mission and some 50 continued to be operated. Thrust of the first generation of SPT engines, SPT-50 and SPT-60 was 20 and 30 mN respectively. In 1982, the SPT-70 and SPT-100 were introduced, their thrusts being 40 and 83 mN, respectively. In the post-Soviet Russia high-power (a few kilowatts) SPT-140, SPT-160, SPT-200, T-160, and low-power (less than 500 W) SPT-35 were introduced. Soviet and Russian TAL-type thrusters include the D-38, D-55, D-80, and D-100. Over 200 Hall thrusters have been flown on Soviet/Russian satellites since the 1980s. No failures have ever occurred in orbit. 
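The power, thrust, specific impulse and efficiency figures quoted above are linked by the standard jet-power relation for electric thrusters, T = 2ηP/ve. A minimal Python consistency check follows; the efficiency value is an assumed mid-range number within the 45–60% range given above, not a measured datum.

# Jet-power relation: P_jet = 0.5 * T * ve = eta * P_in, so T = 2 * eta * P_in / ve.
g0 = 9.80665
P_in = 1350.0        # W, input power (figure quoted above)
isp = 1600.0         # s, moderate Hall-thruster specific impulse (quoted above)
eta = 0.50           # assumed total efficiency, within the quoted 45-60% range

ve = g0 * isp                    # exhaust velocity, ~15.7 km/s
thrust = 2.0 * eta * P_in / ve   # N
print(f"ve = {ve/1000:.1f} km/s, thrust = {thrust*1000:.0f} mN")
# ~86 mN, close to the ~83 mN quoted for a 1.35 kW device

The agreement with the quoted 83 mN figure shows that the stated thrust, power and specific impulse values are mutually consistent for an efficiency near 50%.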
Non-Soviet designs Soviet-built thrusters were introduced to the West in 1992 after a team of electric propulsion specialists from NASA's Jet Propulsion Laboratory, Glenn Research Center, and the Air Force Research Laboratory, under the support of the Ballistic Missile Defense Organization, visited Russian laboratories and experimentally evaluated the SPT-100 (i.e., a 100 mm diameter SPT thruster). Hall thrusters continue to be used on Russian spacecraft and have also flown on European and American spacecraft. Space Systems/Loral, an American commercial satellite manufacturer, now flies Fakel SPT-100s on its GEO communications spacecraft. Since the early 1990s, Hall thrusters have been the subject of a large number of research efforts throughout the United States, India, France, Italy, Japan, and Russia (with many smaller efforts scattered in various countries across the globe). Hall thruster research in the US is conducted at several government laboratories, universities and private companies. Government and government-funded centers include NASA's Jet Propulsion Laboratory, NASA's Glenn Research Center, the Air Force Research Laboratory (Edwards AFB, California), and The Aerospace Corporation. Universities include the US Air Force Institute of Technology, University of Michigan, Stanford University, The Massachusetts Institute of Technology, Princeton University, Michigan Technological University, and Georgia Tech. In 2023, students at the Olin College of Engineering demonstrated the first undergraduate-designed steady-state Hall thruster. A considerable amount of development is being conducted in industry, for example by IHI Corporation in Japan, Aerojet and Busek in the US, SNECMA in France, LAJP in Ukraine, SITAEL in Italy, and Satrec Initiative in South Korea. The first use of Hall thrusters in lunar orbit was the European Space Agency (ESA) lunar mission SMART-1 in 2003. Hall thrusters were first demonstrated on a western satellite on the Naval Research Laboratory (NRL) STEX spacecraft, which flew the Russian D-55. The first American Hall thruster to fly in space was the Busek BHT-200 on the TacSat-2 technology demonstration spacecraft. The first flight of an American Hall thruster on an operational mission was the Aerojet BPT-4000, which launched in August 2010 on the military Advanced Extremely High Frequency GEO communications satellite. At 4.5 kW, the BPT-4000 is also the highest-power Hall thruster ever flown in space. Besides the usual stationkeeping tasks, the BPT-4000 is also providing orbit-raising capability to the spacecraft. The X-37B has been used as a testbed for the Hall thruster for the AEHF satellite series. Several countries worldwide continue efforts to qualify Hall thruster technology for commercial uses. The SpaceX Starlink constellation, the largest satellite constellation in the world, uses Hall-effect thrusters. Starlink initially used krypton gas, but switched to argon with its V2 satellites due to its lower price and wider availability. The first deployment of Hall thrusters beyond Earth's sphere of influence was the Psyche spacecraft, launched in 2023 towards the asteroid belt to explore 16 Psyche. Indian designs Research in India is carried out by both public and private research institutes and companies. In 2010, ISRO used Hall-effect ion propulsion thrusters in GSAT-4 carried by GSLV-D3. It had four xenon-powered thrusters for north-south station keeping. Two of them were Russian and the other two were Indian. The Indian thrusters were rated at 13 mN. 
However, GSLV-D3 did not make it to orbit. In 2013, ISRO funded development of another class of electric thruster, the magnetoplasmadynamic thruster. The project subsequently developed a technology demonstrator prototype using argon propellant with a specific impulse of 2500s at a thrust of 25 mN. The following year in 2014, ISRO was pursuing development of 75 mN & 250 mN SPT thrusters to be used in its future high power communication satellites. The 75 mN thrusters were put to use aboard the GSAT-9 communication satellite. By 2021 development of a 300 mN thruster was complete. Alongside it, RF-powered 10 kW plasma engines and krypton based low power electric propulsion were being pursued. With private firms entering the space domain, Bellatrix Aerospace became the first commercial firm to bring out commercial Hall-effect thrusters. The current model of the thruster uses xenon as fuel. Tests were carried out at the spacecraft propulsion research laboratory in the Indian Institute of Science, Bengaluru. Heaterless cathode technology was used to increase the system's lifespan and redundancy. Bellatrix Aerospace had previously developed the first commercially available microwave electrothermal thruster, for which the company received an order from ISRO. The ARKA-series of HET was launched on PSLV-C55 mission. It was successfully tested on POEM-2. Principle of operation The essential working principle of the Hall thruster is that it uses an electrostatic potential to accelerate ions up to high speeds. In a Hall thruster, the attractive negative charge is provided by an electron plasma at the open end of the thruster instead of a grid. A radial magnetic field of about is used to confine the electrons, where the combination of the radial magnetic field and axial electric field cause the electrons to drift in azimuth thus forming the Hall current from which the device gets its name. A schematic of a Hall thruster is shown in the adjacent image. An electric potential of between 150 and 800 volts is applied between the anode and cathode. The central spike forms one pole of an electromagnet and is surrounded by an annular space, and around that is the other pole of the electromagnet, with a radial magnetic field in between. The propellant, such as xenon gas, is fed through the anode, which has numerous small holes in it to act as a gas distributor. As the neutral xenon atoms diffuse into the channel of the thruster, they are ionized by collisions with circulating high-energy electrons (typically 10–40 eV, or about 10% of the discharge voltage). Most of the xenon atoms are ionized to a net charge of +1, but a noticeable fraction (c. 20%) have +2 net charge. The xenon ions are then accelerated by the electric field between the anode and the cathode. For discharge voltages of 300 V, the ions reach speeds of around for a specific impulse of 1,500 s (15 kN·s/kg). Upon exiting, however, the ions pull an equal number of electrons with them, creating a plasma plume with no net charge. The radial magnetic field is designed to be strong enough to substantially deflect the low-mass electrons, but not the high-mass ions, which have a much larger gyroradius and are hardly impeded. The majority of electrons are thus stuck orbiting in the region of high radial magnetic field near the thruster exit plane, trapped in E×B (axial electric field and radial magnetic field). This orbital rotation of the electrons is a circulating Hall current, and it is from this that the Hall thruster gets its name. 
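The electrostatic acceleration just described can be bounded by simple energy conservation: a singly charged ion falling through the full discharge potential reaches v = sqrt(2eV/m). Real thrusters achieve somewhat lower effective exhaust velocities, because ions are created partway down the potential drop and some energy is spent on ionization, so the Python sketch below is an upper-bound estimate, not a performance prediction.

# Upper-bound exhaust speed for a singly charged xenon ion accelerated
# through the full discharge voltage: v = sqrt(2*q*V/m).
import math

e = 1.602176634e-19        # elementary charge, C
u = 1.66053907e-27         # atomic mass unit, kg
m_xe = 131.293 * u         # Xe+ ion mass, kg

for V in (150.0, 300.0, 800.0):      # discharge voltages mentioned in the text
    v = math.sqrt(2.0 * e * V / m_xe)
    print(f"V = {V:4.0f} V -> v <= {v/1000:5.1f} km/s "
          f"(Isp upper bound ~ {v/9.80665:4.0f} s)")
# roughly 15 km/s at 150 V, 21 km/s at 300 V and 34 km/s at 800 V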
Collisions with other particles and walls, as well as plasma instabilities, allow some of the electrons to be freed from the magnetic field, and they drift towards the anode. About 20–30% of the discharge current is an electron current, which does not produce thrust, thus limiting the energetic efficiency of the thruster; the other 70–80% of the current is in the ions. Because the majority of electrons are trapped in the Hall current, they have a long residence time inside the thruster and are able to ionize almost all of the xenon propellant, allowing mass use of 90–99%. The mass use efficiency of the thruster is thus around 90%, while the discharge current efficiency is around 70%, for a combined thruster efficiency of around 63% (= 90% × 70%). Modern Hall thrusters have achieved efficiencies as high as 75% through advanced designs. Compared to chemical rockets, the thrust is very small, on the order of 83 mN for a typical thruster operating at 300 V and 1.5 kW. For comparison, the weight of a coin like the U.S. quarter or a 20-cent euro coin is approximately 60 mN. As with all forms of electrically powered spacecraft propulsion, thrust is limited by available power, efficiency, and specific impulse. However, Hall thrusters operate at the high specific impulses that are typical for electric propulsion. One particular advantage of Hall thrusters, as compared to a gridded ion thruster, is that the generation and acceleration of the ions takes place in a quasi-neutral plasma, so there is no Child-Langmuir charge (space charge) saturated current limitation on the thrust density. This allows much smaller thrusters compared to gridded ion thrusters. Another advantage is that these thrusters can use a wider variety of propellants supplied to the anode, even oxygen, although something easily ionized is needed at the cathode. Propellants Xenon Xenon has been the typical choice of propellant for many electric propulsion systems, including Hall thrusters. Xenon propellant is used because of its high atomic weight and low ionization potential. Xenon is relatively easy to store, and as a gas at spacecraft operating temperatures does not need to be vaporized before usage, unlike metallic propellants such as bismuth. Xenon's high atomic weight means that the ratio of energy expended for ionization per mass unit is low, leading to a more efficient thruster. Krypton Krypton is another choice of propellant for Hall thrusters. Xenon has an ionization potential of 12.1298 eV, while krypton has an ionization potential of 13.996 eV. This means that thrusters utilizing krypton need to expend a slightly higher energy per mole to ionize, which reduces efficiency. Additionally, krypton is a lighter ion, so the unit mass per ionization energy is further reduced compared to xenon. However, xenon can be more than ten times as expensive as krypton per kilogram, making krypton a more economical choice for building out satellite constellations like that of SpaceX's Starlink V1, whose original Hall thrusters were fueled with krypton. Argon SpaceX developed a new thruster that used argon as propellant for their Starlink V2 mini. The new thruster had 2.4 times the thrust and 1.5 times the specific impulse as SpaceX's previous thruster that used krypton. Argon is approximately 100 times less expensive than Krypton and 1000 times less expensive than Xenon. 
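The efficiency arithmetic above and the xenon/krypton comparison can be reproduced in a few lines. In the Python sketch below, the efficiencies are the ones quoted in the text; the comparison metric (first-ionization energy per kilogram of propellant) is only one illustrative figure of merit, not the full trade-off between propellants.

# Combined thruster efficiency and a simple propellant figure of merit:
# first-ionization energy per kilogram (lower means cheaper to ionize per unit mass).
eta_mass = 0.90       # propellant mass-utilization efficiency, from the text
eta_current = 0.70    # discharge current efficiency, from the text
print(f"combined efficiency ~ {eta_mass * eta_current:.0%}")   # ~63%

e = 1.602176634e-19   # elementary charge, C
u = 1.66053907e-27    # atomic mass unit, kg
propellants = {       # (first ionization energy in eV, atomic mass in u)
    "xenon":   (12.1298, 131.293),
    "krypton": (13.996,   83.798),
    "argon":   (15.760,   39.948),
}
for name, (ei_ev, mass_u) in propellants.items():
    j_per_kg = ei_ev * e / (mass_u * u)
    print(f"{name:8s}: {j_per_kg/1e6:6.1f} MJ/kg to singly ionize")
# xenon ~9 MJ/kg, krypton ~16 MJ/kg, argon ~38 MJ/kg: lighter gases cost more
# ionization energy per unit mass, consistent with the discussion above.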
Comparison of noble gasses Variants As well as the Soviet SPT and TAL types mentioned above, there are: Cylindrical Hall thrusters Although conventional (annular) Hall thrusters are efficient in the kilowatt power regime, they become inefficient when scaled to small sizes. This is due to the difficulties associated with holding the performance scaling parameters constant while decreasing the channel size and increasing the applied magnetic field strength. This led to the design of the cylindrical Hall thruster. The cylindrical Hall thruster can be more readily scaled to smaller sizes due to its nonconventional discharge-chamber geometry and associated magnetic field profile. The cylindrical Hall thruster more readily lends itself to miniaturization and low-power operation than a conventional (annular) Hall thruster. The primary reason for cylindrical Hall thrusters is that it is difficult to achieve a regular Hall thruster that operates over a broad envelope from c.1 kW down to c. 100 W while maintaining an efficiency of 45–55%. External discharge Hall thruster Sputtering erosion of discharge channel walls and pole pieces that protect the magnetic circuit causes failure of thruster operation. Therefore, annular and cylindrical Hall thrusters have limited lifetime. Although magnetic shielding has been shown to dramatically reduce discharge channel wall erosion, pole piece erosion is still a concern. As an alternative, an unconventional Hall thruster design called external discharge Hall thruster or external discharge plasma thruster (XPT) has been introduced. The external discharge Hall thruster does not possess any discharge channel walls or pole pieces. Plasma discharge is produced and sustained completely in the open space outside the thruster structure, and thus erosion-free operation is achieved. Applications Hall thrusters have been flying in space since December 1971, when the Soviet Union launched an SPT-50 on a Meteor satellite. Over 240 thrusters have flown in space since that time, with a 100% success rate. Hall thrusters are now routinely flown on commercial LEO and GEO communications satellites, where they are used for orbital insertion and stationkeeping. The first Hall thruster to fly on a western satellite was a Russian D-55 built by TsNIIMASH, on the NRO's STEX spacecraft, launched on October 3, 1998. The solar electric propulsion system of the European Space Agency's SMART-1 spacecraft used a Snecma PPS-1350-G Hall thruster. SMART-1 was a technology demonstration mission that orbited the Moon. This use of the PPS-1350-G, starting on September 28, 2003, was the first use of a Hall thruster outside geosynchronous Earth orbit (GEO). Like most Hall thruster propulsion systems used in commercial applications, the Hall thruster on SMART-1 could be throttled over a range of power, specific impulse, and thrust. It has a discharge power range of 0.46–1.19 kW, a specific impulse of 1,100–1,600 s and thrust of 30–70 mN. Early small satellites of the SpaceX Starlink constellation used krypton-fueled Hall thrusters for position-keeping and deorbiting, while later Starlink satellites used argon-fueled Hall thrusters. Tiangong space station is fitted with Hall-effect thrusters. Tianhe core module is propelled by both chemical thrusters and four ion thrusters, which are used to adjust and maintain the station's orbit. Hall-effect thrusters are created with crewed mission safety in mind with effort to prevent erosion and damage caused by the accelerated ion particles. 
A magnetic field and specially designed ceramic shield was created to repel damaging particles and maintain integrity of the thrusters. According to the Chinese Academy of Sciences, the ion drive used on Tiangong has burned continuously for 8,240 hours without a glitch, indicating their suitability for the Chinese space station's designated 15-year lifespan. This is the world's first Hall thruster on a human-rated mission. The Jet Propulsion Laboratory (JPL) granted exclusive commercial licensing to Apollo Fusion, led by Mike Cassidy, for its Magnetically Shielded Miniature (MaSMi) Hall thruster technology. In January 2021, Apollo Fusion announced they had secured a contract with York Space Systems for an order of its latest iteration named the "Apollo Constellation Engine". The NASA mission to the asteroid Psyche utilizes xenon gas Hall thrusters. The electricity comes from the craft's 75 square meter solar panels. NASA's first Hall thrusters on a human-rated mission will be a combination of 6 kW Hall thrusters provided by Busek and NASA Advanced Electric Propulsion System (AEPS) Hall thrusters. They will serve as the primary propulsion on Maxar's Power and Propulsion Element (PPE) for the Lunar Gateway under NASA's Artemis program. The high specific impulse of Hall thrusters will allow for efficient orbit raising and station keep for the Lunar Gateway's polar near-rectilinear halo orbit. In development The highest power Hall-effect thruster in development (as of 2021) is the University of Michigan's 100 kW X3 Nested Channel Hall Thruster. The thruster is approximately 80 cm in diameter and weighs 230 kg, and has demonstrated a thrust of 5.4 N. Other high power thrusters include NASA's 40 kW Advanced Electric Propulsion System (AEPS), meant to propel large-scale science missions and cargo transportation in deep space. References External links Edgar, Y. (2009). New Dawn for Electric Rockets SITAEL S.p.A. (Italy)—Page presenting Hall effect thruster products & data sheets Snecma SA (France) page on PPS-1350 Hall thruster Electric Propulsion Sub-Systems (PDF) Stationary plasma thrusters (PDF) ESA page on Hall thrusters Apollo Fusion Thruster Magnetic propulsion devices Ion engines Soviet inventions
Hall-effect thruster
[ "Physics", "Chemistry", "Materials_science" ]
4,203
[ "Physical phenomena", "Matter", "Ion engines", "Hall effect", "Electric and magnetic fields in matter", "Electrical phenomena", "Solid state engineering", "Ions" ]
37,852
https://en.wikipedia.org/wiki/Fusion%20rocket
A fusion rocket is a theoretical design for a rocket driven by fusion propulsion that could provide efficient and sustained acceleration in space without the need to carry a large fuel supply. The design requires fusion power technology beyond current capabilities, and much larger and more complex rockets. Fusion nuclear pulse propulsion is one approach to using nuclear fusion energy to provide propulsion. Fusion's main advantage is its very high specific impulse, while its main disadvantage is the (likely) large mass of the reactor. A fusion rocket may produce less radiation than a fission rocket, reducing the shielding mass needed. The simplest way of building a fusion rocket is to use hydrogen bombs as proposed in Project Orion, but such a spacecraft would be massive and the Partial Nuclear Test Ban Treaty prohibits the use of such bombs. For that reason bomb-based rockets would likely be limited to operating only in space. An alternate approach uses electrical (e.g. ion) propulsion with electric power generated by fusion instead of direct thrust. Electricity generation vs. direct thrust Spacecraft propulsion methods such as ion thrusters require electric power to run, but are highly efficient. In some cases their thrust is limited by the amount of power that can be generated (for example, a mass driver). An electric generator running on fusion power could drive such a ship. One disadvantage is that conventional electricity production requires a low-temperature energy sink, which is difficult (i.e. heavy) in a spacecraft. Direct conversion of the kinetic energy of fusion products into electricity mitigates this problem. One attractive possibility is to direct the fusion exhaust out the back of the rocket to provide thrust without the intermediate production of electricity. This would be easier with some confinement schemes (e.g. magnetic mirrors) than with others (e.g. tokamaks). It is also more attractive for "advanced fuels" (see aneutronic fusion). Helium-3 propulsion would use the fusion of helium-3 atoms as a power source. Helium-3, an isotope of helium with two protons and one neutron, could be fused with deuterium in a reactor. The resulting energy release could expel propellant out the back of the spacecraft. Helium-3 is proposed as a power source for spacecraft mainly because of its lunar abundance. Scientists estimate that 1 million tons of accessible helium-3 are present on the moon. Only 20% of the power produced by the D-T reaction could be used this way; while the other 80% is released as neutrons which, because they cannot be directed by magnetic fields or solid walls, would be difficult to direct towards thrust, and may in turn require shielding. Helium-3 is produced via beta decay of tritium, which can be produced from deuterium, lithium, or boron. Even if a self-sustaining fusion reaction cannot be produced, it might be possible to use fusion to boost the efficiency of another propulsion system, such as a VASIMR engine. Confinement alternatives Magnetic To sustain a fusion reaction, the plasma must be confined. The most widely studied configuration for terrestrial fusion is the tokamak, a form of magnetic confinement fusion. Currently tokamaks weigh a great deal, so the thrust to weight ratio would seem unacceptable. NASA's Glenn Research Center proposed in 2001 a small aspect ratio spherical torus reactor for its "Discovery II" conceptual vehicle design. 
"Discovery II" could deliver a crewed 172 metric tons payload to Jupiter in 118 days (or 212 days to Saturn) using 861 metric tons of hydrogen propellant, plus 11 metric tons of Helium-3-Deuterium (D-He3) fusion fuel. The hydrogen is heated by the fusion plasma debris to increase thrust, at a cost of reduced exhaust velocity (348–463 km/s) and hence increased propellant mass. Inertial The main alternative to magnetic confinement is inertial confinement fusion (ICF), such as that proposed by Project Daedalus. A small pellet of fusion fuel (with a diameter of a couple of millimeters) would be ignited by an electron beam or a laser. To produce direct thrust, a magnetic field forms the pusher plate. In principle, the Helium-3-Deuterium reaction or an aneutronic fusion reaction could be used to maximize the energy in charged particles and to minimize radiation, but it is highly questionable whether using these reactions is technically feasible. Both the detailed design studies in the 1970s, the Orion drive and Project Daedalus, used inertial confinement. In the 1980s, Lawrence Livermore National Laboratory and NASA studied an ICF-powered "Vehicle for Interplanetary Transport Applications" (VISTA). The conical VISTA spacecraft could deliver a 100-tonne payload to Mars orbit and return to Earth in 130 days, or to Jupiter orbit and back in 403 days. 41 tonnes of deuterium/tritium (D-T) fusion fuel would be required, plus 4,124 tonnes of hydrogen expellant. The exhaust velocity would be 157 km/s. Magnetized target Magnetized target fusion (MTF) is a relatively new approach that combines the best features of the more widely studied magnetic confinement fusion (i.e. good energy confinement) and inertial confinement fusion (i.e. efficient compression heating and wall free containment of the fusing plasma) approaches. Like the magnetic approach, the fusion fuel is confined at low density by magnetic fields while it is heated into a plasma, but like the inertial confinement approach, fusion is initiated by rapidly squeezing the target to dramatically increase fuel density, and thus temperature. MTF uses "plasma guns" (i.e. electromagnetic acceleration techniques) instead of powerful lasers, leading to low cost and low weight compact reactors. The NASA/MSFC Human Outer Planets Exploration (HOPE) group has investigated a crewed MTF propulsion spacecraft capable of delivering a 164-tonne payload to Jupiter's moon Callisto using 106-165 metric tons of propellant (hydrogen plus either D-T or D-He3 fusion fuel) in 249–330 days. This design would thus be considerably smaller and more fuel efficient due to its higher exhaust velocity (700 km/s) than the previously mentioned "Discovery II", "VISTA" concepts. Inertial electrostatic Another popular confinement concept for fusion rockets is inertial electrostatic confinement (IEC), such as in the Farnsworth-Hirsch Fusor or the Polywell variation under development by Energy-Matter Conversion Corporation (EMC2). The University of Illinois has defined a 500-tonne "Fusion Ship II" concept capable of delivering a 100,000 kg crewed payload to Jupiter's moon Europa in 210 days. Fusion Ship II utilizes ion rocket thrusters (343 km/s exhaust velocity) powered by ten D-He3 IEC fusion reactors. The concept would need 300 tonnes of argon propellant for a 1-year round trip to the Jupiter system. Robert Bussard published a series of technical articles discussing its application to spaceflight throughout the 1990s. 
His work was popularised by an article in the Analog Science Fiction and Fact publication, where Tom Ligon described how the fusor would make for a highly effective fusion rocket. Antimatter A still more speculative concept is antimatter-catalyzed nuclear pulse propulsion, which would use antimatter to catalyze a fission and fusion reaction, allowing much smaller fusion explosions to be created. During the 1990s an abortive design effort was conducted at Penn State University under the name AIMStar. The project would require more antimatter than can currently be produced. In addition, some technical hurdles need to be surpassed before it would be feasible. Development projects MSNW Magneto-Inertial Fusion Driven Rocket See also Helium-3 Nuclear propulsion Rocket propulsion technologies (disambiguation) References External links Rocket propulsion Nuclear spacecraft propulsion Rocket Hypothetical technology
Fusion rocket
[ "Physics", "Chemistry" ]
1,605
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
37,854
https://en.wikipedia.org/wiki/Antimatter%20rocket
An antimatter rocket is a proposed class of rockets that use antimatter as their power source. There are several designs that attempt to accomplish this goal. The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. Methods Antimatter rockets can be divided into three types of application: those that directly use the products of antimatter annihilation for propulsion, those that heat a working fluid or an intermediate material which is then used for propulsion, and those that heat a working fluid or an intermediate material to generate electricity for some form of electric spacecraft propulsion system. The propulsion concepts that employ these mechanisms generally fall into four categories: solid core, gaseous core, plasma core, and beamed core configurations. The alternatives to direct antimatter annihilation propulsion offer the possibility of feasible vehicles with, in some cases, vastly smaller amounts of antimatter but require a lot more matter propellant. Then there are hybrid solutions using antimatter to catalyze fission/fusion reactions for propulsion. Pure antimatter rocket: direct use of reaction products Antiproton annihilation reactions produce charged and uncharged pions, in addition to neutrinos and gamma rays. The charged pions can be channelled by a magnetic nozzle, producing thrust. This type of antimatter rocket is a pion rocket or beamed core configuration. It is not perfectly efficient; energy is lost as the rest mass of the charged (22.3%) and uncharged pions (14.38%), lost as the kinetic energy of the uncharged pions (which can't be deflected for thrust); and lost as neutrinos and gamma rays (see antimatter as fuel). Positron annihilation has also been proposed for rocketry. Annihilation of positrons produces only gamma rays. Early proposals for this type of rocket, such as those developed by Eugen Sänger, assumed the use of some material that could reflect gamma rays, used as a light sail or parabolic shield to derive thrust from the annihilation reaction, but no known form of matter (consisting of atoms or ions) interacts with gamma rays in a manner that would enable specular reflection. The momentum of gamma rays can, however, be partially transferred to matter by Compton scattering. One method to reach relativistic velocities uses a matter-antimatter GeV gamma ray laser photon rocket made possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft. A new annihilation process has purportedly been developed by researchers from the University of Gothenburg, Sweden. Several annihilation reactors have been constructed in the past years which attempted to convert hydrogen or deuterium into relativistic particles through laser annihilation. The technology was explored by research groups led by Prof. Leif Holmlid and Sindre Zeiner-Gundersen, and a third relativistic particle reactor is currently being built at the University of Iceland. In theory, emitted particles from hydrogen annihilation processes could reach 0.94c and can be used in space propulsion. However the veracity of Holmlid's research is under dispute and no successful implementations have been peer reviewed or replicated. 
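The energy budget of the beamed-core concept described above can be illustrated with a back-of-the-envelope calculation: annihilating a mass m of antiprotons with an equal mass of protons releases 2mc², of which only part (the charged-pion kinetic energy remaining after the rest-mass, neutral-pion and neutrino losses listed above) is available for thrust. In the Python sketch below, the "usable fraction" is a rough illustrative assumption, not a design value from the text.

# Energy released by matter-antimatter annihilation and a rough usable fraction
# for a beamed-core (charged-pion) rocket.  The usable fraction is an assumption.
c = 299_792_458.0            # speed of light, m/s
m_antimatter = 1e-3          # kg of antiprotons (assumed), annihilated with 1 g of matter

E_total = 2.0 * m_antimatter * c**2          # ~1.8e14 J per gram of antimatter
f_usable = 0.40                              # assumed fraction left after rest-mass,
                                             # neutral-pion and neutrino losses
E_thrust = f_usable * E_total

print(f"total annihilation energy: {E_total:.2e} J")
print(f"roughly usable for thrust: {E_thrust:.2e} J "
      f"(~{E_thrust/4.184e12:.0f} kilotons TNT equivalent)")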
Thermal antimatter rocket: heating of a propellant This type of antimatter rocket is termed a thermal antimatter rocket as the energy or heat from the annihilation is harnessed to create an exhaust from non-exotic material or propellant. The solid core concept uses antiprotons to heat a solid, high-atomic weight (Z), refractory metal core. Propellant is pumped into the hot core and expanded through a nozzle to generate thrust. The performance of this concept is roughly equivalent to that of the nuclear thermal rocket ( ~ 103 sec) due to temperature limitations of the solid. However, the antimatter energy conversion and heating efficiencies are typically high due to the short mean path between collisions with core atoms (efficiency ~ 85%). Several methods for the liquid-propellant thermal antimatter engine using the gamma rays produced by antiproton or positron annihilation have been proposed. These methods resemble those proposed for nuclear thermal rockets. One proposed method is to use positron annihilation gamma rays to heat a solid engine core. Hydrogen gas is ducted through this core, heated, and expelled from a rocket nozzle. A second proposed engine type uses positron annihilation within a solid lead pellet or within compressed xenon gas to produce a cloud of hot gas, which heats a surrounding layer of gaseous hydrogen. Direct heating of the hydrogen by gamma rays was considered impractical, due to the difficulty of compressing enough of it within an engine of reasonable size to absorb the gamma rays. A third proposed engine type uses annihilation gamma rays to heat an ablative sail, with the ablated material providing thrust. As with nuclear thermal rockets, the specific impulse achievable by these methods is limited by materials considerations, typically being in the range of 1000–2000 seconds. The gaseous core system substitutes the low-melting point solid with a high temperature gas (i.e. tungsten gas/plasma), thus permitting higher operational temperatures and performance ( ~ 2 × 103 sec). However, the longer mean free path for thermalization and absorption results in much lower energy conversion efficiencies ( ~ 35%). The plasma core allows the gas to ionize and operate at even higher effective temperatures. Heat loss is suppressed by magnetic confinement in the reaction chamber and nozzle. Although performance is extremely high ( ~ 104-105 sec), the long mean free path results in very low energy utilization ( ~ 10%) Antimatter power generation The idea of using antimatter to power an electric space drive has also been proposed. These proposed designs are typically similar to those suggested for nuclear electric rockets. Antimatter annihilations are used to directly or indirectly heat a working fluid, as in a nuclear thermal rocket, but the fluid is used to generate electricity, which is then used to power some form of electric space propulsion system. The resulting system shares many of the characteristics of other charged particle/electric propulsion proposals, that typically being high specific impulse and low thrust (see also antimatter power generation). Catalyzed fission/fusion or spiked fusion This is a hybrid approach in which antiprotons are used to catalyze a fission/fusion reaction or to "spike" the propulsion of a fusion rocket or any similar applications. The antiproton-driven Inertial confinement fusion (ICF) Rocket concept uses pellets for the D-T reaction. 
The pellet consists of a hemisphere of fissionable material such as U235 with a hole through which a pulse of antiprotons and positrons is injected. It is surrounded by a hemisphere of fusion fuel, for example deuterium-tritium, or lithium deuteride. Antiproton annihilation occurs at the surface of the hemisphere, which ionizes the fuel. These ions heat the core of the pellet to fusion temperatures. The antiproton-driven Magnetically Insulated Inertial Confinement Fusion Propulsion (MICF) concept relies on self-generated magnetic field which insulates the plasma from the metallic shell that contains it during the burn. The lifetime of the plasma was estimated to be two orders of magnitude greater than implosion inertial fusion, which corresponds to a longer burn time, and hence, greater gain. The antimatter-driven P-B11 concept uses antiprotons to ignite the P-B11 reactions in an MICF scheme. Excessive radiation losses are a major obstacle to ignition and require modifying the particle density, and plasma temperature to increase the gain. It was concluded that it is entirely feasible that this system could achieve Isp~105s. A different approach was envisioned for AIMStar in which small fusion fuel droplets would be injected into a cloud of antiprotons confined in a very small volume within a reaction Penning trap. Annihilation takes place on the surface of the antiproton cloud, peeling back 0.5% of the cloud. The power density released is roughly comparable to a 1 kJ, 1 ns laser depositing its energy over a 200 μm ICF target. The ICAN-II project employs the antiproton catalyzed microfission (ACMF) concept which uses pellets with a molar ratio of 9:1 of D-T:U235 for nuclear pulse propulsion. Difficulties with antimatter rockets The chief practical difficulties with antimatter rockets are the problems of creating antimatter and storing it. Creating antimatter requires input of vast amounts of energy, at least equivalent to the rest energy of the created particle/antiparticle pairs, and typically (for antiproton production) tens of thousands to millions of times more. Most storage schemes proposed for interstellar craft require the production of frozen pellets of antihydrogen. This requires cooling of antiprotons, binding to positrons, and capture of the resulting antihydrogen atoms - tasks which have, , been performed only for small numbers of individual atoms. Storage of antimatter is typically done by trapping electrically charged frozen antihydrogen pellets in Penning or Paul traps. There is no theoretical barrier to these tasks being performed on the scale required to fuel an antimatter rocket. However, they are expected to be extremely (and perhaps prohibitively) expensive due to current production abilities being only able to produce small numbers of atoms, a scale approximately 1023 times smaller than needed for a 10-gram trip to Mars. Generally, the energy from antiproton annihilation is deposited over such a large region that it cannot efficiently drive nuclear capsules. Antiproton-induced fission and self-generated magnetic fields may greatly enhance energy localization and efficient use of annihilation energy. A secondary problem is the extraction of useful energy or momentum from the products of antimatter annihilation, which are primarily in the form of extremely energetic ionizing radiation. The antimatter mechanisms proposed to date have for the most part provided plausible mechanisms for harnessing energy from these annihilation products. 
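The scale gap noted above, between trapping "small numbers of individual atoms" and the roughly 10^23 times larger quantity needed for a 10-gram mission, follows directly from Avogadro's number. A short Python sketch; the production-overhead factor is a stated assumption reflecting the "tens of thousands to millions of times more" energy figure quoted above.

# How many antihydrogen atoms are in 10 grams, and a lower bound on the
# energy needed to create them (rest energy only; real production is far costlier).
N_A = 6.02214076e23        # Avogadro's number, 1/mol
M_H = 1.008                # g/mol for (anti)hydrogen
c = 299_792_458.0

mass_g = 10.0
atoms_needed = mass_g / M_H * N_A            # ~6e24 atoms
E_rest = (mass_g / 1000.0) * c**2            # rest energy of 10 g, ~9e14 J
overhead = 1e4                               # assumed production inefficiency factor

print(f"antihydrogen atoms needed: {atoms_needed:.1e}")
print(f"rest energy of 10 g: {E_rest:.1e} J; "
      f"production energy >= {overhead * E_rest:.1e} J (assumed overhead)")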
The classic rocket equation with its "wet" mass ()(with propellant mass fraction) to "dry" mass ()(with payload) fraction (), the velocity change () and specific impulse () no longer holds due to the mass losses occurring in antimatter annihilation. Another general problem with high powered propulsion is excess heat or waste heat, and as with antimatter-matter annihilation also includes extreme radiation. A proton-antiproton annihilation propulsion system transforms 39% of the propellant mass into an intense high-energy flux of gamma radiation. The gamma rays and the high-energy charged pions will cause heating and radiation damage if they are not shielded against. Unlike neutrons, they will not cause the exposed material to become radioactive by transmutation of the nuclei. The components needing shielding are the crew, the electronics, the cryogenic tankage, and the magnetic coils for magnetically assisted rockets. Two types of shielding are needed: radiation protection and thermal protection (different from Heat shield or thermal insulation). Finally, relativistic considerations have to be taken into account. As the by products of annihilation move at relativistic velocities the rest mass changes according to relativistic mass–energy. For example, the total mass–energy content of the neutral pion is converted into gammas, not just its rest mass. It is necessary to use a relativistic rocket equation that takes into account the relativistic effects of both the vehicle and propellant exhaust (charged pions) moving near the speed of light. These two modifications to the two rocket equations result in a mass ratio () for a given () and () that is much higher for a relativistic antimatter rocket than for either a classical or relativistic "conventional" rocket. Modified relativistic rocket equation The loss of mass specific to antimatter annihilation requires a modification of the relativistic rocket equation given as where is the speed of light, and is the specific impulse (i.e. =0.69). The derivative form of the equation is where is the non-relativistic (rest) mass of the rocket ship, and is the fraction of the original (on board) propellant mass (non-relativistic) remaining after annihilation (i.e., =0.22 for the charged pions). is difficult to integrate analytically. If it is assumed that , such that then the resulting equation is can be integrated and the integral evaluated for and , and initial and final velocities ( and ). The resulting relativistic rocket equation with loss of propellant is Other general issues The cosmic background hard radiation will ionize the rocket's hull over time and poses a health threat. Also, gas plasma interactions may cause space charge. The major interaction of concern is differential charging of various parts of a spacecraft, leading to high electric fields and arcing between spacecraft components. This can be resolved with well placed plasma contactor. However, there is no solution yet for when plasma contactors are turned off to allow maintenance work on the hull. Long term space flight at interstellar velocities causes erosion of the rocket's hull due to collision with particles, gas, dust and micrometeorites. At 0.2 for a 6 light year distance, erosion is estimated to be in the order of about 30 kg/m2 or about 1 cm of aluminum shielding. See also Nuclear photonic rocket References Antimatter Rocket propulsion
Antimatter rocket
[ "Physics" ]
2,908
[ "Antimatter", "Matter" ]
37,864
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon%20sampling%20theorem
The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing. The theorem states that the sample rate must be at least twice the bandwidth of the signal to avoid aliasing. In practice, it is used to select band-limiting filters to keep aliasing below an acceptable amount when an analog signal is sampled or when sample rates are changed within a digital signal processing function. The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth. Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band-limited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples. Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see below and compressed sensing). In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions. The fidelity of these reconstructions can be verified and quantified utilizing Bochner's theorem. The name Nyquist–Shannon sampling theorem honours Harry Nyquist and Claude Shannon, but the theorem was also previously discovered by E. T. Whittaker (published in 1915), and Shannon cited Whittaker's paper in his work. The theorem is thus also known by the names Whittaker–Shannon sampling theorem, Whittaker–Shannon, and Whittaker–Nyquist–Shannon, and may also be referred to as the cardinal theorem of interpolation. Introduction Sampling is a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space). Shannon's version of the theorem states that if a function x(t) contains no frequencies higher than B hertz, then it is completely determined by giving its ordinates at a series of points spaced less than 1/(2B) seconds apart. A sufficient sample rate is therefore anything larger than 2B samples per second. Equivalently, for a given sample rate fs, perfect reconstruction is guaranteed possible for a bandlimit B < fs/2. When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing. Modern statements of the theorem are sometimes careful to explicitly state that x(t) must contain no sinusoidal component at exactly frequency B, or that B must be strictly less than one half the sample rate. The threshold 2B is called the Nyquist rate and is an attribute of the continuous-time input x(t) to be sampled. 
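A minimal numerical sketch of the aliasing threshold just described; the 7 Hz tone and the two sample rates are invented purely for illustration and are not taken from the article.

```python
import numpy as np

# Sample a 7 Hz sinusoid at two rates.  Its Nyquist rate is 2*B = 14 samples/s.
f_signal = 7.0

def sampled(freq, fs, n=8):
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t)

fast = sampled(f_signal, fs=20.0)   # 20 > 14: satisfies the theorem
slow = sampled(f_signal, fs=10.0)   # 10 < 14: aliasing occurs

# At fs = 10 Hz the 7 Hz tone is indistinguishable from its 3 Hz alias
# (|7 - 10| = 3): the two sample sequences agree exactly once the sign flip
# from the folding is included, so no interpolator can tell them apart.
alias = -sampled(3.0, fs=10.0)
print(np.allclose(slow, alias))     # True
```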
The sample rate fs must exceed the Nyquist rate 2B for the samples to suffice to represent x(t). The threshold fs/2 is called the Nyquist frequency and is an attribute of the sampling equipment. All meaningful frequency components of the properly sampled x(t) exist below the Nyquist frequency. The condition described by these inequalities is called the Nyquist criterion, or sometimes the Raabe condition. The theorem is also applicable to functions of other domains, such as space, in the case of a digitized image. The only change, in the case of other domains, is the units of measure attributed to t and fs. The symbol T = 1/fs is customarily used to represent the interval between samples and is called the sample period or sampling interval. The samples of function x(t) are commonly denoted by x[n] = T·x(nT) (alternatively x_n in older signal processing literature), for all integer values of n. The multiplier T is a result of the transition from continuous time to discrete time (see Discrete-time Fourier transform#Relation to Fourier Transform), and it preserves the energy of the signal as T varies. A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample, nT, with the amplitude of the sinc function scaled to the sample value, x(nT). Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method uses the Dirac comb and proceeds by convolving one sinc function with a series of Dirac delta pulses, weighted by the sample values. Neither method is numerically practical. Instead, some type of approximation of the sinc functions, finite in length, is used. The imperfections attributable to the approximation are known as interpolation error. Practical digital-to-analog converters produce neither scaled and delayed sinc functions, nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled and delayed rectangular pulses (the zero-order hold), usually followed by a lowpass filter (called an "anti-imaging filter") to remove spurious high-frequency replicas (images) of the original baseband signal. Aliasing When x(t) is a function with a Fourier transform X(f) = ∫ x(t) e^{-i2πft} dt, the samples x(nT) of x(t) are sufficient to create a periodic summation of X(f) (see Discrete-time Fourier transform#Relation to Fourier Transform): X_{1/T}(f) = Σ_k X(f − k/T) = Σ_n x[n] e^{-i2πfnT}, which is a periodic function and its equivalent representation as a Fourier series, whose coefficients are x[n]. This function is also known as the discrete-time Fourier transform (DTFT) of the sample sequence. As depicted, copies of X(f) are shifted by multiples of the sampling rate fs = 1/T and combined by addition. For a band-limited function and sufficiently large fs, it is possible for the copies to remain distinct from each other. But if the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above fs/2 is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. In such cases, the customary interpolation techniques produce the alias, rather than the original component. When the sample-rate is pre-determined by other considerations (such as an industry standard), x(t) is usually filtered to reduce its high frequencies to acceptable levels before it is sampled. The type of filter required is a lowpass filter, and in this application it is called an anti-aliasing filter. 
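The sinc-summation interpolation described above can be sketched directly. This is a finite-length approximation (hence some interpolation error from the truncated record), and the 3 Hz tone and 10 samples/s rate are illustrative values rather than figures from the article.

```python
import numpy as np

def sinc_interpolate(samples, fs, t):
    """Whittaker–Shannon interpolation: a sum of sinc functions centered at the
    sample instants n/fs, each scaled by the sample value (finite-length
    approximation, so the edges of the record contribute interpolation error)."""
    n = np.arange(len(samples))
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(fs*t - n) is the required kernel.
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

# Illustrative bandlimited signal: a 3 Hz tone sampled at 10 samples/s (> 2*3).
fs = 10.0
n = np.arange(100)
samples = np.sin(2 * np.pi * 3.0 * n / fs)

t = np.linspace(2.0, 8.0, 601)       # evaluate away from the ends of the record
reconstructed = sinc_interpolate(samples, fs, t)
error = np.max(np.abs(reconstructed - np.sin(2 * np.pi * 3.0 * t)))
print(error)                         # on the order of 1e-2: finite-record truncation error
```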
Derivation as a special case of Poisson summation When there is no overlap of the copies (also known as "images") of X(f), the k = 0 term of X_{1/T}(f) can be recovered by the product X(f) = H(f)·X_{1/T}(f), where H(f) equals 1 for |f| < B and 0 for |f| > fs − B. The sampling theorem is proved since X(f) uniquely determines x(t). All that remains is to derive the formula for reconstruction. H(f) need not be precisely defined in the region [B, fs − B] because X_{1/T}(f) is zero in that region. However, the worst case is when B = fs/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is H(f) = rect(fT), where rect is the rectangular function. Therefore X(f) = rect(fT)·X_{1/T}(f) = rect(fT)·Σ_n x[n] e^{-i2πnTf} (from the Fourier series above) = Σ_n x(nT)·T rect(fT) e^{-i2πnTf}. The inverse transform of both sides produces the Whittaker–Shannon interpolation formula x(t) = Σ_n x(nT) sinc((t − nT)/T), which shows how the samples x(nT) can be combined to reconstruct x(t). Larger-than-necessary values of fs (smaller values of T), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which H(f) is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation. Theoretically, the interpolation formula can be implemented as a low-pass filter, whose impulse response is sinc(t/T) and whose input is Σ_n x(nT)·δ(t − nT), which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DAC) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error. Shannon's original proof Poisson shows that the Fourier series above produces the periodic summation of X(f), regardless of fs and B. Shannon, however, only derives the series coefficients for the case fs = 2B. Virtually quoting Shannon's original paper: Let F(ω) be the spectrum of f(t). Then f(t) = (1/2π) ∫ F(ω) e^{iωt} dω = (1/2π) ∫ from −2πW to 2πW of F(ω) e^{iωt} dω, because F(ω) is assumed to be zero outside the band W. If we let t = n/(2W), where n is any positive or negative integer, we obtain f(n/(2W)) = (1/2π) ∫ from −2πW to 2πW of F(ω) e^{iωn/(2W)} dω. On the left are values of f(t) at the sampling points. The integral on the right will be recognized as essentially the nth coefficient in a Fourier-series expansion of the function F(ω), taking the interval −W to W as a fundamental period. This means that the values of the samples f(n/(2W)) determine the Fourier coefficients in the series expansion of F(ω). Thus they determine F(ω), since F(ω) is zero for frequencies greater than W, and for lower frequencies F(ω) is determined if its Fourier coefficients are determined. But F(ω) determines the original function f(t) completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function f(t) completely. Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, as the Fourier pair relationship between rect (the rectangular function) and sinc functions was well known by that time. As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes. Notes Application to multivariable signals and images The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. 
Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely—one for the row, and one for the column. Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors—red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, CIELAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain. Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high frequencies (in other words, the distance between the stripes is small), can cause aliasing of the shirt when it is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The "solution" to higher sampling in the spatial domain for this case would be to move closer to the shirt, use a higher resolution sensor, or to optically blur the image before acquiring it with the sensor using an optical low-pass filter. Another example is shown here in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a low-pass filter first and then downsamples the image to result in a smaller image that does not exhibit the moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results. The sampling theorem applies to camera systems, where the scene and lens constitute an analog spatial signal source, and the image sensor is a spatial sampling device. Each of these components is characterized by a modulation transfer function (MTF), representing the precise resolution (spatial bandwidth) available in that component. Effects of aliasing or blurring can occur when the lens MTF and sensor MTF are mismatched. When the optical image which is sampled by the sensor device contains higher spatial frequencies than the sensor, the under sampling acts as a low-pass filter to reduce or eliminate aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient spatial anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) may be included in a camera system to reduce the MTF of the optical image. Instead of requiring an optical filter, the graphics processing unit of smartphone cameras performs digital signal processing to remove aliasing with a digital filter. Digital filters also apply sharpening to amplify the contrast from the lens at high spatial frequencies, which otherwise falls off rapidly at diffraction limits. The sampling theorem also applies to post-processing digital images, such as to up or down sampling. Effects of aliasing, blurring, and sharpening may be adjusted with digital filtering implemented in software, which necessarily follows the theoretical principles. 
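A minimal sketch of the "low-pass filter, then downsample" procedure described above, using a Gaussian blur as the anti-aliasing filter. The blur is one common practical choice rather than something the article prescribes, and the striped test pattern is invented for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(image, factor, prefilter=True):
    """Reduce resolution by an integer factor.  With prefilter=True the image is
    low-pass filtered first (here a Gaussian blur), which suppresses spatial
    frequencies above the new, lower Nyquist frequency and so avoids moiré."""
    if prefilter:
        # A sigma on the order of the decimation factor removes most energy
        # above the new Nyquist frequency; it is a pragmatic choice, not a
        # brick-wall filter.
        image = gaussian_filter(image, sigma=factor / 2.0)
    return image[::factor, ::factor]

# Illustrative "striped shirt": a fine diagonal grating that will alias
# if decimated without filtering.
y, x = np.mgrid[0:256, 0:256]
stripes = np.sin(2 * np.pi * (x + y) / 6.0)

aliased = downsample(stripes, 4, prefilter=False)  # moiré-like artifacts remain
clean = downsample(stripes, 4, prefilter=True)     # stripes are averaged away first
print(aliased.std(), clean.std())                  # the filtered version has far less spurious contrast
```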
Critical frequency To illustrate the necessity of fs > 2B, consider the family of sinusoids generated by different values of θ in this formula: x(t) = cos(2πBt + θ)/cos(θ) = cos(2πBt) − sin(2πBt)·tan(θ). With fs = 2B, or equivalently T = 1/(2B), the samples are given by: x(nT) = cos(πn) − sin(πn)·tan(θ) = (−1)^n, independent of θ. That sort of ambiguity is the reason for the strict inequality of the sampling theorem's condition. Sampling of non-baseband signals As discussed by Shannon, a similar result is true if the band does not start at zero frequency but at some higher value. That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval as opposed to its highest frequency component. See sampling for more details and examples. For example, in order to sample FM radio signals in the frequency range of 100–102 MHz, it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval). A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies ((N/2)·fs, ((N + 1)/2)·fs) for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0. The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses: (N + 1)·sinc((N + 1)t/T) − N·sinc(Nt/T). Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost. Nonuniform sampling The sampling theory of Shannon can be generalized for the case of nonuniform sampling, that is, samples not taken equally spaced in time. The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition. Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, it is not a necessary condition for perfect reconstruction. The general theory for non-baseband and nonuniform samples was developed in 1967 by Henry Landau. He proved that the average sampling rate (uniform or otherwise) must be twice the occupied bandwidth of the signal, assuming it is a priori known what portion of the spectrum was occupied. In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth is known but the actual occupied portion of the spectrum is unknown. In the 2000s, a complete theory was developed (see the section Sampling below the Nyquist rate under additional restrictions below) using compressed sensing. In particular, the theory, using signal processing language, is described in a 2009 paper by Mishali and Eldar. They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the rate required by the Nyquist criterion for the occupied bandwidth; in other words, you must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability. 
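The FM-radio example above can be checked with a standard bandpass-sampling condition. The algebraic form used below is a common textbook formulation assumed for the sketch, not a formula quoted from the article.

```python
import math

# Valid uniform sample rates for bandpass sampling of the 100-102 MHz band:
#     2*f_high / n  <=  fs  <=  2*f_low / (n - 1),
# for integers n from 1 up to floor(f_high / bandwidth).
f_low, f_high = 100e6, 102e6
n_max = math.floor(f_high / (f_high - f_low))   # = 51

for n in range(1, n_max + 1):
    lo = 2 * f_high / n
    hi = 2 * f_low / (n - 1) if n > 1 else float("inf")
    if lo <= hi:
        print(f"n={n:2d}: {lo/1e6:8.3f} MHz <= fs <= {hi/1e6:8.3f} MHz")

# For n = 51 the window collapses to the single value fs = 4 MHz, i.e. twice
# the 2 MHz width of the band, matching the example in the text.
```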
Sampling below the Nyquist rate under additional restrictions The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition. A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the effective bandwidth EB) but whose frequency locations are unknown, rather than all together in a single band, so that the passband technique does not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2B. Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than 2EB. With this approach, reconstruction is no longer given by a formula, but instead by the solution to a linear optimization program. Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimal lossy compression. This setting is relevant in cases where the joint effect of sampling and quantization is to be considered, and can provide a lower bound for the minimal reconstruction error that can be attained in sampling and quantizing a random signal. For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimal quantization. Historical background The sampling theorem was implied by the work of Harry Nyquist in 1928, in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step-response sine integral; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English). The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon. The mathematician E. T. Whittaker published similar results in 1915, J. M. Whittaker in 1935, and Gabor in 1946 ("Theory of communication"). In 1948 and 1949, Claude E. Shannon published the two revolutionary articles in which he founded information theory. In Shannon's "A Mathematical Theory of Communication", the sampling theorem is formulated as "Theorem 13": Let f(t) contain no frequencies over W. 
Then f(t) = Σ_n X_n · sin(π(2Wt − n)) / (π(2Wt − n)), where X_n = f(n/(2W)). It was not until these articles were published that the theorem known as "Shannon's sampling theorem" became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art. A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature of communication theory". Despite his sampling theorem being published at the end of the 1940s, Shannon had derived his sampling theorem as early as 1940. Other discoverers Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example, by Jerri and by Lüke. For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering mentions several other discoverers and names in a paragraph and pair of footnotes. In Russian literature the theorem is known as Kotelnikov's theorem, named after Vladimir Kotelnikov, who discovered it in 1933. Why Nyquist? Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs, and appeared again in 1963, and not capitalized in 1965. It had been called the Shannon Sampling Theorem as early as 1954, but also just the sampling theorem by several other books in the early 1950s. In 1958, Blackman and Tukey cited Nyquist's 1928 article as a reference for the sampling theorem of information theory, even though that article does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes entries under Nyquist's name, yet exactly what "Nyquist's result" they are referring to remains mysterious. When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering, "he referred to the critical sampling interval as the Nyquist interval corresponding to the band W in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy". This explains Nyquist's name on the critical interval, but not on the theorem. Similarly, Nyquist's name was attached to the Nyquist rate in 1953 by Harold S. Black. According to the Oxford English Dictionary, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate. See also 44,100 Hz, a customary rate used to sample audible frequencies, based on the limits of human hearing and the sampling theorem Balian–Low theorem, a similar theoretical lower bound on sampling rates, but which applies to time–frequency transforms Cheung–Marks theorem, which specifies conditions where restoration of a signal by the sampling theorem can become ill-posed Shannon–Hartley theorem Nyquist ISI criterion Reconstruction from zero crossings Zero-order hold Dirac comb Notes References Further reading
External links Learning by Simulations Interactive simulation of the effects of inadequate sampling Interactive presentation of the sampling and reconstruction in a web-demo Institute of Telecommunications, University of Stuttgart Undersampling and an application of it Sampling Theory For Digital Audio Journal devoted to Sampling Theory Digital signal processing Information theory Theorems in Fourier analysis Articles containing proofs Mathematical theorems in theoretical computer science Claude Shannon Telecommunication theory Data compression
Nyquist–Shannon sampling theorem
[ "Mathematics", "Technology", "Engineering" ]
4,755
[ "Mathematical theorems", "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Articles containing proofs", "Mathematical problems", "Mathematical theorems in theoretical computer science" ]
37,910
https://en.wikipedia.org/wiki/Spacecraft
A spacecraft is a vehicle that is designed to fly and operate in outer space. Spacecraft are used for a variety of purposes, including communications, Earth observation, meteorology, navigation, space colonization, planetary exploration, and transportation of humans and cargo. Except for single-stage-to-orbit vehicles, spacecraft cannot get into space on their own and require a launch vehicle (carrier rocket). On a sub-orbital spaceflight, a space vehicle enters space and then returns to the surface without having gained sufficient energy or velocity to make a full Earth orbit. For orbital spaceflights, spacecraft enter closed orbits around the Earth or around other celestial bodies. Spacecraft used for human spaceflight carry people on board as crew or passengers from start or on orbit (space stations) only, whereas those used for robotic space missions operate either autonomously or telerobotically. Robotic spacecraft used to support scientific research are space probes. Robotic spacecraft that remain in orbit around a planetary body are artificial satellites. To date, only a handful of interstellar probes, such as Pioneer 10 and 11, Voyager 1 and 2, and New Horizons, are on trajectories that leave the Solar System. Orbital spacecraft may be recoverable or not. Most are not. Recoverable spacecraft may be subdivided by a method of reentry to Earth into non-winged space capsules and winged spaceplanes. Recoverable spacecraft may be reusable (can be launched again or several times, like the SpaceX Dragon and the Space Shuttle orbiters) or expendable (like the Soyuz). In recent years, more space agencies are tending towards reusable spacecraft. Humanity has achieved space flight, but only a few nations have the technology for orbital launches: Russia (Roscosmos), the United States (NASA), the member states of the European Space Agency, Japan (JAXA), China (CNSA), India (ISRO), Taiwan (TSA), Israel (ISA), Iran (ISA), and North Korea (NADA). In addition, several private companies have developed or are developing the technology for orbital launches independently from government agencies. Two prominent examples of such companies are SpaceX and Blue Origin. History A German V-2 became the first spacecraft when it reached an altitude of 189 km in June 1944 in Peenemünde, Germany. Sputnik 1 was the first artificial satellite. It was launched into an elliptical low Earth orbit (LEO) by the Soviet Union on 4 October 1957. The launch ushered in new political, military, technological, and scientific developments; while the Sputnik launch was a single event, it marked the start of the Space Age. Apart from its value as a technological first, Sputnik 1 also helped to identify the upper atmospheric layer's density, by measuring the satellite's orbital changes. It also provided data on radio-signal distribution in the ionosphere. Pressurized nitrogen in the satellite's false body provided the first opportunity for meteoroid detection. Sputnik 1 was launched during the International Geophysical Year from Site No.1/5, at the 5th Tyuratam range, in Kazakh SSR (now at the Baikonur Cosmodrome). The satellite completed an orbit every 96.2 minutes and emitted radio signals at 20.005 and 40.002 MHz. While Sputnik 1 was the first spacecraft to orbit the Earth, other human-made objects had previously reached an altitude of 100 km, which is the height required by the international organization Fédération Aéronautique Internationale to count as a spaceflight. This altitude is called the Kármán line. 
In particular, in the 1940s there were several test launches of the V-2 rocket, some of which reached altitudes well over 100 km. Crewed and uncrewed spacecraft Crewed spacecraft As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over . There were five other crewed missions using Mercury spacecraft. Other Soviet crewed spacecraft include the Voskhod, Soyuz, flown uncrewed as Zond/L1, L3, TKS, and the Salyut and Mir crewed space stations. Other American crewed spacecraft include the Gemini spacecraft, the Apollo spacecraft including the Apollo Lunar Module, the Skylab space station, the Space Shuttle with undetached European Spacelab and private US Spacehab space stations-modules, and the SpaceX Crew Dragon configuration of their Dragon 2. US company Boeing also developed and flown a spacecraft of their own, the CST-100, commonly referred to as Starliner, but a crewed flight is yet to occur. China developed, but did not fly Shuguang, and is currently using Shenzhou (its first crewed mission was in 2003). Except for the Space Shuttle and the Buran spaceplane of the Soviet Union, the latter of which only ever had one uncrewed test flight, all of the recoverable crewed orbital spacecraft were space capsules. The International Space Station, crewed since November 2000, is a joint venture between Russia, the United States, Canada and several other countries. Uncrewed spacecraft Uncrewed spacecraft are spacecraft without people on board. Uncrewed spacecraft may have varying levels of autonomy from human input; they may be remote controlled, remote guided or even autonomous, meaning they have a pre-programmed list of operations, which they will execute unless otherwise instructed. Many space missions are more suited to telerobotic rather than crewed operation, due to lower cost and lower risk factors. In addition, some planetary destinations such as Venus or the vicinity of Jupiter are too hostile for human survival. Outer planets such as Saturn, Uranus, and Neptune are too distant to reach with current crewed spaceflight technology, so telerobotic probes are the only way to explore them. Telerobotics also allows exploration of regions that are vulnerable to contamination by Earth micro-organisms since spacecraft can be sterilized. Humans can not be sterilized in the same way as a spaceship, as they coexist with numerous micro-organisms, and these micro-organisms are also hard to contain within a spaceship or spacesuit. Multiple space probes were sent to study Moon, the planets, the Sun, multiple small Solar System bodies (comets and asteroids). Special class of uncrewed spacecraft is space telescopes, a telescope in outer space used to observe astronomical objects. The first operational telescopes were the American Orbiting Astronomical Observatory, OAO-2 launched in 1968, and the Soviet Orion 1 ultraviolet telescope aboard space station Salyut 1 in 1971. Space telescopes avoid the filtering and distortion (scintillation) of electromagnetic radiation which they observe, and avoid light pollution which ground-based observatories encounter. 
The best-known examples are Hubble Space Telescope and James Webb Space Telescope. Cargo spacecraft are designed to carry cargo, possibly to support space stations' operation by transporting food, propellant and other supplies. Automated cargo spacecraft have been used since 1978 and have serviced Salyut 6, Salyut 7, Mir, the International Space Station and Tiangong space station. Other Some spacecrafts can operate as both a crewed and uncrewed spacecraft. For example, the Buran spaceplane could operate autonomously but also had manual controls, though it never flew with crew onboard. Other dual crewed/uncrewed spacecrafts include: SpaceX Dragon 2, Dream Chaser, and Tianzhou. Types of spacecraft Communications satellite A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. Many communications satellites are in geostationary orbit above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite. Others form satellite constellations in low Earth orbit, where antennas on the ground have to follow the position of the satellites and switch between satellites frequently. The high frequency radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or "bands" certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference. Cargo spacecraft Cargo or resupply spacecraft are robotic spacecraft that are designed specifically to carry cargo, possibly to support space stations' operation by transporting food, propellant and other supplies. Automated cargo spacecraft have been used since 1978 and have serviced Salyut 6, Salyut 7, Mir, the International Space Station and Tiangong space station. As of 2023, three different cargo spacecraft are used to supply the International Space Station: Russian Progress, American SpaceX Dragon 2 and Cygnus. Chinese Tianzhou is used to supply Tiangong space station. Space probes Space probes are robotic spacecraft that are sent to explore deep space, or astronomical bodies other than Earth. They are distinguished from landers by the fact that they work in open space, not on planetary surfaces or in planetary atmospheres. Being robotic eliminates the need for expensive, heavy life support systems (the Apollo crewed Moon landings required the use of the Saturn V rocket that cost over a billion dollars per launch, adjusted for inflation) and so allows for lighter, less expensive rockets. Space probes have visited every planet in the Solar System and Pluto, and the Parker Solar Probe has an orbit that, at its closest point, is in the Sun's chromosphere. 
There are five space probes that are escaping the Solar System, these are Voyager 1, Voyager 2, Pioneer 10, Pioneer 11, and New Horizons. Voyager program The identical Voyager probes, weighing , were launched in 1977 to take advantage of a rare alignment of Jupiter, Saturn, Uranus and Neptune that would allow a spacecraft to visit all four planets in one mission, and get to each destination faster by using gravity assist. In fact, the rocket that launched the probes (the Titan IIIE) could not even send the probes to the orbit of Saturn, yet Voyager 1 is travelling at roughly and Voyager 2 moves at about kilometres per second as of 2023. In 2012, Voyager 1 exited the heliosphere, followed by Voyager 2 in 2018. Voyager 1 actually launched 16 days after Voyager 2 but it reached Jupiter sooner because Voyager 2 was taking a longer route that allowed it to visit Uranus and Neptune, whereas Voyager 1 did not visit Uranus or Neptune, instead choosing to fly past Saturn’s satellite Titan. As of August 2023, Voyager 1 has passed 160 astronomical units, which means it is over 160 times farther from the Sun than Earth is. This makes it the farthest spacecraft from the Sun. Voyager 2 is 134 AU away from the Sun as of August 2023. NASA provides real time data of their distances and data from the probe’s cosmic ray detectors. Because of the probe’s declining power output and degradation of the RTGs over time, NASA has had to shut down certain instruments to conserve power. The probes may still have some scientific instruments on until the mid-2020s or perhaps the 2030s. After 2036, they will both be out of range of the Deep Space Network. Space telescopes A space telescope or space observatory is a telescope in outer space used to observe astronomical objects. Space telescopes avoid the filtering and distortion of electromagnetic radiation which they observe, and avoid light pollution which ground-based observatories encounter. They are divided into two types: satellites which map the entire sky (astronomical survey), and satellites which focus on selected astronomical objects or parts of the sky and beyond. Space telescopes are distinct from Earth imaging satellites, which point toward Earth for satellite imaging, applied for weather analysis, espionage, and other types of information gathering. Landers A lander is a type of spacecraft that makes a soft landing on the surface of an astronomical body other than Earth. Some landers, such as Philae and the Apollo Lunar Module, land entirely by using their fuel supply, however many landers (and landings of spacecraft on Earth) use aerobraking, especially for more distant destinations. This involves the spacecraft using a fuel burn to change its trajectory so it will pass through a planet (or a moon's) atmosphere. Drag caused by the spacecraft hitting the atmosphere enables it to slow down without using fuel, however this generates very high temperatures and so adds a requirement for a heat shield of some sort. Space capsules Space capsules are a type of spacecraft that can return from space at least once. They have a blunt shape, do not usually contain much more fuel than needed, and they do not possess wings unlike spaceplanes. They are the simplest form of recoverable spacecraft, and so the most commonly used. The first such capsule was the Vostok capsule built by the Soviet Union, that carried the first person in space, Yuri Gagarin. Other examples include the Soyuz and Orion capsules, built by the Soviet Union and NASA, respectively. 
Spaceplanes Spaceplanes are spacecraft that are built in the shape of, and function as, airplanes. The first example of such was the North American X-15 spaceplane, which conducted two crewed flights which reached an altitude of over in the 1960s. This first reusable spacecraft was air-launched on a suborbital trajectory on July 19, 1963. The first reusable orbital spaceplane was the Space Shuttle orbiter. The first orbiter to fly in space, the Space Shuttle Columbia, was launched by the USA on the 20th anniversary of Yuri Gagarin's flight, on April 12, 1981. During the Shuttle era, six orbiters were built, all of which have flown in the atmosphere and five of which have flown in space. Enterprise was used only for approach and landing tests, launching from the back of a Boeing 747 SCA and gliding to deadstick landings at Edwards AFB, California. The first Space Shuttle to fly into space was Columbia, followed by Challenger, Discovery, Atlantis, and Endeavour. Endeavour was built to replace Challenger when it was lost in January 1986. Columbia broke up during reentry in February 2003. The first autonomous reusable spaceplane was the Buran-class shuttle, launched by the USSR on November 15, 1988, although it made only one flight and this was uncrewed. This spaceplane was designed for a crew and strongly resembled the U.S. Space Shuttle, although its drop-off boosters used liquid propellants and its main engines were located at the base of what would be the external tank in the American Shuttle. Lack of funding, complicated by the dissolution of the USSR, prevented any further flights of Buran. The Space Shuttle was subsequently modified to allow for autonomous re-entry in case of necessity. Per the Vision for Space Exploration, the Space Shuttle was retired in 2011 mainly due to its old age and high cost of program reaching over a billion dollars per flight. The Shuttle's human transport role is to be replaced by SpaceX's SpaceX Dragon 2 and Boeing's CST-100 Starliner. Dragon 2's first crewed flight occurred on May 30, 2020. The Shuttle's heavy cargo transport role is to be replaced by expendable rockets such as the Space Launch System and ULA's Vulcan rocket, as well as the commercial launch vehicles. Scaled Composites' SpaceShipOne was a reusable suborbital spaceplane that carried pilots Mike Melvill and Brian Binnie on consecutive flights in 2004 to win the Ansari X Prize. The Spaceship Company built a successor SpaceShipTwo. A fleet of SpaceShipTwos operated by Virgin Galactic was planned to begin reusable private spaceflight carrying paying passengers in 2014, but was delayed after the crash of VSS Enterprise. Space Shuttle The Space Shuttle is a retired reusable Low Earth Orbital launch system. It consisted of two reusable Solid Rocket Boosters that landed by parachute, were recovered at sea, and were the most powerful rocket motors ever made until they were superseded by those of NASA’s SLS rocket, with a liftoff thrust of , which soon increased to per booster, and were fueled by a combination of PBAN and APCP, the Space Shuttle Orbiter, with 3 RS-25 engines that used a liquid oxygen/liquid hydrogen propellant combination, and the bright orange throwaway Space Shuttle external tank from which the RS-25 engines sourced their fuel. The orbiter was a spaceplane that was launched at NASA’s Kennedy Space Centre and landed mainly at the Shuttle Landing Facility, which is part of Kennedy Space Centre. 
A second launch site, Vandenberg Space Launch Complex 6 in California, was revamped so it could be used to launch the shuttles, but it was never used. The launch system could lift about into an eastward Low Earth Orbit. Each orbiter weighed roughly , however the different orbiters had differing weights and thus payloads, with Columbia being the heaviest orbiter, Challenger being lighter than Columbia but still heavier than the other three. The orbiter structure was mostly composed of aluminium alloy. The orbiter had seven seats for crew members, though on STS-61-A the launch took place with 8 crew onboard. The orbiters had wide by long payload bays and also were equipped with a CanadaArm1, an upgraded version of which is used on the International Space Station. The heat shield (or Thermal Protection System) of the orbiter, used to protect it from extreme levels of heat during atmospheric reentry and the cold of space, was made up of different materials depending on weight and how much heating a particular area on the shuttle would receive during reentry, which ranged from over to under . The orbiter was manually operated, though an autonomous landing system was added while the shuttle was still on service. It had an in orbit maneouvreing system known as the Orbital Manoeuvring System, which used the hypergolic propellants monomethylhydrazine (MMH) and dinitrogen tetroxide, which was used for orbital insertion, changes to orbits and the deorbit burn. Though the shuttle’s goals were to drastically decrease launch costs, it did not do so, ending up being much more expensive than similar expendable launchers. This was due to expensive refurbishment costs and the external tank being expended. Once a landing had occurred, the SRBs and many parts of the orbiter had to be disassembled for inspection, which was long and arduous. Furthermore, the RS-25 engines had to be replaced every few flights. Each of the heat shielding tiles had to go in one specific area on the orbiter, increasing complexity more. Adding to this, the shuttle was a rather dangerous system, with fragile heat shielding tiles, some being so fragile that one could easily scrape it off by hand, often having been damaged in many flights. After 30 years in service from 1981 to 2011 and 135 flights, the shuttle was retired from service due to the cost of maintaining the shuttles, and the 3 remaining orbiters (the other two were destroyed in accidents) were prepared to be displayed in museums. Other Some spacecraft do not fit particularly well into any of the general spacecraft categories. This is a list of these spacecraft. SpaceX Starship Starship is a spacecraft and second stage under development by American aerospace company SpaceX. Stacked atop its booster, Super Heavy, it composes the identically named Starship super heavy-lift space vehicle. The spacecraft is designed to transport both crew and cargo to a variety of destinations, including Earth orbit, the Moon, Mars, and potentially beyond. It is intended to enable long duration interplanetary flights for a crew of up to 100 people. It will also be capable of point-to-point transport on Earth, enabling travel to anywhere in the world in less than an hour. Furthermore, the spacecraft will be used to refuel other Starship vehicles to allow them to reach higher orbits to and other space destinations. 
Elon Musk, the CEO of SpaceX, estimated in a tweet that 8 launches would be needed to completely refuel a Starship in low Earth orbit, extrapolating this from Starship's payload to orbit and how much fuel a fully fueled Starship contains. To land on bodies without an atmosphere, such as the Moon, Starship will fire its engines and thrusters to slow down. Mission Extension Vehicle The Mission Extension Vehicle is a robotic spacecraft designed to prolong the life of another spacecraft. It works by docking to its target spacecraft, then correcting its orientation or orbit. This also allows it to rescue a satellite which is in the wrong orbit by using its own fuel to move its target to the correct orbit. The project is currently managed by Northrop Grumman Innovation Systems. As of 2023, two have been launched. The first launched on a Proton rocket on 9 October 2019 and performed a rendezvous with Intelsat-901 on 25 February 2020. It will remain with the satellite until 2025, after which the satellite will be moved to a final graveyard orbit and the vehicle will rendezvous with another satellite. The other launched on an Ariane 5 rocket on 15 August 2020. Subsystems A spacecraft astrionics system comprises different subsystems, depending on the mission profile. Spacecraft subsystems are mounted in the satellite bus and may include attitude determination and control (variously called ADAC, ADC, or ACS), guidance, navigation and control (GNC or GN&C), communications (comms), command and data handling (CDH or C&DH), power (EPS), thermal control (TCS), propulsion, and structures. Attached to the bus are typically payloads. Life support Spacecraft intended for human spaceflight must also include a life support system for the crew. Attitude control A spacecraft needs an attitude control subsystem to be correctly oriented in space and respond to external torques and forces properly. This may use reaction wheels or it may use small rocket thrusters. The attitude control subsystem consists of sensors and actuators, together with controlling algorithms. The attitude-control subsystem permits proper pointing for the science objective, sun pointing for power to the solar arrays and earth pointing for communications. GNC Guidance refers to the calculation of the commands (usually done by the CDH subsystem) needed to steer the spacecraft where it is desired to be. Navigation means determining a spacecraft's orbital elements or position. Control means adjusting the path of the spacecraft to meet mission requirements. Command and data handling The C&DH subsystem receives commands from the communications subsystem, performs validation and decoding of the commands, and distributes the commands to the appropriate spacecraft subsystems and components. The CDH also receives housekeeping data and science data from the other spacecraft subsystems and components, and packages the data for storage on a data recorder or transmission to the ground via the communications subsystem. Other functions of the CDH include maintaining the spacecraft clock and state-of-health monitoring. Communications Spacecraft, both robotic and crewed, have various communications systems for communication with terrestrial stations and for inter-satellite service. Technologies include space radio station and optical communication. In addition, some spacecraft payloads are explicitly for the purpose of ground–ground communication using receiver/retransmitter electronic technologies. 
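To make the attitude-control description above concrete, here is a toy single-axis control loop. The inertia, gains, and time step are invented values, and a real attitude-control subsystem would involve sensor models, state estimation, and momentum management well beyond this sketch.

```python
# Toy single-axis attitude-control loop in the spirit of the subsystem described
# above: a sensed pointing error, a control algorithm, and a reaction-wheel
# torque acting on the spacecraft body.  All numbers are illustrative.

def simulate_pointing(theta0=0.1, omega0=0.0, inertia=10.0,
                      kp=4.0, kd=12.0, dt=0.1, steps=600):
    """PD controller driving the pointing error theta (rad) to zero by
    commanding a torque that the reaction wheel applies to the body."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        torque = -(kp * theta + kd * omega)   # control law (the algorithm)
        omega += (torque / inertia) * dt      # rigid-body rotational dynamics
        theta += omega * dt                   # integrate the attitude error
    return theta

print(abs(simulate_pointing()) < 1e-3)        # the error settles near zero -> True
```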
Power Spacecraft need an electrical power generation and distribution subsystem for powering the various spacecraft subsystems. For spacecraft near the Sun, solar panels are frequently used to generate electrical power. Spacecraft designed to operate in more distant locations, for example Jupiter, might employ a radioisotope thermoelectric generator (RTG) to generate electrical power. Electrical power is sent through power conditioning equipment before it passes through a power distribution unit over an electrical bus to other spacecraft components. Batteries are typically connected to the bus via a battery charge regulator, and the batteries are used to provide electrical power during periods when primary power is not available, for example when a low Earth orbit spacecraft is eclipsed by Earth. Thermal control Spacecraft must be engineered to withstand transit through Earth's atmosphere and the space environment. They must operate in a vacuum with temperatures potentially ranging across hundreds of degrees Celsius as well as (if subject to reentry) in the presence of plasmas. Material requirements are such that either high melting temperature, low density materials such as beryllium and reinforced carbon–carbon or (possibly due to the lower thickness requirements despite its high density) tungsten or ablative carbon–carbon composites are used. Depending on mission profile, spacecraft may also need to operate on the surface of another planetary body. The thermal control subsystem can be passive, dependent on the selection of materials with specific radiative properties. Active thermal control makes use of electrical heaters and certain actuators such as louvers to control temperature ranges of equipments within specific ranges. Spacecraft propulsion Spacecraft may or may not have a propulsion subsystem, depending on whether or not the mission profile calls for propulsion. The Swift spacecraft is an example of a spacecraft that does not have a propulsion subsystem. Typically though, LEO spacecraft include a propulsion subsystem for altitude adjustments (drag make-up maneuvers) and inclination adjustment maneuvers. A propulsion system is also needed for spacecraft that perform momentum management maneuvers. Components of a conventional propulsion subsystem include fuel, tankage, valves, pipes, and thrusters. The thermal control system interfaces with the propulsion subsystem by monitoring the temperature of those components, and by preheating tanks and thrusters in preparation for a spacecraft maneuver. Structures Spacecraft must be engineered to withstand launch loads imparted by the launch vehicle, and must have a point of attachment for all the other subsystems. Depending on mission profile, the structural subsystem might need to withstand loads imparted by entry into the atmosphere of another planetary body, and landing on the surface of another planetary body. Payload The payload depends on the mission of the spacecraft, and is typically regarded as the part of the spacecraft "that pays the bills". Typical payloads could include scientific instruments (cameras, telescopes, or particle detectors, for example), cargo, or a human crew. Ground segment The ground segment, though not technically part of the spacecraft, is vital to the operation of the spacecraft. 
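The role of batteries during eclipse, mentioned above, can be illustrated with a back-of-envelope energy budget. Every number below is an assumed, illustrative value rather than a figure from the article.

```python
# Rough sizing of the battery needed to ride through eclipse in low Earth
# orbit.  All inputs are invented, typical-order-of-magnitude values.

orbit_period_min = 92.0        # typical LEO orbital period
eclipse_fraction = 0.38        # worst-case fraction of the orbit in shadow
bus_load_w = 450.0             # average spacecraft load during eclipse
depth_of_discharge = 0.25      # keep cycles shallow for battery life
converter_efficiency = 0.90    # losses in the power-distribution chain

eclipse_h = orbit_period_min * eclipse_fraction / 60.0
energy_needed_wh = bus_load_w * eclipse_h / converter_efficiency
battery_capacity_wh = energy_needed_wh / depth_of_discharge

print(f"Eclipse duration:          {eclipse_h * 60:.1f} min")   # ~35 min
print(f"Energy drawn from battery: {energy_needed_wh:.0f} Wh")  # ~290 Wh
print(f"Required battery capacity: {battery_capacity_wh:.0f} Wh")  # ~1200 Wh
```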
Typical components of a ground segment in use during normal operations include a mission operations facility where the flight operations team conducts the operations of the spacecraft, a data processing and storage facility, ground stations to radiate signals to and receive signals from the spacecraft, and a voice and data communications network to connect all mission elements. Launch vehicle The launch vehicle propels the spacecraft from Earth's surface, through the atmosphere, and into an orbit, the exact orbit being dependent on the mission configuration. The launch vehicle may be expendable or reusable. In a single stage to orbit rocket, the rocket can be considered a spacecraft itself. Spacecraft records Fastest spacecraft Parker Solar Probe (estimated at first sun close pass, will reach at final perihelion) Helios I and II Solar Probes () Furthest spacecraft from the Sun Voyager 1 at 156.13 AU as of April 2022, traveling outward at about Pioneer 10 at 122.48 AU as of December 2018, traveling outward at about Voyager 2 at 122.82 AU as of January 2020, traveling outward at about Pioneer 11 at 101.17 AU as of December 2018, traveling outward at about See also Astrionics Commercial astronaut Flying saucer List of crewed spacecraft List of fictional spacecraft Private spaceflight Spacecraft design Space exploration Space launch Space suit List of spaceflight records Starship Timeline of Solar System exploration U.S. Space Exploration History on U.S. Stamps Notes References Citations Sources External links NASA: Space Science Spacecraft Missions NSSDC Master Catalog Spacecraft Query Form Early History of Spacecraft Basics of Spaceflight tutorial from JPL/Caltech International Spaceflight Museum Astronautics Pressure vessels
Spacecraft
[ "Physics", "Chemistry", "Engineering" ]
5,888
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
38,145
https://en.wikipedia.org/wiki/Los%20Alamos%20National%20Laboratory
Los Alamos National Laboratory (often shortened as Los Alamos and LANL) is one of the sixteen research and development laboratories of the United States Department of Energy (DOE), located a short distance northwest of Santa Fe, New Mexico, in the American southwest. Best known for its central role in helping develop the first atomic bomb, LANL is one of the world's largest and most advanced scientific institutions. Los Alamos was established in 1943 as Project Y, a top-secret site for designing nuclear weapons under the Manhattan Project during World War II. Chosen for its remote yet relatively accessible location, it served as the main hub for conducting and coordinating nuclear research, bringing together some of the world's most famous scientists, among them numerous Nobel Prize winners. The town of Los Alamos, directly north of the lab, grew extensively through this period. After the war ended in 1945, Project Y's existence was made public, and it became known universally as Los Alamos. In 1952, the Atomic Energy Commission formed a second design lab under the direction of the University of California, Berkeley, which became the Lawrence Livermore National Laboratory (LLNL). The two labs competed on a wide variety of bomb designs, but with the end of the Cold War, have focused increasingly on civilian missions. Today, Los Alamos conducts multidisciplinary research in fields such as national security, space exploration, nuclear fusion, renewable energy, medicine, nanotechnology, and supercomputing. While owned by the federal government, LANL is privately managed and operated by Triad National Security, LLC. History The Manhattan Project The laboratory was founded during World War II as a secret, centralized facility to coordinate the scientific research of the Manhattan Project, the Allied project to develop the first nuclear weapons. In September 1942, the difficulties encountered in conducting preliminary studies on nuclear weapons at universities scattered across the country indicated the need for a laboratory dedicated solely to that purpose. General Leslie Groves wanted a central laboratory at an isolated location for safety, and to keep the scientists away from the populace. It should be at least 200 miles from international boundaries and west of the Mississippi. Major John Dudley suggested Oak City, Utah, or Jemez Springs, New Mexico, but both were rejected. Jemez Springs was only a short distance from the current site. Project Y director J. Robert Oppenheimer had spent much time in his youth in the New Mexico area and suggested the Los Alamos Ranch School on the mesa. Dudley had rejected the school as not meeting Groves' criteria, but as soon as Groves saw it he said in effect "This is the place". Oppenheimer became the laboratory's first director; from 19 October 1942. During the Manhattan Project, Los Alamos hosted thousands of employees, including many Nobel Prize-winning scientists. The location was a total secret. Its only mailing address was a post office box, number 1663, in Santa Fe, New Mexico. Eventually two other post office boxes were used, 180 and 1539, also in Santa Fe. Though its contract with the University of California was initially intended to be temporary, the relationship was maintained long after the war. Until the atomic bombings of Hiroshima and Nagasaki, Japan, University of California president Robert Sproul did not know what the purpose of the laboratory was and thought it might be producing a "death ray". 
The only member of the UC administration who knew its true purpose—indeed, the only one who knew its exact physical location—was the Secretary-Treasurer Robert Underhill, who was in charge of wartime contracts and liabilities. The work of the laboratory culminated in several atomic devices, one of which was used in the first nuclear test near Alamogordo, New Mexico, codenamed "Trinity", on July 16, 1945. The other two were weapons, "Little Boy" and "Fat Man", which were used in the attacks on Hiroshima and Nagasaki. The Laboratory received the Army-Navy "E" Award for Excellence in production on October 16, 1945. Post-war After the war, Oppenheimer retired from the directorship, and it was taken over by Norris Bradbury, whose initial mission was to make the previously hand-assembled atomic bombs "G.I. proof" so that they could be mass-produced and used without the assistance of highly trained scientists. Other founding members of Los Alamos left the laboratory and became outspoken opponents to the further development of nuclear weapons. The name officially changed to the Los Alamos Scientific Laboratory (LASL) on January 1, 1947. By this time, Argonne had already been made the first National Laboratory the previous year. Los Alamos would not become a National Laboratory in name until 1981. In the years since the 1940s, Los Alamos was responsible for the development of the hydrogen bomb, and many other variants of nuclear weapons. In 1952, Lawrence Livermore National Laboratory was founded to act as Los Alamos' "competitor", with the hope that two laboratories for the design of nuclear weapons would spur innovation. Los Alamos and Livermore served as the primary classified laboratories in the U.S. national laboratory system, designing all the country's nuclear arsenal. Additional work included basic scientific research, particle accelerator development, health physics, and fusion power research as part of Project Sherwood. Many nuclear tests were undertaken in the Marshall Islands and at the Nevada Test Site. During the late-1950s, a number of scientists including Dr. J. Robert "Bob" Beyster left Los Alamos to work for General Atomics (GA) in San Diego. Three major nuclear-related accidents have occurred at LANL. Criticality accidents occurred in August 1945 and May 1946, and a third accident occurred during an annual physical inventory in December 1958. Several buildings associated with the Manhattan Project at Los Alamos were declared a National Historic Landmark in 1965. Post-Cold War At the end of the Cold War, both labs went through a process of intense scientific diversification in their research programs to adapt to the changing political conditions that no longer required as much research towards developing new nuclear weapons and has led the lab to increase research for "non-war" science and technology. Los Alamos' nuclear work is currently thought to relate primarily to computer simulations and stockpile stewardship. The development of the Dual-Axis Radiographic Hydrodynamic Test Facility will allow complex simulations of nuclear tests to take place without full explosive yields. The laboratory contributed to the early development of the flow cytometry technology. In the 1950s, researcher Mack Fulwyler developed a technique for sorting erythrocytes that combined the Coulter Principle of Coulter counter technologies, which measures the presence of cells and their size, with ink jet technology, which produces a laminar flow of liquid that breaks up into separate, fine drops. 
In 1969, Los Alamos reported the first fluorescence detector apparatus, which accurately measured the number and size of ovarian cells and blood cells. As of 2017, other research performed at the lab included developing cheaper, cleaner biofuels and advancing scientific understanding around renewable energy. Non-nuclear national security and defense development is also a priority at the lab. This includes preventing outbreaks of deadly diseases by improving detection tools and monitoring the effectiveness of the United States' vaccine distribution infrastructure. Additional advancements include the ASPECT airplane that can detect bio threats from the sky. Medical work In 2008, development of a safer, more comfortable and accurate test for breast cancer was under way, led by scientists Lianjie Huang and Kenneth M. Hanson and their collaborators. The new technique, called ultrasound-computed tomography (ultrasound CT), uses sound waves to accurately detect small tumors that traditional mammography cannot. The lab has made intense efforts for humanitarian causes through its scientific research in medicine. In 2010, three vaccines for the Human Immunodeficiency Virus were being tested by lab scientist Bette Korber and her team. "These vaccines might finally deal a lethal blow to the AIDS virus", says Chang-Shung Tung, leader of the Lab's Theoretical Biology and Biophysics group. Negative publicity The laboratory has attracted negative publicity from a number of events. In 1999, Los Alamos scientist Wen Ho Lee was accused of 59 counts of mishandling classified information by downloading nuclear secrets—"weapons codes" used for computer simulations of nuclear weapons tests—to data tapes and removing them from the lab. After ten months in jail, Lee pleaded guilty to a single count and the other 58 were dismissed with an apology from U.S. District Judge James Parker for his incarceration. Lee had been suspected of having shared U.S. nuclear secrets with China, but investigators were never able to establish what Lee did with the downloaded data. In 2000, two computer hard drives containing classified data were announced to have gone missing from a secure area within the laboratory, but were later found behind a photocopier. Science mission Los Alamos National Laboratory's mission is to "solve national security challenges through simultaneous excellence". The laboratory's strategic plan reflects U.S. priorities spanning nuclear security, intelligence, defense, emergency response, nonproliferation, counterterrorism, energy security, emerging threats, and environmental management. This strategy is aligned with priorities set by the Department of Energy (DOE), the National Nuclear Security Administration (NNSA), and national strategy guidance documents, such as the Nuclear Posture Review, the National Security Strategy, and the Blueprint for a Secure Energy Future. Los Alamos is the senior laboratory in the DOE system, and executes work in all areas of the DOE mission: national security, science, energy, and environmental management. The laboratory also performs work for the Department of Defense (DoD), Intelligence Community (IC), and Department of Homeland Security (DHS), among others. 
The laboratory's multidisciplinary scientific capabilities and activities are organized into six Capability Pillars: Information, Science and Technology (IS&T) Materials for the Future seeks to optimize materials for national security applications by predicting and controlling their performance and functionality through discovery science and engineering. Nuclear and Particle Futures integrates nuclear experiments, theory, and simulation to understand and engineer complex nuclear phenomena. Science of Signatures (SoS) applies science and technology to intransigent problems of system identification and characterization in areas of global security, nuclear defense, energy, and health. Complex Natural and Engineered Systems (CNES) Weapons Systems (WS) Los Alamos operates three main user facilities: The Center for Integrated Nanotechnologies: The Center for Integrated Nanotechnologies is a DOE/Office of Science National User Facility operated jointly by Sandia and Los Alamos National Laboratories with facilities at both Laboratories. CINT is dedicated to establishing the scientific principles that govern the design, performance, and integration of nanoscale materials into microscale and macroscale systems and devices. Los Alamos Neutron Science Center (LANSCE): The Los Alamos Neutron Science Center is one of the world's most powerful linear accelerators. LANSCE provides the scientific community with intense sources of neutrons with the capability of performing experiments supporting civilian and national security research. This facility is sponsored by the Department of Energy, the National Nuclear Security Administration, Office of Science and Office of Nuclear Energy, Science and Technology. The National High Magnetic Field Laboratory (NHMFL), Pulsed Field Facility: The Pulsed Field Facility at Los Alamos National Laboratory in Los Alamos, New Mexico, is one of three campuses of the National High Magnetic Field Laboratory (NHMFL), the other two being at Florida State University, Tallahassee and the University of Florida. The Pulsed Field Facility at Los Alamos National Laboratory operates an international user program for research in high magnetic fields. As of 2017, the Los Alamos National Laboratory is using data and algorithms to possibly protect public health by tracking the growth of infectious diseases. Digital epidemiologists at the lab's Information Systems and Modeling group are using clinical surveillance data, Google search queries, census data, Wikipedia, and even tweets to create a system that could predict epidemics. The team is using data from Brazil as its model; Brazil was notably threatened by the Zika virus as it prepared to host the Summer Olympics in 2016. Laboratory management and operations Within LANL's 43-square-mile property are approximately 2,000 dumpsites which have contaminated the environment. It also contributed to thousands of dumpsites at 108 locations in 29 US states. Contract changes Continuing efforts to make the laboratory more efficient led the Department of Energy to open its contract with the University of California to bids from other vendors in 2003. Though the university and the laboratory had difficult relations many times since their first World War II contract, this was the first time that the university ever had to compete for management of the laboratory. 
The University of California decided to create a private company with the Bechtel Corporation, Washington Group International, and the BWX Technologies to bid on the contract to operate the laboratory. The UC/Bechtel led corporation—Los Alamos National Security, LLC (LANS)—was pitted against a team formed by the University of Texas System partnered with Lockheed-Martin. In December 2005, the Department of Energy announced that LANS had won the next seven-year contract to manage and operate the laboratory. On June 1, 2006, the University of California ended its sixty years of direct involvement in operating Los Alamos National Laboratory, and management control of the laboratory was taken over by Los Alamos National Security, LLC with effect October 1, 2007. Approximately 95% of the former 10,000 plus UC employees at LANL were rehired by LANS to continue working at LANL. Other than UC appointing three members to the eleven member board of directors that oversees LANS, UC now has virtually no responsibility or direct involvement in LANL. UC policies and regulations that apply to UC campuses and its two national laboratories in California (Lawrence Berkeley and Lawrence Livermore) no longer apply to LANL, and the LANL director no longer reports to the UC Regents or UC Office of the President. On June 8, 2018, the NNSA announced that Triad National Security, LLC, a joint venture between Battelle Memorial Institute, the University of California, and Texas A&M University, would assume operation and management of LANL beginning November 1, 2018. Safety management In August 2011, the close placement of eight plutonium rods for a photo nearly led to a criticality incident. The photo shoot, which was directed by the laboratory's management, was one of several factors relating to unsafe management practices that led to the departure of 12 of the lab's 14 safety staff. The criticality incident was one of several that led the Department of Energy to seek alternative bids to manage the laboratory after the 2018 expiration of the LANS contract. The lab was penalized with a $57 million reduction in its 2014 budget over the February 14, 2014, accident at the Waste Isolation Pilot Plant for which it was partly responsible. In August 2017, the improper storage of plutonium metal could have triggered a criticality accident, and subsequently staff failed to declare the failure as required by procedure. Extended operations With support of the National Science Foundation, LANL operates one of the three National High Magnetic Field Laboratories in conjunction with and located at two other sites Florida State University in Tallahassee, Florida, and University of Florida in Gainesville, Florida. Los Alamos National Laboratory is a partner in the Joint Genome Institute (JGI) located in Walnut Creek, California. JGI was founded in 1997 to unite the expertise and resources in genome mapping, DNA sequencing, technology development, and information sciences pioneered at the three genome centers at University of California's Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and LANL. The Integrated Computing Network (ICN) is a multi-security level network at the LANL integrating large host supercomputers, a file server, a batch server, a printer and graphics output server and numerous other general purpose and specialized systems. IBM Roadrunner, which was part of this network, was the first supercomputer to hit petaflop speeds. 
Until 1999, The Los Alamos National Laboratory hosted the arXiv e-print archive. The arXiv is currently operated and funded by Cornell University. The coreboot project was initially developed at LANL. In the recent years, the Laboratory has developed a major research program in systems biology modeling, known at LANL under the name q-bio. Several serials are published by LANL: National Security Science 1663 Community Connections Actinide Research Quarterly @theBradbury Physical Sciences Vistas LANL also published Los Alamos Science from 1980 to 2005, as well as the Nuclear Weapons Journal, which was replaced by National Security Science after two issues in 2009. Controversy and criticism In 2005, Congress held new hearings on lingering security issues at Los Alamos National Weapons Laboratory in New Mexico; documented problems continued to be ignored. In November 2008, a drum containing nuclear waste was ruptured due to a 'deflagration' according to an inspector general report of the Dept. of Energy, which due to lab mistakes, also occurred in 2014 at the Carlsbad plant with significant disruptions and costs across the industry. In 2009, 69 computers which did not contain classified information were lost. The same year also saw a scare in which 1 kg (2.2 lb) of missing plutonium prompted a Department of Energy investigation into the laboratory. The investigation found that the "missing plutonium" was a result of miscalculation by LANL's statisticians and did not actually exist; but the investigation did lead to heavy criticism of the laboratory by the DOE for security flaws and weaknesses that the DOE claimed to have found. Institutional statistics LANL is northern New Mexico's largest institution and the largest employer with approximately 8,762 direct employees, 277 guard force, 505 contractors, 1,613 students, 1,143 unionized craft workers, and 452 post-doctoral researchers. Additionally, there are roughly 120 DOE employees stationed at the laboratory to provide federal oversight of LANL's work and operations. Approximately one-third of the laboratory's technical staff members are physicists, one-quarter are engineers, one-sixth are chemists and materials scientists, and the remainder work in mathematics and computational science, biology, geoscience, and other disciplines. Professional scientists and students also come to Los Alamos as visitors to participate in scientific projects. The staff collaborates with universities and industry in both basic and applied research to develop resources for the future. The annual budget is approximately US$2.2 billion. Directors J. Robert Oppenheimer (1942–1945) Norris Bradbury (1945–1970) Harold Agnew (1970–1979) Donald Kerr (1979–1986) Siegfried S. Hecker (1986–1997) John C. Browne (1997–2003) George Peter Nanos (2003–2005) Robert W. Kuckuck (2005–2006) Michael R. Anastasio (2006–2011) Charles F. McMillan (2011–2017) Terry Wallace (2018) Thomas Mason (2018–present) Notable scientists Stirling Colgate (1925–2013) George Cowan (1920–2012), American physical chemist, businessman, and philanthropist Mitchell Feigenbaum (1944–2019) Richard Feynman (1918–1988) Bette Korber Tom Lehrer Maria Goeppert Mayer (1906–1972) Howard O. McMahon (1914–1990), Canadian-born American electrical engineer, inventor of the Gifford-McMahon cryocooler, and the Science Director, Vice President, Head of the Research and Development Division, and then President of Arthur D. 
Little, Inc; lived and worked partially in Los Alamos during development of the first Hydrogen bomb Emily Willbanks (1930–2007) See also Anti-nuclear movement in the United States Association of Los Alamos Scientists Bradbury Science Museum Chalk River Laboratories Federation of American Scientists Clarence Max Fowler David Greenglass Ed Grothus Theodore Hall History of nuclear weapons Hydrogen-moderated self-regulating nuclear power module National Historic Landmarks in New Mexico National Register of Historic Places listings in Los Alamos County, New Mexico Julius and Ethel Rosenberg Timeline of Cox Report controversy Timeline of nuclear weapons development Venona project Notes References Further reading External links Los AlamosOverview of Historical Operations Annotated bibliography on Los Alamos from the Alsos Digital Library University of California Office of Laboratory Management (official website) Los Alamos Neutron Science Center "LANSCE" Los Alamos Weather Machine LANL: The Real Story (LANL community blog) LANL: The Corporate Story (follow-up blog to "LANL: The Real Story) LANL: Technology Transfer, an example LANL: The Rest of the Story (ongoing blog for LANL employees) Protecting the Nation's Nuclear Materials. Government Calls Arms Complexes Secure; Critics Disagree NPR. Los Alamos Study Groupan Albuquerque-based group opposed to nuclear weapons Site Y: Los Alamos A map of Manhattan Project Era Site Y: Los Alamos, New Mexico. Los Alamos National Laboratory Nuclear Facilities, 1997 Machinists who assembled the atomic bomb. Archival collections Los Alamos University notebooks, 1945-1946, Niels Bohr Library & Archives Los Alamos, New Mexico United States Department of Energy national laboratories Buildings and structures in Los Alamos County, New Mexico Federally Funded Research and Development Centers Government buildings in New Mexico Manhattan Project sites Nuclear research institutes Nuclear weapons infrastructure of the United States Supercomputer sites History of Los Alamos County, New Mexico Government buildings on the National Register of Historic Places in New Mexico Historic districts on the National Register of Historic Places in New Mexico National Historic Landmarks in New Mexico National Register of Historic Places in Los Alamos County, New Mexico World War II on the National Register of Historic Places Bechtel University of California Military research of the United States Physics research institutes Theoretical physics institutes 1943 establishments in New Mexico Research institutes in New Mexico
Los Alamos National Laboratory
[ "Physics", "Engineering" ]
4,500
[ "Nuclear research institutes", "Theoretical physics", "Nuclear organizations", "Theoretical physics institutes" ]
38,413
https://en.wikipedia.org/wiki/Activation%20energy
In the Arrhenius model of reaction rates, activation energy is the minimum amount of energy that must be available to reactants for a chemical reaction to occur. The activation energy (Ea) of a reaction is measured in kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol). Activation energy can be thought of as the magnitude of the potential barrier (sometimes called the energy barrier) separating minima of the potential energy surface pertaining to the initial and final thermodynamic state. For a chemical reaction to proceed at a reasonable rate, the temperature of the system should be high enough such that there exists an appreciable number of molecules with translational energy equal to or greater than the activation energy. The term "activation energy" was introduced in 1889 by the Swedish scientist Svante Arrhenius. Other uses Although less commonly used, activation energy also applies to nuclear reactions and various other physical phenomena. Temperature dependence and the relation to the Arrhenius equation The Arrhenius equation gives the quantitative basis of the relationship between the activation energy and the rate at which a reaction proceeds. From the equation, the activation energy can be found through the relation k = A exp(−Ea/RT), where A is the pre-exponential factor for the reaction, R is the universal gas constant, T is the absolute temperature (usually in kelvins), and k is the reaction rate coefficient. Even without knowing A, Ea can be evaluated from the variation in reaction rate coefficients as a function of temperature (within the validity of the Arrhenius equation). At a more advanced level, the net Arrhenius activation energy term from the Arrhenius equation is best regarded as an experimentally determined parameter that indicates the sensitivity of the reaction rate to temperature. There are two objections to associating this activation energy with the threshold barrier for an elementary reaction. First, it is often unclear whether or not the reaction proceeds in one step; threshold barriers that are averaged out over all elementary steps have little theoretical value. Second, even if the reaction being studied is elementary, a spectrum of individual collisions contributes to rate constants obtained from bulk ('bulb') experiments involving billions of molecules, with many different reactant collision geometries and angles, different translational and (possibly) vibrational energies—all of which may lead to different microscopic reaction rates. Catalysts A substance that modifies the transition state to lower the activation energy is termed a catalyst; a catalyst composed only of protein and (if applicable) small molecule cofactors is termed an enzyme. A catalyst increases the rate of reaction without being consumed in the reaction. In addition, the catalyst lowers the activation energy, but it does not change the energies of the original reactants or products, and so does not change equilibrium. Rather, the reactant energy and the product energy remain the same and only the activation energy is altered (lowered). A catalyst is able to reduce the activation energy by forming a transition state in a more favorable manner. Catalysts, by nature, create a more "comfortable" fit for the substrate of a reaction to progress to a transition state. This is possible due to a release of energy that occurs when the substrate binds to the active site of a catalyst. This energy is known as binding energy. 
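Since Ea can be evaluated from the variation of the rate coefficient with temperature, a standard approach is a linear fit of ln k against 1/T, whose slope is −Ea/R. The following minimal sketch illustrates this; the rate coefficients are made-up illustrative numbers, not data from any particular reaction.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

# Hypothetical rate coefficients k (s^-1) at several temperatures (K); illustrative only.
data = [(300.0, 1.0e-5), (320.0, 8.2e-5), (340.0, 5.1e-4), (360.0, 2.6e-3)]

# Arrhenius: k = A * exp(-Ea/(R*T))  =>  ln k = ln A - (Ea/R) * (1/T)
# A least-squares fit of ln k versus 1/T gives slope = -Ea/R and intercept = ln A.
xs = [1.0 / T for T, _ in data]
ys = [math.log(k) for _, k in data]
n = len(data)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

Ea = -slope * R          # activation energy, J/mol
A = math.exp(intercept)  # pre-exponential factor, same units as k

print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol, A ≈ {A:.3e} s^-1")
```

With the invented values above the fit returns an Ea of roughly 83 kJ/mol, a typical order of magnitude for reactions that are conveniently measurable near room temperature.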
Upon binding to a catalyst, substrates partake in numerous stabilizing forces while within the active site (e.g. hydrogen bonding or van der Waals forces). Specific and favorable bonding occurs within the active site until the substrate is transformed into the high-energy transition state. Forming the transition state is more favorable with the catalyst because the favorable stabilizing interactions within the active site release energy. A chemical reaction is able to manufacture a high-energy transition state molecule more readily when there is a stabilizing fit within the active site of a catalyst. The binding energy of a reaction is this energy released when favorable interactions between substrate and catalyst occur. The binding energy released assists in achieving the unstable transition state. Reactions without catalysts need a higher input of energy to achieve the transition state. Non-catalyzed reactions do not have the free energy made available by active-site stabilizing interactions that catalyzed enzyme reactions do. Relationship with Gibbs energy of activation In the Arrhenius equation, the term activation energy (Ea) is used to describe the energy required to reach the transition state, and the exponential relationship k = A exp(−Ea/RT) holds. In transition state theory, a more sophisticated model of the relationship between reaction rates and the transition state, a superficially similar mathematical relationship, the Eyring equation, is used to describe the rate constant of a reaction: k = (kBT/h) exp(−ΔG‡/RT). However, instead of modeling the temperature dependence of reaction rate phenomenologically, the Eyring equation models individual elementary steps of a reaction. Thus, for a multistep process, there is no straightforward relationship between the two models. Nevertheless, the functional forms of the Arrhenius and Eyring equations are similar, and for a one-step process, simple and chemically meaningful correspondences can be drawn between Arrhenius and Eyring parameters. Instead of also using Ea, the Eyring equation uses the concept of Gibbs energy and the symbol ΔG‡ to denote the Gibbs energy of activation to achieve the transition state. In the equation, kB and h are the Boltzmann and Planck constants, respectively. Although the equations look similar, it is important to note that the Gibbs energy contains an entropic term in addition to the enthalpic one. In the Arrhenius equation, this entropic term is accounted for by the pre-exponential factor A. More specifically, we can write the Gibbs free energy of activation in terms of enthalpy and entropy of activation: ΔG‡ = ΔH‡ − TΔS‡. Then, for a unimolecular, one-step reaction, the approximate relationships Ea ≈ ΔH‡ + RT and A ≈ (kBT/h) exp(1 + ΔS‡/R) hold. Note, however, that in Arrhenius theory proper, A is temperature independent, while here, there is a linear dependence on T. For a one-step unimolecular process whose half-life at room temperature is about 2 hours, ΔG‡ is approximately 23 kcal/mol. This is also roughly the magnitude of Ea for a reaction that proceeds over several hours at room temperature. Due to the relatively small magnitude of TΔS‡ and RT at ordinary temperatures for most reactions, in sloppy discourse, Ea, ΔG‡, and ΔH‡ are often conflated and all referred to as the "activation energy". The enthalpy, entropy and Gibbs energy of activation are more correctly written as Δ‡Ho, Δ‡So and Δ‡Go respectively, where the o indicates a quantity evaluated between standard states. However, some authors omit the o in order to simplify the notation. 
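The figure of about 23 kcal/mol quoted above for a process with a roughly 2 hour half-life at room temperature can be checked by inverting the Eyring equation. The sketch below does this for a first-order process; it is an illustrative back-of-the-envelope calculation, not a result taken from the article's references.

```python
import math

# Physical constants (SI)
kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314           # gas constant, J/(mol*K)

T = 298.15                   # room temperature, K
half_life = 2 * 3600.0       # about 2 hours, in seconds
k = math.log(2) / half_life  # first-order rate constant, s^-1

# Eyring equation (transmission coefficient taken as 1):
#   k = (kB*T/h) * exp(-dG_act/(R*T))  =>  dG_act = -R*T*ln(k*h/(kB*T))
dG_act = -R * T * math.log(k * h / (kB * T))  # J/mol

print(f"ΔG‡ ≈ {dG_act / 4184:.1f} kcal/mol")  # comes out close to 23 kcal/mol
```

Running this gives approximately 23 kcal/mol (about 96 kJ/mol), consistent with the statement in the text.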
The total free energy change of a reaction is, however, independent of the activation energy. Physical and chemical reactions can be either exergonic or endergonic, but the activation energy is not related to the spontaneity of a reaction. The overall reaction energy change is not altered by the activation energy. Negative activation energy In some cases, rates of reaction decrease with increasing temperature. When the rate still follows an approximately exponential relationship, so that the rate constant can be fit to an Arrhenius expression, this results in a negative value of Ea. Elementary reactions exhibiting negative activation energies are typically barrierless reactions, in which the reaction proceeding relies on the capture of the molecules in a potential well. Increasing the temperature leads to a reduced probability of the colliding molecules capturing one another (with more glancing collisions not leading to reaction as the higher momentum carries the colliding particles out of the potential well), expressed as a reaction cross section that decreases with increasing temperature. Such a situation no longer lends itself to direct interpretation as the height of a potential barrier. Some multistep reactions can also have apparent negative activation energies. For example, the overall rate constant k for a two-step reaction A ⇌ B, B → C is given by k = k2K1, where k2 is the rate constant of the rate-limiting slow second step and K1 is the equilibrium constant of the rapid first step. In some reactions, K1 decreases with temperature more rapidly than k2 increases, so that k actually decreases with temperature corresponding to a negative observed activation energy. An example is the oxidation of nitric oxide, which is a termolecular reaction 2 NO + O2 → 2 NO2. The rate law is r = k[NO]²[O2], with a negative activation energy. This is explained by the two-step mechanism: 2 NO ⇌ N2O2 and N2O2 + O2 → 2 NO2. Certain cationic polymerization reactions have negative activation energies so that the rate decreases with temperature. For chain-growth polymerization, the overall activation energy is E = Ei + Ep − Et, where i, p and t refer respectively to initiation, propagation and termination steps. The propagation step normally has a very small activation energy, so that the overall value is negative if the activation energy for termination is larger than that for initiation. The normal range of overall activation energies for cationic polymerization varies from . See also Activation energy asymptotics Chemical kinetics Mean kinetic temperature Autoignition temperature Quantum tunnelling References Chemical kinetics Reaction mechanisms Catalysis Combustion Biochemistry terminology
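To make the multistep case described above concrete, the sketch below evaluates k = k2·K1 for a pre-equilibrium in which K1 falls with temperature faster than k2 rises, so the apparent activation energy comes out negative. All parameter values are invented purely for illustration.

```python
import math

R = 8.314  # J/(mol*K)

def k2(T):
    # Slow second step (Arrhenius, small barrier). Illustrative: A2 = 1e8, Ea2 = 10 kJ/mol.
    return 1e8 * math.exp(-10_000 / (R * T))

def K1(T):
    # Fast, exothermic pre-equilibrium (van 't Hoff form).
    # Illustrative: ΔH1 = -40 kJ/mol, ΔS1 = -100 J/(mol*K).
    dH, dS = -40_000.0, -100.0
    return math.exp(-dH / (R * T) + dS / R)

def k_overall(T):
    return k2(T) * K1(T)

# Apparent activation energy from two temperatures:
#   Ea_app = R * ln(k(T2)/k(T1)) / (1/T1 - 1/T2)
T1, T2 = 300.0, 320.0
Ea_app = R * math.log(k_overall(T2) / k_overall(T1)) / (1 / T1 - 1 / T2)
print(f"apparent Ea ≈ {Ea_app / 1000:.1f} kJ/mol")  # ≈ Ea2 + ΔH1 = 10 - 40 = -30 kJ/mol
```

Because the overall constant behaves as exp(−(Ea2 + ΔH1)/RT), the observed activation energy is simply Ea2 + ΔH1, which is negative whenever the pre-equilibrium is exothermic enough.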
Activation energy
[ "Chemistry", "Biology" ]
1,901
[ "Catalysis", "Reaction mechanisms", "Chemical reaction engineering", "Biochemistry terminology", "Combustion", "Physical organic chemistry", "Biochemistry", "Chemical kinetics" ]
38,415
https://en.wikipedia.org/wiki/Electrode%20potential
In electrochemistry, electrode potential is the voltage of a galvanic cell built from a standard reference electrode and another electrode to be characterized. By convention, the reference electrode is the standard hydrogen electrode (SHE). It is defined to have a potential of zero volts. It may also be defined as the potential difference between a charged metallic rod and the surrounding salt solution. The electrode potential has its origin in the potential difference developed at the interface between the electrode and the electrolyte. It is common, for instance, to speak of the electrode potential of the redox couple. Origin and interpretation Electrode potential appears at the interface between an electrode and electrolyte due to the transfer of charged species across the interface, specific adsorption of ions at the interface, and specific adsorption/orientation of polar molecules, including those of the solvent. In an electrochemical cell, the cathode and the anode have certain electrode potentials independently, and the difference between them is the cell potential: Ecell = Ecathode − Eanode. The electrode potential may be either that at equilibrium at the working electrode ("reversible potential"), or a potential with a non-zero net reaction on the working electrode but zero net current ("corrosion potential", "mixed potential"), or a potential with a non-zero net current on the working electrode (like in galvanic corrosion or voltammetry). Reversible potentials can sometimes be converted to the standard electrode potential for a given electroactive species by extrapolation of the measured values to the standard state. The value of the electrode potential under non-equilibrium conditions depends on the nature and composition of the contacting phases, and on the kinetics of electrode reactions at the interface (see Butler–Volmer equation). An operational assumption in determining electrode potentials with the standard hydrogen electrode is that this reference electrode, with hydrogen ion in an ideal solution, has "zero potential at all temperatures", just as the standard enthalpy of formation of the hydrogen ion is taken to be zero at all temperatures. Measurement The measurement is generally conducted using a three-electrode setup (see the drawing): working electrode, counter electrode, and reference electrode (standard hydrogen electrode or an equivalent). In case of non-zero net current on the electrode, it is essential to minimize the ohmic IR-drop in the electrolyte, e.g., by positioning the reference electrode near the surface of the working electrode (e.g., see Luggin capillary), or by using a supporting electrolyte of sufficiently high conductivity. The potential measurements are performed with the positive terminal of the electrometer connected to the working electrode and the negative terminal to the reference electrode. Sign conventions Historically, two conventions for the sign of the electrode potential have formed: (1) the convention "Nernst–Lewis–Latimer" (sometimes referred to as "American"), and (2) the convention "Gibbs–Ostwald–Stockholm" (sometimes referred to as "European"). In 1953, in Stockholm, IUPAC recognized that either of the conventions is permissible; however, it unanimously recommended that only the magnitude expressed according to the convention (2) be called "the electrode potential". To avoid possible ambiguities, the electrode potential thus defined can also be referred to as the Gibbs–Stockholm electrode potential. In both conventions, the standard hydrogen electrode is defined to have a potential of 0 V. 
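As a small numerical illustration of the cell-potential relation mentioned above (Ecell = Ecathode − Eanode), the sketch below evaluates the Daniell cell from two tabulated standard reduction potentials versus SHE. The +0.34 V and −0.76 V values are common textbook figures, not taken from this article, and the ΔG = −nFE relation used at the end is the standard thermodynamic link to the cell potential.

```python
# Standard reduction potentials vs. SHE (V); commonly tabulated textbook values.
E_standard = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

def cell_potential(cathode_couple: str, anode_couple: str) -> float:
    """Ecell = Ecathode - Eanode, with both half-reactions written as reductions
    (Gibbs-Stockholm sign convention)."""
    return E_standard[cathode_couple] - E_standard[anode_couple]

# Daniell cell: Zn | Zn2+ || Cu2+ | Cu
E_cell = cell_potential("Cu2+/Cu", "Zn2+/Zn")
print(f"E_cell ≈ {E_cell:.2f} V")  # ≈ +1.10 V, so the cell reaction is spontaneous as written

# Corresponding Gibbs energy change: ΔG = -n*F*E, with n = 2 electrons transferred.
F = 96485  # Faraday constant, C/mol
n = 2
dG = -n * F * E_cell
print(f"ΔG ≈ {dG / 1000:.0f} kJ/mol")  # ≈ -212 kJ/mol
```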
Both conventions also agree on the sign of E for a half-cell reaction when it is written as a reduction. The main difference between the two conventions is that upon reversing the direction of a half-cell reaction as written, according to the convention (1) the sign of E also switches, whereas in the convention (2) it does not. The logic behind switching the sign of E is to maintain the correct sign relationship with the Gibbs free energy change, given by ΔG = −nFE, where n is the number of electrons involved and F is the Faraday constant. It is assumed that the half-reaction is balanced by the appropriate SHE half-reaction. Since ΔG switches sign when a reaction is written in reverse, so too, proponents of the convention (1) argue, should the sign of E. Proponents of the convention (2) argue that all reported electrode potentials should be consistent with the electrostatic sign of the relative potential difference. Potential difference of a cell assembled of two electrodes The potential of a cell assembled from two electrodes can be determined from the two individual electrode potentials using Ecell = Ecathode − Eanode or, equivalently, Ecell = Eright − Eleft. This follows from the IUPAC definition of the electric potential difference of a galvanic cell, according to which the electric potential difference of a cell is the difference of the potentials of the electrodes on the right and the left of the galvanic cell. When Ecell is positive, positive electrical charge flows through the cell from the left electrode (anode) to the right electrode (cathode). See also Absolute electrode potential Electric potential Galvani potential Nernst equation Overpotential Potential difference (voltage) Standard electrode potential Table of standard electrode potentials Thermodynamic activity Volta potential References Electrochemistry Electrochemical potentials
Electrode potential
[ "Chemistry" ]
1,044
[ "Electrochemistry", "Electrochemical potentials" ]
38,454
https://en.wikipedia.org/wiki/Gravitational%20constant
The gravitational constant is an empirical physical constant involved in the calculation of gravitational effects in Sir Isaac Newton's law of universal gravitation and in Albert Einstein's theory of general relativity. It is also known as the universal gravitational constant, the Newtonian constant of gravitation, or the Cavendish gravitational constant, denoted by the capital letter G. In Newton's law, it is the proportionality constant connecting the gravitational force between two bodies with the product of their masses and the inverse square of their distance. In the Einstein field equations, it quantifies the relation between the geometry of spacetime and the energy–momentum tensor (also referred to as the stress–energy tensor). The measured value of the constant is known with some certainty to four significant digits. In SI units, its value is approximately 6.674×10⁻¹¹ m³⋅kg⁻¹⋅s⁻² (equivalently, N⋅m²⋅kg⁻²). The modern notation of Newton's law involving G was introduced in the 1890s by C. V. Boys. The first implicit measurement with an accuracy within about 1% is attributed to Henry Cavendish in a 1798 experiment. Definition According to Newton's law of universal gravitation, the magnitude of the attractive force (F) between two bodies each with a spherically symmetric density distribution is directly proportional to the product of their masses, m1 and m2, and inversely proportional to the square of the distance, r, directed along the line connecting their centres of mass: F = G m1 m2 / r². The constant of proportionality, G, in this non-relativistic formulation is the gravitational constant. Colloquially, the gravitational constant is also called "Big G", distinct from "small g" (g), which is the local gravitational field of Earth (also referred to as free-fall acceleration). Where M⊕ is the mass of the Earth and r⊕ is the radius of the Earth, the two quantities are related by: g = G M⊕ / r⊕². The gravitational constant appears in the Einstein field equations of general relativity, G_μν + Λ g_μν = κ T_μν, where G_μν is the Einstein tensor (not the gravitational constant despite the use of G), Λ is the cosmological constant, g_μν is the metric tensor, T_μν is the stress–energy tensor, and κ = 8πG/c⁴ is the Einstein gravitational constant, a constant originally introduced by Einstein that is directly related to the Newtonian constant of gravitation. Value and uncertainty The gravitational constant is a physical constant that is difficult to measure with high accuracy. This is because the gravitational force is an extremely weak force as compared to other fundamental forces at the laboratory scale. In SI units, the CODATA-recommended value of the gravitational constant is: G = 6.674 30(15)×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². The relative standard uncertainty is 2.2×10⁻⁵. Natural units Due to its use as a defining constant in some systems of natural units, particularly geometrized unit systems such as Planck units and Stoney units, the value of the gravitational constant will generally have a numeric value of 1 or a value close to it when expressed in terms of those units. Due to the significant uncertainty in the measured value of G in terms of other known fundamental constants, a similar level of uncertainty will show up in the value of many quantities when expressed in such a unit system. Orbital mechanics In astrophysics, it is convenient to measure distances in parsecs (pc), velocities in kilometres per second (km/s) and masses in solar units M☉. In these units, the gravitational constant is approximately 4.3009×10⁻³ pc⋅M☉⁻¹⋅(km/s)². For situations where tides are important, the relevant length scales are solar radii rather than parsecs. 
In these units, the gravitational constant is: In orbital mechanics, the period of an object in circular orbit around a spherical object obeys where is the volume inside the radius of the orbit, and is the total mass of the two objects. It follows that This way of expressing shows the relationship between the average density of a planet and the period of a satellite orbiting just above its surface. For elliptical orbits, applying Kepler's 3rd law, expressed in units characteristic of Earth's orbit: where distance is measured in terms of the semi-major axis of Earth's orbit (the astronomical unit, AU), time in years, and mass in the total mass of the orbiting system (). The above equation is exact only within the approximation of the Earth's orbit around the Sun as a two-body problem in Newtonian mechanics, the measured quantities contain corrections from the perturbations from other bodies in the solar system and from general relativity. From 1964 until 2012, however, it was used as the definition of the astronomical unit and thus held by definition: Since 2012, the AU is defined as exactly, and the equation can no longer be taken as holding precisely. The quantity —the product of the gravitational constant and the mass of a given astronomical body such as the Sun or Earth—is known as the standard gravitational parameter (also denoted ). The standard gravitational parameter appears as above in Newton's law of universal gravitation, as well as in formulas for the deflection of light caused by gravitational lensing, in Kepler's laws of planetary motion, and in the formula for escape velocity. This quantity gives a convenient simplification of various gravity-related formulas. The product is known much more accurately than either factor is. Calculations in celestial mechanics can also be carried out using the units of solar masses, mean solar days and astronomical units rather than standard SI units. For this purpose, the Gaussian gravitational constant was historically in widespread use, , expressing the mean angular velocity of the Sun–Earth system. The use of this constant, and the implied definition of the astronomical unit discussed above, has been deprecated by the IAU since 2012. History of measurement Early history The existence of the constant is implied in Newton's law of universal gravitation as published in the 1680s (although its notation as dates to the 1890s), but is not calculated in his Philosophiæ Naturalis Principia Mathematica where it postulates the inverse-square law of gravitation. In the Principia, Newton considered the possibility of measuring gravity's strength by measuring the deflection of a pendulum in the vicinity of a large hill, but thought that the effect would be too small to be measurable. Nevertheless, he had the opportunity to estimate the order of magnitude of the constant when he surmised that "the mean density of the earth might be five or six times as great as the density of water", which is equivalent to a gravitational constant of the order: ≈ A measurement was attempted in 1738 by Pierre Bouguer and Charles Marie de La Condamine in their "Peruvian expedition". Bouguer downplayed the significance of their results in 1740, suggesting that the experiment had at least proved that the Earth could not be a hollow shell, as some thinkers of the day, including Edmond Halley, had suggested. 
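The orbital-mechanics relations referenced above are easy to check numerically. The sketch below computes the period of a satellite in a circular orbit just above a planet's surface, which depends only on the planet's mean density (P = sqrt(3π/(Gρ))), and the orbital period for a given semi-major axis via the standard gravitational parameter GM. The density, GM☉ and AU figures are standard reference values used only to show the arithmetic.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# 1) Surface-skimming circular orbit: from P = 2*pi*sqrt(R^3/(G*M)) and
#    M = (4/3)*pi*rho*R^3, the radius cancels and P = sqrt(3*pi/(G*rho)).
rho_earth = 5514.0  # Earth's mean density, kg/m^3
P_surface = math.sqrt(3 * math.pi / (G * rho_earth))
print(f"surface-skimming orbit: {P_surface / 60:.1f} min")  # ≈ 84 minutes

# 2) Earth around the Sun via the standard gravitational parameter GM_sun,
#    which is known far more precisely than G or M_sun separately.
GM_sun = 1.327e20   # m^3/s^2
au = 1.496e11       # astronomical unit, m
P_earth = 2 * math.pi * math.sqrt(au**3 / GM_sun)
print(f"Earth's orbital period: {P_earth / 86400:.1f} days")  # ≈ 365 days
```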
The Schiehallion experiment, proposed in 1772 and completed in 1776, was the first successful measurement of the mean density of the Earth, and thus indirectly of the gravitational constant. The result reported by Charles Hutton (1778) suggested a density of ( times the density of water), about 20% below the modern value. This immediately led to estimates on the densities and masses of the Sun, Moon and planets, sent by Hutton to Jérôme Lalande for inclusion in his planetary tables. As discussed above, establishing the average density of Earth is equivalent to measuring the gravitational constant, given Earth's mean radius and the mean gravitational acceleration at Earth's surface, by setting Based on this, Hutton's 1778 result is equivalent to . The first direct measurement of gravitational attraction between two bodies in the laboratory was performed in 1798, seventy-one years after Newton's death, by Henry Cavendish. He determined a value for implicitly, using a torsion balance invented by the geologist Rev. John Michell (1753). He used a horizontal torsion beam with lead balls whose inertia (in relation to the torsion constant) he could tell by timing the beam's oscillation. Their faint attraction to other balls placed alongside the beam was detectable by the deflection it caused. In spite of the experimental design being due to Michell, the experiment is now known as the Cavendish experiment for its first successful execution by Cavendish. Cavendish's stated aim was the "weighing of Earth", that is, determining the average density of Earth and the Earth's mass. His result, , corresponds to value of . It is surprisingly accurate, about 1% above the modern value (comparable to the claimed relative standard uncertainty of 0.6%). 19th century The accuracy of the measured value of has increased only modestly since the original Cavendish experiment. is quite difficult to measure because gravity is much weaker than other fundamental forces, and an experimental apparatus cannot be separated from the gravitational influence of other bodies. Measurements with pendulums were made by Francesco Carlini (1821, ), Edward Sabine (1827, ), Carlo Ignazio Giulio (1841, ) and George Biddell Airy (1854, ). Cavendish's experiment was first repeated by Ferdinand Reich (1838, 1842, 1853), who found a value of , which is actually worse than Cavendish's result, differing from the modern value by 1.5%. Cornu and Baille (1873), found . Cavendish's experiment proved to result in more reliable measurements than pendulum experiments of the "Schiehallion" (deflection) type or "Peruvian" (period as a function of altitude) type. Pendulum experiments still continued to be performed, by Robert von Sterneck (1883, results between 5.0 and ) and Thomas Corwin Mendenhall (1880, ). Cavendish's result was first improved upon by John Henry Poynting (1891), who published a value of , differing from the modern value by 0.2%, but compatible with the modern value within the cited relative standard uncertainty of 0.55%. In addition to Poynting, measurements were made by C. V. Boys (1895) and Carl Braun (1897), with compatible results suggesting = . The modern notation involving the constant was introduced by Boys in 1894 and becomes standard by the end of the 1890s, with values usually cited in the cgs system. Richarz and Krigar-Menzel (1898) attempted a repetition of the Cavendish experiment using 100,000 kg of lead for the attracting mass. 
The precision of their result of was, however, of the same order of magnitude as the other results at the time. Arthur Stanley Mackenzie in The Laws of Gravitation (1899) reviews the work done in the 19th century. Poynting is the author of the article "Gravitation" in the Encyclopædia Britannica Eleventh Edition (1911). Here, he cites a value of = with a relative uncertainty of 0.2%. Modern value Paul R. Heyl (1930) published the value of (relative uncertainty 0.1%), improved to (relative uncertainty 0.045% = 450 ppm) in 1942. However, Heyl used the statistical spread as his standard deviation, and he admitted himself that measurements using the same material yielded very similar results while measurements using different materials yielded vastly different results. He spent the next 12 years after his 1930 paper to do more precise measurements, hoping that the composition-dependent effect would go away, but it did not, as he noted in his final paper from the year 1942. Published values of derived from high-precision measurements since the 1950s have remained compatible with Heyl (1930), but within the relative uncertainty of about 0.1% (or 1000 ppm) have varied rather broadly, and it is not entirely clear if the uncertainty has been reduced at all since the 1942 measurement. Some measurements published in the 1980s to 2000s were, in fact, mutually exclusive. Establishing a standard value for with a relative standard uncertainty better than 0.1% has therefore remained rather speculative. By 1969, the value recommended by the National Institute of Standards and Technology (NIST) was cited with a relative standard uncertainty of 0.046% (460 ppm), lowered to 0.012% (120 ppm) by 1986. But the continued publication of conflicting measurements led NIST to considerably increase the standard uncertainty in the 1998 recommended value, by a factor of 12, to a standard uncertainty of 0.15%, larger than the one given by Heyl (1930). The uncertainty was again lowered in 2002 and 2006, but once again raised, by a more conservative 20%, in 2010, matching the relative standard uncertainty of 120 ppm published in 1986. For the 2014 update, CODATA reduced the uncertainty to 46 ppm, less than half the 2010 value, and one order of magnitude below the 1969 recommendation. The following table shows the NIST recommended values published since 1969: In the January 2007 issue of Science, Fixler et al. described a measurement of the gravitational constant by a new technique, atom interferometry, reporting a value of , 0.28% (2800 ppm) higher than the 2006 CODATA value. An improved cold atom measurement by Rosi et al. was published in 2014 of . Although much closer to the accepted value (suggesting that the Fixler et al. measurement was erroneous), this result was 325 ppm below the recommended 2014 CODATA value, with non-overlapping standard uncertainty intervals. As of 2018, efforts to re-evaluate the conflicting results of measurements are underway, coordinated by NIST, notably a repetition of the experiments reported by Quinn et al. (2013). In August 2018, a Chinese research group announced new measurements based on torsion balances, and based on two different methods. These are claimed as the most accurate measurements ever made, with standard uncertainties cited as low as 12 ppm. The difference of 2.7σ between the two results suggests there could be sources of error unaccounted for. 
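The measurement history above ties the uncertainty of G directly to how well quantities derived from it can be known. As a small illustration of "weighing the Earth" in the Cavendish sense, the sketch below derives Earth's mass and mean density from surface gravity and then propagates a relative uncertainty in G; the surface gravity, radius and roughly 22 ppm uncertainty figure are standard reference values assumed for illustration, not taken from any specific experiment discussed here.

```python
import math

g = 9.80665        # standard surface gravity, m/s^2
R_earth = 6.371e6  # mean radius of the Earth, m

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
rel_unc_G = 2.2e-5 # assumed relative standard uncertainty of G (~22 ppm)

# From g = G*M/R^2: mass of the Earth, and from that its mean density.
M_earth = g * R_earth**2 / G
rho_earth = M_earth / ((4.0 / 3.0) * math.pi * R_earth**3)

# M is inversely proportional to G, so (from G alone) its relative
# uncertainty is the same ~22 ppm.
print(f"M_earth ≈ {M_earth:.3e} kg  (±{M_earth * rel_unc_G:.1e} kg from G alone)")
print(f"mean density ≈ {rho_earth:.0f} kg/m^3")  # ≈ 5.5 times the density of water
```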
Constancy Analysis of observations of 580 type Ia supernovae shows that the gravitational constant has varied by less than one part in ten billion per year over the last nine billion years. See also Gravity of Earth Standard gravity Gaussian gravitational constant Orbital mechanics Escape velocity Gravitational potential Gravitational wave Strong gravity Dirac large numbers hypothesis Accelerating expansion of the universe Lunar Laser Ranging experiment Cosmological constant References Footnotes Citations Sources (Complete report available online: PostScript; PDF. Tables from the report also available: Astrodynamic Constants and Parameters) External links Newtonian constant of gravitation at the National Institute of Standards and Technology References on Constants, Units, and Uncertainty The Controversy over Newton's Gravitational Constant — additional commentary on measurement problems Gravity Fundamental constants
Gravitational constant
[ "Physics" ]
2,972
[ "Physical constants", "Physical quantities", "Fundamental constants" ]
38,579
https://en.wikipedia.org/wiki/Gravity
In physics, gravity is a fundamental interaction primarily observed as mutual attraction between all things that have mass. Gravity is, by far, the weakest of the four fundamental interactions, approximately 10³⁸ times weaker than the strong interaction, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light. On Earth, gravity gives weight to physical objects, and the Moon's gravity is responsible for sublunar tides in the oceans. The corresponding antipodal tide is caused by the inertia of the Earth and Moon orbiting one another. Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms. The gravitational attraction between the original gaseous matter in the universe caused it to coalesce and form stars which eventually condensed into galaxies, so gravity is responsible for many of the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away. Gravity is most accurately described by the general theory of relativity, proposed by Albert Einstein in 1915, which describes gravity not as a force, but as the curvature of spacetime, caused by the uneven distribution of mass, and causing masses to move along geodesic lines. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them. Current models of particle physics imply that the earliest instance of gravity in the universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner. Scientists are currently working to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics. Definitions Gravitation, also known as gravitational attraction, is the mutual attraction between all masses in the universe. Gravity is the gravitational attraction at the surface of a planet or other celestial body; gravity may also include, in addition to gravitation, the centrifugal force resulting from the planet's rotation. History Ancient world The nature and mechanism of gravity were explored by a wide range of ancient scholars. In Greece, Aristotle believed that objects fell towards the Earth because the Earth was the center of the Universe and attracted all of the mass in the Universe towards it. 
He also thought that the speed of a falling object should increase with its weight, a conclusion that was later shown to be false. While Aristotle's view was widely accepted throughout Ancient Greece, there were other thinkers such as Plutarch who correctly predicted that the attraction of gravity was not unique to the Earth. Although he did not understand gravity as a force, the ancient Greek philosopher Archimedes discovered the center of gravity of a triangle. He postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity. Two centuries later, the Roman engineer and architect Vitruvius contended in his De architectura that gravity is not dependent on a substance's weight but rather on its "nature". In the 6th century CE, the Byzantine Alexandrian scholar John Philoponus proposed the theory of impetus, which modifies Aristotle's theory that "continuation of motion depends on continued action of a force" by incorporating a causative force that diminishes over time. In 628 CE, the Indian mathematician and astronomer Brahmagupta proposed the idea that gravity is an attractive force that draws objects to the Earth and used the term gurutvākarṣaṇ to describe it. In the ancient Middle East, gravity was a topic of fierce debate. The Persian intellectual Al-Biruni believed that the force of gravity was not unique to the Earth, and he correctly assumed that other heavenly bodies should exert a gravitational attraction as well. In contrast, Al-Khazini held the same position as Aristotle that all matter in the Universe is attracted to the center of the Earth. Scientific revolution In the mid-16th century, various European scientists experimentally disproved the Aristotelian notion that heavier objects fall at a faster rate. In particular, the Spanish Dominican priest Domingo de Soto wrote in 1551 that bodies in free fall uniformly accelerate. De Soto may have been influenced by earlier experiments conducted by other Dominican priests in Italy, including those by Benedetto Varchi, Francesco Beato, Luca Ghini, and Giovan Bellaso which contradicted Aristotle's teachings on the fall of bodies. The mid-16th century Italian physicist Giambattista Benedetti published papers claiming that, due to specific gravity, objects made of the same material but with different masses would fall at the same speed. With the 1586 Delft tower experiment, the Flemish physicist Simon Stevin observed that two cannonballs of differing sizes and weights fell at the same rate when dropped from a tower. In the late 16th century, Galileo Galilei's careful measurements of balls rolling down inclines allowed him to firmly establish that gravitational acceleration is the same for all objects. Galileo postulated that air resistance is the reason that objects with a low density and high surface area fall more slowly in an atmosphere. In 1604, Galileo correctly hypothesized that the distance of a falling object is proportional to the square of the time elapsed. This was later confirmed by Italian scientists Jesuits Grimaldi and Riccioli between 1640 and 1650. They also calculated the magnitude of the Earth's gravity by measuring the oscillations of a pendulum. Newton's theory of gravitation In 1657, Robert Hooke published his Micrographia, in which he hypothesized that the Moon must have its own gravity. 
In 1666, he added two further principles: that all bodies move in straight lines until deflected by some force and that the attractive force is stronger for closer bodies. In a communication to the Royal Society in 1666, Hooke wrote on these principles. His 1674 Gresham lecture, An Attempt to prove the Annual Motion of the Earth, explained that gravitation applied to "all celestial bodies". In 1684, Newton sent a manuscript to Edmond Halley titled De motu corporum in gyrum ('On the motion of bodies in an orbit'), which provided a physical justification for Kepler's laws of planetary motion. Halley was impressed by the manuscript and urged Newton to expand on it, and a few years later Newton published a groundbreaking book called Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). In this book, Newton described gravitation as a universal force, and claimed that "the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve." This statement was later condensed into the following inverse-square law: F = G m_1 m_2 / r^2, where F is the force, m_1 and m_2 are the masses of the objects interacting, r is the distance between the centers of the masses and G is the gravitational constant. Newton's Principia was well received by the scientific community, and his law of gravitation quickly spread across the European world. More than a century later, in 1821, his theory of gravitation rose to even greater prominence when it was used to predict the existence of Neptune. In that year, the French astronomer Alexis Bouvard used this theory to create a table modeling the orbit of Uranus, which was shown to differ significantly from the planet's actual trajectory. In order to explain this discrepancy, many astronomers speculated that there might be a large object beyond the orbit of Uranus which was disrupting its orbit. In 1846, the astronomers John Couch Adams and Urbain Le Verrier independently used Newton's law to predict Neptune's location in the night sky, and the planet was discovered there within a day. General relativity Eventually, astronomers noticed an anomaly in the orbit of the planet Mercury which could not be explained by Newton's theory: the perihelion of the orbit was advancing by about 42.98 arcseconds per century more than Newtonian perturbations could account for. The most obvious explanation for this discrepancy was an as-yet-undiscovered celestial body, such as a planet orbiting the Sun even closer than Mercury, but all efforts to find such a body turned out to be fruitless. In 1915, Albert Einstein developed a theory of general relativity which was able to accurately model Mercury's orbit. In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. Einstein began to toy with this idea in the form of the equivalence principle, a discovery which he later described as "the happiest thought of my life." In this theory, free fall is considered to be equivalent to inertial motion, meaning that free-falling inertial objects are accelerated relative to non-inertial observers on the ground. In contrast to Newtonian physics, Einstein believed that it was possible for this acceleration to occur without any force being applied to the object. Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics.
As in Newton's first law of motion, Einstein believed that a force applied to an object would cause it to deviate from a geodesic. For instance, people standing on the surface of the Earth are prevented from following a geodesic path because the mechanical resistance of the Earth exerts an upward force on them. This explains why moving along the geodesics in spacetime is considered inertial. Einstein's description of gravity was quickly accepted by the majority of physicists, as it was able to explain a wide variety of previously baffling experimental results. In the coming years, a wide range of experiments provided additional support for the idea of general relativity. Today, Einstein's theory of relativity is used for all gravitational calculations where absolute precision is desired, although Newton's inverse-square law is accurate enough for virtually all ordinary calculations. Modern research In modern physics, general relativity remains the framework for the understanding of gravity. Physicists continue to work to find solutions to the Einstein field equations that form the basis of general relativity and continue to test the theory, finding excellent agreement in all cases. Einstein field equations The Einstein field equations are a system of 10 partial differential equations which describe how matter affects the curvature of spacetime. The system is often expressed in the form G_μν + Λg_μν = (8πG/c^4) T_μν, where G_μν is the Einstein tensor, g_μν is the metric tensor, T_μν is the stress–energy tensor, Λ is the cosmological constant, G is the Newtonian constant of gravitation and c is the speed of light. The constant 8πG/c^4 is referred to as the Einstein gravitational constant. A major area of research is the discovery of exact solutions to the Einstein field equations. Solving these equations amounts to calculating a precise value for the metric tensor (which defines the curvature and geometry of spacetime) under certain physical conditions. There is no formal definition for what constitutes such solutions, but most scientists agree that they should be expressible using elementary functions or linear differential equations. Some of the most notable solutions of the equations include: The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric non-rotating uncharged massive object. For compact enough objects, this solution describes a black hole with a central singularity. At points far away from the central mass, the accelerations predicted by the Schwarzschild solution are practically identical to those predicted by Newton's theory of gravity. The Reissner–Nordström solution, which analyzes a non-rotating spherically symmetric object with charge and was independently discovered by several different researchers between 1916 and 1921. In some cases, this solution can predict the existence of black holes with double event horizons. The Kerr solution, which generalizes the Schwarzschild solution to rotating massive objects. Because of the difficulty of factoring the effects of rotation into the Einstein field equations, this solution was not discovered until 1963. The Kerr–Newman solution for charged, rotating massive objects. This solution was derived in 1964, using the same technique of complex coordinate transformation that was used for the Kerr solution. The cosmological Friedmann–Lemaître–Robertson–Walker solution, discovered in 1922 by Alexander Friedmann and then confirmed in 1927 by Georges Lemaître.
This solution was revolutionary for predicting the expansion of the Universe, which was confirmed seven years later after a series of measurements by Edwin Hubble. It even showed that general relativity was incompatible with a static universe, and Einstein later conceded that he had been wrong to design his field equations to account for a Universe that was not expanding. Today, there remain many important situations in which the Einstein field equations have not been solved. Chief among these is the two-body problem, which concerns the geometry of spacetime around two mutually interacting massive objects, such as the Sun and the Earth, or the two stars in a binary star system. The situation gets even more complicated when considering the interactions of three or more massive bodies (the "n-body problem"), and some scientists suspect that the Einstein field equations will never be solved in this context. However, it is still possible to construct an approximate solution to the field equations in the n-body problem by using the technique of post-Newtonian expansion. In general, the extreme nonlinearity of the Einstein field equations makes it difficult to solve them in all but the most specific cases. Gravity and quantum mechanics Despite its success in predicting the effects of gravity at large scales, general relativity is ultimately incompatible with quantum mechanics. This is because general relativity describes gravity as a smooth, continuous distortion of spacetime, while quantum mechanics holds that all forces arise from the exchange of discrete particles known as quanta. This contradiction is especially vexing to physicists because the other three fundamental forces (strong force, weak force and electromagnetism) were reconciled with a quantum framework decades ago. As a result, modern researchers have begun to search for a theory that could unite both gravity and quantum mechanics under a more general framework. One path is to describe gravity in the framework of quantum field theory, which has been successful to accurately describe the other fundamental interactions. The electromagnetic force arises from an exchange of virtual photons, where the QFT description of gravity is that there is an exchange of virtual gravitons. This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required. Tests of general relativity Testing the predictions of general relativity has historically been difficult, because they are almost identical to the predictions of Newtonian gravity for small energies and masses. Still, since its development, an ongoing series of experimental results have provided support for the theory: In 1919, the British astrophysicist Arthur Eddington was able to confirm the predicted gravitational lensing of light during that year's solar eclipse. Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. Although Eddington's analysis was later disputed, this experiment made Einstein famous almost overnight and caused general relativity to become widely accepted in the scientific community. In 1959, American physicists Robert Pound and Glen Rebka performed an experiment in which they used gamma rays to confirm the prediction of gravitational time dilation. 
By sending the rays down a 74-foot tower and measuring their frequency at the bottom, the scientists confirmed that light is redshifted as it moves towards a source of gravity. The observed redshift also supported the idea that time runs more slowly in the presence of a gravitational field. The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals. In 1971, scientists discovered the first-ever black hole in the galaxy Cygnus. The black hole was detected because it was emitting bursts of x-rays as it consumed a smaller star, and it came to be known as Cygnus X-1. This discovery confirmed yet another prediction of general relativity, because Einstein's equations implied that light could not escape from a sufficiently large and compact object. General relativity states that gravity acts on light and matter equally, meaning that a sufficiently massive object could warp light around it and create a gravitational lens. This phenomenon was first confirmed by observation in 1979 using the 2.1 meter telescope at Kitt Peak National Observatory in Arizona, which saw two mirror images of the same quasar whose light had been bent around the galaxy YGKOW G1. Frame dragging, the idea that a rotating massive object should twist spacetime around it, was confirmed by Gravity Probe B results in 2011. In 2015, the LIGO observatory detected faint gravitational waves, the existence of which had been predicted by general relativity. Scientists believe that the waves emanated from a black hole merger that occurred 1.5 billion light-years away. Specifics Earth's gravity Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body. The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities. For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI). The force of gravity experienced by objects on Earth's surface is the vector sum of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles. Gravitational radiation General relativity predicts that energy can be transported out of a system through gravitational radiation. The first indirect evidence for gravitational radiation was through measurements of the Hulse–Taylor binary in 1973. This system consists of a pulsar and neutron star in orbit around one another. 
Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent for the amount of energy loss due to gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993. The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light years from Earth were measured. This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe including the Big Bang. Neutron star and black hole formation also create detectable amounts of gravitational radiation. This research was awarded the Nobel Prize in Physics in 2017. Speed of gravity In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting the vacant point normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in Science Bulletin in February 2013. In October 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light. Anomalies and discrepancies There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways. Extra-fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact through gravitation but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed. Accelerated expansion: The expansion of the universe seems to be speeding up. Dark energy has been proposed to explain this. Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers. The Pioneer anomaly has been shown to be explained by thermal recoil due to the distant sun radiation on one side of the space craft. Alternative theories Historical alternative theories Aristotelian theory of gravity Le Sage's theory of gravitation (1784) also called LeSage gravity but originally proposed by Fatio and further elaborated by Georges-Louis Le Sage, based on a fluid-based explanation where a light gas fills the entire Universe. Ritz's theory of gravitation, Ann. Chem. Phys. 13, 145, (1908) pp. 267–271, Weber–Gauss electrodynamics applied to gravitation. Classical advancement of perihelia. Nordström's theory of gravitation (1912, 1913), an early competitor of general relativity. Kaluza–Klein theory (1921) Whitehead's theory of gravitation (1922), another early competitor of general relativity. 
Modern alternative theories Brans–Dicke theory of gravity (1961) Induced gravity (1967), a proposal by Andrei Sakharov according to which general relativity might arise from quantum field theories of matter String theory (late 1960s) ƒ(R) gravity (1970) Horndeski theory (1974) Supergravity (1976) In the modified Newtonian dynamics (MOND) (1981), Mordehai Milgrom proposes a modification of Newton's second law of motion for small accelerations The self-creation cosmology theory of gravity (1982) by G.A. Barber in which the Brans–Dicke theory is modified to allow mass creation Loop quantum gravity (1988) by Carlo Rovelli, Lee Smolin, and Abhay Ashtekar Nonsymmetric gravitational theory (NGT) (1994) by John Moffat Tensor–vector–scalar gravity (TeVeS) (2004), a relativistic modification of MOND by Jacob Bekenstein Chameleon theory (2004) by Justin Khoury and Amanda Weltman. Pressuron theory (2013) by Olivier Minazzoli and Aurélien Hees. Conformal gravity Gravity as an entropic force, gravity arising as an emergent phenomenon from the thermodynamic concept of entropy. In the superfluid vacuum theory the gravity and curved spacetime arise as a collective excitation mode of non-relativistic background superfluid. Massive gravity, a theory where gravitons and gravitational waves have a non-zero mass See also References Sources Further reading External links The Feynman Lectures on Physics Vol. I Ch. 7: The Theory of Gravitation Fundamental interactions Acceleration Articles containing video clips Empirical laws
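As a small numerical illustration of the Earth's gravity relationship described earlier in this article (field strength proportional to the planet's mass and inversely proportional to the squared distance from its center), the following Python sketch computes the Newtonian surface value; the physical constants used are standard approximate values, not figures taken from this article.

G = 6.674e-11        # gravitational constant, N m^2 kg^-2 (approximate standard value)
M_EARTH = 5.972e24   # mass of the Earth, kg (approximate standard value)
R_EARTH = 6.371e6    # mean radius of the Earth, m (approximate standard value)

def field_strength(mass_kg, distance_m):
    # Newtonian gravitational field strength g = G*M/r^2, numerically equal to the
    # free-fall acceleration of objects at that distance from the body's center.
    return G * mass_kg / distance_m ** 2

print(field_strength(M_EARTH, R_EARTH))  # about 9.82 m/s^2, within the 9.780-9.832 m/s^2 range quoted above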
Gravity
[ "Physics", "Mathematics" ]
5,040
[ "Physical phenomena", "Force", "Physical quantities", "Acceleration", "Quantity", "Fundamental interactions", "Particle physics", "Wikipedia categories named after physical quantities" ]
8,054,686
https://en.wikipedia.org/wiki/Printed%20electronics
Printed electronics is a set of printing methods used to create electrical devices on various substrates. Printing typically uses common printing equipment suitable for defining patterns on material, such as screen printing, flexography, gravure, offset lithography, and inkjet. By electronic-industry standards, these are low-cost processes. Electrically functional electronic or optical inks are deposited on the substrate, creating active or passive devices, such as thin film transistors, capacitors, coils, and resistors. Some researchers expect printed electronics to facilitate widespread, very low-cost, low-performance electronics for applications such as flexible displays, smart labels, decorative and animated posters, and active clothing that do not require high performance. The term printed electronics is often related to organic electronics or plastic electronics, in which one or more inks are composed of carbon-based compounds. These other terms refer to the ink material, which can be deposited by solution-based, vacuum-based, or other processes. Printed electronics, in contrast, specifies the process, and, subject to the specific requirements of the printing process selected, can utilize any solution-based material. This includes organic semiconductors, inorganic semiconductors, metallic conductors, nanoparticles, and nanotubes. The solution usually consist of filler materials dispersed in a suitable solvent. The most commonly used solvents include ethanol, xylene, Dimethylformamide (DMF),Dimethyl sulfoxide (DMSO), toluene and water, whereas, the most common conductive fillers include silver nanoparticles, silver flakes, carbon black, graphene, carbon nanotubes, conductive polymers (such as polyaniline and polypyrrole), and metal powders (such as copper or nickel). Considering the environmental impacts of the organic solvents, researchers are now focused on developing printable inks using water. For the preparation of printed electronics nearly all industrial printing methods are employed. Similar to conventional printing, printed electronics applies ink layers one atop another. So the coherent development of printing methods and ink materials are the field's essential tasks. The most important benefit of printing is low-cost volume fabrication. The lower cost enables use in more applications. An example is RFID-systems, which enable contactless identification in trade and transport. In some domains, such as light-emitting diodes printing does not impact performance. Printing on flexible substrates allows electronics to be placed on curved surfaces, for example: printing solar cells on vehicle roofs. More typically, conventional semiconductors justify their much higher costs by providing much higher performance. Resolution, registration, thickness, holes, materials The maximum required resolution of structures in conventional printing is determined by the human eye. Feature sizes smaller than approximately 20 μm cannot be distinguished by the human eye and consequently exceed the capabilities of conventional printing processes. In contrast, higher resolution and smaller structures are necessary in most electronics printing, because they directly affect circuit density and functionality (especially transistors). A similar requirement holds for the precision with which layers are printed on top of each other (layer to layer registration). 
Control of thickness, holes, and material compatibility (wetting, adhesion, solubility) are essential, but matter in conventional printing only if the eye can detect them. Conversely, the visual impression is irrelevant for printed electronics. Printing technologies The attraction of printing technology for the fabrication of electronics mainly results from the possibility of preparing stacks of micro-structured layers (and thereby thin-film devices) in a much simpler and cost-effective way compared to conventional electronics. Also, the ability to implement new or improved functionalities (e.g. mechanical flexibility) plays a role. The selection of the printing method used is determined by requirements concerning printed layers, by the properties of printed materials as well as economic and technical considerations of the final printed products. Printing technologies divide between sheet-based and roll-to-roll-based approaches. Sheet-based inkjet and screen printing are best for low-volume, high-precision work. Gravure, offset and flexographic printing are more common for high-volume production, such as solar cells, reaching 10,000 square meters per hour (m2/h). While offset and flexographic printing are mainly used for inorganic and organic conductors (the latter also for dielectrics), gravure printing is especially suitable for quality-sensitive layers like organic semiconductors and semiconductor/dielectric-interfaces in transistors, due to high layer quality. If high resolution is needed, gravure is also suitable for inorganic and organic conductors. Organic field-effect transistors and integrated circuits can be prepared completely by means of mass-printing methods. Inkjet printing Inkjets are flexible and versatile, and can be set up with relatively low effort. However, inkjets offer lower throughput of around 100 m2/h and lower resolution (ca. 50 μm). It is well suited for low-viscosity, soluble materials like organic semiconductors. With high-viscosity materials, like organic dielectrics, and dispersed particles, like inorganic metal inks, difficulties due to nozzle clogging occur. Because ink is deposited via droplets, thickness and dispersion homogeneity is reduced. Using many nozzles simultaneously and pre-structuring the substrate allows improvements in productivity and resolution, respectively. However, in the latter case non-printing methods must be employed for the actual patterning step. Inkjet printing is preferable for organic semiconductors in organic field-effect transistors (OFETs) and organic light-emitting diodes (OLEDs), but also OFETs completely prepared by this method have been demonstrated. Frontplanes and backplanes of OLED-displays, integrated circuits, organic photovoltaic cells (OPVCs) and other devices can be prepared with inkjets. Screen printing Screen printing is appropriate for fabricating electrics and electronics due to its ability to produce patterned, thick layers from paste-like materials. This method can produce conducting lines from inorganic materials (e.g. for circuit boards and antennas), but also insulating and passivating layers, whereby layer thickness is more important than high resolution. Its 50 m2/h throughput and 100 μm resolution are similar to inkjets. This versatile and comparatively simple method is used mainly for conductive and dielectric layers, but also organic semiconductors, e.g. for OPVCs, and even complete OFETs can be printed. 
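As a rough illustration of the throughput trade-off described above, the short Python sketch below encodes the approximate figures quoted in the text (about 100 m2/h for inkjet, 50 m2/h for screen printing, and up to 10,000 m2/h for the roll-to-roll processes) and filters them against a required production rate; the function name and selection logic are a hypothetical illustration, not an industry rule.

# Approximate throughputs quoted in the text, in m^2/h.
THROUGHPUT_M2_PER_H = {
    "inkjet (sheet-based, ~50 um features)": 100,
    "screen printing (sheet-based, ~100 um features)": 50,
    "gravure/offset/flexographic (roll-to-roll)": 10_000,
}

def methods_meeting(required_m2_per_h):
    # Return the printing methods whose quoted throughput meets the required rate.
    return [name for name, rate in THROUGHPUT_M2_PER_H.items() if rate >= required_m2_per_h]

print(methods_meeting(1_000))  # only the roll-to-roll processes qualify at this rate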
Aerosol jet printing Aerosol Jet Printing (also known as Maskless Mesoscale Materials Deposition or M3D) is another material deposition technology for printed electronics. The Aerosol Jet process begins with atomization of an ink, via ultrasonic or pneumatic means, producing droplets on the order of one to two micrometers in diameter. The droplets then flow through a virtual impactor which deflects the droplets having lower momentum away from the stream. This step helps maintaining a tight droplet size distribution. The droplets are entrained in a gas stream and delivered to the print head. Here, an annular flow of clean gas is introduced around the aerosol stream to focus the droplets into a tightly collimated beam of material. The combined gas streams exit the print head through a converging nozzle that compresses the aerosol stream to a diameter as small as 10 μm. The jet of droplets exits the print head at high velocity (~50 meters/second) and impinges upon the substrate. Electrical interconnects, passive and active components are formed by moving the print head, equipped with a mechanical stop/start shutter, relative to the substrate. The resulting patterns can have features ranging from 10 μm wide, with layer thicknesses from tens of nanometers to >10 μm. A wide nozzle print head enables efficient patterning of millimeter size electronic features and surface coating applications. All printing occurs without the use of vacuum or pressure chambers. The high exit velocity of the jet enables a relatively large separation between the print head and the substrate, typically 2–5 mm. The droplets remain tightly focused over this distance, resulting in the ability to print conformal patterns over three dimensional substrates. Despite the high velocity, the printing process is gentle; substrate damage does not occur and there is generally minimal splatter or overspray from the droplets. Once patterning is complete, the printed ink typically requires post treatment to attain final electrical and mechanical properties. Post-treatment is driven more by the specific ink and substrate combination than by the printing process. A wide range of materials has been successfully deposited with the Aerosol Jet process, including diluted thick film pastes, conducting polymer inks, thermosetting polymers such as UV-curable epoxies, and solvent-based polymers like polyurethane and polyimide, and biologic materials. Recently, printing paper was proposed to be used as the substrate of the printing. Highly conductive (close to bulk copper) and high-resolution traces can be printed on foldable and available office printing papers, with 80°Celsius curing temperature and 40 minutes of curing time. Evaporation printing Evaporation printing uses a combination of high precision screen printing with material vaporization to print features to 5 μm. This method uses techniques such as thermal, e-beam, sputter and other traditional production technologies to deposit materials through a high precision shadow mask (or stencil) that is registered to the substrate to better than 1 μm. By layering different mask designs and/or adjusting materials, reliable, cost-effective circuits can be built additively, without the use of photo-lithography. Other methods Other methods with similarities to printing, among them microcontact printing and nano-imprint lithography are of interest. Here, μm- and nm-sized layers, respectively, are prepared by methods similar to stamping with soft and hard forms, respectively. 
Often the actual structures are prepared subtractively, e.g. by deposition of etch masks or by lift-off processes. For example, electrodes for OFETs can be prepared. Sporadically pad printing is used in a similar manner. Occasionally so-called transfer methods, where solid layers are transferred from a carrier to the substrate, are considered printed electronics. Electrophotography is currently not used in printed electronics. Materials Both organic and inorganic materials are used for printed electronics. Ink materials must be available in liquid form, for solution, dispersion or suspension. They must function as conductors, semiconductors, dielectrics, or insulators. Material costs must be fit for the application. Electronic functionality and printability can interfere with each other, mandating careful optimization. For example, a higher molecular weight in polymers enhances conductivity, but diminishes solubility. For printing, viscosity, surface tension and solid content must be tightly controlled. Cross-layer interactions such as wetting, adhesion, and solubility as well as post-deposition drying procedures affect the outcome. Additives often used in conventional printing inks are unavailable, because they often defeat electronic functionality. Material properties largely determine the differences between printed and conventional electronics. Printable materials provide decisive advantages beside printability, such as mechanical flexibility and functional adjustment by chemical modification (e.g. light color in OLEDs). Printed conductors offer lower conductivity and charge carrier mobility. With a few exceptions, inorganic ink materials are dispersions of metallic or semiconducting micro- and nano-particles. Semiconducting nanoparticles used include silicon and oxide semiconductors. Silicon is also printed as an organic precursor which is then converted by pyrolisis and annealing into crystalline silicon. PMOS but not CMOS is possible in printed electronics. Organic materials Organic printed electronics integrates knowledge and developments from printing, electronics, chemistry, and materials science, especially from organic and polymer chemistry. Organic materials in part differ from conventional electronics in terms of structure, operation and functionality, which influences device and circuit design and optimization as well as fabrication method. The discovery of conjugated polymers and their development into soluble materials provided the first organic ink materials. Materials from this class of polymers variously possess conducting, semiconducting, electroluminescent, photovoltaic and other properties. Other polymers are used mostly as insulators and dielectrics. In most organic materials, hole transport is favored over electron transport. Recent studies indicate that this is a specific feature of organic semiconductor/dielectric-interfaces, which play a major role in OFETs. Therefore, p-type devices should dominate over n-type devices. Durability (resistance to dispersion) and lifetime is less than conventional materials. Organic semiconductors include the conductive polymers poly(3,4-ethylene dioxitiophene), doped with poly(styrene sulfonate), (PEDOT:PSS) and poly(aniline) (PANI). Both polymers are commercially available in different formulations and have been printed using inkjet, screen and offset printing or screen, flexo and gravure printing, respectively. 
Polymer semiconductors are processed using inkjet printing, such as poly(thiophene)s like poly(3-hexylthiophene) (P3HT) and poly(9,9-dioctylfluorene-co-bithiophene) (F8T2). The latter material has also been gravure printed. Different electroluminescent polymers are used with inkjet printing, as well as active materials for photovoltaics (e.g. blends of P3HT with fullerene derivatives), which in part also can be deposited using screen printing (e.g. blends of poly(phenylene vinylene) with fullerene derivatives). Printable organic and inorganic insulators and dielectrics exist, which can be processed with different printing methods. Inorganic materials Inorganic electronics provides highly ordered layers and interfaces that organic and polymer materials cannot provide. Silver nanoparticles are used with flexo, offset and inkjet. Gold particles are used with inkjet. A.C. electroluminescent (EL) multi-color displays can cover many tens of square meters, or be incorporated in watch faces and instrument displays. They involve six to eight printed inorganic layers, including a copper doped phosphor, on a plastic film substrate. CIGS cells can be printed directly onto molybdenum coated glass sheets. A printed gallium arsenide germanium solar cell demonstrated 40.7% conversion efficiency, eight times that of the best organic cells, approaching the best performance of crystalline silicon. Substrates Printed electronics allows the use of flexible substrates, which lowers production costs and allows fabrication of mechanically flexible circuits. While inkjet and screen printing typically imprint rigid substrates like glass and silicon, mass-printing methods nearly exclusively use flexible foil and paper. Poly(ethylene terephthalate)-foil (PET) is a common choice, due to its low cost and moderately high temperature stability. Poly(ethylene naphthalate)- (PEN) and poly(imide)-foil (PI) are higher performance, higher cost alternatives. Paper's low costs and manifold applications make it an attractive substrate; however, its high roughness and high wettability have traditionally made it problematic for electronics. This is an active research area, however, and print-compatible metal deposition techniques have been demonstrated that adapt to the rough 3D surface geometry of paper. Other important substrate criteria are low roughness and suitable wettability, which can be tuned by pre-treatment such as coating or corona discharge. In contrast to conventional printing, high absorbency is usually disadvantageous. History Albert Hanson, a German by birth, is credited with introducing the concept of printed electronics. In 1903 he filed a patent for "Printed Wires," and thus printed electronics were born. Hanson proposed forming a printed circuit board pattern on copper foil through cutting or stamping. The drawn elements were glued to the dielectric, in this case, paraffined paper. The first printed circuit was produced in 1936 by Paul Eisler, and that process was used for large-scale production of radios by the USA during World War II. Printed circuit technology was released for commercial use in the US in 1948 (Printed Circuits Handbook, 1995). In the over a half-century since its inception, printed electronics has evolved from the production of printed circuit boards (PCBs), through the everyday use of membrane switches, to today's RFID, photovoltaic and electroluminescent technologies.
Today it is nearly impossible to look around a modern American household and not see devices that either use printed electronic components or are the direct result of printed electronic technologies. Widespread production of printed electronics for household use began in the 1960s when the printed circuit board became the foundation for all consumer electronics. Since then printed electronics have become a cornerstone in many new commercial products. The biggest trend in recent history when it comes to printed electronics is their widespread use in solar cells. In 2011, researchers from MIT created a flexible solar cell by inkjet printing on normal paper. In 2018, researchers at Rice University developed organic solar cells which can be painted or printed onto surfaces. These solar cells have been shown to max out at fifteen percent efficiency. Konarka Technologies, now a defunct company in the US, was the pioneering company in producing inkjet solar cells. Today there are more than fifty companies across a diverse number of countries that are producing printed solar cells. While printed electronics have been around since the 1960s, they are predicted to have a major boom in total revenue. As of 2011, total printed electronics revenue was reported to be $12.385 billion. A report by IDTechEx predicts the PE market will reach $330 billion in 2027. A big reason for this increase in revenue is the incorporation of printed electronics into cellphones. Nokia was one of the companies that pioneered the idea of creating a "Morph" phone using printed electronics. Since then, Apple has implemented this technology into their iPhone XS, XS Max, and XR devices. Printed electronics can be used to make all of the following components of a cellphone: 3D main antenna, GPS antenna, energy storage, 3D interconnections, multi-layer PCB, edge circuits, ITO jumpers, hermetic seals, LED packaging, and tactile feedback. With the revolutionary discoveries and advantages that printed electronics offer, many large companies have made recent investments in this technology. In 2007, Soligie Inc. and Thinfilm Electronics entered into an agreement to combine IPs for soluble memory materials and functional materials printing to develop printed memory in commercial volumes. LG announced a significant investment, potentially $8.71 billion, in OLEDs on plastic. Sharp (Foxconn) will invest $570m in a pilot line for OLED displays. BOE announced a potential $6.8 billion in a flexible AMOLED fab. Heliatek has secured €80m in additional funding for OPV manufacturing in Dresden. PragmatIC has raised ~ €20m from investors including Avery Dennison. Thinfilm invested in a new production site in Silicon Valley (formerly owned by Qualcomm). Cambrios is back in business after its acquisition by TPK. Applications Applications of printed electronics that are in use or under consideration include wireless sensors in packaging, skin patches that communicate with the internet, and buildings that detect leaks to enable preventative maintenance. Most of these applications are still in the prototyping and development stages. There is particularly growing interest in flexible smart electronic systems, including photovoltaic, sensing and processing devices, driven by the desire to extend and integrate the latest advances in (opto-)electronic technologies into a broad range of low-cost (even disposable) consumer products of our everyday life, and as tools to bring together the digital and physical worlds.
Norwegian company ThinFilm demonstrated roll-to-roll printed organic memory in 2009. Another company, Rotimpres based in Spain, has successfully introduced applications on different markets as for instance; heaters for smart furniture or to prevent mist and capacitive switch for keyboards on white goods and industrial machines. Standards development and activities Technical standards and road-mapping initiatives are intended to facilitate value chain development (for sharing of product specifications, characterization standards, etc.) This strategy of standards development mirrors the approach used by silicon-based electronics over the past 50 years. Initiatives include: The IEEE Standards Association has published IEEE 1620-2004 and IEEE 1620.1-2006. Similar to the well-established International Technology Roadmap for Semiconductors (ITRS), the International Electronics Manufacturing Initiative (iNEMI) has published a roadmap for printed and other organic electronics. IPC—Association Connecting Electronics Industries has published three standards for printed electronics. All three have been published in cooperation with the Japan Electronic Packaging and Circuits Association (JPCA): IPC/JPCA-4921, Requirements for Printed Electronics Base Materials IPC/JPCA-4591, Requirements for Printed Electronics Functional Conductive Materials IPC/JPCA-2291, Design Guideline for Printed Electronics These standards, and others in development, are part of IPC's Printed Electronics Initiative. See also Amorphous silicon Anilox rolls Chip tag Coating and printing processes Conductive ink Electronic paper Flexible battery Flexible electronics Laminar electronics Nanoparticle silicon Oligomer Organic electronics References Further reading Printed Organic and Molecular Electronics, edited by D. Gamota, P. Brazis, K. Kalyanasundaram, and J. Zhang (Kluwer Academic Publishers: New York, 2004). External links Cleaner Electronics Research Group - Brunel University Printed Electronics conference/exhibition Asia USA New Nano Silver Powder Enables Flexible Printed Circuits (Ferro Corporation) Western Michigan University's Center for Advancement of Printed Electronics (CAPE) includes AccuPress gravure printer Major Trends in Gravure Printed Electronics June 2010 Printed Electronics – avistando el futuro. Printed Electronics en Español Organic Solar Cells - Theory and Practice (Coursera) Electronics manufacturing Flexible electronics
Printed electronics
[ "Engineering" ]
4,600
[ "Electronic engineering", "Electronics manufacturing", "Flexible electronics" ]
8,054,792
https://en.wikipedia.org/wiki/Background%20selection
Background selection describes the loss of genetic diversity at a locus due to negative selection against deleterious alleles with which it is in linkage disequilibrium. The name emphasizes the fact that the genetic background, or genomic environment, of a mutation has a significant impact on whether it will be preserved versus lost from a population. Background selection contradicts the assumption of the neutral theory of molecular evolution that the fixation or loss of a neutral allele can be described by one-locus models of genetic drift, independently from other loci. As well as reducing neutral nucleotide diversity, background selection reduces the fixation probability of beneficial mutations, and increases the fixation probability of deleterious mutations. Effect on neutral diversity The degree to which neutral nucleotide diversity, which is quantified as the 'effective population size', is reduced due to background selection, depends on whether the neutral sites are linked to deleterious sites. For unlinked sites, it is reduced by exp(-8Ush), where U is the genome-wide deleterious mutation rate, s is the selection coefficient of deleterious mutations, and h is the dominance coefficient. This corresponds to the probability that an individual cannot appreciably contribute to the next generation because its genetic load is too high. The reduction is smaller for large s because deleterious mutations are removed more quickly from the population. For linked sites, diversity is reduced by exp(-u/r), where u/r is the ratio of deleterious mutation to recombination within a genomic window surrounding the neutral allele of interest. This corresponds to the probability that a gene copy is able to escape via recombination from nearby deleterious alleles. Background selection at linked sites dominates when U<1, while background selection at unlinked sites dominates when U>1. Background selection contributes to a selective explanation of the positive correlation between local rates of recombination and polymorphism across the genome. In areas of high recombination, new mutations are more likely to ‘escape' the effects of nearby selection and be retained in the population. The same correlation is also produced by genetic hitchhiking. The two theories are easiest to distinguish in regions of low recombination. Failing to account for background selection can lead to errors in the inference of the demographic history of populations. Implications for asexual populations Background selection in asexual populations produces Muller's ratchet, the accumulation of irreversible deleterious mutations. Background selection reduces the effective population size down to represent only those individuals with the fewest mutations, and sometimes this size stochastically falls to zero, producing one click of the ratchet. References Biodiversity Neutral theory it:Selezione ambientale
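The two reduction factors given above translate directly into code. The following Python sketch simply evaluates the expressions quoted in the text, exp(-8Ush) for unlinked sites and exp(-u/r) for linked sites; the example parameter values are hypothetical and only serve to show how the factors are computed.

import math

def reduction_unlinked(U, s, h):
    # Diversity reduction factor for sites unlinked to deleterious mutations,
    # as quoted in the text: exp(-8*U*s*h), with U the genome-wide deleterious
    # mutation rate, s the selection coefficient and h the dominance coefficient.
    return math.exp(-8.0 * U * s * h)

def reduction_linked(u, r):
    # Diversity reduction factor for linked sites, as quoted in the text:
    # exp(-u/r), where u/r is the ratio of the deleterious mutation rate to the
    # recombination rate within the window surrounding the neutral site.
    return math.exp(-u / r)

print(reduction_unlinked(U=0.5, s=0.02, h=0.25))  # about 0.98
print(reduction_linked(u=1e-8, r=1e-8))           # about 0.37 when the two rates are equal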
Background selection
[ "Biology" ]
574
[ "Non-Darwinian evolution", "Neutral theory", "Biology theories", "Biodiversity" ]
8,056,148
https://en.wikipedia.org/wiki/Differential%20ideal
In the theory of differential forms, a differential ideal I is an algebraic ideal in the ring of smooth differential forms on a smooth manifold, in other words a graded ideal in the sense of ring theory, that is further closed under exterior differentiation d, meaning that for any form α in I, the exterior derivative dα is also in I. In the theory of differential algebra, a differential ideal I in a differential ring R is an ideal which is mapped to itself by each differential operator. Exterior differential systems and partial differential equations An exterior differential system consists of a smooth manifold and a differential ideal . An integral manifold of an exterior differential system consists of a submanifold having the property that the pullback to of all differential forms contained in vanishes identically. One can express any partial differential equation system as an exterior differential system with independence condition. Suppose that we have a kth order partial differential equation system for maps , given by . The graph of the -jet of any solution of this partial differential equation system is a submanifold of the jet space, and is an integral manifold of the contact system on the -jet bundle. This idea allows one to analyze the properties of partial differential equations with methods of differential geometry. For instance, we can apply the Cartan–Kähler_theorem to a system of partial differential equations by writing down the associated exterior differential system. We can frequently apply Cartan's equivalence method to exterior differential systems to study their symmetries and their diffeomorphism invariants. Perfect differential ideals A differential ideal is perfect if it has the property that if it contains an element then it contains any element such that for some . In other words, perfect differential ideals are radical differential ideals. References Robert Bryant, Phillip Griffiths and Lucas Hsu, Toward a geometry of differential equations(DVI file), in Geometry, Topology, & Physics, Conf. Proc. Lecture Notes Geom. Topology, edited by S.-T. Yau, vol. IV (1995), pp. 1–76, Internat. Press, Cambridge, MA Robert Bryant, Shiing-Shen Chern, Robert Gardner, Phillip Griffiths, Hubert Goldschmidt, Exterior Differential Systems, Springer--Verlag, Heidelberg, 1991. Thomas A. Ivey, J. M. Landsberg, Cartan for beginners. Differential geometry via moving frames and exterior differential systems. Second edition. Graduate Studies in Mathematics, 175. American Mathematical Society, Providence, RI, 2016. H. W. Raudenbush, Jr. "Ideal Theory and Algebraic Differential Equations", Transactions of the American Mathematical Society, Vol. 36, No. 2. (Apr., 1934), pp. 361–368. Stable URL: J. F. Ritt, Differential Algebra, Dover, New York, 1950. Differential forms Differential algebra Differential systems
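A minimal worked example of these notions (standard material, though not spelled out above): consider the 1-jet space J^1(\mathbb{R},\mathbb{R}) with coordinates (x, u, p) and the contact form

\theta = du - p\,dx, \qquad d\theta = -\,dp \wedge dx .

The ideal \mathcal{I} generated algebraically by \theta and d\theta is closed under exterior differentiation, hence a differential ideal, and the pair (J^1(\mathbb{R},\mathbb{R}), \mathcal{I}) is an exterior differential system. The graph of the 1-jet of a smooth function f, namely \{(x, f(x), f'(x))\}, pulls \theta back to f'(x)\,dx - f'(x)\,dx = 0, so it is an integral manifold satisfying the independence condition dx \neq 0; imposing the additional equation p = F(x, u) encodes the first-order equation u' = F(x, u) as an exterior differential system, in the way described above for general partial differential equation systems.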
Differential ideal
[ "Mathematics", "Engineering" ]
583
[ "Differential algebra", "Fields of abstract algebra", "Tensors", "Differential forms" ]
8,057,418
https://en.wikipedia.org/wiki/Quantum%20potential
The quantum potential or quantum potentiality is a central concept of the de Broglie–Bohm formulation of quantum mechanics, introduced by David Bohm in 1952. Initially presented under the name quantum-mechanical potential, subsequently quantum potential, it was later elaborated upon by Bohm and Basil Hiley in its interpretation as an information potential which acts on a quantum particle. It is also referred to as quantum potential energy, Bohm potential, quantum Bohm potential or Bohm quantum potential. In the framework of the de Broglie–Bohm theory, the quantum potential is a term within the Schrödinger equation which acts to guide the movement of quantum particles. The quantum potential approach introduced by Bohm provides a physically less fundamental exposition of the idea presented by Louis de Broglie: de Broglie had postulated in 1925 that the relativistic wave function defined on spacetime represents a pilot wave which guides a quantum particle, represented as an oscillating peak in the wave field, but he had subsequently abandoned his approach because he was unable to derive the guidance equation for the particle from a non-linear wave equation. The seminal articles of Bohm in 1952 introduced the quantum potential and included answers to the objections which had been raised against the pilot wave theory. The Bohm quantum potential is closely linked with the results of other approaches, in particular relating to works of Erwin Madelung in 1927 and Carl Friedrich von Weizsäcker in 1935. Building on the interpretation of the quantum theory introduced by Bohm in 1952, David Bohm and Basil Hiley in 1975 presented how the concept of a quantum potential leads to the notion of an "unbroken wholeness of the entire universe", proposing that the fundamental new quality introduced by quantum physics is nonlocality. Relation to the Schrödinger equation The Schrödinger equation is re-written using the polar form for the wave function with real-valued functions and , where is the amplitude (absolute value) of the wave function , and its phase. This yields two equations: from the imaginary and real part of the Schrödinger equation follow the continuity equation and the quantum Hamilton–Jacobi equation respectively. Continuity equation The imaginary part of the Schrödinger equation in polar form yields which, provided , can be interpreted as the continuity equation for the probability density and the velocity field Quantum Hamilton–Jacobi equation The real part of the Schrödinger equation in polar form yields a modified Hamilton–Jacobi equation also referred to as quantum Hamilton–Jacobi equation. It differs from the classical Hamilton–Jacobi equation only by the term This term , called quantum potential, thus depends on the curvature of the amplitude of the wave function. In the limit , the function is a solution of the (classical) Hamilton–Jacobi equation; therefore, the function is also called the Hamilton–Jacobi function, or action, extended to quantum physics. 
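A sketch of the equations referred to above, in the usual notation (writing the wave function as \psi = R\,e^{iS/\hbar} with R \ge 0 and S real, and \rho = R^2):

\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0 \qquad \text{(continuity equation, from the imaginary part, with velocity field } \mathbf{v} = \nabla S / m\text{)},

\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0, \qquad Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R} \qquad \text{(quantum Hamilton–Jacobi equation, from the real part)}.

In the limit \hbar \to 0 the term Q becomes negligible and the second equation reduces to the classical Hamilton–Jacobi equation for the action S, as stated above.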
Properties Hiley emphasised several aspects that regard the quantum potential of a quantum particle: it is derived mathematically from the real part of the Schrödinger equation under polar decomposition of the wave function, is not derived from a Hamiltonian or other external source, and could be said to be involved in a self-organising process involving a basic underlying field; it does not change if is multiplied by a constant, as this term is also present in the denominator, so that is independent of the magnitude of and thus of field intensity; therefore, the quantum potential fulfils a precondition for nonlocality: it need not fall off as distance increases; it carries information about the whole experimental arrangement in which the particle finds itself. In 1979, Hiley and his co-workers Philippidis and Dewdney presented a full calculation on the explanation of the two-slit experiment in terms of Bohmian trajectories that arise for each particle moving under the influence of the quantum potential, resulting in the well-known interference patterns. Also the shift of the interference pattern which occurs in presence of a magnetic field in the Aharonov–Bohm effect could be explained as arising from the quantum potential. Relation to the measurement process The collapse of the wave function of the Copenhagen interpretation of quantum theory is explained in the quantum potential approach by the demonstration that, after a measurement, "all the packets of the multi-dimensional wave function that do not correspond to the actual result of measurement have no effect on the particle" from then on. Bohm and Hiley pointed out that Measurement then "involves a participatory transformation in which both the system under observation and the observing apparatus undergo a mutual participation so that the trajectories behave in a correlated manner, becoming correlated and separated into different, non-overlapping sets (which we call 'channels')". Quantum potential of an n-particle system The Schrödinger wave function of a many-particle quantum system cannot be represented in ordinary three-dimensional space. Rather, it is represented in configuration space, with three dimensions per particle. A single point in configuration space thus represents the configuration of the entire n-particle system as a whole. A two-particle wave function of identical particles of mass has the quantum potential where and refer to particle 1 and particle 2 respectively. This expression generalizes in straightforward manner to particles: In case the wave function of two or more particles is separable, then the system's total quantum potential becomes the sum of the quantum potentials of the two particles. Exact separability is extremely unphysical given that interactions between the system and its environment destroy the factorization; however, a wave function that is a superposition of several wave functions of approximately disjoint support will factorize approximately. Derivation for a separable quantum system That the wave function is separable means that factorizes in the form . Then it follows that also factorizes, and the system's total quantum potential becomes the sum of the quantum potentials of the two particles. In case the wave function is separable, that is, if factorizes in the form , the two one-particle systems behave independently. 
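In explicit notation (a sketch using the conventions of the single-particle case above, with R = |\psi| and m the common particle mass), the two-particle and n-particle expressions referred to above read

Q(\mathbf{r}_1, \mathbf{r}_2) = -\frac{\hbar^2}{2m} \left( \frac{\nabla_1^2 R}{R} + \frac{\nabla_2^2 R}{R} \right), \qquad Q(\mathbf{r}_1, \dots, \mathbf{r}_n) = -\frac{\hbar^2}{2m} \sum_{i=1}^{n} \frac{\nabla_i^2 R}{R}.

For a separable wave function \psi(\mathbf{r}_1, \mathbf{r}_2) = \psi_A(\mathbf{r}_1)\,\psi_B(\mathbf{r}_2) one has R = R_A R_B, so \nabla_1^2 R / R = \nabla_1^2 R_A / R_A and likewise for particle 2, and the total quantum potential splits as Q = Q_A(\mathbf{r}_1) + Q_B(\mathbf{r}_2), which is the independence of the two one-particle systems noted above. Using the Born rule \rho = R^2, the same single-particle potential can equivalently be written Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}, the form used in the probability-density formulation discussed below, with the associated quantum force \mathbf{F} = -\nabla Q.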
More generally, the quantum potential of an -particle system with separable wave function is the sum of quantum potentials, separating the system into independent one-particle systems. Formulation in terms of probability density Quantum potential in terms of the probability density function Bohm, as well as other physicists after him, have sought to provide evidence that the Born rule linking to the probability density function can be understood, in a pilot wave formulation, as not representing a basic law, but rather a theorem (called quantum equilibrium hypothesis) which applies when a quantum equilibrium is reached during the course of the time development under the Schrödinger equation. With Born's rule, and straightforward application of the chain and product rules the quantum potential, expressed in terms of the probability density function, becomes: Quantum force The quantum force , expressed in terms of the probability distribution, amounts to: Formulation in configuration space and in momentum space, as the result of projections M. R. Brown and B. Hiley showed that, as alternative to its formulation terms of configuration space (-space), the quantum potential can also be formulated in terms of momentum space (-space). In line with David Bohm's approach, Basil Hiley and mathematician Maurice de Gosson showed that the quantum potential can be seen as a consequence of a projection of an underlying structure, more specifically of a non-commutative algebraic structure, onto a subspace such as ordinary space (-space). In algebraic terms, the quantum potential can be seen as arising from the relation between implicate and explicate orders: if a non-commutative algebra is employed to describe the non-commutative structure of the quantum formalism, it turns out that it is impossible to define an underlying space, but that rather "shadow spaces" (homomorphic spaces) can be constructed and that in so doing the quantum potential appears. The quantum potential approach can be seen as a way to construct the shadow spaces. The quantum potential thus results as a distortion due to the projection of the underlying space into -space, in similar manner as a Mercator projection inevitably results in a distortion in a geographical map. There exists complete symmetry between the -representation, and the quantum potential as it appears in configuration space can be seen as arising from the dispersion of the momentum -representation. The approach has been applied to extended phase space, also in terms of a Duffin–Kemmer–Petiau algebra approach. Relation to other quantities and theories Relation to the Fisher information It can be shown that the mean value of the quantum potential is proportional to the probability density's Fisher information about the observable Using this definition for the Fisher information, we can write: Quantum potential as energy of internal motion associated with spin Giovanni Salesi, Erasmo Recami and co-workers showed in 1998 that, in agreement with the König's theorem, the quantum potential can be identified with the kinetic energy of the internal motion ("zitterbewegung") associated with the spin of a spin-1/2 particle observed in a center-of-mass frame. 
More specifically, they showed that the internal zitterbewegung velocity for a spinning, non-relativistic particle of constant spin with no precession, and in absence of an external field, has the squared value: from which the second term is shown to be of negligible size; then with it follows that Salesi gave further details on this work in 2009. In 1999, Salvatore Esposito generalized their result from spin-1/2 particles to particles of arbitrary spin, confirming the interpretation of the quantum potential as a kinetic energy for an internal motion. Esposito showed that (using the notation =1) the quantum potential can be written as: and that the causal interpretation of quantum mechanics can be reformulated in terms of a particle velocity where the "drift velocity" is and the "relative velocity" is , with and representing the spin direction of the particle. In this formulation, according to Esposito, quantum mechanics must necessarily be interpreted in probabilistic terms, for the reason that a system's initial motion condition cannot be exactly determined. Esposito explained that "the quantum effects present in the Schrödinger equation are due to the presence of a peculiar spatial direction associated with the particle that, assuming the isotropy of space, can be identified with the spin of the particle itself". Esposito generalized it from matter particles to gauge particles, in particular photons, for which he showed that, if modelled as , with probability function , they can be understood in a quantum potential approach. James R. Bogan, in 2002, published the derivation of a reciprocal transformation from the Hamilton-Jacobi equation of classical mechanics to the time-dependent Schrödinger equation of quantum mechanics which arises from a gauge transformation representing spin, under the simple requirement of conservation of probability. This spin-dependent transformation is a function of the quantum potential. Re-interpretation in terms of Clifford algebras B. Hiley and R. E. Callaghan re-interpret the role of the Bohm model and its notion of quantum potential in the framework of Clifford algebra, taking account of recent advances that include the work of David Hestenes on spacetime algebra. They show how, within a nested hierarchy of Clifford algebras , for each Clifford algebra an element of a minimal left ideal and an element of a right ideal representing its Clifford conjugation can be constructed, and from it the Clifford density element (CDE) , an element of the Clifford algebra which is isomorphic to the standard density matrix but independent of any specific representation. On this basis, bilinear invariants can be formed which represent properties of the system. Hiley and Callaghan distinguish bilinear invariants of a first kind, of which each stands for the expectation value of an element of the algebra which can be formed as , and bilinear invariants of a second kind which are constructed with derivatives and represent momentum and energy. Using these terms, they reconstruct the results of quantum mechanics without depending on a particular representation in terms of a wave function nor requiring reference to an external Hilbert space. Consistent with earlier results, the quantum potential of a non-relativistic particle with spin (Pauli particle) is shown to have an additional spin-dependent term, and the momentum of a relativistic particle with spin (Dirac particle) is shown to consist in a linear motion and a rotational part. 
The two dynamical equations governing the time evolution are re-interpreted as conservation equations. One of them stands for the conservation of energy; the other stands for the conservation of probability and of spin. The quantum potential plays the role of an internal energy which ensures the conservation of total energy. Relativistic and field-theoretic extensions Quantum potential and relativity Bohm and Hiley demonstrated that the non-locality of quantum theory can be understood as limit case of a purely local theory, provided the transmission of active information is allowed to be greater than the speed of light, and that this limit case yields approximations to both quantum theory and relativity. The quantum potential approach was extended by Hiley and co-workers to quantum field theory in Minkowski spacetime and to curved spacetime. Carlo Castro and Jorge Mahecha derived the Schrödinger equation from the Hamilton-Jacobi equation in conjunction with the continuity equation, and showed that the properties of the relativistic Bohm quantum potential in terms of the ensemble density can be described by the Weyl properties of space. In Riemann flat space, the Bohm potential is shown to equal the Weyl curvature. According to Castro and Mahecha, in the relativistic case, the quantum potential (using the d'Alembert operator  and in the notation ) takes the form and the quantum force exerted by the relativistic quantum potential is shown to depend on the Weyl gauge potential and its derivatives. Furthermore, the relationship among Bohm's potential and the Weyl curvature in flat spacetime corresponds to a similar relationship among Fisher Information and Weyl geometry after introduction of a complex momentum. Diego L. Rapoport, on the other hand, associates the relativistic quantum potential with the metric scalar curvature (Riemann curvature). In relation to the Klein–Gordon equation for a particle with mass and charge, Peter R. Holland spoke in his book of 1993 of a "quantum potential-like term" that is proportional . He emphasized however that to give the Klein–Gordon theory a single-particle interpretation in terms of trajectories, as can be done for nonrelativistic Schrödinger quantum mechanics, would lead to unacceptable inconsistencies. For instance, wave functions that are solutions to the Klein–Gordon or the Dirac equation cannot be interpreted as the probability amplitude for a particle to be found in a given volume at time in accordance with the usual axioms of quantum mechanics, and similarly in the causal interpretation it cannot be interpreted as the probability for the particle to be in that volume at that time. Holland pointed out that, while efforts have been made to determine a Hermitian position operator that would allow an interpretation of configuration space quantum field theory, in particular using the Newton–Wigner localization approach, but that no connection with possibilities for an empirical determination of position in terms of a relativistic measurement theory or for a trajectory interpretation has so far been established. Yet according to Holland this does not mean that the trajectory concept is to be discarded from considerations of relativistic quantum mechanics. Hrvoje Nikolić derived as expression for the quantum potential, and he proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wave functions. 
He also developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which the squared amplitude of the wave function is no longer a probability density in space but a probability density in space-time. Quantum potential in quantum field theory Starting from the space representation of the field coordinate, a causal interpretation of the Schrödinger picture of relativistic quantum theory has been constructed. The Schrödinger picture for a neutral, spin 0, massless field, with real-valued functionals, can be shown to lead to what has been called the superquantum potential by Bohm and his co-workers (Basil Hiley: The conceptual structure of the Bohm interpretation of quantum mechanics, in Kalervo Vihtori Laurikainen et al. (eds.): Symposium on the Foundations of Modern Physics 1994: 70 years of matter waves, Editions Frontières, pp. 99–117). Basil Hiley showed that the energy–momentum relations in the Bohm model can be obtained directly from the energy–momentum tensor of quantum field theory and that the quantum potential is an energy term that is required for local energy–momentum conservation. He has also hinted that for particles with energies equal to or higher than the pair creation threshold, Bohm's model constitutes a many-particle theory that also describes pair creation and annihilation processes. Interpretation and naming of the quantum potential In his article of 1952, providing an alternative interpretation of quantum mechanics, Bohm already spoke of a "quantum-mechanical" potential. Bohm and Basil Hiley also called the quantum potential an information potential, given that it influences the form of processes and is itself shaped by the environment. Bohm indicated "The ship or aeroplane (with its automatic Pilot) is a self-active system, i.e. it has its own energy. But the form of its activity is determined by the information content concerning its environment that is carried by the radar waves. This is independent of the intensity of the waves. We can similarly regard the quantum potential as containing active information. It is potentially active everywhere, but actually active only where and when there is a particle." (italics in original). Hiley refers to the quantum potential as internal energy and as "a new quality of energy only playing a role in quantum processes". He explains that the quantum potential is a further energy term alongside the well-known kinetic energy and the (classical) potential energy and that it is a nonlocal energy term that arises necessarily in view of the requirement of energy conservation; he added that much of the physics community's resistance against the notion of the quantum potential may have been due to scientists' expectations that energy should be local. Hiley has emphasized that the quantum potential, for Bohm, was "a key element in gaining insights into what could underlie the quantum formalism. Bohm was convinced by his deeper analysis of this aspect of the approach that the theory could not be mechanical. Rather, it is organic in the sense of Whitehead. Namely, that it was the whole that determined the properties of the individual particles and their relationship, not the other way round." Peter R. Holland, in his comprehensive textbook, also refers to it as quantum potential energy. The quantum potential is also referred to in association with Bohm's name as Bohm potential, quantum Bohm potential or Bohm quantum potential. 
Applications The quantum potential approach can be used to model quantum effects without requiring the Schrödinger equation to be explicitly solved, and it can be integrated in simulations, such as Monte Carlo simulations using the hydrodynamic and drift diffusion equations. This is done in form of a "hydrodynamic" calculation of trajectories: starting from the density at each "fluid element", the acceleration of each "fluid element" is computed from the gradient of and , and the resulting divergence of the velocity field determines the change to the density. The approach using Bohmian trajectories and the quantum potential is used for calculating properties of quantum systems which cannot be solved exactly, which are often approximated using semi-classical approaches. Whereas in mean field approaches the potential for the classical motion results from an average over wave functions, this approach does not require the computation of an integral over wave functions. The expression for the quantum force has been used, together with Bayesian statistical analysis and Expectation-maximisation methods, for computing ensembles of trajectories that arise under the influence of classical and quantum forces. Further reading Fundamental articles (full text) (full text) D. Bohm, B. J. Hiley, P. N. Kaloyerou: An ontological basis for the quantum theory, Physics Reports (Review section of Physics Letters), volume 144, number 6, pp. 321–375, 1987 (full text ), therein: D. Bohm, B. J. Hiley: I. Non-relativistic particle systems, pp. 321–348, and D. Bohm, B. J. Hiley, P. N. Kaloyerou: II. A causal interpretation of quantum fields, pp. 349–375 Recent articles Spontaneous creation of the universe from nothing, arXiv:1404.1207v1, 4 April 2014 Maurice de Gosson, Basil Hiley: Short Time Quantum Propagator and Bohmian Trajectories, arXiv:1304.4771v1 (submitted 17 April 2013) Robert Carroll: Fluctuations, gravity, and the quantum potential, 13 January 2005, asXiv:gr-qc/0501045v1 Overview Davide Fiscaletti: About the Different Approaches to Bohm's Quantum Potential in Non-Relativistic Quantum Mechanics, Quantum Matter, Volume 3, Number 3, June 2014, pp. 177–199(23), . Ignazio Licata, Davide Fiscaletti (with a foreword by B.J. Hiley): Quantum potential: Physics, Geometry and Algebra, AMC, Springer, 2013, (print) / (online) Peter R. Holland: The Quantum Theory of Motion: An Account of the De Broglie-Bohm Causal Interpretation of Quantum Mechanics, Cambridge University Press, Cambridge (first published June 25, 1993), hardback, paperback, transferred to digital printing 2004 David Bohm, Basil Hiley: The Undivided Universe: An Ontological Interpretation of Quantum Theory, Routledge, 1993, David Bohm, F. David Peat: Science, Order and Creativity'', 1987, Routledge, 2nd ed. 2000 (transferred to digital printing 2008, Routledge), References Quantum mechanical potentials Physical quantities
Quantum potential
[ "Physics", "Mathematics" ]
4,629
[ "Physical phenomena", "Physical quantities", "Quantity", "Quantum mechanics", "Quantum mechanical potentials", "Physical properties" ]
15,522,867
https://en.wikipedia.org/wiki/Motion%20interpolation
Motion interpolation or motion-compensated frame interpolation (MCFI) is a form of video processing in which intermediate film, video or animation frames are generated between existing ones by means of interpolation, in an attempt to make animation more fluid, to compensate for display motion blur, and for fake slow motion effects. Hardware applications Displays Motion interpolation is a common, optional feature of various modern display devices such as HDTVs and video players, aimed at increasing perceived framerate or alleviating display motion blur, a common problem on LCD flat-panel displays. Difference from display framerate A display's framerate is not always equivalent to that of the content being displayed. In other words, a display capable of or operating at a high framerate does not necessarily mean that it can or must perform motion interpolation. For example, a TV running at 120 Hz and displaying 24 FPS content will simply display each content frame for five of the 120 display frames per second. This has no effect on the picture other than eliminating the need for 3:2 pulldown and thus film judder as a matter of course (since 120 is evenly divisible by 24). Eliminating judder results in motion that is less "jumpy" and which matches that of a theater projector. Motion interpolation can be used to reduce judder, but it is not required in order to do so. Relationship to advertised display framerate The advertised frame-rate of a specific display may refer to either the maximum number of content frames which may be displayed per second, or the number of times the display is refreshed in some way, irrespective of content. In the latter case, the actual presence or strength of any motion interpolation option may vary. In addition, the ability of a display to show content at a specific framerate does not mean that display is capable of accepting content running at that rate; most consumer displays above 60 Hz do not accept a higher frequency signal, but rather use the extra frame capability to eliminate judder, reduce ghosting, or create interpolated frames. As an example, a TV may be advertised as "240 Hz", which would mean one of two things: The TV can natively display 240 frames per second, and perform advanced motion interpolation which inserts between 2 and 8 new frames between existing ones (for content running at 60 FPS to 24 FPS, respectively). For active 3D, this framerate would be halved. The TV is natively only capable of displaying 120 frames per second, and basic motion interpolation which inserts between 1 and 4 new frames between existing ones. Typically the only difference from a "120 Hz" TV in this case is the addition of a strobing backlight, which flickers on and off at 240 Hz, once after every 120 Hz frame. The intent of a strobing backlight is to increase the apparent response rate and thus reduce ghosting, which results in smoother motion overall. However, this technique has nothing to do with actual framerate. For active 3D, this framerate is halved, and no motion interpolation or pulldown functionality is typically provided. 600 Hz is an oft-advertised figure for plasma TVs, and while technically correct, it only refers to an inter-frame response time of 1.6 milliseconds. This can significantly reduce ghosting and thus improve motion quality, but is unrelated to interpolation and content framerate. There are no consumer films shot at 600 frames per second, nor any TV processors capable of generating 576 interpolated frames per second. 
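The frame-repetition arithmetic described above can be made concrete with a short, purely illustrative Python sketch (the function name and the sample rates are not from the original article): it computes how many display refreshes each content frame occupies and flags the uneven pulldown that causes judder when the display rate is not an integer multiple of the content rate.

def repeats_per_frame(display_hz: int, content_fps: int) -> float:
    """Number of display refreshes available per content frame."""
    return display_hz / content_fps

for display_hz in (60, 120, 240):
    for content_fps in (24, 30, 60):
        r = repeats_per_frame(display_hz, content_fps)
        even = r.is_integer()
        print(f"{display_hz} Hz showing {content_fps} fps: "
              f"{r:.2f} refreshes/frame"
              + ("" if even else "  -> uneven pulldown (judder)"))

For example, a 120 Hz display shows 24 fps content for exactly 5 refreshes per frame (no judder), whereas a 60 Hz display yields 2.5 refreshes per frame and therefore needs 3:2 pulldown.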
Software applications Video playback software Motion interpolation features are included with several video player applications. WinDVD uses Philips' TrimensionDNM for frame interpolation. PowerDVD uses TrueTheater Motion for interpolation of DVD and video files to up to 72 frame/s. Splash PRO uses Mirillis Motion² technology for up to Full HD video interpolation. DmitriRender uses GPU-oriented frame rate conversion algorithm with native DXVA support for frame interpolation. Bluesky Frame Rate Converter is a DirectShow filter that can convert the frame rate using AMD Fluid Motion. SVP (SmoothVideo Project) comes integrated by default with MPC-HC; paid version can integrate with more players, including VLC. Video editing software Some video editing software and plugins offer motion interpolation effects to enhance digitally-slowed video. FFmpeg is a free software non-interactive tool with such functionality. Adobe After Effects has this in a feature called "Pixel Motion". AI software company Topaz Labs produces Video AI, a video upscaling application with motion interpolation. The effects plugin "Twixtor" is available for most major video editing suites, and offers similar functionality. Neural networks Depth-Aware Video Frame Interpolation Channel Attention Is All You Need Real-Time Intermediate Flow Estimation Intermediate Feature Refine Network Deep learning super sampling used specifically to interpolate frames in real-time for video games Side effects Visual artifacts Motion interpolation on certain brands of TVs is sometimes accompanied by visual anomalies in the picture, described by CNET's David Carnoy as a "little tear or glitch" in the picture, appearing for a fraction of a second. He adds that the effect is most noticeable when the technology suddenly kicks in during a fast camera pan. Television and display manufacturers refer to this phenomenon as a type of digital artifact. Due to the improvement of associated technology over time, such artifacts appear less frequently with modern consumer TVs, though they have yet to be eliminated "the artifacts happens more often when the gap between frames are bigger". Soap opera effect As a byproduct of the perceived increase in frame rate, motion interpolation may introduce a "video" (versus "film") look. This look is commonly referred to as the "soap opera effect" (SOE), in reference to the distinctive appearance of most broadcast television soap operas or pre-2000s multicam sitcoms, which were typically shot using less expensive 60i video rather than film. Many complain that the soap opera effect ruins the theatrical look of cinematic works, by making it appear as if the viewer is either on set or watching a behind the scenes featurette. Almost all manufacturers provide ways to disable the feature, but because methods and terminology differ, the UHD Alliance proposed that all televisions have a "Filmmaker Mode" button on remote controls to disable motion smoothing. See also Inbetweening Motion compensation Motion interpolation (computer graphics) Flicker-free Television standards conversion 3:2 pulldown References External links High Frame Rate Motion Compensated Frame Interpolation in High-Definition Video Processing A Low Complexity Motion Compensated Frame Interpolation Method Display technology Video processing Interpolation Video Film and video technology Film post-production technology
Motion interpolation
[ "Engineering" ]
1,419
[ "Electronic engineering", "Display technology" ]
15,523,181
https://en.wikipedia.org/wiki/Riemannian%20Penrose%20inequality
In mathematical general relativity, the Penrose inequality, first conjectured by Sir Roger Penrose, estimates the mass of a spacetime in terms of the total area of its black holes and is a generalization of the positive mass theorem. The Riemannian Penrose inequality is an important special case. Specifically, if (M, g) is an asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and ADM mass m, and A is the area of the outermost minimal surface (possibly with multiple connected components), then the Riemannian Penrose inequality asserts that m ≥ √(A/(16π)). This is purely a geometrical fact, and it corresponds to the case of a complete three-dimensional, space-like, totally geodesic submanifold of a (3 + 1)-dimensional spacetime. Such a submanifold is often called a time-symmetric initial data set for a spacetime. The condition of (M, g) having nonnegative scalar curvature is equivalent to the spacetime obeying the dominant energy condition. This inequality was first proved by Gerhard Huisken and Tom Ilmanen in 1997 in the case where A is the area of the largest component of the outermost minimal surface. Their proof relied on the machinery of weakly defined inverse mean curvature flow, which they developed. In 1999, Hubert Bray gave the first complete proof of the above inequality using a conformal flow of metrics. Both of the papers were published in 2001. Physical motivation The original physical argument that led Penrose to conjecture such an inequality invoked the Hawking area theorem and the cosmic censorship hypothesis. Case of equality Both the Bray and Huisken–Ilmanen proofs of the Riemannian Penrose inequality state that under the hypotheses, if equality holds, that is, if m = √(A/(16π)), then the manifold in question is isometric to a slice of the Schwarzschild spacetime outside its outermost minimal surface, which is a sphere of Schwarzschild radius. Penrose conjecture More generally, Penrose conjectured that an inequality as above should hold for spacelike submanifolds of spacetimes that are not necessarily time-symmetric. In this case, nonnegative scalar curvature is replaced with the dominant energy condition, and one possibility is to replace the minimal surface condition with an apparent horizon condition. Proving such an inequality remains an open problem in general relativity, called the Penrose conjecture. In popular culture In episode 6 of season 8 of the television sitcom The Big Bang Theory, Dr. Sheldon Cooper claims to be in the process of solving the Penrose Conjecture while at the same time composing his Nobel Prize acceptance speech. References Riemannian geometry Geometric inequalities General relativity Theorems in geometry
Riemannian Penrose inequality
[ "Physics", "Mathematics" ]
551
[ "General relativity", "Relativity stubs", "Theorems in geometry", "Theory of relativity", "Mathematical problems", "Geometry", "Inequalities (mathematics)", "Mathematical theorems", "Geometric inequalities" ]
15,524,134
https://en.wikipedia.org/wiki/Excitation%20function
Excitation function (also known as a yield curve) is a term used in nuclear physics to describe a graphical plot of the yield of a radionuclide or reaction channel as a function of the bombarding projectile energy or the calculated excitation energy of the compound nucleus. The yield is the measured intensity of a particular transition. The excitation function typically resembles a Gaussian bell curve and is mathematically described by a Breit–Wigner function, owing to the resonant nature of the production of the compound nucleus. The energy value at the maximum yield on the excitation curve corresponds to the energy of the resonance. The energy interval between 25% and 75% of the maximum yield on the excitation curve is equivalent to the resonance width. A nuclear reaction should be described by a complete study of the exit channel (1n, 2n, 3n etc.) excitation functions in order to allow a determination of the optimum energy to be used to maximize the yield. See also Resonant reaction References Nuclear physics
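For reference, the single-level Breit–Wigner resonance shape mentioned above can be written in the standard form (not taken from this article; E_R denotes the resonance energy and Γ the full width of the resonance):

\sigma(E) \propto \frac{(\Gamma/2)^2}{(E - E_R)^2 + (\Gamma/2)^2}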
Excitation function
[ "Physics" ]
213
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
15,525,006
https://en.wikipedia.org/wiki/List%20of%20binary%20codes
This is a list of some binary codes that are (or have been) used to represent text as a sequence of binary digits "0" and "1". Fixed-width binary codes use a set number of bits to represent each character in the text, while in variable-width binary codes, the number of bits may vary from character to character. Five-bit binary codes Several different five-bit codes were used for early punched tape systems. Five bits per character only allows for 32 different characters, so many of the five-bit codes used two sets of characters per value referred to as FIGS (figures) and LTRS (letters), and reserved two characters to switch between these sets. This effectively allowed the use of 60 characters. Standard five-bit standard codes are: International Telegraph Alphabet No. 1 (ITA1) – Also commonly referred to as Baudot code International Telegraph Alphabet No. 2 (ITA2) – Also commonly referred to as Murray code American Teletypewriter code (USTTY) – A variant of ITA2 used in the USA DIN 66006 – Developed for the presentation of ALGOL/ALCOR programs on paper tape and punch cards The following early computer systems each used its own five-bit code: J. Lyons and Co. LEO (Lyon's Electronic Office) English Electric DEUCE University of Illinois at Urbana-Champaign ILLIAC ZEBRA EMI 1100 Ferranti Mercury, Pegasus, and Orion systems The steganographic code, commonly known as Bacon's cipher uses groups of 5 binary-valued elements to represent letters of the alphabet. Six-bit binary codes Six bits per character allows 64 distinct characters to be represented. Examples of six-bit binary codes are: International Telegraph Alphabet No. 4 (ITA4) Six-bit BCD (Binary Coded Decimal), used by early mainframe computers. Six-bit ASCII subset of the primitive seven-bit ASCII Braille – Braille characters are represented using six dot positions, arranged in a rectangle. Each position may contain a raised dot or not, so Braille can be considered to be a six-bit binary code. See also: Six-bit character codes Seven-bit binary codes Examples of seven-bit binary codes are: International Telegraph Alphabet No. 3 (ITA3) – derived from the Moore ARQ code, and also known as the RCA ASCII – The ubiquitous ASCII code was originally defined as a seven-bit character set. The ASCII article provides a detailed set of equivalent standards and variants. In addition, there are various extensions of ASCII to eight bits (see Eight-bit binary codes) CCIR 476 – Extends ITA2 from 5 to 7 bits, using the extra 2 bits as check digits International Telegraph Alphabet No. 4 (ITA4) Eight-bit binary codes Extended ASCII – A number of standards extend ASCII to eight bits by adding a further 128 characters, such as: HP Roman ISO/IEC 8859 Mac OS Roman Windows-1252 EBCDIC – Used in early IBM computers and current IBM i and System z systems. 10-bit binary codes AUTOSPEC – Also known as Bauer code. AUTOSPEC repeats a five-bit character twice, but if the character has odd parity, the repetition is inverted. Decabit – A datagram of electronic pulses which are transmitted commonly through power lines. Decabit is mainly used in Germany and other European countries. 16-bit binary codes UCS-2 – An obsolete encoding capable of representing the basic multilingual plane of Unicode 32-bit binary codes UTF-32/UCS-4 – A four-bytes-per-character representation of Unicode. 
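A quick way to see the fixed-width property of the larger codes listed above is to check byte lengths with Python's built-in codecs (an illustrative snippet, not part of the original list):

# UTF-32 always uses 4 bytes per code point; UCS-2/UTF-16 uses 2 bytes
# for characters in the Basic Multilingual Plane.
for ch in ("A", "é", "中"):
    print(ch,
          len(ch.encode("utf-32-be")),   # 4 bytes each (fixed width)
          len(ch.encode("utf-16-be")))   # 2 bytes each for BMP characters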
Variable-length binary codes UTF-8 – Encodes characters in a way that is mostly compatible with ASCII but can also encode the full repertoire of Unicode characters with sequences of up to four 8-bit bytes. UTF-16 – Extends UCS-2 to cover the whole of Unicode with sequences of one or two 16-bit elements GB 18030 – A full-Unicode variable-length code designed for compatibility with older Chinese multibyte encodings Huffman coding – A technique for expressing more common characters using shorter bit strings than are used for less common characters Data compression systems such as Lempel–Ziv–Welch can compress arbitrary binary data. They are therefore not binary codes themselves but may be applied to binary codes to reduce storage needs. Other Morse code is a variable-length telegraphy code, which traditionally uses a series of long and short pulses to encode characters. It relies on gaps between the pulses to provide separation between letters and words, as the letter codes do not have the "prefix property". This means that Morse code is not necessarily a binary system, but in a sense may be a ternary system, with a 10 for a "dit" or a "dot", a 1110 for a dash, and a 00 for a single unit of separation. Morse code can be represented as a binary stream by allowing each bit to represent one unit of time. Thus a "dit" or "dot" is represented as a 1 bit, while a "dah" or "dash" is represented as three consecutive 1 bits. Spaces between symbols, letters, and words are represented as one, three, or seven consecutive 0 bits. For example, "NO U" in Morse code is "-. --- ..-", which could be represented in binary as "1110100011101110111000000010101110". If, however, Morse code is represented as a ternary system, "NO U" would be represented as "1110|10|00|1110|1110|1110|00|00|00|10|10|1110". See also List of computer character sets References Primitive types Data types Computing terminology Data unit Units of information
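The timing scheme described above (dit = 1, dah = 111, with one, three or seven zeros separating symbols, letters and words) can be sketched in Python. This is illustrative only: the Morse table covers just the letters needed for the example, and the handling of any trailing separator is a convention left open here.

MORSE = {"N": "-.", "O": "---", "U": "..-"}  # minimal table for the example

def to_binary(message: str) -> str:
    words = []
    for word in message.split(" "):
        letters = []
        for letter in word:
            symbols = ["1" if s == "." else "111" for s in MORSE[letter]]
            letters.append("0".join(symbols))    # one 0 between dits/dahs
        words.append("000".join(letters))        # three 0s between letters
    return "0000000".join(words)                 # seven 0s between words

print(to_binary("NO U"))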
List of binary codes
[ "Mathematics", "Technology" ]
1,219
[ "Units of information", "Quantity", "Units of measurement", "Computing terminology" ]
15,533,279
https://en.wikipedia.org/wiki/G%C3%B6rtler%20vortices
In fluid dynamics, Görtler vortices are secondary flows that appear in a boundary layer flow along a concave wall. If the boundary layer is thin compared to the radius of curvature of the wall, the pressure remains constant across the boundary layer. On the other hand, if the boundary layer thickness is comparable to the radius of curvature, the centrifugal action creates a pressure variation across the boundary layer. This leads to the centrifugal instability (Görtler instability) of the boundary layer and consequent formation of Görtler vortices. These phenomena are named after the mathematician Görtler. Görtler number The onset of Görtler vortices can be predicted using the dimensionless number called Görtler number (G). It is the ratio of centrifugal effects to the viscous effects in the boundary layer and is defined as G = (Ue θ / ν) √(θ / R), where Ue is the external velocity, θ the momentum thickness, ν the kinematic viscosity, and R the radius of curvature of the wall. Görtler instability occurs when G exceeds about 0.3. Other instances A similar phenomenon arising from the same centrifugal action is sometimes observed in rotational flows which do not follow a curved wall, such as the rib vortices seen in the wakes of cylinders and generated behind moving structures. References Boundary layers Dimensionless numbers of fluid mechanics Fluid dynamics Fluid dynamic instabilities
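A direct numerical evaluation of the definition above can be sketched in Python; the input values here are purely illustrative and not taken from the article.

import math

def goertler_number(u_e, theta, nu, radius):
    """G = (U_e * theta / nu) * sqrt(theta / R), as defined above."""
    return (u_e * theta / nu) * math.sqrt(theta / radius)

# Example: air-like kinematic viscosity, thin boundary layer, gently curved wall.
G = goertler_number(u_e=10.0, theta=1e-3, nu=1.5e-5, radius=0.5)
print(f"G = {G:.2f}",
      "-> Görtler instability expected" if G > 0.3 else "-> stable")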
Görtler vortices
[ "Chemistry", "Engineering" ]
275
[ "Fluid dynamic instabilities", "Chemical engineering", "Boundary layers", "Piping", "Fluid dynamics" ]
15,534,771
https://en.wikipedia.org/wiki/36%20Ursae%20Majoris
36 Ursae Majoris is a double star in the northern constellation of Ursa Major. With an apparent visual magnitude of 4.8, it can be seen with the naked eye in suitable dark skies. Based upon parallax measurements, this binary lies at a distance of from Earth. The brighter star of the two is a solar analog—meaning it has physical properties that make it similar to the Sun. It has 10% more mass and a radius 17% larger than the Sun, with an estimated age of four billion years. The spectrum of this star matches a stellar classification of F8 V, which indicates this is a main sequence star that is generating energy at its core through the nuclear fusion of hydrogen. The energy is being radiated into space from its outer envelope at an effective temperature of . This gives the star the characteristic yellow-white hue of an F-type star. The fainter of the two stars has an apparent magnitude 8.86 and shares a common proper motion with it. Its spectral type of K7Ve indicates it is a red dwarf. It has a mass 60% of the Sun's, a temperature of and a bolometric luminosity only 10% of the Sun's. 36 Ursae Majoris has a second companion with a magnitude of 11.44 located at an angular separation of 240.6″ along a position angle of 292°, as of 2004. It does not share the proper motion of the other two stars and is a more massive and luminous star but much further away. Hunt for substellar objects According to Nelson & Angel (1998), 36 Ursae Majoris could host one or two (or at least three) jovian planets (or even brown dwarfs) at wide separations from the host star, with orbital periods of 10–15, 25 and 50 years respectively. The authors have set upper limits of 1.1–2, 5.3 and 24 Jupiter masses for the putative planetary objects. Also Lippincott (1983) had previously noticed the possible presence of a massive unseen companion (with nearly 70 times the mass of Jupiter, just below the stellar regime, thus a brown dwarf). Putative parameters for the substellar object show an orbital period of 18 years and quite a high eccentricity (e=0.8). Even Campbell et al. 1988 inferred the existence of planetary objects or even brown dwarfs less massive than 14 Jupiter masses around 36 Ursae Majoris. Nevertheless, no certain planetary companion has yet been detected or confirmed. The McDonald Observatory team has set limits to the presence of one or more planets with masses between 0.13 and 2.5 Jupiter masses and average separations spanning between 0.05 and 5.2 AU. An infrared excess has been detected around this star, most likely indicating the presence of a circumstellar disk at a radius of 38.6 AU. The temperature of this dust is . References External links The Range of Masses and Periods Explored by Radial Velocity Searches for Planetary Companions An unseen companion to 36 Ursae Majoris A from analysis of plates taken with the Sproul 61-CM refractor A search for substellar companions to southern solar-type stars Detection Limits from the McDonald Observatory Planet Search Program Ursa Major 36 Ursae Majoris A 36 Ursae Majoris C Ursae Majoris, 36 Triple stars Ursae Majoris, 36 4112 090839 051459 Durchmusterung objects 0394 5
36 Ursae Majoris
[ "Astronomy" ]
714
[ "Ursa Major", "Constellations" ]
243,134
https://en.wikipedia.org/wiki/Hess%27s%20law
Hess's law of constant heat summation, also known simply as Hess's law, is a relationship in physical chemistry and thermodynamics named after Germain Hess, a Swiss-born Russian chemist and physician who published it in 1840. The law states that the total enthalpy change during the complete course of a chemical reaction is independent of the sequence of steps taken. Hess's law is now understood as an expression of the fact that the enthalpy of a chemical process is independent of the path taken from the initial to the final state (i.e. enthalpy is a state function). According to the first law of thermodynamics, the enthalpy change in a system due to a reaction at constant pressure is equal to the heat absorbed (or the negative of the heat released), which can be determined by calorimetry for many reactions. The values are usually stated for reactions with the same initial and final temperatures and pressures (while conditions are allowed to vary during the course of the reactions). Hess's law can be used to determine the overall energy required for a chemical reaction that can be divided into synthetic steps that are individually easier to characterize. This affords the compilation of standard enthalpies of formation, which may be used to predict the enthalpy change in complex synthesis. Theory Hess's law states that the change of enthalpy in a chemical reaction is the same regardless of whether the reaction takes place in one step or several steps, provided the initial and final states of the reactants and products are the same. Enthalpy is an extensive property, meaning that its value is proportional to the system size. Because of this, the enthalpy change is proportional to the number of moles participating in a given reaction. In other words, if a chemical change takes place by several different routes, the overall enthalpy change is the same, regardless of the route by which the chemical change occurs (provided the initial and final condition are the same). If this were not true, then one could violate the first law of thermodynamics. Hess's law allows the enthalpy change (ΔH) for a reaction to be calculated even when it cannot be measured directly. This is accomplished by performing basic algebraic operations based on the chemical equations of reactions using previously determined values for the enthalpies of formation. Combination of chemical equations leads to a net or overall equation. If the enthalpy changes are known for all the equations in the sequence, their sum will be the enthalpy change for the net equation. If the net enthalpy change is negative (), the reaction is exothermic and is more likely to be spontaneous; positive ΔH values correspond to endothermic reactions. (Entropy also plays an important role in determining spontaneity, as some reactions with a positive enthalpy change are nevertheless spontaneous due to an entropy increase in the reaction system.) Use of enthalpies of formation Hess's law states that enthalpy changes are additive. Thus the value of the standard enthalpy of reaction can be calculated from standard enthalpies of formation of products and reactants as follows: Here, the first sum is over all products and the second over all reactants, and are the stoichiometric coefficients of products and reactants respectively, and are the standard enthalpies of formation of products and reactants respectively, and the o superscript indicates standard state values. 
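The displayed equation at the end of the paragraph above appears to have been lost in extraction; the standard form it describes (a reconstruction consistent with the surrounding description, with νp and νr the stoichiometric coefficients and ΔfH° the standard enthalpies of formation) is:

ΔH°reaction = Σ νp ΔfH°(products) − Σ νr ΔfH°(reactants)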
This may be considered as the sum of two (real or fictitious) reactions: Reactants → Elements (in their standard states) and Elements → Products Examples Given: (a) Cgraphite + O2 → CO2(g) (ΔH = −393.5 kJ/mol) (direct step) (b) Cgraphite + 1/2 O2 → CO(g) (ΔH = −110.5 kJ/mol) (c) CO(g) + 1/2 O2 → CO2(g) (ΔH = −283.0 kJ/mol) Reaction (a) is the sum of reactions (b) and (c), for which the total ΔH = −393.5 kJ/mol, which is equal to ΔH in (a). Given: B2O3(s) + 3H2O(g) → 3O2(g) + B2H6(g) (ΔH = 2035 kJ/mol) H2O(l) → H2O(g) (ΔH = 44 kJ/mol) H2(g) + 1/2 O2(g) → H2O(l) (ΔH = −286 kJ/mol) 2B(s) + 3H2(g) → B2H6(g) (ΔH = 36 kJ/mol) Find the ΔfH of: 2B(s) + 3/2 O2(g) → B2O3(s) After multiplying the equations (and their enthalpy changes) by appropriate factors and reversing the direction when necessary, the result is: B2H6(g) + 3O2(g) → B2O3(s) + 3H2O(g) (ΔH = 2035 × (−1) = −2035 kJ/mol) 3H2O(g) → 3H2O(l) (ΔH = 44 × (−3) = −132 kJ/mol) 3H2O(l) → 3H2(g) + (3/2) O2(g) (ΔH = −286 × (−3) = 858 kJ/mol) 2B(s) + 3H2(g) → B2H6(g) (ΔH = 36 kJ/mol) Adding these equations and canceling out the common terms on both sides, we obtain 2B(s) + 3/2 O2(g) → B2O3(s) (ΔH = −1273 kJ/mol) Extension to free energy and entropy The concepts of Hess's law can be expanded to include changes in entropy and in Gibbs free energy, since these are also state functions. The Bordwell thermodynamic cycle is an example of such an extension that takes advantage of easily measured equilibria and redox potentials to determine experimentally inaccessible Gibbs free energy values. Combining ΔG° values from Bordwell thermodynamic cycles and ΔH° values found with Hess's law can be helpful in determining entropy values that have not been measured directly and therefore need to be calculated through alternative paths. For the free energy: ΔG°reaction = Σ νp ΔfG°(products) − Σ νr ΔfG°(reactants). For entropy, the situation is a little different. Because entropy can be measured as an absolute value, not relative to those of the elements in their reference states (as with ΔH° and ΔG°), there is no need to use the entropy of formation; one simply uses the absolute entropies for products and reactants: ΔS°reaction = Σ νp S°(products) − Σ νr S°(reactants). Applications Hess's law is useful in the determination of enthalpies of the following: Heats of formation of unstable intermediates like CO(g) and NO(g). Heat changes in phase transitions and allotropic transitions. Lattice energies of ionic substances by constructing Born–Haber cycles if the electron affinity to form the anion is known, or Electron affinities using a Born–Haber cycle with a theoretical lattice energy. See also Thermochemistry Thermodynamics References Further reading External links Hess's paper (1840) on which his law is based (at ChemTeam site) a Hess's Law experiment Chemical thermodynamics Physical chemistry Thermochemistry
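The additivity used in the second worked example above can be checked mechanically; the following short Python snippet is illustrative only, with the step enthalpies copied from that example.

# Step enthalpies after reversing/scaling, in kJ/mol (from the worked example).
steps = {
    "B2H6 + 3 O2 -> B2O3 + 3 H2O(g)": -2035,
    "3 H2O(g) -> 3 H2O(l)":            -132,
    "3 H2O(l) -> 3 H2 + 3/2 O2":         858,
    "2 B + 3 H2 -> B2H6":                 36,
}
total = sum(steps.values())
print(f"ΔH(2 B + 3/2 O2 -> B2O3) = {total} kJ/mol")   # -1273 kJ/mol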
Hess's law
[ "Physics", "Chemistry" ]
1,598
[ "Applied and interdisciplinary physics", "Thermochemistry", "nan", "Chemical thermodynamics", "Physical chemistry" ]
243,420
https://en.wikipedia.org/wiki/Avalanche%20diode
In electronics, an avalanche diode is a diode (made from silicon or other semiconductor) that is designed to experience avalanche breakdown at a specified reverse bias voltage. The junction of an avalanche diode is designed to prevent current concentration and resulting hot spots, so that the diode is undamaged by the breakdown. The avalanche breakdown is due to minority carriers accelerated enough to create ionization in the crystal lattice, producing more carriers, which in turn create more ionization. Because the avalanche breakdown is uniform across the whole junction, the breakdown voltage is nearly constant with changing current when compared to a non-avalanche diode. The Zener diode exhibits an apparently similar effect in addition to Zener breakdown. Both effects are present in any such diode, but one usually dominates the other. Avalanche diodes are optimized for avalanche effect, so they exhibit small but significant voltage drop under breakdown conditions, unlike Zener diodes that always maintain a voltage higher than breakdown. This feature provides better surge protection than a simple Zener diode and acts more like a gas-discharge tube replacement. Avalanche diodes have a small positive temperature coefficient of voltage, whereas diodes relying on the Zener effect have a negative temperature coefficient. Uses Voltage reference The voltage after breakdown varies only slightly with changing current. This makes the avalanche diode useful as a type of voltage reference. Voltage reference diodes rated more than about 6–8 volts are generally avalanche diodes. Protection A common application is to protect electronic circuits against damaging high voltages. The avalanche diode is connected to the circuit so that it is reverse-biased. In other words, its cathode is positive with respect to its anode. In this configuration, the diode is non-conducting and does not interfere with the circuit. If the voltage increases beyond the design limit, the diode goes into avalanche breakdown, causing the harmful voltage to be conducted to ground. When used in this fashion, they are often referred to as clamping diodes or transient-voltage suppressors because they fix or "clamp" the maximum voltage to a predetermined level. Avalanche diodes are normally specified for this role by their clamping voltage VBR and the maximum amount of transient energy they can absorb, specified by either energy (in joules) or . Avalanche breakdown is not destructive as long as the diode is prevented from overheating. Radio-frequency noise generation Avalanche diodes generate radio-frequency noise. They are commonly used as noise sources in radio equipment and hardware random number generators. For instance, they are often used as a source of RF for antenna analyzer bridges. Avalanche diodes can also be used as white noise generators. Microwave-frequency generation If placed into a resonant circuit, avalanche diodes can act as negative-resistance devices. The IMPATT diode is an avalanche diode optimized for frequency generation. Single-photon avalanche detector These are made from doped silicon and depend on the avalanche breakdown effect to detect even single photons. The silicon avalanche photodiode is a high-gain photon detector. They are "ideal for use in high-speed, low-light-level applications". The avalanche photodiode is operated with a reverse bias voltage of up to hundreds of volts, slightly below its breakdown voltage. 
In this regime, electron–hole pairs generated by the incident photons take a large amount of energy from the electric field, which creates more secondary charge carriers. The photocurrent of just one photon can be registered with these electronic devices. See also Avalanche transistor Transient-voltage-suppression diode References Diodes Voltage stability
Avalanche diode
[ "Physics" ]
743
[ "Voltage", "Voltage stability", "Physical quantities" ]
243,561
https://en.wikipedia.org/wiki/EcoSCOPE
The ecoSCOPE is an optical sensor system, deployed from a small remotely operated vehicle (ROV) or fibre optic cable, to investigate behavior and microdistribution of small organisms in the ocean. Deployment Although an ROV may be very small and quiet, it is impossible to approach feeding herring closer than 40 cm. The ecoSCOPE allows observation of feeding herring from a distance of only 4 cm. From 40 cm, the herrings' prey (copepods) in front of the herring are invisible due to the deflection of light by phytoplankton and microparticles in highly productive waters where herring live. With the ecoSCOPE, the predators are illuminated by natural light, the prey by a light sheet, projected via a second endoscope from strobed LEDs (2 ms, 100% relative intensity at 700 nm, 53% at 690 nm, 22% at 680 nm, 4% at 660 nm, 0% at 642 nm). By imitating the long, thin snout of the garfish protruding into the security sphere of the alert herrings, an endoscope with a tip diameter of 11 mm is used. The endoscope is camouflaged to reduce the brightness-contrast against the background: the top is black and the sides are silvery. Additionally, the front of the ROV is covered by a mirror, reflecting a light gradient resembling the natural scene and making the instrument body virtually invisible to the animals. A second sensor images other copepods, phytoplankton and particles at very high magnification. Another advantage of these small "optical probes" is the minimal disruption of the current-field in the measuring volume, allowing for less disturbed surveys of microturbulence and shear. Another video can be seen in the article for Atlantic herring. An ecoSCOPE was also deployed to measure the dynamics of particles in a polluted estuary: see image on Particle (ecology), another as an underwater environmental monitoring system, utilizing the orientation capacity of juvenile glasseel. Specifications The ecoSCOPE is a product of the new initiative of "Ocean Online Biosensors": a synthesis of IT-sensoric and the sensing capability of ocean organisms. Depicted in the image on the right is the central unit. On all four corners are small entrances, through which water from different sources enters (in this case, rivers and creeks in New Jersey). It flows through a small labyrinth and mixes in the central chamber. It exits through a small tube in the middle. The glasseels migrate through this small tube heading into the current. In the middle is the entrance for the eels. They test the different water qualities and migrate toward the corner, where they exit. It is the opinion of many scientists that eels have developed the finest nose on the planet. They can sense concentrations of one part in 19 trillion. This is the same concentration as one glass of alcohol in the waters of all America's Great Lakes. For the eels the sensory impressions are probably as diverse as the colors visible for us. The system is submerged, and a digital camera observes the exits. The dynIMAGE software monitors the frequency of decisions per exit. Many thousand of glasseels pass through the system on a single day. The three exits in the left lower corner carry water from polluted sources (one is a drinking water reservoir). EcoSCOPE systems have already been tracking water pollution and its effect on fish and plankton behavior in Europe and the United States). For the future it is anticipated to deploy ecoSCOPEs continuously online, within the project LEO Projekt off New York City, visible for the public. 
Tests have also been performed with different qualities of drinking water and with solutions of runoff juice from different samples of fish. See also American eel Eel life history References Cury PM (2004) "Tuning the ecoscope for the Ecosystem Approach to Fisheries : Perspectives on eco-system-based approaches to the management of marine resources" Marine ecology, 274: 272-275. Julien B, Philippe C, Pascal C and Pierre C (2008) "Safeguarding, Integrating and Disseminating Knowledge on Exploited Marine Ecosystems: The Ecoscope" International Marine Data and Information Systems, IMDIS - 2008. Kils, U (1992) "The ecoSCOPE and dynIMAGE: Microscale Tools for in situ Studies of Predator Prey Interaction" , Archiv für Hydrobiologie, Beihefte 36: 83-96. Kils, U (1994) In: M DeLuca (ed) Diving for Science...1994 Proceedings of the 14th Annual Scientific Diving Symposium, American Academy of Underwater Sciences. New Brunswick, New Jersey. Ulanowicz RE (1993) "Inventing the ecoscope", In V. Christensen and D. Pauly (eds) Trophic models of aquatic ecosystems, ICLARM Conf. Proc. 26: ix-x. External links The Ecoscope Project Visit of President of Germany to Kiel laboratory, the first propototype of the EcoSCOPE is visible in the picture, hanging from the roof. Optical devices Marine biology Fisheries science
EcoSCOPE
[ "Materials_science", "Engineering", "Biology" ]
1,043
[ "Glass engineering and science", "Optical devices", "Marine biology" ]
244,115
https://en.wikipedia.org/wiki/Lorazepam
Lorazepam, sold under the brand name Ativan among others, is a benzodiazepine medication. It is used to treat anxiety (including anxiety disorders), trouble sleeping, severe agitation, active seizures including status epilepticus, alcohol withdrawal, and chemotherapy-induced nausea and vomiting. It is also used during surgery to interfere with memory formation, to sedate those who are being mechanically ventilated, and, along with other treatments, for acute coronary syndrome due to cocaine use. It can be given orally (by mouth), transdermally (on the skin via a topical gel or patch), intravenously (IV) (injection into a vein), or intramuscularly (injection into a muscle.) When given by injection, onset of effects is between one and thirty minutes and effects last for up to a day. Common side effects include weakness, sleepiness, ataxia, decreased alertness, decreased memory formation, low blood pressure, and a decreased effort to breathe. When given intravenously, the person should be closely monitored. Among those who are depressed, there may be an increased risk of suicide. With long-term use, larger doses may be required for the same effect. Physical dependence and psychological dependence may also occur. If stopped suddenly after long-term use, benzodiazepine withdrawal syndrome may occur. Older people more often develop adverse effects. In this age group, lorazepam is associated with falls and hip fractures. Due to these concerns, lorazepam use is generally recommended only for up to four weeks. Lorazepam was initially patented in 1963 and went on sale in the United States in 1977. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 80th most commonly prescribed medication in the United States, with more than 8million prescriptions. Medical uses Anxiety Lorazepam is used in the short-term management of severe anxiety. In the US, the Food and Drug Administration (FDA) advises against use of benzodiazepines such as lorazepam for longer than four weeks. It is fast-acting, and useful in treating fast-onset anxiety and panic attacks. Lorazepam can effectively reduce agitation and induce sleep, and the duration of effects from a single dose makes it an appropriate choice for the short-term treatment of insomnia, especially in the presence of severe anxiety or night terrors. It has a fairly short duration of action. Withdrawal symptoms, including rebound insomnia and rebound anxiety, may occur after seven days of use of lorazepam. Seizures Intravenous diazepam or lorazepam are first-line treatments for convulsive status epilepticus. Lorazepam is more effective than diazepam and intravenous phenytoin in the treatment of status epilepticus and has a lower risk of continuing seizures that might require additional medication. However, phenobarbital has a superior success rate compared to lorazepam and other drugs, at least in the elderly. Lorazepam's anticonvulsant properties and pharmacokinetic profile make intravenous use reliable for terminating acute seizures, but induce prolonged sedation. Orally administered benzodiazepines, including lorazepam, are occasionally used as long-term prophylactic treatment of resistant absence seizures; because of gradual tolerance to their anti-seizure effects, benzodiazepines are not considered first-line therapies. Additionally, common seizure characteristics (e.g., hypersalivation, jaw-clenching, involuntary swallowing) pose some difficulties with regard to oral administration. 
Lorazepam's anticonvulsant and central nervous system (CNS) depressant properties are useful for the treatment and prevention of alcohol withdrawal syndrome. In this setting, impaired liver function is not a hazard with lorazepam, since lorazepam does not require oxidation, in the liver or otherwise, for its metabolism. Lorazepam is noted as being the most tolerable benzodiazepine in those with advanced-stage liver disease. Sedation Lorazepam is sometimes used for individuals receiving mechanical ventilation. However, in critically ill people, propofol has been found to be superior to lorazepam both in effectiveness and overall cost; as a result, the use of propofol for this indication is now encouraged, whereas the use of lorazepam is discouraged. Its relative effectiveness in preventing new memory formation, along with its ability to reduce agitation and anxiety, makes lorazepam useful as premedication. It is given before a general anesthetic to reduce the amount of anesthetic required or before unpleasant awake procedures, such as in dentistry or endoscopies, to reduce anxiety, increase compliance, and induce amnesia for the procedure. Lorazepam by mouth is given 90 to 120 minutes before procedures, and intravenous lorazepam is given as late as 10 minutes before procedures. Lorazepam is sometimes used as an alternative to midazolam in palliative sedation. In intensive care units, lorazepam is sometimes used to produce anxiolysis, hypnosis, and amnesia. Agitation Lorazepam is sometimes used as an alternative to haloperidol when there is the need for rapid sedation of violent or agitated individuals, but haloperidol plus promethazine is preferred due to better effectiveness and due to lorazepam's adverse effects on respiratory function. However, adverse effects such as behavioral disinhibition may make benzodiazepines inappropriate for some people who are acutely psychotic. Acute delirium is sometimes treated with lorazepam, but as it can cause paradoxical effects, it is preferably given together with haloperidol. Lorazepam is absorbed relatively slowly if given intramuscularly, a common route in restraint situations. Other Catatonia with inability to speak is responsive to lorazepam. Symptoms may recur and treatment for some days may be necessary. Catatonia due to abrupt or overly rapid withdrawal from benzodiazepines, as part of the benzodiazepine withdrawal syndrome, should also respond to lorazepam treatment. As lorazepam can have paradoxical effects, haloperidol is sometimes given at the same time. It is sometimes used in chemotherapy in addition to medications used to treat nausea and vomiting (i.e., nausea and vomiting caused or worsened by psychological sensitization to the thought of being sick). Adverse effects Many beneficial effects of lorazepam (e.g., sedative, muscle relaxant, anti-anxiety, and amnesic effects) may become adverse effects when unwanted. Adverse effects can include sedation and low blood pressure; the effects of lorazepam are increased in combination with other CNS depressants. Other adverse effects include confusion, ataxia, inhibiting the formation of new memories, pupil constriction, and hangover effects. With long-term benzodiazepine use, it is unclear whether cognitive impairments fully return to normal after stopping lorazepam use; cognitive deficits persist for at least six months after withdrawal, but longer than six months may be required for recovery of cognitive function. 
Lorazepam appears to have more profound adverse effects on memory than other benzodiazepines; it impairs both explicit and implicit memory. In the elderly, falls may occur as a result of benzodiazepines. Adverse effects are more common in the elderly, and they appear at lower doses than in younger people. Benzodiazepines can cause or worsen depression. Paradoxical effects can also occur, such as worsening of seizures, or paradoxical excitement; paradoxical excitement is more likely to occur in the elderly, children, those with a history of alcohol abuse, and in people with a history of aggression or anger problems. Lorazepam's effects are dose-dependent, meaning the higher the dose, the stronger the effects (and side effects) will be. Using the smallest dose needed to achieve desired effects lessens the risk of adverse effects. Sedative drugs and sleeping pills, including lorazepam, have been associated with an increased risk of death. Sedation is the side effect people taking lorazepam most frequently report. In a group of around 3,500 people treated for anxiety, the most common side effects complained of from lorazepam were sedation (15.9%), dizziness (6.9%), weakness (4.2%), and unsteadiness (3.4%). Side effects such as sedation and unsteadiness increased with age. Cognitive impairment, behavioral disinhibition and respiratory depression as well as hypotension may also occur. Paradoxical effects: In some cases, paradoxical effects can occur with benzodiazepines, such as increased hostility, aggression, angry outbursts, and psychomotor agitation. These effects are seen more commonly with lorazepam than with other benzodiazepines. Paradoxical effects are more likely to occur with higher doses, in people with pre-existing personality disorders and those with a psychiatric illness. Frustrating stimuli may trigger such reactions, though the drug may have been prescribed to help the person cope with such stress and frustration in the first place. As paradoxical effects appear to be dose-related, they usually subside on dose reduction or on complete withdrawal of lorazepam. Suicidality: Benzodiazepines are associated with an increased risk of suicide, possibly due to disinhibition. Higher dosages appear to confer greater risk. Amnesic effects: Among benzodiazepines, lorazepam has relatively strong amnesic effects, but people soon develop tolerance to this with regular use. To avoid amnesia (or excess sedation) being a problem, the initial total daily lorazepam dose should not exceed 2 mg. This also applies to use for night sedation. Five participants in a sleep study were prescribed lorazepam 4 mg at night, and the next evening, three subjects unexpectedly volunteered memory gaps for parts of that day, an effect that subsided completely after two to three days' use. Amnesic effects cannot be estimated from the degree of sedation present, since the two effects are unrelated. High-dose or prolonged parenterally-administered lorazepam with its associated solvent can cause propylene glycol intoxication and poisoning. In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class. 
Contraindications Lorazepam should be avoided in people with: Allergy or hypersensitivity – Past hypersensitivity or allergy to lorazepam, to any benzodiazepine, or to any of the ingredients in lorazepam tablets or injections Respiratory failure – Benzodiazepines, including lorazepam, may depress central nervous system respiratory drive and are contraindicated in severe respiratory failure. An example would be the inappropriate use to relieve anxiety associated with acute severe asthma. The anxiolytic effects may also be detrimental to a person's willingness and ability to fight for breath. However, if mechanical ventilation becomes necessary, lorazepam may be used to facilitate deep sedation. Acute intoxication – Lorazepam may interact synergistically with the effects of alcohol, narcotics, or other psychoactive substances. It should, therefore, not be administered to a drunk or intoxicated person. Ataxia – This is a neurological clinical sign, consisting of unsteady and clumsy motion of the limbs and torso, due to the failure of gross muscle movement coordination, most evident on standing and walking. It is the classic way in which acute alcohol intoxication may affect a person. Benzodiazepines should not be administered to people who are already ataxic. Acute narrow-angle glaucoma – Lorazepam has pupil-dilating effects, which may further interfere with the drainage of aqueous humor from the anterior chamber of the eye, thus worsening narrow-angle glaucoma. Sleep apnea – Sleep apnea may be worsened by lorazepam's central nervous system depressant effects. It may further reduce the person's ability to protect his or her airway during sleep. Myasthenia gravis – This condition is characterized by muscle weakness, so a muscle relaxant such as lorazepam may exacerbate symptoms. Pregnancy and breastfeeding – Lorazepam belongs to the Food and Drug Administration (FDA) pregnancy category D, which means it is likely to cause harm to the developing baby if taken during the first trimester of pregnancy. The evidence is inconclusive as to whether lorazepam if taken early in pregnancy results in reduced intelligence, neurodevelopmental problems, physical malformations in cardiac or facial structure, or other malformations in some newborns. Lorazepam given to pregnant women antenatally may cause floppy infant syndrome in the neonate, or respiratory depression necessitating ventilation. Regular lorazepam use during late pregnancy (the third trimester), carries a definite risk of benzodiazepine withdrawal syndrome in the neonate. Neonatal benzodiazepine withdrawal may include hypotonia, reluctance to suck, apneic spells, cyanosis, and impaired metabolic responses to cold stress. Symptoms of floppy infant syndrome and neonatal benzodiazepine withdrawal syndrome have been reported to persist from hours to months after birth. Lorazepam may also inhibit fetal liver bilirubin glucuronidation, leading to neonatal jaundice. Lorazepam is present in breast milk, so caution must be exercised about breastfeeding. Specific groups Children and the elderly – The safety and effectiveness of lorazepam are not well determined in children under 18 years of age, but it is used to treat acute seizures. Dose requirements have to be individualized, especially in people who are elderly and debilitated in whom the risk of oversedation is greater. Long-term therapy may lead to cognitive deficits, especially in the elderly, which may only be partially reversible. 
The elderly metabolize benzodiazepines more slowly than younger people and are more sensitive to the adverse effects of benzodiazepines compared to younger individuals even at similar plasma levels. Additionally, the elderly tend to take more drugs which may interact with or enhance the effects of benzodiazepines. Benzodiazepines, including lorazepam, have been found to increase the risk of falls and fractures in the elderly. As a result, dosage recommendations for the elderly are about half of those used in younger individuals and used for no longer than two weeks. Lorazepam may also be slower to clear in the elderly, leading potentially to accumulation and enhanced effects. Lorazepam, similar to other benzodiazepines and nonbenzodiazepines, causes impairments in body balance and standing steadiness in individuals who wake up at night or the next morning. Falls and hip fractures are frequently reported. The combination with alcohol increases these impairments. Partial, but incomplete, tolerance develops to these impairments. Liver or kidney failure – Lorazepam may be safer than most benzodiazepines in people with impaired liver function. Like oxazepam, it does not require liver oxidation, but only liver glucuronidation into lorazepam-glucuronide. Therefore, impaired liver function is unlikely to result in lorazepam accumulation to an extent causing adverse reactions. Similarly kidney disease has minimal effects on lorazepam levels. Drug and alcohol dependence – The risk of abuse of lorazepam is increased in dependent people. Comorbid psychiatric disorders also increase the risk of dependence and paradoxical adverse effects. Tolerance and dependence Dependence typified by a withdrawal syndrome occurs in about one-third of individuals who are treated for longer than four weeks with a benzodiazepine. Higher doses and longer periods of use increase the risk of developing a benzodiazepine dependence. Potent benzodiazepines with a relatively short half-life, such as lorazepam, alprazolam, and triazolam, have the highest risk of causing dependence. If regular treatment is continued for longer than four to six months, dose increases may be necessary to maintain effects, but treatment-resistant symptoms may in fact be benzodiazepine withdrawal symptoms. Due to the development of tolerance to the anticonvulsant effects, benzodiazepines are generally not recommended for long-term use for the management of epilepsy. Increasing the dose may overcome tolerance, but tolerance may then develop to the higher dose and adverse effects may persist and worsen. The mechanism of tolerance to benzodiazepines is complex and involves GABAA receptor downregulation, alterations to subunit configuration of GABAA receptors, uncoupling, and internalization of the benzodiazepine binding site from the GABAA receptor complex as well as changes in gene expression. The likelihood of dependence is relatively high with lorazepam compared to other benzodiazepines. Lorazepam's relatively short serum half-life, its confinement mainly to blood, and its inactive metabolite can result in interdose withdrawal phenomena and next-dose cravings, that may reinforce psychological dependence. Because of its high potency, the smallest lorazepam tablet strength of 0.5 mg is also a significant dose. To minimise the risk of physical/psychological dependence, lorazepam is best used only short-term, at the smallest effective dose. 
If any benzodiazepine has been used long-term, the recommendation is a gradual dose taper over weeks, months, or longer, according to dose and duration of use, the degree of dependence and the individual. Coming off long-term lorazepam use may be more realistically achieved by a gradual switch to an equivalent dose of diazepam and a period of stabilization on this, and only then initiating dose reductions. The advantage of switching to diazepam is that dose reductions are felt less acutely, because of the longer half-lives (20–200 hours) of diazepam and its active metabolites. Withdrawal On abrupt or overly rapid discontinuation of lorazepam, anxiety, and signs of physical withdrawal have been observed, similar to those seen on withdrawal from alcohol and barbiturates. Lorazepam, as with other benzodiazepine drugs, can cause physical dependence, addiction, and benzodiazepine withdrawal syndrome. The higher the dose and the longer the drug is taken, the greater the risk of experiencing unpleasant withdrawal symptoms. Withdrawal symptoms can, however, occur from standard dosages and also after short-term use. Benzodiazepine treatment should be discontinued as soon as possible via a slow and gradual dose reduction regimen. Rebound effects often resemble the condition being treated, but typically at a more intense level and may be difficult to diagnose. Withdrawal symptoms can range from mild anxiety and insomnia to more severe symptoms such as seizures and psychosis. The risk and severity of withdrawal are increased with long-term use, use of high doses, abrupt or over-rapid reduction, among other factors. Short-acting benzodiazepines such as lorazepam are more likely to cause a more severe withdrawal syndrome compared to longer-acting benzodiazepines. Withdrawal symptoms can occur after taking therapeutic doses of lorazepam for as little as one week. Withdrawal symptoms include headaches, anxiety, tension, depression, insomnia, restlessness, confusion, irritability, sweating, dysphoria, dizziness, derealization, depersonalization, numbness/tingling of extremities, hypersensitivity to light, sound, and smell, perceptual distortions, nausea, vomiting, diarrhea, appetite loss, hallucinations, delirium, seizures, tremor, stomach cramps, myalgia, agitation, palpitations, tachycardia, panic attacks, short-term memory loss, and hyperthermia. It takes about 18–36 hours for the benzodiazepine to be removed from the body. The ease of physical dependence to lorazepam, (Ativan brand was particularly cited), and its withdrawal were brought to the attention of the British public during the early 1980s in Esther Rantzen's BBC TV series That's Life!, in a feature on the drug over a number of episodes. Interactions Lorazepam is not usually fatal in overdose but may cause respiratory depression if taken in overdose with alcohol. The combination also causes greater enhancement of the disinhibitory and amnesic effects of both drugs, with potentially embarrassing or criminal consequences. Some experts advise that people should be warned against drinking alcohol while on lorazepam treatment, but such clear warnings are not universal. Greater adverse effects may also occur when lorazepam is used with other drugs, such as opioids or other hypnotics. Lorazepam may also interact with rifabutin. Valproate inhibits the metabolism of lorazepam, whereas carbamazepine, lamotrigine, phenobarbital, phenytoin, and rifampin increase its rate of metabolism. 
Some antidepressants, antiepileptic drugs such as phenobarbital, phenytoin, and carbamazepine, sedative antihistamines, opiates, antipsychotics, and alcohol, when taken with lorazepam, may result in enhanced sedative effects. Overdose In cases of a suspected lorazepam overdose, it is important to establish whether the person is a regular user of lorazepam or other benzodiazepines since regular use causes tolerance to develop. Also, one must ascertain whether other substances were also ingested. Signs of overdose include mental confusion, dysarthria, paradoxical reactions, drowsiness, hypotonia, ataxia, hypotension, hypnotic state, coma, cardiovascular depression, respiratory depression, and death. However, fatal overdoses on benzodiazepines alone are rare and less common than with barbiturates. Such a difference is largely due to benzodiazepine activity as a neuroreceptor modulator, and not as an activator per se. Lorazepam and similar medications do however act in synergy with alcohol, which increases the risk of overdose. Early management of people who are alert includes emetics, gastric lavage, and activated charcoal. Otherwise, management is by observation, including vital signs, support and, only if necessary, considering the hazards of doing so, giving intravenous flumazenil. People are ideally nursed in a kind, frustration-free environment, since, when given or taken in high doses, benzodiazepines are more likely to cause paradoxical reactions. If shown sympathy, even quite crudely feigned, people may respond solicitously, but they may respond with disproportionate aggression to frustrating cues. Opportunistic counseling has limited value here, as the person is unlikely to recall this later, owing to drug-induced anterograde amnesia. Detection in body fluids Lorazepam may be quantitated in blood or plasma to confirm poisoning in hospitalized people, provide evidence in an impaired driving arrest, or to assist in a medicolegal death investigation. Blood or plasma concentrations are usually in a range of 10–300 μg/L in persons either receiving the drug therapeutically or in those arrested for impaired driving. Approximately 300–1000 μg/L is found in people after acute overdosage. Lorazepam may not be detected by commonly used urine drug screenings for benzodiazepines. This is because the majority of these screening tests are only able to detect benzodiazepines that are metabolized to oxazepam glucuronide. Pharmacology Lorazepam has anxiolytic, sedative, hypnotic, amnesic, anticonvulsant, and muscle relaxant properties. It is a high-potency benzodiazepine, and its uniqueness, advantages, and disadvantages are largely explained by its pharmacokinetic properties (poor water and lipid solubility, high protein binding, and non-oxidative metabolism to a pharmacologically inactive glucuronide form) and by its high relative potency (lorazepam 1 mg is equal in effect to diazepam 10 mg). The biological half-life of lorazepam is 10–20 hours. Pharmacokinetics Lorazepam is highly protein-bound and is extensively metabolized into pharmacologically inactive metabolites. Due to its poor lipid solubility, lorazepam is absorbed relatively slowly by mouth and is unsuitable for rectal administration. However, its poor lipid solubility and a high degree of protein binding (85–90%) mean that its volume of distribution is mainly the vascular compartment, causing relatively prolonged peak effects. 
This contrasts with the highly lipid-soluble diazepam, which, although rapidly absorbed orally or rectally, soon redistributes from the serum to other parts of the body, in particular, body fat. This explains why one lorazepam dose, despite its shorter serum half-life, has more prolonged peak effects than an equivalent diazepam dose. Lorazepam is rapidly conjugated at its 3-hydroxy group into lorazepam glucuronide which is then excreted in the urine. Lorazepam glucuronide has no demonstrable CNS activity in animals. The plasma levels of lorazepam are proportional to the dose given. There is no evidence of accumulation of lorazepam on administration up to six months. On regular administration, diazepam will accumulate, since it has a longer half-life and active metabolites, these metabolites also have long half-lives. Clinical example: Diazepam has long been a drug of choice for status epilepticus; its high lipid solubility means it gets absorbed with equal speed whether given orally, or rectally (nonintravenous routes are convenient outside of hospital settings), but diazepam's high lipid solubility also means it does not remain in the vascular space, but soon redistributes into other body tissues. So, it may be necessary to repeat diazepam doses to maintain peak anticonvulsant effects, resulting in excess body accumulation. Lorazepam is a different case; its low lipid solubility makes it relatively slowly absorbed by any route other than intravenously, but once injected, it will not get significantly redistributed beyond the vascular space. Therefore, lorazepam's anticonvulsant effects are more durable, thus reducing the need for repeated doses. If a person is known to usually stop convulsing after only one or two diazepam doses, it may be preferable because sedative after effects will be less than if a single dose of lorazepam is given (diazepam anticonvulsant/sedative effects wear off after 15–30 minutes, but lorazepam effects last 12–24 hours). The prolonged sedation from lorazepam may, however, be an acceptable trade-off for its reliable duration of effects, particularly if the person needs to be transferred to another facility. Although lorazepam is not necessarily better than diazepam at initially terminating seizures, lorazepam is, nevertheless, replacing diazepam as the intravenous agent of choice in status epilepticus. Lorazepam serum levels are proportional to the dose administered. Giving 2 mg oral lorazepam will result in a peak total serum level of around 20 ng/mL around two hours later, half of which is lorazepam, half its inactive metabolite, lorazepam-glucuronide. A similar lorazepam dose given intravenously will result in an earlier and higher peak serum level, with a higher relative proportion of unmetabolised (active) lorazepam. On regular administration, maximum serum levels are attained after three days. Longer-term use, up to six months, does not result in further accumulation. On discontinuation, lorazepam serum levels become negligible after three days and undetectable after about a week. Lorazepam is metabolized in the liver by conjugation into inactive lorazepam-glucuronide. This metabolism does not involve liver oxidation, so is relatively unaffected by reduced liver function. Lorazepam-glucuronide is more water-soluble than its precursor, so gets more widely distributed in the body, leading to a longer half-life than lorazepam. 
Lorazepam-glucuronide is eventually excreted by the kidneys, and, because of its tissue accumulation, it remains detectable, particularly in the urine, for substantially longer than lorazepam. Pharmacodynamics Relative to other benzodiazepines, lorazepam is thought to have a high affinity for GABA receptors, which may also explain its marked amnesic effects. Its main pharmacological effects are the enhancement of the effects of the neurotransmitter GABA at the GABAA receptor. Benzodiazepines, such as lorazepam, enhance the effects of GABA at the GABAA receptor via increasing the frequency of opening of the chloride ion channel on the GABAA receptors, which results in the therapeutic actions of benzodiazepines. They, however, do not on their own activate the GABAA receptors but require the neurotransmitter GABA to be present. Thus, the effect of benzodiazepines is to enhance the effects of the neurotransmitter GABA. The magnitude and duration of lorazepam effects are dose-related, meaning larger doses have stronger and longer-lasting effects, because the brain has spare benzodiazepine drug receptor capacity, with single clinical doses leading only to an occupancy of some 3% of the available receptors. The anticonvulsant properties of lorazepam and other benzodiazepines may be, in part or entirely, due to binding to voltage-dependent sodium channels rather than benzodiazepine receptors. Sustained repetitive firing seems to be limited by the benzodiazepine effect of slowing recovery of sodium channels from inactivation to deactivation in mouse spinal cord cell cultures, hence prolonging the refractory period. Physical properties and formulations Pure lorazepam is an almost white powder that is nearly insoluble in water and oil. In medicinal form, it is mainly available as tablets and a solution for injection, but, in some locations, it is also available as a skin patch, an oral solution, and a sublingual tablet. Lorazepam tablets and syrups are administered orally. Lorazepam tablets of the Ativan brand also contain lactose, microcrystalline cellulose, polacrilin, magnesium stearate, and coloring agents (indigo carmine in blue tablets and tartrazine in yellow tablets). Lorazepam for injection is formulated with polyethylene glycol 400 in propylene glycol with 2.0% benzyl alcohol as a preservative. Lorazepam injectable solution is administered either by deep intramuscular injection or by intravenous injection. The injectable solution comes in 1 mL ampoules containing 2 or 4 mg of lorazepam. The solvents used are polyethylene glycol 400 and propylene glycol. As a preservative, the injectable solution contains benzyl alcohol. Toxicity from propylene glycol has been reported in the case of a person receiving a continuous lorazepam infusion. Intravenous injections should be given slowly, and the person should be closely monitored for side effects, such as respiratory depression, hypotension, or loss of airway control. Peak effects roughly coincide with peak serum levels, which occur 10 minutes after intravenous injection, up to 60 minutes after intramuscular injection, and 90 to 120 minutes after oral administration, but initial effects will be noted before this. A clinically relevant lorazepam dose will normally be effective for six to 12 hours, making it unsuitable for regular once-daily administration, so it is usually prescribed as two to four daily doses when taken regularly, but this may be extended to five or six, especially in the case of elderly people who cannot handle large doses at once. 
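The figures quoted above, a 10–20 hour half-life, a peak total serum level of roughly 20 ng/mL after a 2 mg oral dose, and dosing two to four times daily, can be tied together with a simple one-compartment, first-order elimination model. The following sketch is purely illustrative: strictly first-order kinetics and the chosen sampling times are simplifying assumptions, and it is not a dosing tool.

# Didactic one-compartment sketch using figures quoted in the text:
# half-life 10-20 h; a 2 mg oral dose gives a peak total serum level of ~20 ng/mL.
# First-order elimination is an assumption; this is not a dosing tool.

def remaining_fraction(hours_elapsed, half_life_h):
    """Fraction of the peak concentration left after first-order elimination."""
    return 0.5 ** (hours_elapsed / half_life_h)

peak_ng_per_ml = 20.0                         # approximate peak after 2 mg orally (from the text)
for half_life_h in (10.0, 20.0):              # bounds of the quoted half-life range
    for t in (6, 12, 24):                     # typical dosing intervals / one day
        conc = peak_ng_per_ml * remaining_fraction(t, half_life_h)
        print(f"t1/2={half_life_h:>4.0f} h, t={t:>2d} h: ~{conc:4.1f} ng/mL")

Run as written, this simply shows how much of a peak level remains at typical re-dosing times for the short and long ends of the quoted half-life range, which is consistent with the two-to-four-doses-per-day regimen described above.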
Topical formulations of lorazepam, while sometimes used as a treatment for nausea, especially in people in hospice, have been advised against by the American Academy of Hospice and Palliative Medicine for this purpose, as they have not been proven effective. History Historically, lorazepam is one of the "classical" benzodiazepines. Others include diazepam, clonazepam, oxazepam, nitrazepam, flurazepam, bromazepam, and clorazepate. Lorazepam was first introduced by Wyeth Pharmaceuticals in 1977 under the brand names Ativan and Temesta. The drug was developed by D.J. Richards, president of research. Wyeth's original patent on lorazepam has expired in the United States. Society and culture Recreational use Lorazepam is also used for other purposes, such as recreational drug use, wherein it is taken to achieve a high, or when the medication is continued long-term against medical advice. A 2006 large-scale, nationwide, US government study of pharmaceutical-related emergency department visits by SAMHSA found sedative-hypnotics are the pharmaceuticals most frequently used outside of their prescribed medical purpose in the United States, with 35% of drug-related emergency department visits involving sedative-hypnotics. In this category, benzodiazepines are most commonly used. Males and females use benzodiazepines for nonmedical purposes equally. Of drugs used in attempted suicide, benzodiazepines are the most commonly used pharmaceutical drugs, with 25% of attempted suicides involving them and lorazepam specifically being used in 3.6% of attempts. Lorazepam was the third-most-common benzodiazepine used outside of prescription in these ER visit statistics. Legal status Lorazepam is a Schedule IV drug under the Controlled Substances Act in the US and internationally under the United Nations Convention on Psychotropic Substances. It is a Schedule IV drug under the Controlled Drugs and Substances Act in Canada. In the United Kingdom, it is a Class C, Schedule 4 Controlled Drug under the Misuse of Drugs Regulations 2001. Pricing In 2000, the US drug company Mylan agreed to pay a settlement over accusations by the Federal Trade Commission (FTC) that it had raised the price of generic lorazepam by 2600% and generic clorazepate by 3200% in 1998 after having obtained exclusive licensing agreements for certain ingredients. References External links Lorazepam data sheet IPCS INCHEM 2-Chlorophenyl compounds Antiemetics Anxiolytics Benzodiazepines Drugs developed by Wyeth Drugs developed by Pfizer Chemical substances for emergency medicine Chloroarenes Hallucinogen antidotes Lactams Lactims Sodium channel blockers World Health Organization essential medicines Wikipedia medicine articles ready to translate
Lorazepam
[ "Chemistry" ]
7,630
[ "Chemicals in medicine", "Chemical substances for emergency medicine" ]
244,601
https://en.wikipedia.org/wiki/Effects%20of%20nuclear%20explosions
The effects of a nuclear explosion on its immediate vicinity are typically much more destructive and multifaceted than those caused by conventional explosives. In most cases, the energy released from a nuclear weapon detonated within the lower atmosphere can be approximately divided into four basic categories: the blast and shock wave, about 50% of total energy; thermal radiation, about 35% of total energy; ionizing radiation, about 5% of total energy (more in a neutron bomb); and residual radiation, 5–10% of total energy, released over time by the radioactive debris and fallout. Depending on the design of the weapon and the location in which it is detonated, the energy distributed to any one of these categories may be significantly higher or lower. The physical blast effect is created by the coupling of immense amounts of energy, spanning the electromagnetic spectrum, with the surroundings. The environment of the explosion (e.g. submarine, ground burst, air burst, or exo-atmospheric) determines how much energy is distributed to the blast and how much to radiation. In general, surrounding a bomb with denser media, such as water, absorbs more energy and creates more powerful shock waves while at the same time limiting the area of its effect. When a nuclear weapon is surrounded only by air, lethal blast and thermal effects proportionally scale much more rapidly than lethal radiation effects as explosive yield increases. In either case, the resulting shock front initially expands faster than the local speed of sound. The physical damage mechanisms of a nuclear weapon (blast and thermal radiation) are identical to those of conventional explosives, but the energy produced by a nuclear explosion is usually millions of times more powerful per unit mass, and temperatures may briefly reach the tens of millions of degrees. Energy from a nuclear explosion is initially released in several forms of penetrating radiation. When there is surrounding material such as air, rock, or water, this radiation interacts with and rapidly heats the material to an equilibrium temperature (i.e. so that the matter is at the same temperature as the fuel powering the explosion). This causes vaporization of the surrounding material, resulting in its rapid expansion. Kinetic energy created by this expansion contributes to the formation of a shock wave which expands spherically from the center. Intense thermal radiation at the hypocenter forms a nuclear fireball which, if the explosion is low enough in altitude, is often associated with a mushroom cloud. In a high-altitude burst, where the density of the atmosphere is low, more energy is released as ionizing gamma radiation and X-rays than as an atmosphere-displacing shockwave. Direct effects Blast damage The high temperatures and radiation cause gas to move outward radially in a thin, dense shell called "the hydrodynamic front". The front acts like a piston that pushes against and compresses the surrounding medium to make a spherically expanding shock wave. At first, this shock wave is inside the surface of the developing fireball, which is created in a volume of air heated by the explosion's "soft" X-rays. Within a fraction of a second, the dense shock front obscures the fireball and continues to move past it, expanding outwards and free from the fireball, causing a reduction of light emanating from a nuclear detonation. Eventually the shock wave dissipates to the point where the light becomes visible again, giving rise to the characteristic double flash caused by the shock wave–fireball interaction. 
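Before examining the individual damage mechanisms in more detail, the approximate energy partition listed above can be made concrete by converting a yield into joules (using the conventional figure of about 4.184×10^12 J per kiloton of TNT) and splitting it according to the quoted percentages. The sketch below is illustrative only; the 20 kt example yield is arbitrary, and the fractions are the nominal values for a low-altitude burst rather than the properties of any particular weapon.

# Rough illustration of the approximate energy partition quoted above for a
# low-altitude burst. The 20 kt example yield is an arbitrary assumption.
KT_TO_JOULES = 4.184e12           # conventional energy content of 1 kiloton of TNT

partition = {                     # nominal fractions from the text
    "blast and shock wave": 0.50,
    "thermal radiation": 0.35,
    "ionizing radiation": 0.05,
    "residual radiation": 0.075,  # midpoint of the quoted 5-10% range (assumption)
}

yield_kt = 20.0                   # hypothetical example yield
total_joules = yield_kt * KT_TO_JOULES
for component, fraction in partition.items():
    print(f"{component:22s}: {fraction * total_joules:.2e} J")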
This double flash is a unique feature of nuclear explosions and is exploited when verifying that an atmospheric nuclear explosion has occurred and not simply a large conventional explosion, with radiometer instruments known as Bhangmeters capable of determining the nature of explosions. For air bursts at or near sea level, 50–60% of the explosion's energy goes into the blast wave, depending on the size and the yield of the bomb. As a general rule, the blast fraction is higher for low yield weapons. Furthermore, it decreases at high altitudes because there is less air mass to absorb radiation energy and convert it into a blast. This effect is most important for altitudes above 30 km, corresponding to less than 1 percent of sea-level air density. The effects of a moderate rain storm during an Operation Castle nuclear explosion were found to dampen, or reduce, peak pressure levels by approximately 15% at all ranges. Much of the destruction caused by a nuclear explosion is from blast effects. Most buildings, except reinforced or blast-resistant structures, will suffer moderate damage when subjected to overpressures of only 35.5 kilopascals (kPa) (5.15 pounds-force per square inch or 0.35 atm). Data obtained from Japanese surveys following the atomic bombings of Hiroshima and Nagasaki found that was sufficient to destroy all wooden and brick residential structures. This can reasonably be defined as the pressure capable of producing severe damage. The blast wind at sea level may exceed 1,000 km/h, or ~300 m/s, approaching the speed of sound in air. The range for blast effects increases with the explosive yield of the weapon and also depends on the burst altitude. Contrary to what might be expected from geometry, the blast range is not maximal for surface or low altitude blasts but increases with altitude up to an "optimum burst altitude" and then decreases rapidly for higher altitudes. This is caused by the nonlinear behavior of shock waves. When the blast wave from an air burst reaches the ground, it is reflected. Below a certain reflection angle, the reflected wave and the direct wave merge and form a reinforced horizontal wave known as the "Mach stem", a form of constructive interference. This phenomenon is responsible for the bumps or 'knees' seen in plots of overpressure against range. For each target overpressure, there is a certain optimum burst height at which the blast range is maximized over ground targets. In a typical air burst, where the burst height is chosen to produce the greatest range of severe damage, i.e. the greatest range over which ~ of pressure extends, the ground range (GR) is 0.4 km for 1 kiloton (kt) of TNT yield; 1.9 km for 100 kt; and 8.6 km for 10 megatons (Mt) of TNT. The optimum height of burst to maximize this desired severe ground range destruction for a 1 kt bomb is 0.22 km; for 100 kt, 1 km; and for 10 Mt, 4.7 km. Two distinct, simultaneous phenomena are associated with the blast wave in the air: static overpressure, i.e., the sharp increase in pressure exerted by the shock wave (the overpressure at any given point is directly proportional to the density of the air in the wave), and dynamic pressure, i.e., drag exerted by the blast winds required to form the blast wave; these winds push, tumble and tear objects. Most of the material damage caused by a nuclear air burst is caused by a combination of the high static overpressures and the blast winds. The long compression of the blast wave weakens structures, which are then torn apart by the blast winds. 
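The ground ranges and burst heights quoted above are consistent with the familiar cube-root scaling of blast effects with yield. The sketch below simply scales the 1 kt reference figures (a 0.4 km severe-damage ground range and a 0.22 km optimum burst height) to 100 kt and 10 Mt; it is a consistency check on the quoted numbers under an assumed ideal cube-root law, not a damage model.

# Consistency check: blast ranges and optimum burst heights scale roughly with
# the cube root of yield. The 1 kt reference values are taken from the text.
reference_yield_kt = 1.0
severe_damage_range_km = 0.4      # ground range for severe damage at 1 kt
optimum_burst_height_km = 0.22    # optimum burst height at 1 kt

for yield_kt in (1.0, 100.0, 10_000.0):          # 1 kt, 100 kt, 10 Mt
    scale = (yield_kt / reference_yield_kt) ** (1.0 / 3.0)
    print(f"{yield_kt:>8.0f} kt: range ~{severe_damage_range_km * scale:4.1f} km, "
          f"optimum burst height ~{optimum_burst_height_km * scale:4.2f} km")

Scaled this way, the 1 kt figures reproduce the quoted 1.9 km and 8.6 km ranges and the 1 km and 4.7 km burst heights to within rounding, which is what the cube-root law predicts.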
The compression, vacuum and drag phases together may last several seconds or longer, and exert forces many times greater than the strongest hurricane. Acting on the human body, the shock waves cause pressure waves through the tissues. These waves mostly damage junctions between tissues of different densities (bone and muscle) or the interface between tissue and air. Lungs and the abdominal cavity, which contain air, are particularly injured. The damage causes severe hemorrhaging or air embolisms, either of which can be rapidly fatal. The overpressure estimated to damage lungs is about 70 kPa. Some eardrums would probably rupture around 22 kPa (0.2 atm) and half would rupture between 90 and 130 kPa (0.9 to 1.2 atm). Thermal radiation Nuclear weapons emit large amounts of thermal radiation as visible, infrared, and ultraviolet light, to which the atmosphere is largely transparent. This is known as "flash". The chief hazards are burns and eye injuries. On clear days, these injuries can occur well beyond blast ranges, depending on weapon yield. Fires may also be started by the initial thermal radiation, but the following high winds due to the blast wave may put out almost all such fires, unless the yield is very high, in which case the range of thermal effects vastly exceeds the range of blast effects, as observed in explosions in the multi-megaton range. This is because the intensity of the blast effects drops off with the third power of distance from the explosion, while the intensity of radiation effects drops off with the second power of distance. This results in the range of thermal effects increasing markedly more than blast range as higher and higher device yields are detonated. Thermal radiation accounts for between 35 and 45% of the energy released in the explosion, depending on the yield of the device. In urban areas, the extinguishing of fires ignited by thermal radiation may matter little, as in a surprise attack fires may also be started by blast-effect-induced electrical shorts, gas pilot lights, overturned stoves, and other ignition sources, as was the case in the breakfast-time bombing of Hiroshima. Whether these secondary fires will in turn be snuffed out as modern noncombustible brick and concrete buildings collapse in on themselves from the same blast wave is uncertain, not least because the masking effects of modern city landscapes on thermal and blast transmission are still being examined. When combustible frame buildings were blown down in Hiroshima and Nagasaki, they did not burn as rapidly as they would have done had they remained standing. The noncombustible debris produced by the blast frequently covered and prevented the burning of combustible material. Fire experts suggest that, unlike Hiroshima, due to the nature of modern U.S. city design and construction, a firestorm in modern times is unlikely after a nuclear detonation. This does not exclude fires from being started but means that these fires will not form into a firestorm, due largely to the differences between modern building materials and those used in World War II-era Hiroshima. There are two types of eye injuries from thermal radiation: flash blindness and retinal burn. Flash blindness is caused by the initial brilliant flash of light produced by the nuclear detonation. More light energy is received on the retina than can be tolerated but less than is required for irreversible injury. 
The retina is particularly susceptible to visible and short wavelength infrared light since this part of the electromagnetic spectrum is focused by the lens on the retina. The result is bleaching of the visual pigments and temporary blindness for up to 40 minutes. A retinal burn resulting in permanent damage from scarring is also caused by the concentration of direct thermal energy on the retina by the lens. It will occur only when the fireball is actually in the individual's field of vision and would be a relatively uncommon injury. Retinal burns may be sustained at considerable distances from the explosion. The height of burst and apparent size of the fireball, a function of yield and range will determine the degree and extent of retinal scarring. A scar in the central visual field would be more debilitating. Generally, a limited visual field defect, which will be barely noticeable, is all that is likely to occur. When thermal radiation strikes an object, part will be reflected, part transmitted, and the rest absorbed. The fraction that is absorbed depends on the nature and color of the material. A thin material may transmit most of the radiation. A light-colored object may reflect much of the incident radiation and thus escape damage, like anti-flash white paint. The absorbed thermal radiation raises the temperature of the surface and results in scorching, charring, and burning of wood, paper, fabrics, etc. If the material is a poor thermal conductor, the heat is confined to the surface of the material. The actual ignition of materials depends on how long the thermal pulse lasts and the thickness and moisture content of the target. Near ground zero where the energy flux exceeds 125 J/cm2, what can burn, will. Farther away, only the most easily ignited materials will flame. Incendiary effects are compounded by secondary fires started by the blast wave effects such as from upset stoves and furnaces. In Hiroshima on 6 August 1945, a tremendous firestorm developed within 20 minutes after detonation and destroyed many more buildings and homes, built out of predominantly 'flimsy' wooden materials. A firestorm has gale-force winds blowing in towards the center of the fire from all directions. It is not peculiar to nuclear explosions, having been observed frequently in large forest fires and following incendiary raids during World War II. Despite fires destroying a large area of Nagasaki, no true firestorm occurred in the city even though a higher yielding weapon was used. Many factors explain this seeming contradiction, including a different time of bombing than Hiroshima, terrain, and crucially, a lower fuel loading/fuel density than that of Hiroshima. As thermal radiation travels more or less in a straight line from the fireball (unless scattered), any opaque object will produce a protective shadow that provides protection from the flash burn. Depending on the properties of the underlying surface material, the exposed area outside the protective shadow will be either burnt to a darker color, such as charring wood, or a brighter color, such as asphalt. If such a weather phenomenon as fog or haze is present at the point of the nuclear explosion, it scatters the flash, with radiant energy then reaching burn-sensitive substances from all directions. Under these conditions, opaque objects are therefore less effective than they would otherwise be without scattering, as they demonstrate maximum shadowing effect in an environment of perfect visibility and therefore zero scatterings. 
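An order-of-magnitude feel for these ignition ranges can be obtained by spreading the thermal fraction of the yield over a sphere and comparing the resulting fluence with the roughly 125 J/cm2 threshold mentioned above. The sketch below deliberately ignores atmospheric absorption and scattering (the fog and haze effects discussed here), burst height, and target properties, and the 20 kt yield and 35% thermal fraction are illustrative assumptions drawn from figures quoted earlier in this article.

import math

# Order-of-magnitude thermal fluence versus distance, ignoring atmospheric
# absorption/scattering and burst geometry (significant simplifications).
KT_TO_JOULES = 4.184e12
yield_kt = 20.0            # illustrative yield (assumption)
thermal_fraction = 0.35    # approximate thermal share quoted in the text

thermal_joules = thermal_fraction * yield_kt * KT_TO_JOULES
for distance_km in (0.5, 1.0, 2.0, 4.0):
    distance_cm = distance_km * 1e5
    fluence_j_per_cm2 = thermal_joules / (4.0 * math.pi * distance_cm**2)
    flag = "above the ~125 J/cm^2 ignition figure" if fluence_j_per_cm2 > 125 else ""
    print(f"{distance_km:4.1f} km: ~{fluence_j_per_cm2:7.1f} J/cm^2 {flag}")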
Similar to a foggy or overcast day, although there are few if any, shadows produced by the sun on such a day, the solar energy that reaches the ground from the sun's infrared rays is nevertheless considerably diminished, due to it being absorbed by the water of the clouds and the energy also being scattered back into space. Analogously, so too is the intensity at a range of burning flash energy attenuated, in units of J/cm2, along with the slant/horizontal range of a nuclear explosion, during fog or haze conditions. So despite any object that casts a shadow being rendered ineffective as a shield from the flash by fog or haze, due to scattering, the fog fills the same protective role, but generally only at the ranges that survival in the open is just a matter of being protected from the explosion's flash energy. The thermal pulse also is responsible for warming the atmospheric nitrogen close to the bomb and causing the creation of atmospheric NOx smog components. This, as part of the mushroom cloud, is shot into the stratosphere where it is responsible for dissociating ozone there, in the same way combustion NOx compounds do. The amount created depends on the yield of the explosion and the blast's environment. Studies done on the total effect of nuclear blasts on the ozone layer have been at least tentatively exonerating after initial discouraging findings. Indirect effects Electromagnetic pulse Gamma rays from a nuclear explosion produce high energy electrons through Compton scattering. For high altitude nuclear explosions, these electrons are captured in the Earth's magnetic field at altitudes between 20 and 40 kilometers where they interact with the Earth's magnetic field to produce a coherent nuclear electromagnetic pulse (NEMP) which lasts about one millisecond. Secondary effects may last for more than a second. The pulse is powerful enough to cause moderately long metal objects (such as cables) to act as antennas and generate high voltages due to interactions with the electromagnetic pulse. These voltages can destroy unshielded electronics. There are no known biological effects of EMP. The ionized air also disrupts radio traffic that would normally bounce off the ionosphere. Electronics can be shielded by wrapping them completely in conductive material such as metal foil; the effectiveness of the shielding may be less than perfect. Proper shielding is a complex subject due to the large number of variables involved. Semiconductors, especially integrated circuits, are extremely susceptible to the effects of EMP due to the close proximity of their p–n junctions, but this is not the case with thermionic tubes (or valves) which are relatively immune to EMP. A Faraday cage does not offer protection from the effects of EMP unless the mesh is designed to have holes no bigger than the smallest wavelength emitted from a nuclear explosion. Large nuclear weapons detonated at high altitudes also cause geomagnetically induced current in very long electrical conductors. The mechanism by which these geomagnetically induced currents are generated is entirely different from the gamma-ray induced pulse produced by Compton electrons. Radar blackout The heat of the explosion causes air in the vicinity to become ionized, creating the fireball. The free electrons in the fireball affect radio waves, especially at lower frequencies. This causes a large area of the sky to become opaque to radar, especially those operating in the VHF and UHF frequencies, which is common for long-range early warning radars. 
The effect is less for higher frequencies in the microwave region, as well as lasting a shorter time – the effect falls off both in strength and the affected frequencies as the fireball cools and the electrons begin to re-form onto free nuclei. A second blackout effect is caused by the emission of beta particles from the fission products. These can travel long distances, following the Earth's magnetic field lines. When they reach the upper atmosphere they cause ionization similar to the fireball but over a wider area. Calculations demonstrate that one megaton of fission, typical of a two-megaton H-bomb, will create enough beta radiation to blackout an area across for five minutes. Careful selection of the burst altitudes and locations can produce an extremely effective radar-blanking effect. The physical effects giving rise to blackouts also cause EMP, which can also cause power blackouts. The two effects are otherwise unrelated, and the similar naming can be confusing. Ionizing radiation About 5% of the energy released in a nuclear air burst is in the form of ionizing radiation: neutrons, gamma rays, alpha particles and electrons moving at speeds up to the speed of light. Gamma rays are high-energy electromagnetic radiation; the others are particles that move slower than light. The neutrons result almost exclusively from the fission and fusion reactions, while the initial gamma radiation includes that arising from these reactions as well as that resulting from the decay of short-lived fission products. The intensity of initial nuclear radiation decreases rapidly with distance from the point of burst because the radiation spreads over a larger area as it travels away from the explosion (the inverse-square law). It is also reduced by atmospheric absorption and scattering. The character of the radiation received at a given location also varies with the distance from the explosion. Near the point of the explosion, the neutron intensity is greater than the gamma intensity, but with increasing distance the neutron-gamma ratio decreases. Ultimately, the neutron component of the initial radiation becomes negligible in comparison with the gamma component. The range for significant levels of initial radiation does not increase markedly with weapon yield and, as a result, the initial radiation becomes less of a hazard with increasing yield. With larger weapons, above 50 kt (200 TJ), blast and thermal effects are so much greater in importance that prompt radiation effects can be ignored. The neutron radiation serves to transmute the surrounding matter, often rendering it radioactive. When added to the dust of radioactive material released by the bomb, a large amount of radioactive material is released into the environment. This form of radioactive contamination is known as nuclear fallout and poses the primary risk of exposure to ionizing radiation for a large nuclear weapon. Details of nuclear weapon design also affect neutron emission: the gun-type assembly Little Boy leaked far more neutrons than the implosion-type 21 kt Fat Man because the light hydrogen nuclei (protons) predominating in the exploded TNT molecules (surrounding the core of Fat Man) slowed down neutrons very efficiently while the heavier iron atoms in the steel nose forging of Little Boy scattered neutrons without absorbing much neutron energy. It was found in early experimentation that normally most of the neutrons released in the cascading chain reaction of the fission bomb are absorbed by the bomb case. 
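The rapid falloff of the initial radiation described above is often approximated as inverse-square spreading multiplied by an exponential term for atmospheric absorption and scattering. The sketch below uses an assumed relaxation length of 300 m for sea-level air purely to illustrate why the prompt dose drops off so much faster than blast and thermal effects; that value is a placeholder, since the true attenuation depends on the radiation type, its energy, and the air density.

import math

# Illustrative falloff of prompt radiation: inverse-square spreading times an
# exponential attenuation term for air. The relaxation length is an assumed,
# illustrative value, not an authoritative figure.
relaxation_length_m = 300.0       # assumption for sea-level air

def relative_prompt_dose(distance_m, reference_m=500.0):
    """Prompt dose at distance_m relative to the dose at reference_m."""
    def unnormalized(d):
        return math.exp(-d / relaxation_length_m) / d**2
    return unnormalized(distance_m) / unnormalized(reference_m)

for d in (500, 1000, 1500, 2000):
    print(f"{d:>5d} m: {relative_prompt_dose(d):.3e} of the 500 m dose")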
Building a bomb case of materials which transmitted rather than absorbed the neutrons could make the bomb more intensely lethal to humans from prompt neutron radiation. This is one of the features used in the development of the neutron bomb. Earthquake The seismic pressure waves created from an explosion may release energy within nearby plates or otherwise cause an earthquake event. An underground explosion concentrates this pressure wave, and a localized earthquake event is more probable. The first and fastest wave, equivalent to a normal earthquake's P wave, can inform the location of the test; the S wave and the Rayleigh wave follow. These can all be measured in most circumstances by seismic stations across the globe, and comparisons with actual earthquakes can be used to help determine estimated yield via differential analysis, by the modelling of the high-frequency (>4 Hz) teleseismic P wave amplitudes. However, theory does not suggest that a nuclear explosion of current yields could trigger fault rupture and cause a major quake at distances beyond a few tens of kilometers from the shot point. Summary of the effects The following table summarizes the most important effects of single nuclear explosions under ideal, clear skies, weather conditions. Tables like these are calculated from nuclear weapons effects scaling laws. Advanced computer modelling of real-world conditions and how they impact on the damage to modern urban areas has found that most scaling laws are too simplistic and tend to overestimate nuclear explosion effects. The scaling laws that were used to produce the table below assume (among other things) a perfectly level target area, no attenuating effects from urban terrain masking (e.g. skyscraper shadowing), and no enhancement effects from reflections and tunneling by city streets. As a point of comparison in the chart below, the most likely nuclear weapons to be used against countervalue city targets in a global nuclear war are in the sub-megaton range. Weapons of yields from 100 to 475 kilotons have become the most numerous in the US and Russian nuclear arsenals; for example, the warheads equipping the Russian Bulava submarine-launched ballistic missile (SLBM) have a yield of 150 kilotons. US examples are the W76 and W88 warheads, with the lower yield W76 being over twice as numerous as the W88 in the US nuclear arsenal. 1 For the direct radiation effects the slant range instead of the ground range is shown here because some effects are not given even at ground zero for some burst heights. If the effect occurs at ground zero the ground range can be derived from slant range and burst altitude (Pythagorean theorem). 2 "Acute radiation syndrome" corresponds here to a total dose of one gray, "lethal" to ten grays. This is only a rough estimate since biological conditions are neglected here. Further complicating matters, under global nuclear war scenarios with conditions similar to that during the Cold War, major strategically important cities like Moscow and Washington are likely to be hit numerous times from sub-megaton multiple independently targetable re-entry vehicles, in a cluster bomb or "cookie-cutter" configuration. It has been reported that during the height of the Cold War in the 1970s Moscow was targeted by up to 60 warheads. 
The reason that the cluster bomb concept is preferable in the targeting of cities is twofold: the first is that large single warheads are much easier for anti-ballistic missile systems to track and successfully intercept than several smaller incoming warheads. This strength-in-numbers advantage of lower yield warheads is further compounded by such warheads tending to move at higher incoming speeds, due to their smaller, more slender physics package size, assuming both nuclear weapon designs are the same (a design exception being the advanced W88). The second reason for this cluster, or 'layering', approach (using repeated hits by accurate low yield weapons) is that, along with limiting the risk of failure, it reduces individual bomb yields, and therefore reduces the possibility of any serious collateral damage to non-targeted nearby civilian areas, including that of neighboring countries. This concept was pioneered by Philip J. Dolan and others. Other phenomena Gamma rays from the nuclear processes preceding the true explosion may be partially responsible for the following fireball, as they may superheat nearby air and/or other material. The vast majority of the energy that goes on to form the fireball is in the soft X-ray region of the electromagnetic spectrum, with these X-rays being produced by the inelastic collisions of the high-speed fission and fusion products. It is these reaction products, and not the gamma rays, which contain most of the energy of the nuclear reactions in the form of kinetic energy. This kinetic energy of the fission and fusion fragments is converted into internal and then radiation energy by approximately following the process of blackbody radiation emitting in the soft X-ray region. As a result of numerous inelastic collisions, part of the kinetic energy of the fission fragments is converted into internal and radiation energy. Some of the electrons are removed entirely from the atoms, thus causing ionization. Others are raised to higher energy (or excited) states while still remaining attached to the nuclei. Within an extremely short time, perhaps a hundredth of a microsecond or so, the weapon residues consist essentially of completely and partially stripped (ionized) atoms, many of the latter being in excited states, together with the corresponding free electrons. The system then immediately emits electromagnetic (thermal) radiation, the nature of which is determined by the temperature. Since this is of the order of 10^7 degrees, most of the energy emitted within a microsecond or so is in the soft X-ray region. Because temperature depends on the average internal energy/heat of the particles in a certain volume, this internal energy or heat comes from the kinetic energy of the fragments. For an explosion in the atmosphere, the fireball quickly expands to maximum size and then begins to cool as it rises like a balloon through buoyancy in the surrounding air. As it does so, it takes on the flow pattern of a vortex ring with incandescent material in the vortex core, as seen in certain photographs. This effect is known as a mushroom cloud. Sand will fuse into glass if it is close enough to the nuclear fireball to be drawn into it and is thus heated to the necessary temperatures to do so; this is known as trinitite. At the explosion of nuclear bombs, lightning discharges sometimes occur. Smoke trails are often seen in photographs of nuclear explosions. 
These are not from the explosion; they are left by sounding rockets launched just prior to detonation. These trails allow observation of the blast's normally invisible shock wave in the moments following the explosion. The heat and airborne debris created by a nuclear explosion can cause rain; the debris is thought to do this by acting as cloud condensation nuclei. During the city firestorm which followed the Hiroshima explosion, drops of water were recorded to have been about the size of marbles. This was termed black rain, and it has served as the source of a book and film of the same name. Black rain is not unusual following large fires and is commonly produced by pyrocumulus clouds during large forest fires. The rain directly over Hiroshima on that day is said to have begun around 9 a.m., covering a wide area from the hypocenter to the northwest and raining heavily for one hour or more in some areas. The rain directly over the city may have carried neutron-activated combustion products of building materials, but it did not carry any appreciable nuclear weapon debris or fallout, although this is generally contrary to what other, less technical sources state. The "oily" black soot particles are characteristic of incomplete combustion in the city firestorm. The element einsteinium was discovered when analyzing nuclear fallout. A side-effect of the Pascal-B nuclear test during Operation Plumbbob may have resulted in the first man-made object launched on an Earth escape trajectory. The so-called "thunder well" effect from the underground explosion may have launched a metal cover plate into space at six times Earth's escape velocity, although the evidence remains subject to debate, due to aerodynamic heating likely disintegrating it before it could exit the atmosphere. Ignition of fusion in the environment Atmospheric ignition In 1942, there was speculation among the scientists developing the first nuclear weapons in the Manhattan Project that a sufficiently large nuclear explosion might ignite fusion reactions in the Earth's atmosphere. Since the proposal of the CNO cycle in 1937, it was known that not only the hydrogen in water vapor but also the carbon, nitrogen, and oxygen nuclei in the atmosphere can undergo exothermic fusion reactions to heavier nuclei; at stellar temperatures they behave as a fuel. The fear was that similar temperatures in the bomb's initial fireball might trigger the exothermic reactions 14N + 1H → 15O + γ or 14N + 14N → 24Mg + α, sustaining themselves until all the world's atmospheric nitrogen was consumed. Hans Bethe was assigned to study this hypothesis from the project's earliest days, and he eventually concluded that such a reaction could not sustain itself on a large scale due to cooling of the nuclear fireball through an inverse Compton effect. Richard Hamming was asked to make a similar calculation just before the first nuclear test, and he reached the same conclusion. Nevertheless, the notion has persisted as a rumor for many years and was the source of apocalyptic gallows humor at the Trinity test, where Enrico Fermi took side bets on atmospheric ignition. Subsequent analysis shows that, besides the cooling effect, the latter reaction, with a Gamow energy of 16.46 GeV, was unlikely to have occurred in even a single instance during the Trinity test, as the fireball core reached 1.01 × 10^11 K, equivalent to the far lower thermal energy of 8.7 MeV. 
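The temperature–energy equivalence quoted above follows directly from Boltzmann's constant (about 8.617×10^-5 eV per kelvin). The sketch below performs that conversion for the quoted Trinity fireball core temperature and compares it with the quoted Gamow energy of the nitrogen–nitrogen reaction; it is only a unit-conversion check and says nothing about the actual reaction rates involved.

# Unit-conversion check: thermal energy k_B * T for the quoted Trinity fireball
# core temperature, compared with the quoted Gamow energy of the 14N + 14N
# reaction. This says nothing about reaction rates.
BOLTZMANN_EV_PER_K = 8.617e-5

core_temperature_k = 1.01e11               # fireball core temperature (from the text)
thermal_energy_mev = BOLTZMANN_EV_PER_K * core_temperature_k / 1e6
gamow_energy_mev = 16.46e3                 # quoted Gamow energy, 16.46 GeV, in MeV

print(f"k_B * T ~ {thermal_energy_mev:.1f} MeV")
print(f"Gamow energy of 14N + 14N ~ {gamow_energy_mev:.0f} MeV "
      f"({gamow_energy_mev / thermal_energy_mev:.0f}x larger)")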
However, the possibility of fusion of hydrogen nuclei during the test, whose Gamow energies are on the order of 1 MeV, is not known. The first artificial initiation of a thermonuclear reaction is accepted to be the 1951 American nuclear test Greenhouse George. Oceanic ignition Fears of igniting the ocean's higher density of hydrogen, deuterium, or oxygen nuclei during American testing in the Pacific, remained a serious concern, especially as yields increased by orders of magnitude. These were raised from the first air burst over water and submerged tests in Operation Crossroads at Bikini Atoll, and continuing with the first full thermonuclear and megaton-level test of Ivy Mike. Survivability Survivability is highly dependent on factors such as if one is indoors or out, the size of the explosion, the proximity to the explosion, and to a lesser degree the direction of the wind carrying fallout. Death is highly likely and radiation poisoning is almost certain if one is caught in the open with no terrain or building masking effects within a radius of from a 1 megaton airburst, and the 50% chance of death from the blast extends out to ~ from the same 1 megaton atmospheric explosion. An example that highlights the variability in the real world and the effect of being indoors is Akiko Takakura. Despite the lethal radiation and blast zone extending well past her position at Hiroshima, Takakura survived the effects of a 16 kt atomic bomb at a distance of from the hypocenter, with only minor injuries, due mainly to her position in the lobby of the Bank of Japan, a reinforced concrete building, at the time. In contrast, the unknown person sitting outside, fully exposed, on the steps of the Sumitomo Bank, next door to the Bank of Japan, received lethal third-degree burns and was then likely killed by the blast, in that order, within two seconds. With medical attention, radiation exposure is survivable to 200 rems of acute dose exposure. If a group of people is exposed to a 50 to 59 rems acute (within 24 hours) radiation dose, none will get radiation sickness. If the group is exposed to 60 to 180 rems, 50% will become sick with radiation poisoning. If medically treated, all of the 60–180 rems group will survive. If the group is exposed to 200 to 450 rems, most if not all of the group will become sick; 50% will die within two to four weeks, even with medical attention. If the group is exposed to 460 to 600 rems, 100% of the group will get radiation poisoning, and 50% will die within one to three weeks. If the group is exposed to 600 to 1000 rems, 50% will die in one to three weeks. If the group is exposed to 1,000 to 5,000 rems, 100% of the group will die within 2 weeks. At 5,000 rems, 100% of the group will die within 2 days. Nuclear explosion impact on humans indoors Researchers from the University of Nicosia simulated, using high-order computational fluid dynamics, an atomic bomb explosion from a typical intercontinental ballistic missile and the resulting blast wave to see how it would affect people sheltering indoors. They found that the blast wave was enough in the moderate damage zone to topple some buildings and injure people caught outdoors. However, sturdier buildings, such as concrete structures, can remain standing. The team used advanced computer modelling to study how a nuclear blast wave speeds through a standing structure. 
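The acute-dose survival figures above amount to a small lookup table; the Python sketch below simply encodes those bands as stated in the text (the small gaps between the quoted bands are closed by simple thresholds here). It is an illustration of the quoted figures, not medical guidance.

```python
# Minimal sketch: map an acute whole-body dose (rems, within 24 hours)
# to the qualitative outcome bands described above.
def acute_dose_outcome(rems: float) -> str:
    if rems < 50:
        return "below the bands discussed above"
    if rems <= 59:
        return "no radiation sickness expected"
    if rems <= 180:
        return "about 50% become sick; all survive with medical treatment"
    if rems <= 450:
        return "most or all become sick; ~50% die within 2-4 weeks even with treatment"
    if rems <= 600:
        return "all develop radiation poisoning; ~50% die within 1-3 weeks"
    if rems <= 1000:
        return "~50% die within 1-3 weeks"
    if rems <= 5000:
        return "all die within 2 weeks"
    return "all die within 2 days"

for dose in (55, 150, 300, 500, 800, 2000, 6000):
    print(f"{dose:>5} rems: {acute_dose_outcome(dose)}")
```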
Their simulated structure featured rooms, windows, doorways, and corridors and allowed them to calculate the speed of the air following the blast wave and determine the best and worst places to be. The study showed that high airspeeds remain a considerable hazard and can still result in severe injuries or even fatalities. Furthermore, simply being in a sturdy building is not enough to avoid risk. The tight spaces can increase airspeed, and the involvement of the blast wave causes air to reflect off walls and bend around corners. In the worst cases, this can produce a force equivalent to multiple times a human's body weight. The most dangerous critical indoor locations to avoid are windows, corridors, and doors. The study received considerable interest from the international press. See also Bomb pulse Effects of nuclear explosions on human health Lists of nuclear disasters and radioactive incidents List of nuclear weapons tests Nuclear warfare Nuclear holocaust Nuclear terrorism Peaceful nuclear explosion Rope trick effect Underwater explosion Visual depictions of nuclear explosions in fiction References External links Nuclear Weapon Testing Effects – Comprehensive video archive Underground Bomb Shelters The Federation of American Scientists provide solid information on weapons of mass destruction, including nuclear weapons and their effects The Nuclear War Survival Skills is a public domain text and is an excellent source on how to survive a nuclear attack. Ground Zero: A Javascript simulation of the effects of a nuclear explosion in a city Oklahoma Geological Survey Nuclear Explosion Catalog lists 2,199 explosions with their date, country, location, yield, etc. Australian Government database of all nuclear explosions Nuclear Weapon Archive from Carey Sublette (NWA) is a reliable source of information and has links to other sources. NWA repository of blast models mainly used for the effects table (especially DOS programs BLAST and WE) HYDESim: High-Yield Detonation Effects Simulator – Mashup of Google Maps and Javascript to calculate blast effects. NUKEMAP – Google Maps/Javascript effects mapper, which includes fireball size, blast pressure, ionizing radiation, and thermal radiation as well as qualitative descriptions. Nuclear Weapons Frequently Asked Questions Atomic Forum Samuel Glasstone and Philip J. Dolan, The Effects of Nuclear Weapons, Third Edition, United States Department of Defense & Energy Research and Development Administration Available Online Nuclear Emergency and Radiation Resources Outrider believes in the power of an informed, engaged public. Nuclear weapons Nuclear physics Articles containing video clips sv:Kärnexplosion
Effects of nuclear explosions
[ "Physics" ]
7,410
[ "Nuclear physics" ]
244,611
https://en.wikipedia.org/wiki/Newton%27s%20law%20of%20universal%20gravitation
Newton's law of universal gravitation states that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers. Separated objects attract and are attracted as if all their mass were concentrated at their centers. The publication of the law has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors. This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning.<ref>Isaac Newton: "In [experimental] philosophy particular propositions are inferred from the phenomena and afterwards rendered general by induction": Principia, Book 3, General Scholium, at p.392 in Volume 2 of Andrew Motte's English translation published 1729.</ref> It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687. The equation for universal gravitation thus takes the form: $F = G \frac{m_1 m_2}{r^2}$, where F is the gravitational force acting between two objects, m1 and m2 are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant. The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798. It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has charge in place of mass and a different constant. Newton's law was later superseded by Albert Einstein's theory of general relativity, but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun). History Before Newton's law of gravity, there were many theories explaining gravity. Philosophers made observations about things falling down – and developed theories as to why they do – as early as Aristotle, who thought that rocks fall to the ground because seeking the ground was an essential part of their nature. Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations. Around 1666 Isaac Newton developed the idea that Kepler's laws must also apply to the orbit of the Moon around the Earth and then to all objects on Earth. The analysis required assuming that the gravitational force acted as if all of the mass of the Earth were concentrated at its center, an unproven conjecture at that time. 
His calculation of the Moon's orbital period was within 16% of the known value. By 1680, new values for the diameter of the Earth improved his calculated orbital period to within 1.6%, but more importantly Newton had found a proof of his earlier conjecture. In 1687 Newton published his Principia, which combined his laws of motion with new mathematical analysis to explain Kepler's empirical results. His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to their separation squared. Newton's original formula was: $F \propto \frac{m_1 m_2}{r^2}$, where the symbol $\propto$ means "is proportional to". To make this into an equal-sided formula or equation, there needed to be a multiplying factor or constant that would give the correct force of gravity no matter the value of the masses or distance between them (the gravitational constant). Newton would need an accurate measure of this constant to prove his inverse-square law. When Newton presented Book 1 of the unpublished text in April 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him, ultimately a frivolous accusation. Newton's "causes hitherto unknown" While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" that his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it." He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity (although he invented two mechanical hypotheses in 1675 and 1717). Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science. He lamented that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature". These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer has yet to be found. And in Newton's 1713 General Scholium in the second edition of Principia: "I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses. ... It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies." Modern form In modern language, the law states that every point mass attracts every other point mass by a force acting along the line joining them, with magnitude $F = G \frac{m_1 m_2}{r^2}$. Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is approximately $6.674 \times 10^{-11}~\mathrm{m^3\,kg^{-1}\,s^{-2}}$. The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G. This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. 
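As a concrete illustration of the modern form of the law, the short Python sketch below evaluates F = G·m1·m2/r² for the Earth–Moon pair; the masses and separation are standard approximate values supplied here for the example, not figures from the text.

```python
# Minimal sketch: Newton's law of universal gravitation, F = G * m1 * m2 / r**2.
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    """Magnitude of the gravitational attraction (newtons) between two point masses."""
    return G * m1_kg * m2_kg / r_m**2

# Approximate Earth-Moon values (assumed for illustration).
earth_mass = 5.972e24   # kg
moon_mass = 7.342e22    # kg
separation = 3.844e8    # m, mean centre-to-centre distance

print(f"Earth-Moon attraction ~ {gravitational_force(earth_mass, moon_mass, separation):.2e} N")
# Roughly 2e20 N, acting on each body toward the other.
```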
It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force. Bodies with spatial extent If the bodies in question have spatial extent (as opposed to being point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses that constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies. In this way, it can be shown that an object with a spherically symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. (This is not generally true for non-spherically symmetrical bodies.) For points inside a spherically symmetric distribution of matter, Newton's shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution: The portion of the mass that is located at radii causes the same force at the radius r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above). The portion of the mass that is located at radii exerts no net gravitational force at the radius r0 from the center. That is, the individual gravitational forces exerted on a point at radius r0 by the elements of the mass outside the radius r0 cancel each other. As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere. Vector form Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors. where F21 is the force applied on body 2 exerted by body 1, G is the gravitational constant, m1 and m2 are respectively the masses of bodies 1 and 2, r21 = r2 − r1 is the displacement vector between bodies 1 and 2, and is the unit vector from body 1 to body 2. It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21. Gravity field The gravitational field is a vector field that describes the gravitational force that would be applied on an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at that point. It is a generalisation of the vector form, which becomes particularly useful if more than two objects are involved (such as a rocket between the Earth and the Moon). For two objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as: so that we can write: This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s2. Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. 
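The vector form and the gravitational field g(r) described above translate almost verbatim into code; the sketch below computes the field of a single point mass at an arbitrary position, using plain tuples for vectors. The function name and the Earth values are assumptions made for the example.

```python
# Minimal sketch: gravitational field of a point mass,
# g(r) = -G * M / |r|^2 * r_hat, where r points from the mass to the field point.
import math

G = 6.674e-11  # N*m^2/kg^2

def gravitational_field(mass_kg: float, source, point):
    """Field vector (m/s^2) at `point` due to a point mass located at `source`."""
    rel = tuple(p - s for p, s in zip(point, source))
    dist = math.sqrt(sum(c * c for c in rel))
    factor = -G * mass_kg / dist**3  # combines 1/r^2 with the unit vector rel/dist
    return tuple(factor * c for c in rel)

# Field at the Earth's surface, treating the Earth as a point mass at the origin
# (approximate mass and radius assumed for illustration).
g_surface = gravitational_field(5.972e24, (0.0, 0.0, 0.0), (6.371e6, 0.0, 0.0))
print(g_surface)  # roughly (-9.8, 0, 0) m/s^2, pointing back toward the mass
```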
This has the consequence that there exists a gravitational potential field V(r) such that If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case As per Gauss's law, field in a symmetric body can be found by the mathematical equation: where is a closed surface and is the mass enclosed by the surface. Hence, for a hollow sphere of radius and total mass , For a uniform solid sphere of radius and total mass , Limitations Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities and are both much less than one, where is the gravitational potential, is the velocity of the objects being studied, and is the speed of light in vacuum. For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since where is the radius of the Earth's orbit around the Sun. In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity. Observations conflicting with Newton's formula Newton's theory does not fully explain the precession of the perihelion of the orbits of the planets, especially that of Mercury, which was detected long after the life of Newton. There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions from the other planets, and the observed precession, made with advanced telescopes during the 19th century. The predicted angular deflection of light rays by gravity (treated as particles travelling at the expected speed) that is calculated by using Newton's theory is only one-half of the deflection that is observed by astronomers. Calculations using general relativity are in much closer agreement with the astronomical observations. In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey both Newton's law of universal gravitation and general relativity. Astrophysicists, however, explain this marked phenomenon by assuming the presence of large amounts of dark matter. Einstein's solution The first two conflicts with observations above were explained by Einstein's theory of general relativity, in which gravitation is a manifestation of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force resulting from the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime. Extensions In recent years, quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry. Solutions The two-body problem has been completely solved, as has the restricted three-body problem. 
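Returning to the Limitations discussion above, which says Newtonian gravity suffices when the potential term and (v/c)² are both much less than one, a quick numerical check for the Earth–Sun system shows just how small these quantities are; the solar mass, orbital radius, and orbital speed used below are standard approximate values assumed for the example.

```python
# Minimal sketch: size of the relativistic correction parameters for the Earth-Sun system,
# phi/c^2 = G*M_sun/(r*c^2) and (v/c)^2, as discussed in the Limitations section above.
G = 6.674e-11        # N*m^2/kg^2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # kg (assumed standard value)
R_ORBIT = 1.496e11   # m, mean Earth-Sun distance (assumed standard value)
V_EARTH = 2.98e4     # m/s, Earth's mean orbital speed (assumed standard value)

phi_over_c2 = G * M_SUN / (R_ORBIT * C**2)
v_over_c_sq = (V_EARTH / C) ** 2

print(f"phi/c^2 ~ {phi_over_c2:.1e}")   # about 1e-8
print(f"(v/c)^2 ~ {v_over_c_sq:.1e}")   # about 1e-8
# Both are ~1e-8 << 1, so Newtonian gravity describes the Earth's orbit extremely well.
```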
The n-body problem is an ancient, classical problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem – from the time of the Greeks and on – has been motivated by the desire to understand the motions of the Sun, planets and the visible stars. The classical problem can be informally stated as: given the quasi-steady orbital properties (instantaneous position, velocity and time) of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too. The n''-body problem in general relativity is considerably more difficult to solve. See also References External links Newton's Law of Universal Gravitation Javascript calculator Theories of gravity Isaac Newton Articles containing video clips Scientific laws Concepts in astronomy Newtonian gravity Eponymous laws of physics
Newton's law of universal gravitation
[ "Physics", "Astronomy", "Mathematics" ]
3,065
[ "Concepts in astronomy", "Theoretical physics", "Mathematical objects", "Scientific laws", "Equations", "Theories of gravity" ]
245,186
https://en.wikipedia.org/wiki/Deletion%20%28genetics%29
In genetics, a deletion (also called gene deletion, deficiency, or deletion mutation) (sign: Δ) is a mutation (a genetic aberration) in which a part of a chromosome or a sequence of DNA is left out during DNA replication. Any number of nucleotides can be deleted, from a single base to an entire piece of chromosome. Some chromosomes have fragile spots where breaks occur, which result in the deletion of a part of the chromosome. The breaks can be induced by heat, viruses, radiation, or chemical reactions. When a chromosome breaks, if a part of it is deleted or lost, the missing piece of chromosome is referred to as a deletion or a deficiency. For synapsis to occur between a chromosome with a large intercalary deficiency and a normal complete homolog, the unpaired region of the normal homolog must loop out of the linear structure into a deletion or compensation loop. The smallest single base deletion mutations occur by a single base flipping in the template DNA, followed by template DNA strand slippage, within the DNA polymerase active site. Deletions can be caused by errors in chromosomal crossover during meiosis, which causes several serious genetic diseases. Deletions that do not occur in multiples of three bases can cause a frameshift by changing the 3-nucleotide protein reading frame of the genetic sequence. Deletions are representative of eukaryotic organisms, including humans and not in prokaryotic organisms, such as bacteria. Causes Causes include the following: Losses from translocation Chromosomal crossovers within a chromosomal inversion Unequal crossing over Breaking without rejoining Types Types of deletion include the following: Terminal deletion – a deletion that occurs towards the end of a chromosome. Intercalary/interstitial deletion – a deletion that occurs from the interior of a chromosome. Microdeletion – a relatively small amount of deletion (up to 5Mb that could include a dozen genes). Micro-deletion is usually found in children with physical abnormalities. A large amount of deletion would result in immediate abortion (miscarriage). Nomenclature The International System for Human Cytogenomic Nomenclature (ISCN) is an international standard for human chromosome nomenclature, which includes band names, symbols and abbreviated terms used in the description of human chromosome and chromosome abnormalities. Abbreviations include a minus sign (−) for chromosome deletions, and del for deletions of parts of a chromosome. Effects Small deletions are less likely to be fatal; large deletions are usually fatal – there are always variations based on which genes are lost. Some medium-sized deletions lead to recognizable human disorders, e.g. Williams syndrome. Deletion of a number of pairs that is not evenly divisible by three will lead to a frameshift mutation, causing all of the codons occurring after the deletion to be read incorrectly during translation, producing a severely altered and potentially nonfunctional protein. In contrast, a deletion that is evenly divisible by three is called an in-frame deletion. Deletions are responsible for an array of genetic disorders, including some cases of male infertility, two thirds of cases of Duchenne muscular dystrophy, and two thirds of cases of cystic fibrosis (those caused by ΔF508). Deletion of part of the short arm of chromosome 5 results in Cri du chat syndrome. Deletions in the SMN-encoding gene cause spinal muscular atrophy, the most common genetic cause of infant death. 
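The effect of a deletion on the reading frame, described above, is easy to demonstrate: deleting a number of bases not divisible by three shifts every downstream codon, while an in-frame deletion removes whole codons and leaves the rest intact. The sketch below uses a made-up sequence purely for illustration.

```python
# Minimal sketch: how a deletion changes the codon reading frame.
def codons(seq: str):
    """Split a DNA sequence into consecutive 3-base codons (trailing bases dropped)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

def delete(seq: str, start: int, length: int) -> str:
    """Return the sequence with `length` bases removed starting at index `start`."""
    return seq[:start] + seq[start + length:]

original = "ATGGCCTTTGGGAAATAG"          # hypothetical coding sequence
print("original  :", codons(original))

in_frame = delete(original, 3, 3)        # remove 3 bases: one codon lost, frame preserved
print("in-frame  :", codons(in_frame))

frameshift = delete(original, 3, 2)      # remove 2 bases: every downstream codon changes
print("frameshift:", codons(frameshift))
```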
Microdeletions are associated with many different conditions, including Angelman syndrome, Prader-Willi syndrome, and DiGeorge syndrome. Some syndromes, including Angelman syndrome and Prader-Willi syndrome, are associated with both microdeletions and genomic imprinting, meaning that the same microdeletion can cause two different syndromes depending on which parent the deletion came from. Recent work suggests that some deletions of highly conserved sequences (CONDELs) may be responsible for the evolutionary differences present among closely related species. Such deletions in humans, referred to as hCONDELs, may be responsible for the anatomical and behavioral differences between humans, chimpanzees and other mammals such as apes or monkeys. Recent comprehensive patient-level classification and quantification of driver events in TCGA cohorts revealed that there are on average 12 driver events per tumor, of which 2.1 are deletions of tumor suppressors. Detection The introduction of molecular techniques in conjunction with classical cytogenetic methods has in recent years greatly improved the diagnostic potential for chromosomal abnormalities. In particular, microarray-comparative genomic hybridization (CGH) based on the use of BAC clones promises a sensitive strategy for the detection of DNA copy-number changes on a genome-wide scale. The resolution of detection could be as high as >30,000 "bands", and the size of chromosomal deletion detected could be as small as 5–20 kb in length. Other computational methods, such as end-sequence profiling, have been developed to detect deletions from DNA sequencing data. Mitochondrial DNA deletions In the yeast Saccharomyces cerevisiae, the nuclear genes Rad51p, Rad52p and Rad59p encode proteins that are necessary for recombinational repair and are employed in the repair of double strand breaks in mitochondrial DNA. Loss of these proteins decreases the rate of spontaneous DNA deletion events in mitochondria. This finding implies that the repair of DNA double-strand breaks by homologous recombination is a step in the formation of mitochondrial DNA deletions. See also Indel Chromosome abnormalities Null allele List of genetic disorders Medical genetics Microdeletion syndrome Chromosomal deletion syndrome Insertion (genetics) 10q26 deletion References Modification of genetic information Mutation
Deletion (genetics)
[ "Biology" ]
1,259
[ "Modification of genetic information", "Molecular genetics" ]
245,203
https://en.wikipedia.org/wiki/Sphenic%20number
In number theory, a sphenic number (from , 'wedge') is a positive integer that is the product of three distinct prime numbers. Because there are infinitely many prime numbers, there are also infinitely many sphenic numbers. Definition A sphenic number is a product pqr where p, q, and r are three distinct prime numbers. In other words, the sphenic numbers are the square-free 3-almost primes. Examples The smallest sphenic number is 30 = 2 × 3 × 5, the product of the smallest three primes. The first few sphenic numbers are 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, ... The largest known sphenic number at any time can be obtained by multiplying together the three largest known primes. Divisors All sphenic numbers have exactly eight divisors. If we express the sphenic number as , where p, q, and r are distinct primes, then the set of divisors of n will be: The converse does not hold. For example, 24 is not a sphenic number, but it has exactly eight divisors. Properties All sphenic numbers are by definition squarefree, because the prime factors must be distinct. The Möbius function of any sphenic number is −1. The cyclotomic polynomials , taken over all sphenic numbers n, may contain arbitrarily large coefficients (for n a product of two primes the coefficients are or 0). Any multiple of a sphenic number (except by 1) is not sphenic. This is easily provable by the multiplication process at a minimum adding another prime factor, or raising an existing factor to a higher power. Consecutive sphenic numbers The first case of two consecutive sphenic integers is 230 = 2×5×23 and 231 = 3×7×11. The first case of three is 1309 = 7×11×17, 1310 = 2×5×131, and 1311 = 3×19×23. There is no case of more than three, because every fourth consecutive positive integer is divisible by 4 = 2×2 and therefore not squarefree. The numbers 2013 (3×11×61), 2014 (2×19×53), and 2015 (5×13×31) are all sphenic. The next three consecutive sphenic years will be 2665 (5×13×41), 2666 (2×31×43) and 2667 (3×7×127) . See also Semiprimes, products of two prime numbers. Almost prime References Integer sequences Prime numbers
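The definition above (a square-free product of exactly three distinct primes, hence exactly eight divisors) translates directly into a short test; the Python sketch below factors small integers by trial division and lists the first few sphenic numbers, matching the sequence quoted above.

```python
# Minimal sketch: test whether n is sphenic, i.e. a product of three distinct primes.
def prime_factorization(n: int) -> dict:
    """Return {prime: exponent} for n >= 2 using trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_sphenic(n: int) -> bool:
    f = prime_factorization(n) if n >= 2 else {}
    return len(f) == 3 and all(e == 1 for e in f.values())

print([n for n in range(2, 140) if is_sphenic(n)])
# [30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138]
```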
Sphenic number
[ "Mathematics" ]
573
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
245,298
https://en.wikipedia.org/wiki/Barcan%20formula
In quantified modal logic, the Barcan formula and the converse Barcan formula (more accurately, schemata rather than formulas) (i) syntactically state principles of interchange between quantifiers and modalities; (ii) semantically state a relation between domains of possible worlds. The formulas were introduced as axioms by Ruth Barcan Marcus, in the first extensions of modal propositional logic to include quantification. Related formulas include the Buridan formula. The Barcan formula The Barcan formula is: $\forall x \Box Fx \rightarrow \Box \forall x Fx$. In English, the schema reads: If every x is necessarily F, then it is necessary that every x is F. It is equivalent to $\Diamond \exists x Fx \rightarrow \exists x \Diamond Fx$. The Barcan formula has generated some controversy because—in terms of possible world semantics—it implies that all objects which exist in any possible world (accessible to the actual world) exist in the actual world, i.e. that domains cannot grow when one moves to accessible worlds. This thesis is sometimes known as actualism—i.e. that there are no merely possible individuals. There is some debate as to the informal interpretation of the Barcan formula and its converse. An informal argument against the plausibility of the Barcan formula would be the interpretation of the predicate Fx as "x is a machine that can tap all the energy locked in the waves of the Atlantic Ocean in a practical and efficient way". In its equivalent form above, the antecedent seems plausible since it is at least theoretically possible that such a machine could exist. However, it is not obvious that this implies that there exists a machine that possibly could tap the energy of the Atlantic. Converse Barcan formula The converse Barcan formula is: $\Box \forall x Fx \rightarrow \forall x \Box Fx$. It is equivalent to $\exists x \Diamond Fx \rightarrow \Diamond \exists x Fx$. If a frame is based on a symmetric accessibility relation, then the Barcan formula will be valid in the frame if, and only if, the converse Barcan formula is valid in the frame. It states that domains cannot shrink as one moves to accessible worlds, i.e. that individuals cannot cease to exist. The converse Barcan formula is taken to be more plausible than the Barcan formula. See also Commutative property References External links Barcan both ways by Melvin Fitting Contingent Objects and the Barcan Formula by Hayaki Reina Modal logic
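A small variable-domain Kripke model makes the connection between the Barcan formula and non-growing domains concrete. The Python sketch below uses an invented two-world model, purely for illustration: it evaluates ∀x□Fx and □∀xFx at the actual world and shows the implication failing when the domain grows along the accessibility relation.

```python
# Minimal sketch: the Barcan formula (forall x. []Fx) -> [](forall x. Fx)
# can fail in a variable-domain Kripke model where domains grow along accessibility.
accessible = {"w0": ["w0", "w1"], "w1": ["w1"]}   # w1 is accessible from the actual world w0
domain = {"w0": {"a"}, "w1": {"a", "b"}}          # a new individual b exists only in w1
F = {("w0", "a"), ("w1", "a")}                    # F holds of a in every world, never of b

def box_F(world: str, individual: str) -> bool:
    """[]F(individual) at `world`: F holds of it in every accessible world."""
    return all((w, individual) in F for w in accessible[world])

def forall_F(world: str) -> bool:
    """forall x. Fx at `world`, quantifying over that world's own domain."""
    return all((world, x) in F for x in domain[world])

antecedent = all(box_F("w0", x) for x in domain["w0"])    # forall x. []Fx at w0
consequent = all(forall_F(w) for w in accessible["w0"])   # [](forall x. Fx) at w0
print("forall x. []Fx at w0  :", antecedent)    # True  (a is F in every accessible world)
print("[](forall x. Fx) at w0:", consequent)    # False (b is not F in w1)
print("Barcan instance holds :", (not antecedent) or consequent)   # False: the schema fails here
```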
Barcan formula
[ "Mathematics" ]
468
[ "Mathematical logic", "Modal logic" ]
245,466
https://en.wikipedia.org/wiki/Sheaf%20%28mathematics%29
In mathematics, a sheaf (: sheaves) is a tool for systematically tracking data (such as sets, abelian groups, rings) attached to the open sets of a topological space and defined locally with regard to them. For example, for each open set, the data could be the ring of continuous functions defined on that open set. Such data are well-behaved in that they can be restricted to smaller open sets, and also the data assigned to an open set are equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original open set (intuitively, every datum is the sum of its constituent data). The field of mathematics that studies sheaves is called sheaf theory. Sheaves are understood conceptually as general and abstract objects. Their precise definition is rather technical. They are specifically defined as sheaves of sets or as sheaves of rings, for example, depending on the type of data assigned to the open sets. There are also maps (or morphisms) from one sheaf to another; sheaves (of a specific type, such as sheaves of abelian groups) with their morphisms on a fixed topological space form a category. On the other hand, to each continuous map there is associated both a direct image functor, taking sheaves and their morphisms on the domain to sheaves and morphisms on the codomain, and an inverse image functor operating in the opposite direction. These functors, and certain variants of them, are essential parts of sheaf theory. Due to their general nature and versatility, sheaves have several applications in topology and especially in algebraic and differential geometry. First, geometric structures such as that of a differentiable manifold or a scheme can be expressed in terms of a sheaf of rings on the space. In such contexts, several geometric constructions such as vector bundles or divisors are naturally specified in terms of sheaves. Second, sheaves provide the framework for a very general cohomology theory, which encompasses also the "usual" topological cohomology theories such as singular cohomology. Especially in algebraic geometry and the theory of complex manifolds, sheaf cohomology provides a powerful link between topological and geometric properties of spaces. Sheaves also provide the basis for the theory of D-modules, which provide applications to the theory of differential equations. In addition, generalisations of sheaves to more general settings than topological spaces, such as Grothendieck topology, have provided applications to mathematical logic and to number theory. Definitions and examples In many mathematical branches, several structures defined on a topological space (e.g., a differentiable manifold) can be naturally localised or restricted to open subsets : typical examples include continuous real-valued or complex-valued functions, -times differentiable (real-valued or complex-valued) functions, bounded real-valued functions, vector fields, and sections of any vector bundle on the space. The ability to restrict data to smaller open subsets gives rise to the concept of presheaves. Roughly speaking, sheaves are then those presheaves, where local data can be glued to global data. Presheaves Let be a topological space. A presheaf of sets on consists of the following data: For each open set , there exists a set . This set is also denoted . The elements in this set are called the sections of over . The sections of over are called the global sections of . For each inclusion of open sets , a function . 
In view of many of the examples below, the morphisms are called restriction morphisms. If , then its restriction is often denoted by analogy with restriction of functions. The restriction morphisms are required to satisfy two additional (functorial) properties: For every open set of , the restriction morphism is the identity morphism on . If we have three open sets , then the composite Informally, the second axiom says it does not matter whether we restrict to in one step or restrict first to , then to . A concise functorial reformulation of this definition is given further below. Many examples of presheaves come from different classes of functions: to any , one can assign the set of continuous real-valued functions on . The restriction maps are then just given by restricting a continuous function on to a smaller open subset , which again is a continuous function. The two presheaf axioms are immediately checked, thereby giving an example of a presheaf. This can be extended to a presheaf of holomorphic functions and a presheaf of smooth functions . Another common class of examples is assigning to the set of constant real-valued functions on . This presheaf is called the constant presheaf associated to and is denoted . Sheaves Given a presheaf, a natural question to ask is to what extent its sections over an open set are specified by their restrictions to open subsets of . A sheaf is a presheaf whose sections are, in a technical sense, uniquely determined by their restrictions. Axiomatically, a sheaf is a presheaf that satisfies both of the following axioms: (Locality) Suppose is an open set, is an open cover of with for all , and are sections. If for all , then . (Gluing) Suppose is an open set, is an open cover of with for all , and is a family of sections. If all pairs of sections agree on the overlap of their domains, that is, if for all , then there exists a section such that for all . In both of these axioms, the hypothesis on the open cover is equivalent to the assumption that . The section whose existence is guaranteed by axiom 2 is called the gluing, concatenation, or collation of the sections . By axiom 1 it is unique. Sections and satisfying the agreement precondition of axiom 2 are often called compatible ; thus axioms 1 and 2 together state that any collection of pairwise compatible sections can be uniquely glued together. A separated presheaf, or monopresheaf, is a presheaf satisfying axiom 1. The presheaf consisting of continuous functions mentioned above is a sheaf. This assertion reduces to checking that, given continuous functions which agree on the intersections , there is a unique continuous function whose restriction equals the . By contrast, the constant presheaf is usually not a sheaf as it fails to satisfy the locality axiom on the empty set (this is explained in more detail at constant sheaf). Presheaves and sheaves are typically denoted by capital letters, being particularly common, presumably for the French word for sheaf, faisceau. Use of calligraphic letters such as is also common. It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. This observation is used to construct another example which is crucial in algebraic geometry, namely quasi-coherent sheaves. 
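The locality and gluing axioms above can be seen in action for the sheaf of (arbitrary) functions on a finite set: compatible sections over a cover determine a unique section over the union. The Python sketch below is a toy illustration with invented open sets and sections, not a general sheaf implementation.

```python
# Minimal sketch: gluing compatible sections of the "sheaf of functions" on a finite space.
# A section over an open set U is represented as a dict {point: value} with keys exactly U.
def compatible(s1: dict, s2: dict) -> bool:
    """Do two sections agree on the overlap of their domains?"""
    overlap = s1.keys() & s2.keys()
    return all(s1[p] == s2[p] for p in overlap)

def glue(sections):
    """Glue pairwise-compatible sections into one section over the union of their domains."""
    for s in sections:
        for other in sections:
            if not compatible(s, other):
                raise ValueError("sections disagree on an overlap; the gluing axiom does not apply")
    glued = {}
    for s in sections:
        glued.update(s)   # consistent by the compatibility check, so the result is well defined
    return glued

# Invented cover of U = {1, 2, 3, 4} by U1 = {1, 2, 3} and U2 = {2, 3, 4}.
s1 = {1: "x", 2: "y", 3: "z"}
s2 = {2: "y", 3: "z", 4: "w"}
print(glue([s1, s2]))   # {1: 'x', 2: 'y', 3: 'z', 4: 'w'} -- the unique glued section over U
```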
Here the topological space in question is the spectrum of a commutative ring , whose points are the prime ideals in . The open sets form a basis for the Zariski topology on this space. Given an -module , there is a sheaf, denoted by on the , that satisfies the localization of at . There is another characterization of sheaves that is equivalent to the previously discussed. A presheaf is a sheaf if and only if for any open and any open cover of , is the fibre product . This characterization is useful in construction of sheaves, for example, if are abelian sheaves, then the kernel of sheaves morphism is a sheaf, since projective limits commutes with projective limits. On the other hand, the cokernel is not always a sheaf because inductive limit not necessarily commutes with projective limits. One of the way to fix this is to consider Noetherian topological spaces; every open sets are compact so that the cokernel is a sheaf, since finite projective limits commutes with inductive limits. Further examples Sheaf of sections of a continuous map Any continuous map of topological spaces determines a sheaf on by setting Any such is commonly called a section of , and this example is the reason why the elements in are generally called sections. This construction is especially important when is the projection of a fiber bundle onto its base space. For example, the sheaves of smooth functions are the sheaves of sections of the trivial bundle. Another example: the sheaf of sections of is the sheaf which assigns to any the set of branches of the complex logarithm on . Given a point and an abelian group , the skyscraper sheaf is defined as follows: if is an open set containing , then . If does not contain , then , the trivial group. The restriction maps are either the identity on , if both open sets contain , or the zero map otherwise. Sheaves on manifolds On an -dimensional -manifold , there are a number of important sheaves, such as the sheaf of -times continuously differentiable functions (with ). Its sections on some open are the -functions . For , this sheaf is called the structure sheaf and is denoted . The nonzero functions also form a sheaf, denoted . Differential forms (of degree ) also form a sheaf . In all these examples, the restriction morphisms are given by restricting functions or forms. The assignment sending to the compactly supported functions on is not a sheaf, since there is, in general, no way to preserve this property by passing to a smaller open subset. Instead, this forms a cosheaf, a dual concept where the restriction maps go in the opposite direction than with sheaves. However, taking the dual of these vector spaces does give a sheaf, the sheaf of distributions. Presheaves that are not sheaves In addition to the constant presheaf mentioned above, which is usually not a sheaf, there are further examples of presheaves that are not sheaves: Let be the two-point topological space with the discrete topology. Define a presheaf as follows: The restriction map is the projection of onto its first coordinate, and the restriction map is the projection of onto its second coordinate. is a presheaf that is not separated: a global section is determined by three numbers, but the values of that section over and determine only two of those numbers. So while we can glue any two sections over and , we cannot glue them uniquely. Let be the real line, and let be the set of bounded continuous functions on . This is not a sheaf because it is not always possible to glue. 
For example, let be the set of all such that . The identity function is bounded on each . Consequently, we get a section on . However, these sections do not glue, because the function is not bounded on the real line. Consequently is a presheaf, but not a sheaf. In fact, is separated because it is a sub-presheaf of the sheaf of continuous functions. Motivating sheaves from complex analytic spaces and algebraic geometry One of the historical motivations for sheaves have come from studying complex manifolds, complex analytic geometry, and scheme theory from algebraic geometry. This is because in all of the previous cases, we consider a topological space together with a structure sheaf giving it the structure of a complex manifold, complex analytic space, or scheme. This perspective of equipping a topological space with a sheaf is essential to the theory of locally ringed spaces (see below). Technical challenges with complex manifolds One of the main historical motivations for introducing sheaves was constructing a device which keeps track of holomorphic functions on complex manifolds. For example, on a compact complex manifold (like complex projective space or the vanishing locus in projective space of a homogeneous polynomial), the only holomorphic functionsare the constant functions. This means there exist two compact complex manifolds which are not isomorphic, but nevertheless their rings of global holomorphic functions, denoted , are isomorphic. Contrast this with smooth manifolds where every manifold can be embedded inside some , hence its ring of smooth functions comes from restricting the smooth functions from . Another complexity when considering the ring of holomorphic functions on a complex manifold is given a small enough open set , the holomorphic functions will be isomorphic to . Sheaves are a direct tool for dealing with this complexity since they make it possible to keep track of the holomorphic structure on the underlying topological space of on arbitrary open subsets . This means as becomes more complex topologically, the ring can be expressed from gluing the . Note that sometimes this sheaf is denoted or just , or even when we want to emphasize the space the structure sheaf is associated to. Tracking submanifolds with sheaves Another common example of sheaves can be constructed by considering a complex submanifold . There is an associated sheaf which takes an open subset and gives the ring of holomorphic functions on . This kind of formalism was found to be extremely powerful and motivates a lot of homological algebra such as sheaf cohomology since an intersection theory can be built using these kinds of sheaves from the Serre intersection formula. Operations with sheaves Morphisms Morphisms of sheaves are, roughly speaking, analogous to functions between them. In contrast to a function between sets, which is simply an assignment of outputs to inputs, morphisms of sheaves are also required to be compatible with the local–global structures of the underlying sheaves. This idea is made precise in the following definition. Let and be two sheaves of sets (respectively abelian groups, rings, etc.) on . A morphism consists of a morphism of sets (respectively abelian groups, rings, etc.) for each open set of , subject to the condition that this morphism is compatible with restrictions. In other words, for every open subset of an open set , the following diagram is commutative. 
For example, taking the derivative gives a morphism of sheaves on , Indeed, given an (-times continuously differentiable) function (with in open), the restriction (to a smaller open subset ) of its derivative equals the derivative of . With this notion of morphism, sheaves of sets (respectively abelian groups, rings, etc.) on a fixed topological space form a category. The general categorical notions of mono-, epi- and isomorphisms can therefore be applied to sheaves. A morphism of sheaves on is an isomorphism (respectively monomorphism) if and only if there exists an open cover of such that are isomorphisms (respectively injective morphisms) of sets (respectively abelian groups, rings, etc.) for all . These statements give examples of how to work with sheaves using local information, but it's important to note that we cannot check if a morphism of sheaves is an epimorphism in the same manner. Indeed the statement that maps on the level of open sets are not always surjective for epimorphisms of sheaves is equivalent to non-exactness of the global sections functor—or equivalently, to non-triviality of sheaf cohomology. Stalks of a sheaf The stalk of a sheaf captures the properties of a sheaf "around" a point , generalizing the germs of functions. Here, "around" means that, conceptually speaking, one looks at smaller and smaller neighborhoods of the point. Of course, no single neighborhood will be small enough, which requires considering a limit of some sort. More precisely, the stalk is defined by the direct limit being over all open subsets of containing the given point . In other words, an element of the stalk is given by a section over some open neighborhood of , and two such sections are considered equivalent if their restrictions agree on a smaller neighborhood. The natural morphism takes a section in to its germ at . This generalises the usual definition of a germ. In many situations, knowing the stalks of a sheaf is enough to control the sheaf itself. For example, whether or not a morphism of sheaves is a monomorphism, epimorphism, or isomorphism can be tested on the stalks. In this sense, a sheaf is determined by its stalks, which are a local data. By contrast, the global information present in a sheaf, i.e., the global sections, i.e., the sections on the whole space , typically carry less information. For example, for a compact complex manifold , the global sections of the sheaf of holomorphic functions are just , since any holomorphic function is constant by Liouville's theorem. Turning a presheaf into a sheaf It is frequently useful to take the data contained in a presheaf and to express it as a sheaf. It turns out that there is a best possible way to do this. It takes a presheaf and produces a new sheaf called the sheafification or sheaf associated to the presheaf . For example, the sheafification of the constant presheaf (see above) is called the constant sheaf. Despite its name, its sections are locally constant functions. The sheaf can be constructed using the étalé space of , namely as the sheaf of sections of the map Another construction of the sheaf proceeds by means of a functor from presheaves to presheaves that gradually improves the properties of a presheaf: for any presheaf , is a separated presheaf, and for any separated presheaf , is a sheaf. The associated sheaf is given by . 
The idea that the sheaf is the best possible approximation to by a sheaf is made precise using the following universal property: there is a natural morphism of presheaves so that for any sheaf and any morphism of presheaves , there is a unique morphism of sheaves such that . In fact, is the left adjoint functor to the inclusion functor (or forgetful functor) from the category of sheaves to the category of presheaves, and is the unit of the adjunction. In this way, the category of sheaves turns into a Giraud subcategory of presheaves. This categorical situation is the reason why the sheafification functor appears in constructing cokernels of sheaf morphisms or tensor products of sheaves, but not for kernels, say. Subsheaves, quotient sheaves If is a subsheaf of a sheaf of abelian groups, then the quotient sheaf is the sheaf associated to the presheaf ; in other words, the quotient sheaf fits into an exact sequence of sheaves of abelian groups; (this is also called a sheaf extension.) Let be sheaves of abelian groups. The set of morphisms of sheaves from to forms an abelian group (by the abelian group structure of ). The sheaf hom of and , denoted by, is the sheaf of abelian groups where is the sheaf on given by (note sheafification is not needed here). The direct sum of and is the sheaf given by , and the tensor product of and is the sheaf associated to the presheaf . All of these operations extend to sheaves of modules over a sheaf of rings ; the above is the special case when is the constant sheaf . Basic functoriality Since the data of a (pre-)sheaf depends on the open subsets of the base space, sheaves on different topological spaces are unrelated to each other in the sense that there are no morphisms between them. However, given a continuous map between two topological spaces, pushforward and pullback relate sheaves on to those on and vice versa. Direct image The pushforward (also known as direct image) of a sheaf on is the sheaf defined by Here is an open subset of , so that its preimage is open in by the continuity of . This construction recovers the skyscraper sheaf mentioned above: where is the inclusion, and is regarded as a sheaf on the singleton by . For a map between locally compact spaces, the direct image with compact support is a subsheaf of the direct image. By definition, consists of those whose support is mapped properly. If is proper itself, then , but in general they disagree. Inverse image The pullback or inverse image goes the other way: it produces a sheaf on , denoted out of a sheaf on . If is the inclusion of an open subset, then the inverse image is just a restriction, i.e., it is given by for an open in . A sheaf (on some space ) is called locally constant if by some open subsets such that the restriction of to all these open subsets is constant. On a wide range of topological spaces , such sheaves are equivalent to representations of the fundamental group . For general maps , the definition of is more involved; it is detailed at inverse image functor. The stalk is an essential special case of the pullback in view of a natural identification, where is as above: More generally, stalks satisfy . Extension by zero For the inclusion of an open subset, the extension by zero (pronounced "j lower shriek of F") of a sheaf of abelian groups on is the sheafification of the presheaf defined by if and otherwise. 
For a sheaf on , this construction is in a sense complementary to , where is the inclusion of the complement of : for in , and the stalk is zero otherwise, while for in , and equals otherwise. More generally, if is a locally closed subset, then there exists an open of containing such that is closed in . Let and be the natural inclusions. Then the extension by zero of a sheaf on is defined by . Due to its nice behavior on stalks, the extension by zero functor is useful for reducing sheaf-theoretic questions on to ones on the strata of a stratification, i.e., a decomposition of into smaller, locally closed subsets. Complements Sheaves in more general categories In addition to (pre-)sheaves as introduced above, where is merely a set, it is in many cases important to keep track of additional structure on these sections. For example, the sections of the sheaf of continuous functions naturally form a real vector space, and restriction is a linear map between these vector spaces. Presheaves with values in an arbitrary category are defined by first considering the category of open sets on to be the posetal category whose objects are the open sets of and whose morphisms are inclusions. Then a -valued presheaf on is the same as a contravariant functor from to . Morphisms in this category of functors, also known as natural transformations, are the same as the morphisms defined above, as can be seen by unraveling the definitions. If the target category admits all limits, a -valued presheaf is a sheaf if the following diagram is an equalizer for every open cover of any open set : Here the first map is the product of the restriction maps and the pair of arrows the products of the two sets of restrictions and If is an abelian category, this condition can also be rephrased by requiring that there is an exact sequence A particular case of this sheaf condition occurs for being the empty set, and the index set also being empty. In this case, the sheaf condition requires to be the terminal object in . Ringed spaces and sheaves of modules In several geometrical disciplines, including algebraic geometry and differential geometry, the spaces come along with a natural sheaf of rings, often called the structure sheaf and denoted by . Such a pair is called a ringed space. Many types of spaces can be defined as certain types of ringed spaces. Commonly, all the stalks of the structure sheaf are local rings, in which case the pair is called a locally ringed space. For example, an -dimensional manifold is a locally ringed space whose structure sheaf consists of -functions on the open subsets of . The property of being a locally ringed space translates into the fact that such a function, which is nonzero at a point , is also non-zero on a sufficiently small open neighborhood of . Some authors actually define real (or complex) manifolds to be locally ringed spaces that are locally isomorphic to the pair consisting of an open subset of (respectively ) together with the sheaf of (respectively holomorphic) functions. Similarly, schemes, the foundational notion of spaces in algebraic geometry, are locally ringed spaces that are locally isomorphic to the spectrum of a ring. Given a ringed space, a sheaf of modules is a sheaf such that on every open set of , is an -module and for every inclusion of open sets , the restriction map is compatible with the restriction map : the restriction of fs is the restriction of times that of for any in and in . Most important geometric objects are sheaves of modules. 
For example, there is a one-to-one correspondence between vector bundles and locally free sheaves of -modules. This paradigm applies to real vector bundles, complex vector bundles, or vector bundles in algebraic geometry (where consists of smooth functions, holomorphic functions, or regular functions, respectively). Sheaves of solutions to differential equations are -modules, that is, modules over the sheaf of differential operators. On any topological space, modules over the constant sheaf are the same as sheaves of abelian groups in the sense above. There is a different inverse image functor for sheaves of modules over sheaves of rings. This functor is usually denoted and it is distinct from . See inverse image functor. Finiteness conditions for sheaves of modules Finiteness conditions for module over commutative rings give rise to similar finiteness conditions for sheaves of modules: is called finitely generated (respectively finitely presented) if, for every point of , there exists an open neighborhood of , a natural number (possibly depending on ), and a surjective morphism of sheaves (respectively, in addition a natural number , and an exact sequence .) Paralleling the notion of a coherent module, is called a coherent sheaf if it is of finite type and if, for every open set and every morphism of sheaves (not necessarily surjective), the kernel of is of finite type. is coherent if it is coherent as a module over itself. Like for modules, coherence is in general a strictly stronger condition than finite presentation. The Oka coherence theorem states that the sheaf of holomorphic functions on a complex manifold is coherent. The étalé space of a sheaf In the examples above it was noted that some sheaves occur naturally as sheaves of sections. In fact, all sheaves of sets can be represented as sheaves of sections of a topological space called the étalé space, from the French word étalé , meaning roughly "spread out". If is a sheaf over , then the étalé space (sometimes called the étale space) of is a topological space together with a local homeomorphism such that the sheaf of sections of is . The space is usually very strange, and even if the sheaf arises from a natural topological situation, may not have any clear topological interpretation. For example, if is the sheaf of sections of a continuous function , then if and only if is a local homeomorphism. The étalé space is constructed from the stalks of over . As a set, it is their disjoint union and is the obvious map that takes the value on the stalk of over . The topology of is defined as follows. For each element and each , we get a germ of at , denoted or . These germs determine points of . For any and , the union of these points (for all ) is declared to be open in . Notice that each stalk has the discrete topology as subspace topology. Two morphisms between sheaves determine a continuous map of the corresponding étalé spaces that is compatible with the projection maps (in the sense that every germ is mapped to a germ over the same point). This makes the construction into a functor. The construction above determines an equivalence of categories between the category of sheaves of sets on and the category of étalé spaces over . The construction of an étalé space can also be applied to a presheaf, in which case the sheaf of sections of the étalé space recovers the sheaf associated to the given presheaf. This construction makes all sheaves into representable functors on certain categories of topological spaces. 
As above, let be a sheaf on , let be its étalé space, and let be the natural projection. Consider the overcategory of topological spaces over , that is, the category of topological spaces together with fixed continuous maps to . Every object of this category is a continuous map , and a morphism from to is a continuous map that commutes with the two maps to . There is a functor sending an object to . For example, if is the inclusion of an open subset, then , and for the inclusion of a point , then is the stalk of at . There is a natural isomorphism , which shows that (for the étalé space) represents the functor . is constructed so that the projection map is a covering map. In algebraic geometry, the natural analog of a covering map is called an étale morphism. Despite its similarity to "étalé", the word étale has a different meaning in French. It is possible to turn into a scheme and into a morphism of schemes in such a way that retains the same universal property, but is not in general an étale morphism because it is not quasi-finite. It is, however, formally étale. The definition of sheaves by étalé spaces is older than the definition given earlier in the article. It is still common in some areas of mathematics such as mathematical analysis. Sheaf cohomology In contexts where the open set is fixed, and the sheaf is regarded as a variable, the set is also often denoted . As was noted above, this functor does not preserve epimorphisms. Instead, an epimorphism of sheaves is a map with the following property: for any section there is a covering by open subsets such that the restrictions are in the image of . However, itself need not be in the image of . A concrete example of this phenomenon is the exponential map between the sheaf of holomorphic functions and non-zero holomorphic functions. This map is an epimorphism, which amounts to saying that any non-zero holomorphic function (on some open subset in , say) admits a complex logarithm locally, i.e., after restricting to appropriate open subsets. However, need not have a logarithm globally. Sheaf cohomology captures this phenomenon. More precisely, for an exact sequence of sheaves of abelian groups (i.e., an epimorphism whose kernel is ), there is a long exact sequence. By means of this sequence, the first cohomology group is a measure for the non-surjectivity of the map between sections of and . There are several different ways of constructing sheaf cohomology. Grothendieck introduced them by defining sheaf cohomology as the derived functor of . This method is theoretically satisfactory, but, being based on injective resolutions, it is of little use in concrete computations. Godement resolutions are another general, but practically inaccessible approach. Computing sheaf cohomology Especially in the context of sheaves on manifolds, sheaf cohomology can often be computed using resolutions by soft sheaves, fine sheaves, and flabby sheaves (also known as flasque sheaves from the French flasque meaning flabby). For example, a partition of unity argument shows that the sheaf of smooth functions on a manifold is soft. The higher cohomology groups for vanish for soft sheaves, which gives a way of computing cohomology of other sheaves. For example, the de Rham complex is a resolution of the constant sheaf on any smooth manifold, so the sheaf cohomology of is equal to its de Rham cohomology. A different approach is by Čech cohomology.
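The failure of surjectivity on global sections in the exponential example above can be checked numerically. The following small Python sketch (an illustration added for this discussion, not drawn from the article; the function f(z) = z and the unit-circle contour are arbitrary choices) computes the winding number of a non-vanishing holomorphic function around the puncture of the punctured plane; the nonzero result is precisely the obstruction, detected by the first cohomology group, to choosing a single global logarithm, even though logarithms exist on every simply connected open subset.

import numpy as np

# f(z) = z is a non-vanishing holomorphic function on the punctured plane.
# A global holomorphic logarithm would force (1/2πi) ∮ f'(z)/f(z) dz = 0
# around the puncture; a nonzero value is exactly what the first cohomology detects.
f = lambda z: z
df = lambda z: np.ones_like(z)

theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
z = np.exp(1j * theta)                 # unit circle around the puncture
dz = 1j * z * (theta[1] - theta[0])    # dz ≈ i z dθ for each small step

winding = np.sum(df(z) / f(z) * dz) / (2j * np.pi)
print(round(winding.real))             # 1, so no global logarithm exists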
Čech cohomology was the first cohomology theory developed for sheaves and it is well-suited to concrete calculations, such as computing the coherent sheaf cohomology of complex projective space . It relates sections on open subsets of the space to cohomology classes on the space. In most cases, Čech cohomology computes the same cohomology groups as the derived functor cohomology. However, for some pathological spaces, Čech cohomology will give the correct first cohomology group but incorrect higher cohomology groups. To get around this, Jean-Louis Verdier developed hypercoverings. Hypercoverings not only give the correct higher cohomology groups but also allow the open subsets mentioned above to be replaced by certain morphisms from another space. This flexibility is necessary in some applications, such as the construction of Pierre Deligne's mixed Hodge structures. Many other coherent sheaf cohomology groups are found using an embedding of a space into a space with known cohomology, such as , or some weighted projective space. In this way, the known sheaf cohomology groups on these ambient spaces can be related to the sheaves , giving . For example, the coherent sheaf cohomology of projective plane curves is easily computed this way. One notable theorem of this kind is the Hodge decomposition, found using a spectral sequence associated to sheaf cohomology groups, proved by Deligne. Essentially, the -page with terms the sheaf cohomology of a smooth projective variety , degenerates, meaning . This gives the canonical Hodge structure on the cohomology groups . It was later found that these cohomology groups can be computed explicitly using Griffiths residues. See Jacobian ideal. These kinds of theorems lead to one of the deepest theorems about the cohomology of algebraic varieties, the decomposition theorem, paving the way for mixed Hodge modules. Another clean approach to the computation of some cohomology groups is the Borel–Bott–Weil theorem, which identifies the cohomology groups of some line bundles on flag manifolds with irreducible representations of Lie groups. This theorem can be used, for example, to easily compute the cohomology groups of all line bundles on projective space and Grassmann manifolds. In many cases there is a duality theory for sheaves that generalizes Poincaré duality. See Grothendieck duality and Verdier duality. Derived categories of sheaves The derived category of the category of sheaves of, say, abelian groups on some space X, denoted here as , is the conceptual haven for sheaf cohomology, by virtue of the following relation: The adjunction between , which is the left adjoint of (already on the level of sheaves of abelian groups) gives rise to an adjunction (for ), where is the derived functor. This latter functor encompasses the notion of sheaf cohomology since for . Like , the direct image with compact support can also be derived. By virtue of the following isomorphism parametrizes the cohomology with compact support of the fibers of : This isomorphism is an example of a base change theorem. There is another adjunction Unlike all the functors considered above, the twisted (or exceptional) inverse image functor is in general only defined on the level of derived categories, i.e., the functor is not obtained as the derived functor of some functor between abelian categories.
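The suitability of Čech cohomology for hands-on computation can be seen in a toy case. The sketch below (added here for illustration; the two-arc cover of the circle and the use of rational coefficients are my own choices, not taken from the article) writes down the Čech complex of the constant sheaf for a cover of the circle by two arcs whose intersection has two components, and reads off the Betti numbers with a rank computation.

import numpy as np

# Cover the circle by two arcs U and V; U ∩ V has two connected components.
# Čech complex of the constant sheaf with rational coefficients:
#   C^0 has dimension 2 (one constant on U and one on V)
#   C^1 has dimension 2 (one constant on each component of U ∩ V)
# The differential sends (a, b) to (b - a, b - a).
d0 = np.array([[-1.0, 1.0],
               [-1.0, 1.0]])

rank = np.linalg.matrix_rank(d0)
h0 = 2 - rank        # dimension of ker d0
h1 = 2 - rank        # dimension of coker d0 (no higher terms for a two-set cover)
print(h0, h1)        # 1 1, matching the cohomology of the circle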
If and X is a smooth orientable manifold of dimension n, then . This computation of the twisted inverse image of the constant sheaf, together with the compatibility of the functors with duality (see Verdier duality), can be used to obtain a high-brow explanation of Poincaré duality. In the context of quasi-coherent sheaves on schemes, there is a similar duality known as coherent duality. Perverse sheaves are certain objects in , i.e., complexes of sheaves (but not in general sheaves proper). They are an important tool to study the geometry of singularities. Derived categories of coherent sheaves and the Grothendieck group Another important application of derived categories of sheaves involves the derived category of coherent sheaves on a scheme, denoted . This was used by Grothendieck in his development of intersection theory using derived categories and K-theory: the intersection product of subschemes is represented in K-theory as where are coherent sheaves defined by the -modules given by their structure sheaves. Sites and topoi André Weil's Weil conjectures stated that there was a cohomology theory for algebraic varieties over finite fields that would give an analogue of the Riemann hypothesis. The cohomology of a complex manifold can be defined as the sheaf cohomology of the locally constant sheaf in the Euclidean topology, which suggests defining a Weil cohomology theory in positive characteristic as the sheaf cohomology of a constant sheaf. But the only classical topology on such a variety is the Zariski topology, and the Zariski topology has very few open sets, so few that the cohomology of any Zariski-constant sheaf on an irreducible variety vanishes (except in degree zero). Alexandre Grothendieck solved this problem by introducing Grothendieck topologies, which axiomatize the notion of covering. Grothendieck's insight was that the definition of a sheaf depends only on the open sets of a topological space, not on the individual points. Once he had axiomatized the notion of covering, open sets could be replaced by other objects. A presheaf takes each one of these objects to data, just as before, and a sheaf is a presheaf that satisfies the gluing axiom with respect to our new notion of covering. This allowed Grothendieck to define étale cohomology and ℓ-adic cohomology, which eventually were used to prove the Weil conjectures. A category with a Grothendieck topology is called a site. A category of sheaves on a site is called a topos or a Grothendieck topos. The notion of a topos was later abstracted by William Lawvere and Miles Tierney to define an elementary topos, which has connections to mathematical logic. History The first origins of sheaf theory are hard to pin down – they may be co-extensive with the idea of analytic continuation. It took about 15 years for a recognisable, free-standing theory of sheaves to emerge from the foundational work on cohomology. 1936 Eduard Čech introduces the nerve construction, for associating a simplicial complex to an open covering. 1938 Hassler Whitney gives a 'modern' definition of cohomology, summarizing the work since J. W. Alexander and Kolmogorov first defined cochains. 1943 Norman Steenrod publishes on homology with local coefficients. 1945 Jean Leray publishes work carried out as a prisoner of war, motivated by proving fixed-point theorems for application to PDE theory; it is the start of sheaf theory and spectral sequences. 1947 Henri Cartan reproves the de Rham theorem by sheaf methods, in correspondence with André Weil (see De Rham–Weil theorem).
Leray gives a sheaf definition in his courses via closed sets (the later carapaces). 1948 The Cartan seminar writes up sheaf theory for the first time. 1950 The "second edition" sheaf theory from the Cartan seminar: the sheaf space (espace étalé) definition is used, with stalkwise structure. Supports are introduced, and cohomology with supports. Continuous mappings give rise to spectral sequences. At the same time Kiyoshi Oka introduces an idea (adjacent to that) of a sheaf of ideals, in several complex variables. 1951 The Cartan seminar proves theorems A and B, based on Oka's work. 1953 The finiteness theorem for coherent sheaves in the analytic theory is proved by Cartan and Jean-Pierre Serre, as is Serre duality. 1954 Serre's paper Faisceaux algébriques cohérents (published in 1955) introduces sheaves into algebraic geometry. These ideas are immediately exploited by Friedrich Hirzebruch, who writes a major 1956 book on topological methods. 1955 Alexander Grothendieck in lectures in Kansas defines abelian category and presheaf, and by using injective resolutions allows direct use of sheaf cohomology on all topological spaces, as derived functors. 1956 Oscar Zariski's report Algebraic sheaf theory 1957 Grothendieck's Tohoku paper rewrites homological algebra; he proves Grothendieck duality (i.e., Serre duality for possibly singular algebraic varieties). 1957 onwards: Grothendieck extends sheaf theory in line with the needs of algebraic geometry, introducing: schemes and general sheaves on them, local cohomology, derived categories (with Verdier), and Grothendieck topologies. There emerges also his influential schematic idea of 'six operations' in homological algebra. 1958 Roger Godement's book on sheaf theory is published. At around this time Mikio Sato proposes his hyperfunctions, which will turn out to have sheaf-theoretic nature. At this point sheaves had become a mainstream part of mathematics, with use by no means restricted to algebraic topology. It was later discovered that the logic in categories of sheaves is intuitionistic logic (this observation is now often referred to as Kripke–Joyal semantics, but probably should be attributed to a number of authors). See also Coherent sheaf Gerbe Stack (mathematics) Sheaf of spectra Perverse sheaf Presheaf of spaces Constructible sheaf De Rham's theorem Notes References (oriented towards conventional topological applications) (updated edition of a classic using enough sheaf theory to show its power) (advanced techniques such as the derived category and vanishing cycles on the most reasonable spaces) (category theory and toposes emphasised) (concise lecture notes) (pedagogic treatment) (introductory book with open access) Topological methods of algebraic geometry Algebraic topology
Sheaf (mathematics)
[ "Mathematics" ]
8,992
[ "Mathematical structures", "Algebraic topology", "Fields of abstract algebra", "Topology", "Category theory", "Sheaf theory" ]
245,552
https://en.wikipedia.org/wiki/Gaussian%20function
In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form and with parametric extension for arbitrary real constants , and non-zero . It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter is the height of the curve's peak, is the position of the center of the peak, and (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell". Gaussian functions are often used to represent the probability density function of a normally distributed random variable with expected value and variance . In this case, the Gaussian is of the form Gaussian functions are widely used in statistics to describe the normal distributions, in signal processing to define Gaussian filters, in image processing where two-dimensional Gaussians are used for Gaussian blurs, and in mathematics to solve heat equations and diffusion equations and to define the Weierstrass transform. They are also abundantly used in quantum chemistry to form basis sets. Properties Gaussian functions arise by composing the exponential function with a concave quadratic function:where (Note: in , not to be confused with ) The Gaussian functions are thus those functions whose logarithm is a concave quadratic function. The parameter is related to the full width at half maximum (FWHM) of the peak according to The function may then be expressed in terms of the FWHM, represented by : Alternatively, the parameter can be interpreted by saying that the two inflection points of the function occur at . The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is Gaussian functions are analytic, and their limit as is 0 (for the above case of ). Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function: Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using the Gaussian integral and one obtains This integral is 1 if and only if (the normalizing constant), and in this case the Gaussian is the probability density function of a normally distributed random variable with expected value and variance : These Gaussians are plotted in the accompanying figure. Gaussian functions centered at zero minimize the Fourier uncertainty principle. The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: . The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF. Taking the Fourier transform (unitary, angular-frequency convention) of a Gaussian function with parameters , and yields another Gaussian function, with parameters , and . So in particular the Gaussian functions with and are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1). A physical realization is that of the diffraction pattern: for example, a photographic slide whose transmittance has a Gaussian variation is also a Gaussian function. 
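The relations stated above between the width parameter, the FWHM, and the normalizing constant are easy to verify numerically. The following Python sketch (added here as an illustration; the particular height, center, and width values are arbitrary) checks that the full width at half maximum equals 2√(2 ln 2) times the width parameter and that the integral over the whole real line equals the height times the width times √(2π), hence 1 for the normalized probability-density form.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, b, c = 2.0, 1.0, 1.5                          # height, center, width (example values)
gauss = lambda x: a * np.exp(-(x - b) ** 2 / (2 * c ** 2))

# Full width at half maximum: find where the curve falls to a/2 on the right flank.
right = brentq(lambda x: gauss(x) - a / 2, b, b + 10 * c)
print(np.isclose(2 * (right - b), 2 * np.sqrt(2 * np.log(2)) * c))   # True

# Whole-line integral equals a*c*sqrt(2*pi); it is 1 when a = 1/(c*sqrt(2*pi)).
area, _ = quad(gauss, -np.inf, np.inf)
print(np.isclose(area, a * c * np.sqrt(2 * np.pi)))                   # True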
The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following interesting identity from the Poisson summation formula: Integral of a Gaussian function The integral of an arbitrary Gaussian function is An alternative form is where f must be strictly positive for the integral to converge. Relation to standard Gaussian integral The integral for some real constants a, b and c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral. Next, the variable of integration is changed from x to : and then to : Then, using the Gaussian integral identity we have Two-dimensional Gaussian function Base form: In two dimensions, the power to which e is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the level sets of the Gaussian will always be ellipses. A particular example of a two-dimensional Gaussian function is Here the coefficient A is the amplitude, x0, y0 is the center, and σx, σy are the x and y spreads of the blob. The figure on the right was created using A = 1, x0 = 0, y0 = 0, σx = σy = 1. The volume under the Gaussian function is given by In general, a two-dimensional elliptical Gaussian function is expressed as where the matrix is positive-definite. Using this formulation, the figure on the right can be created using , , , . Meaning of parameters for the general equation For the general form of the equation the coefficient A is the height of the peak and is the center of the blob. If we setthen we rotate the blob by a positive, counter-clockwise angle (for negative, clockwise rotation, invert the signs in the b coefficient). To get back the coefficients , and from , and use Example rotations of Gaussian blobs can be seen in the following examples: Using the following Octave code, one can easily see the effect of changing the parameters: A = 1; x0 = 0; y0 = 0; sigma_X = 1; sigma_Y = 2; [X, Y] = meshgrid(-5:.1:5, -5:.1:5); for theta = 0:pi/100:pi a = cos(theta)^2 / (2 * sigma_X^2) + sin(theta)^2 / (2 * sigma_Y^2); b = sin(2 * theta) / (4 * sigma_X^2) - sin(2 * theta) / (4 * sigma_Y^2); c = sin(theta)^2 / (2 * sigma_X^2) + cos(theta)^2 / (2 * sigma_Y^2); Z = A * exp(-(a * (X - x0).^2 + 2 * b * (X - x0) .* (Y - y0) + c * (Y - y0).^2)); surf(X, Y, Z); shading interp; view(-36, 36) waitforbuttonpress end Such functions are often used in image processing and in computational models of visual system function—see the articles on scale space and affine shape adaptation. Also see multivariate normal distribution. Higher-order Gaussian or super-Gaussian function A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off can be taken by raising the content of the exponent to a power : This function is known as a super-Gaussian function and is often used for Gaussian beam formulation. This function may also be expressed in terms of the full width at half maximum (FWHM), represented by : In a two-dimensional formulation, a Gaussian function along and can be combined with potentially different and to form a rectangular Gaussian distribution: or an elliptical Gaussian distribution: Multi-dimensional Gaussian function In an -dimensional space a Gaussian function can be defined as where is a column of coordinates, is a positive-definite matrix, and denotes matrix transposition. 
The integral of this Gaussian function over the whole -dimensional space is given as It can be easily calculated by diagonalizing the matrix and changing the integration variables to the eigenvectors of . More generally a shifted Gaussian function is defined as where is the shift vector and the matrix can be assumed to be symmetric, , and positive-definite. The following integrals with this function can be calculated with the same technique: where Estimation of parameters A number of fields such as stellar photometry, Gaussian beam characterization, and emission/absorption line spectroscopy work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a, b, c) and five for a 2D Gaussian function . The most common method for estimating the Gaussian parameters is to take the logarithm of the data and fit a parabola to the resulting data set. While this provides a simple curve fitting procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem through weighted least squares estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an iteratively reweighted least squares procedure, in which the weights are updated at each iteration. It is also possible to perform non-linear regression directly on the data, without involving the logarithmic data transformation; for more options, see probability distribution fitting. Parameter precision Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know how precise those estimates are. Any least squares estimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also use Cramér–Rao bound theory to obtain an analytical expression for the lower bound on the parameter variances, given certain assumptions about the data. The noise in the measured profile is either i.i.d. Gaussian, or the noise is Poisson-distributed. The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform. The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region. The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM). When these assumptions are satisfied, the following covariance matrix K applies for the 1D profile parameters , , and under i.i.d. Gaussian noise and under Poisson noise: where is the width of the pixels used to sample the function, is the quantum efficiency of the detector, and indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case, and in the Poisson noise case, For the 2D profile parameters giving the amplitude , position , and width of the profile, the following covariance matrices apply: where the individual parameter variances are given by the diagonal elements of the covariance matrix. 
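The log-parabola estimation method described above takes only a few lines. The sketch below (an added illustration; the true parameters, noise level, and sample grid are invented) fits a parabola to the logarithm of noisy samples with an ordinary unweighted least-squares fit and recovers the height, position, and width; because the fit is unweighted, it carries the small-value bias discussed in the text, which weighted or iteratively reweighted variants are meant to reduce.

import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, c_true = 5.0, 2.0, 0.7

x = np.linspace(0.0, 4.0, 41)
y = a_true * np.exp(-(x - b_true) ** 2 / (2 * c_true ** 2))
y *= np.exp(rng.normal(scale=0.01, size=x.size))     # small multiplicative noise

# ln y is a quadratic in x: fit it, then invert the coefficient relations.
p2, p1, p0 = np.polyfit(x, np.log(y), 2)
c_hat = np.sqrt(-1.0 / (2.0 * p2))
b_hat = -p1 / (2.0 * p2)
a_hat = np.exp(p0 - p2 * b_hat ** 2)
print(a_hat, b_hat, c_hat)        # close to 5.0, 2.0, 0.7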
Discrete Gaussian One may ask for a discrete analog to the Gaussian; this is necessary in discrete applications, particularly digital signal processing. A simple answer is to sample the continuous Gaussian, yielding the sampled Gaussian kernel. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the article scale space implementation. An alternative approach is to use the discrete Gaussian kernel: where denotes the modified Bessel functions of integer order. This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation. Applications Gaussian functions appear in many contexts in the natural sciences, the social sciences, mathematics, and engineering. Some examples include: In statistics and probability theory, Gaussian functions appear as the density function of the normal distribution, which is a limiting probability distribution of complicated sums, according to the central limit theorem. Gaussian functions are the Green's function for the (homogeneous and isotropic) diffusion equation (and to the heat equation, which is the same thing), a partial differential equation that describes the time evolution of a mass-density under diffusion. Specifically, if the mass-density at time t=0 is given by a Dirac delta, which essentially means that the mass is initially concentrated in a single point, then the mass-distribution at time t will be given by a Gaussian function, with the parameter a being linearly related to 1/ and c being linearly related to ; this time-varying Gaussian is described by the heat kernel. More generally, if the initial mass-density is φ(x), then the mass-density at later times is obtained by taking the convolution of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a Weierstrass transform. A Gaussian function is the wave function of the ground state of the quantum harmonic oscillator. The molecular orbitals used in computational chemistry can be linear combinations of Gaussian functions called Gaussian orbitals (see also basis set (chemistry)). Mathematically, the derivatives of the Gaussian function can be represented using Hermite functions. For unit variance, the n-th derivative of the Gaussian is the Gaussian function itself multiplied by the n-th Hermite polynomial, up to scale. Consequently, Gaussian functions are also associated with the vacuum state in quantum field theory. Gaussian beams are used in optical systems, microwave systems and lasers. In scale space representation, Gaussian functions are used as smoothing kernels for generating multi-scale representations in computer vision and image processing. Specifically, derivatives of Gaussians (Hermite functions) are used as a basis for defining a large number of types of visual operations. Gaussian functions are used to define some types of artificial neural networks. In fluorescence microscopy a 2D Gaussian function is used to approximate the Airy disk, describing the intensity distribution produced by a point source. In signal processing they serve to define Gaussian filters, such as in image processing where 2D Gaussians are used for Gaussian blurs. 
In digital signal processing, one uses a discrete Gaussian kernel, which may be approximated by the Binomial coefficient or sampling a Gaussian. In geostatistics they have been used for understanding the variability between the patterns of a complex training image. They are used with kernel methods to cluster the patterns in the feature space. See also Bell-shaped function Cauchy distribution Normal distribution Radial basis function kernel References External links Mathworld, includes a proof for the relations between c and FWHM Haskell, Erlang and Perl implementation of Gaussian distribution Bensimhoun Michael, N-Dimensional Cumulative Function, And Other Useful Facts About Gaussians and Normal Densities (2009) Code for fitting Gaussians in ImageJ and Fiji. Exponentials Articles containing proofs Articles with example MATLAB/Octave code
Gaussian function
[ "Mathematics" ]
3,185
[ "E (mathematical constant)", "Articles containing proofs", "Exponentials" ]
245,560
https://en.wikipedia.org/wiki/Trigonometric%20integral
In mathematics, trigonometric integrals are a family of nonelementary integrals involving trigonometric functions. Sine integral The different sine integral definitions are Note that the integrand is the sinc function, and also the zeroth spherical Bessel function. Since is an even entire function (holomorphic over the entire complex plane), is entire, odd, and the integral in its definition can be taken along any path connecting the endpoints. By definition, is the antiderivative of whose value is zero at , and is the antiderivative whose value is zero at . Their difference is given by the Dirichlet integral, In signal processing, the oscillations of the sine integral cause overshoot and ringing artifacts when using the sinc filter, and frequency domain ringing if using a truncated sinc filter as a low-pass filter. Related is the Gibbs phenomenon: If the sine integral is considered as the convolution of the sinc function with the Heaviside step function, this corresponds to truncating the Fourier series, which is the cause of the Gibbs phenomenon. Cosine integral The different cosine integral definitions are is an even, entire function. For that reason, some texts define as the primary function, and derive in terms of for where is the Euler–Mascheroni constant. Some texts use instead of . The restriction on is to avoid a discontinuity (shown as the orange vs blue area on the left half of the plot above) that arises because of a branch cut in the standard logarithm function (). is the antiderivative of (which vanishes as ). The two definitions are related by Hyperbolic sine integral The hyperbolic sine integral is defined as It is related to the ordinary sine integral by Hyperbolic cosine integral The hyperbolic cosine integral is where is the Euler–Mascheroni constant. It has the series expansion Auxiliary functions Trigonometric integrals can be understood in terms of the so-called "auxiliary functions" Using these functions, the trigonometric integrals may be re-expressed as (cf. Abramowitz & Stegun, p. 232) Nielsen's spiral The spiral formed by parametric plot of is known as Nielsen's spiral. The spiral is closely related to the Fresnel integrals and the Euler spiral. Nielsen's spiral has applications in vision processing, road and track construction and other areas. Expansion Various expansions can be used for evaluation of trigonometric integrals, depending on the range of the argument. Asymptotic series (for large argument) These series are asymptotic and divergent, although can be used for estimates and even precise evaluation at . Convergent series These series are convergent at any complex , although for , the series will converge slowly initially, requiring many terms for high precision. Derivation of series expansion From the Maclaurin series expansion of sine: Relation with the exponential integral of imaginary argument The function is called the exponential integral. It is closely related to and , As each respective function is analytic except for the cut at negative values of the argument, the area of validity of the relation should be extended to (Outside this range, additional terms which are integer factors of appear in the expression.) Cases of imaginary argument of the generalized integro-exponential function are which is the real part of Similarly Efficient evaluation Padé approximants of the convergent Taylor series provide an efficient way to evaluate the functions for small arguments. The following formulae, given by Rowe et al. 
(2015), are accurate to better than for , The integrals may be evaluated indirectly via auxiliary functions and , which are defined by For the Padé rational functions given below approximate and with error less than 10−16: See also Logarithmic integral Tanc function Tanhc function Sinhc function Coshc function References Further reading External links http://mathworld.wolfram.com/SineIntegral.html Trigonometry Special functions Special hypergeometric functions Integrals
Trigonometric integral
[ "Mathematics" ]
845
[ "Special functions", "Combinatorics" ]
245,926
https://en.wikipedia.org/wiki/Self-driving%20car
A self-driving car, also known as an autonomous car (AC), driverless car, robotaxi, robotic car or robo-car, is a car that is capable of operating with reduced or no human input. Self-driving cars are responsible for all driving activities, such as perceiving the environment, monitoring important systems, and controlling the vehicle, which includes navigating from origin to destination. To date, no system has achieved full autonomy (SAE Level 5). In December 2020, Waymo was the first to offer rides in self-driving taxis to the public in limited geographic areas (SAE Level 4), and offers services in Arizona (Phoenix) and California (San Francisco and Los Angeles). In June 2024, after a Waymo self-driving taxi crashed into a utility pole in Phoenix, Arizona, all 672 of its Jaguar I-Pace vehicles were recalled after they were found to be susceptible to crashing into pole-like items, and their software was updated. In July 2021, DeepRoute.ai started offering self-driving taxi rides in Shenzhen, China. Starting in February 2022, Cruise offered self-driving taxi service in San Francisco, but suspended service in 2023. In 2021, Honda was the first manufacturer to sell an SAE Level 3 car, followed by Mercedes-Benz in 2023. History Experiments have been conducted on advanced driver assistance systems (ADAS) since at least the 1920s. The first ADAS feature was cruise control, which was invented in 1948 by Ralph Teetor. Trials began in the 1950s. The first semi-autonomous car was developed in 1977, by Japan's Tsukuba Mechanical Engineering Laboratory. It required specially marked streets that were interpreted by two cameras on the vehicle and an analog computer. The vehicle reached speeds of with the support of an elevated rail. Carnegie Mellon University's Navlab and ALV semi-autonomous projects launched in the 1980s, funded by the United States' Defense Advanced Research Projects Agency (DARPA) starting in 1984 and Mercedes-Benz and Bundeswehr University Munich's EUREKA Prometheus Project in 1987. By 1985, ALV had reached on two-lane roads. Obstacle avoidance came in 1986, and day and night off-road driving by 1987. In 1995 Navlab 5 completed the first autonomous US coast-to-coast journey. Traveling from Pittsburgh, Pennsylvania to San Diego, California, 98.2% of the trip was autonomous. It completed the trip at an average speed of . Until the second DARPA Grand Challenge in 2005, automated vehicle research in the United States was primarily funded by DARPA, the US Army, and the US Navy, yielding incremental advances in speeds, driving competence, controls, and sensor systems. The US allocated US$650 million in 1991 for research on the National Automated Highway System, which demonstrated automated driving, combining highway-embedded automation with vehicle technology, and cooperative networking between the vehicles and highway infrastructure. The programme concluded with a successful demonstration in 1997. Partly funded by the National Automated Highway System and DARPA, Navlab drove across the US in 1995, 98% autonomously. In 2015, Delphi piloted a Delphi technology-based Audi through 15 states, 99% autonomously. In 2015, Nevada, Florida, California, Virginia, Michigan, and Washington DC allowed autonomous car testing on public roads. From 2016 to 2018, the European Commission funded development for connected and automated driving through Coordination Actions CARTRE and SCOUT programs.
The Strategic Transport Research and Innovation Agenda (STRIA) Roadmap for Connected and Automated Transport was published in 2019. In November 2017, Waymo announced testing of autonomous cars without a safety driver. However, an employee was in the car to handle emergencies. In March 2018, Elaine Herzberg became the first reported pedestrian killed by a self-driving car, an Uber test vehicle with a human backup driver; prosecutors did not charge Uber, while the human driver was sentenced to probation. In December 2018, Waymo was the first to commercialize a robotaxi service, in Phoenix, Arizona. In October 2020, Waymo launched a robotaxi service in a (geofenced) part of the area. The cars were monitored in real-time, and remote engineers intervened to handle exceptional conditions. In March 2019, ahead of Roborace, Robocar set the Guinness World Record as the world's fastest autonomous car. Robocar reached 282.42 km/h (175.49 mph). In March 2021, Honda began leasing in Japan a limited edition of 100 Legend Hybrid EX sedans equipped with Level 3 "Traffic Jam Pilot" driving technology, which legally allowed drivers to take their eyes off the road when the car was travelling under . In December 2020, Waymo became the first service provider to offer driverless taxi rides to the general public, in a part of Phoenix, Arizona. Nuro began autonomous commercial delivery operations in California in 2021. DeepRoute.ai launched robotaxi service in Shenzhen in July 2021. In December 2021, Mercedes-Benz received approval for a Level 3 car. In February 2022, Cruise became the second service provider to offer driverless taxi rides to the general public, in San Francisco. In December 2022, several manufacturers scaled back plans for self-driving technology, including Ford and Volkswagen. In 2023, Cruise suspended its robotaxi service. Nuro was approved for Level 4 in Palo Alto in August, 2023. , vehicles operating at Level 3 and above were an insignificant market factor; as of early 2024, Honda leases a Level 3 car in Japan, and Mercedes sells two Level 3 cars in Germany, California and Nevada. Definitions Organizations such as SAE have proposed terminology standards. However, most terms have no standard definition and are employed variously by vendors and others. Proposals to adopt aviation automation terminology for cars have not prevailed. Names such as AutonoDrive, PilotAssist, Full-Self Driving or DrivePilot are used even though the products offer an assortment of features that may not match the names. Despite offering a system it called Full Self-Driving, Tesla stated that its system did not autonomously handle all driving tasks. In the United Kingdom, a fully self-driving car is defined as a car so registered, rather than one that supports a specific feature set. The Association of British Insurers claimed that the usage of the word autonomous in marketing was dangerous because car ads make motorists think "autonomous" and "autopilot" imply that the driver can rely on the car to control itself, even though they do not. Automated driving system SAE identified 6 levels for driving automation from level 0 to level 5. An ADS is an SAE J3016 level 3 or higher system. Advanced driver assistance system An ADAS is a system that automates specific driving features, such as Forward Collision Warning (FCW), Automatic Emergency Braking (AEB), Lane Departure Warning (LDW), Lane Keeping Assistance (LKA) or Blind Spot Warning (BSW). An ADAS requires a human driver to handle tasks that the ADAS does not support. 
Autonomy versus automation Autonomy implies that an automation system is under the control of the vehicle rather than a driver. Automation is function-specific, handling issues such as speed control, but leaves broader decision-making to the driver. Euro NCAP defined autonomous as "the system acts independently of the driver to avoid or mitigate the accident". In Europe, the words automated and autonomous can be used together. For instance, Regulation (EU) 2019/2144 supplied: "automated vehicle" means a vehicle that can move without continuous driver supervision, but that driver intervention is still expected or required in the operational design domains (ODD); "fully automated vehicle" means a vehicle that can move entirely without driver supervision; Cooperative system A remote driver is a driver that operates a vehicle at a distance, using a video and data connection. According to SAE J3016, Operational design domain Vendors have taken a variety of approaches to the self-driving problem. Tesla's approach is to allow their "full self-driving" (FSD) system to be used in all ODDs as a Level 2 (hands/on, eyes/on) ADAS. Waymo picked specific ODDs (city streets in Phoenix and San Francisco) for their Level 5 robotaxi service. Mercedes Benz offers Level 3 service in Las Vegas in highway traffic jams at speeds up to . Mobileye's SuperVision system offers hands-off/eyes-on driving on all road types at speeds up to . GM's hands-free Super Cruise operates on specific roads in specific conditions, stopping or returning control to the driver when ODD changes. In 2024 the company announced plans to expand road coverage from 400,000 miles to 750,000 miles. Ford's BlueCruise hands-off system operates on 130,000 miles of US divided highways. Self-driving The Union of Concerned Scientists defined self-driving as "cars or trucks in which human drivers are never required to take control to safely operate the vehicle. Also known as autonomous or 'driverless' cars, they combine sensors and software to control, navigate, and drive the vehicle." The British Automated and Electric Vehicles Act 2018 law defines a vehicle as "driving itself" if the vehicle is "not being controlled, and does not need to be monitored, by an individual". Another British government definition stated, "Self-driving vehicles are vehicles that can safely and lawfully drive themselves". British definitions In British English, the word automated alone has several meanings, such as in the sentence: "Thatcham also found that the automated lane keeping systems could only meet two out of the twelve principles required to guarantee safety, going on to say they cannot, therefore, be classed as 'automated driving', preferring 'assisted driving'". The first occurrence of the "automated" word refers to an Unece automated system, while the second refers to the British legal definition of an automated vehicle. British law interprets the meaning of "automated vehicle" based on the interpretation section related to a vehicle "driving itself" and an insured vehicle. In November 2023 the British Government introduced the Automated Vehicles Bill. It proposed definitions for related terms: Self-driving: "A vehicle “satisfies the self-driving test” if it is designed or adapted with the intention that a feature of the vehicle will allow it to travel autonomously, and it is capable of doing so, by means of that feature, safely and legally." 
Autonomy: A vehicle travels "autonomously" if it is controlled by the vehicle, and neither the vehicle nor its surroundings are monitored by a person who can intervene. Control: control of vehicle motion. Safe: a vehicle that conforms to an acceptably safe standard. Legal: a vehicle that offers an acceptably low risk of committing a traffic infraction. SAE classification A six-level classification system – ranging from fully manual to fully automated – was published in 2014 by SAE International as J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems; the details are revised occasionally. This classification is based on the role of the driver, rather than the vehicle's capabilities, although these are related. After SAE updated its classification in 2016, (J3016_201609), the National Highway Traffic Safety Administration (NHTSA) adopted the SAE standard. The classification is a topic of debate, with various revisions proposed. Classifications A "driving mode", aka driving scenario, combines an ODD with matched driving requirements (e.g., expressway merging, traffic jam). Cars may switch levels in accord with the driving mode. Above Level 1, level differences are related to how responsibility for safe movement is divided/shared between ADAS and driver rather than specific driving features. SAE Automation Levels have been criticized for their technological focus. It has been argued that the structure of the levels suggests that automation increases linearly and that more automation is better, which may not be the case. SAE Levels also do not account for changes that may be required to infrastructure and road user behavior. Mobileye System Mobileye CEO Amnon Shashua and CTO Shai Shalev-Shwartz proposed an alternative taxonomy for autonomous driving systems, claiming that a more consumer-friendly approach was needed. Its categories reflect the amount of driver engagement that is required. Some vehicle makers have informally adopted some of the terminology involved, while not formally committing to it. Eyes-on/hands-on The first level, hands-on/eyes-on, implies that the driver is fully engaged in operating the vehicle, but is supervised by the system, which intervenes according to the features it supports (e.g., adaptive cruise control, automatic emergency braking). The driver is entirely responsible, with hands on the wheel, and eyes on the road. Eyes-on/hands-off Eyes-on/hands-off allows the driver to let go of the wheel. The system drives, the driver monitors and remains prepared to resume control as needed. Eyes-off/hands-off Eyes-off/hands-off means that the driver can stop monitoring the system, leaving the system in full control. Eyes-off requires that no errors be reproducible (not triggered by exotic transitory conditions) or frequent, that speeds are contextually appropriate (e.g., 80 mph on limited-access roads), and that the system handle typical maneuvers (e.g., getting cut off by another vehicle). The automation level could vary according to the road (e.g., eyes-off on freeways, eyes-on on side streets). No driver The highest level does not require a human driver in the car: monitoring is done either remotely (telepresence) or not at all. Safety A critical requirement for the higher two levels is that the vehicle be able to conduct a Minimum Risk Maneuver and stop safely out of traffic without driver intervention. 
Technology Architecture The perception system processes visual and audio data from outside and inside the car to create a local model of the vehicle, the road, traffic, traffic controls and other observable objects, and their relative motion. The control system then takes actions to move the vehicle, considering the local model, road map, and driving regulations. Several classifications have been proposed to describe ADAS technology. One proposal is to adopt these categories: navigation, path planning, perception, and car control. Navigation Navigation involves the use of maps to define a path between origin and destination. Hybrid navigation is the use of multiple navigation systems. Some systems use basic maps, relying on perception to deal with anomalies. Such a map understands which roads lead to which others, whether a road is a freeway or a highway, whether it is one-way, etc. Other systems require highly detailed maps, including lane maps, obstacles, traffic controls, etc. Perception ACs need to be able to perceive the world around them. Supporting technologies include combinations of cameras, LiDAR, radar, audio, and ultrasound, GPS, and inertial measurement. Deep neural networks are used to analyse inputs from these sensors to detect and identify objects and their trajectories. Some systems use Bayesian simultaneous localization and mapping (SLAM) algorithms. Another technique is detection and tracking of other moving objects (DATMO), used to handle potential obstacles. Other systems use roadside real-time locating system (RTLS) technologies to aid localization. Tesla's "vision only" system uses eight cameras, without LiDAR or radar, to create its bird's-eye view of the environment. Path planning Path planning finds a sequence of segments that a vehicle can use to move from origin to destination. Techniques used for path planning include graph-based search and variational-based optimization techniques. Graph-based techniques can make harder decisions such as how to pass another vehicle/obstacle. Variational-based optimization techniques require more stringent restrictions on the vehicle's path to prevent collisions. The large-scale path of the vehicle can be determined by using a Voronoi diagram, an occupancy grid mapping, or a driving corridor algorithm. The latter allows the vehicle to locate and drive within open space that is bounded by lanes or barriers. Maps Maps are necessary for navigation. Map sophistication varies from simple graphs that show which roads connect to each other, with details such as one-way vs two-way, to those that are highly detailed, with information about lanes, traffic controls, roadworks, and more. Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a system called MapLite, which allows self-driving cars to drive with simple maps. The system combines the GPS position of the vehicle and a "sparse topological map" such as OpenStreetMap (which has only 2D road features) with sensors that observe road conditions. One issue with highly-detailed maps is updating them as the world changes. Vehicles that can operate with less-detailed maps do not require frequent updates or geo-fencing. Sensors Sensors are necessary for the vehicle to properly respond to the driving environment. Sensor types include cameras, LiDAR, ultrasound, and radar. Control systems typically combine data from multiple sensors. Multiple sensors can provide a more complete view of the surroundings and can be used to cross-check each other to correct errors.
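Cross-checking and combining sensors is often formalized as probabilistic fusion, in which independent estimates are weighted inversely to their variances. The following Python sketch is a minimal illustration of that idea under simplifying assumptions (independent, unbiased sensors; the readings, variances, and the camera/radar labels are invented for the example and do not describe any particular vehicle).

import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent, unbiased estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)    # fused estimate and its (smaller) variance

# Hypothetical readings of the distance, in metres, to a lead vehicle.
camera_range, camera_var = 24.8, 4.0     # camera: noisier range estimate
radar_range, radar_var = 25.6, 0.25      # radar: more precise range

estimate, variance = fuse([camera_range, radar_range], [camera_var, radar_var])
print(round(estimate, 2), round(variance, 3))   # 25.55 0.235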
Radar, for example, can image a scene, such as a nighttime snowstorm, that defeats cameras and LiDAR, albeit at reduced precision. After experimenting with radar and ultrasound, Tesla adopted a vision-only approach, asserting that humans drive using only vision, and that cars should be able to do the same, while citing the lower cost of cameras versus other sensor types. By contrast, Waymo makes use of the higher resolution of LiDAR sensors and cites the declining cost of that technology. Drive by wire Drive by wire is the use of electrical or electro-mechanical systems for performing vehicle functions such as steering or speed control that are traditionally achieved by mechanical linkages. Driver monitoring Driver monitoring is used to assess the driver's attention and alertness. Techniques in use include eye monitoring and requiring the driver to maintain torque on the steering wheel. It attempts to understand driver status and identify dangerous driving behaviors. Vehicle communication Vehicles can potentially benefit from communicating with others to share information about traffic and road obstacles, to receive map and software updates, etc. ISO/TC 22 specifies in-vehicle transport information and control systems, while ISO/TC 204 specifies information, communication and control systems in surface transport. International standards have been developed for ADAS functions, connectivity, human interaction, in-vehicle systems, management/engineering, dynamic map and positioning, privacy and security. Rather than communicating among themselves, vehicles can communicate with road-based systems to receive similar information. Software update Software controls the vehicle, and can provide entertainment and other services. Over-the-air updates can deliver bug fixes and additional features over the internet. Software updates are one way to accomplish recalls that in the past required a visit to a service center. In March 2021, the UNECE regulation on software update and software update management systems was published. Safety model A safety model is software that attempts to formalize rules that ensure that ACs operate safely. IEEE is attempting to forge a standard for safety models as "IEEE P2846: A Formal Model for Safety Considerations in Automated Vehicle Decision Making". In 2022, a research group at National Institute of Informatics (NII, Japan) enhanced Mobileye's Responsibility-Sensitive Safety (RSS) model as "Goal-Aware RSS" to enable RSS rules to deal with complex scenarios via program logic. Notification The US has standardized the use of turquoise lights to inform other drivers that a vehicle is driving autonomously. It will be used in the 2026 Mercedes-Benz EQS and S-Class sedans with Drive Pilot, an SAE Level 3 driving system. As of 2023, the turquoise light had not been standardized by the P.R.C. or the UN-ECE. Artificial Intelligence Artificial intelligence (AI) plays a pivotal role in the development and operation of autonomous vehicles (AVs), enabling them to perceive their surroundings, make decisions, and navigate safely without human intervention. AI algorithms empower AVs to interpret sensory data from various onboard sensors, such as cameras, LiDAR, radar, and GPS, to understand their environment and improve their technological ability and overall safety over time.
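Safety models of this kind ultimately reduce to checkable rules about the vehicle's state. The sketch below is a simplified, generic illustration of one such rule, not the actual RSS or IEEE P2846 formulation: the following vehicle must leave a gap at least as large as its own worst-case stopping distance (constant speed during the reaction time, then gentle braking) minus the lead vehicle's best-case stopping distance (hard braking). The reaction time, deceleration bounds, speeds, and gap used here are invented example values.

def safe_gap(v_rear, v_front, reaction=1.0, brake_rear=4.0, brake_front=8.0):
    """Minimum gap (m) so the rear car can always stop behind the front car.
    Speeds are in m/s, decelerations in m/s^2 (all values illustrative)."""
    rear_travel = v_rear * reaction + v_rear ** 2 / (2 * brake_rear)
    front_travel = v_front ** 2 / (2 * brake_front)
    return max(0.0, rear_travel - front_travel)

# Hypothetical check: both vehicles at 25 m/s (90 km/h), current gap 30 m.
needed = safe_gap(25.0, 25.0)
print(round(needed, 1), "m needed;", "safe" if 30.0 >= needed else "too close")
# 64.1 m needed; too close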
In addition to handling day/night driving in good and bad weather on roads of arbitrary quality, ACs must cope with other vehicles, road obstacles, poor/missing traffic controls, flawed maps, and handle endless edge cases, such as following the instructions of a police officer managing traffic at a crash site. Other obstacles include cost, liability, consumer reluctance, ethical dilemmas, security, privacy, and legal/regulatory framework. Further, AVs could automate the work of professional drivers, eliminating many jobs, which could slow acceptance. Concerns Deceptive marketing Tesla calls its Level 2 ADAS "Full Self-Driving (FSD) Beta". US Senators Richard Blumenthal and Edward Markey called on the Federal Trade Commission (FTC) to investigate this marketing in 2021. In December 2021 in Japan, Mercedes-Benz was punished by the Consumer Affairs Agency for misleading product descriptions. Mercedes-Benz was criticized for a misleading US commercial advertising E-Class models. At that time, Mercedes-Benz rejected the claims and stopped its "self-driving car" ad campaign that had been running. In August 2022, the California Department of Motor Vehicles (DMV) accused Tesla of deceptive marketing practices. With the Automated Vehicles Bill (AVB) self-driving car-makers could face prison for misleading adverts in the United-Kingdom. Security In the 2020s, concerns over ACs' vulnerability to cyberattacks and data theft emerged. Espionage In 2018 and 2019 former Apple engineers were charged with stealing information related to Apple's self-driving car project. In 2021 the United States Department of Justice (DOJ) accused Chinese security officials of coordinating a hacking campaign to steal information from government entities, including research related to autonomous vehicles. China has prepared "the Provisions on Management of Automotive Data Security (Trial) to protect its own data". Cellular Vehicle-to-Everything technologies are based on 5G wireless networks. , the US Congress was considering the possibility that imported Chinese AC technology could facilitate espionage. Testing of Chinese automated cars in the US has raised concern over which US data are collected by Chinese vehicles to be stored in Chinese country and concern with any link with the Chinese communist party. Driver communications ACs complicate the need for drivers to communicate with each other, e.g., to decide which car enters an intersection first. In an AC without a driver, traditional means such as hand signals do not work (no driver, no hands). Behavior prediction ACs must be able to predict the behavior of possibly moving vehicles, pedestrians, etc in real time in order to proceed safely. The task becomes more challenging the further into the future the prediction extends, requiring rapid revisions to the estimate to cope with unpredicted behavior. One approach is to wholly recompute the position and trajectory of each object many times per second. Another is to cache the results of an earlier prediction for use in the next one to reduce computational complexity. Handover The ADAS has to be able to safely accept control from and return control to the driver. Trust Consumers will avoid ACs unless they trust them as safe. Robotaxis operating in San Francisco received pushback over perceived safety risks. Automatic elevators were invented in 1900, but did not become common until operator strikes and trust was built with advertising and features such as an emergency stop button. 
However, with repeated use of autonomous driving functions, drivers' behavior and trust in autonomous vehicles gradually improved and both entered a more stable state. At the same time this also improved the performance and reliability of the vehicle in complex conditions, thereby increasing public trust. Economics Autonomous vehicles also present various political and economic implications. The transportation sector holds significant sway in many political and economic landscapes. For instance, many US states generate substantial annual revenue from transportation fees and taxes. The advent of self-driving cars could profoundly affect the economy by potentially altering state tax revenue streams. Furthermore, the transition to autonomous vehicles might disrupt employment patterns and labor markets, particularly in industries heavily reliant on driving professions. Data from the U.S. Bureau of Labor Statistics indicates that in 2019, the sector employed over two million individuals as tractor-trailer truck drivers. Additionally, taxi and delivery drivers represented approximately 370,400 positions, and bus drivers constituted a workforce of over 680,000. Collectively, this amounts to a conceivable displacement of nearly 2.9 million jobs, surpassing the job losses experienced in the 2008 Great Recession. Equity and Inclusion The prominence of certain demographic groups within the tech industry inevitably shapes the trajectory of autonomous vehicle (AV) development, potentially perpetuating existing inequalities. Others argue that the advancement of technology has nothing to do with promoting inequalities among particular groups and reject this presumption. Ethical issues Pedestrian Detection Research from Georgia Tech revealed that autonomous vehicle detection systems were generally five percent less effective at recognizing darker-skinned individuals. This accuracy gap persisted despite adjustments for environmental variables like lighting and visual obstructions. Rationale for liability Standards for liability have yet to be adopted to address crashes and other incidents. Liability could rest with the vehicle occupant, its owner, the vehicle manufacturer, or even the ADAS technology supplier, possibly depending on the circumstances of the crash. Additionally, the infusion of artificial intelligence technology in autonomous vehicles adds layers of complexity to ownership and ethical dynamics. Given that AI systems are inherently self-learning, a question arises of whether accountability should rest with the vehicle owner, the manufacturer, or the AI developer. Trolley problem The trolley problem is a thought experiment in ethics. Adapted for ACs, it supposes that an AC carrying one passenger confronts a pedestrian who steps into its path. The ADAS notionally has to choose between killing the pedestrian or swerving into a wall, killing the passenger. Possible frameworks include deontology (formal rules) and utilitarianism (harm reduction). One public opinion survey reported that harm reduction was preferred, except that passengers wanted the vehicle to prefer them, while pedestrians took the opposite view. Utilitarian regulations were unpopular. Additionally, cultural viewpoints exert substantial influence on shaping responses to these ethical quandaries. Another study found that cultural biases impact preferences in prioritizing the rescue of certain individuals over others in car accident scenarios.
Privacy Some ACs require an internet connection to function, opening the possibility that a hacker might gain access to private information such as destinations, routes, camera recordings, media preferences, and/or behavioral patterns, although this is true of any internet-connected device. Road infrastructure ACs make use of road infrastructure (e.g., traffic signs, turn lanes) and may require modifications to that infrastructure to fully achieve their safety and other goals. In March 2023, the Japanese government unveiled a plan to set up a dedicated highway lane for ACs. In April 2023, JR East announced plans to raise the automation level of its rural Kesennuma Line bus rapid transit (BRT) from the current Level 2 to Level 4 at 60 km/h. Testing Approaches ACs can be tested via digital simulations, in a controlled test environment, and/or on public roads. Road testing typically requires some form of permit or a commitment to adhere to acceptable operating principles. For example, New York requires a test driver to be in the vehicle, prepared to override the ADAS as necessary. 2010s and disengagements In California, self-driving car manufacturers are required to submit annual reports describing how often their vehicles disengaged from autonomous mode. This is one measure of system robustness (ideally, the system should never disengage). In 2017, Waymo reported 63 disengagements over of testing, an average distance of between disengagements, the highest (best) among companies reporting such figures. Waymo also logged more autonomous miles than other companies. Their 2017 rate of 0.18 disengagements per was an improvement over the 0.2 disengagements per in 2016, and 0.8 in 2015. In March 2017, Uber reported an average of per disengagement. In the final three months of 2017, Cruise (owned by GM) averaged per disengagement over . 2020s Disengagement definitions Reporting companies use varying definitions of what qualifies as a disengagement, and such definitions can change over time. Executives of self-driving car companies have criticized disengagement counts as a misleading metric, because they do not account for varying road conditions. Standards In April 2021, WP.29 GRVA proposed a "Test Method for Automated Driving (NATM)". In October 2021, Europe's pilot test, L3Pilot, demonstrated ADAS for cars in Hamburg, Germany, in conjunction with ITS World Congress 2021. SAE Level 3 and 4 functions were tested on ordinary roads. In November 2022, an International Standard, ISO 34502, on a "Scenario based safety evaluation framework" was published. Collision avoidance In April 2022, collision avoidance testing was demonstrated by Nissan. Waymo published a document about collision avoidance testing in December 2022. Simulation and validation In September 2022, Biprogy released the Driving Intelligence Validation Platform (DIVP) as part of the Japanese national project "SIP-adus"; the platform is interoperable with ASAM's Open Simulation Interface (OSI). Toyota In November 2022, Toyota demonstrated one of its GR Yaris test cars, which had been trained using professional rally drivers. Toyota has used its collaboration with Microsoft in the FIA World Rally Championship since the 2017 season. Pedestrian reactions In 2023, David R. Large, a senior research fellow with the Human Factors Research Group at the University of Nottingham, disguised himself as a car seat in a study to test people's reactions to driverless cars.
He said, "We wanted to explore how pedestrians would interact with a driverless car and developed this unique methodology to explore their reactions." The study found that, in the absence of someone in the driving seat, pedestrians trust certain visual prompts more than others when deciding whether to cross the road. Incidents Tesla As of 2023, Tesla's ADAS Autopilot/Full Self Driving (beta) was classified as Level 2 ADAS. On 20 January 2016, the first of five known fatal crashes of a Tesla with Autopilot occurred, in China's Hubei province. Initially, Tesla stated that the vehicle was so badly damaged from the impact that their recorder was not able to determine whether the car had been on Autopilot at the time. However, the car failed to take evasive action. Another fatal Autopilot crash occurred in May in Florida in a Tesla Model S that crashed into a tractor-trailer. In a civil suit between the father of the driver killed and Tesla, Tesla documented that the car had been on Autopilot. According to Tesla, "neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." Tesla claimed that this was Tesla's first known Autopilot death in over with Autopilot engaged. Tesla claimed that on average one fatality occurs every across all vehicle types in the US. However, this number also includes motorcycle/pedestrian fatalities. The ultimate National Transportation Safety Board (NTSB) report concluded Tesla was not at fault; the investigation revealed that for Tesla cars, the crash rate dropped by 40 percent after Autopilot was installed. Google Waymo In June 2015, Google confirmed that 12 vehicles had suffered collisions as of that date. Eight involved rear-end collisions at a stop sign or traffic light, in two of which the vehicle was side-swiped by another driver, one in which another driver rolled a stop sign, and one where a driver was controlling the car manually. In July 2015, three employees suffered minor injuries when their vehicle was rear-ended by a car whose driver failed to brake. This was the first collision that resulted in injuries. According to Google Waymo's accident reports as of early 2016, their test cars had been involved in 14 collisions, of which other drivers were at fault 13 times, although in 2016 the car's software caused a crash. On 14 February 2016 a Google vehicle attempted to avoid sandbags blocking its path. During the maneuver it struck a bus. Google stated, "In this case, we clearly bear some responsibility, because if our car hadn't moved, there wouldn't have been a collision." Google characterized the crash as a misunderstanding and a learning experience. No injuries were reported. Uber's Advanced Technologies Group (ATG) In March 2018, Elaine Herzberg died after she was hit by an AC tested by Uber's Advanced Technologies Group (ATG) in Arizona. A safety driver was in the car. Herzberg was crossing the road about 400 feet from an intersection. Some experts said a human driver could have avoided the crash. Arizona governor Doug Ducey suspended the company's ability to test its ACs citing an "unquestionable failure" of Uber to protect public safety. Uber also stopped testing in California until receiving a new permit in 2020. NTSB's final report determined that the immediate cause of the accident was that safety driver Rafaela Vasquez failed to monitor the road, because she was distracted by her phone, but that Uber's "inadequate safety culture" contributed. 
The report noted that the victim had "a very high level" of methamphetamine in her body. The board called on federal regulators to carry out a review before allowing automated test vehicles to operate on public roads. In September 2020, Vasquez pled guilty to endangerment and was sentenced to three years' probation. NIO Navigate on Pilot On 12 August 2021, a 31-year-old Chinese man was killed after his NIO ES8 collided with a construction vehicle. NIO's self-driving feature was in beta and could not deal with static obstacles. The vehicle's manual clearly stated that the driver must take over near construction sites. Lawyers for the deceased's family questioned NIO's private access to the vehicle, which they argued did not guarantee the integrity of the data. Pony.ai In November 2021, the California Department of Motor Vehicles (DMV) notified Pony.ai that it was suspending its testing permit following a reported collision in Fremont on 28 October. In May 2022, the DMV revoked Pony.ai's permit for failing to monitor the driving records of its safety drivers. Cruise In April 2022, a Cruise test vehicle was reported to have blocked a fire engine responding to an emergency call, sparking questions about its ability to handle unexpected circumstances. Ford In February 2024, a driver using the Ford BlueCruise hands-free driving feature struck and killed the driver of a stationary car with no lights on in the middle lane of a freeway in Texas. In March 2024, a drunk driver who was speeding, holding her cell phone, and using BlueCruise on a Pennsylvania freeway struck and killed two people who had been driving two cars. The first car had become disabled and was on the left shoulder with part of the car in the left driving lane. The second driver had parked his car behind the first car, presumably to help the first driver. The NTSB is investigating both incidents. Total incidents The NHTSA began mandating incident reports from autonomous vehicle companies in June 2021. Some reports cite incidents from as early as August 2019, with current data available through June 17, 2024. A total of 3,979 autonomous vehicle incidents (both ADS and ADAS) were reported during this timeframe. Of those, 2,146 incidents (53.9%) involved Tesla vehicles. Public opinion surveys 2010s In a 2011 online survey of 2,006 US and UK consumers, 49% said they would be comfortable using a "driverless car". A 2012 survey of 17,400 vehicle owners found that 37% initially said they would be interested in purchasing a "fully autonomous car". However, that figure dropped to 20% if told the technology would cost US$3,000 more. In a 2012 survey of about 1,000 German drivers, 22% had a positive attitude, 10% were undecided, 44% were skeptical and 24% were hostile. A 2013 survey of 1,500 consumers across 10 countries found 57% "stated they would be likely to ride in a car controlled entirely by technology that does not require a human driver", with Brazil, India and China the most willing to trust automated technology. In a 2014 US telephone survey, over three-quarters of licensed drivers said they would consider buying a self-driving car, rising to 86% if car insurance were cheaper. 31.7% said they would not continue to drive once an automated car was available. In 2015, a survey of 5,000 people from 109 countries reported that, on average, respondents found manual driving the most enjoyable. 22% did not want to pay more money for autonomy.
Respondents were found to be most concerned about hacking/misuse, and were also concerned about legal issues and safety. Finally, respondents from more developed countries were less comfortable with their vehicle sharing data. The survey reported consumer interest in purchasing an AC, stating that 37% of surveyed current owners were either "definitely" or "probably" interested. In 2016, a survey of 1,603 people in Germany that controlled for age, gender, and education reported that men felt less anxiety and more enthusiasm, whereas women showed the opposite. The difference was pronounced between young men and women and decreased with age. In a 2016 US survey of 1,584 people, "66 percent of respondents said they think autonomous cars are probably smarter than the average human driver". People were worried about safety and hacking risk. Nevertheless, only 13% of the interviewees saw no advantages in this new kind of car. A 2017 survey of 4,135 US adults found that many Americans anticipated significant impacts from various automation technologies, including the widespread adoption of automated vehicles. In 2019, results from two opinion surveys of 54 and 187 US adults respectively were published. The questionnaire was termed the autonomous vehicle acceptance model (AVAM), including additional description to help respondents better understand the implications of various automation levels. Users were less accepting of high autonomy levels and displayed significantly lower intention to use autonomous vehicles. Additionally, partial autonomy (regardless of level) was perceived as requiring uniformly higher driver engagement (usage of hands, feet and eyes) than full autonomy. In the 2020s In 2022, a survey reported that only a quarter (27%) of the world's population would feel safe in self-driving cars. In 2024, a study by Saravanos et al. at New York University reported that 87% of their respondents (from a sample of 358) believed that conditionally automated cars (at Level 3) would be easy to use. Opinion surveys may have little salience given that few respondents had any personal experience with ACs. Regulation The regulation of autonomous cars concerns liability, approvals, and international conventions. In the 2010s, researchers openly worried that delayed regulations could delay deployment. In 2020, a UNECE WP.29 GRVA regulation was issued to address Level 3 automated driving. Commercialization Vehicles operating below Level 5 still offer many advantages. Most commercially available ADAS vehicles are SAE Level 2. A few companies have reached higher levels, but only in restricted (geofenced) locations. Level 2 – Partial Automation SAE Level 2 features are available as part of the ADAS systems in many vehicles. In the US, 50% of new cars provide driver assistance for both steering and speed. Ford started offering the BlueCruise service on certain vehicles in 2022; the system is named ActiveGlide in Lincoln vehicles. The system provided features such as lane centering, street sign recognition, and hands-free highway driving on more than 130,000 miles of divided highways. The 2022 1.2 version added features including hands-free lane changing, in-lane repositioning, and predictive speed assist. In April 2023, BlueCruise was approved in the UK for use on certain motorways, starting with 2023 models of Ford's electric Mustang Mach-E SUV. Tesla's Autopilot and Full Self-Driving (FSD) ADAS suites have been available on all Tesla cars since 2016.
FSD offers highway and street driving (without geofencing), navigation/turn management, steering, dynamic cruise control, collision avoidance, lane-keeping/switching, emergency braking, and obstacle avoidance, but still requires the driver to remain ready to control the vehicle at any moment. Its driver management system combines eye tracking with monitoring pressure on the steering wheel to ensure that drivers are both eyes-on and hands-on. Tesla's FSD rewrite V12 (released in March 2024) uses a single deep learning transformer model for all aspects of perception, monitoring, and control. It relies on its eight cameras for its vision-only perception system, without using LiDAR, radar, or ultrasound. As of April 2024, FSD has been deployed on two million Tesla cars. As of January 2024, Tesla has not initiated requests for Level 3 status for its systems and has not disclosed its reason for not doing so. Development General Motors is developing the "Ultra Cruise" ADAS system, which the company says will be a dramatic improvement over its current "Super Cruise" system. Ultra Cruise will cover "95 percent" of driving scenarios on 2 million miles of roads in the US, according to the company. The system hardware in and around the car includes multiple cameras, short- and long-range radar, and a LiDAR sensor, and will be powered by the Qualcomm Snapdragon Ride Platform. The luxury Cadillac Celestiq electric vehicle will be one of the first vehicles to feature Ultra Cruise. Europe is developing a new "Driver Control Assistance Systems" (DCAS) Level 2 regulation that would no longer limit the use of lane-changing systems to roads with 2 lanes and a physical separation from traffic in the opposite direction. Level 3 – Conditional Automation Two car manufacturers have sold or leased Level 3 cars: Honda in Japan, and Mercedes in Germany, Nevada and California. Mercedes Drive Pilot has been available on the EQS and S-Class sedan in Germany since 2022, and in California and Nevada since 2023. A subscription costs between €5,000 and €7,000 for three years in Germany and $2,500 for one year in the United States. Drive Pilot can only be used when the vehicle is traveling under , there is a vehicle in front, lane markings are readable, it is daytime, the weather is clear, and the road is a freeway mapped by Mercedes down to the centimeter (100,000 miles in California). As of April 2024, one Mercedes vehicle with this capability has been sold in California. Development Honda continued to enhance its Level 3 technology. As of 2023, 80 vehicles with Level 3 support had been sold. Mercedes-Benz received authorization in early 2023 to pilot its Level 3 software in Las Vegas. California also authorized Drive Pilot in 2023. BMW commercialized its AC in 2021. In 2023, BMW stated that its Level 3 technology was nearing release. It would be the second manufacturer to deliver Level 3 technology, but the only one whose Level 3 technology works in the dark. In 2023, in China, IM Motors, Mercedes, and BMW obtained authorization to test vehicles with Level 3 systems on motorways. In September 2021, Stellantis presented its findings from its Level 3 pilot testing on Italian highways. Stellantis's Highway Chauffeur claimed Level 3 capabilities, as tested on the Maserati Ghibli and Fiat 500X prototypes. Polestar, a Volvo Cars brand, announced in January 2022 its plan to offer a Level 3 autonomous driving system in the Polestar 3 SUV, a Volvo XC90 successor, with technologies from Luminar Technologies, Nvidia, and Zenseact.
In January 2022, Bosch and the Volkswagen Group subsidiary CARIAD announced a collaboration on autonomous driving up to Level 3. This joint development targets Level 4 capabilities. Hyundai Motor Company is enhancing the cybersecurity of its connected cars in order to offer a Level 3 self-driving Genesis G90. Korean car makers Kia and Hyundai delayed their Level 3 plans and will not deliver Level 3 vehicles in 2023. Level 4 – High Automation Waymo offers robotaxi services in parts of Arizona (Phoenix) and California (San Francisco and Los Angeles), using fully autonomous vehicles without safety drivers. In April 2023 in Japan, a Level 4 protocol became part of the amended Road Traffic Act. The ZEN drive Pilot Level 4 system, made by AIST, operates there. Development In July 2020, Toyota started public demonstration rides on the TRI-P4, based on the fifth-generation Lexus LS, with Level 4 capability. In August 2021, Toyota operated a potentially Level 4 service using the e-Palette around the Tokyo 2020 Olympic Village. In September 2020, Mercedes-Benz introduced the world's first commercial Level 4 Automated Valet Parking (AVP) system, named Intelligent Park Pilot, for its new S-Class. In November 2022, Germany's Federal Motor Transport Authority (KBA) approved the system for use at Stuttgart Airport. In September 2021, Cruise, General Motors, and Honda started a joint testing program using the Cruise AV. In 2023, the Origin was put on indefinite hold following Cruise's loss of its operating permit. In January 2023, Holon announced an autonomous shuttle during the 2023 Consumer Electronics Show (CES). The company claimed the vehicle is the world's first Level 4 shuttle built to automotive standards. See also Autopilot Driving References Further reading These books are based on presentations and discussions at the Automated Vehicles Symposium organized annually by TRB and AUVSI. Automotive technologies Automotive safety Driving Transport culture
Self-driving car
[ "Physics", "Engineering" ]
9,531
[ "Self-driving cars", "Transport culture", "Physical systems", "Transport", "Automotive engineering" ]
245,982
https://en.wikipedia.org/wiki/Buoyancy
Buoyancy, or upthrust, is a net upward force exerted by a fluid that opposes the weight of a partially or fully immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus, the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid. For this reason, an object whose average density is greater than that of the fluid in which it is submerged tends to sink. If the object is less dense than the liquid, the force can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction. Buoyancy also applies to fluid mixtures, and is the most common driving force of convection currents. In these cases, the mathematical modelling is altered to apply to continua, but the principles remain the same. Examples of buoyancy driven flows include the spontaneous separation of air and water or oil and water. Buoyancy is a function of the force of gravity or other source of acceleration on objects of different densities, and for that reason is considered an apparent force, in the same way that centrifugal force is an apparent force as a function of inertia. Buoyancy can exist without gravity in the presence of an inertial reference frame, but without an apparent "downward" direction of gravity or other source of acceleration, buoyancy does not exist. The center of buoyancy of an object is the center of gravity of the displaced volume of fluid. Archimedes' principle Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces: any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object, with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object. More tersely: buoyant force = weight of displaced fluid. Archimedes' principle does not consider the surface tension (capillarity) acting on the body, but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle that buoyancy = weight of displaced fluid remains valid. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). In simple terms, the principle states that the buoyancy force on an object is equal to the weight of the fluid displaced by the object, or the density of the fluid multiplied by the submerged volume times the gravitational acceleration, g. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it.
Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor. It is generally easier to lift an object up through the water than it is to pull it out of the water. Assuming Archimedes' principle to be reformulated as apparent immersed weight = weight − weight of displaced fluid, then inserted into the quotient of weights, which has been expanded by the mutual volume, this yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes: ρobject/ρfluid = weight/(weight − apparent immersed weight). (This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.) Example: If you drop wood into water, buoyancy will keep it afloat. Example: A helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration (i.e., towards the rear). The balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed "out of the way", and will actually drift in the same direction as the car's acceleration (i.e., forward). If the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve. Forces and equilibrium The equation to calculate the pressure inside a fluid in equilibrium is: f + div σ = 0, where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor: σij = −pδij. Here δij is the Kronecker delta. Using this, the above equation becomes: f = ∇p. Assuming the outer force field is conservative, that is, it can be written as the negative gradient of some scalar valued function: f = −∇Φ. Then: ∇p = −∇Φ, so p + Φ is constant. Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz where g is the gravitational acceleration and ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is p = ρfgz. So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force. The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid: B = ∮ σ dA. The surface integral can be transformed into a volume integral with the help of the Gauss theorem: B = ∫ div σ dV = −∫ f dV = −ρfg V, where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid does not exert force on the part of the body which is outside of it. The magnitude of the buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid.
The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to the gravitational force, that is, of magnitude B = ρfVdispg, where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question. If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to ρfVdispg. Though the above derivation of Archimedes' principle is correct, a recent paper by the Brazilian physicist Fabio M. S. Lima presents a more general approach for the evaluation of the buoyant force exerted by any fluid (even non-homogeneous) on a body with arbitrary shape. Interestingly, this method leads to the prediction that the buoyant force exerted on a rectangular block touching the bottom of a container points downward! Indeed, this downward buoyant force has been confirmed experimentally. The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes' principle is applicable, and is thus the sum of the buoyancy force and the object's weight: Fnet = 0 = mg − ρfVdispg. If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by Archimedes' principle alone; it is necessary to consider the dynamics of the object, including buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes' principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor. In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore mg = ρfVdispg, and therefore m = ρfVdisp, showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location. (Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location, since the density depends on temperature and salinity. For this reason, a ship may display a Plimsoll line.) It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined. If the object would otherwise float, the tension to restrain it fully submerged is: T = ρfVg − mg. When a sinking object settles on the solid floor, it experiences a normal force of: N = mg − ρfVg. Another way to calculate the buoyancy of an object is to find the apparent weight of that particular object in the air (in newtons) and the apparent weight of that object in the water (in newtons).
To find the force of buoyancy acting on the object when in air, using this particular information, this formula applies: Buoyancy force = weight of object in empty space − weight of object immersed in fluid. The final result would be measured in newtons. Air's density is very small compared to that of most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam). Simplified model A simplified explanation for the integration of the pressure over the contact area may be stated as follows: Consider a cube immersed in a fluid with the upper surface horizontal. The sides are identical in area, and have the same depth distribution; therefore, they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side. There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero. The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface. Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface. As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence. This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces. This analogy is valid for variations in the size of the cube. If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution; therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes. An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence. Angled surfaces do not nullify the analogy, as the resultant force can be split into orthogonal components and each dealt with in the same way. Static stability A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement.
For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up. Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral). Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'. The stability of a buoyant object at the surface is more complex, and it may remain stable even if the center of gravity is above the center of buoyancy, provided that when disturbed from the equilibrium position, the center of buoyancy moves further to the same side that the center of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the center of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position. Fluids and objects As a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased. Compressible objects As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands. If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation. Submarines Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion. Balloons The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking. 
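The neutral-buoyancy behavior described for submarines and balloons can be illustrated with a minimal sketch in Python. It applies the relation buoyant force = ρfVdispg from the derivation above; the masses, volume and fluid density below are illustrative values only, not data from the text.

# Minimal sketch of the neutral-buoyancy condition discussed above for a fully
# submerged body (e.g., a submarine). Values are illustrative, not real data.

G = 9.81  # gravitational acceleration, m/s^2

def net_vertical_force(object_mass_kg, displaced_volume_m3, fluid_density_kg_m3):
    """Buoyant force (rho_f * V_disp * g) minus weight (m * g); positive = upward."""
    buoyancy = fluid_density_kg_m3 * displaced_volume_m3 * G
    weight = object_mass_kg * G
    return buoyancy - weight

def tendency(object_mass_kg, displaced_volume_m3, fluid_density_kg_m3, tol=1e-6):
    f = net_vertical_force(object_mass_kg, displaced_volume_m3, fluid_density_kg_m3)
    if abs(f) < tol:
        return "neutral buoyancy: remains at depth"
    return "rises" if f > 0 else "sinks"

# A hypothetical 10 m^3 hull in seawater (about 1025 kg/m^3):
print(tendency(object_mass_kg=10_000, displaced_volume_m3=10.0, fluid_density_kg_m3=1025.0))
# Flooding ballast tanks increases mass while the displaced volume stays constant:
print(tendency(object_mass_kg=10_600, displaced_volume_m3=10.0, fluid_density_kg_m3=1025.0))

Increasing the mass while the displaced volume stays fixed flips the result from rising to sinking, which is the ballast-tank mechanism described above.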
Divers Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies. Density If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise. If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position. An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink. A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water. See also References External links Falling in Water W. H. Besant (1889) Elementary Hydrostatics from Google Books. NASA's definition of buoyancy Fluid mechanics Force
Buoyancy
[ "Physics", "Mathematics", "Engineering" ]
4,097
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Civil engineering", "Wikipedia categories named after physical quantities", "Fluid mechanics", "Matter" ]
246,173
https://en.wikipedia.org/wiki/Phenolphthalein
Phenolphthalein is a chemical compound with the formula C20H14O4 and is often written as "HIn", "HPh", "phph" or simply "Ph" in shorthand notation. Phenolphthalein is often used as an indicator in acid–base titrations. For this application, it turns colorless in acidic solutions and pink in basic solutions. It belongs to the class of dyes known as phthalein dyes. Phenolphthalein is slightly soluble in water and usually is dissolved in alcohols in experiments. It is a weak acid, which can lose H+ ions in solution. The nonionized phenolphthalein molecule is colorless and the doubly deprotonated phenolphthalein ion is fuchsia. Further proton loss at higher pH occurs slowly and leads to a colorless form. The phenolphthalein ion in concentrated sulfuric acid is orange red due to sulfonation. Uses pH indicator Phenolphthalein's common use is as an indicator in acid-base titrations. It also serves as a component of universal indicator, together with methyl red, bromothymol blue, and thymol blue. Phenolphthalein adopts different forms in aqueous solution depending on the pH of the solution. Inconsistency exists in the literature about hydrated forms of the compound and about its color in concentrated sulfuric acid. Wittke reported in 1983 that it exists in protonated form (H3In+) under strongly acidic conditions, providing an orange coloration. However, a later paper suggested that this color is due to sulfonation to phenolsulfonphthalein. The lactone form (H2In) is colorless between strongly acidic and slightly basic conditions. The doubly deprotonated (In2−) phenolate form (the anion form of phenol) gives the familiar pink color. In strongly basic solutions, phenolphthalein is converted to its In(OH)3− form, and its pink color undergoes a rather slow fading reaction and becomes completely colorless when pH is greater than 13. The pKa values of phenolphthalein were found to be 9.05, 9.50 and 12, while those of phenolsulfonphthalein are 1.2 and 7.70. The pKa for the color change is 9.50. Carbonation of concrete Phenolphthalein's pH sensitivity is exploited in other applications: concrete has naturally high pH due to the calcium hydroxide formed when Portland cement reacts with water. As the concrete reacts with carbon dioxide in the atmosphere, pH decreases to 8.5–9. When a 1% phenolphthalein solution is applied to normal concrete, it turns bright pink. However, if it remains colorless, it shows that the concrete has undergone carbonation. In a similar application, some spackling used to repair holes in drywall contains phenolphthalein. When applied, the basic spackling material retains a pink color; when the spackling has cured by reaction with atmospheric carbon dioxide, the pink color fades. Education In a highly basic solution, phenolphthalein's slow change from pink to colorless as it is converted to its Ph(OH)3− form is used in chemistry classes for the study of reaction kinetics. Entertainment Phenolphthalein is used in toys, for example as a component of disappearing inks, or as a disappearing dye on "Hollywood Hair" Barbie hair. In the ink, it is mixed with sodium hydroxide, which reacts with carbon dioxide in the air. This reaction leads to the pH falling below the color change threshold as hydrogen ions are released by the reaction: OH−(aq) + CO2(g) → CO32−(aq) + H+(aq). To develop the hair and "magic" graphical patterns, the ink is sprayed with a solution of hydroxide, which leads to the appearance of the hidden graphics by the same mechanism described above for color change in alkaline solution.
The pattern will eventually disappear again because of the reaction with carbon dioxide. Thymolphthalein is used for the same purpose and in the same way, when a blue color is desired. Detection of blood A reduced form of phenolphthalein, phenolphthalin, which is colorless, is used in a test to identify substances thought to contain blood, commonly known as the Kastle–Meyer test. A dry sample is collected with a swab or filter paper. A few drops of alcohol, then a few drops of phenolphthalin, and finally a few drops of hydrogen peroxide are dripped onto the sample. If the sample contains hemoglobin, it will turn pink immediately upon addition of the peroxide, because of the generation of phenolphthalein. A positive test indicates the sample contains hemoglobin and, therefore, is likely blood. A false positive can result from the presence of substances with catalytic activity similar to hemoglobin. This test is not destructive to the sample; it can be kept and used in further tests. This test has the same reaction with blood from any animal whose blood contains hemoglobin, including almost all vertebrates; further testing would be required to determine whether it originated from a human. Laxative Phenolphthalein has been used for over a century as a laxative, but is now being removed from over-the-counter laxatives over concerns of carcinogenicity. Laxative products formerly containing phenolphthalein have often been reformulated with alternative active ingredients: Feen-a-Mint switched to bisacodyl, and Ex-Lax switched to a senna extract. Thymolphthalein is a related laxative made from thymol. Despite concerns regarding its carcinogenicity based on rodent studies, the use of phenolphthalein as a laxative is unlikely to cause ovarian cancer. Some studies suggest a weak association with colon cancer, while others show none at all. Phenolphthalein is described as a stimulant laxative. In addition, it has been found to inhibit human cellular calcium influx via store-operated calcium entry (SOCE) in vivo. This is effected by its inhibiting thrombin and thapsigargin, two activators of SOCE that increase intracellular free calcium. Phenolphthalein has been added to the European Chemicals Agency's candidate list of substances of very high concern (SVHC). It is on the IARC group 2B list for substances "possibly carcinogenic to humans". The discovery of phenolphthalein's laxative effect was due to an attempt by the Hungarian government to label genuine local white wine with the substance in 1900. Phenolphthalein did not change the taste of the wine and would change color when a base was added, making it a good label in principle. However, it was found that ingestion of the substance led to diarrhea. Max Kiss, a Hungarian-born pharmacist residing in New York, heard about the news and launched Ex-Lax in 1906. Synthesis Phenolphthalein can be synthesized by condensation of phthalic anhydride with two equivalents of phenol under acidic conditions. It was discovered in 1871 by Adolf von Baeyer. See also Bromothymol blue Litmus Methyl orange pH indicator Universal indicator References External links Page on different titration indicators, including phenolphthalein 1871 introductions PH indicators Triarylmethane dyes 4-Hydroxyphenyl compounds IARC Group 2B carcinogens Phthalides Laxatives
Phenolphthalein
[ "Chemistry", "Materials_science" ]
1,644
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
246,267
https://en.wikipedia.org/wiki/Magnesium%20sulfate
Magnesium sulfate or magnesium sulphate is a chemical compound, a salt with the formula MgSO4, consisting of magnesium cations Mg2+ (20.19% by mass) and sulfate anions SO42−. It is a white crystalline solid, soluble in water but not in ethanol. Magnesium sulfate is usually encountered in the form of a hydrate MgSO4·nH2O, for various values of n between 1 and 11. The most common is the heptahydrate MgSO4·7H2O, known as Epsom salt, which is a household chemical with many traditional uses, including bath salts. The main use of magnesium sulfate is in agriculture, to correct soils deficient in magnesium (an essential plant nutrient because of the role of magnesium in chlorophyll and photosynthesis). The monohydrate is favored for this use; by the mid-1970s, its production was 2.3 million tons per year. The anhydrous form and several hydrates occur in nature as minerals, and the salt is a significant component of the water from some springs. Hydrates Magnesium sulfate can crystallize as several hydrates, including: Anhydrous, MgSO4; unstable in nature, hydrates to form epsomite. Monohydrate, MgSO4·H2O; kieserite, monoclinic. Monohydrate, MgSO4·H2O; triclinic. Dihydrate, MgSO4·2H2O; orthorhombic. Trihydrate, MgSO4·3H2O. Tetrahydrate, MgSO4·4H2O; starkeyite, monoclinic. Pentahydrate, MgSO4·5H2O; pentahydrite, triclinic. Hexahydrate, MgSO4·6H2O; hexahydrite, monoclinic. Heptahydrate, MgSO4·7H2O ("Epsom salt"); epsomite, orthorhombic. Enneahydrate, MgSO4·9H2O, monoclinic. Decahydrate, MgSO4·10H2O. Undecahydrate, MgSO4·11H2O; meridianiite, triclinic. As of 2017, the existence of the decahydrate apparently has not been confirmed. All the hydrates lose water upon heating. Above 320 °C, only the anhydrous form is stable. It decomposes without melting at 1124 °C into magnesium oxide (MgO) and sulfur trioxide (SO3). Heptahydrate The heptahydrate takes its common name "Epsom salt" from a bitter saline spring in Epsom in Surrey, England, where the salt was produced from the springs that arise where the porous chalk of the North Downs meets the impervious London clay. The heptahydrate readily loses one equivalent of water to form the hexahydrate. It is a natural source of both magnesium and sulphur. Epsom salts are commonly used in bath salts, exfoliants, muscle relaxers and pain relievers. However, these are different from the Epsom salts that are used for gardening, as they contain aromas and perfumes not suitable for plants. Monohydrate Magnesium sulfate monohydrate, or kieserite, can be prepared by heating the heptahydrate to 120 °C. Further heating to 250 °C gives anhydrous magnesium sulfate. Kieserite exhibits monoclinic symmetry at pressures lower than 2.7 GPa, after which it transforms to a phase of triclinic symmetry. Undecahydrate The undecahydrate MgSO4·11H2O, meridianiite, is stable at atmospheric pressure only below 2 °C. Above that temperature, it liquefies into a mix of solid heptahydrate and a saturated solution. It has a eutectic point with water at −3.9 °C and 17.3% (mass) of MgSO4. Large crystals can be obtained from solutions of the proper concentration kept at 0 °C for a few days. At pressures of about 0.9 GPa and at 240 K, meridianiite decomposes into a mixture of ice VI and the enneahydrate MgSO4·9H2O. Enneahydrate The enneahydrate MgSO4·9H2O was identified and characterized only recently, even though it seems easy to produce (by cooling a solution of MgSO4 and sodium sulfate in suitable proportions). The structure is monoclinic, with unit-cell parameters at 250 K: a = 0.675 nm, b = 1.195 nm, c = 1.465 nm, β = 95.1°, V = 1.177 nm3 with Z = 4. The most probable space group is P21/c. Magnesium selenate also forms an enneahydrate, MgSeO4·9H2O, but with a different crystal structure.
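The quoted magnesium mass fraction (20.19%) and the water content of the hydrates follow from simple molar-mass arithmetic, as in the minimal Python sketch below; the atomic masses are standard rounded values.

# Minimal sketch verifying the magnesium mass fraction quoted above (20.19%)
# and computing the water content of the heptahydrate (Epsom salt).
# Atomic masses are standard rounded values in g/mol.

ATOMIC_MASS = {"Mg": 24.305, "S": 32.06, "O": 15.999, "H": 1.008}

def molar_mass(counts):
    """Molar mass of a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

mgso4 = molar_mass({"Mg": 1, "S": 1, "O": 4})
water = molar_mass({"H": 2, "O": 1})
heptahydrate = mgso4 + 7 * water  # MgSO4·7H2O

print(f"Mg fraction in anhydrous MgSO4: {ATOMIC_MASS['Mg'] / mgso4:.2%}")      # ~20.2%
print(f"Water fraction in the heptahydrate: {7 * water / heptahydrate:.2%}")   # ~51%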
Natural occurrence As Mg2+ and SO42− ions are respectively the second most abundant cation and anion present in seawater after Na+ and Cl−, magnesium sulfates are common minerals in geological environments. Their occurrence is mostly connected with supergene processes. Some of them are also important constituents of evaporitic potassium-magnesium (K-Mg) salt deposits. Bright spots observed by the Dawn spacecraft in Occator Crater on the dwarf planet Ceres are most consistent with reflected light from magnesium sulfate hexahydrate. Almost all known mineralogical forms of MgSO4 are hydrates. Epsomite is the natural analogue of "Epsom salt". Meridianiite, MgSO4·11H2O, has been observed on the surface of frozen lakes and is thought to also occur on Mars. Hexahydrite is the next lower hydrate. The three next lower hydrates – pentahydrite, starkeyite, and especially sanderite – are rare. Kieserite is a monohydrate and is common among evaporitic deposits. Anhydrous magnesium sulfate was reported from some burning coal dumps. Preparation Magnesium sulfate is usually obtained directly from dry lake beds and other natural sources. It can also be prepared by reacting magnesite (magnesium carbonate, MgCO3) or magnesia (magnesium oxide, MgO) with sulfuric acid (H2SO4): MgCO3 + H2SO4 → MgSO4 + H2O + CO2, and MgO + H2SO4 → MgSO4 + H2O. Another possible method is to treat seawater or magnesium-containing industrial wastes so as to precipitate magnesium hydroxide and react the precipitate with sulfuric acid. Also, magnesium sulfate heptahydrate (epsomite, MgSO4·7H2O) is manufactured by dissolution of magnesium sulfate monohydrate (kieserite, MgSO4·H2O) in water and subsequent crystallization of the heptahydrate. Physical properties Magnesium sulfate relaxation is the primary mechanism that causes the absorption of sound in seawater at frequencies above 10 kHz (acoustic energy is converted to thermal energy). Lower frequencies are less absorbed by the salt, so that low frequency sound travels farther in the ocean. Boric acid and magnesium carbonate also contribute to absorption. Uses Medical Magnesium sulfate is used both externally (as Epsom salt) and internally. The main external use is the formulation as bath salts, especially for foot baths to soothe sore feet. Such baths have been claimed to also soothe and hasten recovery from muscle pain, soreness, or injury. Potential health effects of magnesium sulfate are reflected in medical studies on the impact of magnesium on resistant depression and as an analgesic for migraine and chronic pain. Magnesium sulfate has been studied in the treatment of asthma, preeclampsia and eclampsia. Magnesium sulfate is usually the main component of the concentrated salt solution used in isolation tanks to increase its specific gravity to approximately 1.25–1.26. This high density allows an individual to float effortlessly on the surface of water in the closed tank, eliminating stimulation of as many of the external senses as possible. In the UK, a medication containing magnesium sulfate and phenol, called "drawing paste", is useful for small boils or localized infections and removing splinters. Internally, magnesium sulfate may be administered by oral, respiratory, or intravenous routes. Internal uses include replacement therapy for magnesium deficiency, treatment of acute and severe arrhythmias, use as a bronchodilator in the treatment of asthma, prevention of eclampsia and cerebral palsy, use as a tocolytic agent, and use as an anticonvulsant. The effectiveness and safety of magnesium sulfate for treating acute bronchiolitis in children under the age of 2 years is not well understood. It also may be used as a laxative.
Agriculture In agriculture, magnesium sulfate is used to increase the magnesium or sulfur content in soil. It is most commonly applied to potted plants, or to magnesium-hungry crops such as potatoes, tomatoes, carrots, peppers, lemons, and roses. The advantage of magnesium sulfate over other magnesium soil amendments (such as dolomitic lime) is its high solubility, which also allows the option of foliar feeding. Solutions of magnesium sulfate are also nearly pH neutral, compared with the slightly alkaline salts of magnesium found in limestone; therefore, the use of magnesium sulfate as a magnesium source for soil does not significantly change the soil pH. Contrary to the popular belief that magnesium sulfate can control pests and slugs, help seed germination, produce more flowers, improve nutrient uptake, and be environmentally friendly, it does none of these things except correct magnesium deficiency in soils. Magnesium sulfate can even pollute water if used in excessive amounts. Magnesium sulfate was historically used as a treatment for lead poisoning prior to the development of chelation therapy, as it was hoped that any lead ingested would be precipitated out by the magnesium sulfate and subsequently purged from the digestive system. This application saw particularly widespread use among veterinarians during the early-to-mid 20th century; Epsom salt was already available on many farms for agricultural use, and it was often prescribed in the treatment of farm animals that had inadvertently ingested lead. Food preparation Magnesium sulfate is used as a brewing salt in making beer, a coagulant for making tofu, a salt substitute, and a food additive to add taste to bottled water. Chemistry Anhydrous magnesium sulfate is commonly used as a desiccant in organic synthesis owing to its affinity for water and compatibility with most organic compounds. During work-up, an organic phase is treated with anhydrous magnesium sulfate. The hydrated solid is then removed by filtration, decantation, or distillation (if the boiling point is low enough). Other inorganic sulfate salts such as sodium sulfate and calcium sulfate may be used in the same way. Construction Magnesium sulfate is used to prepare specific cements by the reaction between magnesium oxide and magnesium sulfate solution; these cements have good binding ability and greater resistance than Portland cement. This cement is mainly utilized in the production of lightweight insulation panels, although its poor water resistance limits its usage. Magnesium (or sodium) sulfate is also used for testing aggregates for soundness in accordance with the ASTM C88 standard, when there are no service records of the material exposed to actual weathering conditions. The test is accomplished by repeated immersion in saturated solutions followed by oven drying to dehydrate the salt precipitated in permeable pore spaces. The internal expansive force, derived from the rehydration of the salt upon re-immersion, simulates the expansion of water on freezing. Magnesium sulfate is also used to test the resistance of concrete to external sulfate attack (ESA). Aquaria Magnesium sulfate heptahydrate is also used to maintain the magnesium concentration in marine aquaria that contain large amounts of stony corals, as it is slowly depleted in their calcification process.
In a magnesium-deficient marine aquarium, calcium and alkalinity concentrations are very difficult to control because not enough magnesium is present to stabilize these ions in the saltwater and prevent their spontaneous precipitation into calcium carbonate. Double salts Double salts containing magnesium sulfate exist. There are several known as sodium magnesium sulfates and potassium magnesium sulfates. A mixed copper-magnesium sulfate heptahydrate was found to occur in mine tailings and was given the mineral name alpersite. See also Calcium sulfate Magnesium chloride References External links International Chemical Safety Cards—Magnesium Sulfate Epsom Salt in Gardening Desiccants Laxatives Magnesium compounds Sulfates
Magnesium sulfate
[ "Physics", "Chemistry" ]
2,446
[ "Sulfates", "Salts", "Desiccants", "Materials", "Matter" ]
10,405,712
https://en.wikipedia.org/wiki/QuickLOAD
QuickLOAD is an internal ballistics predictor computer program for firearms. For computations, apart from other parameters, the cartridge, the projectile (bullet), the gun barrel length, the cartridge overall length, and the propellant type and quantity must be entered for calculating an estimated maximum chamber gas piezo pressure, muzzle velocity, muzzle pressure and other relevant data. QuickLOAD database QuickLOAD has a default database of predefined bullets, cartridges and propellants. The database of the more recent versions of QuickLOAD also includes dimensional technical drawings of the predefined cartridges and, for most cartridges, photographic images. Data can later be imported or entered by the user to expand the program's database. The default database contains more than 2,500 projectiles, over 1,200 cartridges, over 225 powders and dimensional drawings and photos of many cartridges. The default database, however, contains some errors, so measuring the sizes, weights and case capacities of components intended for use and, if appropriate, correcting the provided default data is wise to avoid surprises and to make the predictions more accurate. Some default data is incomplete, since it was not released by the manufacturer or when components that are neither officially registered with nor sanctioned by C.I.P. (Commission Internationale Permanente Pour L'Epreuve Des Armes A Feu Portative) or its American equivalent, SAAMI (Sporting Arms and Ammunition Manufacturers' Institute), come into play. Such wildcat cartridges have no official dimensions nor other performance related specifications. Cartridge case volume establishment Besides the standard entered information, the actual internal volume or cartridge case capacity of the used cases is an important parameter for QuickLOAD to obtain usable predictions. The internal case volume has to be established by weighing empty once-fired cartridge cases from a production lot, then filling the cases with fresh or distilled water up to the point of overflowing and weighing the water-filled cases. The added weight of the water is then used to establish the liquid volume and hence the case capacity. This liquid volume measurement method can be practically employed to about a 0.01 to 0.02 ml or 0.15 to 0.30 grains of water precision level for firearms cartridge cases. A case capacity establishment should be done by measuring several fired cases from a particular production lot and calculating their average case capacity. This also provides insight into the uniformity of the sampled lot. As an example, such a water case capacity measurement can be performed on 4 fired .35 Whelen Remington cases, as in the sketch below. The case capacity of different cartridge brands of a particular chambering can significantly vary between cartridge case manufacturers and even production lots. The default database of QuickLOAD for example contains 5 different .300 Winchester Magnum case capacities for 5 different cartridge case manufacturers. Practical use and limitations QuickLOAD mainly helps reloaders understand how changing variables can affect barrel harmonics, pressures and muzzle velocities. It can predict the effect of changes in ambient temperature, bullet seating depth, and barrel length. However, QuickLOAD has limitations, as it is merely a computer simulation. It does not account for different brands of primers, for example, and its ability to predict the effect of seating bullets into the rifling is crude.
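The water-weighing procedure for establishing case capacity, described above, can be expressed as a short script. This is a minimal Python sketch, not part of QuickLOAD itself; the four example weights are hypothetical and only illustrate the arithmetic (1 g ≈ 15.43 grains, and 1 g of water ≈ 1 ml at room temperature).

# Case capacity from dry and water-filled case weights (hypothetical example values).
GRAINS_PER_GRAM = 15.4324          # unit conversion
WATER_DENSITY_G_PER_ML = 0.998     # approx. density of water at room temperature

# (dry weight, water-filled weight) in grams for four fired cases -- made-up numbers.
cases = [(12.95, 17.64), (12.91, 17.61), (12.98, 17.66), (12.93, 17.60)]

capacities_ml = []
for dry_g, wet_g in cases:
    water_g = wet_g - dry_g                      # mass of water held by the case
    capacities_ml.append(water_g / WATER_DENSITY_G_PER_ML)

avg_ml = sum(capacities_ml) / len(capacities_ml)
avg_grains = avg_ml * WATER_DENSITY_G_PER_ML * GRAINS_PER_GRAM
spread_ml = max(capacities_ml) - min(capacities_ml)

print(f"average capacity: {avg_ml:.2f} ml ({avg_grains:.1f} grains of water), "
      f"spread: {spread_ml:.2f} ml")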
A QuickLOAD user most certainly should not just "plug in" a cartridge, bullet and powder and use that load, assuming it is safe. It is good practice to double- or triple-check QuickLOAD's output against reliable load data supplied by the powder producing companies. Of course, the best way to check firearms cartridge loads is with actual proof test measurements at certified test facilities. QuickTARGET external ballistics predictor computer program The QuickLOAD interior ballistics predictor program also contains the external ballistics predictor computer program QuickTARGET. QuickTARGET is based on the Siacci/Mayevski G1 model and gives the user the possibility to enter several different G1 BC constants for different speed regimes to calculate ballistic predictions that more closely match a bullet's flight behaviour at longer ranges in comparison to calculations that use only one BC constant. In 2008 QuickTARGET Unlimited was introduced as an additional part of the QuickLOAD/QuickTARGET 3.4 version software suite. QuickTARGET Unlimited is an enhanced beta version of QuickTARGET that can take several long range factors into account to make the external ballistic predictions more accurate. For this it can use several drag models: G1, G5, G7, etc. and a custom drag function that uses drag coefficient (Cd) data. In January 2009 the Finnish ammunition manufacturer Lapua published Doppler radar test derived drag coefficient data for most of their rifle projectiles. The predictive capabilities of the custom mode are based on actual bullet flight data derived from Doppler radar test sessions. With this data engineers can create algorithms that utilize both known mathematical ballistic models as well as test specific, tabular data in unison. Besides the data for Lapua bullets, QuickTARGET Unlimited also contains Cd data for some other projectiles that are often used for extended range shooting. Computer requirements The QuickLOAD/QuickTARGET 3.6 version and up is compatible only with the Microsoft Windows 7 to Windows 11 operating systems. The software suite can be used with metric units and imperial units/United States customary units and was created and is maintained by mechanical engineer Hartmut G. Brömel in Babenhausen, Germany. QuickLOAD is distributed in the United States, Canada, Mexico, South Africa, Australia and New Zealand by NECO (Nostalgia Enterprises Company), and in Europe, except Germany, the Czech Republic, Denmark, Finland and Ukraine, by JMS Arms. References External links 6mmBR.com QuickLOAD Review & User's Guide RealGuns.com QuickLOAD Review Desktop Data by Craig Boddington, Guns & Ammo Magazine Handloading Ballistics
QuickLOAD
[ "Physics" ]
1,160
[ "Applied and interdisciplinary physics", "Ballistics" ]
10,410,127
https://en.wikipedia.org/wiki/Light%20scattering%20by%20particles
Light scattering by particles is the process by which small particles (e.g. ice crystals, dust, atmospheric particulates, cosmic dust, and blood cells) scatter light, causing optical phenomena such as the blue color of the sky and halos. Maxwell's equations are the basis of theoretical and computational methods describing light scattering, but since exact solutions to Maxwell's equations are only known for selected particle geometries (such as spherical), light scattering by particles is a branch of computational electromagnetics dealing with electromagnetic radiation scattering and absorption by particles. In the case of geometries for which analytical solutions are known (such as spheres, clusters of spheres, and infinite cylinders), the solutions are typically calculated in terms of infinite series. In the case of more complex geometries and for inhomogeneous particles, the original Maxwell's equations are discretized and solved. Multiple-scattering effects of light scattering by particles are treated by radiative transfer techniques (see, e.g. atmospheric radiative transfer codes). The relative size of a scattering particle is defined by its size parameter x, which is the ratio of its characteristic dimension (for a sphere, the radius r) to the wavelength of the light: x = 2πr/λ. Exact computational methods Finite-difference time-domain method The FDTD method belongs in the general class of grid-based differential time-domain numerical modeling methods. The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved. T-matrix The technique is also known as the null field method and the extended boundary condition method (EBCM). Matrix elements are obtained by matching boundary conditions for solutions of Maxwell's equations. The incident, transmitted, and scattered fields are expanded into spherical vector wave functions. Computational approximations Mie approximation Scattering from any spherical particle with arbitrary size parameter is explained by the Mie theory. Mie theory, also called Lorenz-Mie theory or Lorenz-Mie-Debye theory, is a complete analytical solution of Maxwell's equations for the scattering of electromagnetic radiation by spherical particles (Bohren and Huffman, 1998). For more complex shapes such as coated spheres, multispheres, spheroids, and infinite cylinders there are extensions which express the solution in terms of infinite series. There are codes available to study light scattering in the Mie approximation for spheres, layered spheres, and multiple spheres and cylinders. Discrete dipole approximation There are several techniques for computing scattering of radiation by particles of arbitrary shape. The discrete dipole approximation is an approximation of the continuum target by a finite array of polarizable points. The points acquire dipole moments in response to the local electric field. The dipoles of these points interact with one another via their electric fields. There are DDA codes available to calculate light scattering properties in the DDA approximation.
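The size parameter defined above is often what guides the choice between the computational approaches listed in this article. The following is a minimal Python sketch, not taken from any of the codes mentioned here; the regime boundaries (x much less than 1 for Rayleigh, x much greater than 1 for geometric optics) are rules of thumb, and the cut-off values 0.1 and 100 are assumptions chosen only for illustration.

import math

def size_parameter(radius_m: float, wavelength_m: float) -> float:
    """Size parameter x = 2*pi*r / lambda for a spherical particle."""
    return 2.0 * math.pi * radius_m / wavelength_m

def suggest_regime(x: float) -> str:
    """Very rough guidance on which scattering treatment may apply."""
    if x < 0.1:        # x << 1: particle much smaller than the wavelength
        return "Rayleigh scattering"
    if x > 100.0:      # x >> 1: particle much larger than the wavelength
        return "geometric optics (ray tracing)"
    return "Mie theory / numerical methods (FDTD, T-matrix, DDA)"

# Example: a 5 nm nanoparticle and a 10 micrometre ice crystal in visible light (550 nm).
for r in (5e-9, 10e-6):
    x = size_parameter(r, 550e-9)
    print(f"r = {r:.1e} m -> x = {x:.2f}: {suggest_regime(x)}")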
Approximate methods Rayleigh scattering The Rayleigh scattering regime is the scattering of light, or other electromagnetic radiation, by particles much smaller than the wavelength of the light. Rayleigh scattering can be defined as scattering in the small size parameter regime, x ≪ 1. Geometric optics (ray-tracing) Ray tracing techniques can approximate light scattering by not only spherical particles but ones of any specified shape (and orientation), so long as the size and critical dimensions of the particle are much larger than the wavelength of light. The light can be considered as a collection of rays whose widths are much larger than the wavelength but small compared to the particle itself. Each ray hitting the particle may undergo (partial) reflection and/or refraction. These rays exit in directions that are thereby computed, with their full power or (when partial reflection is involved) with the incident power divided among two (or more) exiting rays. Just as with lenses and other optical components, ray tracing determines the light emanating from a single scatterer, and by combining that result statistically for a large number of randomly oriented and positioned scatterers, one can describe atmospheric optical phenomena such as rainbows due to water droplets and halos due to ice crystals. There are atmospheric optics ray-tracing codes available. See also Codes for electromagnetic scattering by spheres Codes for electromagnetic scattering by cylinders Discrete dipole approximation codes Finite-difference time-domain method Scattering References Barber, P.W. and S.C. Hill, Light scattering by particles: computational methods, Singapore; Teaneck, N.J., World Scientific, c1990, 261 p. + 2 computer disks (3½ in.), (pbk.) Bohren, Craig F. and Donald R. Huffman, Absorption and scattering of light by small particles, New York: Wiley, 1998, 530 p. Hulst, H. C. van de, Light scattering by small particles, New York, Dover Publications, 1981, 470 p. Kerker, Milton, The scattering of light, and other electromagnetic radiation, New York, Academic Press, 1969, 666 p. Mishchenko, Michael I., Joop W. Hovenier, Larry D. Travis, Light scattering by nonspherical particles: theory, measurements, and applications, San Diego: Academic Press, 2000, 690 p. Stratton, Julius Adams, Electromagnetic theory, New York, London, McGraw-Hill Book Company, Inc., 1941, 615 p. Scattering, absorption and radiative transfer (optics)
Light scattering by particles
[ "Chemistry" ]
1,174
[ "Scattering", " absorption and radiative transfer (optics)" ]
4,671,396
https://en.wikipedia.org/wiki/Atomistix%20Virtual%20NanoLab
Atomistix Virtual NanoLab (VNL) is a commercial point-and-click software package for the simulation and analysis of physical and chemical properties of nanoscale devices. Virtual NanoLab is developed and sold commercially by QuantumWise A/S. QuantumWise was then acquired by Synopsys in 2017. Features With its graphical interface, Virtual NanoLab provides a user-friendly approach to atomic-scale modeling. The software contains a set of interactive instruments that allow the user to design nanosystems, to set up and execute numerical calculations, and to visualize the results. Samples such as molecules, nanotubes, crystalline systems, and two-probe systems (i.e. a nanostructure coupled to two electrodes) are built with a few mouse clicks. Virtual NanoLab contains a 3D visualization tool, the Nanoscope, where atomic geometries and computed results can be viewed and analyzed. One can, for example, plot Bloch functions of nanotubes and crystals, molecular orbitals, electron densities, and effective potentials. The numerical engine that carries out the actual simulations is Atomistix ToolKit, which combines density functional theory and non-equilibrium Green's functions to perform ab initio electronic-structure and transport calculations. Atomistix ToolKit is developed from the academic codes TranSIESTA and McDCal. See also Atomistix ToolKit NanoLanguage Atomistix References External links QuantumWise web site Nanotechnology companies Computational science Computational chemistry software Physics software Density functional theory software Software that uses Qt
Atomistix Virtual NanoLab
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
314
[ "Quantum chemistry stubs", "Materials science stubs", "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Applied mathematics", "Nanotechnology companies", "Computational physics", "Computational science", "Computational chemistry", "D...
4,671,692
https://en.wikipedia.org/wiki/Power%20electronic%20substrate
The role of the substrate in power electronics is to provide the interconnections to form an electric circuit (like a printed circuit board), and to cool the components. Compared to materials and techniques used in lower power microelectronics, these substrates must carry higher currents and provide a higher voltage isolation (up to several thousand volts). They also must operate over a wide temperature range (up to 150 or 200 °C). Direct Bonded Copper (DBC) substrate DBC substrates are commonly used in power modules, because of their very good thermal conductivity. They are composed of a ceramic material tile with a sheet of copper bonded to one or both sides by a high-temperature oxidation process (the copper and substrate are heated to a carefully controlled temperature in an atmosphere of nitrogen containing about 30 ppm of oxygen; under these conditions, a copper-oxygen eutectic forms which bonds successfully both to copper and the oxides used as substrates). The top copper layer can be preformed prior to firing or chemically etched using printed circuit board technology to form an electrical circuit, while the bottom copper layer is usually kept plain. The substrate is attached to a heat spreader by soldering the bottom copper layer to it. A related technique uses a seed layer, photoimaging, and then additional copper plating to allow for fine lines (as small as 50 micrometres) and through-vias to connect front and back sides. This can be combined with polymer-based circuits to create high density substrates that eliminate the need for direct connection of power devices to heat sinks. One of the main advantages of the DBC vs other power electronic substrates is their low coefficient of thermal expansion, which is close to that of silicon (compared to pure copper). This ensures good thermal cycling performances (up to 50,000 cycles). The DBC substrates also have excellent electrical insulation and good heat spreading characteristics. Ceramic material used in DBC include: Alumina (Al2O3), commonly used because of its low cost. It is however not a really good thermal conductor (24-28 W/mK) and is brittle. Aluminium nitride (AlN), which is more expensive, but has far better thermal performance (> 150 W/mK). Silicon nitride (SiN) (90 W/mK) HPS (Alumina w/ 9% ZrO2 doped) (26 W/mK) Beryllium oxide (BeO), which has good thermal performance, but is often avoided because of its toxicity when the powder is ingested or inhaled. Active Metal Brazed (AMB) substrate AMB consists of a metal foil soldered to the ceramic baseplate using solder paste and high temperature (800 °C – 1000 °C) under vacuum. Although AMB is electrically very similar to DBC, it is typically suited for small production lots due to the unique process requirements. Insulated Metal substrate (IMS) IMS consists of a metal baseplate (aluminium is commonly used because of its low cost and density) covered by a thin layer of dielectric (usually an epoxy-based layer) and a layer of copper (35 μm to more than 200 μm thick). The FR-4-based dielectric is usually thin (about 100 μm) because it has poor thermal conductivity compared to the ceramics used in DBC substrates. Due to its structure, the IMS is a single-sided substrate, i.e. it can only accommodate components on the copper side. In most applications, the baseplate is attached to a heatsink to provide cooling, usually using thermal grease and screws. Some IMS substrates are available with a copper baseplate for better thermal performances. 
Compared to a classical printed circuit board, the IMS provides better heat dissipation. It is one of the simplest ways to provide efficient cooling to surface mount components. Other substrates When the power devices are attached to a proper heatsink, there is no need for a thermally efficient substrate. Classical printed circuit board (PCB) material can be used (this method is typically used with through-hole technology components). This is also true for low-power applications (from some milliwatts to some watts), as the PCB can be thermally enhanced by using thermal vias or wide tracks to improve convection. An advantage of this method is that multilayer PCBs allow the design of complex circuits, whereas DBC and IMS are mostly single-sided technologies. Flexible substrates can be used for low-power applications. As they are built using Kapton as a dielectric, they can withstand high temperatures and high voltages. Their intrinsic flexibility makes them resistant to thermal cycling damage. Ceramic substrates (thick film technology) can also be used in some applications (such as automotive) where reliability is of highest importance. Compared to DBCs, thick film technology offers a higher degree of design freedom but may be less cost-efficient. The thermal performance of IMS, DBC and thick film substrates is evaluated in "Thermal analysis of high-power modules" (Van Godbold, C., Sankaran, V.A. and Hudgins, J.L., IEEE Transactions on Power Electronics, Vol. 12, No. 1, Jan 1997, pages 3–11, ISSN 0885-8993; restricted access). References Power electronics
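The thermal-conductivity figures quoted earlier for the DBC ceramics translate directly into the conduction thermal resistance of a substrate layer, R = t / (k·A). The following is a minimal Python sketch, not taken from any standard or datasheet; the layer thickness, footprint area and 100 W load are assumed example values, and only one-dimensional conduction through the ceramic itself is considered.

# One-dimensional conduction resistance R = t / (k * A) for a DBC ceramic layer.
THICKNESS_M = 0.63e-3      # 0.63 mm ceramic tile (assumed example)
AREA_M2 = 10e-3 * 10e-3    # 10 mm x 10 mm footprint under a power die (assumed)

# Thermal conductivities (W/m.K) as quoted in the article (alumina taken at 26).
ceramics = {"alumina (Al2O3)": 26.0, "HPS (ZrO2-doped alumina)": 26.0,
            "silicon nitride": 90.0, "aluminium nitride (AlN)": 150.0}

for name, k in ceramics.items():
    r_th = THICKNESS_M / (k * AREA_M2)   # K/W through the ceramic only
    delta_t = r_th * 100.0               # temperature rise at an assumed 100 W load
    print(f"{name:28s} R_th = {r_th:.3f} K/W -> {delta_t:.1f} K rise at 100 W")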
Power electronic substrate
[ "Engineering" ]
1,104
[ "Electronic engineering", "Power electronics" ]
4,672,258
https://en.wikipedia.org/wiki/Need%20to%20know
The term "need to know" (alternatively spelled need-to-know), when used by governments and other organizations (particularly those related to military or intelligence), describes the restriction of data which is considered very confidential and sensitive. Under need-to-know restrictions, even if one has all the necessary official approvals (such as a security clearance) to access certain information, one would not be given access to such information, or read into a clandestine operation, unless one has a specific need to know; that is, access to the information must be necessary for one to conduct one's official duties. This term also includes anyone that the people with the knowledge deemed necessary to share it with. As with most security mechanisms, the aim is to make it difficult for unauthorized access to occur, without inconveniencing legitimate access. Need-to-know also aims to discourage "browsing" of sensitive material by limiting access to the smallest possible number of people. Examples The Battle of Normandy in 1944 is an example of a need-to-know restriction. Though thousands of military personnel were involved in planning the invasion, only a small number of them knew the entire scope of the operation; the rest were only informed of data needed to complete a small part of the plan. The same is true of the Trinity project, the first test of a nuclear weapon in 1945. Problems and criticism Like other security measures, need to know can be misused by persons who wish to refuse others access to information they hold in an attempt to increase their personal power, prevent unwelcome review of their work or prevent embarrassment resulting from actions or thoughts. Need to know can also be invoked to hide illegal activities. This may be considered a necessary use, or a detrimental abuse of such a policy when considered from different perspectives. Need to know can be detrimental to workers' efficiency. Even when done in good faith, one might not be fully aware of who actually needs to know the information, resulting in inefficiencies as some people may inevitably withhold information that they require to perform their duty. The speed of computations with IBM mechanical calculators at Los Alamos dramatically increased after the calculators' operators were told what the numbers meant: In computer technology The discretionary access control mechanisms of some operating systems can be used to enforce need to know. In this case, the owner of a file determines whether another person should have access. Need to know is often concurrently applied with mandatory access control schemes, in which the lack of an official approval (such as a clearance) may absolutely prohibit a person from accessing the information. This is because need to know can be a subjective assessment. Mandatory access control schemes can also audit accesses, in order to determine if need to know has been violated. The term is also used in the concept of graphical user interface design where computers are controlling complex equipment such as airplanes. In this usage, when many different pieces of data are dynamically competing for finite user interface space, safety-related messages are given priority. See also References Computer security procedures Classified information
Need to know
[ "Engineering" ]
624
[ "Cybersecurity engineering", "Computer security procedures" ]
4,674,032
https://en.wikipedia.org/wiki/Traffic%20management
Traffic management is a key branch within logistics. It concerns the planning, control and purchasing of transport services needed to physically move vehicles (for example aircraft, road vehicles, rolling stock and watercraft) and freight. Traffic management is implemented by people working with different job titles in different branches: Within freight and cargo logistics: traffic manager, assessment of hazardous and awkward materials, carrier choice and fees, demurrage, documentation, expediting, freight consolidation, insurance, reconsignment and tracking Within air traffic management: air traffic controller Within rail traffic management: rail traffic controller, train dispatcher or signalman Within road traffic management: traffic controller See also Air traffic control, a service provided by ground-based controllers who direct aircraft Road traffic control, directing vehicular and pedestrian traffic around a construction zone, accident or other road disruption Traffic control in shipping lanes Urban (peak-hour) traffic management References External links Business terms Management by type
Traffic management
[ "Physics", "Engineering" ]
193
[ "Systems engineering", "Traffic management", "Transport stubs", "Physical systems", "Transport" ]
4,675,617
https://en.wikipedia.org/wiki/Cyclopiazonic%20acid
Cyclopiazonic acid (α-CPA), a mycotoxin and a fungal neurotoxin, is made by the molds Aspergillus and Penicillium. It is an indole-tetramic acid that serves as a toxin due to its ability to inhibit calcium-dependent ATPases found in the endoplasmic and sarcoplasmic reticulum. This inhibition disrupts the muscle contraction-relaxation cycle and the calcium gradient that is maintained for proper cellular activity. Cyclopiazonic acid is known to contaminate multiple foods because the molds that produce it are able to grow on different agricultural products, including but not limited to grains, corn, peanuts, and cheese. Due to this contamination, α-CPA can be harmful to both humans and farm animals that were exposed to contaminated animal feeds. However, α-CPA needs to be introduced in very high concentrations to produce mycotoxicosis in animals. Due to this, α-CPA is not a potent acute toxin. Chemically, CPA is related to ergoline alkaloids. CPA was originally isolated from Penicillium cyclopium and subsequently from other fungi including Penicillium griseofulvum, Penicillium camemberti, Penicillium commune, Aspergillus flavus, and Aspergillus versicolor. CPA only appears to be toxic in high concentrations. Ingestion of CPA causes anorexia, dehydration, weight loss, immobility, and signs of spasm when near death. CPA can be found in molds, corn, peanuts, and other fermented products, such as cheese and sausages. Biologically, CPA is a specific inhibitor of the SERCA ATPase in intracellular Ca2+ storage sites. CPA inhibits the SERCA ATPase by keeping it in one specific conformation, thus preventing it from forming another. CPA also binds to the SERCA ATPase at the same site as another inhibitor, thapsigargin (TG). In this way, CPA lowers the ability of the SERCA ATPase to bind an ATP molecule. Toxicity Cases of α-CPA mycotoxicosis in humans are rare. However, the occurrence of α-CPA in foods consumed by humans suggests that the toxin is indeed ingested by humans, though at concentrations low enough to be of no serious health concern. Even though its toxicity in humans is rare, large doses of α-CPA have been seen to adversely affect animals such as mice, rats, chickens, pigs, dogs, and rabbits. Cyclopiazonic acid's toxicity mirrors that of antipsychotic drugs when taken up by these animals. This mycotoxin has been extensively studied in mice to discern its toxic properties. The severity of toxicity is dose-dependent, and exposure to α-CPA has led to hypokinesia, hypothermia, catalepsy, tremors, irregular respiration, ptosis, weight loss, and eventual death in mice. The adverse health effects of α-CPA studied in mice are similar to those found in other animals. Biosynthesis Three enzymes are utilized in the biosynthesis of α-CPA: the polypeptide CpaS, dimethylallyltransferase (CpaD), and flavoprotein oxidocyclase (CpaO). CpaS is the first enzyme in the biosynthetic pathway and is a hybrid polyketide synthase-nonribosomal peptide synthetase (PKS-NRPS). It uses the precursors acetyl-CoA, malonyl-CoA, and tryptophan to produce cyclo-acetoacetyl-L-tryptophan (cAATrp). The intermediate cAATrp is then prenylated with dimethylallyl pyrophosphate (DMAPP) by the enzyme CpaD to form the intermediate β-CPA. CpaD has high substrate specificity and will not catalyze prenylation in the presence of DMAPP's isomer isopentenyl pyrophosphate (IPP) or the derivatives of cAATrp. The third enzyme, CpaO, then acts on β-CPA through a redox mechanism that allows for intramolecular cyclization to form α-CPA.
Mechanism of Action of CpaS CpaS is made of several domains that belong either to the PKS portion or the NRPS portion of the 431 kDa protein. The PKS portion is made up of three catalytically important domains and three additional tailoring domains that are common to polyketide synthases but not used in the biosynthesis of α-CPA. The catalytically important acyl carrier protein domain (ACP), acyl transferase domain (AT), and ketosynthase domain (KS) work together to form acetoacetyl-CoA from the precursors acetyl-CoA and malonyl-CoA. The acetoacetyl-CoA is then acted on by the NRPS portion of CpaS. The NRPS portion, like the PKS portion, contains many catalytically active domains. The adenylation domain (A) acts first to activate the amino acid tryptophan and subsequently transfer it to the peptidyl carrier protein (PCP) domain (T). Following this, the condensation domain (C) catalyzes an amide bond formation between the acetoacetyl moiety attached to the ACP and the tryptophan attached to the PCP. The releasing domain (R) catalyzes a Dieckmann condensation to both cyclize and release the cAATrp product. Formation of β-CPA The second enzyme, CpaD, converts the cAATrp produced by CpaS to β-CPA. CpaD, also known as cycloacetoacetyltryptophanyl dimethylallyltransferase, places DMAPP at the tryptophan indole ring, specifically at position C-4. CpaD catalyzes this selective prenylation at position C-4 through a Friedel-Crafts alkylation, producing β-CPA. It is important to note here that the biosynthesis of α-CPA is dependent on other pathways, specifically the mevalonate pathway, which serves to form DMAPP. Formation of α-CPA The final enzyme in the biosynthetic pathway, CpaO, converts β-CPA to α-CPA. CpaO is a FAD-dependent oxidoreductase. FAD oxidizes β-CPA in a two-electron process, subsequently allowing for ring closure and formation of α-CPA. To regenerate the oxidized FAD cofactor used by CpaO, the reduced FAD reacts with molecular oxygen to produce hydrogen peroxide. References Mycotoxins Tryptamine alkaloids Nitrogen heterocycles Enols Ketones Lactams Heterocyclic compounds with 5 rings
Cyclopiazonic acid
[ "Chemistry" ]
1,519
[ "Enols", "Tryptamine alkaloids", "Ketones", "Functional groups", "Alkaloids by chemical classification" ]
4,676,311
https://en.wikipedia.org/wiki/RICE%20chart
An ICE table or RICE box or RICE chart is a tabular system for keeping track of changing concentrations in an equilibrium reaction. ICE stands for initial, change, equilibrium. It is used in chemistry to keep track of the changes in amount of substance of the reactants and also to organize the set of conditions that one wants to solve with. Some sources refer to a RICE table (or box or chart) where the added R stands for the reaction to which the table refers. Others simply call it a concentration table (for the acid–base equilibrium). Example To illustrate the process, consider the case of dissolving a weak acid, HA, in water. The pH can be calculated using an ICE table. Note that in this example, we are assuming that the acid is not very weak, and that the concentration is not very dilute, so that the concentration of [OH−] ions can be neglected. This is equivalent to the assumption that the final pH will be below about 6 or so. See pH calculations for more details. First write down the equilibrium expression: HA ⇌ A− + H+. The columns of the table correspond to the three species in equilibrium. The first row shows the reaction, which some authors label R and some leave blank. The second row, labeled I, has the initial conditions: the nominal concentration of acid is Ca and it is initially undissociated, so the concentrations of A− and H+ are zero. The third row, labeled C, specifies the change that occurs during the reaction. When the acid dissociates, its concentration changes by an amount −x, and the concentrations of A− and H+ both change by an amount +x. This follows from consideration of mass balance (the total number of each atom/molecule must remain the same) and charge balance (the sum of the electric charges before and after the reaction must be zero). Note that the coefficients in front of the "x" correlate to the mole ratios of the reactants to the product. For example, if the reaction equation had 2 H+ ions in the product, then the "change" for that cell would be "2x". The fourth row, labeled E, is the sum of the I and C rows and shows the final concentrations of each species at equilibrium: [HA] = Ca − x, [A−] = x and [H+] = x. It can be seen from the table that, at equilibrium, [H+] = x. To find x, the acid dissociation constant Ka (that is, the equilibrium constant for acid-base dissociation) must be specified. Substituting the concentrations with the values found in the last row of the ICE table gives Ka = [A−][H+]/[HA] = x²/(Ca − x). With specific values for Ca and Ka this quadratic equation can be solved for x. Assuming that pH = −log10[H+], the pH can be calculated as pH = −log10 x. If the degree of dissociation is quite small, Ca ≫ x and the expression simplifies to Ka = x²/Ca, so x = √(KaCa) and pH = (1/2)(pKa − log10 Ca). This approximate expression is good for pKa values larger than about 2 and concentrations high enough. References Equilibrium chemistry Physical chemistry
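The worked example above reduces to solving Ka = x²/(Ca − x) for x. The following is a minimal Python sketch of that calculation, not part of the article's sources; the example values Ka = 1.8e-5 (acetic acid) and Ca = 0.10 M are chosen only for illustration.

import math

def ph_weak_acid(Ka: float, Ca: float) -> float:
    """Solve Ka = x^2 / (Ca - x) exactly (positive root) and return pH = -log10(x)."""
    # Rearranged quadratic: x^2 + Ka*x - Ka*Ca = 0
    x = (-Ka + math.sqrt(Ka**2 + 4.0 * Ka * Ca)) / 2.0
    return -math.log10(x)

def ph_weak_acid_approx(Ka: float, Ca: float) -> float:
    """Small-dissociation approximation: pH = (pKa - log10 Ca) / 2."""
    return 0.5 * (-math.log10(Ka) - math.log10(Ca))

Ka, Ca = 1.8e-5, 0.10   # acetic acid in a 0.10 M solution (illustrative values)
print(ph_weak_acid(Ka, Ca), ph_weak_acid_approx(Ka, Ca))   # both come out near pH 2.87-2.88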
RICE chart
[ "Physics", "Chemistry" ]
630
[ "Equilibrium chemistry", "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
4,677,186
https://en.wikipedia.org/wiki/Wien%20approximation
Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. The equation does accurately describe the short-wavelength (high-frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long-wavelength (low-frequency) emission. Details Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation. Wien's original paper did not contain the Planck constant. In this paper, Wien took the wavelength of black-body radiation and combined it with the Maxwell–Boltzmann energy distribution for atoms. The exponential curve was created by the use of Euler's number e raised to a power involving the temperature and a constant. Fundamental constants were later introduced by Max Planck. The law may be written as I(ν, T) = (2hν³/c²) e^(−hν/(kT)) (note the simple exponential frequency dependence of this approximation) or, by introducing natural units in which h = c = k = 1, I(ν, T) = 2ν³ e^(−ν/T), where I(ν, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit frequency emitted at a frequency ν, T is the temperature of the black body, h is the Planck constant, c is the speed of light and k is the Boltzmann constant. This equation may also be written as I(λ, T) = (2hc²/λ⁵) e^(−hc/(λkT)), where I(λ, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength λ. Wien acknowledges Friedrich Paschen in his original paper as having supplied him with the same formula based on Paschen's experimental observations. The peak value of this curve, as determined by setting the derivative of the equation equal to zero and solving, occurs at a wavelength λmax = hc/(5kT) ≈ (2.88 × 10⁻³ m·K)/T and, for the per-unit-frequency form, at a frequency νmax = 3kT/h. Relation to Planck's law The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long-wavelength (low-frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum, derived by treating the radiation as a photon gas and accordingly applying Bose–Einstein in place of Maxwell–Boltzmann statistics. Planck's law may be given as I(ν, T) = (2hν³/c²) · 1/(e^(hν/(kT)) − 1). The Wien approximation may be derived from Planck's law by assuming hν ≫ kT. When this is true, then 1/(e^(hν/(kT)) − 1) ≈ e^(−hν/(kT)), and so the Wien approximation gets ever closer to Planck's law as the frequency increases. Other approximations of thermal radiation The Rayleigh–Jeans law developed by Lord Rayleigh may be used to accurately describe the long-wavelength spectrum of thermal radiation, but it fails to describe the short-wavelength spectrum of thermal emission. See also ASTM Subcommittee E20.02 on Radiation Thermometry Sakuma–Hattori equation Ultraviolet catastrophe Wien's displacement law References Statistical mechanics Electromagnetic radiation 1896 in science 1896 in Germany
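The statement that the Wien approximation approaches Planck's law at high frequency can be checked numerically. The following is a minimal Python sketch written for this comparison (it is not from the article's sources); it simply evaluates both spectral radiance formulas at a few values of hν/kT, with the solar surface temperature used only as an example.

import math

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck(nu: float, T: float) -> float:
    """Planck spectral radiance per unit frequency, W m^-2 Hz^-1 sr^-1."""
    return (2.0 * H * nu**3 / C**2) / (math.exp(H * nu / (K * T)) - 1.0)

def wien(nu: float, T: float) -> float:
    """Wien approximation to the same quantity."""
    return (2.0 * H * nu**3 / C**2) * math.exp(-H * nu / (K * T))

T = 5800.0  # roughly the solar surface temperature (example value)
for ratio in (0.5, 1.0, 3.0, 10.0):        # ratio = h*nu / (k*T)
    nu = ratio * K * T / H
    rel_err = (planck(nu, T) - wien(nu, T)) / planck(nu, T)
    print(f"h*nu/kT = {ratio:4.1f}: relative error of Wien approximation = {rel_err:.2%}")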
Wien approximation
[ "Physics" ]
521
[ "Electromagnetic radiation", "Physical phenomena", "Radiation", "Statistical mechanics" ]
4,677,237
https://en.wikipedia.org/wiki/Gene%20cassette
In biology, a gene cassette is a type of mobile genetic element that contains a gene and a recombination site. Each cassette usually contains a single gene and tends to be very small; on the order of 500–1,000 base pairs. They may exist incorporated into an integron or freely as circular DNA. Gene cassettes can move around within an organism's genome or be transferred to another organism in the environment via horizontal gene transfer. These cassettes often carry antibiotic resistance genes. An example would be the kanMX cassette, which confers kanamycin (an antibiotic) resistance upon bacteria. Integrons Integrons are genetic structures in bacteria which express and are capable of acquiring and exchanging gene cassettes. The integron consists of a promoter, an attachment site, and an integrase gene that encodes a site-specific recombinase. There are three classes of integrons described. The mobile units that insert into integrons are gene cassettes. For cassettes that carry a single gene without a promoter, the entire series of cassettes is transcribed from an adjacent promoter within the integron. The gene cassettes are speculated to be inserted and excised via a circular intermediate. This would involve recombination between short sequences found at their termini, known as 59 base elements (59-be), which may not be 59 bases long. The 59-be are a diverse family of sequences that function as recognition sites for the site-specific integrase (the enzyme responsible for integrating the gene cassette into an integron) and occur downstream from the gene coding sequence. Diversity and prevalence The ability of genetic elements like gene cassettes to excise and insert into genomes results in highly similar gene regions appearing in distantly related organisms. The three classes of integrons are similar in structure and are identified by where the insertions occur and what systems they coincide with. Class 1 integrons are seen in a diverse group of bacterial genomes and likely are all descendants of one common ancestor. The prevalence of the integron has shaped bacterial evolution by allowing rapid transfer of genes that are novel to an organism, such as antibiotic resistance genes. Genetic engineering In genetic engineering, a gene cassette is a manipulable fragment of DNA carrying, and capable of expressing, one or more genes of interest between one or more sets of restriction sites. It can be transferred from one DNA sequence (usually on a vector) to another by 'cutting' the fragment out using restriction enzymes and 'pasting' it back into the new context. The vectors containing the gene of interest typically also carry an antibiotic resistance gene, called a selectable marker, to easily identify cells that have successfully integrated the vector into their genome. To introduce a vector into a target cell, a state of competence must be conferred on the cell. This state is induced in the lab by incubating cells with calcium chloride before a brief heat shock, or by electroporation. This makes the cells more susceptible to the plasmid that is being inserted. Once the plasmid has been added, the cells are grown in the presence of an antibiotic to confirm the uptake and expression of the new genetic elements. The usage of CRISPR/Cas9 systems has shown success in inserting genes into eukaryotic genomes. While CRISPR modification is still in its infancy, there is significant evidence for its usage in combination with other techniques to produce high throughput (HTP) genome editing systems.
Genetic engineering of bacteria for the production of a variety of industrial products, including biofuels and specialty chemicals/nutraceuticals, is a major area of research. Horizontal gene transfer Horizontal gene transfer (HGT) is the transfer of genetic elements between cells other than by parental inheritance. HGT is responsible for much of the spread of antibiotic resistance among bacteria. Gene cassettes containing antibiotic resistance genes, or other virulence factors such as exotoxins, can be transferred from cell to cell by phage (transduction), by uptake from the environment (transformation), or by bacterial conjugation. The ability to transfer gene cassettes between organisms has played a large role in the evolution of prokaryotes. Many commensal organisms, such as E. coli, regularly harbor one or more gene cassettes that convey antibiotic resistance. Horizontal transfer of genetic elements from non-pathogenic commensals to unrelated species can result in highly virulent pathogens that carry multiple antibiotic resistance genes. The increasing prevalence of resistance creates challenging questions for researchers and physicians. See also Expression cassette References External links Broad Institute Caribou Biosciences Ginkgo Bioworks Genetics
Gene cassette
[ "Biology" ]
961
[ "Genetics" ]
2,521,540
https://en.wikipedia.org/wiki/Picrite%20basalt
Picrite basalt or picrobasalt is a variety of high-magnesium olivine basalt that is very rich in the mineral olivine. It is dark with yellow-green olivine phenocrysts (20–50%) and black to dark brown pyroxene, mostly augite. The olivine-rich picrite basalts that occur with the more common tholeiitic basalts of Kīlauea and other volcanoes of the Hawaiian Islands are the result of accumulation of olivine crystals either in a portion of the magma chamber or in a caldera lava lake. The compositions of these rocks are well represented by mixes of olivine and more typical tholeiitic basalt. The name "picrite" can also be applied to an olivine-rich alkali basalt: such picrite consists largely of phenocrysts of olivine and titanium-rich augite pyroxene with minor plagioclase set in a groundmass of augite and more sodic plagioclase and perhaps analcite and biotite. More generally, the classification of fine-grained rocks recognises a group known as 'picritic rocks' that are characterised by high magnesium content and low SiO2 content. They fit in the TAS classification system only at the lowest level of SiO2 (41 to 43% by weight) and Na2O + K2O (up to 3% by weight). They include picrite, komatiite and meimechite. Picrites and komatiites are somewhat similar chemically (both defined as >18% MgO), but differ in having 1 to 3% total alkalis and <1% total alkalis respectively (see the sketch below). Komatiite lavas are products of more magnesium-rich melts, and good examples exhibit the spinifex texture. They are largely restricted to the Archean. In contrast, picrites are magnesium-rich because crystals of olivine have accumulated in more normal melts by magmatic processes. Picrite basalt is found in the lavas of Mauna Kea and Mauna Loa in Hawaii, Curaçao, in the Piton de la Fournaise volcano on Réunion Island and various other oceanic island volcanoes. Picrite basalt has erupted in historical times from Mauna Loa during the eruptions of 1852 and 1868 (from different flanks of Mauna Loa). Picrite basalt with 30% olivine commonly erupts from the Piton de la Fournaise. In addition to extrusive occurrences, it also occurs in minor intrusions. Oceanite Oceanite is a variety of picritic basalt characterized by its large amounts of olivine phenocrysts and lesser amounts of augite, and by having a groundmass of olivine, plagioclase and augite. The term was coined by Antoine Lacroix in 1923 for rare basalts with more than 50% olivine. Common uses Olivine basalt is commonly used by foundries, boilermakers and boiler users to protect the area around a burner tip or to protect a floor from molten metal and other slag. Its use in this fashion is appropriate since olivine is a highly refractory, high-melting-temperature mineral. References Aphanitic rocks Mafic rocks Ultramafic rocks Volcanology Basalt
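The compositional boundaries quoted above (SiO2 of 41–43 wt%, total alkalis up to 3 wt%, MgO above 18 wt%, and the 1–3% versus <1% alkali split between picrite and komatiite) can be turned into a small classifier. This is a minimal Python sketch written only for illustration; it follows the figures as stated in this article rather than the full formal IUGS scheme, and the example whole-rock analyses are hypothetical.

def classify_picritic(sio2: float, mgo: float, na2o: float, k2o: float) -> str:
    """Very rough picritic-rock classification from whole-rock wt% oxides."""
    alkalis = na2o + k2o
    picritic_window = 41.0 <= sio2 <= 43.0 and alkalis <= 3.0 and mgo > 18.0
    if not picritic_window:
        return "not picritic (outside the SiO2/alkali/MgO window)"
    if alkalis < 1.0:
        return "komatiite"
    return "picrite"

# Hypothetical whole-rock analyses, for illustration only.
print(classify_picritic(sio2=42.0, mgo=22.0, na2o=1.2, k2o=0.3))   # picrite
print(classify_picritic(sio2=42.5, mgo=27.0, na2o=0.4, k2o=0.1))   # komatiite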
Picrite basalt
[ "Chemistry" ]
699
[ "Mafic rocks", "Ultramafic rocks", "Igneous rocks by composition" ]
2,522,070
https://en.wikipedia.org/wiki/Multiferroics
Multiferroics are defined as materials that exhibit more than one of the primary ferroic properties in the same phase: ferromagnetism – a magnetisation that is switchable by an applied magnetic field ferroelectricity – an electric polarisation that is switchable by an applied electric field ferroelasticity – a deformation that is switchable by an applied stress While ferroelectric ferroelastics and ferromagnetic ferroelastics are formally multiferroics, these days the term is usually used to describe the magnetoelectric multiferroics that are simultaneously ferromagnetic and ferroelectric. Sometimes the definition is expanded to include nonprimary order parameters, such as antiferromagnetism or ferrimagnetism. In addition, other types of primary order, such as ferroic arrangements of magnetoelectric multipoles of which ferrotoroidicity is an example, were proposed. Besides scientific interest in their physical properties, multiferroics have potential for applications as actuators, switches, magnetic field sensors and new types of electronic memory devices. History A Web of Science search for the term multiferroic yields the year 2000 paper "Why are there so few magnetic ferroelectrics?" from N. A. Spaldin (then Hill) as the earliest result. This work explained the origin of the contraindication between magnetism and ferroelectricity and proposed practical routes to circumvent it, and is widely credited with starting the modern explosion of interest in multiferroic materials. The availability of practical routes to creating multiferroic materials from 2000 stimulated intense activity. Particularly key early works were the discovery of large ferroelectric polarization in epitaxially grown thin films of magnetic BiFeO3, the observation that the non-collinear magnetic ordering in orthorhombic TbMnO3 and TbMn2O5 causes ferroelectricity, and the identification of unusual improper ferroelectricity that is compatible with the coexistence of magnetism in hexagonal manganite YMnO3. The graph to the right shows in red the number of papers on multiferroics from a Web of Science search until 2008; the exponential increase continues today. Magnetoelectric materials To place multiferroic materials in their appropriate historical context, one also needs to consider magnetoelectric materials, in which an electric field modifies the magnetic properties and vice versa. While magnetoelectric materials are not necessarily multiferroic, all ferromagnetic ferroelectric multiferroics are linear magnetoelectrics, with an applied electric field inducing a change in magnetization linearly proportional to its magnitude. Magnetoelectric materials and the corresponding magnetoelectric effect have a longer history than multiferroics, shown in blue in the graph to the right. The first known mention of magnetoelectricity is in the 1959 Edition of Landau & Lifshitz' Electrodynamics of Continuous Media which has the following comment at the end of the section on piezoelectricity: "Let us point out two more phenomena, which, in principle, could exist. One is piezomagnetism, which consists of linear coupling between a magnetic field in a solid and a deformation (analogous to piezoelectricity). The other is a linear coupling between magnetic and electric fields in a media, which would cause, for example, a magnetization proportional to an electric field. Both these phenomena could exist for certain classes of magnetocrystalline symmetry. 
We will not however discuss these phenomena in more detail because it seems that till present, presumably, they have not been observed in any substance." One year later, I. E. Dzyaloshinskii showed using symmetry arguments that the material Cr2O3 should have linear magnetoelectric behavior, and his prediction was rapidly verified by D. Astrov. Over the next decades, research on magnetoelectric materials continued steadily in a number of groups in Europe, in particular in the former Soviet Union and in the group of H. Schmid at U. Geneva. A series of East-West conferences entitled Magnetoelectric Interaction Phenomena in Crystals (MEIPIC) was held between 1973 (in Seattle) and 2009 (in Santa Barbara), and indeed the term "multi-ferroic magnetoelectric" was first used by H. Schmid in the proceedings of the 1993 MEIPIC conference (in Ascona). Mechanisms for combining ferroelectricity and magnetism To be defined as ferroelectric, a material must have a spontaneous electric polarization that is switchable by an applied electric field. Usually such an electric polarization arises via an inversion-symmetry-breaking structural distortion from a parent centrosymmetric phase. For example, in the prototypical ferroelectric barium titanate, BaTiO3, the parent phase is the ideal cubic ABO3 perovskite structure, with the B-site Ti4+ ion at the center of its oxygen coordination octahedron and no electric polarisation. In the ferroelectric phase the Ti4+ ion is shifted away from the center of the octahedron causing a polarization. Such a displacement only tends to be favourable when the B-site cation has an electron configuration with an empty d shell (a so-called d0 configuration), which favours energy-lowering covalent bond formation between the B-site cation and the neighbouring oxygen anions. This "d0-ness" requirement is a clear obstacle for the formation of multiferroics, since the magnetism in most transition-metal oxides arises from the presence of partially filled transition metal d shells. As a result, in most multiferroics, the ferroelectricity has a different origin. The following describes the mechanisms that are known to circumvent this contraindication between ferromagnetism and ferroelectricity. Lone-pair-active In lone-pair-active multiferroics, the ferroelectric displacement is driven by the A-site cation, and the magnetism arises from a partially filled d shell on the B site. Examples include bismuth ferrite, BiFeO3, BiMnO3 (although this is believed to be anti-polar), and PbVO3. In these materials, the A-site cation (Bi3+, Pb2+) has a so-called stereochemically active 6s2 lone-pair of electrons, and off-centering of the A-site cation is favoured by an energy-lowering electron sharing between the formally empty A-site 6p orbitals and the filled O 2p orbitals. Geometric ferroelectricity In geometric ferroelectrics, the driving force for the structural phase transition leading to the polar ferroelectric state is a rotational distortion of the polyhedra rather than an electron-sharing covalent bond formation. Such rotational distortions occur in many transition-metal oxides; in the perovskites for example they are common when the A-site cation is small, so that the oxygen octahedra collapse around it. In perovskites, the three-dimensional connectivity of the polyhedra means that no net polarization results; if one octahedron rotates to the right, its connected neighbor rotates to the left and so on. In layered materials, however, such rotations can lead to a net polarization. 
The prototypical geometric ferroelectrics are the layered barium transition metal fluorides, BaMF4 (M = Mn, Fe, Co, Ni, Zn), which have a ferroelectric transition at around 1000 K and a magnetic transition to an antiferromagnetic state at around 50 K. Since the distortion is not driven by a hybridisation between the B-site cation and the anions, it is compatible with the existence of magnetism on the B site, thus allowing for multiferroic behavior. A second example is provided by the family of hexagonal rare earth manganites (h-RMnO3 with R=Ho-Lu, Y), which have a structural phase transition at around 1300 K consisting primarily of a tilting of the MnO5 bipyramids. While the tilting itself has zero polarization, it couples to a polar corrugation of the R-ion layers which yields a polarisation of ~6 μC/cm2. Since the ferroelectricity is not the primary order parameter it is described as improper. The multiferroic phase is reached at ~100 K when a triangular antiferromagnetic order due to spin frustration arises. Charge ordering Charge ordering can occur in compounds containing ions of mixed valence when the electrons, which are delocalised at high temperature, localize in an ordered pattern on different cation sites so that the material becomes insulating. When the pattern of localized electrons is polar, the charge ordered state is ferroelectric. Usually the ions in such a case are magnetic and so the ferroelectric state is also multiferroic. The first proposed example of a charge ordered multiferroic was LuFe2O4, which charge orders at 330 K with an arrangement of Fe2+ and Fe3+ ions. Ferrimagnetic ordering occurs below 240 K. Whether or not the charge ordering is polar has recently been questioned, however. In addition, charge ordered ferroelectricity is suggested in magnetite, Fe3O4, below its Verwey transition. Magnetically-driven ferroelectricity In magnetically driven multiferroics the macroscopic electric polarization is induced by long-range magnetic order which is non-centrosymmetric. Formally, the electric polarisation, P, is given in terms of the magnetization, M, by an expression of the form P ∝ (M·∇)M − M(∇·M). Like the geometric ferroelectrics discussed above, the ferroelectricity is improper, because the polarisation is not the primary order parameter (in this case the primary order is the magnetisation) for the ferroic phase transition. The prototypical example is the formation of the non-centrosymmetric magnetic spiral state, accompanied by a small ferroelectric polarization, below 28 K in TbMnO3. In this case the polarization is small, 10−2 μC/cm2, because the mechanism coupling the non-centrosymmetric spin structure to the crystal lattice is the weak spin-orbit coupling. Larger polarizations occur when the non-centrosymmetric magnetic ordering is caused by the stronger superexchange interaction, such as in orthorhombic HoMnO3 and related materials. In both cases the magnetoelectric coupling is strong because the ferroelectricity is directly caused by the magnetic order. f-electron magnetism While most magnetoelectric multiferroics developed to date have conventional transition-metal d-electron magnetism and a novel mechanism for the ferroelectricity, it is also possible to introduce a different type of magnetism into a conventional ferroelectric. The most obvious route is to use a rare-earth ion with a partially filled shell of f electrons on the A site.
An example is EuTiO3 which, while not ferroelectric under ambient conditions, becomes so when slightly strained, or when its lattice constant is expanded, for example by substituting some barium on the A site. Composites It remains a challenge to develop good single-phase multiferroics with large magnetization and polarization and strong coupling between them at room temperature. Therefore, composites combining magnetic materials, such as FeRh, with ferroelectric materials, such as PMN-PT, are an attractive and established route to achieving multiferroicity. Some examples include magnetic thin films on piezoelectric PMN-PT substrates and Metglas/PVDF/Metglas trilayer structures. Recently an interesting layer-by-layer growth of an atomic-scale multiferroic composite has been demonstrated, consisting of individual layers of ferroelectric and antiferromagnetic LuFeO3 alternating with ferrimagnetic but non-polar LuFe2O4 in a superlattice. A promising new approach is core-shell type ceramics, in which a magnetoelectric composite is formed in situ during synthesis. In the system (BiFe0.9Co0.1O3)0.4-(Bi1/2K1/2TiO3)0.6 (BFC-BKT), very strong ME coupling has been observed on a microscopic scale using PFM under a magnetic field. Furthermore, switching of magnetization via an electric field has been observed using MFM. Here, the ME-active core-shell grains consist of magnetic CoFe2O4 (CFO) cores and a (BiFeO3)0.6-(Bi1/2K1/2TiO3)0.4 (BFO-BKT) shell, where core and shell have an epitaxial lattice structure. The mechanism of the strong ME coupling is magnetic exchange interaction between CFO and BFO across the core-shell interface, which results in an exceptionally high Néel temperature of 670 K for the BF-BKT phase. Other There have been reports of large magnetoelectric coupling at room temperature in type-I multiferroics, such as in the "diluted" magnetic perovskite (PbZr0.53Ti0.47O3)0.6–(PbFe1/2Ta1/2O3)0.4 (PZTFT) and in certain Aurivillius phases. Here, strong ME coupling has been observed on a microscopic scale using PFM under magnetic field, among other techniques. Organic-inorganic hybrid multiferroics have been reported in the family of metal-formate perovskites, as well as molecular multiferroics such as [(CH3)2NH2][Ni(HCOO)3], with elastic strain-mediated coupling between the order parameters. Classification Type-I and type-II multiferroics A helpful classification scheme for multiferroics into so-called type-I and type-II multiferroics was introduced in 2009 by D. Khomskii. Khomskii suggested the term type-I multiferroic for materials in which the ferroelectricity and magnetism occur at different temperatures and arise from different mechanisms. Usually the structural distortion which gives rise to the ferroelectricity occurs at high temperature, and the magnetic ordering, which is usually antiferromagnetic, sets in at lower temperature. The prototypical example is BiFeO3 (TC = 1100 K, TN = 643 K), with the ferroelectricity driven by the stereochemically active lone pair of the Bi3+ ion and the magnetic ordering caused by the usual superexchange mechanism. YMnO3 (TC = 914 K, TN = 76 K) is also type-I, although its ferroelectricity is so-called "improper", meaning that it is a secondary effect arising from another (primary) structural distortion. The independent emergence of magnetism and ferroelectricity means that the domains of the two properties can exist independently of each other.
Most type-I multiferroics show a linear magnetoelectric response, as well as changes in dielectric susceptibility at the magnetic phase transition. The term type-II multiferroic is used for materials in which the magnetic ordering breaks the inversion symmetry and directly "causes" the ferroelectricity. In this case the ordering temperatures for the two phenomena are identical. The prototypical example is TbMnO3, in which a non-centrosymmetric magnetic spiral accompanied by a ferroelectric polarization sets in at 28 K. Since the same transition causes both effects they are by construction strongly coupled. The ferroelectric polarizations tend to be orders of magnitude smaller than those of the type-I multiferroics, however, typically of the order of 10⁻² μC/cm². The opposite effect has also been reported in a Mott insulating charge-transfer salt. Here, a charge-ordering transition to a polar ferroelectric state drives a magnetic ordering, again giving an intimate coupling between the ferroelectric and, in this case antiferromagnetic, orders. Symmetry and coupling The formation of a ferroic order is always associated with the breaking of a symmetry. For example, the symmetry of spatial inversion is broken when ferroelectrics develop their electric dipole moment, and time reversal is broken when ferromagnets become magnetic. The symmetry breaking can be described by an order parameter, the polarization P and magnetization M in these two examples, and leads to multiple equivalent ground states which can be selected by the appropriate conjugate field: electric or magnetic for ferroelectrics or ferromagnets, respectively. This leads for example to the familiar switching of magnetic bits using magnetic fields in magnetic data storage. Ferroics are often characterized by the behavior of their order parameters under space inversion and time reversal. The operation of space inversion reverses the direction of polarisation (so the phenomenon of polarisation is space-inversion antisymmetric) while leaving the magnetisation invariant. As a result, non-polar ferromagnets and ferroelastics are invariant under space inversion whereas polar ferroelectrics are not. The operation of time reversal, on the other hand, changes the sign of M (which is therefore time-reversal antisymmetric), while the sign of P remains invariant. Therefore, non-magnetic ferroelastics and ferroelectrics are invariant under time reversal whereas ferromagnets are not. Magnetoelectric multiferroics are both space-inversion and time-reversal anti-symmetric since they are both ferromagnetic and ferroelectric. The combination of symmetry breakings in multiferroics can lead to coupling between the order parameters, so that one ferroic property can be manipulated with the conjugate field of the other. Ferroelastic ferroelectrics, for example, are piezoelectric, meaning that an electric field can cause a shape change or a pressure can induce a voltage, and ferroelastic ferromagnets show the analogous piezomagnetic behavior. Particularly appealing for potential technologies is the control of the magnetism with an electric field in magnetoelectric multiferroics, since electric fields have lower energy requirements than their magnetic counterparts. Applications Electric-field control of magnetism The main technological driver for the exploration of multiferroics has been their potential for controlling magnetism using electric fields via their magnetoelectric coupling. 
Such a capability could be technologically transformative, since the production of electric fields is far less energy intensive than the production of magnetic fields (which in turn require electric currents) that are used in most existing magnetism-based technologies. There have been successes in controlling the orientation of magnetism using an electric field, for example in heterostructures of conventional ferromagnetic metals and multiferroic BiFeO3, as well as in controlling the magnetic state, for example from antiferromagnetic to ferromagnetic in FeRh. In multiferroic thin films, the coupled magnetic and ferroelectric order parameters can be exploited for developing magnetoelectronic devices. These include novel spintronic devices such as tunnel magnetoresistance (TMR) sensors and spin valves with electric field tunable functions. A typical TMR device consists of two layers of ferromagnetic materials separated by a thin tunnel barrier (~2 nm) made of a multiferroic thin film. In such a device, spin transport across the barrier can be electrically tuned. In another configuration, a multiferroic layer can be used as the exchange bias pinning layer. If the antiferromagnetic spin orientations in the multiferroic pinning layer can be electrically tuned, then the magnetoresistance of the device can be controlled by the applied electric field. One can also explore multiple state memory elements, where data are stored both in the electric and the magnetic polarizations. Radio and high-frequency devices Multiferroic composite structures in bulk form are explored for high-sensitivity ac magnetic field sensors and electrically tunable microwave devices such as filters, oscillators and phase shifters (in which the ferri-, ferro- or antiferro-magnetic resonance is tuned electrically instead of magnetically). Cross-over applications in other areas of physics Multiferroics have been used to address fundamental questions in cosmology and particle physics. In the first, the fact that an individual electron is an ideal multiferroic, with any electric dipole moment required by symmetry to adopt the same axis as its magnetic dipole moment, has been exploited to search for the electric dipole moment of the electron. Using a designed multiferroic material, the change in net magnetic moment on switching of the ferroelectric polarisation in an applied electric field was monitored, allowing an upper bound on the possible value of the electron electric dipole moment to be extracted. This quantity is important because it reflects the amount of time-reversal (and hence CP) symmetry breaking in the universe, which imposes severe constraints on theories of elementary particle physics. In a second example, the unusual improper geometric ferroelectric phase transition in the hexagonal manganites has been shown to have symmetry characteristics in common with proposed early universe phase transitions. As a result, the hexagonal manganites can be used to run experiments in the laboratory to test various aspects of early universe physics. In particular, a proposed mechanism for cosmic-string formation has been verified, and aspects of cosmic string evolution are being explored through observation of their multiferroic domain intersection analogues. Applications beyond magnetoelectricity A number of other unexpected applications have been identified in the last few years, mostly in multiferroic bismuth ferrite, that do not seem to be directly related to the coupled magnetism and ferroelectricity. 
These include a photovoltaic effect, photocatalysis, and gas sensing behaviour. It is likely that the combination of the ferroelectric polarisation with a small band gap composed partially of transition-metal d states is responsible for these favourable properties. Multiferroic films with appropriate band gap structures have also been incorporated into solar cells, resulting in high energy conversion efficiency due to efficient ferroelectric-polarization-driven carrier separation and above-band-gap photovoltages. Various films have been researched, and there is also a new approach to effectively adjust the band gap of the double perovskite multilayer oxide by engineering the cation order for Bi2FeCrO6. Dynamics Dynamical multiferroicity Recently it was pointed out that, in the same way that electric polarisation can be generated by spatially varying magnetic order, magnetism can be generated by a temporally varying polarisation. The resulting phenomenon was called Dynamical Multiferroicity. The magnetisation, M, is given by M ∝ P × ∂P/∂t, where P is the polarisation and × indicates the vector product (a short numerical illustration is given below). The dynamical multiferroicity formalism underlies the following diverse range of phenomena: The phonon Zeeman effect, in which phonons of opposite circular polarisation have different energies in a magnetic field. This phenomenon awaits experimental verification. Resonant magnon excitation by optically driven phonons. Dzyaloshinskii-Moriya-type electromagnons. The inverse Faraday effect. Exotic flavours of quantum criticality. Dynamical processes The study of dynamics in multiferroic systems is concerned with understanding the time evolution of the coupling between various ferroic orders, in particular under external applied fields. Current research in this field is motivated both by the promise of new types of application reliant on the coupled nature of the dynamics, and the search for new physics lying at the heart of the fundamental understanding of the elementary MF excitations. An increasing number of studies of MF dynamics are concerned with the coupling between electric and magnetic order parameters in the magnetoelectric multiferroics. In this class of materials, the leading research is exploring, both theoretically and experimentally, the fundamental limits (e.g. intrinsic coupling velocity, coupling strength, materials synthesis) of the dynamical magnetoelectric coupling and how these may be both reached and exploited for the development of new technologies. At the heart of the proposed technologies based on magnetoelectric coupling are switching processes, which describe the manipulation of the material's macroscopic magnetic properties with an electric field and vice versa. Much of the physics of these processes is described by the dynamics of domains and domain walls. An important goal of current research is the minimization of the switching time, from fractions of a second ("quasi"-static regime), towards the nanosecond range and faster, the latter being the typical time scale needed for modern electronics, such as next generation memory devices. Ultrafast processes operating at picosecond, femtosecond, and even attosecond scale are both driven by, and studied using, optical methods that are at the front line of modern science. The physics underpinning the observations at these short time scales is governed by non-equilibrium dynamics, and usually makes use of resonant processes. 
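As a back-of-the-envelope illustration of the dynamical-multiferroicity relation M ∝ P × ∂P/∂t quoted above (not part of the original article; prefactors and units are ignored), the following Python sketch shows that a polarization rotating circularly in a plane produces a static magnetization along the rotation axis:

import numpy as np

P0, omega = 1.0, 2 * np.pi                 # amplitude and angular frequency
t = np.linspace(0.0, 1.0, 10001)           # one rotation period

# Polarization rotating circularly in the x-y plane
P = np.vstack([P0 * np.cos(omega * t), P0 * np.sin(omega * t), np.zeros_like(t)])
dPdt = np.gradient(P, t, axis=1)

M = np.cross(P, dPdt, axis=0)              # pointwise P x dP/dt
print(np.round(M.mean(axis=1), 3))         # ~ [0, 0, 6.283] = [0, 0, P0**2 * omega]

The time-averaged result points along the rotation axis and scales with the rotation frequency, which is the sense in which a temporally varying polarisation acts as an effective magnetization.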
One demonstration of ultrafast processes is the switching from a collinear antiferromagnetic state to a spiral antiferromagnetic state in CuO under excitation by a 40 fs, 800 nm laser pulse. A second example shows the possibility for the direct control of spin waves with THz radiation in antiferromagnetic NiO. These are promising demonstrations of how the switching of electric and magnetic properties in multiferroics, mediated by the mixed character of the magnetoelectric dynamics, may lead to ultrafast data processing, communication and quantum computing devices. Current research into MF dynamics aims to address various open questions: the practical realisation and demonstration of ultra-high speed domain switching, the development of further new applications based on tunable dynamics, e.g. frequency dependence of dielectric properties, the fundamental understanding of the mixed character of the excitations (e.g. in the ME case, mixed phonon-magnon modes – 'electromagnons'), and the potential discovery of new physics associated with the MF coupling. Domains and domain walls Like any ferroic material, a multiferroic system is fragmented into domains. A domain is a spatially extended region with a constant direction and phase of its order parameters. Neighbouring domains are separated by transition regions called domain walls. Properties of multiferroic domains In contrast to materials with a single ferroic order, domains in multiferroics have additional properties and functionalities. For instance, they are characterized by an assembly of at least two order parameters. The order parameters may be independent (typical yet not mandatory for a Type-I multiferroic) or coupled (mandatory for a Type-II multiferroic). Many outstanding properties that distinguish domains in multiferroics from those in materials with a single ferroic order are consequences of the coupling between the order parameters. The coupling can lead to patterns with a distribution and/or topology of domains that is exclusive to multiferroics. The order-parameter coupling is usually homogeneous across a domain, i.e., gradient effects are negligible. In some cases the averaged net value of the order parameter for a domain pattern is more relevant for the coupling than the value of the order parameter of an individual domain. These issues lead to novel functionalities which explain the current interest in these materials. Properties of multiferroic domain walls Domain walls are spatially extended regions of transition mediating the transfer of the order parameter from one domain to another. In comparison to the domains, the domain walls are not homogeneous and they can have a lower symmetry. This may modify the properties of a multiferroic and the coupling of its order parameters. Multiferroic domain walls may display particular static and dynamic properties. Static properties refer to stationary walls. They can result from the reduced dimensionality, the finite width of the wall, the different symmetry of the wall, and the inherent chemical, electronic, or order-parameter inhomogeneity within the walls with the resulting gradient effects. Synthesis Multiferroic properties can appear in a large variety of materials. Therefore, several conventional material fabrication routes are used, including solid state synthesis, hydrothermal synthesis, sol-gel processing, vacuum based deposition, and floating zone. 
Some types of multiferroics require more specialized processing techniques, such as vacuum-based deposition (for instance MBE or PLD) for thin-film growth, to exploit advantages that may come with two-dimensional layered structures such as strain-mediated multiferroics, heterostructures and anisotropy, and high-pressure solid-state synthesis to stabilize metastable or highly distorted structures or, in the case of the Bi-based multiferroics, to cope with the high volatility of bismuth. List of materials Most multiferroic materials identified to date are transition-metal oxides, which are compounds made of (usually 3d) transition metals with oxygen and often an additional main-group cation. Transition-metal oxides are a favorable class of materials for identifying multiferroics for a few reasons: The localised 3d electrons on the transition metal are usually magnetic if the d shell is partially filled. Oxygen is at a "sweet spot" in the periodic table in that the bonds it makes with transition metals are neither too ionic (like its neighbor fluorine, F) nor too covalent (like its neighbor nitrogen, N). As a result, its bonds with transition metals are rather polarizable, which is favorable for ferroelectricity. Transition metals and oxygen tend to be earth abundant, non-toxic, stable and environmentally benign. Many multiferroics have the perovskite structure. This is in part historical, since most of the well-studied ferroelectrics are perovskites, and in part because of the high chemical versatility of the structure. Below is a list of some of the most well-studied multiferroics with their ferroelectric and magnetic ordering temperatures. When a material shows more than one ferroelectric or magnetic phase transition, the most relevant for the multiferroic behavior is given. See also Ferrotoroidicity Reviews on Multiferroics Talks and documentaries on multiferroics France 24 documentary "Nicola Spaldin: The pioneer behind multiferroics" (12 minutes) Nicola Spaldin: The pioneer behind multiferroics Seminar "Electric field control of magnetism" by R. Ramesh at U Michigan (1 hour) Ramamoorthy Ramesh | Electric Field Control of Magnetism Max Roessler prize for multiferroics at ETH Zürich (5 minutes): Nicola Spaldin, Professor of Materials Theory at ETH Zurich ICTP Colloquium "From materials to cosmology: Studying the early universe under the microscope" by Nicola Spaldin (1 hour) From Materials to Cosmology: Studying the early universe under the microscope - ICTP COLLOQUIUM Tsuyoshi Kimura's research on "Toward highly functional devices using multiferroics" (4 minutes): Toward highly functional devices using multi-ferroics "Strong correlation between electricity and magnetism in materials" by Yoshi Tokura (45 minutes): 4th Kyoto Prize Symposium [Materials Science and Engineering Yoshinori Tokura, July 2, 2017] "Breaking the wall to the next material age", Falling Walls, Berlin (15 minutes): How Materials Science Heralds a New Class of Technologies | NICOLA SPALDIN References Condensed matter physics Materials science Magnetism Phases of matter Hysteresis
Multiferroics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
6,712
[ "Physical phenomena", "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Condensed matter physics", "nan", "Hysteresis", "Matter" ]
2,522,890
https://en.wikipedia.org/wiki/Transaction%20authentication%20number
A transaction authentication number (TAN) is used by some online banking services as a form of single-use one-time passwords (OTPs) to authorize financial transactions. TANs are a second layer of security above and beyond the traditional single-password authentication. TANs provide additional security because they act as a form of two-factor authentication (2FA). If the physical document or token containing the TANs is stolen, it will be useless without the password. Conversely, if the login data are obtained, no transactions can be performed without a valid TAN. Classic TAN TANs often function as follows: The bank creates a set of unique TANs for the user. Typically, there are 50 TANs printed on a list, enough to last half a year for a normal user, each TAN being six or eight characters long. The user picks up the list from the nearest bank branch (presenting a passport, an ID card or similar document) or is sent the TAN list through the mail. The password (PIN) is mailed separately. To log on to their account, the user must enter their user name (often the account number) and password (PIN). This may give access to account information but the ability to process transactions is disabled. To perform a transaction, the user enters the request and authorizes the transaction by entering an unused TAN. The bank verifies the TAN submitted against the list of TANs it issued to the user. If it is a match, the transaction is processed. If it is not a match, the transaction is rejected. The TAN has now been used and will not be recognized for any further transactions. (A minimal code sketch of this flow is given below.) If the TAN list is compromised, the user may cancel it by notifying the bank. However, as any TAN can be used for any transaction, TANs are still prone to phishing attacks where the victim is tricked into providing both password/PIN and one or several TANs. Further, they provide no protection against man-in-the-middle attacks, where an attacker intercepts the transmission of the TAN and uses it for a forged transaction, such as when the client system becomes compromised by some form of malware controlled by a malicious user. Although the remaining TANs are uncompromised and can be used safely, users are generally advised to take further action as soon as possible. Indexed TAN (iTAN) Indexed TANs reduce the risk of phishing. To authorize a transaction, the user is not asked to use an arbitrary TAN from the list but to enter a specific TAN as identified by a sequence number (index). As the index is randomly chosen by the bank, an arbitrary TAN acquired by an attacker is usually worthless. However, iTANs are still susceptible to man-in-the-middle attacks, including phishing attacks where the attacker tricks the user into logging into a forged copy of the bank's website, and man-in-the-browser attacks which allow the attacker to secretly swap the transaction details in the background of the PC as well as to conceal the actual transactions carried out by the attacker in the online account overview. Therefore, in 2012 the European Union Agency for Network and Information Security advised all banks to assume by default that the PC systems of their users are infected by malware and to use security processes in which the user can cross-check the transaction data against manipulation, for example (provided the security of the mobile phone holds up) mTAN, or smartcard readers with their own screen that include the transaction data in the TAN generation process while displaying it beforehand to the user (chipTAN). 
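To make the classic TAN-list flow described above concrete, here is a minimal Python sketch (illustrative only; real banking systems add user authentication, secure issuance, and audit logging, and the class and method names are invented):

import secrets

class TanList:
    """Single-use TAN list as issued to one customer."""
    def __init__(self, count: int = 50, digits: int = 6):
        self.unused = set()
        while len(self.unused) < count:                      # 50 unique TANs
            self.unused.add(f"{secrets.randbelow(10 ** digits):0{digits}d}")

    def authorize(self, tan: str) -> bool:
        """Accept any unused TAN exactly once, then invalidate it."""
        if tan in self.unused:
            self.unused.remove(tan)
            return True
        return False

bank_copy = TanList()                       # the bank keeps the master copy
paper_list = sorted(bank_copy.unused)       # the customer receives the printed list
print(bank_copy.authorize(paper_list[0]))   # True  - first use is accepted
print(bank_copy.authorize(paper_list[0]))   # False - reuse is rejected

Because any unused TAN authorizes any transaction, the sketch also makes the phishing and man-in-the-middle weaknesses discussed above concrete: a stolen TAN works until it is spent or the list is cancelled.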
Indexed TAN with CAPTCHA (iTANplus) Prior to entering the iTAN, the user is presented with a CAPTCHA, which in the background also shows the transaction data and data deemed unknown to a potential attacker, such as the user's birthdate. This is intended to make it hard (but not impossible) for an attacker to forge the CAPTCHA. This variant of the iTAN, used by some German banks, adds a CAPTCHA to reduce the risk of man-in-the-middle attacks. Some Chinese banks have also deployed a TAN method similar to iTANplus. A recent study shows that these CAPTCHA-based TAN schemes are not secure against more advanced automated attacks. Mobile TAN (mTAN) mTANs are used by banks in Austria, Bulgaria, the Czech Republic, Germany, Hungary, Malaysia, the Netherlands, Poland, Russia, Singapore, South Africa, Spain, Switzerland and by some banks in New Zealand, Australia, the UK, and Ukraine. When the user initiates a transaction, a TAN is generated by the bank and sent to the user's mobile phone by SMS. The SMS may also include transaction data, allowing the user to verify that the transaction has not been modified in transmission to the bank. However, the security of this scheme depends on the security of the mobile phone system. In South Africa, where SMS-delivered TAN codes are common, a new attack has appeared: SIM swap fraud. A common attack vector is for the attacker to impersonate the victim, and obtain a replacement SIM card for the victim's phone from the mobile network operator. The victim's user name and password are obtained by other means (such as keylogging or phishing). Between obtaining the cloned/replacement SIM and the victim noticing their phone no longer works, the attacker can transfer/extract the victim's funds from their accounts. In 2016 a study was conducted on SIM swap fraud by a social engineer, revealing weaknesses in number-porting procedures. In 2014, a weakness in the Signalling System No. 7 used for SMS transmission was published, which allows interception of messages. It was demonstrated by Tobias Engel during the 31st Chaos Communication Congress. At the beginning of 2017, this weakness was used successfully in Germany to intercept SMS and fraudulently redirect fund transfers. The rise of smartphones has also led to malware attacks that try to infect the PC and the mobile phone simultaneously in order to break the mTAN scheme. pushTAN pushTAN is an app-based TAN scheme from the German Sparkassen banking group that reduces some of the shortcomings of the mTAN scheme. It eliminates the cost of SMS messages and is not susceptible to SIM card fraud, since the messages are sent via a special text-messaging application to the user's smartphone using an encrypted Internet connection. Just like mTAN, the scheme allows the user to cross-check the transaction details against hidden manipulations carried out by Trojans on the user's PC by including the actual transaction details the bank received in the pushTAN message. Although analogous to using mTAN with a smartphone, there is the risk of a parallel malware infection of PC and smartphone. To reduce this risk the pushTAN app ceases to function if the mobile device is rooted or jailbroken. In late 2014 the Deutsche Kreditbank (DKB) also adopted the pushTAN scheme. TAN generators Simple TAN generators The risk of compromising the whole TAN list can be reduced by using security tokens that generate TANs on-the-fly, based on a secret known by the bank and stored in the token or a smartcard inserted into the token. 
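A minimal sketch of such a generator, modeled loosely on the HMAC-based one-time-password construction of RFC 4226 rather than on any particular vendor's certified scheme (the secret and function names are invented for illustration), might look like this:

import hashlib
import hmac
import struct

def simple_tan(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a TAN from a shared secret and a moving counter."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"per-token-secret"                            # hypothetical shared secret
print([simple_tan(secret, c) for c in range(3)])        # three successive one-time TANs

The bank, holding the same secret and counter, recomputes the expected value to verify a submitted TAN.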
However, the TAN generated is not tied to the details of a specific transaction. Because the TAN is valid for any transaction submitted with it, it does not protect against phishing attacks where the TAN is directly used by the attacker, or against man-in-the-middle attacks. ChipTAN / Sm@rt-TAN / CardTAN ChipTAN is a TAN scheme used by many German and Austrian banks. It is known as ChipTAN or Sm@rt-TAN in Germany and as CardTAN in Austria, whereas cardTAN is a technically independent standard. A ChipTAN generator is not tied to a particular account; instead, the user must insert their bank card during use. The TAN generated is specific to the bank card as well as to the current transaction details. There are two variants: In the older variant, the transaction details (at least amount and account number) must be entered manually. In the modern variant, the user enters the transaction online, then the TAN generator reads the transaction details via a flickering barcode on the computer screen (using photodetectors). It then shows the transaction details on its own screen to the user for confirmation before generating the TAN. As it is independent hardware, coupled only by a simple communication channel, the TAN generator is not susceptible to attack from the user's computer. Even if the computer is subverted by a Trojan, or if a man-in-the-middle attack occurs, the TAN generated is only valid for the transaction confirmed by the user on the screen of the TAN generator, therefore modifying a transaction retroactively would cause the TAN to be invalid. An additional advantage of this scheme is that because the TAN generator is generic, requiring a card to be inserted, it can be used with multiple accounts across different banks, and losing the generator is not a security risk because the security-critical data is stored on the bank card. While it offers protection from technical manipulation, the ChipTAN scheme is still vulnerable to social engineering. Attackers have tried to persuade the users themselves to authorize a transfer under a pretext, for example by claiming that the bank required a "test transfer" or that a company had falsely transferred money to the user's account and they should "send it back". Users should therefore never confirm bank transfers they have not initiated themselves. ChipTAN is also used to secure batch transfers (Sammelüberweisungen). However, this method offers significantly less security than the one for individual transfers. In case of a batch transfer the TAN generator will only show the number and total amount of all transfers combined – thus for batch transfers there is little protection from manipulation by a Trojan. This vulnerability was reported by RedTeam Pentesting in November 2009. In response, as a mitigation, some banks changed their batch transfer handling so that batch transfers containing only a single record are treated as individual transfers. See also One-time password Security token References Online banking Banking technology Computer access control
Transaction authentication number
[ "Engineering" ]
2,066
[ "Cybersecurity engineering", "Computer access control" ]
2,523,077
https://en.wikipedia.org/wiki/International%20AIDS%20Vaccine%20Initiative
The International AIDS Vaccine Initiative (IAVI) is a global not-for-profit, public-private partnership working to accelerate the development of vaccines to prevent HIV infection and AIDS. IAVI researches and develops vaccine candidates, conducts policy analyses, serves as an advocate for the HIV prevention field and engages communities in the trial process and AIDS vaccine education. The organization takes a comprehensive approach to HIV and AIDS that supports existing HIV prevention and treatment programs while emphasizing the need for new AIDS prevention tools. It also works to ensure that future vaccines will be accessible to all who need them. History In 1994, the Rockefeller Foundation convened an international meeting of AIDS researchers, vaccinologists, public health officials, and representatives from philanthropic organizations in Bellagio, Italy, to evaluate the challenges facing HIV/AIDS vaccine development and identify ways to jump-start research. The International AIDS Vaccine Initiative was founded in 1996 by epidemiologist Seth Berkley with the mission of accelerating the development and global distribution of preventative AIDS vaccines. In February 2023, Muhammad Ali Pate was appointed chairman of the Vaccine Alliance (Gavi), which works to provide vaccines in low-income countries. Activities IAVI's scientific team, drawn largely from private industry, researches and develops AIDS vaccine candidates and engages in clinical trials and research through partnerships with more than 100 academic, biotechnology, pharmaceutical and governmental institutions. In September 2009, a global group of researchers led by IAVI published a study in the journal Science identifying PG9 and PG16, two highly potent broadly neutralizing antibodies against a wide variety of HIV variants. The site on the virus to which PG9 and PG16 attach revealed a vulnerability of HIV. PG9 and PG16 were the first new broadly neutralizing antibodies against HIV discovered in more than a decade and are the result of a global effort launched in 2006. Partnerships To address major obstacles in AIDS vaccine development, IAVI partners with HIV researchers from around the world. Its Neutralizing Antibody Center is a network dedicated to discovering and understanding broadly neutralizing antibodies against HIV and using that knowledge in the design of vaccines. IAVI is a founding member of the Global HIV Vaccine Enterprise, an alliance of independent organizations working towards an AIDS vaccine. It also partners with civil society organizations and other entities to advocate jointly for the development of AIDS vaccines, and is a member of the Global Health Technologies Coalition, an alliance of more than 30 non-profit groups that aims to increase awareness of the urgent need for technologies that save lives in developing countries. Donors IAVI's work is funded by donors including: the Bill & Melinda Gates Foundation, the Coalition for Epidemic Preparedness Innovations, the Ministry of Foreign Affairs of Denmark, Irish Aid, the Ministry of Finance of Japan in partnership with The World Bank, the Ministry of Foreign Affairs of the Netherlands, the United Kingdom Department for International Development, and the United States Agency for International Development (USAID). 
See also Seth Berkley Clinical trial Advance market commitments References External links Seth Berkley: HIV and flu -- the vaccine strategy - TED 2010 HIV/AIDS research organisations Vaccination-related organizations Organizations established in 1996 HIV vaccine research International organizations based in the United States HIV/AIDS organizations in the United States
International AIDS Vaccine Initiative
[ "Chemistry", "Biology" ]
647
[ "HIV vaccine research", "Vaccination", "Vaccination-related organizations", "Drug discovery" ]
2,523,262
https://en.wikipedia.org/wiki/Pocket%20protein%20family
Pocket protein family consists of three proteins: RB – Retinoblastoma protein p107 – Retinoblastoma-like protein 1 p130 – Retinoblastoma-like protein 2 They play crucial roles in the metazoan cell cycle through interaction with members of the E2F transcription factors family. References Protein families
Pocket protein family
[ "Chemistry", "Biology" ]
71
[ "Biochemistry stubs", "Protein families", "Protein stubs", "Protein classification" ]
2,523,359
https://en.wikipedia.org/wiki/Zein
Zein is a class of prolamine protein found in maize. It is usually manufactured as a powder from corn gluten meal. Zein is one of the best understood plant proteins. Pure zein is clear, odorless, tasteless, hard, water-insoluble, and edible, and it has a variety of industrial and food uses. Commercial uses Historically, zein has been used in the manufacture of a wide variety of commercial products, including coatings for paper cups, soda bottle cap linings, clothing fabric, buttons, adhesives, coatings and binders. The dominant historical use of zein was in the textile fibers market where it was produced under the name "Vicara". With the development of synthetic alternatives, the use of zein in this market eventually disappeared. By using electrospinning, zein fibers have again been produced in the lab, where additional research will be performed to re-enter the fiber market. It can be used as a water- and grease-resistant coating for paperboards and allows recyclability. Zein's properties make it valuable in processed foods and pharmaceuticals, in competition with insect shellac. It is now used as a coating for candy, nuts, fruit, pills, and other encapsulated foods and drugs. In the United States, it may be labeled as "confectioner's glaze" (which may also refer to shellac-based glazes) and used as a coating on bakery products or as "vegetable protein." It is classified as Generally Recognized as Safe (GRAS) by the U.S. Food and Drug Administration. For pharmaceutical coating, zein is preferred over food shellac, since it is all natural and requires less testing per the USP monographs. Zein can be further processed into resins and other bioplastic polymers, which can be extruded or rolled into a variety of plastic products. With increasing environmental concerns about synthetic coatings (such as PFOA) and the current higher prices of hydrocarbon-based petrochemicals, there is increased focus on zein as a raw material for a variety of nontoxic and renewable polymer applications, particularly in the paper industry. Other reasons for a renewed interest in zein include concern about the landfill costs of plastics, and consumer interest in natural substances. There are also a number of potential new food industry applications. Researchers at the University of Illinois at Urbana-Champaign and at William Wrigley Jr. Company have recently been studying the possibility of using zein to replace some of the gum base in chewing gum. They are also studying medical applications such as using the zein molecule to "carry biocompounds to targeted sites in the human body". There are a number of potential food safety applications that may be possible for zein-based packaging according to several researchers. A military contractor is researching the use of zein to protect MRE food packages. Other packaging/food safety applications that have been researched include frozen foods, ready-to-eat chicken, and cheese and liquid eggs. Food researchers in Japan have noted the ability of the zein molecule to act as a water barrier. While there are numerous existing and potential uses for zein, the main barrier to greater commercial success has been its historic high cost until recently. Zein pricing is now very competitive with food shellac. Zein may be extracted as a byproduct in the manufacturing process for ethanol or in new off-shore manufacture. Gene family Alpha-prolamins are the major seed storage proteins of species of the grass tribe Andropogoneae. 
They are unusually rich in glutamine, proline, alanine, and leucine residues and their sequences show a series of tandem repeats presumed to be the result of multiple intragenic duplication. In Zea mays (Maize), the 22 kDa and 19 kDa zeins are encoded by a large multigene family and are the major seed storage proteins accounting for 70% of the total zein fraction. Structurally the 22 kDa and 19 kDa zeins are composed of nine adjacent, topologically antiparallel helices clustered within a distorted cylinder. The 22 kDa alpha-zeins are encoded by 23 genes; twenty-two of the members are found in a roughly tandem array forming a dense gene cluster. The expressed genes in the cluster are interspersed with nonexpressed genes. Some of the expressed genes differ in their transcriptional regulation. Gene amplification appears to be in blocks of genes explaining the rapid and compact expansion of the cluster during the evolution of maize. Other biodegradable polymers Cellophane Plastarch material Poly-3-hydroxybutyrate Polycaprolactone Polyglycolide Polylactic acid References External links Seed storage proteins Food additives Biodegradable plastics Protein families
Zein
[ "Biology" ]
1,000
[ "Protein families", "Protein classification" ]
2,523,651
https://en.wikipedia.org/wiki/Gambrel
A gambrel or gambrel roof is a usually symmetrical two-sided roof with two slopes on each side. The upper slope is positioned at a shallow angle, while the lower slope is steep. This design provides the advantages of a sloped roof while maximizing headroom inside the building's upper level and shortening what would otherwise be a tall roof, as well as reducing the span of each set of rafters. The name comes from the Medieval Latin word gamba, meaning horse's hock or leg. The term gambrel is of American origin, the older, European name being a curb (kerb, kirb) roof. Europeans historically did not distinguish between a gambrel roof and a mansard roof but called both types a mansard. In the United States, various shapes of gambrel roofs are sometimes called Dutch gambrel or Dutch Colonial gambrel with bell-cast eaves, Swedish, German, English, French, or New England gambrel. The cross-section of a gambrel roof is similar to that of a mansard roof, but a gambrel has vertical gable ends instead of being hipped at the four corners of the building. A gambrel roof overhangs the façade, whereas a mansard normally does not. Origin and use of the term Gambrel is a Norman English word, sometimes spelled gambol, as in the 1774 Boston carpenters' price book (revised 1800). Other spellings include gamerel, gamrel, gambril, gameral, gambering, cambrel, cambering, chambrel, referring to a wooden bar used by butchers to hang the carcasses of slaughtered animals. Butcher's gambrels, later made of metal, resembled the two-sloped appearance of a gambrel roof when in use. Gambrel is also a term for the joint in the upper part of a horse's hind leg, the hock. In 1858, Oliver Wendell Holmes Sr. wrote of the resemblance between the gambrel roof and a horse's hind leg. An earlier reference from the Dictionary of Americanisms, published in 1848, defines gambrel as "A hipped roof of a house, so called from the resemblance to the hind leg of a horse which by farriers is termed the gambrel." Webster's Dictionary also confusingly used the term hip in the definition of this roof. The term is also used for a single mansard roof in France and Germany. In Dutch the term 'two-sided mansard roof' is used for gambrel roofs. Origins of the gambrel in North America The origin of the gambrel roof form in North America is unknown. The oldest known gambrel roof in America was on the second Harvard Hall at Harvard University built in 1677. Possibly the oldest surviving house in the U.S. with a gambrel roof is the c. 1677–78 Peter Tufts House. The oldest surviving framed house in North America, the Fairbanks House, has an ell with a gambrel roof, but this roof was a later addition. Claims to the origin of the gambrel roof form in North America include: Indigenous tribes of the Pacific Northwest, the Coast Salish, used the gambrel roof form (Suttle & Lane (1990), p. 491). Spanish, Portuguese, Dutch, and English mariners and traders had visited or settled into the area of southeast Asia now called Indonesia prior to permanent European settlement in America. In Indonesia, they saw dwellings with a roof style where the end of a roof started as a hip and finished as a gable end at the ridge. The gable end was an opening, to allow smoke to dissipate from the cooking fires. This roof design was brought back to Europe and the American Colonies, and adapted to local conditions. 
The roof style is still in use around the world today. Other claimed origins are that seamen who traveled to the Netherlands brought the design back to North America, or practical considerations such as allowing wider buildings, the use of shorter rafters, or the avoidance of taxes. Image gallery See also List of roof shapes References Bibliography External links Roofs Structural system
Gambrel
[ "Technology", "Engineering" ]
850
[ "Structural system", "Structural engineering", "Roofs", "Building engineering" ]
2,523,979
https://en.wikipedia.org/wiki/Port%20Reading%20Refinery
Port Reading Refinery, also known as Hess Refinery, was an oil refinery located in Perth Amboy and Port Reading, New Jersey. It was constructed by Hess Oil under Leon Hess in 1958. It was a simple refinery which further processed other refineries' products that began as heavy sour crude. It was owned by the Hess Corporation, refiners of Hess brand gasoline. The refinery itself had outlets that connected with the Arthur Kill, enabling oil barges to make passage into the refinery's commons. The refinery had a neon red "HESS" sign on its cracking unit which was removed in December 2013 after the property was sold. The refinery was closed in February 2013. See also Bayway Refinery Perth Amboy Refinery Chemical Coast Port of Paulsboro References External links Hess: Operations Energy infrastructure completed in 1958 Oil refineries in the United States Energy infrastructure in New Jersey Buildings and structures in Middlesex County, New Jersey Buildings and structures in Woodbridge Township, New Jersey 1958 establishments in New Jersey 2013 disestablishments in New Jersey
Port Reading Refinery
[ "Chemistry" ]
211
[ "Petroleum", "Petroleum stubs" ]
2,524,095
https://en.wikipedia.org/wiki/Domain%20model
In software engineering, a domain model is a conceptual model of the domain that incorporates both behavior and data. In ontology engineering, a domain model is a formal representation of a knowledge domain with concepts, roles, datatypes, individuals, and rules, typically grounded in a description logic. Overview In the field of computer science a conceptual model aims to express the meaning of terms and concepts used by domain experts to discuss the problem, and to find the correct relationships between different concepts. The conceptual model is explicitly chosen to be independent of design or implementation concerns, for example, concurrency or data storage. Conceptual modeling in computer science should not be confused with other modeling disciplines within the broader field of conceptual models such as data modelling, logical modelling and physical modelling. The conceptual model attempts to clarify the meaning of various, usually ambiguous terms, and ensure that confusion caused by different interpretations of the terms and concepts cannot occur. Such differing interpretations could easily cause confusion amongst stakeholders, especially those responsible for designing and implementing a solution, where the conceptual model provides a key artifact of business understanding and clarity. Once the domain concepts have been modeled, the model becomes a stable basis for subsequent development of applications in the domain. The concepts of the conceptual model can be mapped into physical design or implementation constructs using either manual or automated code generation approaches. The realization of conceptual models of many domains can be combined to a coherent platform. A conceptual model can be described using various notations, such as UML, ORM or OMT for object modelling, ITE, or IDEF1X for Entity Relationship Modelling. In UML notation, the conceptual model is often described with a class diagram in which classes represent concepts, associations represent relationships between concepts and role types of an association represent role types taken by instances of the modelled concepts in various situations. In ER notation, the conceptual model is described with an ER Diagram in which entities represent concepts, cardinality and optionality represent relationships between concepts. Regardless of the notation used, it is important not to compromise the richness and clarity of the business meaning depicted in the conceptual model by expressing it directly in a form influenced by design or implementation concerns. This is often used for defining different processes in a particular company or institute. A domain model is a system of abstractions that describes selected aspects of a sphere of knowledge, influence or activity (a domain). The model can then be used to solve problems related to that domain. The domain model is a representation of meaningful real-world concepts pertinent to the domain that need to be modeled in software. The concepts include the data involved in the business and rules the business uses in relation to that data. A domain model leverages natural language of the domain. A domain model generally uses the vocabulary of the domain, thus allowing a representation of the model to be communicated to non-technical stakeholders. It should not refer to any technical implementations such as databases or software components that are being designed. 
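As an illustration of these ideas (not drawn from the cited literature; the lending-library domain and every class name below are invented for the example), a small domain model expressed directly in code might look like the following, using only domain vocabulary and saying nothing about databases or user interfaces:

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Book:
    isbn: str
    title: str

@dataclass
class Loan:
    book: Book
    borrowed_on: date
    returned_on: date | None = None

@dataclass
class Member:
    member_id: str
    name: str
    loans: list[Loan] = field(default_factory=list)

    def borrow(self, book: Book, today: date) -> Loan:
        """Business rule of this (invented) domain: at most three open loans."""
        open_loans = [l for l in self.loans if l.returned_on is None]
        if len(open_loans) >= 3:
            raise ValueError("loan limit reached")
        loan = Loan(book=book, borrowed_on=today)
        self.loans.append(loan)
        return loan

Because the rule about open loans lives on the Member object itself, the model stays independent of persistence and presentation concerns, which is the separation the Usage section below describes.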
Usage A domain model is generally implemented as an object model within a layer that uses a lower-level layer for persistence and "publishes" an API to a higher-level layer to gain access to the data and behavior of the model. In the Unified Modeling Language (UML), a class diagram is used to represent the domain model. See also Domain-driven design (DDD) Domain layer Information model Feature-driven development Logical data model Mental model OntoUML References Further reading Halpin T, Morgan T: Information Modeling and Relational Databases, Morgan Kaufmann, 2008. Fowler, Martin: Analysis Patterns, Reusable object models, Addison-Wesley Longman, 1997. Stewart Robinson, Roger Brooks, Kathy Kotiadis, and Durk-Jouke Van Der Zee (Eds.): Conceptual Modeling for Discrete-Event Simulation, 2010. David W. Embley, Bernhard Thalheim (Eds.): Handbook of Conceptual Modeling, 2011. Software requirements Data modeling
Domain model
[ "Engineering" ]
794
[ "Data modeling", "Data engineering", "Software engineering", "Software requirements" ]
2,524,890
https://en.wikipedia.org/wiki/Zirconium%20nitride
Zirconium nitride (ZrN) is an inorganic compound used in a variety of ways due to its properties. Properties ZrN grown by physical vapor deposition (PVD) is a light gold color similar to elemental gold. ZrN has a room-temperature electrical resistivity of 12.0 μΩ·cm, a temperature coefficient of resistivity of 5.6·10⁻⁸ Ω·cm/K, a superconducting transition temperature of 10.4 K, and a relaxed lattice parameter of 0.4575 nm. The hardness of single-crystal ZrN is 22.7±1.7 GPa and its elastic modulus is 450 GPa. Uses Zirconium nitride is a hard ceramic material similar to titanium nitride and is a cement-like refractory material. Thus it is used in cermets and laboratory crucibles. When applied using the physical vapor deposition coating process it is commonly used for coating medical devices, industrial parts (notably drill bits), automotive and aerospace components and other parts subject to high wear and corrosive environments. Zirconium nitride was suggested as a hydrogen peroxide fuel tank liner for rockets and aircraft. References Nitrides Zirconium(III) compounds Superhard materials Refractory materials Rock salt crystal structure
Zirconium nitride
[ "Physics" ]
271
[ "Refractory materials", "Materials", "Superhard materials", "Matter" ]
2,524,938
https://en.wikipedia.org/wiki/Beryllium%20nitride
Beryllium nitride, Be3N2, is a nitride of beryllium. It can be prepared from the elements at high temperature (1100–1500 °C); unlike beryllium azide or BeN6, it decomposes in vacuum into beryllium and nitrogen. It is readily hydrolysed, forming beryllium hydroxide and ammonia. It has two polymorphic forms: cubic α-Be3N2 with a defect anti-fluorite structure, and hexagonal β-Be3N2. It reacts with silicon nitride, Si3N4, in a stream of ammonia at 1800–1900 °C to form BeSiN2. Preparation Beryllium nitride is prepared by heating beryllium metal powder with dry nitrogen in an oxygen-free atmosphere at temperatures between 700 and 1400 °C. 3Be + N2 → Be3N2 Uses It is used in refractory ceramics as well as in nuclear reactors. It is used to produce radioactive carbon-14 for tracer applications by the 14N + n → 14C + p reaction. It is favoured due to its stability, high nitrogen content (50%), and the very low capture cross section of beryllium for neutrons. Reactions Beryllium nitride reacts with mineral acids producing ammonia and the corresponding salts of the acids: Be3N2 + 6 HCl → 3 BeCl2 + 2 NH3 In strong alkali solutions, a beryllate forms, with evolution of ammonia: Be3N2 + 6 NaOH → 3 Na2BeO2 + 2 NH3 Both the acid and alkali reactions are brisk and vigorous. Reaction with water, however, is very slow: Be3N2 + 6 H2O → 3 Be(OH)2 + 2 NH3 Reactions with oxidizing agents are likely to be violent. It is oxidized when heated at 600 °C in air. References Nitrides Beryllium compounds Refractory materials
Beryllium nitride
[ "Physics", "Chemistry" ]
406
[ "Inorganic compounds", "Refractory materials", "Inorganic compound stubs", "Materials", "Matter" ]
2,525,165
https://en.wikipedia.org/wiki/Deferoxamine
Deferoxamine (DFOA), also known as desferrioxamine and sold under the brand name Desferal, is a medication that binds iron and aluminium. It is specifically used in iron overdose, hemochromatosis either due to multiple blood transfusions or an underlying genetic condition, and aluminium toxicity in people on dialysis. It is used by injection into a muscle, vein, or under the skin. Common side effects include pain at the site of injection, diarrhea, vomiting, fever, hearing loss, and eye problems. Severe allergic reactions including anaphylaxis and low blood pressure may occur. It is unclear if use during pregnancy or breastfeeding is safe for the baby. Deferoxamine is a siderophore from the bacterium Streptomyces pilosus. Deferoxamine was approved for medical use in the United States in 1968. It is on the World Health Organization's List of Essential Medicines. Medical uses Deferoxamine is used to treat acute iron poisoning, especially in small children. This agent is also frequently used to treat hemochromatosis, a disease of iron accumulation that can be either genetic or acquired. 
Acquired hemochromatosis is common in patients with certain types of chronic anemia (e.g. thalassemia and myelodysplastic syndrome) who require many blood transfusions, which can greatly increase the amount of iron in the body. Treatment with iron-chelating drugs such as deferoxamine reduces mortality in persons with sickle cell disease or β‐thalassemia who are transfusion dependent. Administration for chronic conditions is generally accomplished by subcutaneous injection over a period of 8–12 hours each day. Administration of deferoxamine after acute intoxication may color the urine a pinkish red, a phenomenon termed "vin rosé urine". Apart from iron toxicity, deferoxamine can be used to treat aluminium toxicity (an excess of aluminium in the body) in selected patients. In the US, the drug is not FDA-approved for this use. Deferoxamine is also used to minimize doxorubicin's cardiotoxic side effects and in the treatment of patients with aceruloplasminemia. Deferoxamine may be effective for improving neurologic outcomes in persons with intracranial hemorrhage, although the evidence supporting the efficacy and safety for this indication was weak. Some published manuscripts have suggested the use of deferoxamine for patients diagnosed with COVID-19 because of the high ferritin levels among them. Adverse effects It is unclear if use during pregnancy is safe for the baby. Chronic use of deferoxamine may increase the risk of hearing loss in patients with thalassemia major. Chronic use of deferoxamine may cause ocular symptoms, growth retardation, local reactions and allergy. Mechanism Deferoxamine is produced by removal of the trivalent iron moiety from ferrioxamine B, an iron-bearing sideramine produced by the actinomycete Streptomyces pilosus. Its discovery was a serendipitous result of research conducted by scientists at Ciba in collaboration with scientists at the Swiss Federal Institute of Technology in Zurich and the University Hospital in Freiburg, Germany. Deferoxamine acts by binding free iron in the bloodstream and enhancing its elimination in the urine. By removing excess iron from persons with hemochromatosis, the agent reduces the damage done to various organs and tissues, such as the liver. Also, it speeds healing of nerve damage (and minimizes the extent of recent nerve trauma). Deferoxamine may modulate expression and release of inflammatory mediators by specific cell types. Research Deferoxamine is being studied as a treatment for spinal cord injury and intracerebral hemorrhage. It is also used to induce a hypoxia-like environment in mesenchymal stem cells. Since the terminal amine group of deferoxamine does not participate in metal chelation, it has been used to immobilize deferoxamine on surfaces and substrates for various industrial and biomedical applications. See also Chelation therapy References Siderophores Antidotes Hydroxamic acids World Health Organization essential medicines Amines Carboxamides Chelating agents used as drugs
Deferoxamine
[ "Chemistry" ]
1,906
[ "Functional groups", "Organic compounds", "Amines", "Bases (chemistry)", "Hydroxamic acids" ]
2,526,456
https://en.wikipedia.org/wiki/Biomedical%20technology
Biomedical technology is the application of engineering and technology principles to the domain of living or biological systems, with an emphasis on human health and diseases. Biomedical engineering and biotechnology alike are often loosely called biomedical technology or bioengineering. The biomedical technology field is currently growing at a rapid pace. Biomedical news has often been reported on various platforms, including the MediUnite Journal, and employment in the industry is expected to grow 23% by 2024, with pay averaging over $86,000. Biomedical technology involves: Biomedical science Biomedical informatics Biomedical research Biomedical engineering Bioengineering Biotechnology Biomedical technologies: Cloning Therapeutic cloning References Biological engineering
Biomedical technology
[ "Engineering", "Biology" ]
132
[ "Biological engineering", "Biotechnology stubs", "Bioengineering stubs" ]
2,526,554
https://en.wikipedia.org/wiki/Foundation%20Fieldbus
Foundation Fieldbus (styled Fieldbus) is an all-digital, serial, two-way communications system that serves as the base-level network in a plant or factory automation environment. It is an open architecture, developed and administered by FieldComm Group. It is targeted for applications using basic and advanced regulatory control, and for much of the discrete control associated with those functions. Foundation Fieldbus technology is mostly used in process industries, but has recently been implemented in power plants. Two related implementations of Foundation Fieldbus have been introduced to meet different needs within the process automation environment. These two implementations use different physical media and communication speeds. Foundation Fieldbus H1 - Operates at 31.25 kbit/s and is generally used to connect to field devices and host systems. It provides communication and power over standard stranded twisted-pair wiring in both conventional and intrinsic safety applications. H1 is currently the most common implementation. HSE (High-speed Ethernet) - Operates at 100/1000 Mbit/s and generally connects input/output subsystems, host systems, linking devices and gateways. It does not currently provide power over the cable, although work is under way to address this using the IEEE 802.3af Power over Ethernet (PoE) standard. Foundation Fieldbus was originally intended as a replacement for the 4-20 mA standard, and today it coexists alongside other technologies such as Modbus, Profibus, and Industrial Ethernet. Foundation Fieldbus today enjoys a growing installed base in many heavy process applications such as refining, petrochemicals, power generation, and even food and beverage, pharmaceuticals, and nuclear applications. Foundation Fieldbus was developed over a period of many years by the International Society of Automation, or ISA, as SP50. In 1996 the first H1 (31.25 kbit/s) specifications were released. In 1999 the first HSE (High Speed Ethernet) specifications were released. The International Electrotechnical Commission (IEC) standard on field bus, including Foundation Fieldbus, is IEC 61158. Type 1 is Foundation Fieldbus H1, while Type 5 is Foundation Fieldbus HSE. A typical fieldbus segment consists of the following components. H1 card - fieldbus interface card (It is common practice to have redundant H1 cards, but ultimately this is application specific) PS - Bulk power (Vdc) to Fieldbus Power Supply FPS - Fieldbus Power Supply and Signal Conditioner (Integrated power supplies and conditioners have become the standard nowadays) T - Terminators (Exactly 2 terminators are used per fieldbus segment. One at the FPS and one at the furthest point of a segment at the device coupler) LD - Linking Device, alternatively used with HSE networks to terminate 4-8 H1 segments acting as a gateway to an HSE backbone network. And fieldbus devices (e.g. transmitters, transducers, etc.); a short illustrative sketch of these segment rules appears after the external links below. A segment diagram is available on Flickr. An explanation of how Foundation Fieldbus works and how it is used in continuous process control is in the Foundation Fieldbus Primer, which may be found at the Fieldbus Inc. website. See also Computer networking Computer science References External links FieldComm Group Fieldbus Wiring Guide and other technical papers Manufacturers of Power Conditioners and wiring components Official Site (previously www.fieldbus.org. Has since gone through a merger.)
IEC 61804 Official Preview Foundation Fieldbus Primer Foundation Fieldbus Parameter Search Foundation Fieldbus End User Councils Middle East: Foundation Fieldbus End User Council - Middle East Australia: Foundation Fieldbus End User Council Australia Inc Industrial computing Serial buses
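The H1 segment composition listed above can be restated in a few lines of code. The following Python sketch is purely illustrative and is not part of any Foundation Fieldbus library or API; the class name, fields and checks are assumptions made only to express the rules quoted above (exactly two terminators per segment, one or two redundant H1 cards, at least one field device).

from dataclasses import dataclass, field
from typing import List

@dataclass
class H1Segment:
    # Illustrative model of a Foundation Fieldbus H1 segment; not a real FF API.
    h1_cards: int = 1                 # interface cards; often 2 where redundancy is required
    terminators: int = 2              # exactly 2: one at the FPS, one at the far device coupler
    devices: List[str] = field(default_factory=list)  # transmitters, transducers, ...

    def validate(self) -> None:
        # Restates the segment rules from the component list in the article text.
        if self.terminators != 2:
            raise ValueError("an H1 segment uses exactly 2 terminators")
        if self.h1_cards not in (1, 2):
            raise ValueError("expected 1 H1 card, or 2 when redundant")
        if not self.devices:
            raise ValueError("a segment needs at least one field device")

seg = H1Segment(h1_cards=2, devices=["PT-101", "TT-102", "FT-103"])
seg.validate()
print(f"H1 segment OK: {len(seg.devices)} devices sharing the 31.25 kbit/s bus")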
Foundation Fieldbus
[ "Technology", "Engineering" ]
739
[ "Industrial computing", "Industrial engineering", "Automation" ]
2,526,950
https://en.wikipedia.org/wiki/Isotopes%20of%20barium
Naturally occurring barium (56Ba) is a mix of six stable isotopes and one very long-lived radioactive primordial isotope, barium-130, identified as being unstable by geochemical means (from analysis of the presence of its daughter xenon-130 in rocks) in 2001. This nuclide decays by double electron capture (absorbing two electrons and emitting two neutrinos), with a half-life of (0.5–2.7)×1021 years (about 1011 times the age of the universe). There are a total of thirty-three known radioisotopes in addition to 130Ba. The longest-lived of these is 133Ba, which has a half-life of 10.51 years. All other radioisotopes have half-lives shorter than two weeks. The longest-lived isomer is 133mBa, which has a half-life of 38.9 hours. The shorter-lived 137mBa (half-life 2.55 minutes) arises as the decay product of the common fission product caesium-137. Barium-114 is predicted to undergo cluster decay, emitting a nucleus of stable 12C to produce 102Sn. However this decay is not yet observed; the upper limit on the branching ratio of such decay is 0.0034%. List of isotopes |-id=Barium-114 | rowspan=4|114Ba | rowspan=4 style="text-align:right" | 56 | rowspan=4 style="text-align:right" | 58 | rowspan=4|113.95072(11) | rowspan=4|460(125) ms | β+ (79%) | 114Cs | rowspan=4|0+ | rowspan=4| | rowspan=4| |- | α (0.9%) | 110Xe |- | β+, p (20%) | 113Xe |- | CD (<.0034%) | 102Sn, 12C |-id=Barium-115 | rowspan=2|115Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 59 | rowspan=2|114.94748(22)# | rowspan=2|0.45(5) s | β+ | 115Cs | rowspan=2|5/2+# | rowspan=2| | rowspan=2| |- | β+, p (>15%) | 114Xe |-id=Barium-116 | rowspan=2|116Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 60 | rowspan=2|115.94162(22)# | rowspan=2|1.3(2) s | β+ (97%) | 116Cs | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p (3%) | 115Xe |-id=Barium-117 | rowspan=3|117Ba | rowspan=3 style="text-align:right" | 56 | rowspan=3 style="text-align:right" | 61 | rowspan=3|116.93832(27) | rowspan=3|1.75(7) s | β+ (87%) | 117Cs | rowspan=3|(3/2+) | rowspan=3| | rowspan=3| |- | β+, p (13%) | 116Xe |- | β+, α (0.024%) | 113I |-id=Barium-118 | 118Ba | style="text-align:right" | 56 | style="text-align:right" | 62 | 117.93323(22)# | 5.2(2) s | β+ | 118Cs | 0+ | | |-id=Barium-119 | rowspan=2|119Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 63 | rowspan=2|118.93066(21) | rowspan=2|5.4(3) s | β+ (75%) | 119Cs | rowspan=2|(3/2+) | rowspan=2| | rowspan=2| |- | β+, p (25%) | 118Xe |-id=Barium-119m | style="text-indent:1em" | 119mBa | colspan="3" style="text-indent:2em" | 66.0 keV | 360(20) ns | IT | 119Ba | (5/2−) | | |-id=Barium-120 | 120Ba | style="text-align:right" | 56 | style="text-align:right" | 64 | 119.92604(32) | 24(2) s | β+ | 120Cs | 0+ | | |-id=Barium-121 | rowspan=2|121Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 65 | rowspan=2|120.92405(15) | rowspan=2|29.7(15) s | β+ (99.98%) | 121Cs | rowspan=2|5/2+ | rowspan=2| | rowspan=2| |- | β+, p (0.02%) | 120Xe |-id=Barium-122 | 122Ba | style="text-align:right" | 56 | style="text-align:right" | 66 | 121.91990(3) | 1.95(15) min | β+ | 122Cs | 0+ | | |-id=Barium-123 | 123Ba | style="text-align:right" | 56 | style="text-align:right" | 67 | 122.918781(13) | 2.7(4) min | β+ | 123Cs | 5/2+ | | |-id=Barium-123m | style="text-indent:1em" | 123mBa | colspan="3" style="text-indent:2em" | 120.95(8) keV | 830(60) ns | IT | 123Ba | 1/2+# | | |-id=Barium-124 | 124Ba | 
style="text-align:right" | 56 | style="text-align:right" | 68 | 123.915094(13) | 11.0(5) min | β+ | 124Cs | 0+ | | |-id=Barium-125 | 125Ba | style="text-align:right" | 56 | style="text-align:right" | 69 | 124.914472(12) | 3.3(3) min | β+ | 125Cs | 1/2+ | | |-id=Barium-125m | style="text-indent:1em" | 125mBa | colspan="3" style="text-indent:2em" | 120(20)# keV | 2.76(14) μs | IT | 125Ba | (7/2−) | | |-id=Barium-126 | 126Ba | style="text-align:right" | 56 | style="text-align:right" | 70 | 125.911250(13) | 100(2) min | β+ | 126Cs | 0+ | | |-id=Barium-127 | 127Ba | style="text-align:right" | 56 | style="text-align:right" | 71 | 126.911091(12) | 12.7(4) min | β+ | 127Cs | 1/2+ | | |-id=Barium-127m | style="text-indent:1em" | 127mBa | colspan="3" style="text-indent:2em" | 80.32(11) keV | 1.93(7) s | IT | 127Ba | 7/2− | | |-id=Barium-128 | 128Ba | style="text-align:right" | 56 | style="text-align:right" | 72 | 127.9083524(17) | 2.43(5) d | EC | 128Cs | 0+ | | |-id=Barium-129 | 129Ba | style="text-align:right" | 56 | style="text-align:right" | 73 | 128.908683(11) | 2.23(11) h | β+ | 129Cs | 1/2+ | | |-id=Barium-129m | rowspan=2 style="text-indent:1em" | 129mBa | rowspan=2 colspan="3" style="text-indent:2em" | 8.42(6) keV | rowspan=2|2.135(10) h | β+ | 129Cs | rowspan=2|7/2+ | rowspan=2| | rowspan=2| |- | IT | 129Ba |-id=Barium-130 | 130Ba | style="text-align:right" | 56 | style="text-align:right" | 74 | 129.9063260(3) | ≈ 1×1021 y | 2EC? | 130Xe | 0+ | 0.0011(1) | |-id=Barium-130m | style="text-indent:1em" | 130mBa | colspan="3" style="text-indent:2em" | 2475.12(18) keV | 9.54(14) ms | IT | 130Ba | 8− | | |-id=Barium-131 | 131Ba | style="text-align:right" | 56 | style="text-align:right" | 75 | 130.9069463(4) | 11.52(1) d | β+ | 131Cs | 1/2+ | | |-id=Barium-131m | style="text-indent:1em" | 131mBa | colspan="3" style="text-indent:2em" | 187.995(9) keV | 14.26(9) min | IT | 131Ba | 9/2− | | |-id=Barium-132 | 132Ba | style="text-align:right" | 56 | style="text-align:right" | 76 | 131.9050612(11) | colspan=3 align=center|Observationally Stable | 0+ | 0.0010(1) | |-id=Barium-133 | 133Ba | style="text-align:right" | 56 | style="text-align:right" | 77 |132.9060074(11) | 10.5379(16) y | EC | 133Cs | 1/2+ | | |-id=Barium-133m | rowspan=2 style="text-indent:1em" | 133mBa | rowspan=2 colspan="3" style="text-indent:2em" | 288.252(9) keV | rowspan=2|38.90(6) h | IT (99.99%) | 133Ba | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | EC (0.0104%) | 133Cs |-id=Barium-134 | 134Ba | style="text-align:right" | 56 | style="text-align:right" | 78 | 133.90450825(27) | colspan=3 align=center|Stable | 0+ | 0.0242(15) | |-id=Barium-134m | style="text-indent:1em" | 134mBa | colspan="3" style="text-indent:2em" | 2957.2(5) keV | 2.61(13) μs | IT | 134Ba | 10+ | | |-id=Barium-135 | 135Ba | style="text-align:right" | 56 | style="text-align:right" | 79 | 134.90568845(26) | colspan=3 align=center|Stable | 3/2+ | 0.0659(10) | |-id=Barium-135m1 | style="text-indent:1em" | 135m1Ba | colspan="3" style="text-indent:2em" | 268.218(20) keV | 28.11(2) h | IT | 135Ba | 11/2− | | |-id=Barium-135m2 | style="text-indent:1em" | 135m2Ba | colspan="3" style="text-indent:2em" | 2388.0(5) keV | 1.06(4) ms | IT | 135Ba | (23/2+) | | |-id=Barium-136 | 136Ba | style="text-align:right" | 56 | style="text-align:right" | 80 |135.90457580(26) | colspan=3 align=center|Stable | 0+ | 0.0785(24) | |-id=Barium-136m1 | style="text-indent:1em" | 136m1Ba | colspan="3" style="text-indent:2em" | 2030.535(18) keV | 308.4(19) ms | IT | 136Ba | 7− | | 
|-id=Barium-136m2 | style="text-indent:1em" | 136m2Ba | colspan="3" style="text-indent:2em" | 3357.19(25) keV | 91(2) ns | IT | 136Ba | 10+ | | |-id=Barium-137 | 137Ba | style="text-align:right" | 56 | style="text-align:right" | 81 |136.90582721(27) | colspan=3 align=center|Stable | 3/2+ | 0.1123(23) | |-id=Barium-137m1 | style="text-indent:1em" | 137m1Ba | colspan="3" style="text-indent:2em" | 661.659(3) keV | 2.552(1) min | IT | 137Ba | 11/2− | | |-id=Barium-137m2 | style="text-indent:1em" | 137m2Ba | colspan="3" style="text-indent:2em" | 2349.1(5) keV | 589(20) ns | IT | 137Ba | (19/2−) | | |-id=Barium-138 | 138Ba | style="text-align:right" | 56 | style="text-align:right" | 82 | 137.90524706(27) | colspan=3 align=center|Stable | 0+ | 0.7170(29) | |-id=Barium-138m | style="text-indent:1em" | 138mBa | colspan="3" style="text-indent:2em" | 2090.536(21) keV | 850(100) ns | IT | 138Ba | 6+ | | |-id=Barium-139 | 139Ba | style="text-align:right" | 56 | style="text-align:right" | 83 | 138.90884116(27) | 82.93(9) min | β− | 139La | 7/2− | | |-id=Barium-140 | 140Ba | style="text-align:right" | 56 | style="text-align:right" | 84 | 139.910608(8) | 12.7534(21) d | β− | 140La | 0+ | | |-id=Barium-141 | 141Ba | style="text-align:right" | 56 | style="text-align:right" | 85 | 140.914404(6) | 18.27(7) min | β− | 141La | 3/2− | | |-id=Barium-142 | 142Ba | style="text-align:right" | 56 | style="text-align:right" | 86 | 141.916433(6) | 10.6(2) min | β− | 142La | 0+ | | |-id=Barium-143 | 143Ba | style="text-align:right" | 56 | style="text-align:right" | 87 | 142.920625(7) | 14.5(3) s | β− | 143La | 5/2− | | |-id=Barium-144 | 144Ba | style="text-align:right" | 56 | style="text-align:right" | 88 |143.922955(8) | 11.73(8) s | β− | 144La | 0+ | | |-id=Barium-145 | 145Ba | style="text-align:right" | 56 | style="text-align:right" | 89 | 144.927518(9) | 4.31(16) s | β− | 145La | 5/2− | | |-id=Barium-146 | 146Ba | style="text-align:right" | 56 | style="text-align:right" | 90 | 145.9303632(19) | 2.15(4) s | β− | 146La | 0+ | | |-id=Barium-147 | rowspan=2|147Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 91 | rowspan=2|146.935304(21) | rowspan=2|893(1) ms | β− (99.93%) | 147La | rowspan=2|5/2− | rowspan=2| | rowspan=2| |- | β−, n (0.07%) | 146La |-id=Barium-148 | rowspan=2|148Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 92 | rowspan=2| 147.9382230(16) | rowspan=2|620(5) ms | β− (99.6%) | 148La | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (0.4%) | 147La |-id=Barium-149 | rowspan=2|149Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 93 | rowspan=2|148.9432840(27) | rowspan=2|349(4) ms | β− (96.1%) | 149La | rowspan=2|3/2−# | rowspan=2| | rowspan=2| |- | β−, n (3.9%) | 148La |-id=Barium-150 | rowspan=2|150Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 94 | rowspan=2| 149.946441(6) | rowspan=2|258(5) ms | β− (99.0%) | 150La | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (1.0%) | 149La |-id=Barium-151 | rowspan=2|151Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 95 | rowspan=2|150.95176(43)# | rowspan=2|167(5) ms | β− | 151La | rowspan=2|3/2−# | rowspan=2| | rowspan=2| |- | β−, n? | 150La |-id=Barium-152 | rowspan=2|152Ba | rowspan=2 style="text-align:right" | 56 | rowspan=2 style="text-align:right" | 96 | rowspan=2|151.95533(43)# | rowspan=2|139(8) ms | β− | 152La | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? 
| 151La |-id=Barium-153 | rowspan=3|153Ba | rowspan=3 style="text-align:right" | 56 | rowspan=3 style="text-align:right" | 97 | rowspan=3|152.96085(43)# | rowspan=3|113(39) ms | β− | 153La | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β−, n? | 152La |- | β−, 2n? | 151La |-id=Barium-154 | 154Ba | style="text-align:right" | 56 | style="text-align:right" | 98 |153.96466(54)# | 53(48) ms | β− | 154La | 0+ | | References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Half-life of 130Ba from: Barium Barium
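The half-lives quoted above translate into remaining amounts through the usual exponential decay law, N(t) = N0 · (1/2)^(t / T_half). The short Python sketch below is a generic illustration of that formula; the only barium-specific number is the 10.51-year half-life of 133Ba quoted in the text.

def remaining_fraction(t: float, half_life: float) -> float:
    # Single-nuclide decay law: N(t)/N0 = (1/2)**(t / T_half)
    return 0.5 ** (t / half_life)

HALF_LIFE_BA133 = 10.51  # years, as quoted above
for years in (1.0, 10.51, 50.0):
    frac = remaining_fraction(years, HALF_LIFE_BA133)
    print(f"133Ba after {years:5.2f} y: {frac:.3%} remaining")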
Isotopes of barium
[ "Chemistry" ]
5,051
[ "Lists of isotopes by element", "Isotopes", "Isotopes of barium" ]
2,526,955
https://en.wikipedia.org/wiki/Isotopes%20of%20tellurium
There are 39 known isotopes and 17 nuclear isomers of tellurium (52Te), with atomic masses that range from 104 to 142. These are listed in the table below. Naturally occurring tellurium on Earth consists of eight isotopes. Two of these have been found to be radioactive: 128Te and 130Te undergo double beta decay with half-lives of, respectively, 2.2×10^24 (2.2 septillion) years (the longest half-life of all nuclides proven to be radioactive) and 8.2×10^20 (820 quintillion) years. The longest-lived artificial radioisotope of tellurium is 121Te with a half-life of about 19 days. Several nuclear isomers have longer half-lives, the longest being 121mTe with a half-life of 154 days. The very-long-lived radioisotopes 128Te and 130Te are the two most common isotopes of tellurium. Of elements with at least one stable isotope, only indium and rhenium likewise have a radioisotope in greater abundance than a stable one. It has been claimed that electron capture of 123Te was observed, but more recent measurements by the same team have disproved this. The half-life of 123Te is longer than 9.2 × 10^16 years, and probably much longer. 124Te can be used as a starting material in the production of radionuclides by a cyclotron or other particle accelerators. Some common radionuclides that can be produced from tellurium-124 are iodine-123 and iodine-124. The short-lived isotope 135Te (half-life 19 seconds) is produced as a fission product in nuclear reactors. It decays, via two beta decays, to 135Xe, the most powerful known neutron absorber, and the cause of the iodine pit phenomenon. With the exception of beryllium, tellurium is the second lightest element observed to have isotopes capable of undergoing alpha decay, with isotopes 104Te to 109Te being seen to undergo this mode of decay. Some lighter elements, namely those in the vicinity of 8Be, have isotopes with delayed alpha emission (following proton or beta emission) as a rare branch. 
List of isotopes |-id=Tellurium-104 | 104Te | style="text-align:right" | 52 | style="text-align:right" | 52 | 103.94672(34) | <4 ns | α | 100Sn | 0+ | | |-id=Tellurium-105 | 105Te | style="text-align:right" | 52 | style="text-align:right" | 53 | 104.94330(32) | 633(66) ns | α | 101Sn | (7/2+) | | |-id=Tellurium-106 | 106Te | style="text-align:right" | 52 | style="text-align:right" | 54 | 105.93750(11) | 78(11) μs | α | 102Sn | 0+ | | |-id=Tellurium-107 | rowspan=2|107Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 55 | rowspan=2|106.93488(11)# | rowspan=2|3.22(9) ms | α (70%) | 103Sn | rowspan=2|5/2+# | rowspan=2| | rowspan=2| |- | β+ (30%) | 107Sb |-id=Tellurium-108 | rowspan=4|108Te | rowspan=4 style="text-align:right" | 52 | rowspan=4 style="text-align:right" | 56 | rowspan=4|107.9293805(58) | rowspan=4|2.1(1) s | α (49%) | 104Sn | rowspan=4|0+ | rowspan=4| | rowspan=4| |- | β+ (48.6%) | 108Sb |- | β+, p (2.4%) | 107Sn |- | β+, α (<0.065%) | 104In |-id=Tellurium-109 | rowspan=4|109Te | rowspan=4 style="text-align:right" | 52 | rowspan=4 style="text-align:right" | 57 | rowspan=4|108.9273045(47) | rowspan=4|4.4(2) s | β+ (86.7%) | 109Sb | rowspan=4|(5/2+) | rowspan=4| | rowspan=4| |- | β+, p (9.4%) | 108Sn |- | α (3.9%) | 105Sn |- | β+, α (<0.0049%) | 105In |-id=Tellurium-110 | 110Te | style="text-align:right" | 52 | style="text-align:right" | 58 | 109.9224581(71) | 18.6(8) s | β+ | 110Sb | 0+ | | |-id=Tellurium-111 | rowspan=2|111Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 59 | rowspan=2|110.9210006(69) | rowspan=2|26.2(6) s | β+ | 111Sb | rowspan=2|(5/2)+ | rowspan=2| | rowspan=2| |- | β+, p (?%) | 110Sn |-id=Tellurium-112 | 112Te | style="text-align:right" | 52 | style="text-align:right" | 60 | 111.9167278(90) | 2.0(2) min | β+ | 112Sb | 0+ | | |-id=Tellurium-113 | 113Te | style="text-align:right" | 52 | style="text-align:right" | 61 | 112.915891(30) | 1.7(2) min | β+ | 113Sb | (7/2+) | | |-id=Tellurium-114 | 114Te | style="text-align:right" | 52 | style="text-align:right" | 62 | 113.912088(26) | 15.2(7) min | β+ | 114Sb | 0+ | | |-id=Tellurium-115 | 115Te | style="text-align:right" | 52 | style="text-align:right" | 63 | 114.911902(30) | 5.8(2) min | β+ | 115Sb | 7/2+ | | |-id=Tellurium-115m1 | style="text-indent:1em" | 115m1Te | colspan="3" style="text-indent:2em" | 10(6) keV | 6.7(4) min | β+ | 115Sb | (1/2+) | | |-id=Tellurium-115m2 | style="text-indent:1em" | 115m2Te | colspan="3" style="text-indent:2em" | 280.05(20) keV | 7.5(2) μs | IT | 115Te | 11/2− | | |-id=Tellurium-116 | 116Te | style="text-align:right" | 52 | style="text-align:right" | 64 | 115.908466(26) | 2.49(4) h | β+ | 116Sb | 0+ | | |-id=Tellurium-117 | rowspan=2|117Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 65 | rowspan=2|116.908646(14) | rowspan=2|62(2) min | EC (75%) | 117Sb | rowspan=2|1/2+ | rowspan=2| | rowspan=2| |- | β+ | 117Sb |-id=Tellurium-117m | style="text-indent:1em" | 117mTe | colspan="3" style="text-indent:2em" | 296.1(5) keV | 103(3) ms | IT | 117Te | (11/2−) | |-id=Tellurium-118 | 118Te | style="text-align:right" | 52 | style="text-align:right" | 66 | 117.905860(20) | 6.00(2) d | EC | 118Sb | 0+ | | |-id=Tellurium-119 | rowspan=2|119Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 67 | rowspan=2|118.9064057(78) | rowspan=2|16.05(5) h | EC (97.94%) | 119Sb | rowspan=2|1/2+ | rowspan=2| | rowspan=2| |- | β+ (2.06%) | 119Sb |-id=Tellurium-119m | 
rowspan=2 style="text-indent:1em" | 119mTe | rowspan=2 colspan="3" style="text-indent:2em" | 260.96(5) keV | rowspan=2|4.70(4) d | EC (99.59%) | 119Sb | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | β+ (0.41%) | 119Sb |-id=Tellurium-120 | 120Te | style="text-align:right" | 52 | style="text-align:right" | 68 | 119.9040658(19) | colspan=3 align=center|Observationally Stable | 0+ | 9(1)×10−4 | |-id=Tellurium-121 | 121Te | style="text-align:right" | 52 | style="text-align:right" | 69 | 120.904945(28) | 19.31(7) d | β+ | 121Sb | 1/2+ | | |-id=Tellurium-121m | rowspan=2 style="text-indent:1em" | 121mTe | rowspan=2 colspan="3" style="text-indent:2em" | 293.974(22) keV | rowspan=2|164.7(5) d | IT (88.6%) | 121Te | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | β+ (11.4%) | 121Sb |-id=Tellurium-122 | 122Te | style="text-align:right" | 52 | style="text-align:right" | 70 | 121.9030447(15) | colspan=3 align=center|Stable | 0+ | 0.0255(12) | |-id=Tellurium-123 | 123Te | style="text-align:right" | 52 | style="text-align:right" | 71 | 122,9042710(15) | colspan=3 align=center|Observationally Stable | 1/2+ | 0.0089(3) | |-id=Tellurium-123m | style="text-indent:1em" | 123mTe | colspan="3" style="text-indent:2em" | 247.47(4) keV | 119.2(1) d | IT | 123Te | 11/2− | | |-id=Tellurium-124 | 124Te | style="text-align:right" | 52 | style="text-align:right" | 72 | 123.9028183(15) | colspan=3 align=center|Stable | 0+ | 0.0474(14) | |-id=Tellurium-125 | 125Te | style="text-align:right" | 52 | style="text-align:right" | 73 | 124.9044312(15) | colspan=3 align=center|Stable | 1/2+ | 0.0707(15) | |-id=Tellurium-125m | style="text-indent:1em" | 125mTe | colspan="3" style="text-indent:2em" | 144.775(8) keV | 57.40(15) d | IT | 125Te | 11/2− | | |-id=Tellurium-126 | 126Te | style="text-align:right" | 52 | style="text-align:right" | 74 | 125.9033121(15) | colspan=3 align=center|Stable | 0+ | 0.1884(25) | |-id=Tellurium-127 | 127Te | style="text-align:right" | 52 | style="text-align:right" | 75 | 126.9052270(15) | 9.35(7) h | β− | 127I | 3/2+ | | |-id=Tellurium-127m | rowspan=2 style="text-indent:1em" | 127mTe | rowspan=2 colspan="3" style="text-indent:2em" | 88.23(7) keV | rowspan=2|106.1(7) d | IT (97.86%) | 127Te | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | β− (2.14%) | 127I |-id=Tellurium-128 | 128Te | style="text-align:right" | 52 | style="text-align:right" | 76 | 127.90446124(76) | 2.25(9)×1024 y | β−β− | 128Xe | 0+ | 0.3174(8) | |-id=Tellurium-128m | style="text-indent:1em" | 128mTe | colspan="3" style="text-indent:2em" | 2790.8(3) keV | 363(27) ns | IT | 128Te | (10+) | | |-id=Tellurium-129 | 129Te | style="text-align:right" | 52 | style="text-align:right" | 77 | 128.90659642(76) | 69.6(3) min | β− | 129I | 3/2+ | | |-id=Tellurium-129m | rowspan=2 style="text-indent:1em" | 129mTe | rowspan=2 colspan="3" style="text-indent:2em" | 105.51(3) keV | rowspan=2 |33.6(1) d | IT (64%) | 129Te | rowspan=2 |11/2− | rowspan=2 | | rowspan=2 | |- | β− (36%) | 129I |-id=Tellurium-130 | 130Te | style="text-align:right" | 52 | style="text-align:right" | 78 | 129.906222745(11) | 7.91(21)×1020 y | β−β− | 130Xe | 0+ | 0.3408(62) | |-id=Tellurium-130m1 | style="text-indent:1em" | 130m1Te | colspan="3" style="text-indent:2em" | 2146.41(4) keV | 186(11) ns | IT | 130Te | 7− | | |-id=Tellurium-130m2 | style="text-indent:1em" | 130m2Te | colspan="3" style="text-indent:2em" | 2667.2(8) keV | 1.90(8) μs | IT | 130Te | (10+) | | |-id=Tellurium-130m3 | style="text-indent:1em" | 130m3Te | colspan="3" style="text-indent:2em" | 4373.9(9) keV | 
53(8) ns | IT | 130Te | (15−) | | |-id=Tellurium-131 | 131Te | style="text-align:right" | 52 | style="text-align:right" | 79 | 130.908522210(65) | 25.0(1) min | β− | 131I | 3/2+ | | |-id=Tellurium-131m1 | rowspan=2 style="text-indent:1em" | 131m1Te | rowspan=2 colspan="3" style="text-indent:2em" | 182.258(18) keV | rowspan=2|32.48(11) h | β− (74.1%) | 131I | rowspan=2|11/2− | rowspan=2| | rowspan=2| |- | IT (25.9%) | 131Te |-id=Tellurium-131m2 | style="text-indent:1em" | 131m2Te | colspan="3" style="text-indent:2em" | 1940.0(4) keV | 93(12) ms | IT | 131Te | (23/2+) | | |-id=Tellurium-132 | 132Te | style="text-align:right" | 52 | style="text-align:right" | 80 | 131.9085467(37) | 3.204(13) d | β− | 132I | 0+ | | |-id=Tellurium-132m1 | style="text-indent:1em" | 132m1Te | colspan="3" style="text-indent:2em" | 1774.80(9) keV | 145(8) ns | IT | 132Te | 6+ | | |-id=Tellurium-132m2 | style="text-indent:1em" | 132m2Te | colspan="3" style="text-indent:2em" | 1925.47(9) keV | 28.5(9) μs | IT | 132Te | 7− | | |-id=Tellurium-132m3 | style="text-indent:1em" | 132m3Te | colspan="3" style="text-indent:2em" | 2723.3(8) keV | 3.62(6) μs | IT | 132Te | (10+) | | |-id=Tellurium-133 | 133Te | style="text-align:right" | 52 | style="text-align:right" | 81 | 132.9109633(22) | 12.5(3) min | β− | 133I | 3/2+# | | |-id=Tellurium-133m1 | rowspan=2 style="text-indent:1em" | 133m1Te | rowspan=2 colspan="3" style="text-indent:2em" | 334.26(4) keV | rowspan=2|55.4(4) min | β− (83.5%) | 133I | rowspan=2|(11/2−) | rowspan=2| | rowspan=2| |- | IT (16.5%) | 133Te |-id=Tellurium-133m2 | style="text-indent:1em" | 133m2Te | colspan="3" style="text-indent:2em" | 1610.4(5) keV | 100(5) ns | IT | 133Te | (19/2−) | | |-id=Tellurium-134 | 134Te | style="text-align:right" | 52 | style="text-align:right" | 82 | 133.9113964(29) | 41.8(8) min | β− | 134I | 0+ | | |-id=Tellurium-134m | style="text-indent:1em" | 134mTe | colspan="3" style="text-indent:2em" | 1691.34(16) keV | 164.5(7) ns | IT | 134Te | 6+ | | |-id=Tellurium-135 | 135Te | style="text-align:right" | 52 | style="text-align:right" | 83 | 134.9165547(18) | 19.0(2) s | β− | 135I | (7/2−) | | |-id=Tellurium-135m | style="text-indent:1em" | 135mTe | colspan="3" style="text-indent:2em" | 1554.89(16) keV | 511(20) ns | IT | 135Te | (19/2−) | | |-id=Tellurium-136 | rowspan=2|136Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 84 | rowspan=2|135.9201012(24) | rowspan=2|17.63(9) s | β− (98.63%) | 136I | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (1.37%) | 135I |-id=Tellurium-137 | rowspan=2|137Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 85 | rowspan=2|136.9255994(23) | rowspan=2|2.49(5) s | β− (97.06%) | 137I | rowspan=2|3/2−# | rowspan=2| | rowspan=2| |- | β−, n (2.94%) | 136I |-id=Tellurium-138 | rowspan=2|138Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 86 | rowspan=2|137.9294725(41) | rowspan=2|1.46(25) s | β− (95.20%) | 138I | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (4.80%) | 137I |-id=Tellurium-139 | 139Te | style="text-align:right" | 52 | style="text-align:right" | 87 | 138.9353672(38) | 724(81) ms | β− | 139I | 5/2−# | | |-id=Tellurium-140 | rowspan=2|140Te | rowspan=2 style="text-align:right" | 52 | rowspan=2 style="text-align:right" | 88 | rowspan=2|139.939487(15) | rowspan=2|351(5) ms | β− (?%) | 140I | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (?%) | 139I |-id=Tellurium-141 | 141Te | style="text-align:right" | 52 | 
style="text-align:right" | 89 | 140.94560(43)# | 193(16) ms | β− | 141I | 5/2−# | | |-id=Tellurium-142 | 142Te | style="text-align:right" | 52 | style="text-align:right" | 90 | 141.95003(54)# | 147(8) ms | β− | 142I | 0+ | | |-id=Tellurium-143 | 143Te | style="text-align:right" | 52 | style="text-align:right" | 91 | 142.95649(54)# | 120(8) ms | β− | 143I | 7/2+# | | |-id=Tellurium-144 | 144Te | style="text-align:right" | 52 | style="text-align:right" | 92 | 143.96112(32)# | 93(60) ms | β− | 144I | 0+ | | |-id=Tellurium-145 | 145Te | style="text-align:right" | 52 | style="text-align:right" | 93 | 144.96778(32)# | 75# ms[>550 ns] | β− | 145I | | | References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Tellurium Tellurium
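The 135Te → 135I → 135Xe sequence mentioned above is a standard two-member decay chain, so the amount of the intermediate nuclide follows the Bateman equation N2(t) = N1(0) · λ1/(λ2 − λ1) · (e^(−λ1 t) − e^(−λ2 t)). The Python sketch below is only an illustration of that equation; the 19-second half-life of 135Te comes from the text, while the roughly 6.6-hour half-life assumed for 135I is a literature value not quoted in this article.

import math

def bateman_daughter(n_parent_0: float, t: float, t_half_parent: float, t_half_daughter: float) -> float:
    # Two-member chain with no initial daughter:
    # N2(t) = N1(0) * l1 / (l2 - l1) * (exp(-l1*t) - exp(-l2*t))
    l1 = math.log(2) / t_half_parent
    l2 = math.log(2) / t_half_daughter
    return n_parent_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

T_TE135 = 19.0          # s, from the text above
T_I135 = 6.6 * 3600.0   # s, assumed literature value for 135I
for t in (60.0, 600.0, 6 * 3600.0):
    n_i = bateman_daughter(1e6, t, T_TE135, T_I135)
    print(f"t = {t:8.0f} s: about {n_i:,.0f} 135I atoms per 1e6 initial 135Te atoms")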
Isotopes of tellurium
[ "Chemistry" ]
5,912
[ "Lists of isotopes by element", "Isotopes", "Isotopes of tellurium" ]
2,527,017
https://en.wikipedia.org/wiki/Isotopes%20of%20krypton
There are 34 known isotopes of krypton (36Kr) with atomic mass numbers from 67 to 103. Naturally occurring krypton is made of five stable isotopes and one () which is slightly radioactive with an extremely long half-life, plus traces of radioisotopes that are produced by cosmic rays in the atmosphere. List of isotopes |-id=Krypton-67 | rowspan=2|67Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 31 | rowspan=2|66.98331(46)# | rowspan=2|7.4(29) ms | β+? (63%) | 67Br | rowspan=2|3/2-# | rowspan=2| | rowspan=2| |- |2p (37%) |65Se |-id=Krypton-68 | rowspan=3|68Kr | rowspan=3 style="text-align:right" | 36 | rowspan=3 style="text-align:right" | 32 | rowspan=3|67.97249(54)# | rowspan=3|21.6(33) ms | β+, p (>90%) | 67Se | rowspan=3|0+ | rowspan=3| | rowspan=3| |- |β+? (<10%) |68Br |- |p? |67Br |-id=Krypton-69 | rowspan=2|69Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 33 | rowspan=2|68.96550(32)# | rowspan=2|27.9(8) ms | β+, p (94%) | 68Se | rowspan=2|(5/2−) | rowspan=2| | rowspan=2| |- | β+ (6%) | 69Br |-id=Krypton-70 | rowspan=2|70Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 34 | rowspan=2|69.95588(22)# | rowspan=2|45.00(14) ms | β+ (>98.7%) | 70Br | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p (<1.3%) | 69Se |-id=Krypton-71 | rowspan=2|71Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 35 | rowspan=2|70.95027(14) | rowspan=2|98.8(3) ms | β+ (97.9%) | 71Br | rowspan=2|(5/2)− | rowspan=2| | rowspan=2| |- | β+, p (2.1%) | 70Se |-id=Krypton-72 | 72Kr | style="text-align:right" | 36 | style="text-align:right" | 36 | 71.9420924(86) | 17.16(18) s | β+ | 72Br | 0+ | | |-id=Krypton-73 | rowspan=2|73Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 37 | rowspan=2|72.9392892(71) | rowspan=2|27.3(10) s | β+ (99.75%) | 73Br | rowspan=2|(3/2)− | rowspan=2| | rowspan=2| |- | β+, p (0.25%) | 72Se |-id=Krypton-73m | style="text-indent:1em" | 73mKr | colspan="3" style="text-indent:2em" | 433.55(13) keV | 107(10) ns | IT | 73Kr | (9/2+) | | |-id=Krypton-74 | 74Kr | style="text-align:right" | 36 | style="text-align:right" | 38 | 73.9330840(22) | 11.50(11) min | β+ | 74Br | 0+ | | |-id=Krypton-75 | 75Kr | style="text-align:right" | 36 | style="text-align:right" | 39 | 74.9309457(87) | 4.60(7) min | β+ | 75Br | 5/2+ | | |-id=Krypton-76 | 76Kr | style="text-align:right" | 36 | style="text-align:right" | 40 | 75.9259107(43) | 14.8(1) h | β+ | 76Br | 0+ | | |-id=Krypton-77 | 77Kr | style="text-align:right" | 36 | style="text-align:right" | 41 | 76.9246700(21) | 72.6(9) min | β+ | 77Br | 5/2+ | | |-id=Krypton-77m | style="text-indent:1em" | 77mKr | colspan="3" style="text-indent:2em" | 66.50(5) keV | 118(12) ns | IT | 77Kr | 3/2− | | |-id=Krypton-78 | 78Kr | style="text-align:right" | 36 | style="text-align:right" | 42 | 77.92036634(33) | align=center|9.2 y |Double EC |78Se | 0+ | 0.00355(3) | |-id=Krypton-79 | 79Kr | style="text-align:right" | 36 | style="text-align:right" | 43 | 78.9200829(37) | 35.04(10) h | β+ | 79Br | 1/2− | | |-id=Krypton-79m | style="text-indent:1em" | 79mKr | colspan="3" style="text-indent:2em" | 129.77(5) keV | 50(3) s | IT | 79Kr | 7/2+ | | |-id=Krypton-80 | 80Kr | style="text-align:right" | 36 | style="text-align:right" | 44 | 79.91637794(75) | colspan=3 align=center|Stable | 0+ | 0.02286(10) | |- | 81Kr | style="text-align:right" | 36 | style="text-align:right" | 45 | 80.9165897(12) | 2.29(11)×105 y | EC | 
81Br | 7/2+ | | |-id=Krypton-81m | rowspan=2 style="text-indent:1em" | 81mKr | rowspan=2 colspan="3" style="text-indent:2em" | 190.64(4) keV | rowspan=2|13.10(3) s | IT | 81Kr | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | EC (0.0025%) | 81Br |-id=Krypton-82 | 82Kr | style="text-align:right" | 36 | style="text-align:right" | 46 | 81.9134811537(59) | colspan=3 align=center|Stable | 0+ | 0.11593(31) | |-id=Krypton-83 | 83Kr | style="text-align:right" | 36 | style="text-align:right" | 47 | 82.914126516(9) | colspan=3 align=center|Stable | 9/2+ | 0.11500(19) | |-id=Krypton-83m1 | style="text-indent:1em" | 83m1Kr | colspan="3" style="text-indent:2em" | 9.4053(8) keV | 156.8(5) ns | IT | 83Kr | 7/2+ | | |-id=Krypton-83m2 | style="text-indent:1em" | 83m2Kr | colspan="3" style="text-indent:2em" | 41.5575(7) keV | 1.830(13) h | IT | 83Kr | 1/2− | | |-id=Krypton-84 | 84Kr | style="text-align:right" | 36 | style="text-align:right" | 48 | 83.9114977271(41) | colspan=3 align=center|Stable | 0+ | 0.56987(15) | |-id=Krypton-84m | style="text-indent:1em" | 84mKr | colspan="3" style="text-indent:2em" | 3236.07(18) keV | 1.83(4) μs | IT | 84Kr | 8+ | | |- | 85Kr | style="text-align:right" | 36 | style="text-align:right" | 49 | 84.9125273(21) | 10.728(7) y | β− | 85Rb | 9/2+ | | |-id=Krypton-85m1 | rowspan=2 style="text-indent:1em" | 85m1Kr | rowspan=2 colspan="3" style="text-indent:2em" | 304.871(20) keV | rowspan=2|4.480(8) h | β− (78.8%) | 85Rb | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | IT (21.2%) | 85Kr |-id=Krypton-85m2 | style="text-indent:1em" | 85m2Kr | colspan="3" style="text-indent:2em" | 1991.8(2) keV | 1.82(5) μs | IT | 85Kr | (17/2+) | | |- | 86Kr | style="text-align:right" | 36 | style="text-align:right" | 50 | 85.9106106247(40) | colspan=3 align=center|Observationally Stable | 0+ | 0.17279(41) | |-id=Krypton-87 | 87Kr | style="text-align:right" | 36 | style="text-align:right" | 51 | 86.91335476(26) | 76.3(5) min | β− | 87Rb | 5/2+ | | |-id=Krypton-88 | 88Kr | style="text-align:right" | 36 | style="text-align:right" | 52 | 87.9144479(28) | 2.825(19) h | β− | 88Rb | 0+ | | |-id=Krypton-89 | 89Kr | style="text-align:right" | 36 | style="text-align:right" | 53 | 88.9178354(23) | 3.15(4) min | β− | 89Rb | 3/2+ | | |-id=Krypton-90 | 90Kr | style="text-align:right" | 36 | style="text-align:right" | 54 | 89.9195279(20) | 32.32(9) s | β− | 90mRb | 0+ | | |-id=Krypton-91 | rowspan=2|91Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 55 | rowspan=2|90.9238063(24) | rowspan=2|8.57(4) s | β− | 91Rb | rowspan=2|5/2+ | rowspan=2| | rowspan=2| |- | β−, n? 
| 90Rb |-id=Krypton-92 | rowspan=2|92Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 56 | rowspan=2|91.9261731(29) | rowspan=2|1.840(8) s | β− (99.97%) | 92Rb | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (0.0332%) | 91Rb |-id=Krypton-93 | rowspan=2|93Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 57 | rowspan=2|92.9311472(27) | rowspan=2|1.287(10) s | β− (98.05%) | 93Rb | rowspan=2|1/2+ | rowspan=2| | rowspan=2| |- | β−, n (1.95%) | 92Rb |-id=Krypton-94 | rowspan=2|94Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 58 | rowspan=2|93.934140(13) | rowspan=2|212(4) ms | β− (98.89%) | 94Rb | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (1.11%) | 93Rb |-id=Krypton-95 | rowspan=3|95Kr | rowspan=3 style="text-align:right" | 36 | rowspan=3 style="text-align:right" | 59 | rowspan=3|94.939711(20) | rowspan=3|114(3) ms | β− (97.13%) | 95Rb | rowspan=3|1/2+ | rowspan=3| | rowspan=3| |- | β−, n (2.87%) | 94Rb |- | β−, 2n? | 93Rb |-id=Krypton-95m | style="text-indent:1em" | 95mKr | colspan="3" style="text-indent:2em" | 195.5(3) keV | 1.582(22) μs | IT | 85Kr | (7/2+) | | |-id=Krypton-96 | rowspan=2|96Kr | rowspan=2 style="text-align:right" | 36 | rowspan=2 style="text-align:right" | 60 | rowspan=2|95.942998(62) | rowspan=2|80(8) ms | β− (96.3%) | 96Rb | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (3.7%) | 95Rb |-id=Krypton-97 | rowspan=3|97Kr | rowspan=3 style="text-align:right" | 36 | rowspan=3 style="text-align:right" | 61 | rowspan=3|96.94909(14) | rowspan=3|62.2(32) ms | β− (93.3%) | 97Rb | rowspan=3|3/2+# | rowspan=3| | rowspan=3| |- | β−, n (6.7%) | 96Rb |- | β−, 2n? | 95Rb |-id=Krypton-98 | rowspan=3|98Kr | rowspan=3 style="text-align:right" | 36 | rowspan=3 style="text-align:right" | 62 | rowspan=3|97.95264(32)# | rowspan=3|42.8(36) ms | β− (93.0%) | 98Rb | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β−, n (7.0%) | 97Rb |- | β−, 2n? | 96Rb |-id=Krypton-99 | rowspan=3|99Kr | rowspan=3 style="text-align:right" | 36 | rowspan=3 style="text-align:right" | 63 | rowspan=3|98.95878(43)# | rowspan=3|40(11) ms | β− (89%) | 99Rb | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β−, n (11%) | 98Rb |- | β−, 2n? | 97Rb |-id=Krypton-100 | rowspan=3|100Kr | rowspan=3 style="text-align:right" | 36 | rowspan=3 style="text-align:right" | 64 | rowspan=3|99.96300(43)# | rowspan=3|12(8) ms | β− | 100Rb | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β−, n? | 99Rb |- | β−, 2n? | 98Rb |-id=Krypton-101 | rowspan=3 | 101Kr | rowspan=3 | 36 | rowspan=3 | 65 | rowspan=3 | 100.96932(54)# | rowspan=3 | 9# ms[>400 ns] | β−? | 101Rb | rowspan=3 | 5/2+# | rowspan=3 | | rowspan=3 | |- | β−, n? | 100Rb |- | β−, 2n? | 99Rb |-id=Krypton-102 | 102Kr | style="text-align:right" | 36 | style="text-align:right" | 66 | | | | | 0+ | | |-id=Krypton-103 | 103Kr | style="text-align:right" | 36 | style="text-align:right" | 67 | | | | | | | The isotopic composition refers to that in air. Notable isotopes Krypton-81 Krypton-81 is useful in determining how old the water beneath the ground is. Radioactive krypton-81 is the product of spallation reactions with cosmic rays striking gases present in the Earth atmosphere, along with the six stable or nearly stable krypton isotopes. Krypton-81 has a half-life of about 229,000 years. Krypton-81 is used for dating ancient (50,000- to 800,000-year-old) groundwater and to determine their residence time in deep aquifers. 
One of the main technical limitations of the method is that it requires the sampling of very large volumes of water: several hundred liters or a few cubic meters. This is particularly challenging for dating pore water in deep clay aquitards with very low hydraulic conductivity. Krypton-85 Krypton-85 has a half-life of about 10.75 years. This isotope is produced by the nuclear fission of uranium and plutonium in nuclear weapons testing and in nuclear reactors, as well as by cosmic rays. An important goal of the Limited Nuclear Test Ban Treaty of 1963 was to eliminate the release of such radioisotopes into the atmosphere, and since 1963 much of that krypton-85 has had time to decay. However, it is almost inevitable that krypton-85 is released during the reprocessing of fuel rods from nuclear reactors. Atmospheric concentration The atmospheric concentration of krypton-85 around the North Pole is about 30 percent higher than that at the Amundsen–Scott South Pole Station because nearly all of the world's nuclear reactors and all of its major nuclear reprocessing plants are located in the northern hemisphere, well north of the equator. To be more specific, those nuclear reprocessing plants with significant capacities are located in the United States, the United Kingdom, the French Republic, the Russian Federation, Mainland China (PRC), Japan, India, and Pakistan. Krypton-86 Krypton-86 was formerly used to define the meter: from 1960 until 1983, the meter was defined in terms of the wavelength of the 606 nm (orange) spectral line of a krypton-86 atom. Others All other radioisotopes of krypton have half-lives of less than one day, except for krypton-79, a positron emitter with a half-life of about 35.0 hours. References Sources Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. External links Brookhaven National Laboratory: Krypton-101 information Krypton Krypton
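The 81Kr groundwater dating described above reduces to the standard radiometric age equation t = −(T_half / ln 2) · ln(R), where R is the sample's 81Kr/Kr ratio expressed as a fraction of the modern atmospheric ratio. The Python sketch below is a minimal illustration using the 229,000-year half-life quoted above; the example ratios are hypothetical.

import math

HALF_LIFE_KR81 = 229_000.0  # years, as quoted above

def kr81_age(ratio_to_modern_air: float) -> float:
    # t = -(T_half / ln 2) * ln(R), with R the sample 81Kr/Kr ratio normalised to modern air
    return -(HALF_LIFE_KR81 / math.log(2)) * math.log(ratio_to_modern_air)

for r in (0.9, 0.5, 0.2):
    print(f"81Kr/Kr at {r:.0%} of modern air -> apparent age of about {kr81_age(r):,.0f} years")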
Isotopes of krypton
[ "Chemistry" ]
4,965
[ "Lists of isotopes by element", "Isotopes of krypton", "Isotopes" ]
2,527,020
https://en.wikipedia.org/wiki/Isotopes%20of%20selenium
Selenium (34Se) has six natural isotopes that occur in significant quantities, along with the trace isotope 79Se, which occurs in minute quantities in uranium ores. Five of these isotopes are stable: 74Se, 76Se, 77Se, 78Se, and 80Se. The last three also occur as fission products, along with 79Se, which has a half-life of 327,000 years, and 82Se, which has a very long half-life (~1020 years, decaying via double beta decay to 82Kr) and for practical purposes can be considered to be stable. There are 23 other unstable isotopes that have been characterized, the longest-lived being 79Se with a half-life 327,000 years, 75Se with a half-life of 120 days, and 72Se with a half-life of 8.40 days. Of the other isotopes, 73Se has the longest half-life, 7.15 hours; most others have half-lives not exceeding 38 seconds. List of isotopes |-id=Selenium-63 | rowspan=3|63Se | rowspan=3 style="text-align:right" | 34 | rowspan=3 style="text-align:right" | 29 | rowspan=3|62.98191(54)# | rowspan=3|13.2(39) ms | β+, p (89%) | 62Ge | rowspan=3|3/2−# | rowspan=3| | rowspan=3| |- | β+ (11%) | 63As |- | 2p? (<0.5%) | 61Ge |-id=Selenium-64 | rowspan=2|64Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 30 | rowspan=2|63.97117(54)# | rowspan=2|22.6(2) ms | β+? | 64As | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p? | 63Ge |-id=Selenium-65 | rowspan=2|65Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 31 | rowspan=2|64.96455(32)# | rowspan=2|34.2(7) ms | β+, p (87%) | 64Ge | rowspan=2|3/2−# | rowspan=2| | rowspan=2| |- | β+ (13%) | 65As |-id=Selenium-66 | rowspan=2|66Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 32 | rowspan=2|65.95528(22)# | rowspan=2|54(4) ms | β+ | 66As | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p? 
| 65Ge |-id=Selenium-67 | rowspan=2|67Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 33 | rowspan=2|66.949994(72) | rowspan=2|133(4) ms | β+ (99.5%) | 67As | rowspan=2|5/2−# | rowspan=2| | rowspan=2| |- | β+, p (0.5%) | 66Ge |-id=Selenium-68 | 68Se | style="text-align:right" | 34 | style="text-align:right" | 34 | 67.94182524(53) | 35.5(7) s | β+ | 68As | 0+ | | |-id=Selenium-69 | rowspan=2|69Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 35 | rowspan=2|68.9394148(16) | rowspan=2|27.4(2) s | β+ (99.95%) | 69As | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | β+, p (.052%) | 68Ge |-id=Selenium-69m1 | style="text-indent:1em" | 69m1Se | colspan="3" style="text-indent:2em" | 38.85(22) keV | 2.0(2) μs | IT | 69Se | 5/2− | | |-id=Selenium-69m2 | style="text-indent:1em" | 69m2Se | colspan="3" style="text-indent:2em" | 574.0(4) keV | 955(16) ns | IT | 69Se | 9/2+ | | |-id=Selenium-70 | 70Se | style="text-align:right" | 34 | style="text-align:right" | 36 | 69.9335155(17) | 41.1(3) min | β+ | 70As | 0+ | | |-id=Selenium-71 | 71Se | style="text-align:right" | 34 | style="text-align:right" | 37 | 70.9322094(30) | 4.74(5) min | β+ | 71As | (5/2−) | | |-id=Selenium-71m1 | style="text-indent:1em" | 71m1Se | colspan="3" style="text-indent:2em" | 48.79(5) keV | 5.6(7) μs | IT | 71Se | (1/2−) | | |-id=Selenium-71m2 | style="text-indent:1em" | 71m2Se | colspan="3" style="text-indent:2em" | 260.48(10) keV | 19.0(5) μs | IT | 71Se | (9/2+) | | |-id=Selenium-72 | 72Se | style="text-align:right" | 34 | style="text-align:right" | 38 | 71.9271405(21) | 8.40(8) d | EC | 72As | 0+ | | |-id=Selenium-73 | 73Se | style="text-align:right" | 34 | style="text-align:right" | 39 | 72.9267549(80) | 7.15(9) h | β+ | 73As | 9/2+ | | |-id=Selenium-73m | rowspan=2 style="text-indent:1em" | 73mSe | rowspan=2 colspan="3" style="text-indent:2em" | 25.71(4) keV | rowspan=2|39.8(17) min | IT (72.6%) | 73Se | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β+ (27.4%) | 73As |-id=Selenium-74 | 74Se | style="text-align:right" | 34 | style="text-align:right" | 40 | 73.922475933(15) | colspan=3 align=center|Observationally Stable | 0+ | 0.0086(3) | |-id=Selenium-75 | 75Se | style="text-align:right" | 34 | style="text-align:right" | 41 | 74.922522870(78) | 119.78(3) d | EC | 75As | 5/2+ | | |-id=Selenium-76 | 76Se | style="text-align:right" | 34 | style="text-align:right" | 42 | 75.919213702(17) | colspan=3 align=center|Stable | 0+ | 0.0923(7) | |-id=Selenium-77 | 77Se | style="text-align:right" | 34 | style="text-align:right" | 43 | 76.919914150(67) | colspan=3 align=center|Stable | 1/2− | 0.0760(7) | |-id=Selenium-77m | style="text-indent:1em" | 77mSe | colspan="3" style="text-indent:2em" | 161.9223(10) keV | 17.36(5) s | IT | 77Se | 7/2+ | | |-id=Selenium-78 | 78Se | style="text-align:right" | 34 | style="text-align:right" | 44 | 77.91730924(19) | colspan=3 align=center|Stable | 0+ | 0.2369 (22) | |- | 79Se | style="text-align:right" | 34 | style="text-align:right" | 45 | 78.91849925(24) | 3.27(28)×105 y | β− | 79Br | 7/2+ | | |-id=Selenium-79m | rowspan=2 style="text-indent:1em" | 79mSe | rowspan=2 colspan="3" style="text-indent:2em" | 95.77(3) keV | rowspan=2|3.900(18) min | IT (99.94%) | 79Se | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | β− (0.056%) | 79Br |-id=Selenium-80 | 80Se | style="text-align:right" | 34 | style="text-align:right" | 46 | 79.9165218(10) | colspan=3 align=center|Observationally Stable | 0+ | 0.4980(36) | |-id=Selenium-81 | 81Se | 
style="text-align:right" | 34 | style="text-align:right" | 47 | 80.9179930(10) | 18.45(12) min | β− | 81Br | 1/2− | | |-id=Selenium-81m | rowspan=2 style="text-indent:1em" | 81mSe | rowspan=2 colspan="3" style="text-indent:2em" | 103.00(6) keV | rowspan=2|57.28(2) min | IT (99.95%) | 81Se | rowspan=2|7/2+ | rowspan=2| | rowspan=2| |- | β− (.051%) | 81Br |-id=Selenium-82 | 82Se | style="text-align:right" | 34 | style="text-align:right" | 48 | 81.91669953(50) | 8.76(15)×1019 y | β−β− | 82Kr | 0+ | 0.0882(15) | |-id=Selenium-83 | 83Se | style="text-align:right" | 34 | style="text-align:right" | 49 | 82.9191186(33) | 22.25(4) min | β− | 83Br | 9/2+ | | |-id=Selenium-83m | style="text-indent:1em" | 83mSe | colspan="3" style="text-indent:2em" | 228.92(7) keV | 70.1(4) s | β− | 83Br | 1/2− | | |-id=Selenium-84 | 84Se | style="text-align:right" | 34 | style="text-align:right" | 50 | 83.9184668(21) | 3.26(10) min | β− | 84Br | 0+ | | |-id=Selenium-85 | 85Se | style="text-align:right" | 34 | style="text-align:right" | 51 | 84.9222608(28) | 32.9(3) s | β− | 85Br | (5/2)+ | | |-id=Selenium-86 | rowspan=2|86Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 52 | rowspan=2|85.9243117(27) | rowspan=2|14.3(3) s | β− | 86Br | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? | 85Br |-id=Selenium-87 | rowspan=2|87Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 53 | rowspan=2|86.9286886(24) | rowspan=2|5.50(6) s | β− (99.50%) | 87Br | rowspan=2|(3/2+) | rowspan=2| | rowspan=2| |- | β−, n (0.60%) | 86Br |-id=Selenium-88 | rowspan=2|88Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 54 | rowspan=2|87.9314175(36) | rowspan=2|1.53(6) s | β− (99.01%) | 88Br | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (0.99%) | 87Br |-id=Selenium-89 | rowspan=2|89Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 55 | rowspan=2|88.9366691(40) | rowspan=2|430(50) ms | β− (92.2%) | 89Br | rowspan=2|5/2+# | rowspan=2| | rowspan=2| |- | β−, n (7.8%) | 88Br |-id=Selenium-90 | rowspan=2|90Se | rowspan=2 style="text-align:right" | 34 | rowspan=2 style="text-align:right" | 56 | rowspan=2|89.94010(35) | rowspan=2|210(80) ms | β− | 90Br | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? | 89Br |-id=Selenium-91 | rowspan=3|91Se | rowspan=3 style="text-align:right" | 34 | rowspan=3 style="text-align:right" | 57 | rowspan=3|90.94570(47) | rowspan=3|270(50) ms | β− (79%) | 91Br | rowspan=3|1/2+# | rowspan=3| | rowspan=3| |- | β−, n (21%) | 90Br |- | β−, 2n? | 89Br |-id=Selenium-92 | rowspan=3|92Se | rowspan=3 style="text-align:right" | 34 | rowspan=3 style="text-align:right" | 58 | rowspan=3|91.94984(43)# | rowspan=3|90# ms [>300 ns] | β−? | 92Br | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β−, n? | 91Br |- | β−, 2n? | 90Br |-id=Selenium-92m | style="text-indent:1em" | 92mSe | colspan="3" style="text-indent:2em" | 3072(2) keV | 15.7(7) μs | IT | 92Se | (9−) | | |-id=Selenium-93 | rowspan=3|93Se | rowspan=3 style="text-align:right" | 34 | rowspan=3 style="text-align:right" | 59 | rowspan=3|92.95614(43)# | rowspan=3|130# ms [>300 ns] | β−? | 93Br | rowspan=3|1/2+# | rowspan=3| | rowspan=3| |- | β−, n? | 92Br |- | β−, 2n? 
| 91Br |-id=Selenium-93m | style="text-indent:1em" | 93mSe | colspan="3" style="text-indent:2em" | 678.2(7) keV | 420(100) ns | IT | 93Se | | | |-id=Selenium-94 | rowspan=3|94Se | rowspan=3 style="text-align:right" | 34 | rowspan=3 style="text-align:right" | 60 | rowspan=3|93.96049(54)# | rowspan=3|50# ms [>300 ns] | β−? | 94Br | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β−, n? | 93Br |- | β−, 2n? | 92Br |-id=Selenium-94m | style="text-indent:1em" | 94mSe | colspan="3" style="text-indent:2em" | 2430.0(6) keV | 680(50) ns | IT | 94Se | (7−) | | |-id=Selenium-95 | rowspan=3|95Se | rowspan=3 style="text-align:right" | 34 | rowspan=3 style="text-align:right" | 61 | rowspan=3|94.96730(54)# | rowspan=3|70# ms [>400 ns] | β−? | 95Br | rowspan=3|3/2+# | rowspan=3| | rowspan=3| |- | β−, n? | 94Br |- | β−, 2n? | 93Br |-id=Selenium-96 | 96Se | style="text-align:right" | 34 | style="text-align:right" | 62 | | | | | | | |-id=Selenium-97 | 97Se | style="text-align:right" | 34 | style="text-align:right" | 63 | | | | | | | Use of radioisotopes The isotope selenium-75 has radiopharmaceutical uses. For example, it is used in high-dose-rate endorectal brachytherapy, as an alternative to iridium-192. In paleobiogeochemistry, the ratio of selenium-82 to selenium-76 (i.e., the value of δ82/76Se) can be used to trace the redox conditions on Earth during the Neoproterozoic era, in order to gain a deeper understanding of the rapid oxygenation that triggered the emergence of complex organisms. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Selenium Selenium
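The δ82/76Se value used above follows the standard per-mil delta notation for isotope ratios: δ = (R_sample / R_standard − 1) × 1000. The Python sketch below only illustrates that arithmetic; the numerical ratios in the example are hypothetical, not measured values.

def delta_82_76_se(r_sample: float, r_standard: float) -> float:
    # Per-mil delta notation: delta = (R_sample / R_standard - 1) * 1000
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical 82Se/76Se ratios, chosen only to show the calculation.
print(f"δ82/76Se = {delta_82_76_se(9.85, 9.80):+.2f} ‰")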
Isotopes of selenium
[ "Chemistry" ]
4,685
[ "Lists of isotopes by element", "Isotopes of selenium", "Isotopes" ]
2,527,021
https://en.wikipedia.org/wiki/Isotopes%20of%20bromine
Bromine (35Br) has two stable isotopes, 79Br and 81Br, and 35 known radioisotopes, the most stable of which is 77Br, with a half-life of 57.036 hours. Like the radioactive isotopes of iodine, radioisotopes of bromine, collectively radiobromine, can be used to label biomolecules for nuclear medicine; for example, the positron emitters 75Br and 76Br can be used for positron emission tomography. Radiobromine has the advantage that organobromides are more stable than analogous organoiodides, and that it is not uptaken by the thyroid like iodine. List of isotopes |-id=Bromine-68 | 68Br | style="text-align:right" | 35 | style="text-align:right" | 33 | 67.95836(28)# | ~35 ns | p? | 67Se | 3+# | | |-id=Bromine-69 | 69Br | style="text-align:right" | 35 | style="text-align:right" | 34 | 68.950338(45) | <19 ns | p | 68Se | (5/2−) | | |-id=Bromine-70 | rowspan=2|70Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 35 | rowspan=2|69.944792(16) | rowspan=2|78.8(3) ms | β+ | 70Se | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p? | 69As |-id=Bromine-70m | rowspan=2 style="text-indent:1em" | 70mBr | rowspan=2 colspan="3" style="text-indent:2em" | 2292.3(8) keV | rowspan=2|2.16(5) s | β+ | 70Se | rowspan=2|9+ | rowspan=2| | rowspan=2| |- | β+, p? | 69As |-id=Bromine-71 | 71Br | style="text-align:right" | 35 | style="text-align:right" | 36 | 70.9393422(58) | 21.4(6) s | β+ | 71Se | (5/2)− | | |-id=Bromine-72 | 72Br | style="text-align:right" | 35 | style="text-align:right" | 37 | 71.9365946(11) | 78.6(24) s | β+ | 72Se | 1+ | | |-id=Bromine-72m | rowspan=2 style="text-indent:1em" | 72mBr | rowspan=2 colspan="3" style="text-indent:2em" | 100.76(15) keV | rowspan=2|10.6(3) s | IT | 72Br | rowspan=2|(3−) | rowspan=2| | rowspan=2| |- | β+? 
| 72Se |-id=Bromine-73 | 73Br | style="text-align:right" | 35 | style="text-align:right" | 38 | 72.9316734(72) | 3.4(2) min | β+ | 73Se | 1/2− | | |-id=Bromine-74 | 74Br | style="text-align:right" | 35 | style="text-align:right" | 39 | 73.9299103(63) | 25.4(3) min | β+ | 74Se | (0−) | | |-id=Bromine-74m | style="text-indent:1em" | 74mBr | colspan="3" style="text-indent:2em" | 13.58(21) keV | 46(2) min | β+ | 74Se | 4+ | | |- | rowspan=2|75Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 40 | rowspan=2|74.9258106(46) | rowspan=2|96.7(13) min | β+ (76%) | 75Se | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | EC (24%) | 76Se |- | rowspan=2|76Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 41 | rowspan=2|75.924542(10) | rowspan=2|16.2(2) h | β+ (57%) | 76Se | rowspan=2|1− | rowspan=2| | rowspan=2| |- | EC (43%) | 76Se |-id=Bromine-76m | rowspan=2 style="text-indent:1em" | 76mBr | rowspan=2 colspan="3" style="text-indent:2em" | 102.58(3) keV | rowspan=2|1.31(2) s | IT (>99.4%) | 76Br | rowspan=2|(4)+ | rowspan=2| | rowspan=2| |- | β+ (<0.6%) | 76Se |- | rowspan=2|77Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 42 | rowspan=2|76.9213792(30) | rowspan=2|57.04(12) h | EC (99.3%) | 77Se | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β+ (0.7%) | 77Se |-id=Bromine-77m | style="text-indent:1em" | 77mBr | colspan="3" style="text-indent:2em" | 105.86(8) keV | 4.28(10) min | IT | 77Br | 9/2+ | | |-id=Bromine-78 | rowspan=2|78Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 43 | rowspan=2|77.9211459(38) | rowspan=2|6.45(4) min | β+ (>99.99%) | 78Se | rowspan=2|1+ | rowspan=2| | rowspan=2| |- | β− (<0.01%) | 78Kr |-id=Bromine-78m | style="text-indent:1em" | 78mBr | colspan="3" style="text-indent:2em" | 180.89(13) keV | 119.4(10) μs | IT | 78Br | (4+) | | |-id=Bromine-79 | 79Br | style="text-align:right" | 35 | style="text-align:right" | 44 | 78.9183376(11) | colspan=3 align=center|Stable | 3/2− | 0.5065(9) | |-id=Bromine-79m | style="text-indent:1em" | 79mBr | colspan="3" style="text-indent:2em" | 207.61(9) keV | 4.85(4) s | IT | 79Br | 9/2+ | | |-id=Bromine-80 | rowspan=2|80Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 45 | rowspan=2|79.9185298(11) | rowspan=2|17.68(2) min | β− (91.7%) | 80Kr | rowspan=2|1+ | rowspan=2| | rowspan=2| |- | β+ (8.3%) | 80Se |-id=Bromine-80m | style="text-indent:1em" | 80mBr | colspan="3" style="text-indent:2em" | 85.843(4) keV | 4.4205(8) h | IT | 80Br | 5− | | |-id=Bromine-81 | 81Br | style="text-align:right" | 35 | style="text-align:right" | 46 | 80.9162882(10) | colspan=3 align=center|Stable | 3/2− | 0.4935(9) | |-id=Bromine-81m | style="text-indent:1em" | 81mBr | colspan="3" style="text-indent:2em" | 536.20(9) keV | 34.6(28) μs | IT | 81Br | 9/2+ | | |-id=Bromine-82 | 82Br | style="text-align:right" | 35 | style="text-align:right" | 47 | 81.9168018(10) | 35.282(7) h | β− | 82Kr | 5− | | |-id=Bromine-82m | rowspan=2 style="text-indent:1em" | 82mBr | rowspan=2 colspan="3" style="text-indent:2em" | 45.9492(10) keV | rowspan=2|6.13(5) min | IT (97.6%) | 82Br | rowspan=2|2− | rowspan=2| | rowspan=2| |- | β− (2.4%) | 82Kr |-id=Bromine-83 | 83Br | style="text-align:right" | 35 | style="text-align:right" | 48 | 82.9151753(41) | 2.374(4) h | β− | 83Kr | 3/2− | | |-id=Bromine-83m | style="text-indent:1em" | 83mBr | colspan="3" style="text-indent:2em" | 3069.2(4) keV | 729(77) ns | IT | 83Br | (19/2−) | 
| |-id=Bromine-84 | 84Br | style="text-align:right" | 35 | style="text-align:right" | 49 | 83.9165136(17) | 31.76(8) min | β− | 84Kr | 2− | | |-id=Bromine-84m1 | style="text-indent:1em" | 84m1Br | colspan="3" style="text-indent:2em" | 193.6(15) keV | 6.0(2) min | β− | 84Kr | (6)− | | |-id=Bromine-84m2 | style="text-indent:1em" | 84m2Br | colspan="3" style="text-indent:2em" | 408.2(4) keV | <140 ns | IT | 84Br | 1+ | | |-id=Bromine-85 | 85Br | style="text-align:right" | 35 | style="text-align:right" | 50 | 84.9156458(33) | 2.90(6) min | β− | 85Kr | 3/2− | | |-id=Bromine-86 | 86Br | style="text-align:right" | 35 | style="text-align:right" | 51 | 85.9188054(33) | 55.1(4) s | β− | 86Kr | (1−) | | |-id=Bromine-87 | rowspan=2|87Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 52 | rowspan=2|86.9206740(34) | rowspan=2|55.68(12) s | β− (97.40%) | 87Kr | rowspan=2|5/2− | rowspan=2| | rowspan=2| |- | β−, n (2.60%) | 86Kr |-id=Bromine-88 | rowspan=2|88Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 53 | rowspan=2|87.9240833(34) | rowspan=2|16.34(8) s | β− (93.42%) | 88Kr | rowspan=2|(1−) | rowspan=2| | rowspan=2| |- | β−, n (6.58%) | 87Kr |-id=Bromine-88m | style="text-indent:1em" | 88mBr | colspan="3" style="text-indent:2em" | 270.17(11) keV | 5.51(4) μs | IT | 88Br | (4−) | | |-id=Bromine-89 | rowspan=2|89Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 54 | rowspan=2|88.9267046(35) | rowspan=2|4.357(22) s | β− (86.2%) | 89Kr | rowspan=2|(3/2−, 5/2−) | rowspan=2| | rowspan=2| |- | β−, n (13.8%) | 88Kr |-id=Bromine-90 | rowspan=2|90Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 55 | rowspan=2|89.9312928(36) | rowspan=2|1.910(10) s | β− (74.7%) | 90Kr | rowspan=2| | rowspan=2| | rowspan=2| |- | β−, n (25.3%) | 89Kr |-id=Bromine-91 | rowspan=2|91Br | rowspan=2 style="text-align:right" | 35 | rowspan=2 style="text-align:right" | 56 | rowspan=2|90.9343986(38) | rowspan=2|543(4) ms | β− (70.5%) | 91Kr | rowspan=2|5/2−# | rowspan=2| | rowspan=2| |- | β−, n (29.5%) | 90Kr |-id=Bromine-92 | rowspan=3|92Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 57 | rowspan=3|91.9396316(72) | rowspan=3|314(16) ms | β− (66.9%) | 92Kr | rowspan=3|(2−) | rowspan=3| | rowspan=3| |- | β−, n (33.1%) | 91Kr |- | β−, 2n? | 90Kr |-id=Bromine-92m1 | style="text-indent:1em" | 92m1Br | colspan="3" style="text-indent:2em" | 662(1) keV | 88(8) ns | IT | 92Br | | | |-id=Bromine-92m2 | style="text-indent:1em" | 92m2Br | colspan="3" style="text-indent:2em" | 1138(1) keV | 85(10) ns | IT | 92Br | | | |-id=Bromine-93 | rowspan=3|93Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 58 | rowspan=3|92.94322(46) | rowspan=3|152(8) ms | β−, n (64%) | 92Kr | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β− (36%) | 93Kr |- | β−, 2n? | 91Kr |-id=Bromine-94 | rowspan=3|94Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 59 | rowspan=3|93.94885(22)# | rowspan=3|70(20) ms | β−, n (68%) | 93Kr | rowspan=3|2−# | rowspan=3| | rowspan=3| |- | β− (32%) | 94Kr |- | β−, 2n? | 92Kr |-id=Bromine-94m | style="text-indent:1em" | 94mBr | colspan="3" style="text-indent:2em" | 294.6(5) keV | 530(15) ns | IT | 94Br | | | |-id=Bromine-95 | rowspan=3|95Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 60 | rowspan=3|94.95293(32)# | rowspan=3|80# ms [>300 ns] | β−? 
| 95Kr | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β−, n? | 94Kr |- | β−, 2n? | 93Kr |-id=Bromine-95m | style="text-indent:1em" | 95mBr | colspan="3" style="text-indent:2em" | 537.9(5) keV | 6.8(10) μs | IT | 95Br | | | |-id=Bromine-96 | rowspan=3|96Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 61 | rowspan=3|95.95898(32)# | rowspan=3|20# ms [>300 ns] | β−? | 96Kr | rowspan=3| | rowspan=3| | rowspan=3| |- | β−, n? | 95Kr |- | β−, 2n? | 94Kr |-id=Bromine-96m | style="text-indent:1em" | 96mBr | colspan="3" style="text-indent:2em" | 311.5(5) keV | 3.0(9) μs | IT | 95Br | | | |-id=Bromine-97 | rowspan=3|97Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 62 | rowspan=3|96.96350(43)# | rowspan=3|40# ms [>300 ns] | β−? | 97Kr | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β−, n? | 96Kr |- | β−, 2n? | 95Kr |-id=Bromine-98 | rowspan=3|98Br | rowspan=3 style="text-align:right" | 35 | rowspan=3 style="text-align:right" | 63 | rowspan=3|97.96989(43)# | rowspan=3|15# ms [>400 ns] | β−? | 98Kr | rowspan=3| | rowspan=3| | rowspan=3| |- | β−, n? | 97Kr |- | β−, 2n? | 96Kr |-id=Bromine-99 | 99Br | style="text-align:right" | 35 | style="text-align:right" | 64 | | | | | | | |-id=Bromine-100 | 100Br | style="text-align:right" | 35 | style="text-align:right" | 65 | | | | | | | |-id=Bromine-101 | 101Br | style="text-align:right" | 35 | style="text-align:right" | 66 | | | | | | | Bromine-75 Bromine-75 has a half-life of 97 minutes. This isotope undergoes β+ decay rather than electron capture about 76% of the time, so it was used for diagnosis and positron emission tomography (PET) in the 1980s. However, its decay product, selenium-75, produces secondary radioactivity with a longer half-life of 120.4 days. Bromine-76 Bromine-76 has a half-life of 16.2 hours. While its decay is more energetic than 75Br and has lower yield of positrons (about 57% of decays), bromine-76 has been preferred in PET applications since the 1980s because of its longer half-life and easier synthesis, and because its decay product, 76Se, is not radioactive. Bromine-77 Bromine-77 is the most stable radioisotope of bromine, with a half-life of 57 hours. Although β+ decay is possible for this isotope, about 99.3% of decays are by electron capture. Despite its complex emission spectrum, featuring strong gamma-ray emissions at 239, 297, 521, and 579 keV, 77Br was used in SPECT imaging in the 1970s. However, except for longer-term tracing, this is no longer considered practical due to the difficult collimator requirements and the proximity of the 521 keV line to the 511 keV annihilation radiation related to the β+ decay. The Auger electrons emitted during decay are nevertheless well-suited for radiotherapy, and 77Br can possibly be paired with the imaging-suited 76Br (produced as an impurity in common synthesis routes) for this application. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Bromine Bromine
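The half-lives quoted in the sections above translate directly into how quickly activity dies away through the exponential decay law N(t) = N0 · 2^(-t/T½). The short sketch below is not part of the original article; it is plain Python arithmetic using only the half-lives stated above (96.7 min for 75Br, 120.4 days for its daughter 75Se) and illustrates why the secondary 75Se activity is the practical concern: the 75Br itself is gone within a day, while the 75Se it leaves behind persists for months.

def fraction_remaining(t, t_half):
    # Fraction of a radionuclide remaining after time t (same units as t_half).
    return 2.0 ** (-t / t_half)

# 75Br, half-life 96.7 minutes: essentially gone after one day.
print(fraction_remaining(24 * 60, 96.7))   # ~3e-5
# 75Se daughter, half-life 120.4 days: ~84% still present after 30 days.
print(fraction_remaining(30, 120.4))       # ~0.84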
Isotopes of bromine
[ "Chemistry" ]
5,304
[ "Lists of isotopes by element", "Isotopes", "Isotopes of bromine" ]
2,527,022
https://en.wikipedia.org/wiki/Isotopes%20of%20arsenic
Arsenic (33As) has 32 known isotopes and at least 10 isomers. Only one of these isotopes, 75As, is stable; as such, it is considered a monoisotopic element. The longest-lived radioisotope is 73As with a half-life of 80 days. List of isotopes |-id=Arsenic-64 | rowspan=2 |64As | rowspan=2 style="text-align:right" | 33 | rowspan=2 style="text-align:right" | 31 | rowspan=2 |63.95756(22)# | rowspan=2 |69.0(14) ms | β+ | 64Ge | rowspan=2 |0+# | rowspan=2 | |- | β+, p? | 63Ga |-id=Arsenic-65 | rowspan=2 |65As | rowspan=2 style="text-align:right" | 33 | rowspan=2 style="text-align:right" | 32 | rowspan=2 |64.949611(91) | rowspan=2 |130.3(6) ms | β+ | 65Ge | rowspan=2 |3/2−# | rowspan=2 | |- | β+, p? | 64Ga |-id=Arsenic-66 | 66As | style="text-align:right" | 33 | style="text-align:right" | 33 | 65.9441488(61) | 95.77(23) ms | β+ | 66Ge | 0+ | |-id=Arsenic-66m1 | style="text-indent:1em" | 66m1As | colspan="3" style="text-indent:2em" | 1356.63(17) keV | 1.14(4) μs | IT | 66As | 5+ | |-id=Arsenic-66m2 | style="text-indent:1em" | 66m2As | colspan="3" style="text-indent:2em" | 3023.(3) keV | 7.98(26) μs | IT | 66As | 9+ | |-id=Arsenic-67 | 67As | style="text-align:right" | 33 | style="text-align:right" | 34 | 66.93925111(48) | 42.5(12) s | β+ | 67Ge | (5/2−) | |-id=Arsenic-68 | 68As | style="text-align:right" | 33 | style="text-align:right" | 35 | 67.9367741(20) | 151.6(8) s | β+ | 68Ge | 3+ | |-id=Arsenic-68m | style="text-indent:1em" | 68mAs | colspan="3" style="text-indent:2em" | 425.1(2) keV | 111(20) ns | IT | 68As | 1+ | |-id=Arsenic-69 | 69As | style="text-align:right" | 33 | style="text-align:right" | 36 | 68.932246(34) | 15.2(2) min | β+ | 69Ge | 5/2− | |-id=Arsenic-70 | 70As | style="text-align:right" | 33 | style="text-align:right" | 37 | 69.9309346(15) | 52.6(3) min | β+ | 70Ge | 4+ | |-id=Arsenic-70m | style="text-indent:1em" | 70mAs | colspan="3" style="text-indent:2em" | 32.046(23) keV | 96(3) μs | IT | 70As | 2+ | |-id=Arsenic-71 | 71As | style="text-align:right" | 33 | style="text-align:right" | 38 | 70.9271136(45) | 65.30(7) h | β+ | 71Ge | 5/2− | |-id=Arsenic-72 | 72As | style="text-align:right" | 33 | style="text-align:right" | 39 | 71.9267523(44) | 26.0(1) h | β+ | 72Ge | 2− | |-id=Arsenic-73 | 73As | style="text-align:right" | 33 | style="text-align:right" | 40 | 72.9238291(41) | 80.30(6) d | EC | 73Ge | 3/2− | |-id=Arsenic-73m | style="text-indent:1em" | 73mAs | colspan="3" style="text-indent:2em" | 427.902(21) keV | 5.7(2) μs | IT | 73As | 9/2+ | |-id=Arsenic-74 | rowspan=2|74As | rowspan=2 style="text-align:right" | 33 | rowspan=2 style="text-align:right" | 41 | rowspan=2|73.9239286(18) | rowspan=2|17.77(2) d | β+ (66%) | 74Ge | rowspan=2|2− | rowspan=2| |- | β− (34%) | 74Se |-id=Arsenic-75 | 75As | style="text-align:right" | 33 | style="text-align:right" | 42 | 74.92159456(95) | colspan=3 align=center|Stable | 3/2− | 1.0000 |-id=Arsenic-75m | style="text-indent:1em" | 75mAs | colspan="3" style="text-indent:2em" | 303.9243(8) keV | 17.62(23) ms | IT | 75As | 9/2+ | |-id=Arsenic-76 | 76As | style="text-align:right" | 33 | style="text-align:right" | 43 | 75.92239201(95) | 1.0933(38) d | β− | 76Se | 2− | |-id=Arsenic-76m | style="text-indent:1em" | 76mAs | colspan="3" style="text-indent:2em" | 44.425(1) keV | 1.84(6) μs | IT | 76As | (1)+ | |-id=Arsenic-77 | 77As | style="text-align:right" | 33 | style="text-align:right" | 44 | 76.9206476(18) | 38.79(5) h | β− | 77Se | 3/2− | |-id=Arsenic-77m | style="text-indent:1em" | 77mAs | colspan="3" style="text-indent:2em" | 475.48(4) keV | 
114.0(25) μs | IT | 77As | 9/2+ | |-id=Arsenic-78 | 78As | style="text-align:right" | 33 | style="text-align:right" | 45 | 77.921828(10) | 90.7(2) min | β− | 78Se | 2− | |-id=Arsenic-79 | 79As | style="text-align:right" | 33 | style="text-align:right" | 46 | 78.9209484(57) | 9.01(15) min | β− | 79Se | 3/2− | |-id=Arsenic-79m | style="text-indent:1em" | 79mAs | colspan="3" style="text-indent:2em" | 772.81(6) keV | 1.21(1) μs | IT | 79As | (9/2)+ | |-id=Arsenic-80 | 80As | style="text-align:right" | 33 | style="text-align:right" | 47 | 79.9224744(36) | 15.2(2) s | β− | 80Se | 1+ | |-id=Arsenic-81 | 81As | style="text-align:right" | 33 | style="text-align:right" | 48 | 80.9221323(28) | 33.3(8) s | β− | 81Se | 3/2− | |-id=Arsenic-82 | 82As | style="text-align:right" | 33 | style="text-align:right" | 49 | 81.9247387(40) | 19.1(5) s | β− | 82Se | (2−) | |-id=Arsenic-82m | style="text-indent:1em" | 82mAs | colspan="3" style="text-indent:2em" | 131.6(5) keV | 13.6(4) s | β− | 82Se | (5-) | |-id=Arsenic-83 | 83As | style="text-align:right" | 33 | style="text-align:right" | 50 | 82.9252069(30) | 13.4(4) s | β− | 83Se | 5/2−# | |-id=Arsenic-84 | rowspan=2|84As | rowspan=2 style="text-align:right" | 33 | rowspan=2 style="text-align:right" | 51 | rowspan=2|83.9293033(34) | rowspan=2|3.16(58) s | β− (99.72%) | 84Se | rowspan=2|(2−) | rowspan=2| |- | β−, n (.28%) | 83Se |-id=Arsenic-85 | rowspan=2|85As | rowspan=2 style="text-align:right" | 33 | rowspan=2 style="text-align:right" | 52 | rowspan=2|84.9321637(33) | rowspan=2|2.022(7) s | β−, n (62.6%) | 84Se | rowspan=2|(5/2−) | rowspan=2| |- | β− (37.4%) | 85Se |-id=Arsenic-86 | rowspan=3|86As | rowspan=3 style="text-align:right" | 33 | rowspan=3 style="text-align:right" | 53 | rowspan=3|85.9367015(37) | rowspan=3|945(8) ms | β− (64.5%) | 86Se | rowspan=3|(1−,2−) | rowspan=3| |- | β−, n (35.5%) | 85Se |- | β−, 2n? | 84Se |-id=Arsenic-87 | rowspan=3|87As | rowspan=3 style="text-align:right" | 33 | rowspan=3 style="text-align:right" | 54 | rowspan=3|86.9402917(32) | rowspan=3|492(25) ms | β− (84.6%) | 87Se | rowspan=3|(5/2−,3/2−) | rowspan=3| |- | β−, n (15.4%) | 86Se |- | β−, 2n? | 85Se |-id=Arsenic-88 | rowspan=2|88As | rowspan=2 style="text-align:right" | 33 | rowspan=2 style="text-align:right" | 55 | rowspan=2|87.94584(22)# | rowspan=2|270(150) ms | β− | 88Se | rowspan=2| | rowspan=2| |- | β−, n? | 87Se |-id=Arsenic-89 | rowspan=3|89As | rowspan=3 style="text-align:right" | 33 | rowspan=3 style="text-align:right" | 56 | rowspan=3|88.95005(32)# | rowspan=3|220# ms [>150 ns] | β−? | 89Se | rowspan=3|5/2−# | rowspan=3| |- | β−, n? | 88Se |- | β−, 2n? | 87Se |-id=Arsenic-90 | rowspan=3|90As | rowspan=3 style="text-align:right" | 33 | rowspan=3 style="text-align:right" | 57 | rowspan=3|89.95600(43)# | rowspan=3|70# ms [>300 ns] | β−? | 90Se | rowspan=3| | rowspan=3| |- | β−, n? | 89Se |- | β−, 2n? | 88Se |-id=Arsenic-90m | style="text-indent:1em" | 90mAs | colspan="3" style="text-indent:2em" | 124.5(5) keV | 220(100) ns | IT | 90As | | |-id=Arsenic-91 | rowspan=3|91As | rowspan=3 style="text-align:right" | 33 | rowspan=3 style="text-align:right" | 58 | rowspan=3|90.96082(43)# | rowspan=3|100# ms [>300 ns] | β−? | 91Se | rowspan=3|5/2−# | rowspan=3| |- | β−, n? | 90Se |- | β−, 2n? | 89Se |-id=Arsenic-92 | rowspan=3|92As | rowspan=3 style="text-align:right" | 33 | rowspan=3 style="text-align:right" | 59 | rowspan=3|91.96739(54)# | rowspan=3|45# ms [>300 ns] | β−? | 92Se | rowspan=3| | rowspan=3| |- | β−, n? | 91Se |- | β−, 2n? 
| 90Se |-id=Arsenic-93 | 93As | style="text-align:right" | 33 | style="text-align:right" | 60 | | | | | | |-id=Arsenic-94 | 94As | style="text-align:right" | 33 | style="text-align:right" | 61 | | | | | | |-id=Arsenic-95 | 95As | style="text-align:right" | 33 | style="text-align:right" | 62 | | | | | | References Isotope masses from: Half-life, spin, and isomer data selected from the following sources. A.Shore, A. Fritsch, M. Heim, A. Schuh, M. Thoennessen. Discovery of the Arsenic Isotopes. arXiv:0902.4361. Arsenic Arsenic
Isotopes of arsenic
[ "Chemistry" ]
3,536
[ "Isotopes of arsenic", "Lists of isotopes by element", "Isotopes" ]
2,527,023
https://en.wikipedia.org/wiki/Isotopes%20of%20germanium
Germanium (32Ge) has five naturally occurring isotopes, 70Ge, 72Ge, 73Ge, 74Ge, and 76Ge. Of these, 76Ge is very slightly radioactive, decaying by double beta decay with a half-life of 1.78 × 1021 years (130 billion times the age of the universe). Stable 74Ge is the most common isotope, having a natural abundance of approximately 36%. 76Ge is the least common with a natural abundance of approximately 7%. At least 27 radioisotopes have also been synthesized ranging in atomic mass from 58 to 89. The most stable of these is 68Ge, decaying by electron capture with a half-life of 270.95 d. It decays to the medically useful positron-emitting isotope 68Ga. (See gallium-68 generator for notes on the source of this isotope, and its medical use.) The least stable known germanium isotope is 59Ge with a half-life of 13.3 ms. While most of germanium's radioisotopes decay by beta decay, 61Ge and 65Ge can also decay by β+-delayed proton emission. 84Ge through 87Ge also have minor β−-delayed neutron emission decay paths. 76Ge is used in experiments on the nature of neutrinos, by searching for neutrinoless double beta decay. List of isotopes |-id=Germanium-59 | rowspan=3|59Ge | rowspan=3 style="text-align:right" | 32 | rowspan=3 style="text-align:right" | 27 | rowspan=3|58.98243(43)# | rowspan=3|13.3(17) ms | β+, p (93%) | 58Zn | rowspan=3|7/2−# | rowspan=3| | rowspan=3| |- | β+ (7%) | 59Ga |- | 2p (<0.2%) | 57Zn |-id=Germanium-60 | rowspan=2|60Ge | rowspan=2 style="text-align:right" | 32 | rowspan=2 style="text-align:right" | 28 | rowspan=2|59.97045(32)# | rowspan=2|21(6) ms | β+, p | 59Zn | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, 2p (<14%) | 58Cu |-id=Germanium-61 | rowspan=2|61Ge | rowspan=2 style="text-align:right" | 32 | rowspan=2 style="text-align:right" | 29 | rowspan=2|60.96373(32)# | rowspan=2|40.7(4) ms | β+, p (87%) | 60Zn | rowspan=2|3/2−# | rowspan=2| | rowspan=2| |- | β+ (18%) | 61Ga |-id=Germanium-62 | 62Ge | style="text-align:right" | 32 | style="text-align:right" | 30 | 61.95476(15)# | 82.5(14) ms | β+ | 62Ga | 0+ | | |-id=Germanium-63 | 63Ge | style="text-align:right" | 32 | style="text-align:right" | 31 | 62.949628(40) | 153.6(11) ms | β+ | 63Ga | 3/2−# | | |-id=Germanium-64 | 64Ge | style="text-align:right" | 32 | style="text-align:right" | 32 | 63.9416899(40) | 63.7(25) s | β+ | 64Ga | 0+ | | |-id=Germanium-65 | rowspan=2|65Ge | rowspan=2 style="text-align:right" | 32 | rowspan=2 style="text-align:right" | 33 | rowspan=2|64.9393681(23) | rowspan=2|30.9(5) s | β+ (99.99%) | 65Ga | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β+, p (0.011%) | 64Zn |-id=Germanium-66 | 66Ge | style="text-align:right" | 32 | style="text-align:right" | 34 | 65.9338621(26) | 2.26(5) h | β+ | 66Ga | 0+ | | |-id=Germanium-67 | 67Ge | style="text-align:right" | 32 | style="text-align:right" | 35 | 66.9327170(46) | 18.9(3) min | β+ | 67Ga | 1/2− | | |-id=Germanium-67m1 | style="text-indent:1em" | 67m1Ge | colspan="3" style="text-indent:2em" | 18.20(5) keV | 13.7(9) μs | IT | 67Ge | 5/2− | | |-id=Germanium-67m2 | style="text-indent:1em" | 67m2Ge | colspan="3" style="text-indent:2em" | 751.70(6) keV | 109.1(38) ns | IT | 67Ge | 9/2+ | | |-id=Germanium-68 | 68Ge | style="text-align:right" | 32 | style="text-align:right" | 36 | 67.9280953(20) | 271.05(8) d | EC | 68Ga | 0+ | | |-id=Germanium-69 | 69Ge | style="text-align:right" | 32 | style="text-align:right" | 37 | 68.9279645(14) | 39.05(10) h | β+ | 69Ga | 5/2− | | |-id=Germanium-69m1 | style="text-indent:1em" | 69m1Ge | colspan="3" style="text-indent:2em" 
| 86.76(2) keV | 5.1(2) μs | IT | 69Ge | 1/2− | | |-id=Germanium-69m2 | style="text-indent:1em" | 69m2Ge | colspan="3" style="text-indent:2em" | 397.94(2) keV | 2.81(5) μs | IT | 69Ge | 9/2+ | | |-id=Germanium-70 | 70Ge | style="text-align:right" | 32 | style="text-align:right" | 38 | 69.92424854(88) | colspan=3 align=center|Stable | 0+ | 0.2052(19) | |-id=Germanium-71 | 71Ge | style="text-align:right" | 32 | style="text-align:right" | 39 | 70.92495212(87) | 11.468(8) d | EC | 71Ga | 1/2− | | |-id=Germanium-71m | style="text-indent:1em" | 71mGe | colspan="3" style="text-indent:2em" | 198.354(14) keV | 20.41(18) ms | IT | 71Ge | 9/2+ | | |-id=Germanium-72 | 72Ge | style="text-align:right" | 32 | style="text-align:right" | 40 | 71.922075824(81) | colspan=3 align=center|Stable | 0+ | 0.2745(15) | |-id=Germanium-72m | style="text-indent:1em" | 72mGe | colspan="3" style="text-indent:2em" | 691.43(4) keV | 444.2(8) ns | IT | 72Ge | 0+ | | |-id=Germanium-73 | 73Ge | style="text-align:right" | 32 | style="text-align:right" | 41 | 72.923458954(61) | colspan=3 align=center|Stable | 9/2+ | 0.0776(8) | |-id=Germanium-73m1 | style="text-indent:1em" | 73m1Ge | colspan="3" style="text-indent:2em" | 13.2845(15) keV | 2.91(3) μs | IT | 73Ge | 5/2+ | | |-id=Germanium-73m2 | style="text-indent:1em" | 73m2Ge | colspan="3" style="text-indent:2em" | 66.725(9) keV | 499(11) ms | IT | 73Ge | 1/2− | | |-id=Germanium-74 | 74Ge | style="text-align:right" | 32 | style="text-align:right" | 42 | 73.921177760(13) | colspan=3 align=center|Stable | 0+ | 0.3652(12) | |-id=Germanium-75 | 75Ge | style="text-align:right" | 32 | style="text-align:right" | 43 | 74.922858370(55) | 82.78(4) min | β− | 75As | 1/2− | | |-id=Germanium-75m1 | rowspan=2 style="text-indent:1em" | 75m1Ge | rowspan=2 colspan="3" style="text-indent:2em" | 139.69(3) keV | rowspan=2|47.7(5) s | IT (99.97%) | 75Ge | rowspan=2|7/2+ | rowspan=2| | rowspan=2| |- | β− (0.030%) | 75As |-id=Germanium-75m2 | style="text-indent:1em" | 75m2Ge | colspan="3" style="text-indent:2em" | 192.19(6) keV | 216(5) ns | IT | 75Ge | 5/2+ | | |-id=Germanium-76 | 76Ge | style="text-align:right" | 32 | style="text-align:right" | 44 | 75.921402725(19) | (2.022±0.018±0.038) y | β−β− | 76Se | 0+ | 0.0775(12) | |-id=Germanium-77 | 77Ge | style="text-align:right" | 32 | style="text-align:right" | 45 | 76.923549843(56) | 11.211(3) h | β− | 77As | 7/2+ | | |-id=Germanium-77m | rowspan=2 style="text-indent:1em" | 77mGe | rowspan=2 colspan="3" style="text-indent:2em" | 159.71(6) keV | rowspan=2|53.7(6) s | β− (81%) | 77As | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | IT (19%) | 77Ge |-id=Germanium-78 | 78Ge | style="text-align:right" | 32 | style="text-align:right" | 46 | 77.9228529(38) | 88.0(10) min | β− | 78As | 0+ | | |-id=Germanium-79 | 79Ge | style="text-align:right" | 32 | style="text-align:right" | 47 | 78.925360(40) | 18.98(3) s | β− | 79As | (1/2)− | | |-id=Germanium-79m | rowspan=2 style="text-indent:1em" | 79mGe | rowspan=2 colspan="3" style="text-indent:2em" | 185.95(4) keV | rowspan=2|39.0(10) s | β− (96%) | 79As | rowspan=2|7/2+# | rowspan=2| | rowspan=2| |- | IT (4%) | 79Ge |-id=Germanium-80 | 80Ge | style="text-align:right" | 32 | style="text-align:right" | 48 | 79.9253508(22) | 29.5(4) s | β− | 80As | 0+ | | |-id=Germanium-81 | 81Ge | style="text-align:right" | 32 | style="text-align:right" | 49 | 80.9288329(22) | 9(2) s | β− | 81As | 9/2+# | | |-id=Germanium-81m | rowspan=2 style="text-indent:1em" | 81mGe | rowspan=2 colspan="3" style="text-indent:2em" | 679.14(4) keV | 
rowspan=2|6(2) s | β− | 81As | rowspan=2|(1/2+) | rowspan=2| | rowspan=2| |- | IT (<1%) | 81Ge |-id=Germanium-82 | 82Ge | style="text-align:right" | 32 | style="text-align:right" | 50 | 81.9297740(24) | 4.31(19) s | β− | 82As | 0+ | | |-id=Germanium-83 | 83Ge | style="text-align:right" | 32 | style="text-align:right" | 51 | 82.9345391(26) | 1.85(6) s | β− | 83As | (5/2+) | | |-id=Germanium-84 | rowspan=2|84Ge | rowspan=2 style="text-align:right" | 32 | rowspan=2 style="text-align:right" | 52 | rowspan=2|83.9375751(34) | rowspan=2|951(9) ms | β− (89.4%) | 84As | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (10.6%) | 83As |-id=Germanium-85 | rowspan=2|85Ge | rowspan=2 style="text-align:right" | 32 | rowspan=2 style="text-align:right" | 53 | rowspan=2|84.9429697(40) | rowspan=2|495(5) ms | β− (82.8%) | 85As | rowspan=2|(3/2+,5/2+)# | rowspan=2| | rowspan=2| |- | β−, n (17.2%) | 84As |-id=Germanium-86 | rowspan=2|86Ge | rowspan=2 style="text-align:right" | 32 | rowspan=2 style="text-align:right" | 54 | rowspan=2|85.94697(47) | rowspan=2|221.6(11) ms | β− (55%) | 86As | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (45%) | 85As |-id=Germanium-87 | 87Ge | style="text-align:right" | 32 | style="text-align:right" | 55 | 86.95320(32)# | 103(4) ms | β− | 87As | 5/2+# | | |-id=Germanium-88 | 88Ge | style="text-align:right" | 32 | style="text-align:right" | 56 | 87.95757(43)# | 61(6) ms | β− | 88As | 0+ | | |-id=Germanium-89 | 89Ge | style="text-align:right" | 32 | style="text-align:right" | 57 | 88.96453(43)# | 60# ms [>300 ns] | | | 3/2+# | | |-id=Germanium-90 | 90Ge | style="text-align:right" | 32 | style="text-align:right" | 58 | 89.96944(54)# | 30# ms [>400 ns] | | | 0+ | | |-id=Germanium-91 | 91Ge | style="text-align:right" | 32 | style="text-align:right" | 59 | | | | | | | |-id=Germanium-92 | 92Ge | style="text-align:right" | 32 | style="text-align:right" | 60 | | | | | | | References Germanium Germanium
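As a quick sanity check on the comparison quoted in the introduction above (not part of the original article), the 1.78 × 10^21-year double beta decay half-life of 76Ge can be divided by the age of the universe; the value of about 13.8 billion years used below is the commonly quoted figure and is an assumption, not taken from this article.

half_life_76Ge_years = 1.78e21      # double beta decay half-life quoted above
age_of_universe_years = 1.38e10     # assumed standard value (~13.8 billion years)
print(half_life_76Ge_years / age_of_universe_years)   # ~1.3e11, i.e. about 130 billion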
Isotopes of germanium
[ "Chemistry" ]
4,081
[ "Lists of isotopes by element", "Isotopes of germanium", "Isotopes" ]
2,527,034
https://en.wikipedia.org/wiki/Isotopes%20of%20zinc
Naturally occurring zinc (30Zn) is composed of the 5 stable isotopes 64Zn, 66Zn, 67Zn, 68Zn, and 70Zn with 64Zn being the most abundant (48.6% natural abundance). Twenty-eight radioisotopes have been characterised with the most stable being 65Zn with a half-life of 244.26 days, and then 72Zn with a half-life of 46.5 hours. All of the remaining radioactive isotopes have half-lives that are less than 14 hours and the majority of these have half-lives that are less than 1 second. This element also has 10 meta states. Zinc has been proposed as a "salting" material for nuclear weapons. A jacket of isotopically enriched 64Zn, irradiated by the intense high-energy neutron flux from an exploding thermonuclear weapon, would transmute into the radioactive isotope 65Zn with a half-life of 244 days and produce approximately 1.115 MeV of gamma radiation, significantly increasing the radioactivity of the weapon's fallout for several years. Such a weapon is not known to have ever been built, tested, or used. List of isotopes |-id=Zinc-54 | 54Zn | style="text-align:right" | 30 | style="text-align:right" | 24 | 53.99388(23)# | 1.8(5) ms | 2p | 52Ni | 0+ | | |-id=Zinc-55 | rowspan=2|55Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 25 | rowspan=2|54.98468(43)# | rowspan=2|19.8(13) ms | β+, p (91.0%) | 54Ni | rowspan=2|5/2−# | rowspan=2| | rowspan=2| |- | β+ (9.0%) | 55Cu |-id=Zinc-56 | rowspan=2|56Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 26 | rowspan=2|55.97274(43)# | rowspan=2|32.4(7) ms | β+, p (88.0%) | 55Ni | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+ (12.0%) | 56Cu |-id=Zinc-57 | rowspan=2|57Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 27 | rowspan=2|56.96506(22)# | rowspan=2|45.7(6) ms | β+, p (87%) | 56Ni | rowspan=2|7/2−# | rowspan=2| | rowspan=2| |- | β+ (13%) | 57Cu |-id=Zinc-58 | rowspan=2|58Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 28 | rowspan=2|57.954590(54) | rowspan=2|86.0(19) ms | β+ (99.3%) | 58Cu | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p (0.7%) | 57Ni |-id=Zinc-59 | rowspan=2|59Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 29 | rowspan=2|58.94931189(81) | rowspan=2|178.7(13) ms | β+ (99.90%) | 59Cu | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β+, p (0.10%) | 58Ni |-id=Zinc-60 | 60Zn | style="text-align:right" | 30 | style="text-align:right" | 30 | 59.94184132(59) | 2.38(5) min | β+ | 60Cu | 0+ | | |-id=Zinc-61 | 61Zn | style="text-align:right" | 30 | style="text-align:right" | 31 | 60.939507(17) | 89.1(2) s | β+ | 61Cu | 3/2− | | |-id=Zinc-62 | 62Zn | style="text-align:right" | 30 | style="text-align:right" | 32 | 61.93433336(66) | 9.193(15) h | β+ | 62Cu | 0+ | | |-id=Zinc-63 | 63Zn | style="text-align:right" | 30 | style="text-align:right" | 33 | 62.9332111(17) | 38.47(5) min | β+ | 63Cu | 3/2− | | |-id=Zinc-64 | 64Zn | style="text-align:right" | 30 | style="text-align:right" | 34 | 63.92914178(69) | colspan=3 align=center|Observationally Stable | 0+ | 0.4917(75) | |-id=Zinc-65 | rowspan="2"|65Zn | rowspan="2" style="text-align:right" | 30 | rowspan="2" style="text-align:right" | 35 | rowspan="2"|64.92924053(69) | rowspan="2"|243.94(4) d | EC (98.579(7)%) | rowspan="2"|65Cu | rowspan="2"|5/2− | rowspan="2"| | rowspan="2"| |- | β+ (1.421(7)%) |-id=Zinc-65m | style="text-indent:1em" | 65mZn | colspan="3" style="text-indent:2em" | 53.928(10) keV | 1.6(6) μs | IT | 65Zn | 1/2− | 
| |-id=Zinc-66 | 66Zn | style="text-align:right" | 30 | style="text-align:right" | 36 | 65.92603364(80) | colspan=3 align=center|Stable | 0+ | 0.2773(98) | |-id=Zinc-67 | 67Zn | style="text-align:right" | 30 | style="text-align:right" | 37 | 66.92712742(81) | colspan=3 align=center|Stable | 5/2− | 0.0404(16) | |-id=Zinc-67m1 | style="text-indent:1em" | 67m1Zn | colspan="3" style="text-indent:2em" | 93.312(5) keV | 9.15(7) μs | IT | 67Zn | 1/2− | | |-id=Zinc-67m2 | style="text-indent:1em" | 67m2Zn | colspan="3" style="text-indent:2em" | 604.48(5) keV | 333(14) ns | IT | 67Zn | 9/2+ | | |-id=Zinc-68 | 68Zn | style="text-align:right" | 30 | style="text-align:right" | 38 | 67.92484423(84) | colspan=3 align=center|Stable | 0+ | 0.1845(63) | |-id=Zinc-69 | 69Zn | style="text-align:right" | 30 | style="text-align:right" | 39 | 68.92655036(85) | 56.4(9) min | β− | 69Ga | 1/2− | | |-id=Zinc-69m | rowspan=2 style="text-indent:1em" | 69mZn | rowspan=2 colspan="3" style="text-indent:2em" | 438.636(18) keV | rowspan=2|13.747(11) h | IT (99.97%) | 69Zn | rowspan=2|9/2+ | rowspan=2| | rowspan=2| |- | β− (0.033%) | 69Ga |-id=Zinc-70 | 70Zn | style="text-align:right" | 30 | style="text-align:right" | 40 | 69.9253192(21) | colspan=3 align=center|Observationally Stable | 0+ | 0.0061(10) | |-id=Zinc-71 | 71Zn | style="text-align:right" | 30 | style="text-align:right" | 41 | 70.9277196(28) | 2.40(5) min | β− | 71Ga | 1/2− | | |-id=Zinc-71m | rowspan=2 style="text-indent:1em" | 71mZn | rowspan=2 colspan="3" style="text-indent:2em" | 157.7(13) keV | rowspan=2|4.148(12) h | β− | 71Ga | rowspan=2|9/2+ | rowspan=2| | rowspan=2| |- | IT? | 71Zn |-id=Zinc-72 | 72Zn | style="text-align:right" | 30 | style="text-align:right" | 42 | 71.9268428(23) | 46.5(1) h | β− | 72Ga | 0+ | | |-id=Zinc-73 | 73Zn | style="text-align:right" | 30 | style="text-align:right" | 43 | 72.9295826(20) | 24.5(2) s | β− | 73Ga | 1/2− | | |-id=Zinc-73m | style="text-indent:1em" | 73mZn | colspan="3" style="text-indent:2em" | 195.5(2) keV | 13.0(2) ms | IT | 73Zn | 5/2+ | | |-id=Zinc-74 | 74Zn | style="text-align:right" | 30 | style="text-align:right" | 44 | 73.9294073(27) | 95.6(12) s | β− | 74Ga | 0+ | | |-id=Zinc-75 | 75Zn | style="text-align:right" | 30 | style="text-align:right" | 45 | 74.9328402(21) | 10.2(2) s | β− | 75Ga | 7/2+ | | |-id=Zinc-75m | rowspan=2 style="text-indent:1em" | 75mZn | rowspan=2 colspan="3" style="text-indent:2em" | 126.94(9) keV | rowspan=2|5# s | β−? | 75Ga | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | IT? | 75Zn |-id=Zinc-76 | 76Zn | style="text-align:right" | 30 | style="text-align:right" | 46 | 75.9331150(16) | 5.7(3) s | β− | 76Ga | 0+ | | |-id=Zinc-77 | 77Zn | style="text-align:right" | 30 | style="text-align:right" | 47 | 76.9368872(21) | 2.08(5) s | β− | 77Ga | 7/2+ | | |-id=Zinc-77m | rowspan=2 style="text-indent:1em" | 77mZn | rowspan=2 colspan="3" style="text-indent:2em" | 772.440(15) keV | rowspan=2|1.05(10) s | β− (66%) | 77Ga | rowspan=2|1/2− | rowspan=2| | rowspan=2| |- | IT (34%) | 77Zn |-id=Zinc-78 | rowspan=2|78Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 48 | rowspan=2|77.9382892(21) | rowspan=2|1.47(15) s | β− | 78Ga | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? 
| 77Ga |-id=Zinc-78m | style="text-indent:1em" | 78mZn | colspan="3" style="text-indent:2em" | 2673.7(6) keV | 320(6) ns | IT | 78Zn | (8+) | | |-id=Zinc-79 | rowspan=2|79Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 49 | rowspan=2|78.9426381(24) | rowspan=2|746(42) ms | β− (98.3%) | 79Ga | rowspan=2|9/2+ | rowspan=2| | rowspan=2| |- | β−, n (1.7%) | 78Ga |-id=Zinc-79m | rowspan=2 style="text-indent:1em" | 79mZn | rowspan=2 colspan="3" style="text-indent:2em" | 942(10) keV | rowspan=2|>200 ms | β−? | 79Ga | rowspan=2|1/2+ | rowspan=2| | rowspan=2| |- | IT? | 79Zn |-id=Zinc-80 | rowspan=2|80Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 50 | rowspan=2|79.9445529(28) | rowspan=2|562.2(30) ms | β− (98.64%) | 80Ga | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (1.36%) | 79Ga |-id=Zinc-81 | rowspan=3|81Zn | rowspan=3 style="text-align:right" | 30 | rowspan=3 style="text-align:right" | 51 | rowspan=3|80.9504026(54) | rowspan=3|299.4(21) ms | β− (77%) | 81Ga | rowspan=3|(1/2+, 5/2+) | rowspan=3| | rowspan=3| |- | β−, n (23%) | 80Ga |- | β−, 2n? | 79Ga |-id=Zinc-82 | rowspan=3|82Zn | rowspan=3 style="text-align:right" | 30 | rowspan=3 style="text-align:right" | 52 | rowspan=3|81.9545741(33) | rowspan=3|177.9(25) ms | β−, n (69%) | 81Ga | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β− (31%) | 82Ga |- | β−, 2n? | 80Ga |-id=Zinc-83 | rowspan=3|83Zn | rowspan=3 style="text-align:right" | 30 | rowspan=3 style="text-align:right" | 53 | rowspan=3|82.96104(32)# | rowspan=3|100(3) ms | β−, n (71%) | 82Ga | rowspan=3|3/2+# | rowspan=3| | rowspan=3| |- | β− (29%) | 83Ga |- | β−, 2n? | 81Ga |-id=Zinc-84 | rowspan=3|84Zn | rowspan=3 style="text-align:right" | 30 | rowspan=3 style="text-align:right" | 54 | rowspan=3|83.96583(43)# | rowspan=3|54(8) ms | β−, n (73%) | 83Ga | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β− (27%) | 84Ga |- | β−, 2n? | 82Ga |-id=Zinc-85 | rowspan=3|85Zn | rowspan=3 style="text-align:right" | 30 | rowspan=3 style="text-align:right" | 55 | rowspan=3|84.97305(54)# | rowspan=3|40# ms [>400 ns] | β−? | 85Ga | rowspan=3|5/2+# | rowspan=3| | rowspan=3| |- | β−, n? | 84Ga |- | β−, 2n? | 83Ga |-id=Zinc-86 | rowspan=2|86Zn | rowspan=2 style="text-align:right" | 30 | rowspan=2 style="text-align:right" | 56 | rowspan=2|85.97846(54)# | rowspan=2| | β−? | 86Ga | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? | 85Ga |-id=Zinc-87 | 87Zn | style="text-align:right" | 30 | style="text-align:right" | 57 | | | | | | | References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. External links Zinc isotopes data from The Berkeley Laboratory Isotopes Project's Zinc
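The statement above that a 65Zn jacket would boost fallout radioactivity "for several years" follows directly from its 244-day half-life. The sketch below is not part of the original article; it is plain arithmetic on that one number and shows the fraction of the initial 65Zn activity remaining after one to five years.

def zn65_activity_fraction(t_days, t_half_days=244.0):
    # Fraction of the initial 65Zn activity remaining after t_days.
    return 2.0 ** (-t_days / t_half_days)

for years in (1, 2, 3, 5):
    print(years, zn65_activity_fraction(365.25 * years))
# roughly 0.35 after 1 year, 0.13 after 2, 0.045 after 3, and 0.006 after 5 years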
Isotopes of zinc
[ "Chemistry" ]
4,400
[ "Lists of isotopes by element", "Isotopes", "Isotopes of zinc" ]
2,527,035
https://en.wikipedia.org/wiki/Isotopes%20of%20gallium
Natural gallium (31Ga) consists of a mixture of two stable isotopes: gallium-69 and gallium-71. Twenty-nine radioisotopes are known, all synthetic, with atomic masses ranging from 60 to 89; along with three nuclear isomers, 64mGa, 72mGa and 74mGa. Most of the isotopes with atomic mass numbers below 69 decay to isotopes of zinc, while most of the isotopes with masses above 71 decay to isotopes of germanium. Among them, the most commercially important radioisotopes are gallium-67 and gallium-68. Gallium-67 (half-life 3.3 days) is a gamma-emitting isotope (the gamma ray emitted immediately after electron capture) used in standard nuclear medical imaging, in procedures usually referred to as gallium scans. It is usually used as the free ion, Ga3+. It is the longest-lived radioisotope of gallium. The shorter-lived gallium-68 (half-life 68 minutes) is a positron-emitting isotope generated in very small quantities from germanium-68 in gallium-68 generators or in much greater quantities by proton bombardment of 68Zn in low-energy medical cyclotrons, for use in a small minority of diagnostic PET scans. For this use, it is usually attached as a tracer to a carrier molecule (for example the somatostatin analogue DOTATOC), which gives the resulting radiopharmaceutical a different tissue-uptake specificity from the ionic 67Ga radioisotope normally used in standard gallium scans. List of isotopes |-id=Gallium-60 | rowspan=3|60Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 29 | rowspan=3|59.95750(22)# | rowspan=3|72.4(17) ms | β+ (98.4%) | 60Zn | rowspan=3|(2+) | rowspan=3| | rowspan=3| |- | β+, p (1.6%) | 59Cu |- | β+, α? (<0.023%) | 56Ni |-id=Gallium-61 | rowspan=2|61Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 30 | rowspan=2|60.949399(41) | rowspan=2|165.9(25) ms | β+ | 61Zn | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β+, p? 
(<0.25%) | 60Cu |-id=Gallium-62 | 62Ga | style="text-align:right" | 31 | style="text-align:right" | 31 | 61.94418964(68) | 116.122(21) ms | β+ | 62Zn | 0+ | | |-id=Gallium-63 | 63Ga | style="text-align:right" | 31 | style="text-align:right" | 32 | 62.9392942(14) | 32.4(5) s | β+ | 63Zn | 3/2− | | |-id=Gallium-64 | 64Ga | style="text-align:right" | 31 | style="text-align:right" | 33 | 63.9368404(15) | 2.627(12) min | β+ | 64Zn | 0(+#) | | |-id=Gallium-64m | style="text-indent:1em" | 64mGa | colspan="3" style="text-indent:2em" | 42.85(8) keV | 21.9(7) μs | IT | 64Ga | (2+) | | |-id=Gallium-65 | 65Ga | style="text-align:right" | 31 | style="text-align:right" | 34 | 64.93273442(85) | 15.133(28) min | β+ | 65Zn | 3/2− | | |-id=Gallium-66 | 66Ga | style="text-align:right" | 31 | style="text-align:right" | 35 | 65.9315898(12) | 9.304(8) h | β+ | 66Zn | 0+ | | |- | 67Ga | style="text-align:right" | 31 | style="text-align:right" | 36 | 66.9282023(13) | 3.2617(4) d | EC | 67Zn | 3/2− | | |- | 68Ga | style="text-align:right" | 31 | style="text-align:right" | 37 | 67.9279802(15) | 67.842(16) min | β+ | 68Zn | 1+ | | |-id=Gallium-69 | 69Ga | style="text-align:right" | 31 | style="text-align:right" | 38 | 68.9255735(13) | colspan=3 align=center|Stable | 3/2− | 0.60108(50) | |-id=Gallium-70 | rowspan=2|70Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 39 | rowspan=2|69.9260219(13) | rowspan=2|21.14(5) min | β− (99.59%) | 70Ge | rowspan=2|1+ | rowspan=2| | rowspan=2| |- | EC (0.41%) | 70Zn |-id=Gallium-71 | 71Ga | style="text-align:right" | 31 | style="text-align:right" | 40 | 70.92470255(87) | colspan=3 align=center|Stable | 3/2− | 0.39892(50) | |-id=Gallium-72 | 72Ga | style="text-align:right" | 31 | style="text-align:right" | 41 | 71.92636745(88) | 14.025(10) h | β− | 72Ge | 3− | | |-id=Gallium-72m | style="text-indent:1em" | 72mGa | colspan="3" style="text-indent:2em" | 119.66(5) keV | 39.68(13) ms | IT | 72Ga | (0+) | | |-id=Gallium-73 | 73Ga | style="text-align:right" | 31 | style="text-align:right" | 42 | 72.9251747(18) | 4.86(3) h | β− | 73Ge | 1/2− | | |-id=Gallium-73m | rowspan=2 style="text-indent:1em" | 73mGa | rowspan=2 colspan="3" style="text-indent:2em" | 0.15(9) keV | rowspan=2|<200 ms | IT? | 73Ga | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β− | 73Ge |-id=Gallium-74 | 74Ga | style="text-align:right" | 31 | style="text-align:right" | 43 | 73.9269457(32) | 8.12(12) min | β− | 74Ge | (3−) | | |-id=Gallium-74m | rowspan=2 style="text-indent:1em" | 74mGa | rowspan=2 colspan="3" style="text-indent:2em" | 59.571(14) keV | rowspan=2|9.5(10) s | IT (>75%) | 74Ga | rowspan=2|(0)(+#) | rowspan=2| | rowspan=2| |- | β−? 
(<25%) | 74Ge |-id=Gallium-75 | 75Ga | style="text-align:right" | 31 | style="text-align:right" | 44 | 74.92650448(72) | 126(2) s | β− | 75Ge | 3/2− | | |-id=Gallium-76 | 76Ga | style="text-align:right" | 31 | style="text-align:right" | 45 | 75.9288276(21) | 30.6(6) s | β− | 76Ge | 2− | | |-id=Gallium-77 | rowspan=2|77Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 46 | rowspan=2|76.9291543(26) | rowspan=2|13.2(2) s | rowspan=2|β− | 77mGe (88%) | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | 77Ge (12%) |-id=Gallium-78 | 78Ga | style="text-align:right" | 31 | style="text-align:right" | 47 | 77.9316109(11) | 5.09(5) s | β− | 78Ge | 2− | | |-id=Gallium-78m | style="text-indent:1em" | 78mGa | colspan="3" style="text-indent:2em" | 498.9(5) keV | 110(3) ns | IT | 78Ga | | | |-id=Gallium-79 | rowspan=2|79Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 48 | rowspan=2|78.9328516(13) | rowspan=2|2.848(3) s | β− (99.911%) | 79Ge | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β−, n (0.089%) | 78Ge |-id=Gallium-80 | rowspan=2|80Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 49 | rowspan=2|79.9364208(31) | rowspan=2|1.9(1) s | β− (99.14%) | 80Ge | rowspan=2|6− | rowspan=2| | rowspan=2| |- | β−, n (.86%) | 79Ge |-id=Gallium-80m | rowspan=3 style="text-indent:1em" | 80mGa | rowspan=3 colspan="3" style="text-indent:2em" | 22.45(10) keV | rowspan=3|1.3(2) s | β− | 80Ge | rowspan=3|3− | rowspan=3| | rowspan=3| |- | β−, n? | 79Ge |- | IT | 80Ga |-id=Gallium-81 | rowspan=2|81Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 50 | rowspan=2|80.9381338(35) | rowspan=2|1.217(5) s | β− (87.5%) | 81mGe | rowspan=2|5/2− | rowspan=2| | rowspan=2| |- | β−, n (12.5%) | 80Ge |-id=Gallium-82 | rowspan=3|82Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 51 | rowspan=3|81.9431765(26) | rowspan=3|600(2) ms | β− (78.8%) | 82Ge | rowspan=3|2− | rowspan=3| | rowspan=3| |- | β−, n (21.2%) | 81Ge |- | β−, 2n? | 80Ge |-id=Gallium-82m | style="text-indent:1em" | 82mGa | colspan="3" style="text-indent:2em" | 140.7(3) keV | 93.5(67) ns | IT | 82Ga | (4−) | | |-id=Gallium-83 | rowspan=3|83Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 52 | rowspan=3|82.9471203(28) | rowspan=3|310.0(7) ms | β−, n (85%) | 82Ge | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β− (15%) | 83Ge |- | β−, 2n? 
| 81Ge |-id=Gallium-84 | rowspan=3|84Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 53 | rowspan=3|83.952663(32) | rowspan=3|97.6(12) ms | β− (55%) | 84Ge | rowspan=3|0−# | rowspan=3| | rowspan=3| |- | β−, n (43%) | 83Ge |- | β−, 2n (1.6%) | 82Ge |-id=Gallium-85 | rowspan=3|85Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 54 | rowspan=3|84.957333(40) | rowspan=3|95.3(10) ms | β−, n (77%) | 84Ge | rowspan=3|(5/2−) | rowspan=3| | rowspan=3| |- | β− (22%) | 85Ge |- | β−, 2n (1.3%) | 83Ge |-id=Gallium-86 | rowspan=3|86Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 55 | rowspan=3|85.96376(43)# | rowspan=3|49(2) ms | β−, n (69%) | 85Ge | rowspan=3| | rowspan=3| | rowspan=3| |- | β−, 2n (16.2%) | 84Ge |- | β− (15%) | 86Ge |-id=Gallium-87 | rowspan=3|87Ga | rowspan=3 style="text-align:right" | 31 | rowspan=3 style="text-align:right" | 56 | rowspan=3|86.96901(54)# | rowspan=3|29(4) ms | β−, n (81%) | 86Ge | rowspan=3|5/2−# | rowspan=3| | rowspan=3| |- | β−, 2n (10.2%) | 85Ge |- | β− (9%) | 87Ge |-id=Gallium-88 | rowspan=2|88Ga | rowspan=2 style="text-align:right" | 31 | rowspan=2 style="text-align:right" | 57 | rowspan=2|87.97596(54)# | rowspan=2| | β−? | 88Ge | rowspan=2| | rowspan=2| | rowspan=2| |- | β−, n? | 87Ge |-id=Gallium-89 | 89Ga | style="text-align:right" | 31 | style="text-align:right" | 58 | | | | | | | Gallium-67 Gallium-67 (67Ga) has a half-life of 3.26 days and decays by electron capture, with gamma emission during de-excitation, to stable zinc-67. It is a radiopharmaceutical used in gallium scans (alternatively, the shorter-lived gallium-68 may be used). This gamma-emitting isotope is imaged with a gamma camera. Gallium-68 Gallium-68 (68Ga) is a positron emitter with a half-life of 68 minutes, decaying to stable zinc-68. It is a radiopharmaceutical; owing to its short half-life, it is generated in situ by the electron-capture decay of germanium-68 (half-life 271 days) in a gallium-68 generator. This positron-emitting isotope can be imaged efficiently by PET scan (see gallium scan); alternatively, the longer-lived gallium-67 may be used. Gallium-68 is used only as a positron-emitting tag on a ligand that binds to certain tissues, such as DOTATOC, a somatostatin analogue useful for imaging neuroendocrine tumors. Gallium-68 DOTA scans are increasingly replacing octreotide scans (a type of indium-111 scan using octreotide as a somatostatin receptor ligand). The 68Ga is bound to a chemical such as DOTATOC, and the positrons it emits are imaged by PET-CT scan. Such scans are useful in locating neuroendocrine tumors and pancreatic cancer. Thus, octreotide scanning for neuroendocrine tumors is increasingly being replaced by gallium-68 DOTATOC scans. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Gallium Gallium
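The statement above that 68Ga is generated in situ from long-lived 68Ge can be made quantitative with the standard parent-daughter decay relation. The sketch below is not from the original article; it assumes the textbook Bateman expression for daughter activity and uses only the half-lives quoted above (271 days for 68Ge, 68 minutes for 68Ga) to show how quickly a generator re-accumulates 68Ga after being eluted.

import math

T_PARENT = 271 * 24 * 60   # 68Ge half-life, minutes
T_DAUGHTER = 68.0          # 68Ga half-life, minutes
lam_p = math.log(2) / T_PARENT
lam_d = math.log(2) / T_DAUGHTER

def ga68_activity_fraction(t_minutes):
    # 68Ga activity relative to the 68Ge activity, t_minutes after a complete elution
    # (two-member Bateman chain; the parent barely decays on this timescale).
    return (lam_d / (lam_d - lam_p)) * (math.exp(-lam_p * t_minutes) - math.exp(-lam_d * t_minutes))

for hours in (1, 2, 4, 6):
    print(hours, ga68_activity_fraction(60 * hours))
# about 0.46 after 1 h, 0.71 after 2 h, 0.91 after 4 h, and 0.97 after 6 h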
Isotopes of gallium
[ "Chemistry" ]
4,217
[ "Isotopes of gallium", "Lists of isotopes by element", "Isotopes" ]
2,527,036
https://en.wikipedia.org/wiki/Isotopes%20of%20copper
Copper (29Cu) has two stable isotopes, 63Cu and 65Cu, along with 28 radioisotopes. The most stable radioisotope is 67Cu with a half-life of 61.83 hours. Most of the others have half-lives under a minute. Unstable copper isotopes with atomic masses below 63 tend to undergo β+ decay, while isotopes with atomic masses above 65 tend to undergo β− decay. 64Cu decays by both β+ and β−. There are at least 10 metastable isomers of copper, including two each for 70Cu and 75Cu. The most stable of these is 68mCu with a half-life of 3.75 minutes. The least stable is 75m2Cu with a half-life of 149 ns. List of isotopes |-id=Copper-55 | rowspan=2|55Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 26 | rowspan=2|54.96604(17) | rowspan=2|55.9(15) ms | β+ | 55Ni | rowspan=2|3/2−# | rowspan=2| | rowspan=2| |- | β+, p (?%) | 54Co |-id=Copper-56 | rowspan=2|56Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 27 | rowspan=2|55.9585293(69) | rowspan=2|80.8(6) ms | β+ (99.60%) | 56Ni | rowspan=2|(4+) | rowspan=2| | rowspan=2| |- | β+, p (0.40%) | 55Co |-id=Copper-57 | 57Cu | style="text-align:right" | 29 | style="text-align:right" | 28 | 56.94921169(54) | 196.4(7) ms | β+ | 57Ni | 3/2− | | |-id=Copper-58 | 58Cu | style="text-align:right" | 29 | style="text-align:right" | 29 | 57.94453228(60) | 3.204(7) s | β+ | 58Ni | 1+ | | |-id=Copper-59 | 59Cu | style="text-align:right" | 29 | style="text-align:right" | 30 | 58.93949671(57) | 81.5(5) s | β+ | 59Ni | 3/2− | | |-id=Copper-60 | 60Cu | style="text-align:right" | 29 | style="text-align:right" | 31 | 59.9373638(17) | 23.7(4) min | β+ | 60Ni | 2+ | | |-id=Copper-61 | 61Cu | style="text-align:right" | 29 | style="text-align:right" | 32 | 60.9334574(10) | 3.343(16) h | β+ | 61Ni | 3/2− | | |-id=Copper-62 | 62Cu | style="text-align:right" | 29 | style="text-align:right" | 33 | 61.9325948(07) | 9.672(8) m | β+ | 62Ni | 1+ | | |-id=Copper-63 | 63Cu | style="text-align:right" | 29 | style="text-align:right" | 34 | 62.92959712(46) | colspan=3 align=center|Stable | 3/2− | 0.6915(15) | |- | rowspan=2|64Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 35 | rowspan=2|63.92976400(46) | rowspan=2|12.7004(13) h | β+ (61.52%) | 64Ni | rowspan=2|1+ | rowspan=2| | rowspan=2| |- | β− (38.48%) | 64Zn |-id=Copper-65 | 65Cu | style="text-align:right" | 29 | style="text-align:right" | 36 | 64.92778948(69) | colspan=3 align=center|Stable | 3/2− | 0.3085(15) | |-id=Copper-66 | 66Cu | style="text-align:right" | 29 | style="text-align:right" | 37 | 65.92886880(70) | 5.120(14) min | β− | 66Zn | 1+ | | |-id=Copper-66m | style="text-indent:1em" | 66mCu | colspan="3" style="text-indent:2em" | 1154.2(14) keV | 600(17) ns | IT | 68Cu | (6)− | | |-id=Copper-67 | 67Cu | style="text-align:right" | 29 | style="text-align:right" | 38 | 66.92772949(96) | 61.83(12) h | β− | 67Zn | 3/2− | | |-id=Copper-68 | 68Cu | style="text-align:right" | 29 | style="text-align:right" | 39 | 67.9296109(17) | 30.9(6) s | β− | 68Zn | 1+ | | |-id=Copper-68m | rowspan=2 style="text-indent:1em" | 68mCu | rowspan=2 colspan="3" style="text-indent:2em" | 721.26(8) keV | rowspan=2|3.75(5) min | IT (86%) | 68Cu | rowspan=2|6− | rowspan=2| | rowspan=2| |- | β− (14%) | 68Zn |-id=Copper-69 | 69Cu | style="text-align:right" | 29 | style="text-align:right" | 40 | 68.929429267(15) | 2.85(15) min | β− | 69Zn | 3/2− | | |-id=Copper-69m | style="text-indent:1em" | 69mCu | colspan="3" style="text-indent:2em" | 2742.0(7) keV | 357(2) ns | 
IT | 69Cu | (13/2+) | | |-id=Copper-70 | 70Cu | style="text-align:right" | 29 | style="text-align:right" | 41 | 69.9323921(12) | 44.5(2) s | β− | 70Zn | 6− | | |-id=Copper-70m1 | rowspan=2 style="text-indent:1em" | 70m1Cu | rowspan=2 colspan="3" style="text-indent:2em" | 101.1(3) keV | rowspan=2|33(2) s | β− (52%) | 70Zn | rowspan=2|3− | rowspan=2| | rowspan=2| |- | IT (48%) | 70Cu |-id=Copper-70m2 | rowspan=2 style="text-indent:1em" | 70m2Cu | rowspan=2 colspan="3" style="text-indent:2em" | 242.6(5) keV | rowspan=2|6.6(2) s | β− (93.2%) | 70Zn | rowspan=2|1+ | rowspan=2| | rowspan=2| |- | IT (6.8%) | 70Cu |-id=Copper-71 | 71Cu | style="text-align:right" | 29 | style="text-align:right" | 42 | 70.9326768(16) | 19.4(14) s | β− | 71Zn | 3/2− | | |-id=Copper-71m | style="text-indent:1em" | 71mCu | colspan="3" style="text-indent:2em" | 2755.7(6) keV | 271(13) ns | IT | 71Cu | (19/2−) | | |-id=Copper-72 | 72Cu | style="text-align:right" | 29 | style="text-align:right" | 43 | 71.9358203(15) | 6.63(3) s | β− | 72Zn | 2− | | |-id=Copper-72m | style="text-indent:1em" | 72mCu | colspan="3" style="text-indent:2em" | 270(3) keV | 1.76(3) μs | IT | 72Cu | (6−) | | |-id=Copper-73 | rowspan=2|73Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 44 | rowspan=2|72.9366744(21) | rowspan=2|4.20(12) s | β− (99.71%) | 73Zn | rowspan=2|3/2− | rowspan=2| | rowspan=2| |- | β−, n (0.29%) | 72Zn |-id=Copper-74 | rowspan=2|74Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 45 | rowspan=2|73.9398749(66) | rowspan=2|1.606(9) s | β− (99.93%) | 74Zn | rowspan=2|2− | rowspan=2| | rowspan=2| |- | β−, n (0.075%) | 73Zn |-id=Copper-75 | rowspan=2|75Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 46 | rowspan=2|74.94152382(77) | rowspan=2|1.224(3) s | β− (97.3%) | 75Zn | rowspan=2|5/2− | rowspan=2| | rowspan=2| |- | β−, n (2.7%) | 74Zn |-id=Copper-75m1 | style="text-indent:1em" | 75m1Cu | colspan="3" style="text-indent:2em" | 61.7(4) keV | 0.310(8) μs | IT | 75Cu | 1/2− | | |-id=Copper-75m2 | style="text-indent:1em" | 75m2Cu | colspan="3" style="text-indent:2em" | 66.2(4) keV | 0.149(5) μs | IT | 75Cu | 3/2− | | |-id=Copper-76 | rowspan=2|76Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 47 | rowspan=2|75.9452370(21) | rowspan=2|1.27(30) s | β− (?%) | 76Zn | rowspan=2|(1,2) | rowspan=2| | rowspan=2| |- | β−, n (?%) | 75Zn |-id=Copper-76m | rowspan=3 style="text-indent:1em" | 76mCu | rowspan=3 colspan="3" style="text-indent:2em" | 64.8(25) keV | rowspan=3|637.7(55) ms | β− (?%) | 76Zn | rowspan=3|3− | rowspan=3| | rowspan=3| |- | β−, n (?%) | 75Zn |- | IT (10–17%) | 76Cu |-id=Copper-77 | rowspan=2|77Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 48 | rowspan=2|76.9475436(13) | rowspan=2|470.3(17) ms | β− (69.9%) | 77Zn | rowspan=2|5/2− | rowspan=2| | rowspan=2| |- | β−, n (30.1%) | 76Zn |-id=Copper-78 | rowspan=2|78Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 49 | rowspan=2|77.9519206(81) | rowspan=2|330.7(20) ms | β−, n (50.6%) | 77Zn | rowspan=2|(6−) | rowspan=2| | rowspan=2| |- | β− (49.4%) | 78Zn |-id=Copper-79 | rowspan=2|79Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 50 | rowspan=2|78.95447(11) | rowspan=2|241.3(21) ms | β−, n (66%) | 78Zn | rowspan=2|(5/2−) | rowspan=2| | rowspan=2| |- | β− (34%) | 79Zn |-id=Copper-80 | rowspan=2|80Cu | rowspan=2 
style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 51 | rowspan=2|79.96062(32)# | rowspan=2|113.3(64) ms | β−, n (59%) | 79Zn | rowspan=2| | rowspan=2| | rowspan=2| |- | β− (41%) | 80Zn |-id=Copper-81 | rowspan=2|81Cu | rowspan=2 style="text-align:right" | 29 | rowspan=2 style="text-align:right" | 52 | rowspan=2| 80.96574(32)# | rowspan=2| 73.2(68) ms | β−, n (81%) | 80Zn | rowspan=2|5/2−# | rowspan=2| | rowspan=2| |- | β− (19%) | 81Zn |-id=Copper-82 | 82Cu | style="text-align:right" | 29 | style="text-align:right" | 53 | 81.97238(43)# | 34(7) ms | β− | 82Zn | | | |-id=Copper-83 | 83Cu | style="text-align:right" | 29 | style="text-align:right" | 54 | 82.97811(54)# | 21# ms [>410 ns] | | | 5/2−# | | |-id=Copper-84 | 84Cu | style="text-align:right" | 29 | style="text-align:right" | 55 | 83.98527(54)# | | | | | | Copper nuclear magnetic resonance Both stable isotopes of copper (63Cu and 65Cu) have nuclear spin of 3/2−, and thus produce nuclear magnetic resonance spectra, although the spectral lines are broad due to quadrupolar broadening. 63Cu is the more sensitive nucleus while 65Cu yields very slightly narrower signals. Usually though 63Cu NMR is preferred. Medical applications Copper offers a relatively large number of radioisotopes that are potentially useful for nuclear medicine. There is growing interest in the use of Cu, Cu, Cu, and Cu for diagnostic purposes and Cu and Cu for targeted radiotherapy. For example, Cu has a longer half-life than most positron-emitters (12.7 hours) and is thus ideal for diagnostic PET imaging of biological molecules. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Application of Copper radioisotopes in Medicine (Review Paper): Copper Copper
Isotopes of copper
[ "Chemistry" ]
3,945
[ "Isotopes of copper", "Lists of isotopes by element", "Isotopes" ]
2,527,039
https://en.wikipedia.org/wiki/Isotopes%20of%20nickel
Naturally occurring nickel (28Ni) is composed of five stable isotopes; , , , and , with being the most abundant (68.077% natural abundance). 26 radioisotopes have been characterised with the most stable being with a half-life of 81,000 years, with a half-life of 100.1 years, and with a half-life of 6.077 days. All of the remaining radioactive isotopes have half-lives that are less than 60 hours and the majority of these have half-lives that are less than 30 seconds. This element also has 8 meta states. List of isotopes |- |rowspan=3| |rowspan=3 style="text-align:right" | 28 |rowspan=3 style="text-align:right" | 20 |rowspan=3| 48.01952(46)# |rowspan=3| 2.8(8) ms |2p (70%) | |rowspan=3| 0+ |rowspan=3| |rowspan=3| |- |β+ (30%) | |- |β+, p? | |-id=Nickel-49 |rowspan=2| |rowspan=2 style="text-align:right" | 28 |rowspan=2 style="text-align:right" | 21 |rowspan=2| 49.00916(64)# |rowspan=2| 7.5(10) ms |β+, p (83%) | |rowspan=2| 7/2−# |rowspan=2| |rowspan=2| |- |β+ (17%) | |-id=Nickel-50 |rowspan=3| |rowspan=3 style="text-align:right" | 28 |rowspan=3 style="text-align:right" | 22 |rowspan=3| 49.99629(54)# |rowspan=3| 18.5(12) ms | β+, p (73%) | |rowspan=3| 0+ |rowspan=3| |rowspan=3| |- |β+, 2p (14%) | |- |β+ (13%) | |-id=Nickel-51 |rowspan=3| |rowspan=3 style="text-align:right" | 28 |rowspan=3 style="text-align:right" | 23 |rowspan=3| 50.98749(54)# |rowspan=3| 23.8(2) ms | β+, p (87.2%) | |rowspan=3| 7/2−# |rowspan=3| |rowspan=3| |- |β+ (12.3%) | |- |β+, 2p (0.5%) | |-id=Nickel-52 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 24 | rowspan=2|51.975781(89) | rowspan=2|41.8(10) ms | β+ (68.9%) | | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p (31.1%) | |-id=Nickel-53 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 25 | rowspan=2|52.968190(27) | rowspan=2|55.2(7) ms | β+ (77.3%) | | rowspan=2|(7/2−) | rowspan=2| | rowspan=2| |- | β+, p (22.7%) | |-id=Nickel-54 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 26 | rowspan=2|53.9578330(50) | rowspan=2|114.1(3) ms | β+ | | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+, p? 
| |-id=Nickel-54m | rowspan=2 style="text-indent:1em" | | rowspan=2 colspan="3" style="text-indent:2em" | 6457.4(9) keV | rowspan=2|152(4) ns | IT (64%) | | rowspan=2|10+ | rowspan=2| | rowspan=2| |- | p (36%) | |-id=Nickel-55 | | style="text-align:right" | 28 | style="text-align:right" | 27 | 54.95132985(76) | 203.9(13) ms | β+ | | 7/2− | | |- | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 28 | rowspan=2|55.94212776(43) | rowspan=2|6.075(10) d | EC | | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β+ (<%) | |-id=Nickel-57 | | style="text-align:right" | 28 | style="text-align:right" | 29 | 56.93979139(61) | 35.60(6) h | β+ | | 3/2− | | |- | | style="text-align:right" | 28 | style="text-align:right" | 30 | 57.93534165(37) | colspan=3 align=center|Observationally stable | 0+ | 0.680769(190) | |- | rowspan=2 | | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 31 | rowspan=2 | 58.93434544(38) | rowspan=2 | 8.1(5)×104 y | EC (99%) | rowspan=2 | | rowspan=2 | 3/2− | rowspan=2 | | rowspan=2 | |- | β+ (1.5%) |- | | style="text-align:right" | 28 | style="text-align:right" | 32 | 59.93078513(38) | colspan=3 align=center|Stable | 0+ | 0.262231(150) | |-id=Nickel-61 | | style="text-align:right" | 28 | style="text-align:right" | 33 | 60.93105482(38) | colspan=3 align=center|Stable | 3/2− | 0.011399(13) | |- | | style="text-align:right" | 28 | style="text-align:right" | 34 | 61.92834475(46) | colspan=3 align=center|Stable | 0+ | 0.036345(40) | |- | | style="text-align:right" | 28 | style="text-align:right" | 35 | 62.92966902(46) | 101.2(15) y | β− | | 1/2− | | |-id=Nickel-63m | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 87.15(11) keV | 1.67(3) μs | IT | 63Ni | 5/2− | | |- | | style="text-align:right" | 28 | style="text-align:right" | 36 | 63.92796623(50) | colspan=3 align=center|Stable | 0+ | 0.009256(19) | |-id=Nickel-65 | | style="text-align:right" | 28 | style="text-align:right" | 37 | 64.93008459(52) | 2.5175(5) h | β− | | 5/2− | | |-id=Nickel-65m | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 63.37(5) keV | 69(3) μs | IT | 65Ni | 1/2− | | |-id=Nickel-66 | | style="text-align:right" | 28 | style="text-align:right" | 38 | 65.9291393(15) | 54.6(3) h | β− | | 0+ | | |-id=Nickel-67 | | style="text-align:right" | 28 | style="text-align:right" | 39 | 66.9315694(31) | 21(1) s | β− | | 1/2− | | |-id=Nickel-67m | rowspan=2 style="text-indent:1em" | | rowspan=2 colspan="3" style="text-indent:2em" | 1006.6(2) keV | rowspan=2|13.34(19) μs | IT | | rowspan=2|9/2+ | rowspan=2| | rowspan=2| |- | IT | |-id=Nickel-68 | | style="text-align:right" | 28 | style="text-align:right" | 40 | 67.9318688(32) | 29(2) s | β− | | 0+ | | |-id=Nickel-68m1 | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 1603.51(28) keV | 270(5) ns | IT | 68Ni | 0+ | | |-id=Nickel-68m2 | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 2849.1(3) keV | 850(30) μs | IT | 68Ni | 5− | | |-id=Nickel-69 | | style="text-align:right" | 28 | style="text-align:right" | 41 | 68.9356103(40) | 11.4(3) s | β− | | (9/2+) | | |-id=Nickel-69m1 | rowspan=2 style="text-indent:1em" | | rowspan=2 colspan="3" style="text-indent:2em" | 321(2) keV | rowspan=2|3.5(4) s | β− | | rowspan=2|(1/2−) | rowspan=2| | rowspan=2| |- | IT (<0.01%) | |-id=Nickel-69m2 | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 2700.0(10) keV | 439(3) ns | IT | 69Ni | (17/2−) | | |-id=Nickel-70 | | style="text-align:right" | 
28 | style="text-align:right" | 42 | 69.9364313(23) | 6.0(3) s | β− | | 0+ | | |-id=Nickel-70m | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 2860.91(8) keV | 232(1) ns | IT | 70Ni | 8+ | | |-id=Nickel-71 | | style="text-align:right" | 28 | style="text-align:right" | 43 | 70.9405190(24) | 2.56(3) s | β− | | (9/2+) | | |-id=Nickel-71m | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 499(5) keV | 2.3(3) s | β− | 71Cu | (1/2−) | | |-id=Nickel-72 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 44 | rowspan=2|71.9417859(24) | rowspan=2|1.57(5) s | β− | | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? | |-id=Nickel-73 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 45 | rowspan=2|72.9462067(26) | rowspan=2|840(30) ms | β− | | rowspan=2|(9/2+) | rowspan=2| | rowspan=2| |- | β−, n? | |-id=Nickel-74 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 46 | rowspan=2|73.9479853(38) | rowspan=2|507.7(46) ms | β− | | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n? | |-id=Nickel-75 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 47 | rowspan=2|74.952704(16) | rowspan=2|331.6(32) ms | β− (90.0%) | | rowspan=2|9/2+# | rowspan=2| | rowspan=2| |- | β−, n (10.0%) | |-id=Nickel-76 | rowspan=2| | rowspan=2 style="text-align:right" | 28 | rowspan=2 style="text-align:right" | 48 | rowspan=2|75.95471(32)# | rowspan=2|234.6(27) ms | β− (86.0%) | | rowspan=2|0+ | rowspan=2| | rowspan=2| |- | β−, n (14.0%) | |-id=Nickel-76m | style="text-indent:1em" | | colspan="3" style="text-indent:2em" | 2418.0(5) keV | 547.8(33) ns | IT | 76Ni | (8+) | | |-id=Nickel-77 | rowspan=3| | rowspan=3 style="text-align:right" | 28 | rowspan=3 style="text-align:right" | 49 | rowspan=3|76.95990(43)# | rowspan=3|158.9(42) ms | β− (74%) | | rowspan=3|9/2+# | rowspan=3| | rowspan=3| |- | β−, n (26%) | |- | β−, 2n? | |- | rowspan=3| | rowspan=3 style="text-align:right" | 28 | rowspan=3 style="text-align:right" | 50 | rowspan=3|77.96256(43)# | rowspan=3|122.2(51) ms | β− | | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β−, n? | |- | β−, 2n? | |-id=Nickel-79 | rowspan=3| | rowspan=3 style="text-align:right" | 28 | rowspan=3 style="text-align:right" | 51 | rowspan=3|78.96977(54)# | rowspan=3|44(8) ms | β− | | rowspan=3|5/2+# | rowspan=3| | rowspan=3| |- | β−, n? | |- | β−, 2n? | |-id=Nickel-80 | rowspan=3| | rowspan=3 style="text-align:right" | 28 | rowspan=3 style="text-align:right" | 52 | rowspan=3|79.97505(64)# | rowspan=3|30(22) ms | β− | | rowspan=3|0+ | rowspan=3| | rowspan=3| |- | β−, n? | |- | β−, 2n? | |-id=Nickel-81 | | style="text-align:right" | 28 | style="text-align:right" | 53 | 80.98273(75)# | 30# ms[>410 ns] | β−? | | 3/2+# | | |-id=Nickel-82 | | style="text-align:right" | 28 | style="text-align:right" | 54 | 81.98849(86)# | 16# ms[>410 ns] | β−? | | 0+ | | Notable isotopes The known isotopes of nickel range in mass number from to , and include: Nickel-48, discovered in 1999, is the most neutron-poor nickel isotope known. With 28 protons and 20 neutrons is "doubly magic" (like ) and therefore much more stable (with a lower limit of its half-life-time of .5 μs) than would be expected from its position in the chart of nuclides. It has the highest ratio of protons to neutrons (proton excess) of any known doubly magic nuclide. Nickel-56 is produced in large quantities in supernovae. 
In the last phases of stellar evolution of very large stars, nuclear fusion of lighter elements like hydrogen and helium comes to an end. Later in the star's life cycle, elements including magnesium, silicon, and sulfur are fused to form heavier elements. Once the last nuclear fusion reactions cease, the star collapses to produce a supernova. During the supernova, silicon burning produces 56Ni. This isotope of nickel is favored because it has an equal number of neutrons and protons, making it readily produced by fusing two 28Si nuclei. 56Ni is the final nuclide that can be formed in the alpha process; beyond 56Ni, further reactions would be endoergic and therefore energetically unfavorable. Once 56Ni is formed, it subsequently decays to 56Co and then to 56Fe by β+ decay. The radioactive decay of 56Ni and 56Co supplies much of the energy for the light curves observed for stellar supernovae. The shape of the light curve of these supernovae displays characteristic timescales corresponding to the decay of 56Ni to 56Co and then to 56Fe. 
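A minimal numerical sketch of the two-step decay chain described above, using the Bateman solution. The 56Ni half-life is the 6.075 d value from the table above; the 56Co half-life of roughly 77 days is an assumed literature value not given in this article, and the function and variable names are illustrative only.

```python
import numpy as np

# Half-lives: 56Ni from the table above (6.075 d); 56Co (~77.2 d) is an
# assumed literature value, not taken from this article.
T_NI56 = 6.075          # days
T_CO56 = 77.2           # days (assumption)
lam_ni = np.log(2) / T_NI56
lam_co = np.log(2) / T_CO56

def abundances(t, n0=1.0):
    """Bateman solution for the chain 56Ni -> 56Co -> 56Fe (stable).

    Returns the number of 56Ni, 56Co and 56Fe nuclei at time t (days),
    starting from n0 nuclei of pure 56Ni.
    """
    n_ni = n0 * np.exp(-lam_ni * t)
    n_co = n0 * lam_ni / (lam_co - lam_ni) * (np.exp(-lam_ni * t) - np.exp(-lam_co * t))
    n_fe = n0 - n_ni - n_co
    return n_ni, n_co, n_fe

def decay_power(t, n0=1.0):
    """Relative decay rate (activity) powering the light curve: early on it
    falls on the ~6-day 56Ni timescale, later on the ~77-day 56Co timescale."""
    n_ni, n_co, _ = abundances(t, n0)
    return lam_ni * n_ni + lam_co * n_co

for t in (1, 10, 30, 100, 300):
    print(f"t = {t:4d} d  relative decay rate = {decay_power(t):.3e}")
```

On this simple model the early part of the curve falls on the 56Ni timescale and the late tail on the 56Co timescale, which is the behaviour described in the paragraph above.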
Nickel-58 is the most abundant isotope of nickel, making up 68.077% of the natural abundance. Possible sources include electron capture from copper-58 and EC + p from zinc-59. Nickel-59 is a long-lived cosmogenic radionuclide with a half-life of 81,000 years. It has found many applications in isotope geology, and has been used to date the terrestrial age of meteorites and to determine abundances of extraterrestrial dust in ice and sediment. Nickel-60 is the daughter product of an extinct radionuclide (half-life = 2.6 My). Because that parent radionuclide had such a long half-life, its persistence in materials in the Solar System at high enough concentrations may have generated observable variations in the isotopic composition of nickel. Therefore, the abundance of nickel-60 present in extraterrestrial material may provide insight into the origin of the Solar System and its very early history. Unfortunately, nickel isotopes appear to have been heterogeneously distributed in the early Solar System, so, so far, no actual age information has been attained from nickel-60 excesses. Nickel-60 is also the stable end product of the decay chain of the nuclide formed in the final rung of the alpha ladder. Other sources may include beta decay from cobalt-60 and electron capture from copper-60. Nickel-61 is the only stable isotope of nickel with a nuclear spin (I = 3/2), which makes it useful for studies by EPR spectroscopy. Nickel-62 has the highest binding energy per nucleon of any isotope for any element, when including the electron shell in the calculation. More energy is released forming this isotope than any other, although fusion can form heavier isotopes. For instance, two lighter nuclei can fuse into a heavier one with the emission of 4 positrons (plus 4 neutrinos), liberating 77 keV per nucleon, but reactions leading to the iron/nickel region are more probable as they release more energy per baryon. Nickel-63 has two main uses: detection of explosives traces, and in certain kinds of electronic devices, such as gas discharge tubes used as surge protectors. A surge protector is a device that protects sensitive electronic equipment like computers from sudden changes in the electric current flowing into them. It is also used in electron capture detectors in gas chromatography, mainly for the detection of halogens, and it has been proposed for use in miniature betavoltaic generators for pacemakers. Nickel-64 is another stable isotope of nickel. Possible sources include beta decay from cobalt-64 and electron capture from copper-64. 
Nickel-78 is one of the element's heaviest known isotopes. With 28 protons and 50 neutrons, nickel-78 is doubly magic, resulting in much greater nuclear binding energy and stability despite having a lopsided neutron-proton ratio. It has a half-life of about 122 milliseconds (see the table above). As a consequence of its magic neutron number, nickel-78 is believed to play an important role in the supernova nucleosynthesis of elements heavier than iron. 78Ni, along with the N = 50 isotones 79Cu and 80Zn, is thought to constitute a waiting point in the r-process, where further neutron capture is delayed by the shell gap and a buildup of isotopes around A = 80 results. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Nickel
Isotopes of nickel
[ "Chemistry" ]
5,208
[ "Isotopes of nickel", "Lists of isotopes by element", "Isotopes" ]
17,253,501
https://en.wikipedia.org/wiki/Composite%20image%20filter
A composite image filter is an electronic filter consisting of multiple image filter sections of two or more different types. The image method of filter design determines the properties of filter sections by calculating the properties they would have in an infinite chain of identical sections. In this, the analysis parallels transmission line theory on which it is based. Filters designed by this method are called image parameter filters, or just image filters. An important parameter of image filters is their image impedance, the impedance of an infinite chain of identical sections. The basic sections are arranged into a ladder network of several sections, the number of sections required is mostly determined by the amount of stopband rejection required. In its simplest form, the filter can consist entirely of identical sections. However, it is more usual to use a composite filter of two or three different types of section to improve different parameters best addressed by a particular type. The most frequent parameters considered are stopband rejection, steepness of the filter skirt (transition band) and impedance matching to the filter terminations. Image filters are linear filters and are invariably also passive in implementation. History The image method of designing filters originated at AT&T, who were interested in developing filtering that could be used with the multiplexing of many telephone channels on to a single cable. The researchers involved in this work and their contributions are briefly listed below; John Carson provided the mathematical underpinning to the theory. He invented single-sideband modulation for the purpose of multiplexing telephone channels. It was the need to recover these signals that gave rise to the need for advanced filtering techniques. He also pioneered the use of operational calculus (what has now become the theory of Laplace transforms in its more formal mathematical guise) to analyse these signals. George Campbell worked on filtering from 1910 onwards and invented the constant k filter. This can be seen as a continuation of his work on loading coils on transmission lines, a concept invented by Oliver Heaviside. Heaviside, incidentally, also invented the operational calculus that Carson used. Otto Zobel provided a theoretical basis (and the name) for Campbell's filters. In 1920 he invented the m-derived filter. Zobel also published composite designs incorporating both constant and -derived sections. R. S. Hoyt also contributed. The image method The image analysis starts with a calculation of the input and output impedances (the image impedances) and the transfer function of a section in an infinite chain of identical sections. This can be shown to be equivalent to the performance of a section terminated in its image impedances. The image method, therefore, relies on each filter section being terminated with the correct image impedance. This is easy enough to do with the internal sections of a multiple section filter, because it is only necessary to ensure that the sections facing the one in question have identical image impedances. However, the end sections are a problem. They will usually be terminated with fixed resistances that the filter cannot match perfectly except at one specific frequency. This mismatch leads to multiple reflections at the filter terminations and at the junctions between sections. These reflections result in the filter response deviating quite sharply from the theoretical, especially near the cut-off frequency. 
The requirement for better matching to the end impedances is one of the main motivations for using composite filters. A section designed to give good matching is used at the ends, while something else (for instance stopband rejection or the passband-to-stopband transition) is optimised in the body of the filter. Filter section types Each filter section type has particular advantages and disadvantages and each has the capability to improve particular filter parameters. The sections described below are the low-pass prototype sections. These prototypes may be scaled and transformed to the desired frequency bandform (low-pass, high-pass, band-pass or band-stop). The smallest unit of an image filter is an L half-section. Because the L section is not symmetrical, it has different image impedances on each side. These are denoted ZiT and ZiΠ: the T and the Π in the suffix refer to the shape of the filter section that would be formed if two half-sections were connected back-to-back. T and Π are the smallest symmetrical sections that can be constructed, as shown in the diagrams in the topology chart (below). Where the section in question has an image impedance different from the general case, a further suffix is added identifying the section type, for instance ZiTm for an m-type section. Constant k section The constant k or k-type filter section is the basic image filter section. It also has the simplest circuit topology. The k-type has a moderately fast transition from the passband to the stopband and moderately good stopband rejection. m-derived section The m-derived or m-type filter section is a development of the k-type section. The most prominent feature of the m-type is a pole of attenuation just past the cut-off frequency, inside the stopband. The parameter m adjusts the position of this pole of attenuation: smaller values of m put the pole closer to the cut-off frequency, larger values of m put it further away. In the limit, as m approaches 1, the pole approaches infinity and the section approaches a k-type section. The m-type has a particularly fast cut-off, going from fully passing at the cut-off frequency to fully blocking at the pole frequency. The cut-off can be made faster by moving the pole nearer to the cut-off frequency. This filter has the fastest cut-off of any filter design; note that the fast transition is achieved with just a single section, with no need for multiple sections. The drawback of m-type sections is that they have poor stopband rejection past the pole of attenuation. There is a particularly useful property of m-type filters with m = 0.6: these have a maximally flat image impedance in the passband. They are therefore good for matching into the filter terminations, in the passband at least; the stopband is another story. There are two variants of the m-type section, series and shunt. They have identical transfer functions, but their image impedances are different. The shunt half-section has an image impedance which matches the constant k impedance on one side but presents a different, m-dependent impedance on the other. The series half-section likewise matches the constant k impedance on one side and presents an m-dependent impedance on the other. 
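The claim that m = 0.6 gives a nearly constant image impedance across the passband can be checked numerically. The sketch below uses the standard textbook expressions for the mid-shunt image impedance of a constant k low-pass prototype and of the corresponding series m-derived section; these formulas are assumptions taken from image filter theory rather than quoted from this article, and the nominal impedance, cut-off frequency and function names are illustrative.

```python
import numpy as np

R0 = 600.0     # nominal impedance (ohms), illustrative
FC = 1000.0    # cut-off frequency (Hz), illustrative

def z_pi_constant_k(f):
    """Mid-shunt image impedance of a constant k low-pass prototype."""
    x = f / FC
    return R0 / np.sqrt(1.0 - x**2)

def z_pi_m_derived(f, m):
    """Mid-shunt image impedance of a series m-derived low-pass section
    (standard textbook expression, assumed here)."""
    x = f / FC
    return R0 * (1.0 - (1.0 - m**2) * x**2) / np.sqrt(1.0 - x**2)

# Compare how far each impedance drifts from R0 across most of the passband.
for frac in (0.1, 0.3, 0.5, 0.7, 0.85):
    f = frac * FC
    zk = z_pi_constant_k(f)
    zm = z_pi_m_derived(f, m=0.6)
    print(f"f = {frac:4.2f} fc   constant k: {zk/R0:6.3f} R0   m = 0.6: {zm/R0:6.3f} R0")
```

With m = 0.6 the image impedance stays within a few per cent of R0 over most of the passband, whereas the constant k impedance rises sharply near cut-off, which is why m-type half-sections are the usual choice for terminating sections.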
mm'-type section The mm'-type section has two independent parameters (m and m') that the designer can adjust. It is arrived at by double application of the m-derivation process. Its chief advantage is that it is rather better at matching into resistive end terminations than the k-type or m-type. The image impedance of an mm'-type half-section is the m-type image impedance on one side and a different, mm'-dependent impedance on the other. Like the m-type, this section can be constructed as a series or shunt section, and the image impedances come in T and Π variants. Either a series construction is applied to a shunt m-type, or a shunt construction is applied to a series m-type. The advantages of the mm'-type filter are achieved at the expense of greater circuit complexity, so it would normally only be used where it is needed for impedance matching purposes and not in the body of the filter. The transfer function of an mm'-type is the same as that of an m-type with m set to the product mm'. Choosing values of m and m' for the best impedance match requires the designer to choose two frequencies at which the match is to be exact; at other frequencies there will be some deviation. There is thus some leeway in the choice, but Zobel suggests the values m = 0.7230 and m' = 0.4134, which give a deviation of the impedance of less than 2% over the useful part of the band. Since the product mm' is about 0.3, this section will also have a much faster cut-off than an m-type of m = 0.6, which is an alternative for impedance matching. It is possible to continue the derivation process repeatedly and produce sections with three or more parameters; however, the improvements obtained diminish at each iteration and are not usually worth the increase in complexity. Bode's filter Another variation on the m-type filter was described by Hendrik Bode. This filter uses as a prototype a mid-series m-derived filter and transforms it into a bridged-T topology with the addition of a bridging resistor. This section has the advantage of being able to place the pole of attenuation much closer to the cut-off frequency than the Zobel filter, which starts to fail to work properly at very small values of m because of inductor resistance. See equivalent impedance transforms for an explanation of its operation. Zobel network The distinguishing feature of Zobel network filters is that they have a constant resistance image impedance, and for this reason they are also known as constant resistance networks. The Zobel network filter clearly has no problem matching to its terminations, and this is its main advantage. However, other filter types have steeper transfer functions and sharper cut-offs. In filtering applications, the main role of Zobel networks is as equalisation filters. Zobel networks are in a different group from the other image filters: their constant resistance image impedance means that, when they are used in combination with other image filter sections, the same problem of matching arises as at the end terminations. Zobel networks also suffer the disadvantage of using far more components than other equivalent image sections. Effect of end terminations A consequence of the image method of filter design is that the effect of the end terminations has to be calculated separately if their effect on the response is to be taken into account. The most severe deviation of the response from that predicted occurs in the passband close to cut-off. The reason for this is twofold. Further into the passband, the impedance match progressively improves, which limits the error. In the stopband, on the other hand, waves are reflected from the end termination because of the mismatch, but they are attenuated twice by the filter's stopband rejection as they pass through it. So while the stopband impedance mismatch may be severe, it has only a limited effect on the filter response. Cascading sections Several L half-sections may be cascaded to form a composite filter. 
The most important rule when constructing a composite image filter is that the image impedances must always face an identical impedance: like must always face like. T sections must always face T sections, Π sections must always face Π sections, k-type must always face k-type (or the side of an m-type which has the k-type impedance) and m-type must always face m-type. Furthermore, m-type impedances with different values of m cannot face each other, nor can sections of any type which have different values of cut-off frequency. Sections at the beginning and end of the filter are often chosen for their impedance match into the terminations rather than for the shape of their frequency response. For this purpose, m-type sections of m = 0.6 are the most common choice. An alternative is mm'-type sections of m = 0.7230 and m' = 0.4134, although this type of section is rarely used. While it has several advantages noted below, it has the disadvantages of being more complex and also, if constant k sections are required in the body of the filter, it is then necessary to include m-type sections to interface the mm'-type to the k-types. The inner sections of the filter are most commonly chosen to be constant k, since these produce the greatest stopband attenuation. However, one or two m-type sections might also be included to improve the rate of fall from passband to stopband. A low value of m is chosen for m-types used for this purpose: the lower the value of m, the faster the transition, while at the same time the stopband attenuation becomes less, increasing the need to use extra k-type sections as well. An advantage of using mm'-types for impedance matching is that these end sections will have a fast transition anyway (much more so than the m = 0.6 m-type) because the product mm' is about 0.3 for impedance matching, so the need for sections in the body of the filter to sharpen the transition may be dispensed with. Another reason for using m-types in the body of the filter is to place an additional pole of attenuation in the stopband. The frequency of the pole depends directly on the value of m: the smaller the value of m, the closer the pole is to the cut-off frequency. Conversely, a large value of m places the pole further away from cut-off, until in the limit when m = 1 the pole is at infinity and the response is the same as that of the k-type section. If a value of m is chosen for this pole which is different from that of the pole of the end sections, it will have the effect of broadening the band of good stopband rejection near to the cut-off frequency. In this way the m-type sections serve to give good stopband rejection near to cut-off and the k-type sections give good stopband rejection far from cut-off. Alternatively, m-type sections can be used in the body of the filter with different values of m if the value found in the end sections is unsuitable. Here again, the mm'-type would have some advantages if used for impedance matching: the mm'-type used for impedance matching places its pole at the position corresponding to mm' of about 0.3, while the other half of the impedance matching section needs to be an m-type of m = 0.723. This automatically gives a good spread of stopband rejection and, as with the steepness-of-transition issue, the use of mm'-type sections may remove the need for additional m-type sections in the body. Constant resistance sections may also be required, if the filter is being used on a transmission line, to improve the flatness of the passband response. This is necessary because the transmission line response is not usually anywhere near perfectly flat. 
These sections would normally be placed closest to the line, since they present a predictable impedance to the line and also tend to mask the indeterminate impedance of the line from the rest of the filter. There is no issue with matching constant resistance sections to each other, even when the sections operate on totally different frequency bands: all sections can be made to have precisely the same image impedance of a fixed resistance. See also Image filter types constant k filter general mn-type image filters Lattice filter m-derived filter mm'-type filter Zobel network Design concepts Image impedance Prototype filter Loading coils People George Campbell John Renshaw Carson Oliver Heaviside Otto Zobel References Bibliography Linear filters Image impedance filters Filter theory Analog circuits Electronic design
Composite image filter
[ "Engineering" ]
2,984
[ "Telecommunications engineering", "Electronic design", "Analog circuits", "Filter theory", "Electronic engineering", "Design" ]
17,254,254
https://en.wikipedia.org/wiki/Ernst%20equation
In mathematics, the Ernst equation is an integrable non-linear partial differential equation, named after the American physicist Frederick J. Ernst. The Ernst equation The equation reads:
$$\Re(\varepsilon)\left(\varepsilon_{rr} + \frac{\varepsilon_r}{r} + \varepsilon_{zz}\right) = \varepsilon_r^2 + \varepsilon_z^2,$$
where $\Re(\varepsilon)$ is the real part of the complex potential $\varepsilon(r,z)$ and the subscripts denote partial derivatives with respect to the cylindrical coordinates r and z. For its Lax pair and other integrability properties, see the literature and references therein. Usage The Ernst equation is employed to produce exact solutions of Einstein's equations in the general theory of relativity. References Partial differential equations General relativity Integrable systems
Ernst equation
[ "Physics" ]
92
[ "Integrable systems", "Theoretical physics", "Theory of relativity", "General relativity", "Theoretical physics stubs" ]
17,257,316
https://en.wikipedia.org/wiki/Modal%20algebra
In algebra and logic, a modal algebra is a structure $\langle A, \wedge, \vee, \neg, 0, 1, \Box\rangle$ such that $\langle A, \wedge, \vee, \neg, 0, 1\rangle$ is a Boolean algebra and $\Box$ is a unary operation on A satisfying $\Box 1 = 1$ and $\Box(x \wedge y) = \Box x \wedge \Box y$ for all x, y in A. Modal algebras provide models of propositional modal logics in the same way as Boolean algebras are models of classical logic. In particular, the variety of all modal algebras is the equivalent algebraic semantics of the modal logic K in the sense of abstract algebraic logic, and the lattice of its subvarieties is dually isomorphic to the lattice of normal modal logics. Stone's representation theorem can be generalized to the Jónsson–Tarski duality, which ensures that each modal algebra can be represented as the algebra of admissible sets in a modal general frame. A Magari algebra (or diagonalizable algebra) is a modal algebra additionally satisfying $\Box(\Box x \rightarrow x) = \Box x$. Magari algebras correspond to provability logic. See also Interior algebra Heyting algebra References A. Chagrov and M. Zakharyaschev, Modal Logic, Oxford Logic Guides vol. 35, Oxford University Press, 1997. Algebra Boolean algebra
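As an illustration of the Jónsson–Tarski view mentioned above, the sketch below builds the algebra of all subsets of a small Kripke frame (the special case of a general frame in which every set is admissible), defines □X as the set of worlds all of whose successors lie in X, and checks the two modal algebra axioms. The frame and the helper names are invented for the example.

```python
from itertools import combinations

# A tiny Kripke frame: worlds and an accessibility relation (illustrative).
worlds = frozenset({0, 1, 2})
R = {(0, 1), (0, 2), (1, 2)}          # 0 sees 1 and 2; 1 sees 2; 2 sees nothing

def box(x):
    """Box X = set of worlds w whose successors all lie in X."""
    return frozenset(w for w in worlds
                     if all(v in x for (u, v) in R if u == w))

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

A = powerset(worlds)                   # carrier: the Boolean algebra of all subsets
top = worlds                           # the element 1 of the algebra

# Axiom 1: Box 1 = 1
assert box(top) == top

# Axiom 2: Box(x and y) = Box x and Box y, for all x, y
for x in A:
    for y in A:
        assert box(x & y) == box(x) & box(y)

print("The subset algebra of this frame satisfies the modal algebra axioms.")
```

The same construction works for any frame, which is the content of the representation result cited above.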
Modal algebra
[ "Mathematics" ]
234
[ "Boolean algebra", "Algebra stubs", "Mathematical logic", "Fields of abstract algebra", "Modal logic", "Algebra" ]
1,822,395
https://en.wikipedia.org/wiki/Thermodynamic%20versus%20kinetic%20reaction%20control
Thermodynamic reaction control or kinetic reaction control in a chemical reaction can decide the composition in a reaction product mixture when competing pathways lead to different products and the reaction conditions influence the selectivity or stereoselectivity. The distinction is relevant when product A forms faster than product B because the activation energy for product A is lower than that for product B, yet product B is more stable. In such a case A is the kinetic product and is favoured under kinetic control and B is the thermodynamic product and is favoured under thermodynamic control. The conditions of the reaction, such as temperature, pressure, or solvent, affect which reaction pathway may be favored: either the kinetically controlled or the thermodynamically controlled one. Note this is only true if the activation energy of the two pathways differ, with one pathway having a lower Ea (energy of activation) than the other. Prevalence of thermodynamic or kinetic control determines the final composition of the product when these competing reaction pathways lead to different products. The reaction conditions as mentioned above influence the selectivity of the reaction - i.e., which pathway is taken. Asymmetric synthesis is a field in which the distinction between kinetic and thermodynamic control is especially important. Because pairs of enantiomers have, for all intents and purposes, the same Gibbs free energy, thermodynamic control will produce a racemic mixture by necessity. Thus, any catalytic reaction that provides product with nonzero enantiomeric excess is under at least partial kinetic control. (In many stoichiometric asymmetric transformations, the enantiomeric products are actually formed as a complex with the chirality source before the workup stage of the reaction, technically making the reaction a diastereoselective one. Although such reactions are still usually kinetically controlled, thermodynamic control is at least possible, in principle.) Scope In Diels–Alder reactions The Diels–Alder reaction of cyclopentadiene with furan can produce two isomeric products. At room temperature, kinetic reaction control prevails and the less stable endo isomer 2 is the main reaction product. At 81 °C and after long reaction times, the chemical equilibrium can assert itself and the thermodynamically more stable exo isomer 1 is formed. The exo product is more stable by virtue of a lower degree of steric congestion, while the endo product is favoured by orbital overlap in the transition state. An outstanding and very rare example of the full kinetic and thermodynamic reaction control in the process of the tandem inter-/intramolecular Diels–Alder reaction of bis-furyl dienes 3 with hexafluoro-2-butyne or dimethyl acetylenedicarboxylate (DMAD) have been discovered and described in 2018. At low temperature, the reactions occur chemoselectively leading exclusively to adducts of pincer-[4+2] cycloaddition (5). The exclusive formation of domino-adducts (6) is observed at elevated temperatures. Theoretical DFT calculations of the reaction between hexafluoro-2-butyne and dienes 3a-c were performed. The reaction starting with [4+2] cycloaddition of CF3C≡CCF3 at one of the furan moieties occurs in a concerted fashion via TS1 and represents the rate limiting step of the whole process with the activation barrier ΔG‡ ≈ 23.1–26.8 kcal/mol. Further, the reaction could proceed via two competing channels, i.e. 
either leading to the pincer type products 5 via TS2k or resulting in the formation of the domino product 6 via TS2t. The calculations showed that the first channel is more kinetically favourable (ΔG‡ ≈ 5.7–5.9 kcal/mol). Meanwhile, the domino products 6 are more thermodynamically stable than 5 (ΔG‡ ≈ 4.2-4.7 kcal/mol) and this fact may cause isomerization of 5 into 6 at elevated temperature. Indeed, the calculated activation barriers for the 5 → 6 isomerization via the retro-Diels–Alder reaction of 5 followed by the intramolecular [4+2]-cycloaddition in the chain intermediate 4 to give 6 are 34.0–34.4 kcal/mol. In enolate chemistry In the protonation of an enolate ion, the kinetic product is the enol and the thermodynamic product is a ketone or aldehyde. Carbonyl compounds and their enols interchange rapidly by proton transfers catalyzed by acids or bases, even in trace amounts, in this case mediated by the enolate or the proton source. In the deprotonation of an unsymmetrical ketone, the kinetic product is the enolate resulting from removal of the most accessible α-H while the thermodynamic product has the more highly substituted enolate moiety. Use of low temperatures and sterically demanding bases increases the kinetic selectivity. Here, the difference in pKb between the base and the enolate is so large that the reaction is essentially irreversible, so the equilibration leading to the thermodynamic product is likely a proton exchange occurring during the addition between the kinetic enolate and as-yet-unreacted ketone. An inverse addition (adding ketone to the base) with rapid mixing would minimize this. The position of the equilibrium will depend on the countercation and solvent. If a much weaker base is used, the deprotonation will be incomplete, and there will be an equilibrium between reactants and products. Thermodynamic control is obtained, however the reaction remains incomplete unless the product enolate is trapped, as in the example below. Since H transfers are very fast, the trapping reaction being slower, the ratio of trapped products largely mirrors the deprotonation equilibrium. In electrophilic additions The electrophilic addition reaction of hydrogen bromide to 1,3-butadiene above room temperature leads predominantly to the thermodynamically more stable 1,4 adduct, 1-bromo-2-butene, but decreasing the reaction temperature to below room temperature favours the kinetic 1,2 adduct, 3-bromo-1-butene. The rationale for the differing selectivities is as follows: Both products result from Markovnikov protonation at position 1, resulting in a resonance-stabilized allylic cation. The 1,4 adduct places the larger Br atom at a less congested site and includes a more highly substituted alkene moiety, while the 1,2 adduct is the result of the attack by the nucleophile (Br−) at the carbon of the allylic cation bearing the greatest positive charge (the more highly substituted carbon is the most likely place for the positive charge). Characteristics In principle, every reaction is on the continuum between pure kinetic control and pure thermodynamic control. These terms are with respect to a given temperature and time scale. A process approaches pure kinetic control at low temperature and short reaction time. For a sufficiently long time scale, every reaction approaches pure thermodynamic control, at least in principle. This time scale becomes shorter as the temperature is raised. In every reaction, the first product formed is that which is most easily formed. 
Thus, every reaction a priori starts under kinetic control. A necessary condition for thermodynamic control is reversibility or a mechanism permitting the equilibration between products. Reactions are considered to take place under thermodynamic reaction control when the reverse reaction is sufficiently rapid that the equilibrium establishes itself within the allotted reaction time. In this way, the thermodynamically more stable product is always favoured. Under kinetic reaction control, one or both forward reactions leading to the possible products are significantly faster than the equilibration between the products. After reaction time t, the product ratio is the ratio of the rate constants and thus a function of the difference in activation energies Ea or ΔG‡:
$$\frac{[\mathrm{A}]}{[\mathrm{B}]} = \frac{k_{\mathrm{A}}}{k_{\mathrm{B}}} = e^{-\left(\Delta G^{\ddagger}_{\mathrm{A}} - \Delta G^{\ddagger}_{\mathrm{B}}\right)/RT} \qquad \text{(equation 1)}$$
Unless equilibration is prevented (e.g., by removal of the product from the reaction mixture as soon as it forms), "pure" kinetic control is strictly speaking impossible, because some amount of equilibration will take place before the reactants are entirely consumed. In practice, many systems are well approximated as operating under kinetic control, due to negligibly slow equilibration. For example, many enantioselective catalytic systems provide nearly enantiopure product (> 99% ee), even though the enantiomeric products have the same Gibbs free energy and are equally favored thermodynamically. Under pure thermodynamic reaction control, when the equilibrium has been reached, the product distribution will be a function of the stabilities G°. After an infinite amount of reaction time, the ratio of product concentrations will equal the equilibrium constant Keq and therefore be a function of the difference in Gibbs free energies:
$$\frac{[\mathrm{B}]}{[\mathrm{A}]} = K_{\mathrm{eq}} = e^{-\left(G^{\circ}_{\mathrm{B}} - G^{\circ}_{\mathrm{A}}\right)/RT} \qquad \text{(equation 2)}$$
In principle, "pure" thermodynamic control is also impossible, since equilibrium is only achieved after infinite reaction time. In practice, if A and B interconvert with overall rate constants kf and kr, then for most practical purposes, the change in composition becomes negligible after t ~ 3.5/(kf + kr), or approximately five half-lives, and the product ratio of the system can be regarded as the result of thermodynamic control. In general, short reaction times favour kinetic control, whereas longer reaction times favour thermodynamic reaction control. Low temperatures will enhance the selectivity under both sets of conditions, since T is in the denominator in both cases. The ideal temperature to optimise the yield of the fastest-forming product will be the lowest temperature that will ensure reaction completion in a reasonable amount of time. The ideal temperature for a reaction under thermodynamic control is the lowest temperature at which equilibrium will be reached in a reasonable amount of time. If needed, the selectivity can be increased by then slowly cooling the reaction mixture to shift the equilibrium further toward the most stable product. When the difference in product stability is very large, the thermodynamically controlled product can dominate even under relatively vigorous reaction conditions. If a reaction is under thermodynamic control at a given temperature, it will also be under thermodynamic control at a higher temperature for the same reaction time. In the same manner, if a reaction is under kinetic control at a given temperature, it will also be under kinetic control at any lower temperature for the same reaction time. 
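A small numerical sketch of equations 1 and 2: it evaluates the kinetically and thermodynamically controlled product ratios for an assumed pair of products A and B, where the free-energy differences and the temperatures are invented values chosen only to illustrate the temperature dependence, not data from any real reaction.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def kinetic_ratio(ddG_act, T):
    """Equation 1: [A]/[B] from the difference in activation free energies,
    ddG_act = dG_act(A) - dG_act(B) in J/mol (negative if A forms faster)."""
    return math.exp(-ddG_act / (R * T))

def thermodynamic_ratio(ddG0, T):
    """Equation 2: [B]/[A] at equilibrium from the difference in standard
    Gibbs energies, ddG0 = G0(B) - G0(A) in J/mol (negative if B is more stable)."""
    return math.exp(-ddG0 / (R * T))

# Illustrative numbers: A's barrier is 8 kJ/mol lower, B is 12 kJ/mol more stable.
ddG_act = -8_000.0
ddG0 = -12_000.0

for T in (195.0, 298.0, 350.0):   # dry-ice bath, room temperature, mild heating
    print(f"T = {T:5.1f} K   kinetic [A]/[B] = {kinetic_ratio(ddG_act, T):8.1f}   "
          f"thermodynamic [B]/[A] = {thermodynamic_ratio(ddG0, T):8.1f}")
```

Both ratios grow as the temperature is lowered, which is the quantitative content of the remark above that low temperatures enhance selectivity under either regime.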
If one presumes that a new reaction will be a priori under kinetic control, one can detect the presence of an equilibration mechanism (and therefore the possibility of thermodynamic control) if the product distribution (i) changes over time, (ii) shows one product to be dominant at one temperature while another dominates at a different temperature (inversion of dominance), or (iii) changes with temperature but is not consistent with equation 1, that is, a change in temperature (without changing the reaction time) causes a change in the product ratio that is larger or smaller than would be expected from the change in temperature alone, assuming that the difference in activation energies is largely invariant with temperature over a modest temperature range. In the same way, one can detect the possibility of kinetic control if a temperature change causes a change in the product ratio that is inconsistent with equation 2, assuming that the difference in Gibbs free energies is largely invariant with temperature over a modest temperature range. History The first to report on the relationship between kinetic and thermodynamic control were R.B. Woodward and Harold Baer in 1944. They were re-investigating a reaction between maleic anhydride and a fulvene first reported in 1929 by Otto Diels and Kurt Alder. They observed that while the endo isomer is formed more rapidly, longer reaction times, as well as relatively elevated temperatures, result in higher exo / endo ratios, which had to be considered in the light of the remarkable stability of the exo compound on the one hand and the very facile dissociation of the endo isomer on the other. C. K. Ingold, with E. D. Hughes and G. Catchpole, independently described a thermodynamic and kinetic reaction control model in 1948. They were reinvestigating a certain allylic rearrangement reported in 1930 by Jakob Meisenheimer. Solvolysis of gamma-phenylallyl chloride with AcOK in acetic acid was found to give a mixture of the gamma and the alpha acetate, with the latter converting to the former by equilibration. This was interpreted as a case, in the field of anionotropy, of the phenomenon familiar from prototropy: the distinction between kinetic and thermodynamic control in ion recombination. References Chemical reactions Thermodynamics Chemical thermodynamics
Thermodynamic versus kinetic reaction control
[ "Physics", "Chemistry", "Mathematics" ]
2,755
[ "Chemical thermodynamics", "Thermodynamics", "nan", "Dynamical systems" ]
1,822,961
https://en.wikipedia.org/wiki/Electron%20crystallography
Electron crystallography is a subset of methods in electron diffraction focusing upon detailed determination of the positions of atoms in solids using a transmission electron microscope (TEM). It can involve the use of high-resolution transmission electron microscopy images, electron diffraction patterns including convergent-beam electron diffraction or combinations of these. It has been successful in determining some bulk structures, and also surface structures. Two related methods are low-energy electron diffraction which has solved the structure of many surfaces, and reflection high-energy electron diffraction which is used to monitor surfaces often during growth. The technique date back to soon after the discovery of electron diffraction in 1927-28, and was used in many early works. However, for many years quantitative electron crystallography was not used, instead the diffraction information was combined qualitatively with imaging results. A number of advances from the 1950s in particular laid the foundation for more quantitative work, ranging from accurate methods to perform forward calculations to methods to invert to maps of the atomic structure. With the improvement of the imaging capabilities of electron microscopes crystallographic data is now commonly obtained by combining images with electron diffraction information, or in some cases by collecting three dimensional electron diffraction data by a number of different approaches. History The general approach dates back to the work in 1924 of Louis de Broglie in his PhD thesis Recherches sur la théorie des quanta where he introduced the concept of electrons as matter waves. The wave nature was experimentally confirmed for electron beams in the work of two groups, the first the Davisson–Germer experiment, the other by George Paget Thomson and Alexander Reid. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Clinton Davisson and Lester Germer noticed that their results could not be interpreted using a Bragg's law approach as the positions were systematically different; the approach of Hans Bethe which includes both multiple scattering and the refraction due to the average potential yielded more accurate results. Very quickly there were multiple advances, for instance Seishi Kikuchi's observations of lines that can be used for crystallographic indexing due to combined elastic and inelastic scattering, gas electron diffraction developed by Herman Mark and Raymond Weil, diffraction in liquids by Louis Maxwell, and the first electron microscopes developed by Max Knoll and Ernst Ruska. Despite early successes such as the determination of the positions of hydrogen atoms in NH4Cl crystals by W. E. Laschkarew and I. D. Usykin in 1933, boric acid by John M. Cowley in 1953 and orthoboric acid by William Houlder Zachariasen in 1954, electron diffraction for many years was a qualitative technique used to check samples within electron microscopes. 
John M Cowley explains in a 1968 paper:Thus was founded the belief, amounting in some cases almost to an article of faith, and persisting even to the present day, that it is impossible to interpret the intensities of electron diffraction patterns to gain structural information.This has slowly changed. One key step was the development in 1936 by Walther Kossel and Gottfried Möllenstedt of convergent beam electron diffraction (CBED), This approach was extended by Peter Goodman and Gunter Lehmpfuhl, then mainly by the groups of John Steeds and Michiyoshi Tanaka who showed how to use CBED patterns to determine point groups and space groups. This was combined with other transmission electron microscopy approaches, typically where both local microstructure and atomic structure was of importance. A second key set of work was that by the group of Boris Vainshtein who demonstrated solving the structure of many different materials such as clays and micas using powder diffraction patterns, a success attributed to the samples being relatively thin. (Since the advent of precession electron diffraction it has become clear that averaging over many different electron beam directions and thicknesses significantly reduces dynamical diffraction effects, so was probably also important.) More complete crystallographic analysis of intensity data was slow to develop. One of the key steps was the demonstration in 1976 by Douglas L. Dorset and Herbert A. Hauptman that conventional direct methods for x-ray crystallography could be used. Another was the demonstration in 1986 that a Patterson function could be powerful in the seminal solution of the silicon (111) 7x7 reconstructed surface by Kunio Takanayagi using ultra-high vacuum electron diffraction. More complete analyses were the demonstration that classical inversion methods could be used for surfaces in 1997 by Dorset and Laurence D. Marks, and in 1998 the work by Jon Gjønnes who combined three-dimensional electron diffraction with precession electron diffraction and direct methods to solve an intermetallic, also using dynamical refinements. At the same time as approaches to invert diffraction data using electrons were established, the resolution of electron microscopes became good enough that images could be combined with diffraction information. At first resolution was poor, with in 1956 James Menter publishing the first electron microscope images showing the lattice structure of a material at 1.2nm resolution. In 1968 Aaron Klug and David DeRosier used electron microscopy to visualise the structure of the tail of bacteriophage T4, a common virus, a key step in the use of electrons for macromolecular structure determination. The first quantitative matching of atomic scale images and dynamical simulations was published in 1972 by J. G. Allpress, E. A. Hewat, A. F. Moodie and J. V. Sanders. In the early 1980s the resolution of electron microscopes was now sufficient to resolve the atomic structure of materials, for instance with the 600 kV instrument led by Vernon Cosslett, so combinations of high-resolution transmission electron microscopy and diffraction became standard across many areas of science. Most of the research published using these approaches is described as electron microscopy, without the addition of the term electron crystallography. 
Comparison with X-ray crystallography It can complement X-ray crystallography for studies of very small crystals (<0.1 micrometers), both inorganic, organic, and proteins, such as membrane proteins, that cannot easily form the large 3-dimensional crystals required for that process. Protein structures are usually determined from either 2-dimensional crystals (sheets or helices), polyhedrons such as viral capsids, or dispersed individual proteins. Electrons can be used in these situations, whereas X-rays cannot, because electrons interact more strongly with atoms than X-rays do. Thus, X-rays will travel through a thin 2-dimensional crystal without diffracting significantly, whereas electrons can be used to form an image. Conversely, the strong interaction between electrons and protons makes thick (e.g. 3-dimensional > 1 micrometer) crystals impervious to electrons, which only penetrate short distances. One of the main difficulties in X-ray crystallography is determining phases in the diffraction pattern. Because of the complexity of X-ray lenses, it is difficult to form an image of the crystal being diffracted, and hence phase information is lost. Fortunately, electron microscopes can resolve atomic structure in real space and the crystallographic structure factor phase information can be experimentally determined from an image's Fourier transform. The Fourier transform of an atomic resolution image is similar, but different, to a diffraction pattern—with reciprocal lattice spots reflecting the symmetry and spacing of a crystal. Aaron Klug was the first to realize that the phase information could be read out directly from the Fourier transform of an electron microscopy image that had been scanned into a computer, already in 1968. For this, and his studies on virus structures and transfer-RNA, Klug received the Nobel Prize for chemistry in 1982. Radiation damage A common problem to X-ray crystallography and electron crystallography is radiation damage, by which especially organic molecules and proteins are damaged as they are being imaged, limiting the resolution that can be obtained. This is especially troublesome in the setting of electron crystallography, where that radiation damage is focused on far fewer atoms. One technique used to limit radiation damage is electron cryomicroscopy, in which the samples undergo cryofixation and imaging takes place at liquid nitrogen or even liquid helium temperatures. Because of this problem, X-ray crystallography has been much more successful in determining the structure of proteins that are especially vulnerable to radiation damage. Radiation damage was recently investigated using MicroED of thin 3D crystals in a frozen hydrated state. Protein structures determined by electron crystallography The first electron crystallographic protein structure to achieve atomic resolution was bacteriorhodopsin, determined by Richard Henderson and coworkers at the Medical Research Council Laboratory of Molecular Biology in 1990. However, already in 1975 Unwin and Henderson had determined the first membrane protein structure at intermediate resolution (7 Ångström), showing for the first time the internal structure of a membrane protein, with its alpha-helices standing perpendicular to the plane of the membrane. Since then, several other high-resolution structures have been determined by electron crystallography, including the light-harvesting complex, the nicotinic acetylcholine receptor, and the bacterial flagellum. 
The highest resolution protein structure solved by electron crystallography of 2D crystals is that of the water channel aquaporin-0. In 2012, Jan Pieter Abrahams and coworkers extended electron crystallography to 3D protein nanocrystals by rotation electron diffraction (RED). Application to inorganic materials Electron crystallographic studies on inorganic crystals using high-resolution electron microscopy (HREM) images were first performed by Aaron Klug in 1978 and by Sven Hovmöller and coworkers in 1984. HREM images were used because they allow to select (by computer software) only the very thin regions close to the edge of the crystal for structure analysis (see also crystallographic image processing). This is of crucial importance since in the thicker parts of the crystal the exit-wave function (which carries the information about the intensity and position of the projected atom columns) is no longer linearly related to the projected crystal structure. Moreover, not only do the HREM images change their appearance with increasing crystal thickness, they are also very sensitive to the chosen setting of the defocus Δf of the objective lens (see the HREM images of GaN for example). To cope with this complexity methods based upon the Cowley-Moodie multislice algorithm and non-linear imaging theory have been developed to simulate images; this only became possible once the FFT method was developed. In addition to electron microscopy images, it is also possible to use electron diffraction (ED) patterns for crystal structure determination. The utmost care must be taken to record such ED patterns from the thinnest areas in order to keep most of the structure related intensity differences between the reflections (quasi-kinematical diffraction conditions). Just as with X-ray diffraction patterns, the important crystallographic structure factor phases are lost in electron diffraction patterns and must be uncovered by special crystallographic methods such as direct methods, maximum likelihood or (more recently) by the charge-flipping method. On the other hand, ED patterns of inorganic crystals have often a high resolution (= interplanar spacings with high Miller indices) much below 1 Ångström. This is comparable to the point resolution of the best electron microscopes. Under favourable conditions it is possible to use ED patterns from a single orientation to determine the complete crystal structure. Alternatively a hybrid approach can be used which uses HRTEM images for solving and intensities from ED for refining the crystal structure. Recent progress for structure analysis by ED was made by introducing the Vincent-Midgley precession technique for recording electron diffraction patterns. The thereby obtained intensities are usually much closer to the kinematical intensities, so that even structures can be determined that are out of range when processing conventional (selected area) electron diffraction data. Crystal structures determined via electron crystallography can be checked for their quality by using first-principles calculations within density functional theory (DFT). This approach has been used to assist in solving surface structures and for the validation of several metal-rich structures which were only accessible by HRTEM and ED, respectively. Recently, two very complicated zeolite structures have been determined by electron crystallography combined with X-ray powder diffraction. These are more complex than the most complex zeolite structures determined by X-ray crystallography. 
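The comparison with X-ray crystallography above notes that structure-factor amplitudes and phases can be read directly from the Fourier transform of an atomic-resolution image. The sketch below performs that read-out on a synthetic, perfectly periodic image standing in for a projected potential; the lattice, peak widths and function names are invented for the illustration and no real HREM data or microscope effects (thickness, defocus) are modelled.

```python
import numpy as np

# Build a synthetic periodic "projected potential" image: a toy 2D crystal
# made of Gaussian peaks on a square lattice (not real HREM data).
N, a = 256, 32                     # image size in pixels, lattice repeat in pixels
y, x = np.mgrid[0:N, 0:N]
image = np.zeros((N, N))
for cx in range(a // 2, N, a):
    for cy in range(a // 2, N, a):
        image += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 3.0 ** 2))

# Fourier transform of the image: peaks appear at reciprocal-lattice points.
F = np.fft.fft2(image)

def reflection(h, k):
    """Amplitude and phase (degrees) of the (h, k) reflection, read from the
    FFT at the reciprocal-lattice point (h*N/a, k*N/a)."""
    coeff = F[(h * N // a) % N, (k * N // a) % N]
    return abs(coeff), np.degrees(np.angle(coeff))

for hk in [(1, 0), (1, 1), (2, 0)]:
    amp, phase = reflection(*hk)
    print(f"reflection {hk}: amplitude = {amp:10.1f}, phase = {phase:7.1f} deg")
```

For an experimental image, the selection of very thin regions and the handling of defocus described above are what make such a read-out meaningful; the choice of origin also shifts all phases by a lattice-dependent amount.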
References Further reading Zou, XD, Hovmöller, S. and Oleynikov, P. "Electron Crystallography - Electron microscopy and Electron Diffraction". IUCr Texts on Crystallography 16, Oxford university press 2011. http://ukcatalogue.oup.com/product/9780199580200.do T.E. Weirich, X.D. Zou & J.L. Lábár (2006). Electron Crystallography: Novel Approaches for Structure Determination of Nanosized Materials''. Springer Netherlands, External links Interview with Aaron Klug Nobel Laureate for work on crystallograph electron microscopy Freeview video by the Vega Science Trust. Crystallography Protein structure
Electron crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,841
[ "Materials science", "Crystallography", "Condensed matter physics", "Structural biology", "Protein structure" ]
1,823,952
https://en.wikipedia.org/wiki/Diffusion%20capacitance
Diffusion capacitance is the capacitance that arises from the transport of charge carriers between two terminals of a device, for example the diffusion of carriers from anode to cathode in a forward-biased diode or from emitter to base in a forward-biased junction of a transistor. In a semiconductor device with a current flowing through it (for example, an ongoing transport of charge by diffusion), at a particular moment there is necessarily some charge in the process of transit through the device. If the applied voltage changes to a different value and the current changes to a different value, a different amount of charge will be in transit in the new circumstances. The change in the amount of transiting charge divided by the change in the voltage causing it is the diffusion capacitance. The adjective "diffusion" is used because the original use of this term was for junction diodes, where the charge transport was via the diffusion mechanism. See Fick's laws of diffusion. To implement this notion quantitatively, at a particular moment in time let the voltage across the device be $V$. Now assume that the voltage changes with time slowly enough that at each moment the current is the same as the DC current that would flow at that voltage, say $I(V)$ (the quasistatic approximation). Suppose further that the time to cross the device is the forward transit time $\tau_F$. In this case the amount of charge in transit through the device at this particular moment, denoted $Q$, is given by $Q = I(V)\,\tau_F$. Consequently, the corresponding diffusion capacitance is
$$C_{\mathrm{diff}} = \frac{\mathrm{d}Q}{\mathrm{d}V} = \tau_F\,\frac{\mathrm{d}I(V)}{\mathrm{d}V}.$$
In the event the quasi-static approximation does not hold, that is, for very fast voltage changes occurring in times shorter than the transit time $\tau_F$, the equations governing time-dependent transport in the device must be solved to find the charge in transit, for example the Boltzmann equation. That problem is a subject of continuing research under the topic of non-quasistatic effects. See Liu, and Gildenblat et al. Notes References External links Junction capacitance Capacitance Electrical parameters Semiconductors
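A small sketch of the quasistatic relation above, applied to a forward-biased diode. The Shockley ideal-diode law and the parameter values are assumptions brought in for the example, not taken from the article, and the derivative is evaluated numerically to mirror the definition C = dQ/dV.

```python
import math

# Assumed device parameters (illustrative only)
I_S = 1e-12        # diode saturation current, A
V_T = 0.02585      # thermal voltage kT/q at about 300 K, V
TAU_F = 5e-9       # forward transit time, s

def current(v):
    """Shockley ideal-diode law (an assumption, not from the article)."""
    return I_S * (math.exp(v / V_T) - 1.0)

def diffusion_capacitance(v, dv=1e-6):
    """C_diff = tau_F * dI/dV, with the derivative taken numerically."""
    dI_dV = (current(v + dv) - current(v - dv)) / (2.0 * dv)
    return TAU_F * dI_dV

for v in (0.5, 0.6, 0.7):
    print(f"V = {v:.2f} V   I = {current(v):.3e} A   C_diff = {diffusion_capacitance(v):.3e} F")
```

For the exponential diode law assumed here the result reduces to C_diff = τF·I/VT, so the diffusion capacitance grows rapidly with forward bias.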
Diffusion capacitance
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
417
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Quantity", "Electrical engineering", "Materials", "Electronic engineering", "Condensed matter physics", "Capacitance", "Voltage", "Wikipedia categories named after physical quantities", "Solid state engineering"...