id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
20,426 | https://en.wikipedia.org/wiki/Metonic%20cycle | The Metonic cycle or enneadecaeteris (from the Greek ἐννεακαίδεκα, "nineteen") is a period of almost exactly 19 years after which the lunar phases recur at the same time of the year. The recurrence is not perfect, and by precise observation the Metonic cycle defined as 235 synodic months is just 2 hours, 4 minutes and 58 seconds longer than 19 tropical years. Meton of Athens, in the 5th century BC, judged the cycle to be a whole number of days, 6,940. Using these whole numbers facilitates the construction of a lunisolar calendar.
A tropical year (about 365.24 days) is longer than 12 lunar months (about 354.36 days) and shorter than 13 of them (about 383.90 days). In a Metonic calendar (a type of lunisolar calendar), there are twelve years of 12 lunar months and seven years of 13 lunar months.
Application in traditional calendars
In the Babylonian and Hebrew lunisolar calendars, the years 3, 6, 8, 11, 14, 17, and 19 are the long (13-month) years of the Metonic cycle. This cycle forms the basis of the Greek and Hebrew calendars. A 19-year cycle is used for the computation of the date of Easter each year.
The Babylonians applied the 19-year cycle from the late sixth century BC.
According to Livy, the second king of Rome, Numa Pompilius (reigned 715–673 BC), inserted intercalary months in such a way that "in the twentieth year the days should fall in with the same position of the sun from which they had started". As "the twentieth year" takes place nineteen years after "the first year", this seems to indicate that the Metonic cycle was applied to Numa's calendar.
Diodorus Siculus reports that Apollo is said to have visited the Hyperboreans once every 19 years.
The Metonic cycle has been implemented in the Antikythera mechanism which offers unexpected evidence for the popularity of the calendar based on it.
The (19-year) Metonic cycle is a lunisolar cycle, as is the (76-year) Callippic cycle. An important application of the Metonic cycle in the Julian calendar is the 19-year lunar cycle, insofar as it is provided with a Metonic structure. Meton introduced the 19-year cycle to the Attic calendar in 432 BC. In the following century, Callippus developed the Callippic cycle of four 19-year periods for a 76-year cycle with a mean year of exactly 365.25 days.
Around AD 260 the Alexandrian computist Anatolius, who became bishop of Laodicea in AD 268, was the first to devise a method for determining the date of Easter Sunday. However, it was some later, somewhat different, version of the Metonic 19-year lunar cycle which, as the basic structure of Dionysius Exiguus' and also of Bede's Easter table, would ultimately prevail throughout Christendom, at least until 1582, when the Gregorian calendar was introduced.
The Coligny calendar is a Celtic lunisolar calendar using the Metonic cycle. The bronze plaque on which it was found dates from c. AD 200, but the internal evidence points to the calendar itself being several centuries older, created in the Iron Age or late Bronze Age.
The Metonic cycle is thought to be numerically encoded on the Berlin Gold Hat from central Europe, dating from c. 1000-800 BC.
The Runic calendar is a perpetual calendar based on the 19-year-long Metonic cycle. It is also known as a Rune staff or Runic Almanac. This calendar does not rely on knowledge of the duration of the tropical year or of the occurrence of leap years. It is set at the beginning of each year by observing the first full moon after the winter solstice. The oldest one known, and the only one from the Middle Ages, is the Nyköping staff, which is believed to date from the 13th century.
The Bahá'í calendar, established during the middle of the 19th century, is also based on cycles of 19 solar years.
Hebrew calendar
A Small Maḥzor (Hebrew מחזור, meaning "cycle") is a 19-year cycle in the lunisolar calendar system used by the Jewish people. It is similar to, but slightly different in usage from, the Greek Metonic cycle (being based on a mean month of about 29.53 days, giving a cycle of ≈ 6939.69 days), and likely derived from or alongside the much earlier Babylonian calendar.
Polynesia
It is possible that the Polynesian kilo-hoku (astronomers) discovered the Metonic cycle in the same way Meton had, by trying to make the month fit the year.
Mathematical basis
The Metonic cycle is the most accurate cycle of time (in a timespan of less than 100 years) for synchronizing the tropical year and the lunar month (synodic month), when the method of synchronizing is the intercalation of a thirteenth lunar month in a calendar year from time to time. The traditional lunar year of 12 synodic months is about 354 days, approximately eleven days short of the solar year. Thus, every 2 to 3 years there is a discrepancy of 22 to 33 days, or a full synodic month. For example, if the winter solstice and the new moon coincide, it takes 19 tropical years for the coincidence to recur. The mathematical logic is this:
A tropical year lasts 365.2422 days.
a span of 19 tropical years (365.2422 × 19) lasts 6,939.602 days
That duration is almost the same as 235 synodic months:
A synodic month lasts 29.53059 days.
a span of 235 synodic months (29.53059 × 235) lasts 6,939.689 days
Thus the algorithm is correct to 0.087 days (2 hours, 5 minutes and 16 seconds).
For a lunisolar calendar to 'catch up' to this discrepancy and thus maintain seasonal consistency, seven intercalary months are added (one at a time), at intervals of every 2–3 years during the course of 19 solar years. Thus twelve of those years have 12 lunar months and seven have 13 months.
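The arithmetic above can be checked directly. The sketch below (an added illustration, not part of the original article) simply recomputes the two spans and the 12 + 7 split of the cycle from the year and month lengths quoted above.

```python
# Minimal check of the Metonic arithmetic quoted above.
TROPICAL_YEAR = 365.2422   # days
SYNODIC_MONTH = 29.53059   # days

years_span = 19 * TROPICAL_YEAR           # ~6939.602 days
months_span = 235 * SYNODIC_MONTH         # ~6939.689 days
mismatch_days = months_span - years_span  # ~0.087 days

# 12 common years of 12 months plus 7 leap years of 13 months = 235 months
assert 12 * 12 + 7 * 13 == 235

print(f"19 tropical years : {years_span:.3f} days")
print(f"235 synodic months: {months_span:.3f} days")
print(f"mismatch          : {mismatch_days:.3f} days "
      f"(~{mismatch_days * 24 * 60:.0f} minutes)")
```

The printed mismatch of roughly 0.087 days corresponds to the "2 hours, 5 minutes" figure stated above.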
See also
Octaeteris (8-year cycle of antiquity)
Callippic cycle (76-year cycle from 330 BC)
Hipparchic cycle (304-year cycle from 2nd century BC)
Saros cycle of eclipses
Attic and Byzantine calendar
Julian day
Date of Easter ("the Computus")
Notes
References
External links
Eclipses, Cosmic Clockwork of the Ancients
Ancient Greek astronomy
Time in astronomy
Periodic phenomena
Calendars | Metonic cycle | [
"Physics",
"Astronomy"
] | 1,433 | [
"Time in astronomy",
"Calendars",
"Physical quantities",
"Time",
"Spacetime"
] |
20,437 | https://en.wikipedia.org/wiki/Mass%20transfer | Mass transfer is the net movement of mass from one location (usually meaning stream, phase, fraction, or component) to another. Mass transfer occurs in many processes, such as absorption, evaporation, drying, precipitation, membrane filtration, and distillation. Mass transfer is used by different scientific disciplines for different processes and mechanisms. The phrase is commonly used in engineering for physical processes that involve diffusive and convective transport of chemical species within physical systems.
Some common examples of mass transfer processes are the evaporation of water from a pond to the atmosphere, the purification of blood in the kidneys and liver, and the distillation of alcohol. In industrial processes, mass transfer operations include separation of chemical components in distillation columns, absorbers such as scrubbers or stripping, adsorbers such as activated carbon beds, and liquid-liquid extraction. Mass transfer is often coupled to additional transport processes, for instance in industrial cooling towers. These towers couple heat transfer to mass transfer by allowing hot water to flow in contact with air. The water is cooled by expelling some of its content in the form of water vapour.
Astrophysics
In astrophysics, mass transfer is the process by which matter gravitationally bound to a body, usually a star, fills its Roche lobe and becomes gravitationally bound to a second body, usually a compact object (white dwarf, neutron star or black hole), and is eventually accreted onto it. It is a common phenomenon in binary systems, and may play an important role in some types of supernovae and pulsars.
Chemical engineering
Mass transfer finds extensive application in chemical engineering problems. It is used in reaction engineering, separations engineering, heat transfer engineering, and many other sub-disciplines of chemical engineering like electrochemical engineering.
The driving force for mass transfer is usually a difference in chemical potential, when it can be defined, though other thermodynamic gradients may couple to the flow of mass and drive it as well. A chemical species moves from areas of high chemical potential to areas of low chemical potential. Thus, the maximum theoretical extent of a given mass transfer is typically determined by the point at which the chemical potential is uniform. For single-phase systems, this usually translates to uniform concentration throughout the phase, while for multiphase systems chemical species will often prefer one phase over the others and reach a uniform chemical potential only when most of the chemical species has been absorbed into the preferred phase, as in liquid-liquid extraction.
While thermodynamic equilibrium determines the theoretical extent of a given mass transfer operation, the actual rate of mass transfer will depend on additional factors including the flow patterns within the system and the diffusivities of the species in each phase. This rate can be quantified through the calculation and application of mass transfer coefficients for an overall process. These mass transfer coefficients are typically published in terms of dimensionless numbers, often including Péclet numbers, Reynolds numbers, Sherwood numbers, and Schmidt numbers, among others.
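As an illustration of how such dimensionless correlations are used in practice, the sketch below estimates a convective mass transfer coefficient with a Ranz–Marshall (Frössling-type) correlation for flow past a sphere. The choice of correlation and all property values are illustrative assumptions, not taken from the text above.

```python
# Illustrative only: estimating a convective mass transfer coefficient k_c
# from dimensionless groups, using Sh = 2 + 0.6 * Re**0.5 * Sc**(1/3)
# (Ranz–Marshall correlation for flow past a sphere; an assumed choice).

def mass_transfer_coefficient(velocity, diameter, rho, mu, diffusivity):
    """Return k_c [m/s] for flow past a sphere of the given diameter [m]."""
    re = rho * velocity * diameter / mu          # Reynolds number
    sc = mu / (rho * diffusivity)                # Schmidt number
    sh = 2.0 + 0.6 * re**0.5 * sc**(1.0 / 3.0)   # Sherwood number
    return sh * diffusivity / diameter           # k_c = Sh * D_AB / d

# Hypothetical example: water vapour leaving a 2 mm droplet into air at ~25 °C
k_c = mass_transfer_coefficient(
    velocity=1.0,        # m/s
    diameter=2e-3,       # m
    rho=1.18,            # kg/m^3 (air)
    mu=1.85e-5,          # Pa·s  (air)
    diffusivity=2.6e-5,  # m^2/s (water vapour in air)
)
print(f"k_c ≈ {k_c:.3e} m/s")
```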
Analogies between heat, mass, and momentum transfer
There are notable similarities in the commonly used approximate differential equations for momentum, heat, and mass transfer. The molecular transfer equations of Newton's law for fluid momentum at low Reynolds number (Stokes flow), Fourier's law for heat, and Fick's law for mass are very similar, since they are all linear approximations to transport of conserved quantities in a flow field.
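The analogy can be made explicit by writing the three one-dimensional molecular flux laws side by side (standard textbook forms, added here for illustration):

```latex
% One-dimensional molecular flux laws (standard textbook forms):
\tau_{yx} = -\mu \,\frac{d u_x}{d y}      % Newton's law of viscosity (momentum flux)
\qquad
q_y = -k \,\frac{d T}{d y}                % Fourier's law (heat flux)
\qquad
J_{A,y} = -D_{AB} \,\frac{d c_A}{d y}     % Fick's law (mass flux)
```

Each flux is proportional to minus the gradient of the transported quantity, with the viscosity μ, thermal conductivity k, and diffusivity D_AB playing analogous roles.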
At higher Reynolds number, the analogy between mass and heat transfer and momentum transfer becomes less useful due to the nonlinearity of the Navier-Stokes equation (or more fundamentally, the general momentum conservation equation), but the analogy between heat and mass transfer remains good. A great deal of effort has been devoted to developing analogies among these three transport processes so as to allow prediction of one from any of the others.
References
See also
Crystal growth
Heat transfer
Fick's laws of diffusion
Distillation column
McCabe-Thiele method
Vapor-Liquid Equilibrium
Liquid-liquid extraction
Separation process
Binary star
Type Ia supernova
Thermodiffusion
Accretion (astrophysics)
Transport phenomena
Mechanical engineering
Heating, ventilation, and air conditioning | Mass transfer | [
"Physics",
"Chemistry",
"Engineering"
] | 851 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Chemical engineering",
"Mechanical engineering"
] |
20,648 | https://en.wikipedia.org/wiki/Melting | Melting, or fusion, is a physical process that results in the phase transition of a substance from a solid to a liquid. This occurs when the internal energy of the solid increases, typically by the application of heat or pressure, which increases the substance's temperature to the melting point. At the melting point, the ordering of ions or molecules in the solid breaks down to a less ordered state, and the solid melts to become a liquid.
Substances in the molten state generally have reduced viscosity as the temperature increases. An exception to this principle is elemental sulfur, whose viscosity increases in the range of 130 °C to 190 °C due to polymerization.
Some organic compounds melt through mesophases, states of partial order between solid and liquid.
First order phase transition
From a thermodynamics point of view, at the melting point the change in Gibbs free energy ∆G of the substances is zero, but there are non-zero changes in the enthalpy (H) and the entropy (S), known respectively as the enthalpy of fusion (or latent heat of fusion) and the entropy of fusion. Melting is therefore classified as a first-order phase transition. Melting occurs when the Gibbs free energy of the liquid becomes lower than the solid for that material. The temperature at which this occurs is dependent on the ambient pressure.
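Because ΔG vanishes at the melting point, the melting temperature follows directly from the enthalpy and entropy of fusion. A short worked statement of this (standard thermodynamics, added for illustration; the numerical values for water are typical textbook figures):

```latex
% At the melting point the solid and liquid phases are in equilibrium:
\Delta G_{\mathrm{fus}} = \Delta H_{\mathrm{fus}} - T_m \,\Delta S_{\mathrm{fus}} = 0
\quad\Longrightarrow\quad
T_m = \frac{\Delta H_{\mathrm{fus}}}{\Delta S_{\mathrm{fus}}}
% e.g. for water: (6.01 kJ/mol) / (22.0 J/(mol K)) ≈ 273 K
```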
Low-temperature helium is the only known exception to the general rule. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.8 K. This means that, at appropriate constant pressures, heat must be removed from these substances in order to melt them.
Criteria
Among the theoretical criteria for melting, the Lindemann and Born criteria are those most frequently used as a basis to analyse the melting conditions.
The Lindemann criterion states that melting occurs because of "vibrational instability": crystals melt when the average amplitude of the thermal vibrations of the atoms becomes relatively high compared with interatomic distances, i.e. when ⟨δu²⟩^(1/2) > δL·Rs, where δu is the atomic displacement, the Lindemann parameter δL ≈ 0.20–0.25 and Rs is one-half of the inter-atomic distance. The "Lindemann melting criterion" is supported by experimental data both for crystalline materials and for glass-liquid transitions in amorphous materials.
The Born criterion is based on a rigidity catastrophe caused by the vanishing elastic shear modulus, i.e. when the crystal no longer has sufficient rigidity to mechanically withstand the load, it becomes liquid.
Supercooling
Under a standard set of conditions, the melting point of a substance is a characteristic property. The melting point is often equal to the freezing point. However, under carefully created conditions, supercooling, or superheating past the melting or freezing point can occur. Water on a very clean glass surface will often supercool several degrees below the freezing point without freezing. Fine emulsions of pure water have been cooled to −38 °C without nucleation to form ice. Nucleation occurs due to fluctuations in the properties of the material. If the material is kept still there is often nothing (such as physical vibration) to trigger this change, and supercooling (or superheating) may occur. Thermodynamically, the supercooled liquid is in the metastable state with respect to the crystalline phase, and it is likely to crystallize suddenly.
Glasses
Glasses are amorphous solids, which are usually fabricated when the molten material cools very rapidly to below its glass transition temperature, without sufficient time for a regular crystal lattice to form. Solids are characterised by a high degree of connectivity between their molecules, and fluids have lower connectivity of their structural blocks. Melting of a solid material can also be considered as a percolation via broken connections between particles, e.g. connecting bonds. In this approach, melting of an amorphous material occurs when the broken bonds form a percolation cluster, with Tg dependent on quasi-equilibrium thermodynamic parameters of bonds, e.g. on the enthalpy (Hd) and entropy (Sd) of formation of bonds in a given system at given conditions:
Tg = Hd / [Sd + R ln((1 − fc) / fc)]
where fc is the percolation threshold and R is the universal gas constant.
Although Hd and Sd are not true equilibrium thermodynamic parameters and can depend on the cooling rate of a melt, they can be found from available experimental data on viscosity of amorphous materials.
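A minimal numerical sketch of the percolation expression given above (added for illustration). The bond enthalpy and entropy used below are hypothetical placeholder values, and fc ≈ 0.15 is a commonly quoted three-dimensional (Scher–Zallen) percolation threshold.

```python
import math

R = 8.314  # J/(mol·K), universal gas constant

def glass_transition_temperature(h_d, s_d, f_c):
    """T_g = H_d / (S_d + R * ln((1 - f_c) / f_c)),
    the percolation expression quoted above."""
    return h_d / (s_d + R * math.log((1.0 - f_c) / f_c))

# Hypothetical bond enthalpy/entropy and an assumed 3-D percolation threshold:
t_g = glass_transition_temperature(h_d=200e3, s_d=180.0, f_c=0.15)
print(f"T_g ≈ {t_g:.0f} K")
```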
Even below its melting point, quasi-liquid films can be observed on crystalline surfaces. The thickness of the film is temperature-dependent. This effect is common for all crystalline materials. This pre-melting shows its effects in e.g. frost heave, the growth of snowflakes, and, taking grain boundary interfaces into account, maybe even in the movement of glaciers.
Related concept
In ultrashort pulse physics, a so-called nonthermal melting may take place. It occurs not because of the increase of the atomic kinetic energy, but because of changes of the interatomic potential due to excitation of electrons. Since electrons are acting like a glue sticking atoms together, heating electrons by a femtosecond laser alters the properties of this "glue", which may break the bonds between the atoms and melt a material even without an increase of the atomic temperature.
In genetics, melting DNA means to separate the double-stranded DNA into two single strands by heating or the use of chemical agents, as is done in the polymerase chain reaction.
Table
See also
List of chemical elements providing melting points
Phase diagram
Zone melting
References
External links
Phase transitions
Materials science
Thermodynamics | Melting | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,185 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Phases of matter",
"Critical phenomena",
"Materials science",
"Thermodynamics",
"nan",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
20,728 | https://en.wikipedia.org/wiki/Mathematical%20formulation%20of%20quantum%20mechanics | The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. Such are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L2 space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space.
These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables.
Prior to the development of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of differential geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the development of quantum mechanics (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space.
History of the formalism
The "old quantum theory" and the need for new mathematics
In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called the Planck constant in his honor.
In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons.
All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of the Planck constant were actually allowed. The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time.
In 1923, de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system.
The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity.
The "new quantum theory"
Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent.
Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization.
Already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics – the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form.
Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in many types of generalizations of the field.
The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations.
Later developments
The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases.
Path integral formulation
Phase-space formulation of quantum mechanics & geometric quantization
quantum field theory in curved spacetime
axiomatic, algebraic and constructive quantum field theory
C*-algebra formalism
Generalized statistical model of quantum mechanics
A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself.
Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. The issue of hidden variables has become in part an experimental issue with the help of quantum optics.
Postulates of quantum mechanics
A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a phase space formulated by symplectic manifold, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states, observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible, to map this Hilbert-space picture to a phase space formulation, invertibly. See below.)
The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms.
Description of the state of a system
Each isolated physical system is associated with a (topologically) separable complex Hilbert space H with inner product ⟨φ, ψ⟩.
Separability is a mathematically convenient hypothesis, with the physical interpretation that the state is uniquely determined by countably many observations. Quantum states can be identified with equivalence classes in H, where two vectors (of length 1) represent the same state if they differ only by a phase factor. As such, quantum states form a ray in projective Hilbert space, not a vector. Many textbooks fail to make this distinction, which could be partly a result of the fact that the Schrödinger equation itself involves Hilbert-space "vectors", with the result that the imprecise use of "state vector" rather than ray is very difficult to avoid.
Accompanying Postulate I is the composite system postulate:
In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; Instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; Such quantum state is known as a mixed state. The density operator of a mixed state is a trace class, nonnegative (positive semi-definite) self-adjoint operator normalized to be of trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see purification theorem).
In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as ρ = Σk pk ρk(A) ⊗ ρk(B), where pk ≥ 0 and Σk pk = 1. If there is only a single non-zero pk, then the state can be expressed just as ρ = ρ(A) ⊗ ρ(B) and is called simply separable or product state.
Measurement on a system
Description of physical quantities
Physical observables are represented by Hermitian operators on H. Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. If the spectrum of the observable is discrete, then the possible results are quantized.
Results of measurement
By spectral theory, we can associate a probability measure to the values of an observable A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. The expectation value (in the sense of probability theory) of the observable A for the system in a state represented by the unit vector ψ ∈ H is ⟨ψ, Aψ⟩. If we represent the state ψ in the basis formed by the eigenvectors of A, then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue.
For a mixed state ρ, the expected value of A in the state ρ is tr(Aρ), and the probability of obtaining an eigenvalue an in a discrete, nondegenerate spectrum of the corresponding observable A is given by tr(|an⟩⟨an| ρ) = ⟨an| ρ |an⟩.
If the eigenvalue an has degenerate, orthonormal eigenvectors {an,1, an,2, …, an,m}, then the projection operator Pn onto the eigensubspace can be defined as the identity operator in the eigensubspace:
Pn = |an,1⟩⟨an,1| + |an,2⟩⟨an,2| + ⋯ + |an,m⟩⟨an,m|, and then the probability of obtaining an is Prob(an) = tr(Pn ρ).
Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics.
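As a concrete finite-dimensional illustration of the Born rule stated above (an added sketch, not part of the article; the observable and state are arbitrary choices), outcome probabilities are the squared moduli of the components of the state along the eigenvectors of the observable, and the expectation value is ⟨ψ, Aψ⟩:

```python
import numpy as np

# Hermitian observable on C^2 (here: the Pauli-X matrix) and a normalized state.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)
psi = np.array([1.0, 0.0], dtype=complex)        # |0> in the computational basis

eigvals, eigvecs = np.linalg.eigh(A)             # spectral decomposition of A

# Born rule: P(a_n) = |<a_n|psi>|^2 for each eigenvalue a_n.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
expectation = np.real(psi.conj() @ A @ psi)

for a, p in zip(eigvals, probs):
    print(f"P(A = {a:+.0f}) = {p:.2f}")          # 0.50 and 0.50 for this state
print(f"<A> = {expectation:.2f}")                # 0.00
assert np.isclose(probs.sum(), 1.0)              # probabilities sum to one
assert np.isclose(expectation, probs @ eigvals)  # <A> = sum_n a_n P(a_n)
```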
Effect of measurement on the state
When a measurement is performed, only one result is obtained (according to some interpretations of quantum mechanics). This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value assuming the second immediately follows the first. Therefore, the state vector must change as a result of measurement, and collapse onto the eigensubspace associated with the eigenvalue measured.
For a mixed state ρ, after obtaining an eigenvalue an in a discrete, nondegenerate spectrum of the corresponding observable A, the updated state is given by ρ′ = Pn ρ Pn / tr(Pn ρ), with Pn = |an⟩⟨an|. If the eigenvalue an has degenerate, orthonormal eigenvectors {an,1, …, an,m}, then the projection operator onto the eigensubspace is Pn = |an,1⟩⟨an,1| + ⋯ + |an,m⟩⟨an,m|.
Postulate II.c is sometimes called the "state update rule" or "collapse rule"; together with the Born rule (Postulates II.a and II.b), they form a complete representation of measurements, and are sometimes collectively called the measurement postulate(s).
Note that the projection-valued measures (PVM) described in the measurement postulate(s) can be generalized to positive operator-valued measures (POVM), which is the most general kind of measurement in quantum mechanics. A POVM can be understood as the effect on a component subsystem when a PVM is performed on a larger, composite system (see Naimark's dilation theorem).
Time evolution of a system
Though it is possible to derive the Schrödinger equation, which describes how a state vector evolves in time, most texts assert the equation as a postulate. Common derivations include using the de Broglie hypothesis or path integrals.
Equivalently, the time evolution postulate can be stated as:
For a closed system in a mixed state ρ, the time evolution is ρ(t) = U(t) ρ(0) U(t)†, where U(t) is the unitary time-evolution operator.
The evolution of an open quantum system can be described by quantum operations (in an operator sum formalism) and quantum instruments, and generally does not have to be unitary.
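A minimal numerical sketch of the closed-system evolution postulate above (added for illustration; the Hamiltonian, the time, and the convention ħ = 1 are arbitrary assumptions): a pure state is propagated by the unitary U(t) = exp(−iHt/ħ), and a density matrix by ρ → U ρ U†.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # work in natural units
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)   # an arbitrary Hermitian Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)   # initial pure state
rho0 = np.outer(psi0, psi0.conj())           # corresponding density matrix

t = 0.7
U = expm(-1j * H * t / hbar)                 # unitary propagator U(t)

psi_t = U @ psi0                             # |psi(t)> = U(t)|psi(0)>
rho_t = U @ rho0 @ U.conj().T                # rho(t) = U rho(0) U^dagger

assert np.isclose(np.linalg.norm(psi_t), 1.0)      # norm is preserved
assert np.isclose(np.trace(rho_t).real, 1.0)       # trace is preserved
print(np.round(np.abs(psi_t) ** 2, 3))             # level populations at time t
```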
Other implications of the postulates
Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily due to Wigner's theorem (supersymmetry is another matter entirely).
Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states.
One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article.
Recent research has shown that the composite system postulate (tensor product postulate) can be derived from the state postulate (Postulate I) and the measurement postulates (Postulates II); Moreover, it has also been shown that the measurement postulates (Postulates II) can be derived from "unitary quantum mechanics", which includes only the state postulate (Postulate I), the composite system postulate (tensor product postulate) and the unitary evolution postulate (Postulate III).
Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle, see below.
Spin
In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position and time as continuous variables, ψ = ψ(r, t). For spin wavefunctions the spin is an additional discrete variable: ψ = ψ(r, t, σ), where σ takes one of the 2S + 1 allowed spin-projection values of a particle of spin S.
That is, the state of a single particle with spin S is represented by a (2S + 1)-component spinor of complex-valued wave functions.
Two classes of particles with very different behaviour are bosons, which have integer spin (S = 0, 1, 2, …), and fermions possessing half-integer spin (S = 1/2, 3/2, …).
Symmetrization postulate
In quantum mechanics, two particles can be distinguished from one another using two methods. By performing a measurement of intrinsic properties of each particle, particles of different types can be distinguished. Otherwise, if the particles are identical, their trajectories can be tracked which distinguishes the particles based on the locality of each particle. While the second method is permitted in classical mechanics, (i.e. all classical particles are treated with distinguishability), the same cannot be said for quantum mechanical particles since the process is infeasible due to the fundamental uncertainty principles that govern small scales. Hence the requirement of indistinguishability of quantum particles is presented by the symmetrization postulate. The postulate is applicable to a system of bosons or fermions, for example, in predicting the spectra of helium atom. The postulate, explained in the following sections, can be stated as follows:
Exceptions can occur when the particles are constrained to two spatial dimensions where existence of particles known as anyons are possible which are said to have a continuum of statistical properties spanning the range between fermions and bosons. The connection between behaviour of identical particles and their spin is given by spin statistics theorem.
It can be shown that two particles localized in different regions of space can still be represented using a symmetrized/antisymmetrized wavefunction and that independent treatment of these wavefunctions gives the same result. Hence the symmetrization postulate is applicable in the general case of a system of identical particles.
Exchange Degeneracy
In a system of identical particles, let P be the exchange operator, which acts on the wavefunction by swapping the arguments of a pair of particles: P ψ(x1, x2) = ψ(x2, x1).
If a physical system of identical particles is given, the wavefunctions of all the particles may be well known from observation, but they cannot be labelled, i.e. assigned to individual particles. Thus, the above exchanged wavefunction represents the same physical state as the original state, which implies that the wavefunction is not unique. This is known as exchange degeneracy.
More generally, consider a linear combination of such exchange-related states. For the best representation of the physical system, we expect this to be an eigenvector of P, since the exchange operator is not expected to give completely different vectors in projective Hilbert space. Since P² = 1, the possible eigenvalues of P are +1 and −1. The states of an identical-particle system are represented as symmetric for the +1 eigenvalue or antisymmetric for the −1 eigenvalue.
The explicit symmetric/antisymmetric form of the wavefunction is constructed using a symmetrizer or antisymmetrizer operator. Particles that form symmetric states are called bosons and those that form antisymmetric states are called fermions. The relation of spin to this classification is given by the spin-statistics theorem, which shows that integer-spin particles are bosons and half-integer-spin particles are fermions.
Pauli exclusion principle
The property of spin relates to another basic property concerning systems of identical particles: the Pauli exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation one must postulate that for the transposition of any two of the particles one always should have
ψ(…, xi, …, xj, …) = (−1)^(2S) ψ(…, xj, …, xi, …)
i.e., on transposition of the arguments of any two particles the wavefunction should reproduce itself, apart from a prefactor (−1)^(2S) which is +1 for bosons, but (−1) for fermions.
Electrons are fermions with S = 1/2; quanta of light (photons) are bosons with S = 1.
Due to the form of anti-symmetrized wavefunction:
if the wavefunction of each particle is completely determined by a set of quantum numbers, then two fermions cannot share the same set of quantum numbers, since the resulting function cannot be anti-symmetrized (i.e. the above formula gives zero). The same cannot be said of bosons, since their wavefunction is:
where is the number of particles with same wavefunction.
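The consequence described above can be made concrete with two orthonormal single-particle orbitals: the antisymmetrized (fermionic) two-particle amplitude vanishes when both particles occupy the same orbital, whereas the symmetrized (bosonic) one does not. The discrete orbitals below are an illustrative assumption, not taken from the article.

```python
import numpy as np

def two_particle_state(phi_a, phi_b, sign):
    """Return the (anti)symmetrized two-particle amplitude tensor
    psi(x1, x2) = [phi_a(x1)phi_b(x2) + sign*phi_b(x1)phi_a(x2)] / sqrt(2)."""
    return (np.outer(phi_a, phi_b) + sign * np.outer(phi_b, phi_a)) / np.sqrt(2)

# Two orthonormal single-particle orbitals on a 3-point grid.
phi_1 = np.array([1.0, 0.0, 0.0])
phi_2 = np.array([0.0, 1.0, 0.0])

fermion = two_particle_state(phi_1, phi_2, sign=-1)   # antisymmetric
boson = two_particle_state(phi_1, phi_2, sign=+1)     # symmetric

# Pauli exclusion: the same orbital twice makes the antisymmetric state vanish.
assert np.allclose(two_particle_state(phi_1, phi_1, sign=-1), 0.0)
assert not np.allclose(two_particle_state(phi_1, phi_1, sign=+1), 0.0)

# Exchanging the two particle labels flips the sign only for the fermionic state.
assert np.allclose(fermion.T, -fermion)
assert np.allclose(boson.T, boson)
```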
Exceptions for symmetrization postulate
In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories "supersymmetric" theories also exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in two spatial dimensions can one construct entities where the fermionic prefactor (−1) is replaced by an arbitrary complex number with magnitude 1; such particles are called anyons. In relativistic quantum mechanics, the spin-statistics theorem can prove, under a certain set of assumptions, that integer-spin particles are classified as bosons and half-integer-spin particles as fermions. Anyons, which form neither symmetric nor antisymmetric states, are said to have fractional spin.
Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. Especially, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of the two properties.
Mathematical structure of quantum mechanics
Pictures of dynamics
Summary:
Representations
The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, so then with a more intuitive link to the classical limit thereof. This picture also simplifies considerations
of quantization, the deformation extension from classical to quantum mechanics.
The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent.
Time as an operator
The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H − E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H − E (this requires the use of a rigged Hilbert space and a renormalization of the norm).
This is related to the quantization of constrained systems and quantization of gauge theories. It
is also possible to formulate a quantum theory of "events" where time becomes an observable.
Problem of measurement
The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement. The von Neumann description of quantum measurement of an observable , when the system is prepared in a pure state is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain):
Let have spectral resolution where is the resolution of the identity (also called projection-valued measure) associated with . Then the probability of the measurement outcome lying in an interval of is . In other words, the probability is obtained by integrating the characteristic function of against the countably additive measure
If the measured value is contained in , then immediately after the measurement, the system will be in the (generally non-normalized) state . If the measured value does not lie in , replace by its complement for the above state.
For example, suppose the state space is the -dimensional complex Hilbert space and is a Hermitian matrix with eigenvalues , with corresponding eigenvectors . The projection-valued measure associated with , , is then
where is a Borel set containing only the single eigenvalue . If the system is prepared in state
Then the probability of a measurement returning the value can be calculated by integrating the spectral measure
over . This gives trivially
The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate.
A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections
by a finite set of positive operators
whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is . Instead of collapsing to the (unnormalized) state
after the measurement, the system now will be in the state
Since the operators need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds.
The same formulation applies to general mixed states.
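A small sketch of a POVM on a qubit (added for illustration; the particular operators and state are assumptions): the elements are positive operators that sum to the identity without being orthogonal projections, outcome probabilities are tr(Ei ρ), and one admissible state update uses the Kraus operators Mi = √Ei.

```python
import numpy as np
from scipy.linalg import sqrtm

# A two-outcome "unsharp" POVM on a qubit: E1 + E2 = I, but E1, E2 are not projectors.
E1 = np.array([[0.8, 0.0],
               [0.0, 0.3]], dtype=complex)
E2 = np.eye(2) - E1
assert np.allclose(E1 + E2, np.eye(2))

rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)         # the pure state |+><+|

probs = [np.trace(E @ rho).real for E in (E1, E2)]  # p_i = tr(E_i rho)
print(np.round(probs, 3))                           # probabilities sum to 1

# One admissible state-update choice: Kraus operators M_i = sqrt(E_i).
M1 = sqrtm(E1)
rho_post = M1 @ rho @ M1.conj().T / probs[0]        # state after outcome 1
assert np.isclose(np.trace(rho_post).real, 1.0)
```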
In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary. However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace.
In any case it seems that the above-mentioned problems can only be resolved if the time evolution included not only the quantum system, but also, and essentially, the classical measurement apparatus (see above).
List of mathematical tools
Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new.
The main tools include:
linear algebra: complex numbers, eigenvectors, eigenvalues
functional analysis: Hilbert spaces, linear operators, spectral theory
differential equations: partial differential equations, separation of variables, ordinary differential equations, Sturm–Liouville theory, eigenfunctions
harmonic analysis: Fourier transforms
See also
List of mathematical topics in quantum theory
Symmetry in quantum mechanics
Notes
References
Further reading
Mathematical physics
History of physics | Mathematical formulation of quantum mechanics | [
"Physics",
"Mathematics"
] | 6,210 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics",
"Quantum mechanics"
] |
20,821 | https://en.wikipedia.org/wiki/Miller%E2%80%93Urey%20experiment | The Miller–Urey experiment, or Miller experiment, was an experiment in chemical synthesis carried out in 1952 that simulated the conditions thought at the time to be present in the atmosphere of the early, prebiotic Earth. It is seen as one of the first successful experiments demonstrating the synthesis of organic compounds from inorganic constituents in an origin of life scenario. The experiment used methane (CH4), ammonia (NH3), hydrogen (H2), in ratio 2:2:1, and water (H2O). Applying an electric arc (simulating lightning) resulted in the production of amino acids.
It is regarded as a groundbreaking experiment, and the classic experiment investigating the origin of life (abiogenesis). It was performed in 1952 by Stanley Miller, supervised by Nobel laureate Harold Urey at the University of Chicago, and published the following year. At the time, it supported Alexander Oparin's and J. B. S. Haldane's hypothesis that the conditions on the primitive Earth favored chemical reactions that synthesized complex organic compounds from simpler inorganic precursors.
After Miller's death in 2007, scientists examining sealed vials preserved from the original experiments were able to show that more amino acids were produced in the original experiment than Miller was able to report with paper chromatography. While evidence suggests that Earth's prebiotic atmosphere might have typically had a composition different from the gas used in the Miller experiment, prebiotic experiments continue to produce racemic mixtures of simple-to-complex organic compounds, including amino acids, under varying conditions. Moreover, researchers have shown that transient, hydrogen-rich atmospheres – conducive to Miller-Urey synthesis – would have occurred after large asteroid impacts on early Earth.
History
Foundations of organic synthesis and the origin of life
Until the 19th century, there was considerable acceptance of the theory of spontaneous generation, the idea that "lower" animals, such as insects or rodents, arose from decaying matter. However, several experiments in the 19th century – particularly Louis Pasteur's swan neck flask experiment in 1859 – disproved the theory that life arose from decaying matter. Charles Darwin published On the Origin of Species that same year, describing the mechanism of biological evolution. While Darwin never publicly wrote about the first organism in his theory of evolution, in a letter to Joseph Dalton Hooker, he speculated: "But if (and oh what a big if) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts, light, heat, electricity etcetera present, that a protein compound was chemically formed, ready to undergo still more complex changes [...]"
At this point, it was known that organic molecules could be formed from inorganic starting materials, as Friedrich Wöhler had described Wöhler synthesis of urea from ammonium cyanate in 1828. Several other early seminal works in the field of organic synthesis followed, including Alexander Butlerov's synthesis of sugars from formaldehyde and Adolph Strecker's synthesis of the amino acid alanine from acetaldehyde, ammonia, and hydrogen cyanide. In 1913, Walther Löb synthesized amino acids by exposing formamide to silent electric discharge, so scientists were beginning to produce the building blocks of life from simpler molecules, but these were not intended to simulate any prebiotic scheme or even considered relevant to origin of life questions.
But the scientific literature of the early 20th century contained speculations on the origin of life. In 1903, physicist Svante Arrhenius hypothesized that the first microscopic forms of life, driven by the radiation pressure of stars, could have arrived on Earth from space in the panspermia hypothesis. In the 1920s, Leonard Troland wrote about a primordial enzyme that could have formed by chance in the primitive ocean and catalyzed reactions, and Hermann J. Muller suggested that the formation of a gene with catalytic and autoreplicative properties could have set evolution in motion. Around the same time, Alexander Oparin's and J. B. S. Haldane's "Primordial soup" ideas were emerging, which hypothesized that a chemically-reducing atmosphere on early Earth would have been conducive to organic synthesis in the presence of sunlight or lightning, gradually concentrating the ocean with random organic molecules until life emerged. In this way, frameworks for the origin of life were coming together, but at the mid-20th century, hypotheses lacked direct experimental evidence.
Stanley Miller and Harold Urey
At the time of the Miller–Urey experiment, Harold Urey was a Professor of Chemistry at the University of Chicago who had a well-renowned career, including receiving the Nobel Prize in Chemistry in 1934 for his isolation of deuterium and leading efforts to use gaseous diffusion for uranium isotope enrichment in support of the Manhattan Project. In 1952, Urey postulated that the high temperatures and energies associated with large impacts in Earth's early history would have provided an atmosphere of methane (CH4), water (H2O), ammonia (NH3), and hydrogen (H2), creating the reducing environment necessary for the Oparin-Haldane "primordial soup" scenario.
Stanley Miller arrived at the University of Chicago in 1951 to pursue a PhD under nuclear physicist Edward Teller, another prominent figure in the Manhattan Project. Miller began to work on how different chemical elements were formed in the early universe, but, after a year of minimal progress, Teller was to leave for California to establish Lawrence Livermore National Laboratory and further nuclear weapons research. Miller, having seen Urey lecture on his 1952 paper, approached him about the possibility of a prebiotic synthesis experiment. While Urey initially discouraged Miller, he agreed to allow Miller to try for a year. By February 1953, Miller had mailed a manuscript as sole author reporting the results of his experiment to Science. Urey refused to be listed on the manuscript because he believed his status would cause others to underappreciate Miller's role in designing and conducting the experiment, and so encouraged Miller to take full credit for the work. Despite this, the set-up is still most commonly referred to by both their names. After not hearing from Science for a few weeks, a furious Urey wrote to the editorial board demanding an answer, stating, "If Science does not wish to publish this promptly we will send it to the Journal of the American Chemical Society." Miller's manuscript was eventually published in Science in May 1953.
Experiment
In the original 1952 experiment, methane (CH4), ammonia (NH3), and hydrogen (H2) were all sealed together in a 2:2:1 ratio (1 part H2) inside a sterile 5-L glass flask connected to a 500-mL flask half-full of water (H2O). The gas chamber was intended to represent Earth's prebiotic atmosphere, while the water simulated an ocean. The water in the smaller flask was boiled such that water vapor entered the gas chamber and mixed with the "atmosphere". A continuous electrical spark was discharged between a pair of electrodes in the larger flask. The spark passed through the mixture of gases and water vapor, simulating lightning. A condenser below the gas chamber allowed aqueous solution to accumulate into a U-shaped trap at the bottom of the apparatus, which was sampled.
After a day, the solution that had collected at the trap was pink, and after a week of continuous operation the solution was deep red and turbid, which Miller attributed to organic matter adsorbed onto colloidal silica. The boiling flask was then removed, and mercuric chloride (a poison) was added to prevent microbial contamination. The reaction was stopped by adding barium hydroxide and sulfuric acid, and evaporated to remove impurities. Using paper chromatography, Miller identified five amino acids present in the solution: glycine, α-alanine and β-alanine were positively identified, while aspartic acid and α-aminobutyric acid (AABA) were less certain, due to the spots being faint.
Materials and samples from the original experiments remained in 2017 under the care of Miller's former student Jeffrey Bada, a professor at the Scripps Institution of Oceanography, UC San Diego, who also conducts origin-of-life research. The apparatus used to conduct the experiment was later put on display at the Denver Museum of Nature and Science.
Chemistry of experiment
In 1957 Miller published research describing the chemical processes occurring inside his experiment. Hydrogen cyanide (HCN) and aldehydes (e.g., formaldehyde) were demonstrated to form as intermediates early on in the experiment due to the electric discharge. This agrees with current understanding of atmospheric chemistry, as HCN can generally be produced from reactive radical species in the atmosphere that arise when CH4 and nitrogen break apart under ultraviolet (UV) light. Similarly, aldehydes can be generated in the atmosphere from radicals resulting from CH4 and H2O decomposition and other intermediates like methanol. Several energy sources in planetary atmospheres can induce these dissociation reactions and subsequent hydrogen cyanide or aldehyde formation, including lightning, ultraviolet light, and galactic cosmic rays.
For example, here is a set of photochemical reactions of species in the Miller-Urey atmosphere that can result in formaldehyde:
H2O + hv → H + OH
CH4 + OH → CH3 + H2O
CH3 + OH → CH3OH
CH3OH + hv → CH2O (formaldehyde) + H2
A photochemical path to HCN from NH3 and CH4 is:
NH3 + hv → NH2 + H
NH2 + CH4 → NH3 + CH3
NH2 + CH3 → CH5N
CH5N + hv → HCN + 2H2
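As a quick consistency check on the two photochemical schemes above, the sketch below (Python) tallies atoms on both sides of each listed step; the small formula parser and the reaction list are written only for this illustration, and photons (hv) are omitted since they carry no atoms.

import re

def atoms(formula):
    """Count atoms in a simple formula such as 'CH3OH' or 'H2O'."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def side_total(species):
    total = {}
    for coeff, formula in species:
        for elem, n in atoms(formula).items():
            total[elem] = total.get(elem, 0) + coeff * n
    return total

# (reactants, products) for each step above; hv carries no atoms and is omitted.
reactions = [
    ([(1, "H2O")],             [(1, "H"), (1, "OH")]),
    ([(1, "CH4"), (1, "OH")],  [(1, "CH3"), (1, "H2O")]),
    ([(1, "CH3"), (1, "OH")],  [(1, "CH3OH")]),
    ([(1, "CH3OH")],           [(1, "CH2O"), (1, "H2")]),
    ([(1, "NH3")],             [(1, "NH2"), (1, "H")]),
    ([(1, "NH2"), (1, "CH4")], [(1, "NH3"), (1, "CH3")]),
    ([(1, "NH2"), (1, "CH3")], [(1, "CH5N")]),
    ([(1, "CH5N")],            [(1, "HCN"), (2, "H2")]),
]

for lhs, rhs in reactions:
    assert side_total(lhs) == side_total(rhs), (lhs, rhs)
print("All listed steps are atom-balanced.")

Running it confirms that every step shown conserves C, H, N and O.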
Other active intermediate compounds (acetylene, cyanoacetylene, etc.) have been detected in the aqueous solution of Miller–Urey-type experiments, but the immediate HCN and aldehyde production, the production of amino acids accompanying the plateau in HCN and aldehyde concentrations, and slowing of amino acid production rate during HCN and aldehyde depletion provided strong evidence that Strecker amino acid synthesis was occurring in the aqueous solution.
Strecker synthesis describes the reaction of an aldehyde, ammonia, and HCN to form a simple amino acid through an aminoacetonitrile intermediate:
CH2O + HCN + NH3 → NH2-CH2-CN (aminoacetonitrile) + H2O
NH2-CH2-CN + 2H2O → NH3 + NH2-CH2-COOH (glycine)
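Adding the two steps together (the aminoacetonitrile intermediate, one NH3 and one H2O cancel) gives the net reaction for glycine; this is simply the arithmetic combination of the equations above:

CH2O + HCN + H2O → NH2-CH2-COOH (glycine)

Both sides contain 2 C, 5 H, 1 N and 2 O atoms, so the net equation is balanced.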
Furthermore, formaldehyde molecules can condense with one another in water via Butlerov's reaction (the formose reaction) to produce various sugars such as ribose.
The experiments showed that simple organic compounds, including the building blocks of proteins and other macromolecules, can abiotically be formed from gases with the addition of energy.
Related experiments and follow-up work
Contemporary experiments
There were a few similar spark discharge experiments contemporaneous with Miller-Urey. An article in The New York Times (March 8, 1953) titled "Looking Back Two Billion Years" describes the work of Wollman M. MacNevin at Ohio State University, before the Miller Science paper was published in May 1953. MacNevin was passing 100,000 V sparks through methane and water vapor and produced "resinous solids" that were "too complex for analysis." Furthermore, K. A. Wilde submitted a manuscript to Science on December 15, 1952, before Miller submitted his paper to the same journal in February 1953. Wilde's work, published on July 10, 1953, used voltages up to only 600 V on a binary mixture of carbon dioxide (CO2) and water in a flow system and did not note any significant reduction products. According to some, the reports of these experiments explain why Urey was rushing Miller's manuscript through Science and threatening to submit to the Journal of the American Chemical Society.
By introducing an experimental framework to test prebiotic chemistry, the Miller–Urey experiment paved the way for future origin of life research. In 1961, Joan Oró produced milligrams of the nucleobase adenine from a concentrated solution of HCN and NH3 in water. Oró found that several amino acids were also formed from HCN and ammonia under those conditions. Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere. Other researchers also began using UV-photolysis in prebiotic schemes, as the UV flux would have been much higher on early Earth. For example, UV-photolysis of water vapor with carbon monoxide was found to yield various alcohols, aldehydes, and organic acids. In the 1970s, Carl Sagan used Miller-Urey-type reactions to synthesize and experiment with complex organic particles dubbed "tholins", which likely resemble particles formed in hazy atmospheres like that of Titan.
Modified Miller–Urey experiments
Much work has been done since the 1950s toward understanding how Miller-Urey chemistry behaves in various environmental settings. In 1983, testing different atmospheric compositions, Miller and another researcher repeated experiments with varying proportions of H2, H2O, N2, CO2 or CH4, and sometimes NH3. They found that the presence or absence of NH3 in the mixture did not significantly impact amino acid yield, as NH3 was generated from N2 during the spark discharge. Additionally, CH4 proved to be one of the most important atmospheric ingredients for high yields, likely due to its role in HCN formation. Much lower yields were obtained with more oxidized carbon species in place of CH4, but similar yields could be reached with a high H2/CO2 ratio. Thus, Miller-Urey reactions work in atmospheres of other compositions as well, depending on the ratio of reducing and oxidizing gases. More recently, Jeffrey Bada and H. James Cleaves, former graduate students of Miller, hypothesized that the production of nitrites, which destroy amino acids, in CO2- and N2-rich atmospheres may explain low amino acid yields. In a Miller-Urey setup with a less-reducing (CO2 + N2 + H2O) atmosphere, when they added calcium carbonate to buffer the aqueous solution and ascorbic acid to inhibit oxidation, yields of amino acids greatly increased, demonstrating that amino acids can still be formed in more neutral atmospheres under the right geochemical conditions. In a prebiotic context, they argued that seawater would likely still be buffered and ferrous iron could inhibit oxidation.
In 1999, after Miller suffered a stroke, he donated the contents of his laboratory to Bada. In an old cardboard box, Bada discovered unanalyzed samples from modified experiments that Miller had conducted in the 1950s. In a "volcanic" apparatus, Miller had amended an aspirating nozzle to shoot a jet of steam into the reaction chamber. Using high-performance liquid chromatography and mass spectrometry, Bada's lab analyzed old samples from a set of experiments Miller conducted with this apparatus and found some higher yields and a more diverse suite of amino acids. Bada speculated that injecting the steam into the spark could have split water into H and OH radicals, leading to more hydroxylated amino acids during Strecker synthesis. In a separate set of experiments, Miller added hydrogen sulfide (H2S) to the reducing atmosphere, and Bada's analyses of the products suggested order-of-magnitude higher yields, including some amino acids with sulfur moieties.
A 2021 work highlighted the importance of the high-energy free electrons present in the experiment. It is these electrons that produce ions and radicals, and represent an aspect of the experiment that needs to be better understood.
After comparing Miller–Urey experiments conducted in borosilicate glassware with those conducted in Teflon apparatuses, a 2021 paper suggests that the glass reaction vessel acts as a mineral catalyst, implicating silicate rocks as important surfaces in prebiotic Miller-Urey reactions.
Early Earth's prebiotic atmosphere
While there is a lack of geochemical observations to constrain the exact composition of the prebiotic atmosphere, recent models point to an early "weakly reducing" atmosphere; that is, early Earth's atmosphere was likely dominated by CO2 and N2 and not CH4 and NH3 as used in the original Miller–Urey experiment. This is explained, in part, by the chemical composition of volcanic outgassing. Geologist William Rubey was one of the first to compile data on gases emitted from modern volcanoes and concluded that they are rich in CO2, H2O, and likely N2, with varying amounts of H2, sulfur dioxide (SO2), and H2S. Therefore, if the redox state of Earth's mantle, which dictates the composition of outgassing, has been constant since formation, then the atmosphere of early Earth was likely weakly reducing, but there are some arguments for a more-reducing atmosphere for the first few hundred million years.
While the prebiotic atmosphere could have had a different redox condition than that of the Miller–Urey atmosphere, the modified Miller–Urey experiments described in the above section demonstrated that amino acids can still be abiotically produced in less-reducing atmospheres under specific geochemical conditions. Furthermore, harkening back to Urey's original hypothesis of a "post-impact" reducing atmosphere, a recent atmospheric modeling study has shown that an iron-rich impactor with a minimum mass around 4×10²⁰–5×10²¹ kg would be enough to transiently reduce the entire prebiotic atmosphere, resulting in a Miller-Urey-esque H2-, CH4-, and NH3-dominated atmosphere that persists for millions of years. Previous work has estimated from the lunar cratering record and composition of Earth's mantle that between four and seven such impactors reached the Hadean Earth.
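For a rough sense of scale, the back-of-the-envelope sketch below (Python) assumes, purely for illustration, that the impactor is metallic iron and that each mole of iron reduces one mole of water via Fe + H2O → FeO + H2; the published modeling involves far more detailed impact and atmospheric chemistry, so this is only an order-of-magnitude check.

# Back-of-the-envelope only; assumes a pure-iron impactor and the simple
# stoichiometry Fe + H2O -> FeO + H2. Real impact chemistry is more complex.
M_FE = 0.05585              # kg/mol, molar mass of iron
M_H2 = 0.002016             # kg/mol, molar mass of H2
ATMOSPHERE_MASS = 5.15e18   # kg, approximate mass of Earth's present atmosphere

for impactor_mass in (4e20, 5e21):          # kg, range quoted above
    mol_fe = impactor_mass / M_FE           # moles of iron delivered
    h2_mass = mol_fe * M_H2                 # 1 mol H2 per mol Fe
    print(f"{impactor_mass:.0e} kg impactor -> ~{h2_mass:.1e} kg H2 "
          f"(~{h2_mass / ATMOSPHERE_MASS:.0f}x the modern atmosphere's mass)")

Even the low end of the quoted mass range yields an amount of H2 comparable to a few times the mass of the present-day atmosphere, which is consistent with the idea that such an impact could transiently reduce the whole atmosphere.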
A large factor controlling the redox budget of early Earth's atmosphere is the rate of atmospheric escape of H2 after Earth's formation. Atmospheric escape, common to young, rocky planets, occurs when gases in the atmosphere have sufficient kinetic energy to overcome gravitational energy. It is generally accepted that the timescale of hydrogen escape is short enough such that H2 made up < 1% of the atmosphere of prebiotic Earth, but, in 2005, a hydrodynamic model of hydrogen escape predicted escape rates two orders of magnitude lower than previously thought, maintaining a hydrogen mixing ratio of 30%. A hydrogen-rich prebiotic atmosphere would have large implications for Miller-Urey synthesis in the Hadean and Archean, but later work suggests solutions in that model might have violated conservation of mass and energy. That said, during hydrodynamic escape, lighter molecules like hydrogen can "drag" heavier molecules with them through collisions, and recent modeling of xenon escape has pointed to a hydrogen atmospheric mixing ratio of at least 1% or higher at times during the Archean.
Taken together, these findings generally support the view that early Earth's atmosphere was weakly reducing, with transient instances of highly reducing compositions following large impacts.
Extraterrestrial sources of amino acids
Conditions similar to those of the Miller–Urey experiments are present in other regions of the Solar System, often substituting ultraviolet light for lightning as the energy source for chemical reactions. The Murchison meteorite that fell near Murchison, Victoria, Australia in 1969 was found to contain an amino acid distribution remarkably similar to Miller-Urey discharge products. Analysis of the organic fraction of the Murchison meteorite with Fourier-transform ion cyclotron resonance mass spectrometry detected over 10,000 unique compounds, albeit at very low (ppb–ppm) concentrations. In this way, the organic composition of the Murchison meteorite is seen as evidence of Miller-Urey synthesis outside Earth.
Comets and other icy outer-solar-system bodies are thought to contain large amounts of complex carbon compounds (such as tholins) formed by processes akin to Miller-Urey setups, darkening the surfaces of these bodies. Some argue that comets bombarding the early Earth could have provided a large supply of complex organic molecules along with the water and other volatiles; however, the very low concentrations of biologically relevant material, combined with uncertainty surrounding the survival of organic matter upon impact, make this difficult to determine.
Relevance to the origin of life
The Miller–Urey experiment was proof that the building blocks of life could be synthesized abiotically from gases, and introduced a new prebiotic chemistry framework through which to study the origin of life. Simulations of protein sequences present in the last universal common ancestor (LUCA), or the last shared ancestor of all extant species today, show an enrichment in simple amino acids that were available in the prebiotic environment according to Miller-Urey chemistry. This suggests that the genetic code from which all life evolved was rooted in a smaller suite of amino acids than those used today. Thus, while creationist arguments focus on the fact that Miller–Urey experiments have not generated all 22 genetically-encoded amino acids, this does not actually conflict with the evolutionary perspective on the origin of life.
Another common criticism is that the racemic (containing both L and D enantiomers) mixture of amino acids produced in a Miller–Urey experiment is not exemplary of abiogenesis theories, as life on Earth today uses almost exclusively L-amino acids. While it is true that Miller-Urey setups produce racemic mixtures, the origin of homochirality is a separate area in origin of life research.
Recent work demonstrates that magnetic mineral surfaces like magnetite can be templates for the enantioselective crystallization of chiral molecules, including RNA precursors, due to the chiral-induced spin selectivity (CISS) effect. Once an enantioselective bias is introduced, homochirality can then propagate through biological systems in various ways. In this way, enantioselective synthesis is not required of Miller-Urey reactions if other geochemical processes in the environment are introducing homochirality.
Finally, Miller-Urey and similar experiments primarily deal with the synthesis of monomers; polymerization of these building blocks to form peptides and other more complex structures is the next step of prebiotic chemistry schemes. Polymerization requires condensation reactions, which are thermodynamically unfavored in aqueous solutions because they expel water molecules. Scientists as far back as John Desmond Bernal in the late 1940s thus speculated that clay surfaces would play a large role in abiogenesis, as they might concentrate monomers. Several such models for mineral-mediated polymerization have emerged, such as the interlayers of layered double hydroxides like green rust over wet-dry cycles. Some scenarios for peptide formation have been proposed that are even compatible with aqueous solutions, such as the hydrophobic air-water interface and a novel "sulfide-mediated α-aminonitrile ligation" scheme, where amino acid precursors come together to form peptides. Polymerization of life's building blocks is an active area of research in prebiotic chemistry.
Amino acids identified
Below is a table of amino acids produced and identified in the "classic" 1952 experiment, as analyzed by Miller in 1952 and more recently by Bada and collaborators with modern mass spectrometry, the 2008 re-analysis of vials from the volcanic spark discharge experiment, and the 2010 re-analysis of vials from the H2S-rich spark discharge experiment. While not all proteinogenic amino acids have been produced in spark discharge experiments, it is generally accepted that early life used a simpler set of prebiotically-available amino acids.
References
External links
A simulation of the Miller–Urey Experiment along with a video Interview with Stanley Miller by Scott Ellis from CalSpace (UCSD)
Origin-Of-Life Chemistry Revisited: Reanalysis of famous spark-discharge experiments reveals a richer collection of amino acids were formed.
Miller–Urey experiment explained
Miller experiment with Lego bricks
"Stanley Miller's Experiment: Sparking the Building Blocks of Life" on PBS
The Miller-Urey experiment website
Details of 2008 re-analysis
Articles containing video clips
Biology experiments
Chemical synthesis of amino acids
Chemistry experiments
Origin of life
1952 in biology
1953 in biology
2008 in science | Miller–Urey experiment | [
"Chemistry",
"Biology"
] | 5,085 | [
"Biological hypotheses",
"Origin of life",
"nan"
] |
20,941 | https://en.wikipedia.org/wiki/Metabolic%20pathway | In biochemistry, a metabolic pathway is a linked series of chemical reactions occurring within a cell. The reactants, products, and intermediates of an enzymatic reaction are known as metabolites, which are modified by a sequence of chemical reactions catalyzed by enzymes. In most cases of a metabolic pathway, the product of one enzyme acts as the substrate for the next. However, side products are considered waste and removed from the cell.
Different metabolic pathways operate in different compartments within a eukaryotic cell, and the location of a pathway reflects its role in that compartment. For instance, the electron transport chain and oxidative phosphorylation take place in the mitochondrial membrane. In contrast, glycolysis, the pentose phosphate pathway, and fatty acid biosynthesis all occur in the cytosol of a cell.
There are two types of metabolic pathways that are characterized by their ability to either synthesize molecules with the utilization of energy (anabolic pathway), or break down complex molecules and release energy in the process (catabolic pathway).
The two pathways complement each other in that the energy released from one is used up by the other. The degradative process of a catabolic pathway provides the energy required to conduct the biosynthesis of an anabolic pathway. In addition to the two distinct metabolic pathways is the amphibolic pathway, which can be either catabolic or anabolic based on the need for or the availability of energy.
Pathways are required for the maintenance of homeostasis within an organism and the flux of metabolites through a pathway is regulated depending on the needs of the cell and the availability of the substrate. The end product of a pathway may be used immediately, initiate another metabolic pathway or be stored for later use. The metabolism of a cell consists of an elaborate network of interconnected pathways that enable the synthesis and breakdown of molecules (anabolism and catabolism).
Overview
Each metabolic pathway consists of a series of biochemical reactions that are connected by their intermediates: the products of one reaction are the substrates for subsequent reactions, and so on. Metabolic pathways are often considered to flow in one direction. Although all chemical reactions are technically reversible, conditions in the cell are often such that it is thermodynamically more favorable for flux to proceed in one direction of a reaction. For example, one pathway may be responsible for the synthesis of a particular amino acid, but the breakdown of that amino acid may occur via a separate and distinct pathway. One example of an exception to this "rule" is the metabolism of glucose. Glycolysis results in the breakdown of glucose, but several reactions in the glycolysis pathway are reversible and participate in the re-synthesis of glucose (gluconeogenesis).
Glycolysis was the first metabolic pathway discovered:
As glucose enters a cell, it is immediately phosphorylated by ATP to glucose 6-phosphate in the irreversible first step.
In times of excess lipid or protein energy sources, certain reactions in the glycolysis pathway may run in reverse to produce glucose 6-phosphate, which is then used for storage as glycogen or starch.
Metabolic pathways are often regulated by feedback inhibition.
Some metabolic pathways flow in a 'cycle' wherein each component of the cycle is a substrate for the subsequent reaction in the cycle, such as in the Krebs Cycle (see below).
Anabolic and catabolic pathways in eukaryotes often occur independently of each other, separated either physically by compartmentalization within organelles or separated biochemically by the requirement of different enzymes and co-factors.
Major metabolic pathways
Catabolic pathway (catabolism)
A catabolic pathway is a series of reactions that bring about a net release of energy in the form of a high energy phosphate bond formed with the energy carriers adenosine diphosphate (ADP) and guanosine diphosphate (GDP) to produce adenosine triphosphate (ATP) and guanosine triphosphate (GTP), respectively. The net reaction is, therefore, thermodynamically favorable, for it results in a lower free energy for the final products. A catabolic pathway is an exergonic system that produces chemical energy in the form of ATP, GTP, NADH, NADPH, FADH2, etc. from energy containing sources such as carbohydrates, fats, and proteins. The end products are often carbon dioxide, water, and ammonia. Coupled with an endergonic reaction of anabolism, the cell can synthesize new macromolecules using the original precursors of the anabolic pathway. An example of a coupled reaction is the phosphorylation of fructose-6-phosphate to form the intermediate fructose-1,6-bisphosphate by the enzyme phosphofructokinase accompanied by the hydrolysis of ATP in the pathway of glycolysis. The resulting chemical reaction within the metabolic pathway is highly thermodynamically favorable and, as a result, irreversible in the cell.
Fructose-6-phosphate + ATP → Fructose-1,6-bisphosphate + ADP
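To see why this coupling makes the step favorable, approximate textbook-style standard free-energy changes (the values below are illustrative and are not taken from the sources cited here) can simply be added:

Fructose-6-phosphate + Pi → Fructose-1,6-bisphosphate + H2O (ΔG°′ ≈ +16.3 kJ/mol)
ATP + H2O → ADP + Pi (ΔG°′ ≈ −30.5 kJ/mol)
Net: Fructose-6-phosphate + ATP → Fructose-1,6-bisphosphate + ADP (ΔG°′ ≈ −14.2 kJ/mol)

Because the summed ΔG°′ is negative, the coupled reaction proceeds spontaneously in the direction written.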
Cellular respiration
A core set of energy-producing catabolic pathways occurs within all living organisms in some form. These pathways transfer the energy released by breakdown of nutrients into ATP and other small molecules used for energy (e.g. GTP, NADPH, FADH2). All cells can perform anaerobic respiration by glycolysis. Additionally, most organisms can perform more efficient aerobic respiration through the citric acid cycle and oxidative phosphorylation. In addition, plants, algae and cyanobacteria are able to use sunlight to synthesize compounds anabolically from non-living matter by photosynthesis.
Anabolic pathway (anabolism)
In contrast to catabolic pathways, anabolic pathways require an energy input to construct macromolecules such as polypeptides, nucleic acids, proteins, polysaccharides, and lipids. The isolated reaction of anabolism is unfavorable in a cell due to a positive Gibbs free energy (+ΔG). Thus, an input of chemical energy through a coupling with an exergonic reaction is necessary. The coupled reaction of the catabolic pathway affects the thermodynamics of the reaction by lowering the overall activation energy of an anabolic pathway and allowing the reaction to take place. Otherwise, an endergonic reaction is non-spontaneous.
An anabolic pathway is a biosynthetic pathway, meaning that it combines smaller molecules to form larger and more complex ones. An example is the reversed pathway of glycolysis, otherwise known as gluconeogenesis, which occurs in the liver and sometimes in the kidney to maintain proper glucose concentration in the blood and supply the brain and muscle tissues with an adequate amount of glucose. Although gluconeogenesis is similar to the reverse pathway of glycolysis, it uses four enzymes distinct from those of glycolysis (pyruvate carboxylase, phosphoenolpyruvate carboxykinase, fructose 1,6-bisphosphatase, and glucose 6-phosphatase) that allow the pathway to occur spontaneously.
Amphibolic pathway (Amphibolism)
An amphibolic pathway is one that can be either catabolic or anabolic based on the availability of or the need for energy. The currency of energy in a biological cell is adenosine triphosphate (ATP), which stores its energy in the phosphoanhydride bonds. The energy is utilized to conduct biosynthesis, facilitate movement, and regulate active transport inside of the cell. Examples of amphibolic pathways are the citric acid cycle and the glyoxylate cycle. These sets of chemical reactions contain both energy producing and utilizing pathways. To the right is an illustration of the amphibolic properties of the TCA cycle.
The glyoxylate shunt pathway is an alternative to the tricarboxylic acid (TCA) cycle; it redirects the TCA pathway to prevent full oxidation of carbon compounds and to preserve high-energy carbon sources as future energy sources. This pathway occurs only in plants and bacteria and takes place in the absence of glucose.
Regulation
The flux of the entire pathway is regulated by the rate-determining steps. These are the slowest steps in a network of reactions. The rate-limiting step occurs near the beginning of the pathway and is regulated by feedback inhibition, which ultimately controls the overall rate of the pathway. The metabolic pathway in the cell is regulated by covalent or non-covalent modifications. A covalent modification involves an addition or removal of a chemical bond, whereas a non-covalent modification (also known as allosteric regulation) is the binding of the regulator to the enzyme via hydrogen bonds, electrostatic interactions, and Van der Waals forces.
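As a toy illustration of how end-product feedback on an early, rate-limiting step throttles pathway flux, the sketch below (Python) simulates a three-step chain A → B → C → P in which P allosterically inhibits the first enzyme; the substrate level, rate constants and inhibition constants are all arbitrary values invented for the example.

# Toy model only: a three-step pathway A -> B -> C -> P in which the end
# product P inhibits the first (rate-limiting) enzyme. The substrate A is
# held constant, and every rate constant is an arbitrary illustrative value.
def steady_state(ki, steps=200_000, dt=0.01):
    a = 10.0                                  # buffered substrate level (a.u.)
    b = c = p = 0.0
    k1, k2, k3, k_use = 0.5, 2.0, 2.0, 0.1    # arbitrary rate constants
    for _ in range(steps):
        v1 = k1 * a / (1.0 + p / ki)          # first step, inhibited by P
        v2, v3, v_use = k2 * b, k3 * c, k_use * p
        b += dt * (v1 - v2)
        c += dt * (v2 - v3)
        p += dt * (v3 - v_use)
    return v1, p

for ki in (0.1, 1.0, 10.0):                   # smaller ki = stronger inhibition
    flux, product = steady_state(ki)
    print(f"ki={ki:>4}: pathway flux ~{flux:.2f}, end-product level ~{product:.2f}")

Tightening the inhibition (smaller ki) lowers both the steady-state flux and the end-product level, which is the qualitative behaviour of feedback inhibition described above.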
The rate of turnover in a metabolic pathway, also known as the metabolic flux, is regulated based on the stoichiometric reaction model, the utilization rate of metabolites, and the pace at which molecules are translocated across the lipid bilayer. Flux is commonly assessed with ¹³C-labeling experiments, which are analyzed by nuclear magnetic resonance (NMR) or gas chromatography–mass spectrometry (GC–MS) to determine mass compositions. These techniques relate the statistical distribution of label mass among proteinogenic amino acids to the catalytic activities of enzymes in the cell.
Clinical applications in targeting metabolic pathways
Targeting oxidative phosphorylation
Metabolic pathways can be targeted for clinically therapeutic uses. Within the mitochondrial metabolic network, for instance, there are various pathways that can be targeted by compounds to prevent cancer cell proliferation. One such pathway is oxidative phosphorylation (OXPHOS) within the electron transport chain (ETC). Various inhibitors can downregulate the electrochemical reactions that take place at Complex I, II, III, and IV, thereby preventing the formation of an electrochemical gradient and downregulating the movement of electrons through the ETC. The phosphorylation of ADP that occurs at ATP synthase can also be directly inhibited, preventing the formation of ATP that is necessary to supply energy for cancer cell proliferation. Some of these inhibitors, such as lonidamine and atovaquone, which inhibit Complex II and Complex III, respectively, are currently undergoing clinical trials for FDA approval. Other non-FDA-approved inhibitors have still shown experimental success in vitro.
Targeting Heme
Heme, an important prosthetic group present in Complexes II, III, and IV, can also be targeted, since heme biosynthesis and uptake have been correlated with increased cancer progression. Various molecules can inhibit heme via different mechanisms. For instance, succinylacetone has been shown to decrease heme concentrations by inhibiting δ-aminolevulinic acid dehydratase in murine erythroleukemia cells. The primary structure of heme-sequestering peptides, such as HSP1 and HSP2, can be modified to downregulate heme concentrations and reduce proliferation of non-small cell lung cancer cells.
Targeting the tricarboxylic acid cycle and glutaminolysis
The tricarboxylic acid cycle (TCA) and glutaminolysis can also be targeted for cancer treatment, since they are essential for the survival and proliferation of cancer cells. Ivosidenib and enasidenib, two FDA-approved cancer treatments, can arrest the TCA cycle of cancer cells by inhibiting isocitrate dehydrogenase-1 (IDH1) and isocitrate dehydrogenase-2 (IDH2), respectively. Ivosidenib is specific to acute myeloid leukemia (AML) and cholangiocarcinoma, whereas enasidenib is specific to just acute myeloid leukemia (AML).
In a clinical trial consisting of 185 adult patients with cholangiocarcinoma and an IDH-1 mutation, there was a statistically significant improvement (p<0.0001; HR: 0.37) in patients randomized to ivosidenib. Still, some of the adverse side effects in these patients included fatigue, nausea, diarrhea, decreased appetite, ascites, and anemia. In a clinical trial consisting of 199 adult patients with AML and an IDH2 mutation, 23% of patients experienced complete response (CR) or complete response with partial hematologic recovery (CRh) lasting a median of 8.2 months while on enasidenib. Of the 157 patients who required transfusion at the beginning of the trial, 34% no longer required transfusions during the 56-day time period on enasidenib. Of the 42% of patients who did not require transfusions at the beginning of the trial, 76% still did not require a transfusion by the end of the trial. Side effects of enasidenib included nausea, diarrhea, elevated bilirubin and, most notably, differentiation syndrome.
Glutaminase (GLS), the enzyme responsible for converting glutamine to glutamate via hydrolytic deamidation during the first reaction of glutaminolysis, can also be targeted. In recent years, many small molecules, such as azaserine, acivicin, and CB-839 have been shown to inhibit glutaminase, thus reducing cancer cell viability and inducing apoptosis in cancer cells. Due to its effective antitumor ability in several cancer types such as ovarian, breast and lung cancers, CB-839 is the only GLS inhibitor currently undergoing clinical studies for FDA-approval.
Genetic engineering of metabolic pathways
Many metabolic pathways are of commercial interest. For instance, the production of many antibiotics or other drugs requires complex pathways. The pathways to produce such compounds can be transplanted into microbes or other more suitable organisms for production purposes. For example, the world's supply of the anti-cancer drug vinblastine is produced by relatively inefficient extraction and purification of the precursors vindoline and catharanthine from the plant Catharanthus roseus, which are then chemically converted into vinblastine. The biosynthetic pathway to produce vinblastine, including 30 enzymatic steps, has been transferred into yeast cells, which are a convenient system to grow in large amounts. With these genetic modifications, yeast can use its own metabolites geranyl pyrophosphate and tryptophan to produce the precursors of catharanthine and vindoline. This process required 56 genetic edits, including expression of 34 heterologous genes from plants in yeast cells.
See also
KaPPA-View4 (2010)
Metabolism
Metabolic control analysis
Metabolic network
Metabolic network modelling
Metabolic engineering
Biochemical systems equation
Linear biochemical pathway
References
External links
Full map of metabolic pathways
Biochemical pathways, Gerhard Michal
Overview Map from BRENDA
BioCyc: Metabolic network models for thousands of sequenced organisms
KEGG: Kyoto Encyclopedia of Genes and Genomes
Reactome, a database of reactions, pathways and biological processes
MetaCyc: A database of experimentally elucidated metabolic pathways (2,200+ pathways from more than 2,500 organisms)
MetaboMAPS: A platform for pathway sharing and data visualization on metabolic pathways
The Pathway Localization database (PathLocdb)
DAVID: Visualize genes on pathway maps
Wikipathways: pathways for the people
ConsensusPathDB
metpath: Integrated interactive metabolic chart | Metabolic pathway | [
"Chemistry"
] | 3,293 | [
"Metabolic pathways",
"Metabolism"
] |
13,850,157 | https://en.wikipedia.org/wiki/Epoetin%20beta | Epoetin beta (INN), sold under the brand name Neorecormon among others, is a synthetic, recombinant form of erythropoietin, a protein that promotes the production of red blood cells. It is an erythropoiesis-stimulating agent (ESA) that is used to treat anemia, commonly associated with chronic kidney failure and cancer chemotherapy.
It is on the World Health Organization's List of Essential Medicines.
Chemistry
Epoetin beta is a recombinant form of human erythropoietin which is produced in Chinese hamster ovary cells. It has the same protein sequence as natural human erythropoietin, being composed of 165 amino acids, and has a molecular weight of about 30 kDa.
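As a rough consistency check (Python), using the common approximation of about 110 Da per amino-acid residue, the 165-residue polypeptide alone accounts for only about 18 kDa; the remainder of the roughly 30 kDa mass largely reflects the carbohydrate added by glycosylation in the CHO cells. The 110 Da average is an assumption used only for this estimate.

# Rough estimate only; ~110 Da per residue is a common approximation.
residues = 165
avg_residue_mass_da = 110          # Da per residue, approximate
peptide_kda = residues * avg_residue_mass_da / 1000
print(f"Polypeptide alone: ~{peptide_kda:.0f} kDa of the ~30 kDa glycoprotein; "
      "the difference largely reflects glycosylation.")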
History
Erythropoietin (EPO) is a hormone produced in the kidneys. The existence of this hormone has been known since 1906, when scientists first started isolating it, and since the 1980s, a recombinant version of the hormone has been available for use in medical treatment.
See also
Epoetin alfa
Epoetin theta
References
Further reading
External links
Antianemic preparations
Growth factors
Erythropoiesis-stimulating agents
Drugs developed by Hoffmann-La Roche | Epoetin beta | [
"Chemistry"
] | 261 | [
"Growth factors",
"Signal transduction"
] |
13,852,021 | https://en.wikipedia.org/wiki/Asulam | Asulam is a herbicide invented by May & Baker Ltd, internally called M&B9057, that is used in horticulture and agriculture to kill bracken and docks. It is also used as an antiviral agent. It is currently marketed, by United Phosphorus Ltd - UPL, as "Asulox" which contains 400 g/L of asulam sodium salt.
Asulam was declared not approved by Commission Implementing Regulation (EU) No 1045/2011 of 19 October 2011 concerning the non-approval of the active substance asulam. Concerns included: lack of evidence concerning the fate of the toxic metabolite sulfanilamide and other metabolites; the poorly characterised nature of the impurities potentially present in the technical-grade product; and toxicity to birds. This decision was taken in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market, and amends Commission Decision 2008/934/EC.
References
Further reading
Herbicides
Carbamates
Sulfonamides
4-Aminophenyl compounds | Asulam | [
"Chemistry",
"Biology"
] | 231 | [
"Herbicides",
"Organic compounds",
"Biocides",
"Organic compound stubs",
"Organic chemistry stubs"
] |
13,854,154 | https://en.wikipedia.org/wiki/Pt/Co%20scale | The Platinum-Cobalt Scale (Pt/Co scale or Apha-Hazen Scale ) is a color scale that was introduced in 1892 by chemist Allen Hazen (1869–1930). The index was developed as a way to evaluate pollution levels in waste water. It has since expanded to a common method of comparison of the intensity of yellow-tinted samples. It is specific to the color yellow and is based on dilutions of a 500 ppm platinum cobalt solution. The colour produced by one milligram of platinum cobalt dissolved in one liter of water is fixed as one unit of colour in platinum-cobalt scale. The ASTM has detailed description and procedures in ASTM Designation D1209, "Standard Test Method for Color of Clear Liquids (Platinum-Cobalt Scale)".
Colour may be reported on a water quality report using this scale.
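Because the scale is defined by dilutions of a 500 ppm platinum-cobalt stock, preparing a standard of a given colour value is a simple C1·V1 = C2·V2 dilution; the sketch below (Python) assumes a 100 mL final volume purely for illustration.

# Simple dilution arithmetic (C1*V1 = C2*V2) for preparing Pt-Co colour
# standards from a 500 Pt-Co (500 ppm) stock. Volumes are illustrative.
STOCK_PT_CO = 500.0                 # colour value of the platinum-cobalt stock

def stock_volume_ml(target_pt_co, final_volume_ml=100.0):
    """Volume of stock needed to prepare `final_volume_ml` of a standard."""
    return target_pt_co / STOCK_PT_CO * final_volume_ml

for target in (5, 15, 50, 100):
    v = stock_volume_ml(target)
    print(f"{target:>3} Pt-Co standard: {v:.1f} mL stock diluted to 100 mL")

For example, a 50 Pt-Co standard is made by diluting 10 mL of stock to a final volume of 100 mL.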
See also
APHA color
References
Dimensionless numbers of chemistry
Environmental engineering
Water chemistry
Water pollution
Color scales
Platinum
Cobalt | Pt/Co scale | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 195 | [
"Hydrology",
"Chemical engineering",
"Water pollution",
"Hydrology stubs",
"Civil engineering",
"nan",
"Environmental engineering",
"Dimensionless numbers of chemistry"
] |
382,683 | https://en.wikipedia.org/wiki/Landing%20gear | Landing gear is the undercarriage of an aircraft or spacecraft that is used for taxiing, takeoff or landing. For aircraft, it is generally needed for all three of these. It was also formerly called alighting gear by some manufacturers, such as the Glenn L. Martin Company. For aircraft, Stinton makes the terminology distinction undercarriage (British) = landing gear (US).
For aircraft, the landing gear supports the craft when it is not flying, allowing it to take off, land, and taxi without damage. Wheeled landing gear is the most common, with skis or floats needed to operate from snow/ice/water and skids for vertical operation on land. Retractable undercarriages fold away during flight, which reduces drag, allowing for faster airspeeds. Landing gear must be strong enough to support the aircraft and its design affects the weight, balance and performance. It often comprises three wheels, or wheel-sets, giving a tripod effect.
Some unusual landing gear have been evaluated experimentally. These include: no landing gear (to save weight), made possible by operating from a catapult cradle and a flexible landing deck; an air cushion (to enable operation over a wide range of ground obstacles and water/snow/ice); and tracked gear (to reduce runway loading).
For launch vehicles and spacecraft landers, the landing gear usually only supports the vehicle on landing and during subsequent surface movement, and is not used for takeoff.
Given their varied designs and applications, there exist dozens of specialized landing gear manufacturers. The three largest are Safran Landing Systems, Collins Aerospace (part of Raytheon Technologies) and Héroux-Devtek.
Aircraft
The landing gear represents 2.5 to 5% of the maximum takeoff weight (MTOW) and 1.5 to 1.75% of the aircraft cost, but 20% of the airframe direct maintenance cost. A suitably-designed wheel can support , tolerate a ground speed of 300 km/h and roll a distance of ; it has a 20,000 hours time between overhaul and a 60,000 hours or 20 year life time.
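To make the quoted fractions concrete, the sketch below (Python) applies the 2.5 to 5% mass range to a hypothetical wide-body with a 250-tonne MTOW; the aircraft and its MTOW are invented for illustration only.

# Illustrative only: applying the quoted landing-gear mass fraction to a
# hypothetical 250-tonne-MTOW aircraft (the MTOW is an invented example).
mtow_kg = 250_000
gear_mass_low, gear_mass_high = 0.025 * mtow_kg, 0.05 * mtow_kg
print(f"Landing gear mass: roughly {gear_mass_low/1000:.1f} to "
      f"{gear_mass_high/1000:.1f} tonnes of a {mtow_kg/1000:.0f}-tonne MTOW")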
Gear arrangements
Wheeled undercarriages normally come in two types:
Conventional landing gear or "taildragger", where there are two main wheels towards the front of the aircraft and a single, much smaller, wheel or skid at the rear. The same helicopter arrangement is called tricycle tailwheel.
Tricycle landing gear, where there are two main wheels (or wheel assemblies) under the wings and a third smaller wheel in the nose. The PZL.37 Łoś was the first bomber aircraft with twin wheels on a single shock absorber. The same helicopter arrangement is called tricycle nosewheel.
The taildragger arrangement was common during the early propeller era, as it allows more room for propeller clearance. Most modern aircraft have tricycle undercarriages. Taildraggers are considered harder to land and take off (because the arrangement is usually unstable, that is, a small deviation from straight-line travel will tend to increase rather than correct itself), and usually require special pilot training. A small tail wheel or skid/bumper may be added to a tricycle undercarriage to prevent damage to the underside of the fuselage if over-rotation occurs on take-off leading to a tail strike. Aircraft with tail-strike protection include the B-29 Superfortress, Boeing 727 trijet and Concorde. Some aircraft with retractable conventional landing gear have a fixed tailwheel. Hoerner estimated the drag of the Bf 109 fixed tailwheel and compared it with that of other protrusions such as the pilot's canopy.
A third arrangement (known as tandem or bicycle) has the main and nose gear located fore and aft of the center of gravity (CG) under the fuselage with outriggers on the wings. This is used when there is no convenient location on either side of the fuselage to attach the main undercarriage or to store it when retracted. Examples include the Lockheed U-2 spy plane and the Harrier jump jet. The Boeing B-52 uses a similar arrangement, except that the fore and aft gears each have two twin-wheel units side by side.
Quadricycle gear is similar to bicycle but with two sets of wheels displaced laterally in the fore and aft positions. Raymer classifies the B-52 gear as quadricycle. The experimental Fairchild XC-120 Packplane had quadricycle gear located in the engine nacelles to allow unrestricted access beneath the fuselage for attaching a large freight container.
Helicopters use skids, pontoons or wheels depending on their size and role.
Retractable gear
To decrease drag in flight, undercarriages retract into the wings and/or fuselage with wheels flush with the surrounding surface, or concealed behind flush-mounted doors; this is called retractable gear. If the wheels do not retract completely but protrude partially exposed to the airstream, it is called a semi-retractable gear.
Most retractable gear is hydraulically operated, though some is electrically operated or even manually operated on very light aircraft. The landing gear is stowed in a compartment called a wheel well.
Pilots confirming that their landing gear is down and locked refer to "three greens" or "three in the green", a reference to the electrical indicator lights (or painted panels of mechanical indicator units) for the nosewheel/tailwheel and the two main gears. Blinking green or red lights indicate that the gear is in transit and is neither up and locked nor down and locked. When the gear is fully stowed with the up-locks secure, the lights often extinguish to follow the dark-cockpit philosophy; some airplanes have gear-up indicator lights.
Redundant systems are used to operate the landing gear and redundant main gear legs may also be provided so the aircraft can be landed in a satisfactory manner in a range of failure scenarios. The Boeing 747 was given four separate and independent hydraulic systems (when previous airliners had two) and four main landing gear posts (when previous airliners had two). Safe landing would be possible if two main gear legs were torn off provided they were on opposite sides of the fuselage. In the case of power failure in a light aircraft, an emergency extension system is always available. This may be a manually operated crank or pump, or a mechanical free-fall mechanism which disengages the uplocks and allows the landing gear to fall under gravity.
Shock absorbers
Aircraft landing gear includes wheels equipped with solid shock absorbers on light planes, and air/oil oleo struts on larger aircraft.
Large aircraft
As aircraft weights have increased more wheels have been added and runway thickness has increased to keep within the runway loading limit. The Zeppelin-Staaken R.VI, a large German World War I long-range bomber of 1916, used eighteen wheels for its undercarriage, split between two wheels on its nose gear struts, and sixteen wheels on its main gear units—split into four side-by-side quartets each, two quartets of wheels per side—under each tandem engine nacelle, to support its loaded weight of almost .
Multiple "tandem wheels" on an aircraft—particularly for cargo aircraft, mounted to the fuselage lower sides as retractable main gear units on modern designs—were first seen during World War II, on the experimental German Arado Ar 232 cargo aircraft, which used a row of eleven "twinned" fixed wheel sets directly under the fuselage centerline to handle heavier loads while on the ground. Many of today's large cargo aircraft use this arrangement for their retractable main gear setups, usually mounted on the lower corners of the central fuselage structure.
The prototype Convair XB-36 had most of its weight on two main wheels, which needed runways at least thick. Production aircraft used two four-wheel bogies, allowing the aircraft to use any airfield suitable for a B-29.
A relatively light Lockheed JetStar business jet, with four wheels supporting , needed a thick flexible asphalt pavement. The Boeing 727-200 with four tires on two legs main landing gears required a thick pavement. The thickness rose to for a McDonnell Douglas DC-10-10 with supported on eight wheels on two legs. The heavier, , DC-10-30/40 were able to operate from the same thickness pavements with a third main leg for ten wheels, like the first Boeing 747-100, weighing on four legs and 16 wheels. The similar-weight Lockheed C-5, with 24 wheels, needs an pavement.
The twin-wheel unit on the fuselage centerline of the McDonnell Douglas DC-10-30/40 was retained on the MD-11 airliner and the same configuration was used on the initial Airbus A340-200/300, which evolved in a complete four-wheel undercarriage bogie for the heavier Airbus A340-500/-600. The up to Boeing 777 has twelve main wheels on two three-axles bogies, like the later Airbus A350.
The Airbus A380 has a four-wheel bogie under each wing with two sets of six-wheel bogies under the fuselage. The Antonov An-225, the largest cargo aircraft, had 4 wheels on the twin-strut nose gear units like the smaller Antonov An-124, and 28 main gear wheels.
The A321neo has a twin-wheel main gear inflated to 15.7 bar (228 psi), while the A350-900 has a four-wheel main gear inflated to 17.1 bar (248 psi).
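The quoted tire pressures can be cross-checked with the standard conversion 1 bar ≈ 14.5038 psi; the short sketch below (Python) does just that.

# Cross-check of the quoted tire pressures using 1 bar = 14.5038 psi.
PSI_PER_BAR = 14.5038
for name, bar in (("A321neo main gear", 15.7), ("A350-900 main gear", 17.1)):
    print(f"{name}: {bar} bar = {bar * PSI_PER_BAR:.0f} psi")

The results, about 228 psi and 248 psi, match the figures given above.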
STOL aircraft
STOL aircraft have a higher sink-rate requirement if a carrier-type, no-flare landing technique has to be adopted to reduce touchdown scatter. For example, the Saab 37 Viggen, with landing gear designed for a 5 m/s impact, could use a carrier-type landing and HUD to reduce its scatter from 300 m to 100 m.
The de Havilland Canada DHC-4 Caribou used long-stroke legs to land from a steep approach with no float.
Operation from water
A flying boat has a lower fuselage with the shape of a boat hull giving it buoyancy. Wing-mounted floats or stubby wing-like sponsons are added for stability. Sponsons are attached to the lower sides of the fuselage.
A floatplane has two or three streamlined floats. Amphibious floats have retractable wheels for land operation.
An amphibious aircraft or amphibian usually has two distinct landing gears, namely a "boat" hull/floats and retractable wheels, which allow it to operate from land or water.
Beaching gear is detachable wheeled landing gear that allows a non-amphibious floatplane or flying boat to be maneuvered on land. It is used for aircraft maintenance and storage and is either carried in the aircraft or kept at a slipway. Beaching gear may consist of individual detachable wheels or a cradle that supports the entire aircraft. In the former case, the beaching gear is manually attached or detached with the aircraft in the water; in the latter case, the aircraft is maneuvered onto the cradle.
Helicopters are able to land on water using floats or a hull and floats.
For take-off a step and planing bottom are required to lift from the floating position to planing on the surface. For landing a cleaving action is required to reduce the impact with the surface of the water. A vee bottom parts the water and chines deflect the spray to prevent it damaging vulnerable parts of the aircraft. Additional spray control may be needed using spray strips or inverted gutters. A step is added to the hull, just behind the center of gravity, to stop water clinging to the afterbody so the aircraft can accelerate to flying speed. The step allows air, known as ventilation air, to break the water suction on the afterbody. Two steps were used on the Kawanishi H8K. A step increases the drag in flight. The drag contribution from the step can be reduced with a fairing. A faired step was introduced on the Short Sunderland III.
One goal of seaplane designers was the development of an open ocean seaplane capable of routine operation from very rough water. This led to changes in seaplane hull configuration. High length/beam ratio hulls and extended afterbodies improved rough water capabilities. A hull much longer than its width also reduced drag in flight. An experimental development of the Martin Marlin, the Martin M-270, was tested with a new hull with a greater length/beam ratio of 15 obtained by adding 6 feet to both the nose and tail. Rough-sea capability can be improved with lower take-off and landing speeds because impacts with waves are reduced. The Shin Meiwa US-1A is a STOL amphibian with blown flaps and all control surfaces. The ability to land and take-off at relatively low speeds of about 45 knots and the hydrodynamic features of the hull, long length/beam ratio and inverted spray gutter for example, allow operation in wave heights of 15 feet. The inverted gutters channel spray to the rear of the propeller discs.
Low-speed maneuvering is necessary between slipways, buoys, and take-off and landing areas. Water rudders are used on seaplanes ranging in size from the Republic RC-3 Seabee to the Beriev A-40. Hydroflaps were used on the Martin Marlin and Martin SeaMaster. Hydroflaps, submerged at the rear of the afterbody, act as a speed brake or, operated differentially, as a rudder. A fixed fin, known as a skeg, has been used for directional stability. A skeg was added to the second step on the Kawanishi H8K flying boat hull.
High speed impacts in rough water between the hull and wave flanks may be reduced using hydro-skis which hold the hull out of the water at higher speeds. Hydro skis replace the need for a boat hull and only require a plain fuselage which planes at the rear. Alternatively skis with wheels can be used for land-based aircraft which start and end their flight from a beach or floating barge. Hydro-skis with wheels were demonstrated as an all-purpose landing gear conversion of the Fairchild C-123, known as the Panto-base Stroukoff YC-134. A seaplane designed from the outset with hydro-skis was the Convair F2Y Sea Dart prototype fighter. The skis incorporated small wheels, with a third wheel on the fuselage, for ground handling.
In the 1950s hydro-skis were envisaged as a ditching aid for large piston-engined aircraft. Water-tank tests done using models of the Lockheed Constellation, Douglas DC-4 and Lockheed Neptune concluded that chances of survival and rescue would be greatly enhanced by preventing critical damage associated with ditching.
Shipboard operation
The landing gear on fixed-wing aircraft that land on aircraft carriers have a higher sink-rate requirement because the aircraft are flown onto the deck with no landing flare. Other features are related to catapult take-off requirements for specific aircraft. For example, the Blackburn Buccaneer was pulled down onto its tail-skid to set the required nose-up attitude. The naval McDonnell Douglas F-4 Phantom II in UK service needed an extending nosewheel leg to set the wing attitude at launch.
The landing gear for an aircraft using a ski-jump on take-off is subjected to loads of 0.5g which also last for much longer than a landing impact.
Helicopters may have a deck-lock harpoon to anchor them to the deck.
In-flight use
Some aircraft have a requirement to use the landing-gear as a speed brake.
Flexible mounting of the stowed main landing-gear bogies on the Tupolev Tu-22R raised the aircraft flutter speed to . The bogies oscillated within the nacelle under the control of dampers and springs as an anti-flutter device.
Gear common to different aircraft
Some experimental aircraft have used gear from existing aircraft to reduce program costs. The Martin-Marietta X-24 lifting body used the nose/main gear from the North American T-39 / Northrop T-38 and the Grumman X-29 from the Northrop F-5 / General Dynamics F-16.
Other types
Skids
Skids have been used on aircraft landing gear. The North American X-15 used skids as its rear landing gear, and the Rockwell HiMAT used them in testing.
When an airplane needs to land on surfaces covered by snow, the landing gear usually consists of skis or a combination of wheels and skis.
Detachable
Some aircraft use wheels for takeoff and jettison them when airborne for improved streamlining without the complexity, weight and space requirements of a retraction mechanism. The wheels are sometimes mounted onto axles that are part of a separate "dolly" (for main wheels only) or "trolley" (for a three-wheel set with a nosewheel) chassis. Landing is done on skids or similar simple devices (fixed or retractable). The SNCASE Baroudeur used this arrangement.
Historical examples include the "dolly"-using Messerschmitt Me 163 Komet rocket fighter, the Messerschmitt Me 321 Gigant troop glider, and the first eight "trolley"-using prototypes of the Arado Ar 234 jet reconnaissance bomber. The main disadvantage to using the takeoff dolly/trolley and landing skid(s) system on German World War II aircraft—intended for a sizable number of late-war German jet and rocket-powered military aircraft designs—was that aircraft would likely be scattered all over a military airfield after they had landed from a mission, and would be unable to taxi on their own to an appropriately hidden "dispersal" location, which could easily leave them vulnerable to being shot up by attacking Allied fighters. A related contemporary example are the wingtip support wheels ("pogos") on the Lockheed U-2 reconnaissance aircraft, which fall away after take-off and drop to earth; the aircraft then relies on titanium skids on the wingtips for landing.
Rearwards and sideways retraction
Some main landing gear struts on World War II aircraft, in order to allow a single-leg main gear to more efficiently store the wheel within either the wing or an engine nacelle, rotated the single gear strut through a 90° angle during the rearwards-retraction sequence to allow the main wheel to rest "flat" above the lower end of the main gear strut, or flush within the wing or engine nacelles, when fully retracted. Examples are the Curtiss P-40, Vought F4U Corsair, Grumman F6F Hellcat, Messerschmitt Me 210 and Junkers Ju 88. The Aero Commander family of twin-engined business aircraft also shares this feature on the main gears, which retract aft into the ends of the engine nacelles. The rearward-retracting nosewheel strut on the Heinkel He 219 and the forward-retracting nose gear strut on the later Cessna Skymaster similarly rotated 90 degrees as they retracted.
On most World War II single-engined fighter aircraft (and even one German heavy bomber design) with sideways-retracting main gear, the main gear that retracted into the wings was raked forward in the "down" position for better ground handling, with a retracted position that placed the main wheels at some distance aft of their position when down. This led to a complex angular geometry for setting up the "pintle" angles at the top ends of the struts for the retraction mechanism's axis of rotation, with some aircraft, like the P-47 Thunderbolt and Grumman Bearcat, even requiring that the main gear struts lengthen as they were extended to give sufficient ground clearance for their large four-bladed propellers. One exception to the need for this complexity in many WW II fighter aircraft was Japan's famous Zero fighter, whose main gear stayed at a perpendicular angle to the centerline of the aircraft when extended, as seen from the side.
Variable axial position of main wheels
The main wheels on the Vought F7U Cutlass could move 20 inches between a forward and aft position. The forward position was used for take-off to give a longer lever-arm for pitch control and greater nose-up attitude. The aft position was used to reduce landing bounce and reduce risk of tip-back during ground handling.
Tandem layout
The tandem or bicycle layout is used on the Hawker Siddeley Harrier, which has two main-wheels behind a single nose-wheel under the fuselage and a smaller wheel near the tip of each wing. On second generation Harriers, the wing is extended past the outrigger wheels to allow greater wing-mounted munition loads to be carried, or to permit wing-tip extensions to be bolted on for ferry flights.
A tandem layout was evaluated by Martin using a specially-modified Martin B-26 Marauder (the XB-26H) to evaluate its use on Martin's first jet bomber, the Martin XB-48. This configuration proved so manoeuvrable that it was also selected for the B-47 Stratojet. It was also used on the U-2, Myasishchev M-4, Yakovlev Yak-25, Yak-28 and Sud Aviation Vautour. A variation of the multi tandem layout is also used on the B-52 Stratofortress which has four main wheel bogies (two forward and two aft) underneath the fuselage and a small outrigger wheel supporting each wing-tip. The B-52's landing gear is also unique in that all four pairs of main wheels can be steered. This allows the landing gear to line up with the runway and thus makes crosswind landings easier (using a technique called crab landing). Since tandem aircraft cannot rotate for takeoff, the forward gear must be long enough to give the wings the correct angle of attack during takeoff. During landing, the forward gear must not touch the runway first, otherwise the rear gear will slam down and may cause the aircraft to bounce and become airborne again.
Crosswind landing accommodation
One very early undercarriage incorporating castoring for crosswind landings was pioneered on the Bleriot VIII design of 1908. It was later used in the much more famous Blériot XI Channel-crossing aircraft of 1909 and also copied in the earliest examples of the Etrich Taube. In this arrangement the main landing gear's shock absorption was taken up by a vertically sliding bungee cord-sprung upper member. The vertical post along which the upper member slid to take landing shocks also had its lower end as the rotation point for the forward end of the main wheel's suspension fork, allowing the main gear to pivot on moderate crosswind landings.
Manually adjusted main-gear units on the B-52 can be set for crosswind take-offs. This capability is rarely needed at SAC-designated airfields, which have major runways aligned with the prevailing wind direction. The Lockheed C-5 Galaxy has swivelling 6-wheel main units for crosswind landings and castoring rear units to prevent tire scrubbing on tight turns.
"Kneeling" gear
Both the nosegear and the wing-mounted main landing gear of the World War II German Arado Ar 232 cargo/transport aircraft were designed to kneel. This made it easier to load and unload cargo, and improved taxiing over ditches and on soft ground.
Some early U.S. Navy jet fighters were equipped with "kneeling" nose gear consisting of small steerable auxiliary wheels on short struts located forward of the primary nose gear, allowing the aircraft to be taxied tail-high with the primary nose gear retracted. This feature was intended to enhance safety aboard aircraft carriers by redirecting the hot exhaust blast upwards, and to reduce hangar space requirements by enabling the aircraft to park with its nose underneath the tail of a similarly equipped jet. Kneeling gear was used on the North American FJ-1 Fury and on early versions of the McDonnell F2H Banshee, but was found to be of little use operationally, and was omitted from later Navy fighters.
The nosewheel on the Lockheed C-5 partially retracts against a bumper to assist in loading and unloading of cargo using ramps through the forward, "tilt-up" hinged fuselage nose while stationary on the ground. The aircraft also tilts backwards. The Messier twin-wheel main units fitted to the Transall and other cargo aircraft can tilt forward or backward as necessary.
The Boeing AH-64 Apache helicopter is able to kneel to fit inside the cargo hold of a transport aircraft and for storage.
Tail support
Aircraft landing gear includes devices to prevent fuselage contact with the ground by tipping back when the aircraft is being loaded. Some commercial aircraft have used tail props when parked at the gate. The Douglas C-54 had a critical CG location which required a ground handling strut. The Lockheed C-130 and Boeing C-17 Globemaster III use ramp supports.
The unladen CG of the rear-engined Ilyushin IL-62 is aft of the main gear due to design decisions stemming from efforts to reduce overall weight, systems complexity and drag; to prevent the fuselage from tilting back when unloaded, the aircraft has a unique fully retractable vertical tail strut with castering wheels to allow towing or pushback. The strut is not intended for taxiing or flight, when the weight of the crew, passengers, cargo and fuel provide the necessary fore-aft balance.
Monowheel
To minimize drag, modern gliders usually have a single wheel, retractable or fixed, centered under the fuselage, which is referred to as monowheel gear or monowheel landing gear. Monowheel gear is also used on some powered aircraft where drag reduction is a priority, such as the Europa Classic. Much like the Me 163 rocket fighter, some gliders from before the Second World War used a take-off dolly that was jettisoned on take-off; these gliders then landed on a fixed skid. This configuration is necessarily accompanied by a taildragger arrangement.
Helicopters
Light helicopters use simple landing skids to save weight and cost. The skids may have attachment points for wheels so that they can be moved for short distances on the ground. Skids are impractical for helicopters weighing more than four tons. Some high-speed machines have retractable wheels, but most use fixed wheels for their robustness, and to avoid the need for a retraction mechanism.
Tailsitter
Experimental tailsitter aircraft use landing gear located in their tails for VTOL operation.
Light aircraft
For light aircraft a type of landing gear which is economical to produce is a simple wooden arch laminated from ash, as used on some homebuilt aircraft. A similar arched gear is often formed from spring steel. The Cessna Airmaster was among the first aircraft to use spring steel landing gear. The main advantage of such gear is that no other shock-absorbing device is needed; the deflecting leaf provides the shock absorption.
Folding gear
The limited space available to stow landing gear has led to many complex retraction mechanisms, each unique to a particular aircraft. An early example, the German Bomber B combat aircraft design competition winner, the Junkers Ju 288, had a complex "folding" main landing gear unlike that of any other aircraft designed by either the Axis or Allied sides in the war: its single oleo strut, carrying the twinned main wheels, was attached only to the lower end of its Y-form main retraction struts and swivelled downwards and aftwards during retraction, "folding" to shorten the gear for stowage in the engine nacelle that housed it. However, the single pivot-point design also led to numerous collapses of the main gear units on the prototype airframes.
Tracked
Increased contact area can be obtained with very large wheels, many smaller wheels or track-type gear. Tracked gear made by Dowty was fitted to a Westland Lysander in 1938 for taxi tests, then a Fairchild Cornell and a Douglas Boston. Bonmartini, in Italy, fitted tracked gear to a Piper Cub in 1951. Track-type gear was also tested using a C-47, C-82 and B-50. A much heavier aircraft, an XB-36, was made available for further tests, although there was no intention of using it on production aircraft. The stress on the runway was reduced to one third that of the B-36 four-wheel bogie.
Ground carriage
Ground carriage is a long-term (after 2030) concept of flying without landing gear. It is one of many aviation technologies being proposed to reduce greenhouse gas emissions. Leaving the landing gear on the ground reduces weight and drag. Jettisoning the gear after take-off was done for a different reason, namely military performance, during World War II, with the "dolly" and "trolley" arrangements of the German Me 163B rocket fighter and the Arado Ar 234A prototype jet reconnaissance bomber.
Steering
There are several types of steering. Taildragger aircraft may be steered by rudder alone (depending upon the prop wash produced by the aircraft to turn it) with a freely pivoting tail wheel, or by a steering linkage with the tail wheel, or by differential braking (the use of independent brakes on opposite sides of the aircraft to turn the aircraft by slowing one side more sharply than the other). Aircraft with tricycle landing gear usually have a steering linkage with the nosewheel (especially in large aircraft), but some allow the nosewheel to pivot freely and use differential braking and/or the rudder to steer the aircraft, like the Cirrus SR22.
Some aircraft require that the pilot steer by using rudder pedals; others allow steering with the yoke or control stick. Some allow both. Still others have a separate control, called a tiller, used for steering on the ground exclusively.
Rudder
When an aircraft is steered on the ground exclusively using the rudder, it needs a substantial airflow past the rudder, which can be generated either by the forward motion of the aircraft or by propeller slipstream. Rudder steering requires considerable practice to use effectively. Although it needs airflow past the rudder, it has the advantage of not needing any friction with the ground, which makes it useful for aircraft on water, snow or ice.
Direct
Some aircraft link the yoke, control stick, or rudder directly to the wheel used for steering. Manipulating these controls turns the steering wheel (the nose wheel for tricycle landing gear, and the tail wheel for taildraggers). The connection may be a firm one in which any movement of the controls turns the steering wheel (and vice versa), or it may be a soft one in which a spring-like mechanism twists the steering wheel but does not force it to turn. The former provides positive steering but makes it easier to skid the steering wheel; the latter provides softer steering (making it easy to overcontrol) but reduces the probability of skidding. Aircraft with retractable gear may disable the steering mechanism wholly or partially when the gear is retracted.
Differential braking
Differential braking depends on asymmetric application of the brakes on the main gear wheels to turn the aircraft. For this, the aircraft must be equipped with separate controls for the right and left brakes (usually on the rudder pedals). The nose or tail wheel usually is not equipped with brakes. Differential braking requires considerable skill. In aircraft with several methods of steering that include differential braking, differential braking may be avoided because of the wear it puts on the braking mechanisms. Differential braking has the advantage of being largely independent of any movement or skidding of the nose or tailwheel.
Tiller
A tiller in an aircraft is a small wheel or lever, sometimes accessible to one pilot and sometimes duplicated for both pilots, that controls the steering of the aircraft while it is on the ground. The tiller may be designed to work in combination with other controls such as the rudder or yoke. In large airliners, for example, the tiller is often used as the sole means of steering during taxi, and then the rudder is used to steer during takeoff and landing, so that both aerodynamic control surfaces and the landing gear can be controlled simultaneously when the aircraft is moving at aerodynamic speeds.
Tires and wheels
The specified selection criteria, e.g., minimum size, weight, or pressure, are used to select suitable tires and wheels from manufacturers' catalogs and from industry standards found in the Aircraft Yearbook published by the Tire and Rim Association, Inc.
Gear loading
The choice of the main wheel tires is made on the basis of the static loading case. The total main gear load is calculated assuming that the aircraft is taxiing at low speed without braking; it is determined by the weight of the aircraft and by the distances measured from the aircraft's center of gravity (cg) to the main and nose gear, respectively.
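A minimal worked version of this static case, in assumed notation (W for the aircraft weight, l_m and l_n for the cg-to-main-gear and cg-to-nose-gear distances; these symbols are not taken from the cited source), follows from a moment balance about each ground contact point:

\[ F_{\text{main}} = W\,\frac{l_n}{l_m + l_n}, \qquad F_{\text{nose}} = W\,\frac{l_m}{l_m + l_n}. \]

Since the main gear normally sits close behind the cg (small l_m), it carries most of the weight.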
The choice of the nose wheel tires is based on the nose wheel load during braking at maximum effort. The quantities involved are the lift, the drag, the thrust, the braking friction coefficient and the height of the aircraft cg above the static groundline. Typical values of the braking friction coefficient on dry concrete vary from 0.35 for a simple brake system to 0.45 for an automatic brake pressure control system. As the terms involved are positive, the maximum nose gear load occurs at low speed. Reverse thrust decreases the nose gear load, and hence the zero-thrust condition gives the maximum value.
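A minimal sketch of the braking case, assuming the lift, drag and thrust terms are neglected and the only additional effect is the load transfer from a braking friction force of roughly \mu_B W acting at ground level, with h the cg height (assumed notation, as above):

\[ F_{\text{nose}} \approx W\,\frac{l_m + \mu_B h}{l_m + l_n}. \]

The \mu_B h term represents the weight transferred forward by braking, which is why the nose gear sees its highest load during a maximum-effort stop.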
To ensure that the rated loads will not be exceeded in the static and braking conditions, a seven percent safety factor is used in the calculation of the applied loads.
Inflation pressure
Provided that the wheel load and configuration of the landing gear remain unchanged, the weight and volume of the tire will decrease with an increase in inflation pressure. From the flotation standpoint, a decrease in the tire contact area will induce a higher bearing stress on the pavement which may reduce the number of airfields available to the aircraft. Braking will also become less effective due to a reduction in the frictional force between the tires and the ground. In addition, the decrease in the size of the tire, and hence the size of the wheel, could pose a problem if internal brakes are to be fitted inside the wheel rims. The arguments against higher pressure are of such a nature that commercial operators generally prefer the lower pressures in order to maximize tire life and minimize runway stress. To prevent punctures from stones Philippine Airlines had to operate their Hawker Siddeley 748 aircraft with pressures as low as the tire manufacturer would permit. However, too low a pressure can lead to an accident as in the Nigeria Airways Flight 2120.
A rough general rule for required tire pressure is given by the manufacturer in their catalog. Goodyear, for example, advises the pressure to be 4% higher than that required for a given weight, or expressed as a fraction of the rated static load and inflation pressure.
Tires of many commercial aircraft are required to be filled with nitrogen, and not subsequently diluted with more than 5% oxygen, to prevent auto-ignition of the gas which may result from overheating brakes producing volatile vapors from the tire lining.
Naval aircraft use different pressures when operating from a carrier and ashore. For example, the Northrop Grumman E-2 Hawkeye uses different tire pressures on ship and ashore. En-route deflation is used in the Lockheed C-5 Galaxy to suit airfield conditions at the destination, but it adds excessive complication to the landing gear and wheels.
Future developments
Airport community noise is an environmental issue which has brought into focus the contribution of aerodynamic noise from the landing gear. A NASA long-term goal is to confine objectionable aircraft noise to within the airport boundary. During the approach to land, the landing gear is lowered several miles before touchdown, and it is then the dominant airframe noise source, followed by deployed high-lift devices. With engines at a reduced power setting on the approach, airframe noise must be reduced to make a significant reduction in total aircraft noise. Add-on fairings are one approach to reducing landing-gear noise; a longer-term approach is to address noise generation during initial design.
Airline specifications require an airliner to reach up to 90,000 take-offs and landings and roll 500,000 km on the ground in its lifetime. Conventional landing gear is designed to absorb the energy of a landing and does not perform well at reducing ground-induced vibrations in the airframe during landing ground roll, taxi and take-off. Airframe vibrations and fatigue damage can be reduced using semi-active oleos which vary damping over a wide range of ground speeds and runway quality.
Accidents
Malfunctions or human errors (or a combination of these) related to retractable landing gear have been the cause of numerous accidents and incidents throughout aviation history. Distraction and preoccupation during the landing sequence played a prominent role in the approximately 100 gear-up landing incidents that occurred each year in the United States between 1998 and 2003. A gear-up landing, also known as a belly landing, is an accident that results from the pilot forgetting to lower the landing gear, or being unable to do so because of a malfunction. Although rarely fatal, a gear-up landing can be very expensive if it causes extensive airframe/engine damage. For propeller-driven aircraft a prop strike may require an engine overhaul.
Some aircraft have a stiffened fuselage underside or added features to minimize structural damage in a wheels-up landing. When the Cessna Skymaster was converted for a military spotting role (the O-2 Skymaster), fiberglass railings were added to the length of the fuselage; they were adequate to support the aircraft without damage if it was landed on a grassy surface.
The Bombardier Dash 8 is notorious for its landing gear problems. There were three incidents, all involving Scandinavian Airlines flights SK1209, SK2478, and SK2867, which led to Scandinavian retiring all of its Dash 8s. The cause of these incidents was a locking mechanism that failed to work properly. This also raised concern for many other airlines operating the type, and Bombardier Aerospace ordered all Dash 8s with 10,000 or more flight hours to be grounded. It was soon found that 19 Horizon Airlines Dash 8s and 8 Austrian Airlines aircraft had locking mechanism problems, which caused several hundred flights to be cancelled.
On September 21, 2005, JetBlue Airways Flight 292 successfully landed with its nose gear turned 90 degrees sideways, resulting in a shower of sparks and flame after touchdown.
On November 1, 2011, LOT Polish Airlines Flight LO16 successfully belly landed at Warsaw Chopin Airport due to technical failures; all 231 people on board escaped without injury.
Emergency extension systems
In the event of a failure of the aircraft's landing gear extension mechanism a backup is provided. This may be an alternate hydraulic system, a hand-crank, compressed air (nitrogen), pyrotechnic or free-fall system.
A free-fall or gravity drop system uses gravity to deploy the landing gear into the down and locked position. To accomplish this the pilot activates a switch or mechanical handle in the cockpit, which releases the up-lock. Gravity then pulls the landing gear down and deploys it. Once in position the landing gear is mechanically locked and safe to use for landing.
Ground resonance in rotorcraft
Rotorcraft with fully articulated rotors may experience a dangerous and self-perpetuating phenomenon known as ground resonance, in which the unbalanced rotor system vibrates at a frequency coinciding with the natural frequency of the airframe, causing the entire aircraft to violently shake or wobble in contact with the ground. Ground resonance occurs when shock is continuously transmitted to the turning rotors through the landing gear, causing the angles between the rotor blades to become uneven; this is typically triggered if the aircraft touches the ground with forward or lateral motion, or touches down on one corner of the landing gear due to sloping ground or the craft's flight attitude. The resulting violent oscillations may cause the rotors or other parts to catastrophically fail, detach, and/or strike other parts of the airframe; this can destroy the aircraft in seconds and critically endanger persons unless the pilot immediately initiates a takeoff or closes the throttle and reduces rotor pitch. Ground resonance was cited in 34 National Transportation Safety Board incident and accident reports in the United States between 1990 and 2008.
Rotorcraft with fully articulated rotors typically have shock-absorbing landing gear designed to prevent ground resonance; however, poor landing gear maintenance and improperly inflated tires may contribute to the phenomenon. Helicopters with skid-type landing gear are less prone to ground resonance than those with wheels.
Stowaways
Unauthorized passengers have been known to stow away on larger aircraft by climbing a landing gear strut and riding in the compartment meant for the wheels. There are extreme dangers to this practice, with numerous deaths reported. Dangers include a lack of oxygen at high altitude, temperatures well below freezing, crush injury or death from the gear retracting into its confined space, and falling out of the compartment during takeoff or landing.
Spacecraft
Launch vehicles
Landing gear has traditionally not been used on the vast majority of launch vehicles, which take off vertically and are destroyed on falling back to earth. With some exceptions for suborbital vertical-landing vehicles (e.g., the Masten Xoie or Armadillo Aerospace's Lunar Lander Challenge vehicle), or for spaceplanes that use the vertical takeoff, horizontal landing (VTHL) approach (e.g., the Space Shuttle orbiter, or the USAF X-37), landing gear was largely absent from orbital vehicles during the early decades of spaceflight, when orbital space transport was the exclusive preserve of national-monopoly governmental space programs. Every spaceflight system through 2015 relied on expendable boosters to begin each ascent to orbital velocity.
Advances during the 2010s in private space transport, where new competition to governmental space initiatives has emerged, have included the explicit design of landing gear into orbital booster rockets. SpaceX has initiated and funded a multimillion-dollar reusable launch system development program to pursue this objective. As part of this program, SpaceX built, and flew eight times in 2012–2013, a first-generation test vehicle called Grasshopper with a large fixed landing gear in order to test low-altitude vehicle dynamics and control for vertical landings of a near-empty orbital first stage. A second-generation test vehicle called F9R Dev1 was built with extensible landing gear. The prototype was flown four times—with all landing attempts successful—in 2014 for low-altitude tests before being self-destructed for safety reasons on a fifth test flight due to a blocked engine sensor port.
The orbital-flight versions of the test vehicles, Falcon 9 and Falcon Heavy, include a lightweight, deployable landing gear for the booster stage: a nested, telescoping piston on an A-frame. The total span of the four carbon fiber/aluminum extensible landing legs is approximately , and they weigh less than ; the deployment system uses high-pressure helium as the working fluid.
The first test of the extensible landing gear was successfully accomplished in April 2014 on a Falcon 9 returning from an orbital launch and was the first successful controlled ocean soft touchdown of a liquid-rocket-engine orbital booster. After a single successful booster recovery in 2015, and several in 2016, the recovery of SpaceX booster stages became routine by 2017. Landing legs had become an ordinary operational part of orbital spaceflight launch vehicles.
The newest launch vehicle under development at SpaceX—the Starship—is expected to have landing legs on its first stage called Super Heavy like Falcon 9 but also has landing legs on its reusable second stage, a first for launch vehicle second stages. The first prototype of Starship—Starhopper, built in early 2019—had three fixed landing legs with replaceable shock absorbers. In order to reduce mass of the flight vehicle and the payload penalty for a reusable design, the long-term plan is for Super Heavy to land directly back at the launch site on special ground equipment that is part of the launch mount.
Landers
Spacecraft designed to land safely on extraterrestrial bodies such as the Moon or Mars are known as either legged landers (for example the Apollo Lunar Module) or pod landers (for example Mars Pathfinder), depending on their landing gear. Pod landers are designed to land in any orientation, after which they may bounce and roll before coming to rest, at which time they have to be given the correct orientation to function. The whole vehicle is enclosed in crushable material or airbags for the impacts and may have opening petals to right it.
Features for landing and movement on the surface were combined in the landing gear for the Mars Science Laboratory.
For landing on low-gravity bodies landing gear may include hold-down thrusters, harpoon anchors and foot-pad screws, all of which were incorporated in the design of comet-lander Philae for redundancy.
In the case of Philae, however, both harpoons and the hold-down thruster failed, resulting in the craft bouncing before landing for good at a non-optimal orientation.
See also
Dayton-Wright RB-1 Racer, an early example of an airplane with retractable landing gear.
Landing gear extender
Tundra tire, a low-pressure landing gear tire allowing landings on rough surfaces
Undercarriage arrangements of jetliners and other aircraft.
Verville Racer Aircraft, an early example of an airplane with retractable landing gear.
References
External links
Aircraft undercarriage
Articles containing video clips
Aircraft systems | Landing gear | [
"Engineering"
] | 9,415 | [
"Systems engineering",
"Aircraft systems"
] |
382,771 | https://en.wikipedia.org/wiki/Seifert%E2%80%93Van%20Kampen%20theorem | In mathematics, the Seifert–Van Kampen theorem of algebraic topology (named after Herbert Seifert and Egbert van Kampen), sometimes just called Van Kampen's theorem, expresses the structure of the fundamental group of a topological space X in terms of the fundamental groups of two open, path-connected subspaces that cover X. It can therefore be used for computations of the fundamental group of spaces that are constructed out of simpler ones.
Van Kampen's theorem for fundamental groups
Let X be a topological space which is the union of two open and path connected subspaces U1, U2. Suppose U1 ∩ U2 is path connected and nonempty, and let x0 be a point in U1 ∩ U2 that will be used as the base of all fundamental groups. The inclusion maps of U1 and U2 into X induce group homomorphisms between the corresponding fundamental groups. Then X is path connected, and these induced homomorphisms form a commutative pushout diagram:
The natural morphism k is an isomorphism. That is, the fundamental group of X is the free product of the fundamental groups of U1 and U2 with amalgamation over the fundamental group of U1 ∩ U2.
Usually the morphisms induced by inclusion in this theorem are not themselves injective, and the more precise version of the statement is in terms of pushouts of groups.
Van Kampen's theorem for fundamental groupoids
Unfortunately, the theorem as given above does not compute the fundamental group of the circle – which is the most important basic example in algebraic topology – because the circle cannot be realised as the union of two open sets with connected intersection. This problem can be resolved by working with the fundamental groupoid on a set A of base points, chosen according to the geometry of the situation. Thus for the circle, one uses two base points.
This groupoid consists of homotopy classes relative to the end points of paths in X joining points of A ∩ X. In particular, if X is a contractible space, and A consists of two distinct points of X, then is easily seen to be isomorphic to the groupoid often written with two vertices and exactly one morphism between any two vertices. This groupoid plays a role in the theory of groupoids analogous to that of the group of integers in the theory of groups. The groupoid also allows for groupoids a notion of homotopy: it is a unit interval object in the category of groupoids.
The category of groupoids admits all colimits, and in particular all pushouts.
Theorem. Let the topological space X be covered by the interiors of two subspaces X1, X2 and let A be a set which meets each path component of X1, X2 and X0 = X1 ∩ X2. Then A meets each path component of X and the diagram P of morphisms induced by inclusion
is a pushout diagram in the category of groupoids.
This theorem gives the transition from topology to algebra, in determining completely the fundamental groupoid ; one then has to use algebra and combinatorics to determine a fundamental group at some basepoint.
One interpretation of the theorem is that it computes homotopy 1-types. To see its utility, one can easily find cases where X is connected but is the union of the interiors of two subspaces, each with say 402 path components and whose intersection has say 1004 path components. The interpretation of this theorem as a calculational tool for "fundamental groups" needs some development of 'combinatorial groupoid theory'. This theorem implies the calculation of the fundamental group of the circle as the group of integers, since the group of integers is obtained from the groupoid by identifying, in the category of groupoids, its two vertices.
There is a version of the last theorem when X is covered by the union of the interiors of a family of subsets.
The conclusion is that if A meets each path component of all 1,2,3-fold intersections of the sets , then A meets all path components of X and the diagram
of morphisms induced by inclusions is a coequaliser in the category of groupoids.
Equivalent formulations
In the language of combinatorial group theory, if X is a topological space; U1 and U2 are open, path connected subspaces of X; U1 ∩ U2 is nonempty and path-connected; and X = U1 ∪ U2; then the fundamental group of X is the free product with amalgamation of the fundamental groups of U1 and U2, with respect to the (not necessarily injective) homomorphisms induced by the inclusions of U1 ∩ U2 into U1 and U2. In terms of group presentations of these fundamental groups, the amalgamation can be written down explicitly.
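A sketch of the standard presentation, in assumed notation: if the fundamental groups of U1 and U2 are presented as ⟨u | α⟩ and ⟨v | β⟩, and w runs over a set of generators of the fundamental group of U1 ∩ U2, with I1 and I2 the two homomorphisms induced by inclusion, then

\[ \pi_1(X, x_0) \;\cong\; \big\langle\, u, v \;\big|\; \alpha,\ \beta,\ I_1(w)\,I_2(w)^{-1} \ \text{for every generator } w \,\big\rangle. \]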
In category theory, the fundamental group of X is the pushout, in the category of groups, of the diagram formed by the fundamental groups of U1 ∩ U2, U1 and U2, together with the homomorphisms induced by inclusion.
Examples
2-sphere
One can use Van Kampen's theorem to calculate fundamental groups for topological spaces that can be decomposed into simpler spaces. For example, consider the 2-sphere. Pick open sets A and B to be the complements in the sphere of the north pole n and the south pole s respectively. Then we have the property that A, B and A ∩ B are open path connected sets. Thus we can see that there is a commutative diagram including A ∩ B into A and B and then another inclusion from A and B into the sphere, and that there is a corresponding diagram of homomorphisms between the fundamental groups of each subspace. Applying Van Kampen's theorem then expresses the fundamental group of the sphere as the free product of the fundamental groups of A and B, amalgamated over the fundamental group of A ∩ B.
However, A and B are both homeomorphic to R2, which is simply connected, so both A and B have trivial fundamental groups. It is clear from this that the fundamental group of the 2-sphere is trivial.
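Written out (basepoints suppressed, notation as in the theorem above), the computation runs:

\[ \pi_1(S^2) \;\cong\; \pi_1(A) *_{\pi_1(A \cap B)} \pi_1(B) \;\cong\; \{1\} *_{\mathbb{Z}} \{1\} \;\cong\; \{1\}, \]

since A ∩ B is an annulus, whose fundamental group is the integers, and the amalgamated free product of two trivial groups is trivial.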
Wedge sum of spaces
Given two pointed spaces, we can form their wedge sum by taking the quotient of their disjoint union by identifying the two basepoints.
If the basepoint of each space admits a contractible open neighborhood (which is the case if, for instance, both spaces are CW complexes), then we can apply the Van Kampen theorem to the wedge sum by taking as the two open sets slight enlargements of the two summands, and we conclude that the fundamental group of the wedge is the free product of the fundamental groups of the two spaces we started with.
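In symbols (the basepoint notation here is assumed, with p the common basepoint of the wedge):

\[ \pi_1(X \vee Y, p) \;\cong\; \pi_1(X, x_0) * \pi_1(Y, y_0). \]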
Orientable genus-g surfaces
A more complicated example is the calculation of the fundamental group of a genus-n orientable surface S, otherwise known as the genus-n surface group. One can construct S using its standard fundamental polygon. For the first open set A, pick a disk within the center of the polygon. Pick B to be the complement in S of the center point of A. Then the intersection of A and B is an annulus, which is known to be homotopy equivalent to (and so has the same fundamental group as) a circle. The fundamental group of A ∩ B is therefore infinite cyclic, that is, the integers, and the fundamental group of A, a disk, is trivial. Thus the map induced by the inclusion of A ∩ B into A sends any generator to the trivial element. However, the map induced by the inclusion of A ∩ B into B is not trivial. In order to understand this, first one must calculate the fundamental group of B. This is easily done as one can deformation retract B (which is S with one point deleted) onto the edges labeled by
This space is known to be the wedge sum of 2n circles (also called a bouquet of circles), which further is known to have fundamental group isomorphic to the free group with 2n generators, which in this case can be represented by the edges themselves. We now have enough information to apply Van Kampen's theorem. The generators are the loops corresponding to these edges (A is simply connected, so it contributes no generators), and there is exactly one relation, induced by the inclusion of the annulus A ∩ B into B.
Using generators and relations, this group has a standard presentation.
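A sketch of that presentation, with the 2n edge loops written as A_1, B_1, …, A_n, B_n (an assumed labeling) and the single relation being the word traced out by the boundary of the polygon:

\[ \pi_1(S) \;\cong\; \big\langle\, A_1, B_1, \ldots, A_n, B_n \;\big|\; [A_1, B_1][A_2, B_2] \cdots [A_n, B_n] \,\big\rangle, \qquad [A_i, B_i] = A_i B_i A_i^{-1} B_i^{-1}. \]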
Simple-connectedness
If X is a space that can be written as the union of two open simply connected sets U and V with U ∩ V non-empty and path-connected, then X is simply connected.
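Sketched in symbols, using the group form of the theorem stated above:

\[ \pi_1(X) \;\cong\; \pi_1(U) *_{\pi_1(U \cap V)} \pi_1(V) \;\cong\; \{1\} *_{\pi_1(U \cap V)} \{1\} \;\cong\; \{1\}. \]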
Generalizations
As explained above, this theorem was extended by Ronald Brown to the non-connected case by using the fundamental groupoid on a set A of base points. The theorem for arbitrary covers, with the restriction that A meets all threefold intersections of the sets of the cover, is given in the paper by Brown and Abdul Razak Salleh. The theorem and proof for the fundamental group, but using some groupoid methods, are also given in J. Peter May's book. The version that allows more than two overlapping sets but with A a singleton is also given in Allen Hatcher's book below, theorem 1.20.
Applications of the fundamental groupoid on a set of base points to the Jordan curve theorem, covering spaces, and orbit spaces are given in Ronald Brown's book. In the case of orbit spaces, it is convenient to take A to include all the fixed points of the action. An example here is the conjugation action on the circle.
References to higher-dimensional versions of the theorem which yield some information on homotopy types are given in an article on higher-dimensional group theories and groupoids. Thus a 2-dimensional Van Kampen theorem which computes nonabelian second relative homotopy groups was given by Ronald Brown and Philip J. Higgins. A full account and extensions to all dimensions are given by Brown, Higgins, and Rafael Sivera, while an extension to n-cubes of spaces is given by Ronald Brown and Jean-Louis Loday.
Fundamental groups also appear in algebraic geometry and are the main topic of Alexander Grothendieck's first Séminaire de géométrie algébrique (SGA1). A version of Van Kampen's theorem appears there, and is proved along quite different lines than in algebraic topology, namely by descent theory. A similar proof works in algebraic topology.
See also
Higher-dimensional algebra
Higher category theory
Mayer–Vietoris sequence
Pseudocircle
Ronald Brown (mathematician)
Notes
References
Allen Hatcher, Algebraic topology. (2002) Cambridge University Press, Cambridge, xii+544 pp. and
Peter May, A Concise Course in Algebraic Topology. (1999) University of Chicago Press, (Section 2.7 provides a category-theoretic presentation of the theorem as a colimit in the category of groupoids).
Ronald Brown, Groupoids and Van Kampen's theorem, Proc. London Math. Soc. (3) 17 (1967) 385–401.
Mathoverflow discussion on many base points
Ronald Brown, Topology and groupoids (2006) Booksurge LLC
R. Brown and A. Razak, A Van Kampen theorem for unions of non-connected spaces, Archiv. Math. 42 (1984) 85–88. (This paper gives probably the optimal version of the theorem, namely the groupoid version of the theorem for an arbitrary open cover and a set of base points which meets every path component of every 1-, 2-, 3-fold intersection of the sets of the cover.)
P.J. Higgins, Categories and groupoids (1971) Van Nostrand Reinhold
Ronald Brown, Higher-dimensional group theory (2007) (Gives a broad view of higher-dimensional Van Kampen theorems involving multiple groupoids).
Seifert, H., Konstruction drei dimensionaler geschlossener Raume. Berichte Sachs. Akad. Leipzig, Math.-Phys. Kl. (83) (1931) 26–66.
E. R. van Kampen. On the connection between the fundamental groups of some related spaces. American Journal of Mathematics, vol. 55 (1933), pp. 261–267.
Brown, R., Higgins, P. J, On the connection between the second relative homotopy groups of some related spaces, Proc. London Math. Soc. (3) 36 (1978) 193–212.
Brown, R., Higgins, P. J. and Sivera, R.. 2011, EMS Tracts in Mathematics Vol.15 (2011) Nonabelian Algebraic Topology: filtered spaces, crossed complexes, cubical homotopy groupoids; (The first of three Parts discusses the applications of the 1- and 2-dimensional versions of the Seifert–van Kampen Theorem. The latter allows calculations of nonabelian second relative homotopy groups, and in fact of homotopy 2-types. The second part applies a Higher Homotopy van Kampen Theorem for crossed complexes, proved in Part III.)
R. Brown, H. Kamps, T. Porter : A homotopy double groupoid of a Hausdorff space II: a Van Kampen theorem', Theory and Applications of Categories, 14 (2005) 200–220.
Dylan G.L. Allegretti, Simplicial Sets and Van Kampen's Theorem (Discusses generalized versions of Van Kampen's theorem applied to topological spaces and simplicial sets).
R. Brown and J.-L. Loday, "Van Kampen theorems for diagrams of spaces", Topology 26 (1987) 311–334.
External links
Category theory
Higher category theory
Homotopy theory
Theorems in algebraic topology | Seifert–Van Kampen theorem | [
"Mathematics"
] | 2,614 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Theorems in topology",
"Fields of abstract algebra",
"Higher category theory",
"Category theory",
"Mathematical relations",
"Theorems in algebraic topology"
] |
383,675 | https://en.wikipedia.org/wiki/Carbohydrate%20metabolism | Carbohydrate metabolism is the whole of the biochemical processes responsible for the metabolic formation, breakdown, and interconversion of carbohydrates in living organisms.
Carbohydrates are central to many essential metabolic pathways. Plants synthesize carbohydrates from carbon dioxide and water through photosynthesis, allowing them to store energy absorbed from sunlight internally. When animals and fungi consume plants, they use cellular respiration to break down these stored carbohydrates to make energy available to cells. Both animals and plants temporarily store the released energy in the form of high-energy molecules, such as adenosine triphosphate (ATP), for use in various cellular processes.
Humans can consume a variety of carbohydrates; digestion breaks down complex carbohydrates into simple monomers (monosaccharides): glucose, fructose, mannose and galactose. After resorption in the gut, the monosaccharides are transported, through the portal vein, to the liver, where all non-glucose monosaccharides (fructose, galactose) are also transformed into glucose. Glucose (blood sugar) is distributed to cells in the tissues, where it is broken down via cellular respiration, or stored as glycogen. In cellular (aerobic) respiration, glucose and oxygen are metabolized to release energy, with carbon dioxide and water as end products.
Metabolic pathways
Glycolysis
Glycolysis is the process of breaking down a glucose molecule into two pyruvate molecules, while storing energy released during this process as adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide (NADH). Nearly all organisms that break down glucose utilize glycolysis. Glucose regulation and product use are the primary categories in which these pathways differ between organisms. In some tissues and organisms, glycolysis is the sole method of energy production. This pathway is common to both anaerobic and aerobic respiration.
Glycolysis consists of ten steps, split into two phases. During the first phase, it requires the breakdown of two ATP molecules. During the second phase, chemical energy from the intermediates is transferred into ATP and NADH. The breakdown of one molecule of glucose results in two molecules of pyruvate, which can be further oxidized to access more energy in later processes.
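The net reaction for the whole pathway, as commonly summarized in biochemistry textbooks (a standard summary rather than material from this article's sources), is:

\[ \text{Glucose} + 2\,\text{NAD}^+ + 2\,\text{ADP} + 2\,\text{P}_i \;\longrightarrow\; 2\,\text{Pyruvate} + 2\,\text{NADH} + 2\,\text{H}^+ + 2\,\text{ATP} + 2\,\text{H}_2\text{O}. \]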
Glycolysis can be regulated at different steps of the process through feedback regulation. The step that is regulated the most is the third step. This regulation is to ensure that the body is not over-producing pyruvate molecules. The regulation also allows for the storage of glucose molecules into fatty acids. There are various enzymes that are used throughout glycolysis. The enzymes upregulate, downregulate, and feedback regulate the process.
Gluconeogenesis
Gluconeogenesis (GNG) is a metabolic pathway that results in the generation of glucose from certain non-carbohydrate carbon substrates. It is a ubiquitous process, present in plants, animals, fungi, bacteria, and other microorganisms. In vertebrates, gluconeogenesis occurs mainly in the liver and, to a lesser extent, in the cortex of the kidneys. It is one of two primary mechanisms – the other being degradation of glycogen (glycogenolysis) – used by humans and many other animals to maintain blood sugar levels, avoiding low levels (hypoglycemia). In ruminants, because dietary carbohydrates tend to be metabolized by rumen organisms, gluconeogenesis occurs regardless of fasting, low-carbohydrate diets, exercise, etc. In many other animals, the process occurs during periods of fasting, starvation, low-carbohydrate diets, or intense exercise.
In humans, substrates for gluconeogenesis may come from any non-carbohydrate sources that can be converted to pyruvate or intermediates of glycolysis (see figure). For the breakdown of proteins, these substrates include glucogenic amino acids (although not ketogenic amino acids); from breakdown of lipids (such as triglycerides), they include glycerol, odd-chain fatty acids (although not even-chain fatty acids, see below); and from other parts of metabolism they include lactate from the Cori cycle. Under conditions of prolonged fasting, acetone derived from ketone bodies can also serve as a substrate, providing a pathway from fatty acids to glucose. Although most gluconeogenesis occurs in the liver, the relative contribution of gluconeogenesis by the kidney is increased in diabetes and prolonged fasting.
The gluconeogenesis pathway is highly endergonic until it is coupled to the hydrolysis of ATP or guanosine triphosphate (GTP), effectively making the process exergonic. For example, the pathway leading from pyruvate to glucose-6-phosphate requires 4 molecules of ATP and 2 molecules of GTP to proceed spontaneously. These ATPs are supplied from fatty acid catabolism via beta oxidation.
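For illustration, a common textbook summary of the overall stoichiometry from pyruvate back to glucose (water and proton bookkeeping, which varies between conventions, is omitted here) is:

\[ 2\,\text{Pyruvate} + 4\,\text{ATP} + 2\,\text{GTP} + 2\,\text{NADH} \;\longrightarrow\; \text{Glucose} + 4\,\text{ADP} + 2\,\text{GDP} + 6\,\text{P}_i + 2\,\text{NAD}^+. \]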
Glycogenolysis
Glycogenolysis refers to the breakdown of glycogen. In the liver, muscles, and the kidney, this process occurs to provide glucose when necessary. A single glucose molecule is cleaved from a branch of glycogen, and is transformed into glucose-1-phosphate during this process. This molecule can then be converted to glucose-6-phosphate, an intermediate in the glycolysis pathway.
Glucose-6-phosphate can then progress through glycolysis. Glycolysis only requires the input of one molecule of ATP when the glucose originates in glycogen. Alternatively, glucose-6-phosphate can be converted back into glucose in the liver and the kidneys, allowing it to raise blood glucose levels if necessary.
Glucagon in the liver stimulates glycogenolysis when the blood glucose is lowered, known as hypoglycemia. The glycogen in the liver can function as a backup source of glucose between meals. Liver glycogen mainly serves the central nervous system. Adrenaline stimulates the breakdown of glycogen in the skeletal muscle during exercise. In the muscles, glycogen ensures a rapidly accessible energy source for movement.
Glycogenesis
Glycogenesis refers to the process of synthesizing glycogen. In humans, glucose can be converted to glycogen via this process. Glycogen is a highly branched structure, consisting of the core protein Glycogenin, surrounded by branches of glucose units, linked together. The branching of glycogen increases its solubility, and allows for a higher number of glucose molecules to be accessible for breakdown at the same time. Glycogenesis occurs primarily in the liver, skeletal muscles, and kidney. The Glycogenesis pathway consumes energy, like most synthetic pathways, because an ATP and a UTP are consumed for each molecule of glucose introduced.
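A sketch of the individual steps behind that ATP and UTP cost, using standard enzyme names (a textbook summary rather than material from this article's sources):

Glucose + ATP → Glucose-6-phosphate + ADP (hexokinase or glucokinase)
Glucose-6-phosphate ⇌ Glucose-1-phosphate (phosphoglucomutase)
Glucose-1-phosphate + UTP → UDP-glucose + PPi (UDP-glucose pyrophosphorylase)
UDP-glucose + glycogen(n) → glycogen(n+1) + UDP (glycogen synthase, with the branching enzyme creating branch points)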
Pentose phosphate pathway
The pentose phosphate pathway is an alternative method of oxidizing glucose. It occurs in the liver, adipose tissue, adrenal cortex, testis, mammary glands, phagocytes, and red blood cells. It produces products that are used in other cell processes, while reducing NADP to NADPH. This pathway is regulated through changes in the activity of glucose-6-phosphate dehydrogenase.
Fructose metabolism
Fructose must undergo certain extra steps in order to enter the glycolysis pathway. Enzymes located in certain tissues can add a phosphate group to fructose. This phosphorylation creates fructose-6-phosphate, an intermediate in the glycolysis pathway that can be broken down directly in those tissues. This pathway occurs in the muscles, adipose tissue, and kidney. In the liver, enzymes produce fructose-1-phosphate, which enters the glycolysis pathway and is later cleaved into glyceraldehyde and dihydroxyacetone phosphate.
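A sketch of the hepatic route described above, using standard enzyme names (a textbook summary, not drawn from this article's sources):

Fructose + ATP → Fructose-1-phosphate + ADP (fructokinase)
Fructose-1-phosphate → Glyceraldehyde + Dihydroxyacetone phosphate (aldolase B)
Glyceraldehyde + ATP → Glyceraldehyde-3-phosphate + ADP (triose kinase)

In muscle, adipose tissue and kidney, the single step Fructose + ATP → Fructose-6-phosphate + ADP (hexokinase) feeds fructose directly into glycolysis.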
Galactose metabolism
Lactose, or milk sugar, consists of one molecule of glucose and one molecule of galactose. After separation from glucose, galactose travels to the liver for conversion to glucose. Galactokinase uses one molecule of ATP to phosphorylate galactose. The phosphorylated galactose is then converted to glucose-1-phosphate, and then eventually glucose-6-phosphate, which can be broken down in glycolysis.
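A sketch of these steps, known as the Leloir pathway, using standard enzyme names (a textbook summary, not drawn from this article's sources):

Galactose + ATP → Galactose-1-phosphate + ADP (galactokinase)
Galactose-1-phosphate + UDP-glucose → Glucose-1-phosphate + UDP-galactose (galactose-1-phosphate uridylyltransferase)
UDP-galactose → UDP-glucose (UDP-galactose 4-epimerase, regenerating the UDP-glucose)
Glucose-1-phosphate ⇌ Glucose-6-phosphate (phosphoglucomutase)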
Energy production
Many steps of carbohydrate metabolism allow the cells to access energy and store it more transiently in ATP. The cofactors NAD+ and FAD are sometimes reduced during this process to form NADH and FADH2, which drive the creation of ATP in other processes. A molecule of NADH can produce 1.5–2.5 molecules of ATP, whereas a molecule of FADH2 yields 1.5 molecules of ATP.
Typically, the complete breakdown of one molecule of glucose by aerobic respiration (i.e. involving glycolysis, the citric acid cycle and oxidative phosphorylation, the last providing the most energy) yields about 30–32 molecules of ATP. Oxidation of one gram of carbohydrate yields approximately 4 kcal of energy.
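One common accounting of the 30–32 figure, assuming 2.5 ATP per mitochondrial NADH and 1.5 ATP per FADH2 (cytosolic NADH from glycolysis may yield only about 1.5 ATP, depending on the shuttle used):

Glycolysis: 2 ATP + 2 NADH
Pyruvate oxidation: 2 NADH
Citric acid cycle: 2 ATP (as GTP) + 6 NADH + 2 FADH2
Total: 4 ATP + 10 NADH + 2 FADH2 ≈ 4 + 25 + 3 = 32 ATP, or about 30 ATP when the glycolytic NADH is counted at 1.5.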
Hormonal regulation
Glucoregulation is the maintenance of steady levels of glucose in the body.
Hormones released from the pancreas regulate the overall metabolism of glucose. Insulin and glucagon are the primary hormones involved in maintaining a steady level of glucose in the blood, and the release of each is controlled by the amount of nutrients currently available. The amount of insulin released in the blood and sensitivity of the cells to the insulin both determine the amount of glucose that cells break down. Increased levels of glucagon activate the enzymes that catalyze glycogenolysis and inhibit the enzymes that catalyze glycogenesis. Conversely, glycogenesis is enhanced and glycogenolysis inhibited when there are high levels of insulin in the blood.
The level of circulatory glucose (known informally as "blood sugar"), as well as the detection of nutrients in the duodenum, is the most important factor determining the amount of glucagon or insulin produced. The release of glucagon is precipitated by low levels of blood glucose, whereas high levels of blood glucose stimulate cells to produce insulin. Because the level of circulatory glucose is largely determined by the intake of dietary carbohydrates, diet controls major aspects of metabolism via insulin. In humans, insulin is made by beta cells in the pancreas, fat is stored in adipose tissue cells, and glycogen is both stored and released as needed by liver cells. Regardless of insulin levels, no glucose is released to the blood from internal glycogen stores from muscle cells.
Carbohydrates as storage
Carbohydrates are typically stored as long polymers of glucose molecules with glycosidic bonds for structural support (e.g. chitin, cellulose) or for energy storage (e.g. glycogen, starch). However, the strong affinity of most carbohydrates for water makes storage of large quantities of carbohydrates inefficient due to the large molecular weight of the solvated water-carbohydrate complex. In most organisms, excess carbohydrates are regularly catabolised to form acetyl-CoA, which is a feed stock for the fatty acid synthesis pathway; fatty acids, triglycerides, and other lipids are commonly used for long-term energy storage. The hydrophobic character of lipids makes them a much more compact form of energy storage than hydrophilic carbohydrates. Gluconeogenesis permits glucose to be synthesized from various sources, including lipids.
In some animals (such as termites) and some microorganisms (such as protists and bacteria), cellulose can be disassembled during digestion and absorbed as glucose.
Human diseases
Diabetes mellitus
Lactose intolerance
Fructose malabsorption
Galactosemia
Glycogen storage disease
See also
Inborn errors of carbohydrate metabolism
Hitting the wall (glycogen depletion)
Second wind (increased ATP from fatty acids after glycogen depletion)
References
External links
BBC - GCSE Bitesize - Biology | Humans | Glucoregulation
Sugar4Kids
Carbohydrate metabolism
| Carbohydrate metabolism | [
"Chemistry"
] | 2,695 | [
"Carbohydrate metabolism",
"Carbohydrate chemistry",
"Metabolism"
] |
383,833 | https://en.wikipedia.org/wiki/Implosion%20%28mechanical%20process%29 | Implosion is the collapse of an object into itself from a pressure differential or gravitational force. The opposite of explosion (which expands the volume), implosion reduces the volume occupied and concentrates matter and energy. Implosion involves a difference between internal (lower) and external (higher) pressure, or inward and outward forces, that is so large that the structure collapses inward into itself, or into the space it occupied if it is not a completely solid object. Examples of implosion include a submarine being crushed by hydrostatic pressure and the collapse of a star under its own gravitational pressure.
In some but not all cases, an implosion propels material outward, for example due to the force of inward falling material rebounding, or peripheral material being ejected as the inner parts collapse. If the object was previously solid, then implosion usually requires it to take on a more dense form—in effect to be more concentrated, compressed, or converted into a denser material.
Examples
Nuclear weapons
In an implosion-type nuclear weapon design, a sphere of plutonium, uranium, or other fissile material is imploded by a spherical arrangement of explosive charges. This decreases the material's volume and thus increases its density by a factor of two to three, causing it to reach critical mass and create a nuclear explosion.
In some forms of thermonuclear weapons, the energy from this explosion is then used to implode a capsule of fusion fuel before igniting it, causing a fusion reaction (see Teller–Ulam design). In general, the use of radiation to implode something, as in a hydrogen bomb or in laser driven inertial confinement fusion, is known as radiation implosion.
Fluid dynamics
Cavitation (bubble formation/collapse in a fluid) involves an implosion process. When a cavitation bubble forms in a liquid (for example, by a high-speed water propeller), this bubble is typically rapidly collapsed—imploded—by the surrounding liquid.
Astrophysics
Implosion is a key part of the gravitational collapse of large stars, which can lead to the creation of supernovas, neutron stars and black holes.
In the most common case, the innermost part of a large star (called the core) stops burning and without this source of heat, the forces holding electrons and protons apart are no longer strong enough to do so. The core collapses in on itself exceedingly quickly, and becomes a neutron star or black hole; the outer layers of the original star fall inwards and may rebound off the newly created neutron star (if one was created), creating a supernova.
Cathode-ray tube and fluorescent lighting implosion
A high vacuum exists within all cathode-ray tubes. If the outer glass envelope is damaged, a dangerous implosion may occur. The implosion may scatter glass pieces at dangerous speeds. While modern CRTs used in televisions and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs removed from equipment must be handled carefully to avoid injury.
Controlled structure demolition
The demolition of large buildings using precisely placed and timed explosions so that the structure collapses on itself is often erroneously described as implosion.
See also
Type II supernova
References
External links
Converging Shock Waves
Mechanics
Implosion | Implosion (mechanical process) | [
"Physics",
"Engineering"
] | 691 | [
"Mechanics",
"Mechanical engineering",
"Implosion"
] |
384,701 | https://en.wikipedia.org/wiki/Fountain | A fountain, from the Latin "fons" (genitive "fontis"), meaning source or spring, is a decorative reservoir used for discharging water. It is also a structure that jets water into the air for a decorative or dramatic effect.
Fountains were originally purely functional, connected to springs or aqueducts and used to provide drinking water and water for bathing and washing to the residents of cities, towns and villages. Until the late 19th century most fountains operated by gravity, and needed a source of water higher than the fountain, such as a reservoir or aqueduct, to make the water flow or jet into the air.
In addition to providing drinking water, fountains were used for decoration and to celebrate their builders. Roman fountains were decorated with bronze or stone masks of animals or heroes. In the Middle Ages, Moorish and Muslim garden designers used fountains to create miniature versions of the gardens of paradise. King Louis XIV of France used fountains in the Gardens of Versailles to illustrate his power over nature. The baroque decorative fountains of Rome in the 17th and 18th centuries marked the arrival point of restored Roman aqueducts and glorified the Popes who built them.
By the end of the 19th century, as indoor plumbing became the main source of drinking water, urban fountains became purely decorative. Mechanical pumps replaced gravity and allowed fountains to recycle water and to force it high into the air. The Jet d'Eau in Lake Geneva, built in 1951, shoots water in the air. The highest such fountain in the world is King Fahd's Fountain in Jeddah, Saudi Arabia, which spouts water above the Red Sea.
Fountains are used today to decorate city parks and squares; to honor individuals or events; for recreation and for entertainment. A splash pad or spray pool allows city residents to enter, get wet and cool off in summer. The musical fountain combines moving jets of water, colored lights and recorded music, controlled by a computer, for dramatic effects. Fountains can themselves also be musical instruments played by obstruction of one or more of their water jets.
Drinking fountains provide clean drinking water in public buildings, parks and public spaces.
History
Ancient fountains
Ancient civilizations built stone basins to capture and hold precious drinking water. A carved stone basin, dating to around 700 BC, was discovered in the ruins of the ancient Sumerian city of Lagash in modern Iraq. The ancient Assyrians constructed a series of basins in the gorge of the Comel River, carved in solid rock, connected by small channels, descending to a stream. The lowest basin was decorated with carved reliefs of two lions. The ancient Egyptians had ingenious systems for hoisting water up from the Nile for drinking and irrigation, but without a higher source of water it was not possible to make water flow by gravity. There are lion-shaped fountains in the Temple of Dendera in Qena.
The ancient Greeks used aqueducts and gravity-powered fountains to distribute water. According to ancient historians, fountains existed in Athens, Corinth, and other ancient Greek cities in the 6th century BC as the terminating points of aqueducts which brought water from springs and rivers into the cities. In the 6th century BC, the Athenian ruler Peisistratos built the main fountain of Athens, the Enneacrounos, in the Agora, or main square. It had nine large cannons, or spouts, which supplied drinking water to local residents.
Greek fountains were made of stone or marble, with water flowing through bronze pipes and emerging from the mouth of a sculpted mask that represented the head of a lion or the muzzle of an animal. Most Greek fountains flowed by simple gravity, but the Greeks also discovered how to use the principle of a siphon to make water spout, as seen in pictures on Greek vases.
Ancient Roman fountains
The Ancient Romans built an extensive system of aqueducts from mountain rivers and lakes to provide water for the fountains and baths of Rome. The Roman engineers used lead pipes instead of bronze to distribute the water throughout the city. The excavations at Pompeii, which revealed the city as it was when it was destroyed by Mount Vesuvius in 79 AD, uncovered free-standing fountains and basins placed at intervals along city streets, fed by siphoning water upwards from lead pipes under the street. The excavations of Pompeii also showed that the homes of wealthy Romans often had a small fountain in the atrium, or interior courtyard, with water coming from the city water supply and spouting into a small bowl or basin.
Ancient Rome was a city of fountains. According to Sextus Julius Frontinus, the Roman consul who was named curator aquarum or guardian of the water of Rome in 98 AD, Rome had nine aqueducts which fed 39 monumental fountains and 591 public basins, not counting the water supplied to the Imperial household, baths and owners of private villas. Each of the major fountains was connected to two different aqueducts, in case one was shut down for service.
The Romans were able to make fountains jet water into the air, by using the pressure of water flowing from a distant and higher source of water to create hydraulic head, or force. Illustrations of fountains in gardens spouting water are found on wall paintings in Rome from the 1st century BC, and in the villas of Pompeii. The Villa of Hadrian in Tivoli featured a large swimming basin with jets of water. Pliny the Younger described the banquet room of a Roman villa where a fountain began to jet water when visitors sat on a marble seat. The water flowed into a basin, where the courses of a banquet were served in floating dishes shaped like boats.
Roman engineers built aqueducts and fountains throughout the Roman Empire. Examples can be found today in the ruins of Roman towns in Vaison-la-Romaine and Glanum in France, in Augst, Switzerland, and other sites.
Medieval fountains
In Nepal there were public drinking fountains at least as early as 550 AD. They are called dhunge dharas or hitis. They consist of intricately carved stone spouts through which water flows uninterrupted from underground water sources. They are found extensively in Nepal and some of them are still operational. Construction of water conduits like hitis and dug wells are considered as pious acts in Nepal.
During the Middle Ages, Roman aqueducts were wrecked or fell into decay, and many fountains throughout Europe stopped working, so fountains existed mainly in art and literature, or in secluded monasteries or palace gardens. Fountains in the Middle Ages were associated with the source of life, purity, wisdom, innocence, and the Garden of Eden. In illuminated manuscripts like the Très Riches Heures du Duc de Berry (1411–1416), the Garden of Eden was shown with a graceful gothic fountain in the center. The Ghent Altarpiece by Jan van Eyck, finished in 1432, also shows a fountain as a feature of the adoration of the mystic lamb, a scene apparently set in Paradise.
The cloister of a monastery was supposed to be a replica of the Garden of Eden, protected from the outside world. Simple fountains, called lavabos, were placed inside Medieval monasteries such as Le Thoronet Abbey in Provence and were used for ritual washing before religious services.
Fountains were also found in the enclosed medieval jardins d'amour, "gardens of courtly love" – ornamental gardens used for courtship and relaxation. The medieval romance The Roman de la Rose describes a fountain in the center of an enclosed garden, feeding small streams bordered by flowers and fresh herbs.
Some Medieval fountains, like the cathedrals of their time, illustrated biblical stories, local history and the virtues of their time. The Fontana Maggiore in Perugia, dedicated in 1278, is decorated with stone carvings representing prophets and saints, allegories of the arts, labors of the months, the signs of the zodiac, and scenes from Genesis and Roman history.
Medieval fountains could also provide amusement. The gardens of the Counts of Artois at the Château de Hesdin, built in 1295, contained famous fountains, called Les Merveilles de Hesdin ("The Wonders of Hesdin") which could be triggered to drench surprised visitors.
Fountains of the Islamic world
Shortly after the spread of Islam, the Arabs incorporated into their city planning the famous Islamic gardens. Islamic gardens after the 7th century were traditionally enclosed by walls and were designed to represent paradise. The paradise gardens were laid out in the form of a cross, with four channels representing the rivers of Paradise, dividing the four parts of the world. Water sometimes spouted from a fountain in the center of the cross, representing the spring or fountain, Salsabil, described in the Qur'an as the source of the rivers of Paradise.
In the 9th century, the Banū Mūsā brothers, a trio of Persian inventors, were commissioned by the Caliph of Baghdad to summarize the engineering knowledge of the ancient Greek and Roman world. They wrote a book entitled the Book of Ingenious Devices, describing the works of the 1st century Greek engineer Hero of Alexandria and other engineers, plus many of their own inventions. They described fountains which formed water into different shapes and a wind-powered water pump, but it is not known if any of their fountains were ever actually built.
The Persian rulers of the Middle Ages had elaborate water distribution systems and fountains in their palaces and gardens. Water was carried by a pipe into the palace from a source at a higher elevation. Once inside the palace or garden it came up through a small hole in a marble or stone ornament and poured into a basin or garden channels. The gardens of Pasargadae had a system of canals which flowed from basin to basin, both watering the garden and making a pleasant sound. The Persian engineers also used the principle of the siphon (called shotor-gelu in Persian, literally 'neck of the camel') to create fountains which spouted water or made it resemble a bubbling spring. The garden of Fin, near Kashan, used 171 spouts connected to pipes to create a fountain called the Howz-e jush, or "boiling basin".
The 11th century Persian poet Azraqi described a Persian fountain:
From a marvelous faucet of gold pours a wave
whose clarity is more pure than a soul;
The turquoise and silver form ribbons in the basin
coming from this faucet of gold ...
Reciprocating motion was first described in 1206 by Arab Muslim engineer and inventor al-Jazari when the kings of the Artuqid dynasty in Turkey commissioned him to manufacture a machine to raise water for their palaces. The finest result was a machine called the double-acting reciprocating piston pump, which translated rotary motion to reciprocating motion via the crankshaft-connecting rod mechanism.
The palaces of Moorish Spain, particularly the Alhambra in Granada, had famous fountains. The patio of the Sultan in the gardens of Generalife in Granada (1319) featured spouts of water pouring into a basin, with channels which irrigated orange and myrtle trees. The garden was modified over the centuries – the jets of water which cross the canal today were added in the 19th century.
The fountain in the Court of the Lions of the Alhambra, built from 1362 to 1391, is a large vasque mounted on twelve stone statues of lions. Water spouts upward in the vasque and pours from the mouths of the lions, filling four channels dividing the courtyard into quadrants. The basin dates to the 14th century, but the lions spouting water are believed to be older, dating to the 11th century.
The design of the Islamic garden spread throughout the Islamic world, from Moorish Spain to the Mughal Empire in the Indian subcontinent. The Shalimar Gardens built by Emperor Shah Jahan in 1641, were said to be ornamented with 410 fountains, which fed into a large basin, canal and marble pools.
In the Ottoman Empire, rulers often built fountains next to mosques so worshippers could do their ritual washing. Examples include the Fountain of Qasim Pasha (1527), Temple Mount, Jerusalem, an ablution and drinking fountain built during the Ottoman reign of Suleiman the Magnificent; the Fountain of Ahmed III (1728) at the Topkapı Palace, Istanbul, another Fountain of Ahmed III in Üsküdar (1729) and Tophane Fountain (1732). Palaces themselves often had small decorated fountains, which provided drinking water, cooled the air, and made a pleasant splashing sound. One surviving example is the Fountain of Tears (1764) at the Bakhchisarai Palace, in Crimea, which was made famous by a poem of Alexander Pushkin.
The sebil was a decorated fountain that was often the only source of water for the surrounding neighborhood.
It was often commissioned as an act of Islamic piety by a rich person.
Renaissance fountains (15th–17th centuries)
In the 14th century, Italian humanist scholars began to rediscover and translate forgotten Roman texts on architecture by Vitruvius, on hydraulics by Hero of Alexandria, and descriptions of Roman gardens and fountains by Pliny the Younger, Pliny the Elder, and Varro. The treatise on architecture, De re aedificatoria, by Leon Battista Alberti, which described in detail Roman villas, gardens and fountains, became the guidebook for Renaissance builders.
In Rome, Pope Nicholas V (1397–1455), himself a scholar who commissioned hundreds of translations of ancient Greek classics into Latin, decided to embellish the city and make it a worthy capital of the Christian world. In 1453, he began to rebuild the Acqua Vergine, the ruined Roman aqueduct which had brought clean drinking water to the city from eight miles (13 km) away. He also decided to revive the Roman custom of marking the arrival point of an aqueduct with a mostra, a grand commemorative fountain. He commissioned the architect Leon Battista Alberti to build a wall fountain where the Trevi Fountain is now located. The aqueduct he restored, with modifications and extensions, eventually supplied water to the Trevi Fountain and the famous baroque fountains in the Piazza del Popolo and Piazza Navona.
One of the first new fountains to be built in Rome during the Renaissance was the fountain in the piazza in front of the church of Santa Maria in Trastevere (1472), which was placed on the site of an earlier Roman fountain. Its design, based on an earlier Roman model, with a circular vasque on a pedestal pouring water into a basin below, became the model for many other fountains in Rome, and eventually for fountains in other cities, from Paris to London.
In 1503, Pope Julius II decided to recreate a classical pleasure garden in the same place. The new garden, called the Cortile del Belvedere, was designed by Donato Bramante. The garden was decorated with the Pope's famous collection of classical statues, and with fountains. The Venetian Ambassador wrote in 1523, "... On one side of the garden is a most beautiful loggia, at one end of which is a lovely fountain that irrigates the orange trees and the rest of the garden by a little canal in the center of the loggia ..." The original garden was split in two by the construction of the Vatican Library in the 16th century, but a new fountain by Carlo Maderno was built in the Cortile del Belvedere, with a jet of water shooting up from a circular stone bowl on an octagonal pedestal in a large basin.
In 1537, in Florence, Cosimo I de' Medici, who had become ruler of the city at the age of only 17, also decided to launch a program of aqueduct and fountain building. The city had previously gotten all its drinking water from wells and reservoirs of rain water, which meant that there was little water or water pressure to run fountains. Cosimo built an aqueduct large enough for the first continually-running fountain in Florence, the Fountain of Neptune in the Piazza della Signoria (1560–1567). This fountain featured an enormous white marble statue of Neptune, resembling Cosimo, by sculptor Bartolomeo Ammannati.
Under the Medicis, fountains were not just sources of water, but advertisements of the power and benevolence of the city's rulers. They became central elements not only of city squares, but of the new Italian Renaissance garden. The great Medici Villa at Castello, built for Cosimo by Benedetto Varchi, featured two monumental fountains on its central axis: one, with two bronze figures representing Hercules slaying Antaeus, symbolizing the victory of Cosimo over his enemies; and a second, in the middle of a circular labyrinth of cypresses, laurel, myrtle and roses, with a bronze statue by Giambologna showing the goddess Venus wringing her hair. The planet Venus was governed by Capricorn, which was the emblem of Cosimo; the fountain symbolized that he was the absolute master of Florence.
By the middle Renaissance, fountains had become a form of theater, with cascades and jets of water coming from marble statues of animals and mythological figures. The most famous fountains of this kind were found in the Villa d'Este (1550–1572), at Tivoli near Rome, which featured a hillside of basins, fountains and jets of water, as well as a fountain which produced music by pouring water into a chamber, forcing air into a series of flute-like pipes. The gardens also featured giochi d'acqua, water jokes, hidden fountains which suddenly soaked visitors.
Between 1546 and 1549, the merchants of Paris built the first Renaissance-style fountain in Paris, the Fontaine des Innocents, to commemorate the ceremonial entry of the King into the city. The fountain, which originally stood against the wall of the church of the Holy Innocents, was rebuilt several times and now stands in a square near Les Halles. It is the oldest fountain in Paris.
King Henry II of France constructed an Italian-style garden with a fountain shooting a vertical jet of water for his favorite mistress, Diane de Poitiers, next to the Château de Chenonceau (1556–1559). At the royal Château de Fontainebleau, he built another fountain with a bronze statue of Diane, goddess of the hunt, modeled after Diane de Poitiers.
Later, after the death of Henry II, his widow, Catherine de Medici, expelled Diane de Poitiers from Chenonceau and built her own fountain and garden there.
King Henry IV of France made an important contribution to French fountains by inviting an Italian hydraulic engineer, Tommaso Francini, who had worked on the fountains of the villa at Pratalino, to make fountains in France. Francini became a French citizen in 1600, built the Medici Fountain, and during the rule of the young King Louis XIII, he was raised to the position of Intendant général des Eaux et Fontaines of the king, a position which was hereditary. His descendants became the royal fountain designers for Louis XIII and for Louis XIV at Versailles.
In 1630, another Medici, Marie de Medici, the widow of Henry IV, built her own monumental fountain in Paris, the Medici Fountain, in the garden of the Palais du Luxembourg. That fountain still exists today, with a long basin of water and statues added in 1866.
Baroque fountains (17th–18th century)
Baroque Fountains of Rome
The 17th and 18th centuries were a golden age for fountains in Rome, which began with the reconstruction of ruined Roman aqueducts and the construction by the Popes of mostra, or display fountains, to mark their termini. The new fountains were expressions of the new Baroque art, which was officially promoted by the Catholic Church as a way to win popular support against the Protestant Reformation; the Council of Trent had declared in the 16th century that the Church should counter austere Protestantism with art that was lavish, animated and emotional. The fountains of Rome, like the paintings of Rubens, were examples of the principles of Baroque art. They were crowded with allegorical figures, and filled with emotion and movement. In these fountains, sculpture became the principal element, and the water was used simply to animate and decorate the sculptures. They, like baroque gardens, were "a visual representation of confidence and power."
The first of the Fountains of St. Peter's Square, by Carlo Maderno (1614), was one of the earliest Baroque fountains in Rome, made to complement the lavish Baroque façade he designed for St. Peter's Basilica behind it. It was fed by water from the Paola aqueduct, restored in 1612, whose source was well above the level of the fountain, which meant it could shoot water twenty feet up from the fountain. Its form, with a large circular vasque on a pedestal pouring water into a basin and an inverted vasque above it spouting water, was imitated two centuries later in the Fountains of the Place de la Concorde in Paris.
The Triton Fountain in the Piazza Barberini (1642), by Gian Lorenzo Bernini, is a masterpiece of Baroque sculpture, representing Triton, half-man and half-fish, blowing his horn to calm the waters, following a text by the Roman poet Ovid in the Metamorphoses. The Triton fountain benefited from its location in a valley, and the fact that it was fed by the Aqua Felice aqueduct, restored in 1587, which arrived in Rome at an elevation well above that of the fountain; this difference in elevation between the source and the fountain meant that the water jetted sixteen feet straight up into the air from the conch shell of the triton.
The Piazza Navona became a grand theater of water, with three fountains, built in a line on the site of the Stadium of Domitian. The fountains at either end are by Giacomo della Porta; the Neptune fountain to the north (1572) shows the God of the Sea spearing an octopus, surrounded by tritons, sea horses and mermaids. At the southern end is Il Moro, possibly also a figure of Neptune riding a fish in a conch shell. In the center is the Fontana dei Quattro Fiumi (The Fountain of the Four Rivers) (1648–51), a highly theatrical fountain by Bernini, with statues representing rivers from the four continents: the Nile, Danube, Plate River and Ganges. Over the whole structure is an Egyptian obelisk, crowned by a cross with the emblem of the Pamphili family, representing Pope Innocent X, whose family palace was on the piazza. The theme of a fountain with statues symbolizing great rivers was later used in the Place de la Concorde (1836–40) and in the Fountain of Neptune in the Alexanderplatz in Berlin (1891). The fountains of Piazza Navona had one drawback - their water came from the Acqua Vergine, which had only a small drop from the source to the fountains, which meant the water could only fall or trickle downwards, not jet very high upwards.
The Trevi Fountain is the largest and most spectacular of Rome's fountains, designed to glorify the three different Popes who created it. It was built beginning in 1730 at the terminus of the reconstructed Acqua Vergine aqueduct, on the site of a Renaissance fountain by Leon Battista Alberti. It was the work of architect Nicola Salvi and the successive project of Pope Clement XII, Pope Benedict XIV and Pope Clement XIII, whose emblems and inscriptions are carried on the attic story, entablature and central niche. The central figure is Oceanus, the personification of all the seas and oceans, in an oyster-shell chariot, surrounded by Tritons and Sea Nymphs.
In fact, the fountain had very little water pressure, because the source of water was, like the source for the Piazza Navona fountains, the Acqua Vergine, with only a small drop from source to fountain. Salvi compensated for this problem by sinking the fountain down into the ground, and by carefully designing the cascade so that the water churned and tumbled, to add movement and drama. Wrote historians Maria Ann Conelli and Marilyn Symmes, "On many levels the Trevi altered the appearance, function and intent of fountains and was a watershed for future designs."
Baroque fountains of Versailles
Beginning in 1662, King Louis XIV of France began to build a new kind of garden, the Garden à la française, or French formal garden, at the Palace of Versailles. In this garden, the fountain played a central role. He used fountains to demonstrate the power of man over nature, and to illustrate the grandeur of his rule. In the Gardens of Versailles, instead of falling naturally into a basin, water was shot into the sky, or formed into the shape of a fan or bouquet. Dancing water was combined with music and fireworks to form a grand spectacle. These fountains were the work of the descendants of Tommaso Francini, the Italian hydraulic engineer who had come to France during the time of Henry IV and built the Medici Fountain and the Fountain of Diana at Fontainebleau.
Two fountains were the centerpieces of the Gardens of Versailles, both taken from the myths about Apollo, the sun god, the emblem of Louis XIV, and both symbolizing his power. The Fontaine Latone (1668–70) designed by André Le Nôtre and sculpted by Gaspard and Balthazar Marsy, represents the story of how the peasants of Lycia tormented Latona and her children, Diana and Apollo, and were punished by being turned into frogs. This was a reminder of how French peasants had abused Louis's mother, Anne of Austria, during the uprising called the Fronde in the 1650s. When the fountain is turned on, sprays of water pour down on the peasants, who are frenzied as they are transformed into creatures.
The other centerpiece of the Gardens, at the intersection of the main axes of the Gardens of Versailles, is the Bassin d'Apollon (1668–71), designed by Charles Le Brun and sculpted by Jean Baptiste Tuby. This statue shows a theme also depicted in the painted decoration in the Hall of Mirrors of the Palace of Versailles: Apollo in his chariot about to rise from the water, announced by Tritons with seashell trumpets. Historians Maria Ann Conelli and Marilyn Symmes wrote, "Designed for dramatic effect and to flatter the king, the fountain is oriented so that the Sun God rises from the west and travels east toward the chateau, in contradiction to nature."
Besides these two monumental fountains, the Gardens over the years contained dozens of other fountains, including thirty-nine animal fountains in the labyrinth depicting the fables of Jean de La Fontaine.
There were so many fountains at Versailles that it was impossible to have them all running at once; when Louis XIV made his promenades, his fountain-tenders turned on the fountains ahead of him and turned off those behind him. Louis built an enormous pumping station, the Machine de Marly, with fourteen water wheels and 253 pumps to raise the water three hundred feet from the River Seine, and even attempted to divert the River Eure to provide water for his fountains, but the water supply was never enough.
Baroque fountains of Peterhof
In Russia, Peter the Great founded a new capital at St. Petersburg in 1703 and built a small Summer Palace and gardens there beside the Neva River. The gardens featured a fountain of two sea monsters spouting water, among the earliest fountains in Russia.
In 1709, he began constructing a larger palace, Peterhof Palace, alongside the Gulf of Finland. Peter visited France in 1717 and saw the gardens and fountains of Louis XIV at Versailles, Marly and Fontainebleau. When he returned he began building a vast Garden à la française with fountains at Peterhof. The central feature of the garden was a water cascade, modeled after the cascade at the Château de Marly of Louis XIV, built in 1684. The gardens included trick fountains designed to drench unsuspecting visitors, a popular feature of the Italian Renaissance garden.
In 1800–1802 the Emperor Paul I of Russia and his successor, Alexander I of Russia, built a new fountain at the foot of the cascade depicting Samson prying open the mouth of a lion, representing Peter's victory over Sweden in the Great Northern War in 1721. The fountains were fed by reservoirs in the upper garden, while the Samson fountain was fed by a specially-constructed aqueduct four kilometers in length.
19th century fountains
In the early 19th century, London and Paris built aqueducts and new fountains to supply clean drinking water to their exploding populations. Napoleon Bonaparte started construction of the first canals bringing drinking water to Paris and built fifteen new fountains, the most famous being the Fontaine du Palmier in the Place du Châtelet (1806–1808), celebrating his military victories.
He also restored and put back into service some of the city's oldest fountains, such as the Medici Fountain. Two of Napoleon's fountains, the Chateau d'Eau and the fountain in the Place des Vosges, were the first purely decorative fountains in Paris, without water taps for drinking water.
Louis-Philippe (1830–1848) continued Napoleon's work, and added some of Paris's most famous fountains, notably the Fontaines de la Concorde (1836–1840) and the fountains in the Place des Vosges.
Following a deadly cholera epidemic in 1849, Louis Napoleon decided to completely rebuild the Paris water supply system, separating the water supply for fountains from the water supply for drinking. The most famous fountain built by Louis Napoleon was the Fontaine Saint-Michel, part of his grand reconstruction of Paris boulevards. Louis Napoleon relocated and rebuilt several earlier fountains, such as the Medici Fountain and the Fontaine de Leda, when their original sites were destroyed by his construction projects.
In the mid-nineteenth century the first fountains were built in the United States, connected to the first aqueducts bringing drinking water from outside the city. The first fountain in Philadelphia, at Centre Square, opened in 1809, and featured a statue by sculptor William Rush. The first fountain in New York City, in City Hall Park, opened in 1842, and the first fountain in Boston was turned on in 1848. The first famous American decorative fountain was the Bethesda Fountain in Central Park in New York City, opened in 1873.
The 19th century also saw the introduction of new materials in fountain construction; cast iron (the Fontaines de la Concorde); glass (the Crystal Fountain in London (1851)) and even aluminium (the Shaftesbury Memorial Fountain in Piccadilly Circus, London, (1897)).
The invention of steam pumps meant that water could be supplied directly to homes, and pumped upward in fountains. The new fountains in Trafalgar Square (1845) used steam pumps from an artesian well. By the end of the 19th century fountains in big cities were no longer used to supply drinking water, and were simply a form of art and urban decoration.
Another fountain innovation of the 19th century was the illuminated fountain: The Bartholdi Fountain at the Philadelphia Exposition of 1876 was illuminated by gas lamps. In 1884 a fountain in Britain featured electric lights shining upward through the water. The Exposition Universelle (1889) which celebrated the 100th anniversary of the French Revolution featured a fountain illuminated by electric lights shining up through the columns of water. The fountains, located in a basin forty meters in diameter, were given color by plates of colored glass inserted over the lamps. The Fountain of Progress gave its show three times each evening, for twenty minutes, with a series of different colors.
20th century fountains
Paris fountains in the 20th century no longer had to supply drinking water - they were purely decorative; and, since their water usually came from the river and not from the city aqueducts, their water was no longer drinkable. Twenty-eight new fountains were built in Paris between 1900 and 1940; nine new fountains between 1900 and 1910; four between 1920 and 1930; and fifteen between 1930 and 1940.
The biggest fountains of the period were those built for the International Expositions of 1900, 1925 and 1937, and for the Colonial Exposition of 1931. Of those, only the fountains from the 1937 exposition at the Palais de Chaillot still exist. (See Fountains of International Expositions).
Only a handful of fountains were built in Paris between 1940 and 1980. The most important ones built during that period were on the edges of the city, on the west, just outside the city limits, at La Défense, and to the east at the Bois de Vincennes.
Between 1981 and 1995, during the terms of President François Mitterrand and Culture Minister Jack Lang, and of Mitterrand's bitter political rival, Paris Mayor Jacques Chirac (Mayor from 1977 until 1995), the city experienced a program of monumental fountain building that exceeded that of Napoleon Bonaparte or Louis Philippe. More than one hundred fountains were built in Paris in the 1980s, mostly in the neighborhoods outside the center of Paris, where there had been few fountains before. These included the Fontaine Cristaux, an homage to Béla Bartók by Jean-Yves Lechevallier (1980); the Stravinsky Fountain next to the Pompidou Center, by sculptors Niki de Saint Phalle and Jean Tinguely (1983); the fountain of the Pyramid of the Louvre by I.M. Pei (1989); the Buren Fountain by sculptor Daniel Buren and Les Sphérades fountain, both in the Palais-Royal; and the fountains of Parc André-Citroën. The Mitterrand-Chirac fountains had no single style or theme. Many of the fountains were designed by famous sculptors or architects, such as Jean Tinguely, I.M. Pei, Claes Oldenburg and Daniel Buren, who had radically different ideas of what a fountain should be. Some were solemn, and others were whimsical. Most made little effort to blend with their surroundings - they were designed to attract attention.
Fountains built in the United States between 1900 and 1950 mostly followed European models and classical styles. The Samuel Francis Dupont Memorial Fountain was designed and created by Henry Bacon and Daniel Chester French, the architect and sculptor of the Lincoln Memorial, in 1921, in a pure neoclassical style.
Buckingham Fountain in Chicago was one of the first American fountains to use powerful modern pumps to shoot water high into the air.
The Fountain of Prometheus, built at the Rockefeller Center in 1933, was the first American fountain in the Art-Deco style.
After World War II, fountains in the United States became more varied in form. Some, like Ruth Asawa's Andrea (1968) and Vaillancourt Fountain (1971), both located in San Francisco, were pure works of sculpture. Other fountains, like the Franklin Roosevelt Memorial Waterfall (1997), by architect Lawrence Halprin, were designed as landscapes to illustrate themes. This fountain is part of the Franklin Delano Roosevelt Memorial in Washington D.C., which has four outdoor "rooms" illustrating his presidency. Each "room" contains a cascade or waterfall; the cascade in the third room illustrates the turbulence of the years of World War II. Halprin wrote at an early stage of the design: "the whole environment of the memorial becomes sculpture: to touch, feel, hear and contact - with all the senses."
The end of the 20th century saw the development of high-shooting fountains, beginning with the Jet d'eau in Geneva in 1951, and followed by taller and taller fountains in the United States and the Middle East. The highest fountain today is King Fahd's Fountain in Jeddah, Saudi Arabia.
It also saw the increasing popularity of the musical fountain, which combined water, music and light, choreographed by computers. (See Musical fountain below).
Contemporary fountains (2001–present)
The fountain called 'Bit.Fall' by German artist Julius Popp (2005) uses digital technologies to spell out words with water. The fountain is run by a statistical program which selects words at random from news stories on the Internet. It then recodes these words into pictures. The water is then released through 320 nozzles controlled by electromagnetic valves. The program uses rasterization and bitmap technologies to synchronize the valves so drops of water form an image of the words as they fall. According to Popp, the sheet of water is "a metaphor for the constant flow of information from which we cannot escape."
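The valve-scheduling idea can be illustrated with a short sketch. The following Python fragment is only a conceptual illustration, not Popp's actual software: the tiny 5×3 dot-matrix font, the 20 ms row interval and the two-letter alphabet are invented for the example. It shows how a word can be rasterized into a bitmap and turned into a time-ordered sequence of valve open/close patterns, with the bottom row of the image released first so that the earlier (and therefore lower) drops form the bottom of the falling word.

```python
# Conceptual sketch of rasterizing a word into valve open/close patterns
# for a falling-water display. NOT Popp's software: the 5x3 font, the
# 20 ms row interval and the letter set are illustrative assumptions.

FONT = {
    "H": ["1 1", "1 1", "111", "1 1", "1 1"],   # hypothetical 5x3 glyphs
    "I": ["111", " 1 ", " 1 ", " 1 ", "111"],
}

def rasterize(word):
    """Return a list of pixel rows; each row is a tuple of 0/1 valve states."""
    rows = []
    for r in range(5):                            # 5 pixel rows per character
        row_bits = []
        for ch in word:
            row_bits.extend(1 if c == "1" else 0 for c in FONT[ch][r])
            row_bits.append(0)                    # blank column between letters
        rows.append(tuple(row_bits))
    return rows

def schedule(rows, row_interval_ms=20):
    """Emit (time_ms, valve_states) pairs, bottom row first, so the
    earliest-released (lowest) drops form the bottom of the image."""
    return [(i * row_interval_ms, row) for i, row in enumerate(reversed(rows))]

if __name__ == "__main__":
    for t, states in schedule(rasterize("HI")):
        print(f"t={t:3d} ms  valves=" + "".join("*" if s else "." for s in states))
```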
Crown Fountain is an interactive fountain and video sculpture feature in Chicago's Millennium Park. Designed by Catalan artist Jaume Plensa, it opened in July 2004. The fountain is composed of a black granite reflecting pool placed between a pair of glass brick towers. The towers are tall, and they use light-emitting diodes (LEDs) to display digital videos on their inward faces. Construction and design of Crown Fountain cost US$17 million. Weather permitting, the water operates from May to October, intermittently cascading down the two towers and spouting through a nozzle on each tower's front face.
Few new fountains have been built in Paris since 2000. The most notable is La Danse de la fontaine emergente (2008), located on Place Augusta-Holmes, rue Paul Klee, in the 13th arrondissement. It was designed by the French-Chinese sculptor Chen Zhen (1955-2000), shortly before his death in 2000, and finished through the efforts of his spouse and collaborator. It shows a dragon, in stainless steel, glass and plastic, emerging and submerging from the pavement of the square. The fountain is in three parts. A bas-relief of the dragon is fixed on the wall of the structure of the water-supply plant, and the dragon seems to be emerging from the wall and plunging underground. This part of the dragon is opaque. The second and third parts depict the arch of the dragon's back coming out of the pavement. These parts of the dragon are transparent, and water under pressure flows visibly within, and is illuminated at night.
Musical fountains
Musical fountains create a theatrical spectacle with music, light and water, usually employing a variety of programmable spouts and water jets controlled by a computer.
Musical fountains were first described in the 1st century AD by the Greek scientist and engineer Hero of Alexandria in his book Pneumatics. Hero described and provided drawings of "A bird made to whistle by flowing water," "A Trumpet sounded by flowing water," and "Birds made to sing and be silent alternately by flowing water." In Hero's descriptions, water pushed air through musical instruments to make sounds. It is not known if Hero made working models of any of his designs.
During the Italian Renaissance, the most famous musical fountains were located in the gardens of the Villa d'Este in Tivoli, which were created between 1550 and 1572. Following the ideas of Hero of Alexandria, the Fountain of the Owl used a series of bronze pipes like flutes to make the sound of birds. The most famous feature of the garden was the great Organ Fountain. It was described by the French philosopher Michel de Montaigne, who visited the garden in 1580: "The music of the Organ Fountain is true music, naturally created ... made by water which falls with great violence into a cave, rounded and vaulted, and agitates the air, which is forced to exit through the pipes of an organ. Other water, passing through a wheel, strikes in a certain order the keyboard of the organ. The organ also imitates the sound of trumpets, the sound of cannon, and the sound of muskets, made by the sudden fall of water ..." The Organ Fountain fell into ruins, but it was recently restored and plays music again.
Louis XIV created the idea of the modern musical fountain by staging spectacles in the Gardens of Versailles, using music and fireworks to accompany the flow of the fountains.
The great international expositions held in Philadelphia, London and Paris featured the ancestors of the modern musical fountain. They introduced the first fountains illuminated by gas lights (Philadelphia in 1876); and the first fountains illuminated by electric lights (London in 1884 and Paris in 1889). The Exposition Universelle (1900) in Paris featured fountains illuminated by colored lights controlled by a keyboard. The Paris Colonial Exposition of 1931 presented the Théâtre d'eau, or water theater, located in a lake, with performances of dancing water. The Exposition Internationale des Arts et Techniques dans la Vie Moderne (1937) combined arches and columns of water from fountains in the Seine with light, and with music from loudspeakers on eleven rafts anchored in the river, playing the music of the leading composers of the time. (See International Exposition Fountains, above.)
Today some of the best-known musical fountains in the world are at the Bellagio Hotel & Casino in Las Vegas (1998); the Dubai Fountain in the United Arab Emirates; the World of Color at Disney California Adventure Park (2010) and Aquanura at the Efteling in the Netherlands (2012).
Splash fountains
A splash fountain or bathing fountain is intended for people to come in and cool off on hot summer days. These fountains are also referred to as interactive fountains. These fountains are designed to allow easy access, and feature nonslip surfaces, and have no standing water, to eliminate possible drowning hazards, so that no lifeguards or supervision is required. These splash pads are often located in public pools, public parks, or public playgrounds (known as "spraygrounds"). In some splash fountains, such as Dundas Square in Toronto, Canada, the water is heated by solar energy captured by the special dark-colored granite slabs. The fountain at Dundas Square features 600 ground nozzles arranged in groups of 30 (3 rows of 10 nozzles). Each group of 30 nozzles is located beneath a stainless steel grille. Twenty such grilles are arranged in two rows of 10, in the middle of the main walkway through Dundas Square.
Drinking fountain
A water fountain or drinking fountain is designed to provide drinking water and has a basin arrangement with either continuously running water or a tap. The drinker bends down to the stream of water and swallows water directly from the stream. Modern indoor drinking fountains may incorporate filters to remove impurities from the water and chillers to reduce its temperature. In some regional dialects, water fountains are called bubblers. Water fountains are usually found in public places, like schools, rest areas, libraries, and grocery stores. Many jurisdictions require water fountains to be wheelchair accessible (by sticking out horizontally from the wall), and to include an additional unit of a lower height for children and short adults. The design that this replaced often had one spout atop a refrigeration unit.
In 1859, The Metropolitan Drinking Fountain and Cattle Trough Association was established to promote the provision of drinking water for people and animals in the United Kingdom and overseas. More recently, in 2010, the FindaFountain campaign was launched in the UK to encourage people to use drinking fountains instead of environmentally damaging bottled water. A map showing the location of UK drinking water fountains is published on the FindaFountain website.
How fountains work
From Roman times until the end of the 19th century, fountains operated by gravity, requiring a source of water higher than the fountain itself to make the water flow. The greater the difference between the elevation of the source of water and the fountain, the higher the water would go upwards from the fountain.
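The relationship between source elevation and jet height can be sketched with Torricelli's law: water leaving a nozzle fed from a reservoir a height h above it emerges at roughly v = sqrt(2gh) and, in the ideal frictionless case, can rise back to about the same height h; in practice pipe friction and nozzle losses reduce this considerably. The short example below is a back-of-the-envelope estimate only; the 30% loss factor is an arbitrary illustrative assumption, not a historical figure.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ideal_jet(head_m):
    """Ideal exit speed and maximum jet height for a gravity-fed fountain
    whose water source lies head_m metres above the nozzle (Torricelli)."""
    speed = math.sqrt(2 * G * head_m)   # exit speed, m/s
    return speed, head_m                # ideally the jet rises back to the head

def estimated_jet(head_m, loss_fraction=0.30):
    """Rough estimate once friction and nozzle losses eat a fraction of the
    available head. The 30% default is purely illustrative."""
    effective_head = head_m * (1 - loss_fraction)
    speed, height = ideal_jet(effective_head)
    return speed, height

if __name__ == "__main__":
    for head in (2.0, 10.0, 30.0):      # metres of available head
        v, h = estimated_jet(head)
        print(f"head {head:5.1f} m -> exit ~{v:4.1f} m/s, jet ~{h:4.1f} m")
```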
In Roman cities, water for fountains came from lakes and rivers and springs in the hills, brought into the city in aqueducts and then distributed to fountains through a system of lead pipes.
From the Middle Ages onwards, fountains in villages or towns were connected to springs, or to channels which brought water from lakes or rivers. In Provence, a typical village fountain consisted of a pipe or underground duct from a spring at a higher elevation than the fountain. The water from the spring flowed down to the fountain, then up a tube into a bulb-shaped stone vessel, like a large vase with a cover on top. The inside of the vase, called the bassin de répartition, was filled with water up to a level just above the mouths of the canons, or spouts, which slanted downwards. The water poured down through the canons, creating a siphon, so that the fountain ran continually.
In cities and towns, residents filled vessels or jars with water from the canons of the fountain or paid a water porter to bring the water to their home. Horses and domestic animals could drink the water in the basin below the fountain. The water that was not used often flowed into a separate series of basins, a lavoir, used for washing and rinsing clothes. After being used for washing, the same water then ran through a channel to the town's kitchen garden. In Provence, since clothes were washed with ashes, the water that flowed into the garden contained potassium, and was valuable as fertilizer.
The most famous fountains of the Renaissance, at the Villa d'Este in Tivoli, were located on a steep slope near a river; the builders ran a channel from the river to a large fountain at the top of the garden, which then fed other fountains and basins on the levels below. The fountains of Rome, built from the Renaissance through the 18th century, took their water from rebuilt Roman aqueducts which brought water from lakes and rivers at a higher elevation than the fountains. Those fountains with a high source of water, such as the Triton Fountain, could shoot water into the air. Fountains with a lower source, such as the Trevi Fountain, could only have water pour downwards. The architect of the Trevi Fountain placed it below street level to make the flow of water seem more dramatic.
The fountains of Versailles depended upon water from reservoirs just above the fountains. As King Louis XIV built more fountains, he was forced to construct an enormous complex of pumps, called the Machine de Marly, with fourteen water wheels and 253 pumps, to raise water 162 meters above the Seine River to the reservoirs to keep his fountains flowing. Even with the Machine de Marly, the fountains used so much water that they could not be all turned on at the same time. Fontainiers watched the progress of the King when he toured the gardens and turned on each fountain just before he arrived.
The architects of the fountains at Versailles designed specially-shaped nozzles, or tuyaux, to form the water into different shapes, such as fans, bouquets, and umbrellas.
In Germany, some courts and palace gardens were situated in flat areas, thus fountains depending on pumped pressurized water were developed at a fairly early point in history. The Great Fountain in Herrenhausen Gardens at Hanover was based on ideas of Gottfried Leibniz conceived in 1694 and was inaugurated in 1719 during the visit of George I. After some improvements, it reached a height of some 35 m in 1721 which made it the highest fountain in European courts. The fountains at the Nymphenburg Palace initially were fed by water pumped to water towers, but as from 1803 were operated by the water powered Nymphenburg Pumping Stations which are still working.
Beginning in the 19th century, fountains ceased to be used for drinking water and became purely ornamental. By the beginning of the 20th century, cities began using steam pumps and later electric pumps to send water to the city fountains. Later in the 20th century, urban fountains began to recycle their water through a closed recirculating system. An electric pump, often placed under the water, pushes the water through the pipes. The water must be regularly topped up to offset water lost to evaporation, and allowance must be made to handle overflow after heavy rain.
In modern fountains a water filter, typically a media filter, removes particles from the water—this filter requires its own pump to force water through it and plumbing to remove the water from the pool to the filter and then back to the pool. The water may need chlorination or anti-algal treatment, or may use biological methods to filter and clean water.
The pumps, filter, electrical switch box and plumbing controls are often housed in a "plant room".
Low-voltage lighting, typically 12 volt direct current, is used to minimise electrical hazards. Lighting is often submerged and must be suitably designed. High wattage lighting (incandescent and halogen), either as submerged lighting or accent lighting on waterwall fountains, has been implicated in every documented Legionnaires' disease outbreak associated with fountains. This is detailed in the "Guidelines for Control of Legionella in Ornamental Features".
Floating fountains are also popular for ponds and lakes; they consist of a float pump nozzle and water chamber.
The tallest fountains in the world
King Fahd's Fountain (1985) in Jeddah, Saudi Arabia. The fountain jets water high above the Red Sea and is currently the tallest fountain in the world.
The World Cup Fountain in the Han-gang River in Seoul, Korea (2002), is advertised as one of the tallest fountains in the world.
The Gateway Geyser (1995), next to the Mississippi River in St. Louis, Missouri, shoots water high into the air. It is the tallest fountain in the United States.
Port Fountain (2006) in Karachi, Pakistan is the fourth-tallest fountain in the world.
Fountain Park, Fountain Hills, Arizona (1970). The jet reaches its maximum height only when all three pumps are operating, and normally runs lower.
The Dubai Fountain, opened in 2009 next to Burj Khalifa, the world's tallest building. The fountain performs once every half-hour to recorded music and shoots water high into the air. The fountain also has extreme shooters, not used in every show, which reach even greater heights.
The Captain James Cook Memorial Jet in Canberra (1970).
The Jet d'eau in Geneva (1951).
Magic Fountain of Montjuïc (1929), Barcelona, Catalonia, Spain, 170 feet, created by Carles Buïgas.
Gallery of notable fountains around the world
See also
Wishing well, for the practice of dropping coins into fountains
Bibliography
Helena Attlee, Italian Gardens – A Cultural History. Frances Lincoln Limited, London, 2006.
Paris et ses Fontaines, de la Renaissance à nos jours, edited by Béatrice de Andia, Dominique Massounie, Pauline Prevost-Marcilhacy and Daniel Rabreau, from the Collection Paris et son Patrimoine, Paris, 1995.
Les Aqueducs de la ville de Rome, translation and commentary by Pierre Grimal, Société d'édition Les Belles Lettres, Paris, 1944.
Louis Plantier, Fontaines de Provence et de la Côte d'Azur, Édisud, Aix-en-Provence, 2007
Frédérick Cope and Maurizia Tazartes, Les Fontaines de Rome, Éditions Citadelles et Mazenod, 2004.
André Jean Tardy, Fontaines toulonnaises, Les Éditions de la Nerthe, 2001.
Hortense Lyon, La Fontaine Stravinsky, Collection Baccalauréat arts plastiques 2004, Centre national de documentation pédagogique
Marilyn Symmes (editor), Fountains-Splash and Spectacle- Water and Design from the Renaissance to the Present. Thames and Hudson, in cooperation with the Cooper-Hewitt National Design Museum of the Smithsonian Institution. (1998).
Yves Porter and Arthur Thévenart, Palais et Jardins de Perse, Flammarion, Paris, 2002.
Raimund O.A. Becker-Ritterspach, Water Conduits in the Kathmandu Valley, Munshiram Manoharlal Publishers Pvt. Ltd., New Delhi, 1995.
References
External links
King Fahd Fountain tops in the World as Tallest fountain
2nd-millennium BC introductions
Water
Landscape architecture
Landscape garden features
Architectural elements
Outdoor sculptures
Public art
Lagash | Fountain | [
"Technology",
"Engineering",
"Environmental_science"
] | 10,640 | [
"Hydrology",
"Building engineering",
"Landscape architecture",
"Architectural elements",
"Water",
"Components",
"Architecture"
] |
384,743 | https://en.wikipedia.org/wiki/Cyclopropane | Cyclopropane is the cycloalkane with the molecular formula (CH2)3, consisting of three methylene groups (CH2) linked to each other to form a triangular ring. The small size of the ring creates substantial ring strain in the structure. Cyclopropane itself is mainly of theoretical interest but many of its derivatives - cyclopropanes - are of commercial or biological significance.
Cyclopropane was used as a clinical inhalational anesthetic from the 1930s through the 1980s. The substance's high flammability poses a risk of fire and explosions in operating rooms due to its tendency to accumulate in confined spaces, as its density is higher than that of air.
History
Cyclopropane was discovered in 1881 by August Freund, who also proposed the correct structure for the substance in his first paper. Freund treated 1,3-dibromopropane with sodium, causing an intramolecular Wurtz reaction leading directly to cyclopropane. The yield of the reaction was improved by Gustavson in 1887 with the use of zinc instead of sodium. Cyclopropane had no commercial application until Henderson and Lucas discovered its anaesthetic properties in 1929; industrial production had begun by 1936. In modern anaesthetic practice, it has been superseded by other agents.
Anaesthesia
Cyclopropane was introduced into clinical use by the American anaesthetist Ralph Waters who used a closed system with carbon dioxide absorption to conserve this then-costly agent.
Cyclopropane is a relatively potent, non-irritating and sweet smelling agent with a minimum alveolar concentration of 17.5% and a blood/gas partition coefficient of 0.55. This meant induction of anaesthesia by inhalation of cyclopropane and oxygen was rapid and not unpleasant. However at the conclusion of prolonged anaesthesia patients could suffer a sudden decrease in blood pressure, potentially leading to cardiac dysrhythmia: a reaction known as "cyclopropane shock". For this reason, as well as its high cost and its explosive nature, it was latterly used only for the induction of anaesthesia, and has not been available for clinical use since the mid-1980s.
Cylinders and flow meters were colored orange.
Pharmacology
Cyclopropane is inactive at the GABAA and glycine receptors, and instead acts as an NMDA receptor antagonist. It also inhibits the AMPA receptor and nicotinic acetylcholine receptors, and activates certain K2P channels.
Structure and bonding
The triangular structure of cyclopropane requires the bond angles between carbon-carbon covalent bonds to be 60°. The molecule has D3h molecular symmetry. The C-C distances are 151 pm, versus 153-155 pm for ordinary C-C bonds.
Despite their shortness, the C-C bonds in cyclopropane are weakened by 34 kcal/mol vs ordinary C-C bonds. In addition to ring strain, the molecule also has torsional strain due to the eclipsed conformation of its hydrogen atoms. The C-H bonds in cyclopropane are stronger than ordinary C-H bonds as reflected by NMR coupling constants.
Bonding between the carbon centres is generally described in terms of bent bonds. In this model the carbon-carbon bonds are bent outwards so that the inter-orbital angle is 104°.
The unusual structural properties of cyclopropane have spawned many theoretical discussions. One theory invokes σ-aromaticity, the stabilization afforded by delocalization of the six electrons of cyclopropane's three C-C σ bonds, to explain why the strain of cyclopropane is "only" 27.6 kcal/mol, compared with 26.2 kcal/mol for cyclobutane (taking cyclohexane as the strain-free reference, Estr = 0 kcal/mol). This σ-aromaticity is contrasted with the usual π-aromaticity that, for example, has a highly stabilizing effect in benzene. Other studies do not support a role for σ-aromaticity in cyclopropane or the existence of an induced ring current; such studies provide an alternative explanation for the energetic stabilization and abnormal magnetic behaviour of cyclopropane.
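One way to see why the total strain is considered surprisingly small is to put it on a per-CH2 basis, using the figures quoted above; this is a simple arithmetic illustration, not part of the cited studies.

```latex
% Strain energies relative to cyclohexane (E_str = 0), expressed per CH2 group
\begin{align*}
\text{cyclopropane:}\quad & \frac{27.6\ \text{kcal/mol}}{3\ \text{CH}_2} \approx 9.2\ \text{kcal/mol per CH}_2\\[2pt]
\text{cyclobutane:}\quad  & \frac{26.2\ \text{kcal/mol}}{4\ \text{CH}_2} \approx 6.6\ \text{kcal/mol per CH}_2
\end{align*}
```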
Synthesis
Cyclopropane was first produced via a Wurtz coupling, in which 1,3-dibromopropane was cyclised using sodium. The yield of this reaction can be improved by the use of zinc as the dehalogenating agent and sodium iodide as a catalyst.
BrCH2CH2CH2Br + 2 Na → (CH2)3 + 2 NaBr
The preparation of cyclopropane rings is referred to as cyclopropanation.
Reactions
Owing to the increased π-character of its C-C bonds, cyclopropane is often assumed to add bromine to give 1,3-dibromopropane, but this reaction proceeds poorly. Hydrohalogenation with hydrohalic acids gives linear 1-halopropanes. Substituted cyclopropanes also react, following Markovnikov's rule.
Cyclopropane and its derivatives can oxidatively add to transition metals, in a process referred to as C–C activation.
Safety
Cyclopropane is highly flammable. However, despite its strain energy it does not exhibit explosive behavior substantially different from other alkanes.
See also
Tetrahedrane contains four fused cyclopropane rings that form the faces of a tetrahedron
Propellane contains three cyclopropane rings that share a single central carbon-carbon bond.
Spiropentane is two cyclopropane rings fused at a vertex
Cyclopropene
Methylenecyclopropane
References
External links
Synthesis of Cyclopropanes and related compounds
Carbon triangle
General anesthetics
NMDA receptor antagonists
Nicotinic antagonists
AMPA receptor antagonists
Gases | Cyclopropane | [
"Physics",
"Chemistry"
] | 1,255 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
384,948 | https://en.wikipedia.org/wiki/Methyl%20violet | Methyl violet is a family of organic compounds that are mainly used as dyes. Depending on the number of attached methyl groups, the color of the dye can be altered. Its main use is as a purple dye for textiles and to give deep violet colors in paint and ink. It is also used as a hydration indicator for silica gel. Methyl violet 10B is also known as crystal violet (and many other names) and has medical uses.
Structure
The term methyl violet encompasses three compounds that differ in the number of methyl groups attached to the amine functional group. Methyl violets are mixtures of tetramethyl (2B), pentamethyl (6B) and hexamethyl (10B) pararosanilins.
They are all soluble in water, ethanol, diethylene glycol and dipropylene glycol.
{|class="wikitable" style="text-align:center"
|-
!Name
| Methyl violet 2B || Methyl violet 6B || Methyl violet 10B (Crystal violet)
|-
!Structure
| || ||
|-
!Formula (salt)
| C23H26ClN3 || C24H28ClN3 || C25H30ClN3
|-
!CAS no
| 84215-49-6 || 8004-87-3 || 548-62-9
|-
!C.I.
| 42536 || 42535 || 42555
|-
!ChemSpider ID
| 21164086 || 170606 || 10588
|-
!PubChem ID
| 91997555 || 164877 || 11057
|-
!Formula (cation)
| C23H26N3+ || C24H28N3+ || C25H30N3+
|-
!ChemSpider ID
| || 2006225 || 3349, 9080056, 10354393
|-
!PubChem ID
| || 2724053 || 3468
|}
Methyl violet 2B
Methyl violet 2B (IUPAC name: 4,4′-((4-Iminocyclohexa-2,5-dien-1-ylidene)methylene)bis(N,N-dimethylaniline) monohydrochloride) is a green powder which is soluble in water and ethanol but not in xylene. It appears yellow in solution of low pH (approximately 0.15) and changes to violet with pH increasing toward 3.2.
Methyl violet 10B
Methyl violet 10B has six methyl groups. It is known in medicine as Gentian violet (or crystal violet or pyoctanin(e)) and is the active ingredient in a Gram stain, used to classify bacteria. It is used as a pH indicator, with a range between 0 and 1.6. The protonated form (found in acidic conditions) is yellow, turning blue-violet above pH levels of 1.6.
Methyl violet 10B inhibits the growth of many Gram positive bacteria, except streptococci. When used in conjunction with nalidixic acid (which destroys gram-negative bacteria), it can be used to isolate the streptococci bacteria for the diagnosis of an infection.
Degradation
Methyl violet is a mutagen and mitotic poison; therefore, concerns exist regarding the ecological impact of the release of methyl violet into the environment. Methyl violet has been used in vast quantities for textile and paper dyeing, and 15% of such dyes produced worldwide are released to the environment in wastewater. Numerous methods have been developed to treat methyl violet pollution. The three most prominent are chemical bleaching, biodegradation, and photodegradation.
Chemical bleaching
Chemical bleaching is achieved by oxidation or reduction. Oxidation can destroy the dye completely, e.g. through the use of sodium hypochlorite (NaClO, common bleach) or hydrogen peroxide. Reduction of methyl violet occurs in microorganisms but can be attained chemically using sodium dithionite.
Biodegradation
Biodegradation has been well investigated because of its relevance to sewage plants with specialized microorganisms. Two microorganisms that have been studied in depth are the white rot fungus and the bacterium Nocardia corallina.
Photodegradation
Light alone does not rapidly degrade methyl violet, but the process is accelerated upon the addition of large band-gap semiconductors such as titanium dioxide or zinc oxide.
Other methods
Many other methods have been developed to treat the contamination of dyes in a solution, including electrochemical degradation, ion exchange, laser degradation, and adsorption onto various solids such as activated charcoal.
See also
Fuchsine
Methylene blue
Methyl blue
Egyptian Blue
Han Purple
Fluorescein
References
Triarylmethane dyes
Disinfectants
PH indicators
Staining dyes
Anilines
Chlorides
Dimethylamino compounds | Methyl violet | [
"Chemistry",
"Materials_science"
] | 1,039 | [
"Chlorides",
"Titration",
"Inorganic compounds",
"PH indicators",
"Chromism",
"Chemical tests",
"Salts",
"Equilibrium chemistry"
] |
385,162 | https://en.wikipedia.org/wiki/Substitution%E2%80%93permutation%20network | In cryptography, an SP-network, or substitution–permutation network (SPN), is a series of linked mathematical operations used in block cipher algorithms such as AES (Rijndael), 3-Way, Kalyna, Kuznyechik, PRESENT, SAFER, SHARK, and Square.
Such a network takes a block of the plaintext and the key as inputs, and applies several alternating rounds or layers of substitution boxes (S-boxes) and permutation boxes (P-boxes) to produce the ciphertext block. The S-boxes and P-boxes transform blocks of input bits into output bits. It is common for these transformations to be operations that are efficient to perform in hardware, such as exclusive or (XOR) and bitwise rotation. The key is introduced in each round, usually in the form of "round keys" derived from it. (In some designs, the S-boxes themselves depend on the key.)
Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).
Components
An S-box substitutes a small block of bits (the input of the S-box) by another block of bits (the output of the S-box). This substitution should be one-to-one, to ensure invertibility (hence decryption). In particular, the length of the output should be the same as the length of the input (for example, S-boxes with 4 input and 4 output bits), which is different from S-boxes in general that could also change the length, as in Data Encryption Standard (DES), for example. An S-box is usually not simply a permutation of the bits. Rather, in a good S-box each output bit will be affected by every input bit. More precisely, in a good S-box each output bit will be changed with 50% probability by every input bit. Since each output bit changes with 50% probability, about half of the output bits will actually change with an input bit change (cf. Strict avalanche criterion).
A P-box is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typically XOR.
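As a rough illustration of these components, the following sketch implements a single round of a toy SPN in Python. The 16-bit block size, the 4-bit S-box table, the bit permutation, and the round key are arbitrary values chosen only for this example and are not taken from any real cipher.

```python
# Minimal sketch of one round of a toy substitution-permutation network.
# The S-box table, bit permutation and round key below are illustrative only.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]            # 4-bit substitution
PBOX = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]  # bit permutation

def spn_round(block: int, round_key: int) -> int:
    """Apply key mixing, substitution and permutation to a 16-bit block."""
    block ^= round_key                       # key mixing (XOR with the round key)
    # substitution layer: four parallel 4-bit S-boxes
    subst = 0
    for nibble in range(4):
        val = (block >> (4 * nibble)) & 0xF
        subst |= SBOX[val] << (4 * nibble)
    # permutation layer: bit i of the input goes to position PBOX[i]
    perm = 0
    for i in range(16):
        if (subst >> i) & 1:
            perm |= 1 << PBOX[i]
    return perm

ciphertext = spn_round(0x1234, round_key=0xABCD)
print(hex(ciphertext))
```

Iterating this round with different round keys, and comparing the outputs for plaintexts that differ in a single bit, gives a simple way to observe the avalanche behaviour discussed in the Properties section below.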
Properties
A single typical S-box or a single P-box alone does not have much cryptographic strength: an S-box could be thought of as a substitution cipher, while a P-box could be thought of as a transposition cipher. However, a well-designed SP network with several alternating rounds of S- and P-boxes already satisfies Shannon's confusion and diffusion properties:
The reason for diffusion is the following: If one changes one bit of the plaintext, then it is fed into an S-box, whose output will change at several bits, then all these changes are distributed by the P-box among several S-boxes, hence the outputs of all of these S-boxes are again changed at several bits, and so on. Doing several rounds, each bit changes several times back and forth, therefore, by the end, the ciphertext has changed completely, in a pseudorandom manner. In particular, for a randomly chosen input block, if one flips the i-th bit, then the probability that the j-th output bit will change is approximately a half, for any i and j, which is the strict avalanche criterion. Vice versa, if one changes one bit of the ciphertext, then attempts to decrypt it, the result is a message completely different from the original plaintext—SP ciphers are not easily malleable.
The reason for confusion is exactly the same as for diffusion: changing one bit of the key changes several of the round keys, and every change in every round key diffuses over all the bits, changing the ciphertext in a very complex manner.
If an attacker somehow obtains one plaintext corresponding to one ciphertext—a known-plaintext attack, or worse, a chosen plaintext or chosen-ciphertext attack—the confusion and diffusion make it difficult for the attacker to recover the key.
Performance
Although a Feistel network that uses S-boxes (such as DES) is quite similar to SP networks, there are some differences that make either this or that more applicable in certain situations. For a given amount of confusion and diffusion, an SP network has more "inherent parallelism" and so — given a CPU with many execution units — can be computed faster than a Feistel network. CPUs with few execution units — such as most smart cards — cannot take advantage of this inherent parallelism. Also SP ciphers require S-boxes to be invertible (to perform decryption); Feistel inner functions have no such restriction and can be constructed as one-way functions.
See also
Feistel network
Product cipher
Square (cipher)
International Data Encryption Algorithm
References
Further reading
Cryptographic algorithms
Block ciphers
Permutations | Substitution–permutation network | [
"Mathematics"
] | 1,119 | [
"Functions and mappings",
"Permutations",
"Mathematical objects",
"Combinatorics",
"Mathematical relations"
] |
385,334 | https://en.wikipedia.org/wiki/List%20of%20particles | This is a list of known and hypothesized microscopic particles in particle physics, condensed matter physics and cosmology.
Standard Model elementary particles
Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally.
Fermions
Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei.
Fermions have half-integer spin; for all known elementary fermions this is 1/2. All known fermions except neutrinos are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons.
Quarks
Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except that they carry the opposite electric charge (for example the up quark carries charge +2/3, while the up antiquark carries charge −2/3), color charge, and baryon number. There are six flavors of quarks; the three positively charged quarks are called "up-type quarks" while the three negatively charged quarks are called "down-type quarks".
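To make the charge arithmetic concrete, the short sketch below (illustrative only; the helper names are not standard) sums valence-quark charges in units of the elementary charge and shows that the familiar combinations come out integral.

```python
from fractions import Fraction

# Electric charge of each quark flavour, in units of the elementary charge e.
QUARK_CHARGE = {
    'u': Fraction(2, 3), 'c': Fraction(2, 3), 't': Fraction(2, 3),     # up-type
    'd': Fraction(-1, 3), 's': Fraction(-1, 3), 'b': Fraction(-1, 3),  # down-type
}

def hadron_charge(quarks, antiquarks=()):
    """Sum the charges of the valence quarks; antiquarks contribute the opposite sign."""
    return (sum(QUARK_CHARGE[q] for q in quarks)
            - sum(QUARK_CHARGE[q] for q in antiquarks))

print(hadron_charge('uud'))      # proton:  +1
print(hadron_charge('udd'))      # neutron:  0
print(hadron_charge('u', 'd'))   # positive pion (up quark, down antiquark): +1
```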
Leptons
Leptons do not interact via the strong interaction. Their respective antiparticles are the antileptons, which are identical, except that they carry the opposite electric charge and lepton number. The antiparticle of an electron is an antielectron, which is almost always called a "positron" for historical reasons. There are six leptons in total; the three charged leptons are called "electron-like leptons", while the neutral leptons are called "neutrinos". Neutrinos are known to oscillate, so that neutrinos of definite flavor do not have definite mass: Instead, they exist in a superposition of mass eigenstates. The hypothetical heavy right-handed neutrino, called a "sterile neutrino", has been omitted.
Bosons
Bosons are one of the two fundamental classes of particles, the other being fermions. Bosons are characterized by Bose–Einstein statistics and all have integer spin. Bosons may be either elementary, like photons and gluons, or composite, like mesons.
According to the Standard Model, the elementary bosons are:
The Higgs boson is postulated by the electroweak theory primarily to explain the origin of particle masses. In a process known as the "Higgs mechanism", the Higgs boson and the other gauge bosons in the Standard Model acquire mass via spontaneous symmetry breaking of the SU(2) gauge symmetry. The Minimal Supersymmetric Standard Model (MSSM) predicts several Higgs bosons. On 4 July 2012, the discovery of a new particle with a mass between 125 and 127 GeV/c² was announced; physicists suspected that it was the Higgs boson. Since then, the particle has been shown to behave, interact, and decay in many of the ways predicted for Higgs particles by the Standard Model, as well as having even parity and zero spin, two fundamental attributes of a Higgs boson. This also means it is the first elementary scalar particle discovered in nature.
Elementary bosons responsible for the four fundamental forces of nature are called force particles (gauge bosons). The strong interaction is mediated by the gluon, the weak interaction is mediated by the W and Z bosons, electromagnetism by the photon, and gravity by the graviton, which is still hypothetical.
Composite particles
Composite particles are bound states of elementary particles.
Hadrons
Hadrons are defined as strongly interacting composite particles. Hadrons are either:
Composite fermions (especially 3 quarks), in which case they are called baryons.
Composite bosons (especially 2 quarks), in which case they are called mesons.
Quark models, first proposed in 1964 independently by Murray Gell-Mann and George Zweig (who called quarks "aces"), describe the known hadrons as composed of valence quarks and/or antiquarks, tightly bound by the color force, which is mediated by gluons. (The interaction between quarks and gluons is described by the theory of quantum chromodynamics.) A "sea" of virtual quark-antiquark pairs is also present in each hadron.
Baryons
Ordinary baryons (composite fermions) contain three valence quarks or three valence antiquarks each.
Nucleons are the fermionic constituents of normal atomic nuclei:
Protons, composed of two up and one down quark (uud)
Neutrons, composed of two down and one up quark (ddu)
Hyperons, such as the Λ, Σ, Ξ, and Ω particles, which contain one or more strange quarks, are short-lived and heavier than nucleons. Although not normally present in atomic nuclei, they can appear in short-lived hypernuclei.
A number of charmed and bottom baryons have also been observed.
Pentaquarks consist of four valence quarks and one valence antiquark.
Other exotic baryons may also exist.
Mesons
Ordinary mesons are made up of a valence quark and a valence antiquark. Because mesons have integer spin (0 or 1) and are not themselves elementary particles, they are classified as "composite" bosons, although being made of elementary fermions. Examples of mesons include the pion, kaon, and the J/ψ. In quantum hadrodynamics, mesons mediate the residual strong force between nucleons.
At one time or another, positive signatures have been reported for all of the following exotic mesons but their existences have yet to be confirmed.
A tetraquark consists of two valence quarks and two valence antiquarks;
A glueball is a bound state of gluons with no valence quarks;
Hybrid mesons consist of one or more valence quark–antiquark pairs and one or more real gluons.
Atomic nuclei
Atomic nuclei typically consist of protons and neutrons, although exotic nuclei may consist of other baryons, such as hypertriton which contains a hyperon. These baryons (protons, neutrons, hyperons, etc.) which comprise the nucleus are called nucleons. Each type of nucleus is called a "nuclide", and each nuclide is defined by the specific number of each type of nucleon.
"Isotopes" are nuclides which have the same number of protons but differing numbers of neutrons.
Conversely, "isotones" are nuclides which have the same number of neutrons but differing numbers of protons.
"Isobars" are nuclides which have the same total number of nucleons but which differ in the number of each type of nucleon. Nuclear reactions can change one nuclide into another.
Atoms
Atoms are the smallest neutral particles into which matter can be divided by chemical reactions. An atom consists of a small, heavy nucleus surrounded by a relatively large, light cloud of electrons. An atomic nucleus consists of 1 or more protons and 0 or more neutrons. Protons and neutrons are, in turn, made of quarks. Each type of atom corresponds to a specific chemical element. To date, 118 elements have been discovered or created.
Exotic atoms may be composed of particles in addition to or in place of protons, neutrons, and electrons, such as hyperons or muons. Examples include pionium (a bound state of π+ and π−) and quarkonium atoms.
Leptonic atoms
Leptonic atoms, named using -onium, are exotic atoms constituted by the bound state of a lepton and an antilepton. Examples of such atoms include positronium (e+e−), muonium (μ+e−), and "true muonium" (μ+μ−). Of these, positronium and muonium have been experimentally observed, while "true muonium" remains only theoretical.
Molecules
Molecules are the smallest particles into which a substance can be divided while maintaining the chemical properties of the substance. Each type of molecule corresponds to a specific chemical substance. A molecule is a composite of two or more atoms. Atoms are combined in a fixed proportion to form a molecule. Molecule is one of the most basic units of matter.
Ions
Ions are charged atoms (monatomic ions) or molecules (polyatomic ions). They include cations which have a net positive charge, and anions which have a net negative charge.
Other categories
Goldstone bosons are massless excitations of a field whose symmetry has been spontaneously broken. The pions are quasi-Goldstone bosons (quasi- because they are not exactly massless) of the broken chiral isospin symmetry of quantum chromodynamics.
Parton is a generic term coined by Feynman for the sub-particles making up a composite particle – at that time a baryon – hence, it originally referred to what are now called "quarks" and "gluons".
Odderon, a particle composed of an odd number of gluons, detected in 2021.
Quasiparticles
Quasiparticles are effective particles that exist in many particle systems. The field equations of condensed matter physics are remarkably similar to those of high energy particle physics. As a result, much of the theory of particle physics applies to condensed matter physics as well; in particular, there are a selection of field excitations, called quasi-particles, that can be created and explored. These include:
Anyons are a generalization of fermions and bosons in two-dimensional systems like sheets of graphene that obeys braid statistics.
Excitons are bound states of an electron and a hole.
Magnons are coherent excitations of electron spins in a material.
Phonons are vibrational modes in a crystal lattice.
Plasmons are coherent excitations of a plasma.
Polaritons are mixtures of photons with other quasi-particles.
Polarons are moving, charged (quasi-) particles that are surrounded by ions in a material.
Hypothetical particles
Graviton
The graviton is a hypothetical particle that has been included in some extensions to the Standard Model to mediate the gravitational force. It is in a peculiar category between known and hypothetical particles: As an unobserved particle that is not predicted by, nor required for the Standard Model, it belongs in the table of hypothetical particles. But gravitational force itself is a certainty, and expressing that known force in the framework of a quantum field theory requires a boson to mediate it.
If it exists, the graviton is expected to be massless because the gravitational force has a very long range, and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way that gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton.
Dark matter candidates
Many hypothetical particle candidates for dark matter have been proposed, such as weakly interacting massive particles (WIMPs), weakly interacting slender particles, and feebly interacting particles (FIPs).
Dark energy candidates
Hypothetical particle candidates to explain dark energy include the chameleon particle and the acceleron.
Auxiliary particles
Virtual particles are mathematical tools used in calculations that exhibit some of the characteristics of an ordinary particle but do not obey the mass-shell relation. These particles are unphysical and unobservable. These include:
Ghost particles, like Faddeev–Popov ghosts and Pauli–Villars ghosts
Spurions, auxiliary field in a quantum field theory that can be used to parameterize any symmetry
Soft photons, photons with energies too low to be detected in an experiment.
There are also instantons, field configurations which are local minima of the Yang–Mills field equation. Instantons are used in nonperturbative calculations of tunneling rates. Instantons have properties similar to particles; specific examples include:
Calorons, finite temperature generalization of instantons.
Merons, a field configuration which is a non-self-dual solution of the Yang–Mills field equation. The instanton is believed to be composed of two merons.
Sphalerons are a field configuration which is a saddle point of the Yang–Mills field equations. Sphalerons are used in nonperturbative calculations of non-tunneling rates.
Renormalons, a possible type of singularity arising when using Borel summation. It is a counterpart of an instanton singularity.
Classification by speed
A bradyon (or tardyon) travels slower than the speed of light in vacuum and has a non-zero, real rest mass.
A luxon travels as fast as light in vacuum and has no rest mass.
A tachyon is a hypothetical particle that travels faster than the speed of light so they would paradoxically experience time in reverse (due to inversion of the theory of relativity) and would violate the known laws of causality. A tachyon has an imaginary rest mass.
See also
References
Particles
∗
Unsolved problems in physics | List of particles | [
"Physics"
] | 3,149 | [
"Unsolved problems in physics",
"Subatomic particles",
"Particle physics",
"Physical objects",
"Nuclear physics",
"Particles",
"Atoms",
"Matter"
] |
385,510 | https://en.wikipedia.org/wiki/Finsler%20manifold | In mathematics, particularly differential geometry, a Finsler manifold is a differentiable manifold M where a (possibly asymmetric) Minkowski norm F(x, −) is provided on each tangent space TxM, which enables one to define the length of any smooth curve γ: [a, b] → M as
L(γ) = ∫ₐᵇ F(γ(t), γ′(t)) dt.
Finsler manifolds are more general than Riemannian manifolds since the tangent norms need not be induced by inner products.
Every Finsler manifold becomes an intrinsic quasimetric space when the distance between two points is defined as the infimum length of the curves that join them.
Élie Cartan named Finsler manifolds after Paul Finsler, who studied this geometry in his 1918 dissertation.
Definition
A Finsler manifold is a differentiable manifold M together with a Finsler metric, which is a continuous nonnegative function F: TM → [0, +∞) defined on the tangent bundle so that for each point x of M,
F(v + w) ≤ F(v) + F(w) for every two vectors v, w tangent to M at x (subadditivity),
F(λv) = λF(v) for all λ ≥ 0 (but not necessarily for λ < 0) (positive homogeneity),
F(v) > 0 unless v = 0 (positive definiteness).
In other words, F(x, −) is an asymmetric norm on each tangent space TxM. The Finsler metric F is also required to be smooth, more precisely:
F is smooth on the complement of the zero section of TM.
The subadditivity axiom may then be replaced by the following strong convexity condition:
For each tangent vector v ≠ 0, the Hessian matrix of F² at v is positive definite.
Here the Hessian of F² at v is the symmetric bilinear form
gᵥ(X, Y) := ½ ∂²/∂s∂t [F(v + sX + tY)²] evaluated at s = t = 0,
also known as the fundamental tensor of F at v. Strong convexity of F² implies the subadditivity with a strict inequality if u/F(u) ≠ v/F(v). If F² is strongly convex, then F is a Minkowski norm on each tangent space.
A Finsler metric is reversible if, in addition,
F(−v) = F(v) for all tangent vectors v.
A reversible Finsler metric defines a norm (in the usual sense) on each tangent space.
Examples
Smooth submanifolds (including open subsets) of a normed vector space of finite dimension are Finsler manifolds if the norm of the vector space is smooth outside the origin.
Riemannian manifolds (but not pseudo-Riemannian manifolds) are special cases of Finsler manifolds.
Randers manifolds
Let (M, a) be a Riemannian manifold and b a differential one-form on M with
‖b‖ₐ := √(aⁱʲ bᵢ bⱼ) < 1,
where (aⁱʲ) is the inverse matrix of (aᵢⱼ) and the Einstein notation is used. Then
F(x, v) := √(aᵢⱼ(x) vⁱ vʲ) + bᵢ(x) vⁱ
defines a Randers metric on M and (M, F) is a Randers manifold, a special case of a non-reversible Finsler manifold.
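As a concrete illustration (not drawn from the literature), the sketch below evaluates a Randers metric on the Euclidean plane with a constant one-form b, and approximates the Finsler length of a curve by a simple Riemann sum. Note that reversing the curve changes its length, reflecting the non-reversibility of the metric; all numerical choices are illustrative.

```python
import numpy as np

# Randers metric F(x, v) = sqrt(a_ij v^i v^j) + b_i v^i on R^2,
# with a = identity (Euclidean part) and a constant one-form b, |b| < 1.
a = np.eye(2)
b = np.array([0.3, 0.0])            # illustrative; must satisfy |b|_a < 1

def F(x, v):
    return np.sqrt(v @ a @ v) + b @ v

def curve_length(gamma, t0=0.0, t1=1.0, n=1000):
    """Approximate the Finsler length of gamma: [t0, t1] -> R^2 by a Riemann sum."""
    ts = np.linspace(t0, t1, n + 1)
    dt = (t1 - t0) / n
    length = 0.0
    for t in ts[:-1]:
        v = (gamma(t + dt) - gamma(t)) / dt      # forward-difference velocity
        length += F(gamma(t), v) * dt
    return length

forward  = lambda t: np.array([t, 0.0])          # unit segment along +x
backward = lambda t: np.array([1.0 - t, 0.0])    # the same segment traversed in reverse
print(curve_length(forward))    # ~1.3: the one-form lengthens this direction
print(curve_length(backward))   # ~0.7: shorter in the reverse direction
```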
Smooth quasimetric spaces
Let (M, d) be a quasimetric so that M is also a differentiable manifold and d is compatible with the differential structure of M in the following sense:
Around any point z on M there exists a smooth chart (U, φ) of M and a constant C ≥ 1 such that for every x, y ∈ U
|φ(y) − φ(x)| / C ≤ d(x, y) ≤ C |φ(y) − φ(x)|.
The function d: M × M → [0, ∞] is smooth in some punctured neighborhood of the diagonal.
Then one can define a Finsler function F: TM → [0, ∞] by
F(x, v) := lim(t→0⁺) d(γ(0), γ(t)) / t,
where γ is any differentiable curve in M with γ(0) = x and γ′(0) = v. The Finsler function F obtained in this way restricts to an asymmetric (typically non-Minkowski) norm on each tangent space of M. The induced intrinsic metric of the original quasimetric can be recovered from
dL(x, y) = inf { ∫ₐᵇ F(γ(t), γ′(t)) dt : γ is a differentiable curve in M with γ(a) = x and γ(b) = y },
and in fact any Finsler function F: TM → [0, ∞) defines an intrinsic quasimetric dL on M by this formula.
Geodesics
Due to the homogeneity of F, the length
L[γ] = ∫ₐᵇ F(γ(t), γ′(t)) dt
of a differentiable curve γ: [a, b] → M in M is invariant under positively oriented reparametrizations. A constant speed curve γ is a geodesic of a Finsler manifold if its short enough segments γ|[c,d] are length-minimizing in M from γ(c) to γ(d). Equivalently, γ is a geodesic if it is stationary for the energy functional
E[γ] = ½ ∫ₐᵇ F²(γ(t), γ′(t)) dt
in the sense that its functional derivative vanishes among differentiable curves γ: [a, b] → M with fixed endpoints γ(a) = x and γ(b) = y.
Canonical spray structure on a Finsler manifold
The Euler–Lagrange equation for the energy functional E[γ] reads in the local coordinates (x1, ..., xn, v1, ..., vn) of TM as
where k = 1, ..., n and gᵢⱼ is the coordinate representation of the fundamental tensor, defined as
gᵢⱼ(x, v) := ½ ∂²F²(x, v)/∂vⁱ∂vʲ.
Assuming the strong convexity of F²(x, v) with respect to v ∈ TxM, the matrix gᵢⱼ(x, v) is invertible and its inverse is denoted by gⁱʲ(x, v). Then γ: [a, b] → M is a geodesic of (M, F) if and only if its tangent curve γ′: [a, b] → TM∖{0} is an integral curve of the smooth vector field H on TM∖{0} locally defined by
where the local spray coefficients Gi are given by
The vector field H on TM∖{0} satisfies JH = V and [V, H] = H, where J and V are the canonical endomorphism and the canonical vector field on TM∖{0}. Hence, by definition, H is a spray on M. The spray H defines a nonlinear connection on the fibre bundle through the vertical projection
In analogy with the Riemannian case, there is a version
of the Jacobi equation for a general spray structure (M, H) in terms of the Ehresmann curvature and nonlinear covariant derivative.
Uniqueness and minimizing properties of geodesics
By the Hopf–Rinow theorem there always exist length minimizing curves (at least in small enough neighborhoods) on (M, F). Length minimizing curves can always be positively reparametrized to be geodesics, and any geodesic must satisfy the Euler–Lagrange equation for E[γ]. Assuming the strong convexity of F² there exists a unique maximal geodesic γ with γ(0) = x and γ′(0) = v for any (x, v) ∈ TM∖{0} by the uniqueness of integral curves.
If F² is strongly convex, geodesics γ: [0, b] → M are length-minimizing among nearby curves until the first point γ(s) conjugate to γ(0) along γ, and for t > s there always exist shorter curves from γ(0) to γ(t) near γ, as in the Riemannian case.
Notes
See also
Global analysis – which uses Hilbert manifolds and other kinds of infinite-dimensional manifolds
References
(Reprinted by Birkhäuser (1951))
External links
The (New) Finsler Newsletter
Differential geometry
Finsler geometry
Riemannian geometry
Riemannian manifolds
Smooth manifolds | Finsler manifold | [
"Mathematics"
] | 1,436 | [
"Riemannian manifolds",
"Space (mathematics)",
"Metric spaces"
] |
385,621 | https://en.wikipedia.org/wiki/Plasma%20diagnostics | Plasma diagnostics are a pool of methods, instruments, and experimental techniques used to measure properties of a plasma, such as plasma components' density, distribution function over energy (temperature), their spatial profiles and dynamics, which enable to derive plasma parameters.
Invasive probe methods
Ball-pen probe
A ball-pen probe is a novel technique used to measure directly the plasma potential in magnetized plasmas. The probe was invented by Jiří Adámek in the Institute of Plasma Physics AS CR in 2004. The ball-pen probe balances the electron saturation current to the same magnitude as that of the ion saturation current. In this case, its floating potential becomes identical to the plasma potential. This goal is attained by a ceramic shield, which screens off an adjustable part of the electron current from the probe collector due to the much smaller gyro-radius of the electrons. The electron temperature is proportional to the difference between the ball-pen probe potential (the plasma potential) and the Langmuir probe potential (the floating potential). Thus, the electron temperature can be obtained directly with high temporal resolution without an additional power supply.
Faraday cup
The conventional Faraday cup is applied for measurements of ion (or electron) flows from plasma boundaries and for mass spectrometry.
Langmuir probe
Measurements with electric probes, called Langmuir probes, are the oldest and most often used procedures for low-temperature plasmas. The method was developed by Irving Langmuir and his co-workers in the 1920s, and has since been further developed in order to extend its applicability to more general conditions than those presumed by Langmuir. Langmuir probe measurements are based on the estimation of current versus voltage characteristics of a circuit consisting of two metallic electrodes that are both immersed in the plasma under study. Two cases are of interest:
(a) The surface areas of the two electrodes differ by several orders of magnitude. This is known as the single-probe method.
(b) The surface areas are very small in comparison with the dimensions of the vessel containing the plasma and approximately equal to each other. This is the double-probe method.
Conventional Langmuir probe theory assumes collisionless movement of charge carriers in the space charge sheath around the probe. Further it is assumed that the sheath boundary is well-defined and that beyond this boundary the plasma is completely undisturbed by the presence of the probe. This means that the electric field caused by the difference between the potential of the probe and the plasma potential at the place where the probe is located is limited to the volume inside the probe sheath boundary.
The general theoretical description of a Langmuir probe measurement requires the simultaneous solution of the Poisson equation, the collision-free Boltzmann equation or Vlasov equation, and the continuity equation with regard to the boundary condition at the probe surface and requiring that, at large distances from the probe, the solution approaches that expected in an undisturbed plasma.
Magnetic (B-dot) probe
If the magnetic field in the plasma is not stationary, either because the plasma as a whole is transient or because the fields are periodic (radio-frequency heating), the rate of change of the magnetic field with time (dB/dt, read "B-dot") can be measured locally with a loop or coil of wire. Such coils exploit Faraday's law, whereby a changing magnetic field induces an electric field. The induced voltage can be measured and recorded with common instruments.
Also, by Ampere's law, the magnetic field is proportional to the currents that produce it, so the measured magnetic field gives information about the currents flowing in the plasma. Both currents and magnetic fields are important in understanding fundamental plasma physics.
Energy analyzer
An energy analyzer is a probe used to measure the energy distribution of the particles in a plasma. The charged particles are typically separated by their velocities from the electric and/or magnetic fields in the energy analyzer, and then discriminated by only allowing particles with the selected energy range to reach the detector.
Energy analyzers that use an electric field as the discriminator are also known as retarding field analyzers. It usually consists of a set of grids biased at different potentials to set up an electric field to repel particles lower than the desired amount of energy away from the detector. Analyzers with cylindrical or conical face-field can be more effective in such measurements.
In contrast, energy analyzers that employ the use of a magnetic field as a discriminator are very similar to mass spectrometers. Particles travel through a magnetic field in the probe and require a specific velocity in order to reach the detector. These were first developed in the 1960s, and are typically built to measure ions. (The size of the device is on the order of the particle's gyroradius because the discriminator intercepts the path of the gyrating particle.)
The energy of neutral particles can also be measured by an energy analyzer, but they first have to be ionized by an electron impact ionizer.
Proton radiography
Proton radiography uses a proton beam from a single source to interact with the magnetic field and/or the electric field in the plasma and the intensity profile of the beam is measured on a screen after the interaction. The magnetic and electric fields in the plasma deflect the beam's trajectory and the deflection causes modulation in the intensity profile. From the intensity profile, one can measure the integrated magnetic field and/or electric field.
Self Excited Electron Plasma Resonance Spectroscopy (SEERS)
Nonlinear effects like the I-V characteristic of the boundary sheath are utilized for Langmuir probe measurements but they are usually neglected for modelling of RF discharges due to their very inconvenient mathematical treatment. The Self Excited Electron Plasma Resonance Spectroscopy (SEERS) utilizes exactly these nonlinear effects and known resonance effects in RF discharges. The nonlinear elements, in particular the sheaths, provide harmonics in the discharge current and excite the plasma and the sheath at their series resonance characterized by the so-called geometric resonance frequency.
SEERS provides the spatially and reciprocally averaged electron plasma density and the effective electron collision rate. The electron collision rate reflects stochastic (pressure) heating and ohmic heating of the electrons.
The model for the plasma bulk is based on a 2d-fluid model (zero and first order moments of the Boltzmann equation) and the full set of Maxwell's equations, leading to the Helmholtz equation for the magnetic field. The sheath model is based additionally on the Poisson equation.
Passive spectroscopy
Passive spectroscopic methods simply observe the radiation emitted by the plasma. They can be collected by diagnostics such as the filterscope, which is used in various tokamak devices.
Doppler shift
If the plasma (or one ionic component of the plasma) is flowing in the direction of the line of sight to the observer, emission lines will be seen at a different frequency due to the Doppler effect.
Doppler broadening
The thermal motion of ions will result in a shift of emission lines up or down, depending on whether the ion is moving toward or away from the observer. The magnitude of the shift is proportional to the velocity along the line of sight. The net effect is a characteristic broadening of spectral lines, known as Doppler broadening, from which the ion temperature can be determined.
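As an illustration of how an ion temperature is extracted in practice, the sketch below inverts the standard Gaussian Doppler-broadening relation Δλ_FWHM = λ0 √(8 ln 2 · kT / (m c²)). It assumes the measured width is purely thermal (no instrumental, Stark, or other contribution), and the numerical inputs are illustrative only.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
c   = 2.99792458e8      # speed of light, m/s
amu = 1.66053907e-27    # atomic mass unit, kg

def ion_temperature(fwhm_nm, lambda0_nm, ion_mass_amu):
    """Ion temperature (K) from the Doppler FWHM of a purely thermally broadened line."""
    m = ion_mass_amu * amu
    ratio = fwhm_nm / lambda0_nm
    return m * c**2 * ratio**2 / (8 * math.log(2) * k_B)

# Illustrative example: a carbon line near 529 nm with a 0.01 nm Doppler FWHM.
print(ion_temperature(fwhm_nm=0.01, lambda0_nm=529.0, ion_mass_amu=12.0))  # ~8e3 K
```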
Stark effect
The splitting of some emission lines due to the Stark effect can be used to determine the local electric field.
Stark broadening
Irrespectively of the presence of macroscopic electric fields, any single atom is affected by microscopic electric fields due to the neighboring charged plasma particles. This results in the Stark broadening of spectral lines that can be used to determine the plasma density.
Spectral line ratios
The brightness of spectral lines emitted by atoms in a plasma depends on the plasma temperature and density.
If a sufficiently complete collisional radiative model is used, the temperature (and, to a lesser degree, density) of plasmas can often be inferred by taking ratios of the emission intensities of various atomic spectral lines.
Zeeman effect
The presence of a magnetic field splits the atomic energy levels due to the Zeeman effect. This leads to broadening or splitting of spectral lines. Analyzing these lines can, therefore, yield the magnetic field strength in the plasma.
Active spectroscopy
Active spectroscopic methods stimulate the plasma atoms in some way and observe the result (emission of radiation, absorption of the stimulating light or others).
Absorption spectroscopy
By shining a laser through the plasma, with its wavelength tuned to a certain transition of one of the species present in the plasma, the absorption profile of that transition can be obtained. This profile provides information not only on the plasma parameters that could be obtained from the emission profile, but also on the line-integrated number density of the absorbing species.
Beam emission spectroscopy
A beam of neutral atoms is fired into a plasma. Some atoms are excited by collisions within the plasma and emit radiation. This can be used to probe density fluctuations in a turbulent plasma.
Charge exchange recombination spectroscopy
In extremely high-temperature plasmas, such as those found in magnetic fusion experiments, light elements become fully ionized and do not emit line radiation. However, when a beam of neutral atoms is fired into the plasma, a process known as charge exchange occurs. During charge exchange, electrons from the neutral beam atoms are transferred to the highly energetic plasma ions, leading to the formation of hydrogenic ions. These newly formed ions promptly emit line radiation, which is subsequently analyzed to obtain information about the plasma, including ion density, temperature, and velocity.
One example of this is the Fast-Ion Deuterium-Alpha (FIDA) method employed in tokamaks. In this technique, charge exchange occurs between the neutral beam atoms and the fast deuterium ions present in the plasma. This method exploits the substantial Doppler shift exhibited by Balmer-alpha light emitted by the energetic atoms in order to determine the density of the fast ions.
Laser-induced fluorescence
Laser-induced fluorescence (LIF) is a spectroscopic technique employed for the investigation of plasma properties by observing the fluorescence emitted when the plasma is stimulated by laser radiation. This method allows for the measurement of plasma parameters such as ion flow, ion temperature, magnetic field strength, and plasma density. Typically, tunable dye lasers are utilized to carry out these measurements. The pioneering application of LIF in plasma physics occurred in 1975 when researchers used it to measure the ion velocity distribution function in an argon plasma. Various LIF techniques have since been developed, including the one-photon LIF technique and the two-photon absorption laser-induced fluorescence (TALIF).
Two-photon absorption laser-induced fluorescence
TALIF is a modification of the laser-induced fluorescence technique. In this approach, the upper energy level is excited through the absorption of two photons, and subsequent fluorescence resulting from the radiative decay of the excited level is observed. TALIF is capable of providing precise measurements of absolute ground state atomic densities, such as those of hydrogen, oxygen, and nitrogen. However, achieving such precision necessitates appropriate calibration methods, which can be accomplished through titration or a more modern approach involving a comparison with noble gases.
TALIF also offers insight into the temperature of species within the plasma, apart from atomic densities. However, this requires the use of lasers with a high spectral resolution to distinguish the Gaussian contribution of temperature broadening against the natural broadening of the two-photon excitation profile and the spectral broadening of the laser itself.
Photodetachment
Photodetachment combines Langmuir probe measurements with an incident laser beam. The incident laser beam is optimised spatially, spectrally, and in pulse energy to detach an electron bound to a negative ion. Langmuir probe measurements are conducted to measure the electron density in two situations, one without the incident laser and one with the incident laser. The increase in the electron density with the incident laser gives the negative ion density.
Motional Stark effect
If an atom is moving in a magnetic field, the Lorentz force will act in opposite directions on the nucleus and the electrons, just as an electric field does. In the frame of reference of the atom, there is an electric field, even if there is none in the laboratory frame. Consequently, certain lines will be split by the Stark effect. With an appropriate choice of beam species and velocity and of geometry, this effect can be used to determine the magnetic field in the plasma.
Optical effects from free electrons
The optical diagnostics above measure line radiation from atoms. Alternatively, the effects of free charges on electromagnetic radiation can be used as a diagnostic.
Electron cyclotron emission
In magnetized plasmas, electrons will gyrate around magnetic field lines and emit cyclotron radiation. The frequency of the emission is given by the cyclotron resonance condition. In a sufficiently thick and dense plasma, the intensity of the emission will follow Planck's law, and only depend on the electron temperature.
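The mapping from magnetic field strength to emission frequency is given by the non-relativistic electron cyclotron frequency f_ce = eB / (2π m_e), with harmonics at integer multiples; a minimal sketch with an illustrative field value:

```python
import math

e   = 1.602176634e-19   # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg

def cyclotron_frequency(B_tesla, harmonic=1):
    """Non-relativistic electron cyclotron frequency (Hz) for a given magnetic field."""
    return harmonic * e * B_tesla / (2 * math.pi * m_e)

# Illustrative example: a 2.5 T field radiates near 70 GHz at the fundamental
# and near 140 GHz at the second harmonic.
print(cyclotron_frequency(2.5) / 1e9)               # ~70 GHz
print(cyclotron_frequency(2.5, harmonic=2) / 1e9)   # ~140 GHz
```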
Faraday rotation
The Faraday effect will rotate the plane of polarization of a beam passing through a plasma with a magnetic field in the direction of the beam. This effect can be used as a diagnostic of the magnetic field, although the information is mixed with the density profile and is usually an integral value only.
Interferometry
If a plasma is placed in one arm of an interferometer, the phase shift will be proportional to the plasma density integrated along the path.
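In the usual high-frequency limit (probing frequency well above the plasma frequency), the phase shift is Δφ ≈ r_e λ ∫ n_e dl, where r_e is the classical electron radius. The sketch below inverts this relation; the wavelength and phase shift are illustrative values only.

```python
r_e = 2.8179403262e-15   # classical electron radius, m

def line_integrated_density(phase_shift_rad, wavelength_m):
    """Line-integrated electron density (m^-2) from an interferometer phase shift,
    valid when the probing frequency is well above the plasma frequency."""
    return phase_shift_rad / (r_e * wavelength_m)

# Illustrative example: a 10.6 um CO2-laser interferometer measuring a 2 rad shift.
print(line_integrated_density(2.0, 10.6e-6))   # ~6.7e19 m^-2
```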
Thomson scattering
Scattering of laser light from the electrons in a plasma is known as Thomson scattering. The electron temperature can be determined very reliably from the Doppler broadening of the laser line. The electron density can be determined from the intensity of the scattered light, but a careful absolute calibration is required. Although Thomson scattering is dominated by scattering from electrons, since the electrons interact with the ions, in some circumstances information on the ion temperature can also be extracted.
Neutron diagnostics
Fusion plasmas using D-T fuel produce 3.5 MeV alpha particles and 14.1 MeV neutrons. By measuring the neutron flux, plasma properties such as ion temperature and fusion power can be determined.
See also
Laser schlieren deflectometry
References
Further reading
Measuring instruments | Plasma diagnostics | [
"Physics",
"Technology",
"Engineering"
] | 2,926 | [
"Plasma diagnostics",
"Measuring instruments",
"Plasma physics"
] |
385,661 | https://en.wikipedia.org/wiki/Inverse%20scattering%20problem | In mathematics and physics, the inverse scattering problem is the problem of determining characteristics of an object, based on data of how it scatters incoming radiation or particles. It is the inverse problem to the direct scattering problem, which is to determine how radiation or particles are scattered based on the properties of the scatterer.
Soliton equations are a class of partial differential equations which can be studied and solved by a method called the inverse scattering transform, which reduces the nonlinear PDEs to a linear inverse scattering problem. The nonlinear Schrödinger equation, the Korteweg–de Vries equation and the KP equation are examples of soliton equations. In one space dimension the inverse scattering problem is equivalent to a Riemann-Hilbert problem. Inverse scattering has been applied to many problems including radiolocation, echolocation, geophysical survey, nondestructive testing, medical imaging, and quantum field theory.
Citations
References
Reprint
Scattering theory
Scattering, absorption and radiative transfer (optics)
Inverse problems | Inverse scattering problem | [
"Chemistry",
"Mathematics"
] | 207 | [
"Scattering theory",
" absorption and radiative transfer (optics)",
"Applied mathematics",
"Scattering stubs",
"Scattering",
"Inverse problems"
] |
385,748 | https://en.wikipedia.org/wiki/Super%20high%20frequency | Super high frequency (SHF) is the ITU designation for radio frequencies (RF) in the range between 3 and 30 gigahertz (GHz). This band of frequencies is also known as the centimetre band or centimetre wave as the wavelengths range from one to ten centimetres. These frequencies fall within the microwave band, so radio waves with these frequencies are called microwaves. The small wavelength of microwaves allows them to be directed in narrow beams by aperture antennas such as parabolic dishes and horn antennas, so they are used for point-to-point communication and data links and for radar. This frequency range is used for most radar transmitters, wireless LANs, satellite communication, microwave radio relay links, satellite phones (S band), and numerous short range terrestrial data links. They are also used for heating in industrial microwave heating, medical diathermy, microwave hyperthermy to treat cancer, and to cook food in microwave ovens.
Frequencies in the SHF range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations.
Propagation
Microwaves propagate solely by line of sight; because of the small refraction due to their short wavelength, the groundwave and ionospheric reflection (skywave or "skip" propagation) seen with lower frequency radio waves do not occur. Although in some cases they can penetrate building walls enough for useful reception, unobstructed rights of way cleared to the first Fresnel zone are usually required. Wavelengths are small enough at microwave frequencies that the antenna can be much larger than a wavelength, allowing highly directional (high gain) antennas to be built which can produce narrow beams. Therefore, they are used in point-to-point terrestrial communications links, limited by the visual horizon to 30–40 miles (48–64 km). Such high gain antennas allow frequency reuse by nearby transmitters. They are also used for communication with spacecraft since the waves are not refracted (bent) when passing through the ionosphere like lower frequencies.
The wavelength of SHF waves creates strong reflections from metal objects the size of automobiles, aircraft, and ships, and other vehicles. This and the narrow beamwidths possible with high gain antennas and the low atmospheric attenuation as compared with higher frequencies make SHF the main frequencies used in radar. Attenuation and scattering by moisture in the atmosphere increase with frequency, limiting the use of high SHF frequencies for long range applications.
Small amounts of microwave energy are randomly scattered by water vapor molecules in the troposphere. This is used in troposcatter communications systems, operating at a few GHz, to communicate beyond the horizon. A powerful microwave beam is aimed just above the horizon; as it passes through the tropopause some of the microwaves are scattered back to Earth to a receiver beyond the horizon. Distances of 300 km can be achieved. These are mainly used for military communication.
Antennas
The wavelength of SHF waves is short enough that efficient transmitting antennas are small enough to be conveniently mounted on handheld devices, so these frequencies are widely used for wireless applications. For example, a quarter-wave whip antenna for the SHF band is between 2.5 and 0.25 centimeters long. Omnidirectional antennas have been developed for applications like wireless devices and cellphones that are small enough to be enclosed inside the device's case. The main antenna used for these devices is the printed inverted F antenna (PIFA) consisting of a monopole antenna bent in an L shape, fabricated of copper foil on the printed circuit board inside the device. Small sleeve dipoles or quarter-wave monopoles are also used. The patch antenna is another common type, often integrated into the skin of aircraft.
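The whip-antenna lengths quoted above follow directly from λ = c/f; a small sketch covering the edges of the SHF band:

```python
c = 2.99792458e8   # speed of light, m/s

def wavelength_cm(freq_hz):
    return 100 * c / freq_hz

for f_ghz in (3, 10, 30):
    lam = wavelength_cm(f_ghz * 1e9)
    print(f"{f_ghz:>2} GHz: wavelength {lam:5.2f} cm, quarter-wave whip {lam/4:5.2f} cm")
```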
The wavelengths are also small enough that SHF waves can be focused into narrow beams by high gain directional antennas from a half meter to five meters in diameter. Directive antennas at SHF frequencies are mostly aperture antennas, such as parabolic antennas (the most common type), lens, slot and horn antennas. Large parabolic antennas can produce very narrow beams of a few degrees or less, and often must be aimed with the aid of a boresight. Another type of antenna practical at microwave frequencies is the phased array, consisting of many dipoles or patch antennas on a flat surface, each fed through a phase shifter, which allows the array's beam to be steered electronically. The short wavelength requires great mechanical rigidity in large antennas, to ensure that the radio waves arrive at the feed point in phase.
Waveguide
At microwave frequencies, the types of cable (transmission line) used to conduct lower frequency radio waves, such as coaxial cable, have high power losses. Therefore, to transport microwaves between the transmitter or receiver and the antenna with low losses, a special type of metal pipe called waveguide must be used. Because of the high cost and maintenance requirements of long waveguide runs, in many microwave antennas the output stage of the transmitter or the RF front end of the receiver is located at the antenna.
Advantages
SHF frequencies occupy a "sweet spot" in the radio spectrum which is currently being exploited by many new radio services. They are the lowest frequency band where radio waves can be directed in narrow beams by conveniently sized antennas so they do not interfere with nearby transmitters on the same frequency, allowing frequency reuse. On the other hand, they are the highest frequencies which can be used for long distance terrestrial communication; higher frequencies in the EHF (millimeter wave) band are highly absorbed by the atmosphere, limiting practical propagation distances to one kilometer or less. The high frequency gives microwave communication links a very large information-carrying capacity (bandwidth). In recent decades many new solid state sources of microwave energy have been developed, and microwave integrated circuits for the first time allow significant signal processing to be done at these frequencies. Sources of EHF energy are much more limited and in an earlier state of development.
See also
Knife-edge effect
Microwave burn
References
External links
Tomislav Stimac, "Definition of frequency bands (VLF, ELF... etc.)". IK1QFK Home Page (vlf.it).
Inés Vidal Castiñeira, "Celeria: Wireless Access To Cable Networks"
Radio spectrum
Wireless | Super high frequency | [
"Physics",
"Engineering"
] | 1,290 | [
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Wireless"
] |
5,810,488 | https://en.wikipedia.org/wiki/CW%20Leonis | CW Leonis or IRC +10216 is a variable carbon star that is embedded in a thick dust envelope. It was first discovered in 1969 by a group of astronomers led by Eric Becklin, based upon infrared observations made with the 62-inch Caltech Infrared Telescope at Mount Wilson Observatory. Its energy is emitted mostly at infrared wavelengths. At a wavelength of 5 μm, it was found to have the highest flux of any object outside the Solar System.
Properties
CW Leonis is believed to be in a late stage of its life, blowing off its own sooty atmosphere to form a white dwarf. Based upon isotope ratios of magnesium, the initial mass of this star has been constrained to lie between 3–5 solar masses. The mass of the star's core, and the final mass of the star once it becomes a white dwarf, is about 0.7–0.9 solar masses. Its bolometric luminosity varies over the course of a 649-day pulsation cycle, ranging from a minimum of about 6,250 times the Sun's luminosity up to a peak of around 15,800 times. The overall output of the star is best represented by a luminosity of . The brightness of the star varies by about two magnitudes over its pulsation period, and may have been increasing over a period of years. One study finds an increase in the mean brightness of about a magnitude between 2004 and 2014. Many studies of this star are done at infrared wavelengths because of its very red colour; published visual magnitudes are uncommon and often dramatically different. The Guide Star Catalog from 2006 gives an apparent visual magnitude of 19.23. The ASAS-SN variable star catalog based on observations from 2014 to 2018 reports a mean magnitude of 17.56 and an amplitude of 0.68 magnitudes. An even later study gives a mean magnitude of 14.5 and an amplitude of 2.0 magnitudes.
The carbon-rich gaseous envelope surrounding this star is at least 69,000 years old and the star is losing about solar masses per year. The extended envelope contains at least 1.4 solar masses of material. Speckle observations from 1999 show a complex structure to this dust envelope, including partial arcs and unfinished shells. This clumpiness may be caused by a magnetic cycle in the star that is comparable to the solar cycle in the Sun and results in periodic increases in mass loss.
Various chemical elements and about 50 molecules have been detected in the outflows from CW Leonis, including nitrogen, oxygen, water, silicon, and iron. One theory was that the star was once surrounded by comets that melted once the star started expanding, but water is now thought to form naturally in the atmospheres of all carbon stars.
Distance
If the distance to this star is assumed to be at the lower end of the estimate range, 120 pc, then the astrosphere surrounding the star spans a radius of about 84,000 AU. The star and its surrounding envelope are advancing at a velocity of more than 91 km/s through the surrounding interstellar medium. It is moving with a space velocity of [U, V, W] = [, , ] km s−1.
Companion
Several papers have suggested that CW Leonis has a close binary companion. ALMA and astrometric measurements may show orbital motion. The astrometric measurements, combined with a model including the companion, provide a parallax measurement showing that CW Leonis is the closest carbon star to the Earth.
See also
List of largest known stars
References
External links
Water Found Around Nearby Star CW Leonis Astronomy Picture of the Day July 16, 2001
Variations in the dust envelope around IRC +10216 revealed by aperture masking interferometry
http://jumk.de/astronomie/special-stars/cw-leonis.shtml
Carbon stars
Protoplanetary nebulae
Leo (constellation)
Mira variables
Leonis, CW
IRAS catalogue objects
J09475740+1316435
Emission-line stars | CW Leonis | [
"Astronomy"
] | 820 | [
"Leo (constellation)",
"Constellations"
] |
5,810,868 | https://en.wikipedia.org/wiki/Epoetin%20alfa | Epoetin alfa, sold under the brand name Epogen among others, is a human erythropoietin produced in cell culture using recombinant DNA technology. Epoetin alfa is an erythropoiesis-stimulating agent. It stimulates erythropoiesis (increasing red blood cell levels) and is used to treat anemia, commonly associated with chronic kidney failure and cancer chemotherapy. Epoetin alfa is developed by Amgen.
It is on the World Health Organization's List of Essential Medicines. It was approved for medical use in the European Union in August 2007.
Medical uses
Epoetin alfa is indicated for the treatment of anemia due to chronic kidney disease, anemia due to zidovudine treatment in people with HIV infection, and anemia due to the effects of concomitant myelosuppressive chemotherapy, as well as for the reduction of allogeneic red blood cell transfusions.
Anemia caused by kidney disease
For people who require dialysis or have chronic kidney disease, iron should be given with erythropoietin, depending on some laboratory parameters such as ferritin and transferrin saturation.
Erythropoietin is also used to treat anemia in people, as well as in cats and dogs, with chronic kidney disease who are not on dialysis (those in stage 3 or 4 disease and those living with a kidney transplant). There are two types of erythropoietin for people, cats, and dogs with anemia due to chronic kidney disease who are not on dialysis.
Anemia in critically ill people
Erythropoietin is used to treat people with anemia resulting from critical illness.
In a randomized controlled trial, erythropoietin was shown to not change the number of blood transfusions required by critically ill patients. A surprising finding in this study was a small mortality reduction in patients receiving erythropoietin. This result was statistically significant after 29 days but not at 140 days. The mortality difference was most marked in patients admitted to the ICU for trauma. The authors provide several hypotheses for potential etiologies of this reduced mortality, but, given the known increase in thrombosis and increased benefit in trauma patients as well as marginal nonsignificant benefit (adjusted hazard ratio of 0.9) in surgery patients, it could be speculated that some of the benefit might be secondary to the procoagulant effect of erythropoietin. Regardless, this study suggests further research may be necessary to see which critical care patients, if any, might benefit from administration of erythropoietin.
Adverse effects
Epoetin alfa is generally well tolerated. Common side effects include high blood pressure, headache, disabling cluster migraine (resistant to remedies), joint pain, and clotting at the injection site. Rare cases of stinging at the injection site, skin rash, and flu-like symptoms (joint and muscle pain) have occurred within a few hours following administration. More serious side effects, including allergic reactions, seizures and thrombotic events (e.g., heart attacks, strokes, and pulmonary embolism) rarely occur. Chronic self-administration of the drug has been shown to cause increases in blood hemoglobin and hematocrit to abnormally high levels, resulting in dyspnea and abdominal pain.
Erythropoietin is associated with an increased risk of adverse cardiovascular complications in patients with kidney disease if it is used to target an increase of hemoglobin levels above 13.0 g/dl.
Early treatment (before an infant is 8 days old) with erythropoietin correlated with an increase in the risk of retinopathy of prematurity in premature and anemic infants, raising concern that the angiogenic actions of erythropoietin may exacerbate retinopathy. Since anemia itself increases the risk of retinopathy, the correlation with erythropoietin treatment may be incidental.
Safety advisories in anemic cancer patients
Amgen advised the U.S. Food and Drug Administration (FDA) regarding the results of the DAHANCA 10 clinical trial. The DAHANCA 10 data monitoring committee found that three-year loco-regional cancer control in subjects treated with Aranesp was significantly worse than for those not receiving Aranesp (p=0.01).
In response to these advisories, the FDA released a Public Health Advisory
on 9 March 2007, and a clinical alert for doctors in February 2007, about the use of erythropoiesis-stimulating agents (ESAs) such as epogen and darbepoetin. The advisory recommended caution in using these agents in cancer patients receiving chemotherapy or off chemotherapy, and indicated a lack of clinical evidence to support improvements in quality of life or transfusion requirements in these settings.
Several publications and FDA communications have increased the level of concern related to adverse effects of ESA therapy in selected groups. In a revised black box warning, the FDA notes significant risks, advising that ESAs should be used only in patients with cancer when treating anemia specifically caused by chemotherapy, and not for other causes of anemia. Further, the warning states that ESAs should be discontinued once the patient's chemotherapy course has been completed.
Interactions
Drug interactions with erythropoietin include:
Major: lenalidomide—risk of thrombosis
Moderate: cyclosporine—risk of high blood pressure may be greater in combination with EPO. EPO may lead to variability in blood levels of cyclosporine.
Minor: ACE inhibitors and angiotensin receptor blockers may interfere with hematopoiesis, possibly by decreasing the synthesis of endogenous erythropoietin and decreasing bone marrow production of red blood cells.
Society and culture
The publication of an editorial questioning the benefits of high-dose epoetin was canceled by the marketing branch of a journal after being accepted by the editorial branch highlighting concerns of conflict of interest in publishing.
In 2011, author Kathleen Sharp published a book, Blood Feud: The Man Who Blew the Whistle on One of the Deadliest Prescription Drugs Ever,
alleging drug maker Johnson & Johnson encouraged doctors to prescribe epoetin in high doses, particularly for cancer patients, because this would increase sales by hundreds of millions of dollars. Former sales representatives Mark Duxbury and Dean McClennan, claimed that the bulk of their business selling epoetin to hospitals and clinics was Medicare fraud, totaling billion.
Economics
The average cost per patient in the US was in 2009.
Epoetin alfa has accounted for the single greatest drug expenditure paid by the US Medicare system; in 2010, the program paid for the medication.
Biosimilars
In August 2007, Binocrit, Epoetin Alfa Hexal, and Abseamed were approved for use in the European Union.
Research
Neurological diseases
Erythropoietin has been hypothesized to be beneficial in treating certain neurological diseases such as schizophrenia and stroke. Some research has suggested that erythropoietin improves the survival rate in children with cerebral malaria, which is caused by the malaria parasite's blockage of blood vessels in the brain. However, the possibility that erythropoietin may be neuroprotective is inconsistent with the poor transport of the chemical into the brain and the low levels of erythropoietin receptors expressed on neuronal cells.
Psychiatric diseases
Randomized controlled clinical trials have shown promising results for EPO in improving cognition, which is often intractable under current treatments for mood disorders and schizophrenia. These domains include the speed of complex cognitive processing across attention, memory and executive function.
Preterm infants
Infants born early often require transfusions with red blood cells and have low levels of erythropoietin. Erythropoietin has been studied as a treatment option to reduce anemia in preterm infants. Treating infants less than 8 days old with erythropoietin may slightly reduce the need for red blood cell transfusions, but increases the risk of retinopathy. Due to the limited clinical benefit and increased risk of retinopathy, early or late erythropoietin treatment is not recommended for preterm infants.
References
Antianemic preparations
Growth factors
Erythropoiesis-stimulating agents
Amgen
Drugs developed by Johnson & Johnson
Recombinant proteins | Epoetin alfa | [
"Chemistry",
"Biology"
] | 1,765 | [
"Growth factors",
"Recombinant proteins",
"Biotechnology products",
"Signal transduction"
] |
5,810,957 | https://en.wikipedia.org/wiki/Autacoid | Autacoids or autocoids are biological factors (molecules) which act like local hormones, have a brief duration, and act near their site of biosynthesis. The word autacoid comes from the Greek words "autos" (self) and "acos" (relief; i.e., drug). The effects of autacoids are primarily local, though large quantities can be produced and moved into circulation. Autacoids may thus have systemic effects by being transported via the circulation. These regulating molecules are also metabolized locally. In sum, these compounds typically are produced locally, act locally and are metabolized locally. Autacoids can have a variety of different biological actions, including modulating the activities of smooth muscles, glands, nerves, platelets and other tissues.
Some autacoids are chiefly characterized by the effect they have on specific tissues, such as smooth muscle. With respect to vascular smooth muscle, there exist both vasoconstrictor and vasodilator autacoids. Vasodilator autacoids are released during periods of exercise. Their main effect is seen in the skin, where they facilitate heat loss.
These are local hormones; they therefore have a paracrine effect. Some notable autacoids are: eicosanoids, angiotensin, neurotensin, NO (nitric oxide), kinins, histamine, serotonin, endothelins and palmitoylethanolamide.
In 2015, a more precise definition of autacoids was proposed: "An autacoid is a locally produced modulating factor, influencing locally the function of cells and/or tissues, which is produced on demand and which subsequently is metabolized in the same cells and/or tissues".
References
Biomolecules | Autacoid | [
"Chemistry",
"Biology"
] | 386 | [
"Natural products",
"Biotechnology stubs",
"Organic compounds",
"Biochemistry stubs",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |
5,811,014 | https://en.wikipedia.org/wiki/Holothurin | The holothurins are a group of toxins originally isolated from the sea cucumber Actinopyga agassizii. They are contained within clusters of sticky threads called Cuvierian tubules which are expelled from the sea cucumber as a mode of self-defence. The holothurins belong to the class of compounds known as saponins and are anionic surfactants which can cause red blood cells to rupture. The holothurins can be toxic to humans if ingested in high amounts.
Pharmacology
Effects on nerves
Holothurin is shown to have a blocking effect on nerves in desheathed bullfrog sciatic nerve and rat phrenic nerve preparations, and its potency can be compared to that of cocaine, procaine, and physostigmine. Unlike the other mentioned blocking agents, the disrupting effect of holothurin appears to be quite irreversible upon washing.
In another experiment on frog sciatic nerve, holothurin A is shown to be capable of destroying electrical excitability of a node of Ranvier along with basophilic macromolecular material found in and near the cytoplasm of the node. In another experiment on rat phrenic nerve, the nerve-disrupting effect of holothurin A is found to be preventable when specific concentrations of physostigmine are present.
Other effects
Holothurin A and holothurin A1, along with other sea cucumber saponins, are found to reduce weight gain in mice. They improve glucose tolerance, reduce levels of lipids in blood and liver, and inhibit the absorption of lipids in the intestine. They also inhibit the activity of pancreatic lipase, decrease the growth of white adipocytes, a factor which contributes to obesity, and stimulate the production of LXR-β nuclear receptor and ABCA1 protein. These findings suggest a possibility of the holothurins and other sea cucumber saponins being used in the development of anti-obesity drugs.
The holothurins are shown to have anti-melanogenic and anti-wrinkling effects on human skin by inhibiting melanin production in Melan-A cells and promoting collagen production in human dermal fibroblasts via the ERK pathway.
References
Further reading
Saponins
Sulfate esters
Tetrahydrofurans | Holothurin | [
"Chemistry"
] | 506 | [
"Biomolecules by chemical classification",
"Natural products",
"Saponins"
] |
5,811,067 | https://en.wikipedia.org/wiki/Physalaemin | Physalaemin is a tachykinin peptide obtained from the Physalaemus frog, closely related to substance P. Its structure was first elucidated in 1964.
Like all tachykinins, physalaemin is a sialagogue (increases salivation) and a potent vasodilator with hypotensive effects.
Structure
Physalaemin (PHY) is known to take on both a linear and a helical three-dimensional structure. Grace et al. (2010) have shown that in aqueous environments, PHY preferentially takes on the linear conformation, whereas in an environment that simulates a cellular membrane, PHY takes on a helical conformation from the Pro4 residue to the C-terminus. This helical conformation is essential to allow the binding of PHY to neurokinin-1 (NK1) receptors. Consensus sequences between Substance P (a mammalian tachykinin and agonist of NK1) and PHY have been used to confirm that the helical conformation is necessary for PHY to bind to NK1.
Use in research
Not only is PHY closely related to Substance P (SP), but it also has a higher affinity for the mammalian neurokinin receptors that Substance P can bind to. Researchers can make use of this behavior of PHY to study the behavior of smooth muscle - a tissue where NK1 can be found. Shiina et al. (2010) used PHY to show that tachykinins as a whole can cause longitudinal contraction of smooth muscle in esophageal tissue.
Singh and Maji made use of PHY's similarity to SP along with its sequence similarity to amyloid β-peptide 25-35 [Aβ(25-35)]. Despite its sequence similarity to SP, Singh and Maji showed that PHY had distinct amyloid-forming capabilities. Under artificially elevated concentrations of 2,2,2-trifluoroethanol (TFE) and a short incubation time, PHY was able to form amyloid fibrils. These fibrils originating from tachykinins like PHY were also shown to reduce the neurotoxicity of other amyloid fibers associated with amyloid-induced diseases such as Alzheimer's disease.
References
Neuropeptides
Pyrrolidones
4-Hydroxyphenyl compounds | Physalaemin | [
"Chemistry",
"Biology"
] | 497 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
5,811,728 | https://en.wikipedia.org/wiki/Volterra%20series | The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at all other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors.
It has been applied in the fields of medicine (biomedical engineering) and biology, especially neuroscience. It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers and frequency mixers. Its main advantage lies in its generalizability: it can represent a wide range of systems. Thus, it is sometimes considered a non-parametric model.
In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant functional. The Volterra series are frequently used in system identification. The Volterra series, which is used to prove the Volterra theorem, is an infinite sum of multidimensional convolutional integrals.
History
The Volterra series is a modernized version of the theory of analytic functionals from the Italian mathematician Vito Volterra, in his work dating from 1887. Norbert Wiener became interested in this theory in the 1920s due to his contact with Volterra's student Paul Lévy. Wiener applied his theory of Brownian motion for the integration of Volterra analytic functionals. The use of the Volterra series for system analysis originated from a restricted 1942 wartime report of Wiener's, who was then a professor of mathematics at MIT. He used the series to make an approximate analysis of the effect of radar noise in a nonlinear receiver circuit. The report became public after the war. As a general method of analysis of nonlinear systems, the Volterra series came into use after about 1957 as the result of a series of reports, at first privately circulated, from MIT and elsewhere. The name itself, Volterra series, came into use a few years later.
Mathematical theory
The theory of the Volterra series can be viewed from two different perspectives:
An operator mapping between two function spaces (real or complex)
A real or complex functional mapping from a function space into real or complex numbers
The latter functional mapping perspective is more frequently used due to the assumed time-invariance of the system.
Continuous time
A continuous time-invariant system with x(t) as input and y(t) as output can be expanded in the Volterra series as
Here the constant term on the right side is usually taken to be zero by suitable choice of output level . The function is called the n-th-order Volterra kernel. It can be regarded as a higher-order impulse response of the system. For the representation to be unique, the kernels must be symmetrical in the n variables . If it is not symmetrical, it can be replaced by a symmetrized kernel, which is the average over the n! permutations of these n variables .
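The expansion referred to above, in its standard form as given in the literature (a reconstruction of the missing formula, using the constant term h_0 and the n-th-order kernels h_n just described), is

y(t) = h_0 + \sum_{n=1}^{N} \int_{a}^{b} \cdots \int_{a}^{b} h_n(\tau_1, \ldots, \tau_n) \, x(t - \tau_1) \cdots x(t - \tau_n) \, d\tau_1 \cdots d\tau_n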
If N is finite, the series is said to be truncated. If a, b, and N are finite, the series is called doubly finite.
Sometimes the n-th-order term is divided by n!, a convention which is convenient when taking the output of one Volterra system as the input of another ("cascading").
The causality condition: Since in any physically realizable system the output can only depend on previous values of the input, the kernels will be zero if any of the variables are negative. The integrals may then be written over the half range from zero to infinity.
So if the operator is causal, the kernels satisfy h_n(τ_1, ..., τ_n) = 0 whenever any of the τ_i is negative.
Fréchet's approximation theorem: The use of the Volterra series to represent a time-invariant functional relation is often justified by appealing to a theorem due to Fréchet. This theorem states that a time-invariant functional relation (satisfying certain very general conditions) can be approximated uniformly and to an arbitrary degree of precision by a sufficiently high finite-order Volterra series. Among other conditions, the set of admissible input functions for which the approximation will hold is required to be compact. It is usually taken to be an equicontinuous, uniformly bounded set of functions, which is compact by the Arzelà–Ascoli theorem. In many physical situations, this assumption about the input set is a reasonable one. The theorem, however, gives no indication as to how many terms are needed for a good approximation, which is an essential question in applications.
Discrete time
The discrete-time case is similar to the continuous-time case, except that the integrals are replaced by summations over the lag variables. Each such function h_p is called a discrete-time Volterra kernel.
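The corresponding discrete-time expansion, again given here as a reconstruction in its standard form, reads

y(n) = h_0 + \sum_{p=1}^{P} \sum_{\tau_1 = a}^{b} \cdots \sum_{\tau_p = a}^{b} h_p(\tau_1, \ldots, \tau_p) \, x(n - \tau_1) \cdots x(n - \tau_p)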
If P is finite, the series operator is said to be truncated. If a, b and P are finite, the series operator is called a doubly finite Volterra series. If the kernels vanish whenever any of the lag variables is negative, the operator is said to be causal.
We can always consider, without loss of generality, the kernel as symmetrical. In fact, by the commutativity of multiplication it is always possible to symmetrize it by forming a new kernel taken as the average of the kernels for all permutations of the variables.
For a causal system with symmetrical kernels we can rewrite the n-th term approximately in triangular form
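To make the discrete-time case concrete, the following is a minimal Python sketch (not from the source; the kernels, memory length and input signal are illustrative placeholders) that evaluates a truncated, causal, second-order Volterra model:

import numpy as np

def volterra2_output(x, h0, h1, h2):
    """Output of a causal, truncated second-order discrete Volterra model.
    h1 has shape (M,), h2 has shape (M, M); samples before n = 0 are treated as zero."""
    M = len(h1)
    y = np.full(len(x), h0, dtype=float)
    for n in range(len(x)):
        # past samples x(n), x(n-1), ..., x(n-M+1), zero-padded at the start
        lags = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] += h1 @ lags            # first-order (linear convolution) term
        y[n] += lags @ h2 @ lags     # second-order term with symmetric kernel h2
    return y

# Illustrative kernels and input
M = 4
h1 = np.exp(-np.arange(M))
h2 = 0.05 * np.outer(h1, h1)         # a symmetric second-order kernel
x = np.random.default_rng(0).normal(size=50)
y = volterra2_output(x, h0=0.0, h1=h1, h2=h2)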
Methods to estimate the kernel coefficients
Estimating the Volterra coefficients individually is complicated, since the basis functionals of the Volterra series are correlated. This leads to the problem of simultaneously solving a set of integral equations for the coefficients. Hence, estimation of Volterra coefficients is generally performed by estimating the coefficients of an orthogonalized series, e.g. the Wiener series, and then recomputing the coefficients of the original Volterra series. The Volterra series main appeal over the orthogonalized series lies in its intuitive, canonical structure, i.e. all interactions of the input have one fixed degree. The orthogonalized basis functionals will generally be quite complicated.
An important aspect, with respect to which the following methods differ, is whether the orthogonalization of the basis functionals is to be performed over the idealized specification of the input signal (e.g. gaussian, white noise) or over the actual realization of the input (i.e. the pseudo-random, bounded, almost-white version of gaussian white noise, or any other stimulus). The latter methods, despite their lack of mathematical elegance, have been shown to be more flexible (as arbitrary inputs can be easily accommodated) and precise (due to the effect that the idealized version of the input signal is not always realizable).
Crosscorrelation method
This method, developed by Lee and Schetzen, orthogonalizes with respect to the actual mathematical description of the signal, i.e. the projection onto the new basis functionals is based on the knowledge of the moments of the random signal.
We can write the Volterra series in terms of homogeneous operators H_p, so that the output is the sum over p of the terms H_p[x(n)], where each H_p is the p-th-order term of the series.
To allow identification by orthogonalization, the Volterra series must be rearranged in terms of orthogonal non-homogeneous G operators (the Wiener series):
The G operators can be defined by the requirement that each G operator be orthogonal to every homogeneous Volterra operator of lower order whenever x(n) is some stationary white noise (SWN) with zero mean and variance A.
Recalling that every Volterra functional is orthogonal to all Wiener functionals of greater order, and considering the following Volterra functional:
we can write
If x is SWN, and by letting , we have
So if we exclude the diagonal elements, , it is
If we want to consider the diagonal elements, the solution proposed by Lee and Schetzen is
The main drawback of this technique is that the estimation errors made on all elements of lower-order kernels will affect each diagonal element of order p by means of the summation involved, conceived as the solution for the estimation of the diagonal elements themselves.
Efficient formulas that avoid this drawback, and references for diagonal kernel element estimation, exist.
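As an illustration of the cross-correlation idea, here is a hedged Python sketch (the toy test system, record length and memory length are assumptions made for the example, not taken from the source) estimating the zeroth- and first-order Wiener kernels from an input/output record obtained with white-noise excitation; higher-order kernels follow from higher-order cross-correlations, with the diagonal points handled as discussed above:

import numpy as np

rng = np.random.default_rng(0)
N, M = 100_000, 20                  # record length and assumed kernel memory
A = 1.0                             # variance of the stationary white-noise input
x = rng.normal(0.0, np.sqrt(A), N)

# Toy "unknown" system used only to generate data for the demonstration
g = np.exp(-np.arange(M) / 4.0)
y_lin = np.convolve(x, g)[:N]
y = y_lin + 0.1 * y_lin**2

# Zeroth-order Wiener kernel: k0 = E[y(n)]
k0 = y.mean()

# First-order Wiener kernel: k1(tau) = E[(y(n) - k0) x(n - tau)] / A
# (subtracting k0 does not change the expectation because E[x] = 0,
#  but it reduces the variance of the estimate)
k1 = np.array([np.mean((y[tau:] - k0) * x[:N - tau]) / A for tau in range(M)])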
Once the Wiener kernels have been identified, the Volterra kernels can be obtained by using Wiener-to-Volterra formulas, reported in the following for a fifth-order Volterra series:
Multiple-variance method
In the traditional orthogonal algorithm, using inputs with high variance has the advantage of stimulating high-order nonlinearity, so as to achieve more accurate high-order kernel identification.
As a drawback, the use of high variance values causes high identification error in lower-order kernels, mainly due to nonideality of the input and truncation errors.
On the contrary, the use of lower variance in the identification process can lead to a better estimation of the lower-order kernel, but can be insufficient to stimulate high-order nonlinearity.
This phenomenon, which can be called locality of truncated Volterra series, can be revealed by calculating the output error of a series as a function of different variances of input.
This test can be repeated with series identified with different input variances, obtaining different curves, each with a minimum in correspondence of the variance used in the identification.
To overcome this limitation, a low value should be used for the lower-order kernel and gradually increased for higher-order kernels.
This is not a theoretical problem in Wiener kernel identification, since the Wiener functionals are orthogonal to each other, but an appropriate normalization is needed in the Wiener-to-Volterra conversion formulas to take into account the use of different variances.
Furthermore, new Wiener to Volterra conversion formulas are needed.
The traditional Wiener kernel identification should be changed as follows:
In the above formulas the impulse functions are introduced for the identification of diagonal kernel points.
If the Wiener kernels are extracted with the new formulas, the following Wiener-to-Volterra formulas (written out explicitly up to the fifth order) are needed:
As can be seen, the drawback with respect to the previous formula is that for the identification of the n-th-order kernel, all lower kernels must be identified again with the higher variance.
However, an outstanding improvement in the output MSE will be obtained if the Wiener and Volterra kernels are obtained with the new formulas.
Feedforward network
This method was developed by Wray and Green (1994) and utilizes the fact that a simple 2-fully connected layer neural network (i.e., a multilayer perceptron) is computationally equivalent to the Volterra series and therefore contains the kernels hidden in its architecture. After such a network has been trained to successfully predict the output based on the current state and memory of the system, the kernels can then be computed from the weights and biases of that network.
The general notation for the n-th-order Volterra kernel is given by
where n is the order, c_j are the weights to the linear output node, a_{jn} are the coefficients of the polynomial expansion of the output function of the hidden nodes, and w_{ji} are the weights from the input layer to the non-linear hidden layer. It is important to note that this method allows kernel extraction up until the number of input delays in the architecture of the network. Furthermore, it is vital to carefully construct the size of the network input layer so that it represents the effective memory of the system.
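The formula itself is missing above; a hedged reconstruction consistent with this description (and with the usual statement of the Wray–Green result, summing over the hidden nodes j) is

h_n(\tau_1, \ldots, \tau_n) = \sum_{j} c_j \, a_{jn} \, w_{j\tau_1} w_{j\tau_2} \cdots w_{j\tau_n}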
Exact orthogonal algorithm
This method and its more efficient version (fast orthogonal algorithm) were invented by Korenberg.
In this method the orthogonalization is performed empirically over the actual input. It has been shown to perform more precisely than the crosscorrelation method. Another advantage is that arbitrary inputs can be used for the orthogonalization and that fewer data points suffice to reach a desired level of accuracy. Also, estimation can be performed incrementally until some criterion is fulfilled.
Linear regression
Linear regression is a standard tool from linear analysis. Hence, one of its main advantages is the widespread existence of standard tools for solving linear regressions efficiently. It has some educational value, since it highlights the basic property of Volterra series: linear combination of non-linear basis functionals. For estimation, the order of the original system should be known, since the Volterra basis functionals are not orthogonal, and thus estimation cannot be performed incrementally.
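A minimal Python sketch of this approach (illustrative only; the toy system, memory length and second-order truncation are assumptions, not from the source). The design matrix columns are the non-linear basis functionals (a constant, the lagged inputs, and products of pairs of lagged inputs), and the kernel coefficients are obtained by ordinary least squares:

import numpy as np
from itertools import combinations_with_replacement

def volterra_design_matrix(x, M):
    """Rows are time steps n >= M-1; columns are 1, x(n-k), and x(n-i)*x(n-j)."""
    N = len(x)
    lags = np.column_stack([x[M - 1 - k : N - k] for k in range(M)])  # lags[t, k] = x(n - k)
    cols = [np.ones(len(lags))]                                        # 0th-order term
    cols += [lags[:, k] for k in range(M)]                             # 1st-order terms
    cols += [lags[:, i] * lags[:, j]                                   # 2nd-order terms
             for i, j in combinations_with_replacement(range(M), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y_lin = np.convolve(x, [1.0, 0.5, 0.25])[:len(x)]
y = y_lin + 0.2 * y_lin**2                                             # toy nonlinear system

M = 3
X = volterra_design_matrix(x, M)
theta, *_ = np.linalg.lstsq(X, y[M - 1:], rcond=None)                  # kernel coefficients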
Kernel method
This method was invented by Franz and Schölkopf and is based on statistical learning theory. Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, although noting that the latter is more intuitive.
Differential sampling
This method was developed by van Hemmen and coworkers and utilizes Dirac delta functions to sample the Volterra coefficients.
See also
Wiener series
Polynomial signal processing
References
Further reading
Barrett J.F: Bibliography of Volterra series, Hermite functional expansions, and related subjects. Dept. Electr. Engrg, Univ.Tech. Eindhoven, NL 1977, T-H report 77-E-71. (Chronological listing of early papers to 1977) URL: http://alexandria.tue.nl/extra1/erap/publichtml/7704263.pdf
Bussgang, J.J.; Ehrman, L.; Graham, J.W: Analysis of nonlinear systems with multiple inputs, Proc. IEEE, vol.62, no.8, pp. 1088–1119, Aug. 1974
Giannakis G.B & Serpendin E: A bibliography on nonlinear system identification. Signal Processing, 81 2001 533–580. (Alphabetic listing to 2001) www.elsevier.nl/locate/sigpro
Korenberg M.J. Hunter I.W: The Identification of Nonlinear Biological Systems: Volterra Kernel Approaches, Annals Biomedical Engineering (1996), Volume 24, Number 2.
Kuo Y L: Frequency-domain analysis of weakly nonlinear networks, IEEE Trans. Circuits & Systems, vol.CS-11(4) Aug 1977; vol.CS-11(5) Oct 1977 2–6.
Rugh W J: Nonlinear System Theory: The Volterra–Wiener Approach. Baltimore 1981 (Johns Hopkins Univ Press) http://rfic.eecs.berkeley.edu/~niknejad/ee242/pdf/volterra_book.pdf
Schetzen M: The Volterra and Wiener Theories of Nonlinear Systems, New York: Wiley, 1980.
Mathematical series
Functional analysis | Volterra series | [
"Mathematics"
] | 3,027 | [
"Sequences and series",
"Functions and mappings",
"Mathematical structures",
"Functional analysis",
"Series (mathematics)",
"Calculus",
"Mathematical objects",
"Mathematical relations"
] |
5,811,775 | https://en.wikipedia.org/wiki/Electrical%20capacitance%20tomography | Electrical capacitance tomography (ECT) is a method for determination of the dielectric permittivity distribution in the interior of an object from external capacitance measurements. It is a close relative of electrical impedance tomography and is proposed as a method for industrial process monitoring.
Although capacitance sensing methods were in widespread use the idea of using capacitance measurement to form images is attributed to Maurice Beck and co-workers at UMIST in the 1980s.
Although usually called tomography, the technique differs from conventional tomographic methods, in which high resolution images are formed of slices of a material. The measurement electrodes, which are metallic plates, must be sufficiently large to give a measurable change in capacitance. This means that very few electrodes are used, typically eight to sixteen electrodes. An N-electrode system can only provide N(N−1)/2 independent measurements. This means that the technique is limited to producing very low resolution images of approximate slices. However, ECT is fast, and relatively inexpensive.
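As a trivial worked example of the N(N−1)/2 relation (not from the source), typical electrode counts give the following numbers of independent capacitance measurements:

# Independent capacitance measurements for an N-electrode ECT sensor: N*(N-1)/2
for n_electrodes in (8, 12, 16):
    print(n_electrodes, n_electrodes * (n_electrodes - 1) // 2)   # prints 28, 66 and 120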
Applications
Applications of ECT include the measurement of flow of fluids in pipes and measurement of the concentration of one fluid in another, or the distribution of a solid in a fluid. ECT enables the visualization of multiphase flow, which play an important role in the technological processes of the chemical, petrochemical and food industries.
Due to its very low spatial resolution, ECT has not yet been used in medical diagnostics. Potentially, ECT may have similar medical applications to electrical impedance tomography, such as monitoring lung function or detecting ischemia or hemorrhage in the brain.
See also
Three-dimensional electrical capacitance tomography
Electrical impedance tomography
Electrical resistivity tomography
Industrial Tomography Systems
Process tomography
References
Electrical engineering
Nondestructive testing
Inverse problems | Electrical capacitance tomography | [
"Materials_science",
"Mathematics",
"Engineering"
] | 380 | [
"Applied mathematics",
"Nondestructive testing",
"Materials testing",
"Inverse problems",
"Electrical engineering"
] |
5,811,855 | https://en.wikipedia.org/wiki/Riesz%20mean | In mathematics, the Riesz mean is a certain mean of the terms in a series. They were introduced by Marcel Riesz in 1911 as an improvement over the Cesàro mean. The Riesz mean should not be confused with the Bochner–Riesz mean or the Strong–Riesz mean.
Definition
Given a series {s_n}, the Riesz mean of the series is defined by
Sometimes, a generalized Riesz mean is defined as
Here, the λ_n are a non-decreasing sequence with λ_n ≥ 0 and with λ_n → ∞ as n → ∞. Other than this, the λ_n are taken as arbitrary.
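The defining formulas referred to above are missing here; as a hedged reconstruction of the standard definitions, the Riesz mean of \{s_n\} is

s^{\delta}(\lambda) = \sum_{n \le \lambda} \left(1 - \frac{n}{\lambda}\right)^{\delta} s_n

and the generalized (typical) Riesz mean replaces n by the members of the sequence \lambda_n:

\sum_{\lambda_n \le \lambda} \left(1 - \frac{\lambda_n}{\lambda}\right)^{\delta} s_n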
Riesz means are often used to explore the summability of sequences; typical summability theorems discuss the case of s_n = a_0 + a_1 + ... + a_n for some sequence {a_n}. Typically, a sequence is summable when the limit of s_n exists, or the limit of the Riesz mean as λ → ∞ (for fixed δ) exists, although the precise summability theorems in question often impose additional conditions.
Special cases
Let a_n = 1 for all n. Then
Here, one must take c > 1; Γ is the Gamma function and ζ is the Riemann zeta function. The power series
can be shown to be convergent for . Note that the integral is of the form of an inverse Mellin transform.
Another interesting case connected with number theory arises by taking a_n = Λ(n), where Λ is the von Mangoldt function. Then
Again, one must take c > 1. The sum over ρ is the sum over the zeroes of the Riemann zeta function, and
is convergent for λ > 1.
The integrals that occur here are similar to the Nörlund–Rice integral; very roughly, they can be connected to that integral via Perron's formula.
References
M. Riesz, Comptes Rendus, 12 June 1911
Means
Summability methods
Zeta and L-functions | Riesz mean | [
"Physics",
"Mathematics"
] | 344 | [
"Means",
"Sequences and series",
"Mathematical analysis",
"Point (geometry)",
"Mathematical structures",
"Geometric centers",
"Summability methods",
"Symmetry"
] |
5,813,192 | https://en.wikipedia.org/wiki/CDK-activating%20kinase | CDK-activating kinase (CAK) activates the cyclin-CDK complex by phosphorylating threonine residue 160 in the CDK activation loop. CAK itself is a member of the Cdk family and functions as a positive regulator of Cdk1, Cdk2, Cdk4, and Cdk6.
Catalytic activity
Cdk activation requires two steps. First, cyclin must bind to the Cdk. In the second step, CAK must phosphorylate the cyclin-Cdk complex on the threonine residue 160, which is located in the Cdk activation segment. Since Cdks need to be free of Cdk inhibitor proteins (CKIs) and associated with cyclins in order to be activated, CAK activity is considered to be indirectly regulated by cyclins.
Phosphorylation is generally considered a reversible modification used to change enzyme activity in different conditions. However, activating phosphorylation of Cdk by CAK appears to be an exception to this trend. In fact, CAK activity remains high throughout the cell cycle and is not regulated by any known cell-cycle control mechanism. However compared to normal cells, CAK activity is reduced in quiescent G0 cells and slightly elevated in tumor cells.
In mammals, activating phosphorylation by CAK can only occur once cyclin is bound. In budding yeast, activating phosphorylation by CAK can take place before cyclin binding. In both humans and yeast, cyclin binding is the rate limiting step in the activation of Cdk. Therefore, phosphorylation of Cdk by CAK is considered a post-translational modification that is necessary for enzyme activity. Although activating phosphorylation by CAK is not exploited for cell-cycle regulation purposes, it is a highly conserved process because CAK also regulates transcription.
Orthologs
CAK varies dramatically in different species. In vertebrates and Drosophila, CAK is a trimeric protein complex consisting of Cdk7 (a Cdk-related protein kinase), cyclin H, and Mat1. The Cdk7 subunit is responsible for Cdk activation while the Mat1 subunit is responsible for transcription. The CAK trimer can be phosphorylated on the activation segment of the Cdk7 subunit. However, unlike other Cdks, this phosphorylation might not be essential for CAK activity. In the presence of Mat1, activation of CAK does not require phosphorylation of the activation segment. However, in the absence of Mat1, phosphorylation of the activation segment is required for CAK activity.
In vertebrates, CAK localizes to the nucleus. This suggests that CAK is not only involved in cell-cycle regulation but is also involved in transcription. In fact, the Cdk7 subunit of vertebrate CAK phosphorylates several components of the transcriptional machinery.
In budding yeast, CAK is a monomeric protein kinase and is referred to as Cak1. Cak1 is distantly homologous to Cdks. Cak1 localizes to the cytoplasm and is responsible for Cdk activation. Budding yeast Cdk7 homolog, Kin28, does not have CAK activity.
Fission yeasts have two CAKs with both overlapping and specialized functions. The first CAK is a complex of Msc6 and Msc2. The Msc6 and Msc2 complex is related to the vertebrate Cdk7-cyclinH complex. Msc6 and Msc2 complex not only activates cell cycle Cdks but also regulates gene expression because it is part of the transcription factor TFIIH. The second fission yeast CAK, Csk1, is an ortholog of budding yeast Cak1. Csk1 can activate Cdks but is not essential for Cdk activity.
Table of Cdk-activating Kinases
External figures (credit: Oxford University Press, "Morgan: The Cell Cycle"): a table of Cdk-activating kinases at http://www.oup.com/uk/orc/bin/9780199206100/resources/figures/nsp-cellcycle-3-3-3_7.jpg and a diagram of Cdk activation at http://www.oup.com/uk/orc/bin/9780199206100/resources/figures/nsp-cellcycle-3-3-3_8.jpg
Structure
The conformation of the Cdk2 active site changes dramatically upon cyclin binding and CAK phosphorylation. The active site of Cdk2 lies in a cleft between the two lobes of the kinase. ATP binds deep within the cleft and its phosphate is oriented outwards. Protein substrates bind to the entrance of the active site cleft.
In its inactive form, Cdk2 cannot bind substrate because the entrance of its active site is blocked by the T-loop. Inactive Cdk2 also has a misoriented ATP binding site. When Cdk2 is inactive, the small L12 helix pushes the large PSTAIRE helix outwards. The PSTAIRE helix contains a residue, glutamate 51, that is important for positioning the ATP phosphates.
When cyclinA binds, several conformational changes take place. The T-loop moves out of active site entrance and no longer blocks the substrate binding site. The PSTAIRE helix moves in. The L12 helix becomes a beta strand. This allows glutamate 51 to interact with lysine 33. Aspartate 145 also changes position. Together these structural changes allow ATP phosphates to bind correctly.
When CAK phosphorylates Cdk's threonine residue 160, the T-loop flattens and interacts more closely with cyclin A. Phosphorylation also allows the Cdk to interact more effectively with substrates that contain the SPXK sequence. Phosphorylation also increases the activity of the cyclin A-Cdk2 complex. Different cyclins produce different conformation changes in Cdk.
External figure, structural basis of Cdk activation: http://www.oup.com/uk/orc/bin/9780199206100/resources/figures/nsp-cellcycle-3-4-3_12.jpg (credit: Oxford University Press, "Morgan: The Cell Cycle")
Additional functions
In addition to activating Cdks, CAK also regulates transcription. Two forms of CAK have been identified: free CAK and TFIIH-associated CAK. Free CAK is more abundant than TFIIH-associated CAK. Free CAK phosphorylates Cdks and is involved in cell cycle regulation. Associated CAK is part of the general transcription factor TFIIH. CAK associated with TFIIH phosphorylates proteins involved in transcription including RNA polymerase II. More specifically, associated CAK is involved in promoter clearance and progression of transcription from the preinitiation to the initiation stage.
In vertebrates, the trimeric CAK complex is responsible for transcription regulation. In budding yeast, the Cdk7 homolog, Kin28, regulates transcription. In fission yeast, the Msc6 Msc2 complex controls basal gene transcription.
In addition to regulating transcription, CAK also enhances transcription by phosphorylating retinoic acid and estrogen receptors. Phosphorylation of these receptors leads to increased expression of target genes. In leukemic cells, where DNA is damaged, CAK’s ability to phosphorylate retinoic acid and estrogen receptors is decreased. Decreased CAK activity creates a feedback loop, which turns off TFIIH activity.
CAK also plays a role in DNA damage response. The activity of CAK associated with TFIIH decreases when DNA is damaged by UV irradiation. Inhibition of CAK prevents cell cycle from progressing. This mechanism ensures the fidelity of chromosome transmission.
References
External links
Cell cycle | CDK-activating kinase | [
"Biology"
] | 1,703 | [
"Cell cycle",
"Cellular processes"
] |
5,814,292 | https://en.wikipedia.org/wiki/Angiopoietin | Angiopoietin is part of a family of vascular growth factors that play a role in embryonic and postnatal angiogenesis. Angiopoietin signaling most directly corresponds with angiogenesis, the process by which new arteries and veins form from preexisting blood vessels. Angiogenesis proceeds through sprouting, endothelial cell migration, proliferation, and vessel destabilization and stabilization. They are responsible for assembling and disassembling the endothelial lining of blood vessels. Angiopoietin cytokines are involved with controlling microvascular permeability, vasodilation, and vasoconstriction by signaling smooth muscle cells surrounding vessels.
There are now four identified angiopoietins: ANGPT1, ANGPT2, ANGPTL3, ANGPT4.
In addition, there are a number of proteins that are closely related to ('like') angiopoietins, including angiopoietin-related protein 1 and several other angiopoietin-like (ANGPTL) proteins.
Angiopoietin-1 is critical for vessel maturation, adhesion, migration, and survival. Angiopoietin-2, on the other hand, promotes cell death and disrupts vascularization. Yet, when it is in conjunction with vascular endothelial growth factors, or VEGF, it can promote neo-vascularization.
Structure
Structurally, angiopoietins have an N-terminal super clustering domain, a central coiled domain, a linker region, and a C-terminal fibrinogen-related domain responsible for the binding between the ligand and receptor.
Angiopoietin-1 encodes a 498 amino acid polypeptide with a molecular weight of 57 kDa whereas angiopoietin-2 encodes a 496 amino acid polypeptide.
Only clusters/multimers activate receptors
Angiopoietin-1 and angiopoietin-2 can form dimers, trimers, and tetramers. Angiopoietin-1 has the ability to form higher order multimers through its super clustering domain. However, not all of the structures can interact with the tyrosine kinase receptor. The receptor can only be activated at the tetramer level or higher.
Specific mechanisms
Tie pathway
The collective interactions between angiopoietins, receptor tyrosine kinases, vascular endothelial growth factors and their receptors form the two signaling pathways— Tie-1 and Tie-2. The two receptor pathways are named as a result of their role in mediating cell signals by inducing the phosphorylation of specific tyrosines. This in turn initiates the binding and activation of downstream intracellular enzymes, a process known as cell signaling.
Tie-2
Tie-2/Ang-1 signaling activates β1-integrin and N-cadherin in LSK-Tie2+ cells and promotes hematopoietic stem cell (HSC) interactions with extracellular matrix and its cellular components. Ang-1 promotes quiescence of HSC in vivo. This quiescence or slow cell cycling of HSCs induced by Tie-2/Ang-1 signaling contributes to the maintenance of long-term repopulating ability of HSC and the protection of the HSC compartment from various cellular stresses. Tie-2/Ang-1 signaling plays a critical role in the HSC that is required for the long-term maintenance and survival of HSC in bone marrow. In the endosteum, Tie-2/Ang-1 signaling is predominantly expressed by osteoblastic cells. Although which specific TIE receptors mediate signals downstream of angiogenesis stimulation is highly contested, it is clear that TIE-2 is capable of activation as a result of binding angiopoietins.
Angiopoietin proteins 1 through 4 are all ligands for Tie-2 receptors. Tie-1 heterodimerizes with Tie-2 to enhance and modulate signal transduction of Tie-2 for vascular development and maturation. These Tyrosine kinase receptors are typically expressed on vascular endothelial cells and specific macrophages for immune responses. Angiopoietin-1 is a growth factor produced by vascular support cells, specialized pericytes in the kidney, and hepatic stellate cells (ITO) cells in the liver. This growth factor is also a glycoprotein and functions as an agonist for the tyrosine receptor found in endothelial cells. Angiopoietin-1 and tyrosine kinase signaling are essential for regulating blood vessel development and the stability of mature vessels.
The expression of Angiopoietin-2 in the absence of vascular endothelial growth factor (VEGF) leads to endothelial cell death and vascular regression. Increased levels of Ang2 promote tumor angiogenesis, metastasis, and inflammation. Effective means to control Ang2 in inflammation and cancer should have clinical value. Angiopoeitin, more specifically Ang-1 and Ang-2, work hand in hand with VEGF to mediate angiogenesis. Ang-2 works as an antagonist of Ang-1 and promotes vessel regression if VEGF is not present. Ang-2 works with VEGF to facilitate cell proliferation and migration of endothelial cells. Changes in expression of Ang-1, Ang-2 and VEGF have been reported in the rat brain after cerebral ischemia.
Angiogenesis signaling
To migrate, the endothelial cells need to loosen the endothelial connections by breaking down the basal lamina and the ECM scaffold of blood vessels. These connections are a key determinant of vascular permeability and relieve peri-endothelial cell contact, which is also a major factor in vessel stability and maturity. After the physical barrier is removed, the growth factor VEGF, with additional contributions from other factors such as angiopoietin-1, integrins, and chemokines, plays an essential role. VEGF and Ang-1 are involved in endothelial tube formation.
Vascular permeability signaling
Angiopoietin-1 and angiopoietin-2 are modulators of endothelial permeability and barrier function. Endothelial cells secrete angiopoietin-2 for autocrine signaling while parenchymal cells of the extravascular tissue secrete angiopoietin-2 onto endothelial cells for paracrine signaling, which then binds to the extracellular matrix and is stored within the endothelial cells.
Cancer
Angiopoietin-2 has been proposed as a biomarker in different cancer types. Angiopoietin-2 expression levels are proportional to the cancer stage for both small and non-small cell lung cancers. It has been also implicated to play role in hepatocellular and endometrial carcinoma-induced angiogenesis. Experiments using blocking antibodies for angiopoietin-2 have shown to decrease metastasis to lungs and lymph nodes.
Clinical relevance
Deregulation of angiopoietin and the tyrosine kinase pathway is common in blood-related diseases such as diabetes, malaria, sepsis, and pulmonary hypertension. This is demonstrated by an increased ratio of angiopoietin-2 and angiopoietin-1 in blood serum. To be specific, angiopoietin levels provide an indication for sepsis. Research on angiopoietin-2 has shown that it is involved in the onset of septic shock. The combination of fever and high levels of angiopoietin-2 is correlated with a greater prospect of the development of septic shock. It has also been shown that imbalances between angiopoietin-1 and angiopoietin-2 signaling can act independently of each other. One angiopoietin factor can signal at high levels while the other angiopoietin factor remains at baseline level signaling.
Angiopoietin-2 is produced and stored in Weibel-Palade bodies in endothelial cells and acts as a TEK tyrosine kinase antagonist. As a result, endothelial activation, destabilization, and inflammation are promoted. Its role during angiogenesis depends on the presence of Vegf-a.
Serum levels of angiopoietin-2 expression are associated with the growth of multiple myeloma, angiogenesis, and overall survival in oral squamous cell carcinoma. Circulating angiopoietin-2 is a marker for early cardiovascular disease in children on chronic dialysis. Kaposi's sarcoma-associated herpesvirus induces rapid release of angiopoietin-2 from endothelial cells.
Angiopoietin-2 is elevated in patients with angiosarcoma.
Research has shown angiopoietin signaling to be relevant in treating cancer as well. During tumor growth, pro-angiogenic molecules and anti-angiogenic molecules are off balance. Equilibrium is disrupted such that the number of pro-angiogenic molecules are increased. Angiopoietins have been known to be recruited as well as VEGFs and platelet-derived growth factors (PDGFs). This is relevant for clinical use relative to cancer treatments because the inhibition of angiogenesis can aid in suppressing tumor proliferation.
References
External links
Angiogenesis
Growth factors | Angiopoietin | [
"Chemistry",
"Biology"
] | 1,967 | [
"Angiogenesis",
"Growth factors",
"Signal transduction"
] |
3,216,500 | https://en.wikipedia.org/wiki/Capacitively%20coupled%20plasma | A capacitively coupled plasma (CCP) is one of the most common types of industrial plasma sources. It essentially consists of two metal electrodes separated by a small distance, placed in a reactor. The gas pressure in the reactor can be lower than atmosphere or it can be atmospheric.
Description
A typical CCP system is driven by a single radio-frequency (RF) power supply, typically at 13.56 MHz. One of two electrodes is connected to the power supply, and the other one is grounded. As this configuration is similar in principle to a capacitor in an electric circuit, the plasma formed in this configuration is called a capacitively coupled plasma.
When an electric field is generated between electrodes, atoms are ionized and release electrons. The electrons in the gas are accelerated by the RF field and can ionize the gas directly or indirectly by collisions, producing secondary electrons. When the electric field is strong enough, it can lead to what is known as electron avalanche. After avalanche breakdown, the gas becomes electrically conductive due to abundant free electrons. This is often accompanied by light emission from excited atoms or molecules in the gas. When visible light is produced, plasma generation can be indirectly observed even with the naked eye.
A variation on capacitively coupled plasma involves isolating one of the electrodes, usually with a capacitor. The capacitor acts like a short circuit to the high frequency RF field, but like an open circuit to direct current (DC) field. Electrons impinge on the electrode in the sheath, and the electrode quickly acquires a negative charge (or self-bias) because the capacitor does not allow it to discharge to ground. This sets up a secondary, DC field across the plasma in addition to the alternating current (AC) field. Massive ions are unable to react to the quickly changing AC field, but the strong, persistent DC field accelerates them toward the self-biased electrode. These energetic ions are exploited in many microfabrication processes (see reactive-ion etching (RIE)) by placing a substrate on the isolated (self-biased) electrode.
Capacitively coupled plasmas have wide applications in the semiconductor processing industry for thin film deposition (see sputtering, plasma-enhanced chemical vapor deposition (PECVD)) and etching.
See also
Inductively coupled plasma
Multipactor effect
Plasma etching
References
Plasma types
Electronics manufacturing | Capacitively coupled plasma | [
"Physics",
"Engineering"
] | 496 | [
"Electronic engineering",
"Plasma types",
"Electronics manufacturing",
"Plasma physics"
] |
3,218,417 | https://en.wikipedia.org/wiki/Polysulfone | Polysulfones are a family of high performance thermoplastics. These polymers are known for their toughness and stability at high temperatures. Technically used polysulfones contain an aryl-SO2-aryl subunit. Due to the high cost of raw materials and processing, polysulfones are used in specialty applications and often are a superior replacement for polycarbonates.
Three polysulfones are used industrially: polysulfone (PSU), polyethersulfone (PES/PESU) and polyphenylene sulfone (PPSU). They can be used in the temperature range from -100 to +200 °C and are used for electrical equipment, in vehicle construction and medical technology. They are composed of para-linked aromatics, sulfonyl groups and ether groups and partly also alkyl groups. Polysulfones have outstanding resistance to heat and oxidation, hydrolysis resistance to aqueous and alkaline media and good electrical properties.
Nomenclature
The term "polysulfone" is normally used for polyarylethersulfones (PAES), since only aromatic polysulfones are used commercially. Furthermore, since ether groups are always present in these polysulfones, PAESs are also referred to as polyether sulfones (PES), poly(arylene sulfone)s or simply polysulfone (PSU).
Production
Historical
The simplest polysulfone poly(phenylene sulfone), known as early as 1960, is produced in a Friedel-Crafts reaction from benzenesulfonyl chloride:
n C6H5SO2Cl → (C6H4SO2)n + n HCl
With a melting point over 500 °C, the product is difficult to process. It exhibits attractive heat resistance, but its mechanical properties are rather poor. Polyaryl ether sulfones (PAES) represent a suitable alternative. Appropriate synthetic routes to PAES were developed almost simultaneously, and yet independently, by 3M Corporation and Union Carbide Corporation in the United States, and by ICI's Plastics Division in the United Kingdom. The polymers found at that time are still used today, but produced by a different synthesis process.
The original synthesis of PAES involved electrophilic aromatic substitution of a diaryl ether with the bis(sulfonyl chloride) of benzene. Reactions typically use a Friedel-Crafts catalyst, such as ferric chloride or antimony pentachloride:
n O(C6H5)2 + n SO2Cl2 → {[O(C6H4)2]SO2}n + 2n HCl
This route is complicated by the formation of isomers arising from both para- and ortho- substitution. Furthermore, cross-linking was observed, which strongly affects the mechanical properties of the polymer. This method has been abandoned.
Contemporary production methods
PAES are currently prepared by a polycondensation reaction of a diphenoxide and bis(4-chlorophenyl)sulfone (DCDPS). The sulfone group activates the chloride groups toward substitution. The required diphenoxide is produced in situ from a diphenol and sodium hydroxide. The cogenerated water is removed by azeotropic distillation (using toluene or chlorobenzene). The polymerization is carried out at 130–160 °C under inert conditions in a polar, aprotic solvent, e.g. dimethyl sulfoxide, forming a polyether concomitant with elimination of sodium chloride:
Bis(4-fluorophenyl)sulfone can be used in place of bis(4-chlorophenyl)sulfone. The difluoride is more reactive than the dichloride but more expensive. Through chain terminators (e.g. methyl chloride), the chain length can be controlled for melt-processing.
The diphenol is typically bisphenol-A or 1,4-dihydroxybenzene. Such step polymerizations require highly pure monomer and precise stoichiometry to ensure high molecular weight products.
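The sensitivity to stoichiometry can be illustrated with the Carothers relation for step-growth polymerization (an illustrative aside, not part of the source text): for an A-A + B-B polymerization with stoichiometric ratio r of the two monomers and conversion p of the limiting functional groups, the number-average degree of polymerization is X_n = (1 + r) / (1 + r - 2rp), so even a small imbalance sharply caps the attainable chain length.

# Carothers-type estimate of the number-average degree of polymerization
# for an A-A + B-B step polymerization (illustrative values only).
def degree_of_polymerization(r, p):
    """r: stoichiometric ratio of the two monomers (<= 1); p: conversion of the limiting groups."""
    return (1 + r) / (1 + r - 2 * r * p)

for r in (1.000, 0.999, 0.99, 0.95):
    # chain length drops sharply as r moves away from exact stoichiometry
    print(r, round(degree_of_polymerization(r, p=0.999), 1))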
DCDPS is the precursor to polymers known as Udel (from bisphenol A), PES, and Radel R. Udel is a high-performance amorphous sulfone polymer that can be molded into a variety of different shapes. It is both rigid and temperature-resistant, and has applications in everything from plumbing pipes, to printer cartridges, to automobile fuses. DCDPS also reacts with bisphenol S to form PES. Like Udel, PES is a rigid and thermally-resistant material with numerous applications.
Properties
Polysulfones are rigid, high-strength and transparent. They are also characterized by high strength and stiffness, retaining these properties between −100 °C and 150 °C. The glass transition temperature of polysulfones is between 190 and 230 °C. They have a high dimensional stability, the size change when exposed to boiling water or 150 °C air or steam generally falls below 0.1%. Polysulfone is highly resistant to mineral acids, alkali, and electrolytes, in pH ranging from 2 to 13. It is resistant to oxidizing agents (although PES will degrade over time), therefore it can be cleaned by bleaches. It is also resistant to surfactants and hydrocarbon oils. It is not resistant to low-polar organic solvents (e.g. ketones and chlorinated hydrocarbons) and aromatic hydrocarbons. Mechanically, polysulfone has high compaction resistance, recommending its use under high pressures. It is also stable in aqueous acids and bases and many non-polar solvents; however, it is soluble in dichloromethane and methylpyrrolidone.
Polysulfones are counted among the high performance plastics. They can be processed by injection molding, extrusion or hot forming.
Structure-property relationship
Poly(aryl ether sulfone)s are composed of aromatic groups, ether groups and sulfonyl groups. For a comparison of the properties of individual constituents poly(phenylene sulfone) can serve as an example, which consists of sulfonyl and phenyl groups only. Since both groups are thermally very stable, poly(phenylene sulfone) has an extremely high melting temperature (520 °C). However, the polymer chains are also so rigid that poly(phenylene sulfone) (PAS) decomposes before melting and can thus not be thermoplastically processed. Therefore, flexible elements must be incorporated into the chains, this is done in the form of ether groups. Ether groups allow a free rotation of the polymer chains. This leads to a significantly reduced melting point and also improves the mechanical properties by an increased impact strength. The alkyl groups in bisphenol A act also as a flexible element.
The stability of the polymer can also be attributed to individual structural elements: The sulfonyl group (in which sulfur is in the highest possible oxidation state) attracts electrons from neighboring benzene rings, causing electron deficiency. The polymer therefore opposes further electron loss, thus substantiating the high oxidation resistance. The sulfonyl group is also linked to the aromatic system by mesomerism and the bond is therefore strengthened by mesomeric energy. As a result, larger amounts of energy from heat or radiation can be absorbed by the molecular structure without causing any reactions (decomposition). The result of the mesomerism is that the configuration is particularly rigid. Based on the biphenylsulfonyl group, the polymer is thus durably heat resistant, oxidation resistant and still has a high stiffness even at elevated temperatures. The ether bond provides (as opposed to esters) hydrolysis resistance as well as some flexibility, which leads to impact strength. In addition, the ether bond leads to good heat resistance and better flow of the melt.
Applications
Polysulfone has one of the highest service temperatures among all melt-processable thermoplastics. Its resistance to high temperatures gives it a role of a flame retardant, without compromising its strength that usually results from addition of flame retardants. Its high hydrolysis stability allows its use in medical applications requiring autoclave and steam sterilization. However, it has low resistance to some solvents and undergoes weathering; this weathering instability can be offset by adding other materials into the polymer.
Membranes
Polysulfone allows easy manufacturing of membranes, with reproducible properties and controllable size of pores down to 40 nanometers. Such membranes can be used in applications like hemodialysis, waste water recovery, food and beverage processing, and gas separation. These polymers are also used in the automotive and electronic industries. Filter cartridges made from polysulfone membranes offer extremely high flow rates at very low differential pressures when compared with nylon or polypropylene media.
Polysulfone can be used as filtration media in filter sterilization.
Materials
Polysulfone can be reinforced with glass fibers. The resulting composite material has twice the tensile strength and three times increase of its Young's modulus.
Fuel cells
Polysulfone is often used as a copolymer. Recently, sulfonated polyethersulfones (SPES) have been studied as a promising material candidate among many other aromatic hydrocarbon-based polymers for highly durable proton-exchange membranes in fuel cells. Several reviews have summarized the progress reported on durability in this work. The biggest challenge for SPES application in fuel cells is improving its chemical durability. Under an oxidative environment, SPES can undergo sulfonic group detachment and main chain scission. However, the latter is more dominant; midpoint scission and an unzip mechanism have been proposed as the degradation mechanisms, depending on the strength of the polymer backbone.
Food service industry
Polysulfone food pans are used for the storage, heating, and serving of foods. The pans are made to Gastronorm standards and are available in the natural transparent amber colour of polysulfone. The wide working temperature range of -40°C to 190°C allow these pans to go from a deep freezer directly to a steam table or microwave oven. Polysulfone provides a non-stick surface for minimal food wastage and easy cleaning.
Industrially relevant polysulfones
Some industrially relevant polysulfones are listed in the following table:
References
Polymers
Plastics
Thermoplastics
Sulfones
Engineering plastic | Polysulfone | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,221 | [
"Unsolved problems in physics",
"Functional groups",
"Sulfones",
"Polymer chemistry",
"Polymers",
"Amorphous solids",
"Plastics"
] |
3,219,248 | https://en.wikipedia.org/wiki/Transgranular%20fracture | Transgranular fracture is a type of fracture that occurs through the crystal grains of a material. In contrast to intergranular fractures, which occur when a fracture follows the grain boundaries, this type of fracture traverses the material's microstructure directly through individual grains. This type of fracture typically results from a combination of high stresses and material defects, such as voids or inclusions, that create a path for crack propagation through the grains. A broad range of ductile or brittle materials, including metals, ceramics, and polymers, can experience transgranular fracture. When examined under scanning electron microscopy, this type of fracture reveals cleavage steps, river patterns, feather markings, dimples, and tongues. The fracture may change directions somewhat when entering a new grain in order to follow the new lattice orientation of that grain but this is a less severe direction change then would be required to follow the grain boundary. This results in a fairly smooth looking fracture with fewer sharp edges than one that follows the grain boundaries. This can be visualized as a jigsaw puzzle cut from a single sheet of wood with the wood grain showing. A transgranular fracture follows the grains in the wood, not the jigsaw edges of the puzzle pieces. This is in contrast to an intergranular fracture which, in this analogy, would follow the jigsaw edges, not the wood grain.
Mechanism of transgranular fracture
The mechanism of transgranular fracture may vary depending on the material and surrounding conditions under which the fracture occurs. However, some general steps are typically involved in the transgranular fracture process:
Crack initiation: The first step in transgranular fracture is the initiation of a crack within the material. This can be caused by a range of factors, such as manufacturing defects, surface defects, or exposure to high-stress conditions.
Crack propagation: Once the crack has initiated, it may spread throughout the material as a result of stress concentrations and other factors.
Plastic deformation: As the crack propagates, the material near the crack undergoes significant plastic deformation due to the local stress concentration. This deformation may lead to small voids or defects within the material, further promoting crack propagation.
Void coalescence: As the crack propagates, these small voids can grow and merge, forming larger voids or cavities within the material. These voids can further weaken the material and promote the propagation of the crack.
Final rupture: Eventually, the combined effects of crack propagation, plastic deformation, and void coalescence can lead to the final break of the material, resulting in transgranular fracture.
In ductile metals, the plastic deformation of the material can be a critical factor in the transgranular fracture process, while in brittle materials such as ceramics, the formation and growth of cracks can be influenced by factors such as grain size, porosity, and the presence of impurities or other defects.
Factors affecting transgranular fracture
Temperature: The temperature at which a material is loaded can also affect the occurrence and characteristics of transgranular fractures. In some materials, the occurrence of transgranular fracture may increase at lower temperatures due to increased embrittlement or reduced ductility.
Presence of defects or inclusions: As mentioned earlier, the presence of voids or inclusions within a material can create localized areas of stress concentration and weaken the material, making it more susceptible to transgranular fracture. The size, shape, and orientation of these defects can all affect the likelihood and severity of fracture.
Environmental factors: The presence of certain gases, liquids, or other environmental factors can also affect the likelihood of transgranular fracture. For example, hydrogen embrittlement can cause transgranular fractures in some materials by weakening the material at a microscopic level.
Surface conditions: The surface condition of a material, including the presence of scratches, cracks, or other defects, can also affect the occurrence and path of transgranular fracture.
Loading conditions: High-stress concentrations, rapid loading rates, and cyclic loading can all increase the likelihood of transgranular fractures. The direction of the applied stress can also influence the orientation and path of the crack propagation.
Transition from intergranular to transgranular fracture
The fracture behavior of materials can be significantly changed by precipitation-based grain boundary design. For example, Meindlhumer et al. produced a thin AlCrN film containing a specific distribution of precipitates within the grain boundaries. The precipitates acted as a barrier to crack propagation, increasing the material's resistance to intergranular cracking. Additionally, the precipitates altered the stress distribution within the material, promoting transgranular crack propagation instead. Furthermore, smaller precipitates with a more uniform distribution have been shown to be more effective at promoting transgranular fracture.
References
Fracture mechanics
Granularity of materials | Transgranular fracture | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,017 | [
"Structural engineering",
"Fracture mechanics",
"Materials science",
"Materials degradation",
"Materials",
"Particle technology",
"Granularity of materials",
"Matter"
] |
3,219,289 | https://en.wikipedia.org/wiki/Intergranular%20fracture | In fracture mechanics, intergranular fracture, intergranular cracking or intergranular embrittlement occurs when a crack propagates along the grain boundaries of a material, usually when these grain boundaries are weakened. The more commonly seen transgranular fracture occurs when the crack grows through the material grains. As an analogy, in a wall of bricks, intergranular fracture would correspond to a fracture that takes place in the mortar that keeps the bricks together.
Intergranular cracking is likely to occur if there is a hostile environmental influence and is favored by larger grain sizes and higher stresses. Intergranular cracking is possible over a wide range of temperatures. While transgranular cracking is favored by strain localization (which in turn is encouraged by smaller grain sizes), intergranular fracture is promoted by strain homogenization resulting from coarse grains.
Embrittlement, or loss of ductility, is often accompanied by a change in fracture mode from transgranular to intergranular fracture. This transition is particularly significant in the mechanism of impurity-atom embrittlement. Additionally, hydrogen embrittlement is a common category of embrittlement in which intergranular fracture can be observed.
Intergranular fracture can occur in a wide variety of materials, including steel alloys, copper alloys, aluminum alloys, and ceramics. In metals with multiple lattice orientations, when one lattice ends and another begins, the fracture changes direction to follow the new grain. This results in a fairly jagged looking fracture with straight edges of the grain and a shiny surface may be seen. In ceramics, intergranular fractures propagate through grain boundaries, producing smooth bumpy surfaces where grains can be easily identified.
Mechanisms of intergranular fracture
Though it is easy to identify intergranular cracking, pinpointing the cause is more complex as the mechanisms are more varied, compared to transgranular fracture. There are several other processes that can lead to intergranular fracture or preferential crack propagation at the grain boundaries:
Microvoid nucleation and coalescence at inclusions or second phase particles located along grain boundaries
Grain boundary crack and cavity formations associated with elevated temperature stress rupture conditions
Decohesion between contiguous grains due to the presence of impurity elements at grain boundaries and in association with aggressive atmospheres such as gaseous hydrogen and liquid metals
Stress corrosion cracking processes associated with chemical dissolution along grain boundaries
Cyclic loading conditions
When the material has an insufficient number of independent slip systems to accommodate plastic deformation between contiguous grains. This is also known as intercrystalline fracture or grain-boundary separation.
More rapid diffusion along grain boundaries than along grain interiors
Faster nucleation and growth of precipitates at the grain boundaries
Quench cracking, or crack growth following a quenching process, is another example of intergranular fracture and almost always occurs by intergranular processes. This process of quench cracking is promoted by weakened grain boundaries and large grain sizes and additionally influenced by the temperature gradient at which quenching occurs and volume expansion during transformation.
From an energy standpoint, the energy released by intergranular crack propagation is higher than that predicted by Griffith theory, implying that the additional energy term to propagate a crack comes from a grain-boundary mechanism.
Types of intergranular fracture
Intergranular fracture can be categorized into the following:
Dimpled intergranular fracture involves cases in which microvoid coalescence occurs in grain boundaries as a result of creep cavitation or void nucleation at grain boundary precipitates. Such fracture is characterized by dimples at the surface. Dimpled intergranular fracture typically leads to low macroscopic ductility, with dimpled topology revealed at the grain facets when observed at higher magnifications (1000 to 5000x). Impurities that adsorb at the grain boundaries promote dimpled intergranular fracture.
Intergranular brittle fracture involves cases in which the grain surfaces do not have dimples that signify microvoid coalescence. Such fracture is termed brittle due to fracture prior to plastic yielding. Causes include brittle second-phase particles at grain boundaries, impurity or atom segregation at grain boundaries, and environmentally-assisted embrittlement.
Intergranular fatigue fracture involves cases in which the intergranular fracture occurs as a result of cyclic loading, or fatigue. This specific type of intergranular fracture is often associated with improper materials processing or harsh environmental conditions where the grains are severely weakened. Stress applied at elevated temperatures (creep), grain boundary precipitates, thermal treatment causing segregation at grain boundaries, and environmentally assisted weakening of grain boundaries can lead to intergranular fatigue.
Role of solutes and impurities
At room temperature, intergranular fracture is commonly associated with altered cohesion resulting from segregation of solutes or impurities at the grain boundaries. Examples of solutes known to influence intergranular fracture are sulfur, phosphorus, arsenic, and antimony specifically in steels, lead in aluminum alloys, and hydrogen in numerous structural alloys. At high impurity levels, especially in the case of hydrogen embrittlement, the likelihood of intergranular fracture is greater. Solutes like hydrogen are hypothesized to stabilize and increase the density of strain-induced vacancies, leading to microcracks and microvoids at grain boundaries.
Role of grain boundary orientation
Intergranular cracking is dependent on the relative orientation of the common boundary between two grains. The path of intergranular fracture typically occurs along the highest-angle grain boundary. In a study, it was shown that cracking was never exhibited for boundaries with misorientation of up to 20 degrees, regardless of boundary type. At greater angles, large areas of cracked, uncracked, and mixed behavior were seen. The results imply that the degree of grain boundary cracking, and hence intergranular fracture, is largely determined by boundary porosity, or the amount of atomic misfit.
See also
Transgranular fracture
Intergranular corrosion
Crystallite
References
Granularity of materials
Fracture mechanics | Intergranular fracture | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,271 | [
"Structural engineering",
"Fracture mechanics",
"Materials science",
"Materials degradation",
"Materials",
"Particle technology",
"Granularity of materials",
"Matter"
] |
3,219,865 | https://en.wikipedia.org/wiki/Barium%20titanate | Barium titanate (BTO) is an inorganic compound with chemical formula BaTiO3. It is the barium salt of metatitanic acid. Barium titanate appears white as a powder and is transparent when prepared as large crystals. It is a ferroelectric, pyroelectric, and piezoelectric ceramic material that exhibits the photorefractive effect. It is used in capacitors, electromechanical transducers and nonlinear optics.
Structure
The solid exists in one of four polymorphs depending on temperature. From high to low temperature, these crystal symmetries of the four polymorphs are cubic, tetragonal, orthorhombic and rhombohedral crystal structure. All of these phases exhibit the ferroelectric effect apart from the cubic phase. The high temperature cubic phase is easiest to describe, as it consists of regular corner-sharing octahedral TiO6 units that define a cube with O vertices and Ti-O-Ti edges. In the cubic phase, Ba2+ is located at the center of the cube, with a nominal coordination number of 12. Lower symmetry phases are stabilized at lower temperatures and involve movement of the Ti4+ to off-center positions. The remarkable properties of this material arise from the cooperative behavior of the Ti4+ distortions.
Above the melting point, the liquid has a remarkably different local structure to the solid forms, with the majority of Ti4+ coordinated to four oxygen, in tetrahedral TiO4 units, which coexist with more highly coordinated units.
Production and handling properties
Barium titanate can be synthesized by the relatively simple sol–hydrothermal method.
Barium titanate can also be manufactured by heating barium carbonate and titanium dioxide. The reaction proceeds via liquid phase sintering. Single crystals can be grown at around 1100 °C from molten potassium fluoride. Other materials are often added as dopants, e.g., Sr to form solid solutions with strontium titanate. Barium titanate reacts with nitrogen trichloride and produces a greenish or gray mixture; the ferroelectric properties of the mixture are still present in this form.
Much effort has been spent studying the relationship between particle morphology and its properties. Barium titanate is one of the few ceramic compounds known to exhibit abnormal grain growth, in which large faceted grains grow in a matrix of finer grains, with profound implications on densification and physical properties. Fully dense nanocrystalline barium titanate has 40% higher permittivity than the same material prepared in classic ways. The addition of inclusions of barium titanate to tin has been shown to produce a bulk material with a higher viscoelastic stiffness than that of diamonds. Barium titanate goes through two phase transitions that change the crystal shape and volume. This phase change leads to composites where the barium titanates have a negative bulk modulus (Young's modulus), meaning that when a force acts on the inclusions, there is displacement in the opposite direction, further stiffening the composite.
Like many oxides, barium titanate is insoluble in water but attacked by sulfuric acid. It is also soluble in concentrated hydrochloric acid, and hydrofluoric acid. Its bulk room-temperature bandgap is 3.2 eV, but this increases to ~3.5 eV when the particle size is reduced from about 15 to 7 nm.
Uses
Barium titanate is a dielectric ceramic used in capacitors, with dielectric constant values as high as 7,000. Over a narrow temperature range, values as high as 15,000 are possible; most common ceramic and polymer materials are less than 10, while others, such as titanium dioxide (TiO2), have values between 20 and 70.
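As a rough illustration of why a relative permittivity of several thousand matters for capacitors, the ideal parallel-plate formula C = eps_r * eps_0 * A / d can be evaluated; a minimal sketch in Python, where the plate area, gap, and the eps_r values used are illustrative assumptions only:

# Ideal parallel-plate capacitance C = eps_r * eps_0 * A / d (illustrative numbers only)
eps_0 = 8.854e-12          # vacuum permittivity, F/m
eps_r_batio3 = 7000        # relative permittivity at the upper end quoted above
eps_r_tio2 = 70            # titanium dioxide, for comparison
area = 1e-3 * 1e-3         # assumed 1 mm x 1 mm plate, in m^2
gap = 10e-6                # assumed 10 micrometre dielectric thickness, in m

def capacitance(eps_r, area, gap):
    """Ideal parallel-plate capacitance in farads."""
    return eps_r * eps_0 * area / gap

print(capacitance(eps_r_batio3, area, gap))   # ~6.2e-9 F, a few nanofarads
print(capacitance(eps_r_tio2, area, gap))     # ~6.2e-11 F, two orders of magnitude less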
It is a piezoelectric material used in microphones and other transducers. The spontaneous polarization of barium titanate single crystals at room temperature ranges from 0.15 C/m2 in earlier studies to 0.26 C/m2 in more recent publications, and its Curie temperature is between 120 and 130 °C. The differences are related to the growth technique, with earlier flux-grown crystals being less pure than current crystals grown with the Czochralski process, which therefore have a larger spontaneous polarization and a higher Curie temperature.
As a piezoelectric material, it has been largely replaced by lead zirconate titanate, also known as PZT. Polycrystalline barium titanate has a positive temperature coefficient of resistance, making it a useful material for thermistors and self-regulating electric heating systems.
Barium titanate crystals find use in nonlinear optics. The material has high beam-coupling gain, and can be operated at visible and near-infrared wavelengths. It has the highest reflectivity of the materials used for self-pumped phase conjugation (SPPC) applications. It can be used for continuous-wave four-wave mixing with milliwatt-range optical power. For photorefractive applications, barium titanate can be doped by various other elements, e.g. iron.
Thin films of barium titanate display electrooptic modulation to frequencies over 40 GHz.
The pyroelectric and ferroelectric properties of barium titanate are used in some types of uncooled sensors for thermal cameras.
Barium titanate is widely used in thermistors and positive temperature coefficient heating elements. For these applications, barium titanate is manufactured with dopants to give the material semiconductor properties. Specific applications include overcurrent protection for motors, ballasts for fluorescent lights, automobile cabin air heaters, and consumer space heaters.
High-purity barium titanate powder is reported to be a key component of new barium titanate capacitor energy storage systems for use in electric vehicles.
Due to their elevated biocompatibility, barium titanate nanoparticles (BTNPs) have been recently employed as nanocarriers for drug delivery.
Magnetoelectric effect of giant strengths have been reported in thin films grown on barium titanate substrates.
Natural occurrence
Barioperovskite is a very rare natural analogue of BaTiO3, found as microinclusions in benitoite.
See also
Strontium titanate
Lead zirconate titanate
References
External links
Nanoparticle Compatibility: New Nanocomposite Processing Technique Creates More Powerful Capacitors
EEStor's "instant-charge" capacitor batteries
Titanates
Barium compounds
Ceramic materials
Piezoelectric materials
Ferroelectric materials
Infrared sensor materials
Nonlinear optical materials
Perovskites | Barium titanate | [
"Physics",
"Materials_science",
"Engineering"
] | 1,389 | [
"Physical phenomena",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Ceramic materials",
"Ceramic engineering",
"Piezoelectric materials",
"Hysteresis",
"Matter"
] |
3,220,879 | https://en.wikipedia.org/wiki/Metal%20aromaticity | Metal aromaticity or metalloaromaticity is the concept of aromaticity, found in many organic compounds, extended to metals and metal-containing compounds. The first experimental evidence for the existence of aromaticity in metals was found in aluminium cluster compounds of the type where M stands for lithium, sodium or copper. These anions can be generated in a helium gas by laser vaporization of an aluminium / lithium carbonate composite or a copper or sodium / aluminium alloy, separated and selected by mass spectrometry and analyzed by photoelectron spectroscopy. The evidence for aromaticity in these compounds is based on several considerations. Computational chemistry shows that these aluminium clusters consist of a tetranuclear plane and a counterion at the apex of a square pyramid. The unit is perfectly planar and is not perturbed by the presence of the counterion or even the presence of two counterions in the neutral compound . In addition its HOMO is calculated to be a doubly occupied delocalized pi system making it obey Hückel's rule. Finally a match exists between the calculated values and the experimental photoelectron values for the energy required to remove the first 4 valence electrons. The first fully metal aromatic compound was a cyclogallane with a Ga32- core discovered by Gregory Robinson in 1995.
D-orbital aromaticity is found in trinuclear tungsten and molybdenum metal clusters generated by laser vaporization of the pure metals in the presence of oxygen in a helium stream. In these clusters the three metal centers are bridged by oxygen and each metal has two terminal oxygen atoms. The first signal in the photoelectron spectrum corresponds to the removal of the valence electron with the lowest energy in the anion to the neutral compound. This energy turns out to be comparable to that of bulk tungsten trioxide and molybdenum trioxide. The photoelectric signal is also broad which suggests a large difference in conformation between the anion and the neutral species. Computational chemistry shows that the anions and dianions are ideal hexagons with identical metal-to-metal bond lengths. Tritantalum oxide clusters (Ta3O3−) also are observed to exhibit possible D-orbital aromaticity.
The molecules discussed thus far only exist diluted in the gas phase. A study exploring the properties of a compound formed in water from sodium molybdate () and iminodiacetic acid also revealed evidence of aromaticity, but this compound has actually been isolated. X-ray crystallography showed that the sodium atoms are arranged in layers of hexagonal clusters akin to pentacenes. The sodium-to-sodium bond lengths are unusually short (327 pm versus 380 pm in elemental sodium) and, like benzene, the ring is planar. In this compound each sodium atom has a distorted octahedral molecular geometry with coordination to molybdenum atoms and water molecules. The experimental evidence is supported by computed NICS aromaticity values.
See also
References
Cluster chemistry
Chemical bonding | Metal aromaticity | [
"Physics",
"Chemistry",
"Materials_science"
] | 627 | [
"Cluster chemistry",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Organometallic chemistry"
] |
3,221,283 | https://en.wikipedia.org/wiki/Base%20excess | In physiology, base excess and base deficit refer to an excess or deficit, respectively, in the amount of base present in the blood. The value is usually reported as a concentration in units of mEq/L (mmol/L), with positive numbers indicating an excess of base and negative a deficit. A typical reference range for base excess is −2 to +2 mEq/L.
Comparison of the base excess with the reference range assists in determining whether an acid/base disturbance is caused by a respiratory, metabolic, or mixed metabolic/respiratory problem. While carbon dioxide defines the respiratory component of acid–base balance, base excess defines the metabolic component. Accordingly, measurement of base excess is defined, under a standardized pressure of carbon dioxide, by titrating back to a standardized blood pH of 7.40.
The predominant base contributing to base excess is bicarbonate. Thus, a deviation of serum bicarbonate from the reference range is ordinarily mirrored by a deviation in base excess. However, base excess is a more comprehensive measurement, encompassing all metabolic contributions.
Definition
Base excess is defined as the amount of strong acid that must be added to each liter of fully oxygenated blood to return the pH to 7.40 at a temperature of 37°C and a pCO2 of 40 mmHg (5.3 kPa). A base deficit (i.e., a negative base excess) can be correspondingly defined by the amount of strong base that must be added.
A further distinction can be made between actual and standard base excess: actual base excess is that present in the blood, while standard base excess is the value when the hemoglobin is at 5 g/dl. The latter gives a better view of the base excess of the entire extracellular fluid.
Base excess (or deficit) is one of several values typically reported with arterial blood gas analysis that is derived from other measured data.
The term and concept of base excess were first introduced by Poul Astrup and Ole Siggaard-Andersen in 1958.
Estimation
Base excess can be estimated from the bicarbonate concentration ([HCO3−]) and pH by the equation:
with units of mEq/L. The same can be alternatively expressed as
Calculations are based on the Henderson–Hasselbalch equation, pH = 6.1 + log10([HCO3−] / (0.03 × pCO2)), with pCO2 in mmHg and [HCO3−] in mmol/L:
Ultimately the end result is:
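A minimal computational sketch of such an estimate follows. The coefficients are a commonly quoted Van Slyke-type approximation for standard base excess; they are stated here as an assumption, vary slightly between published sources, and the function is illustrative rather than authoritative:

def estimated_base_excess(hco3_mmol_per_l, ph):
    # Commonly quoted approximation (coefficients assumed; check a primary
    # source before any clinical use):
    # SBE ~ 0.93 * ([HCO3-] - 24.4 + 14.8 * (pH - 7.40))
    return 0.93 * (hco3_mmol_per_l - 24.4 + 14.8 * (ph - 7.40))

# A sample with [HCO3-] = 24.4 mmol/L and pH = 7.40 gives ~0, i.e. no
# metabolic disturbance under this approximation.
print(round(estimated_base_excess(24.4, 7.40), 1))   # 0.0
print(round(estimated_base_excess(15.0, 7.25), 1))   # about -10.8, a marked base deficit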
Interpretation
Base excess beyond the reference range indicates
metabolic alkalosis, or respiratory acidosis with renal compensation if too high (more than +2 mEq/L)
metabolic acidosis, or respiratory alkalosis with renal compensation if too low (less than −2 mEq/L)
Blood pH is determined by both a metabolic component, measured by base excess, and a respiratory component, measured by PaCO2 (partial pressure of carbon dioxide). Often a disturbance in one triggers a partial compensation in the other. A secondary (compensatory) process can be readily identified because it opposes the observed deviation in blood pH.
For example, inadequate ventilation, a respiratory problem, causes a buildup of CO2, hence respiratory acidosis; the kidneys then attempt to compensate for the low pH by raising blood bicarbonate. The kidneys only partially compensate, so the patient may still have a low blood pH, i.e. acidemia. In summary, the kidneys partially compensate for respiratory acidosis by raising blood bicarbonate.
A high base excess, thus metabolic alkalosis, usually involves an excess of bicarbonate. It can be caused by
Compensation for primary respiratory acidosis
Excessive loss of HCl in gastric acid by vomiting
Renal overproduction of bicarbonate, in either contraction alkalosis or Cushing's disease
A base deficit (a below-normal base excess), thus metabolic acidosis, usually involves either excretion of bicarbonate or neutralization of bicarbonate by excess organic acids. Common causes include
Compensation for primary respiratory alkalosis
Diabetic ketoacidosis, in which high levels of acidic ketone bodies are produced
Lactic acidosis, due to anaerobic metabolism during heavy exercise or hypoxia
Chronic kidney failure, preventing excretion of acid and resorption and production of bicarbonate
Diarrhea, in which large amounts of bicarbonate are excreted
Ingestion of poisons such as methanol, ethylene glycol, or excessive aspirin
The serum anion gap is useful for determining whether a base deficit is caused by addition of acid or loss of bicarbonate, as illustrated in the sketch following the list below.
Base deficit with elevated anion gap indicates addition of acid (e.g., ketoacidosis).
Base deficit with normal anion gap indicates loss of bicarbonate (e.g., diarrhea). The anion gap is maintained because bicarbonate is exchanged for chloride during excretion.
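A minimal decision sketch of the two rules above, in Python; the anion-gap reference range used here (upper limit of 16 mEq/L) is an assumed typical value rather than one taken from this article:

def classify_base_deficit(base_excess, anion_gap, gap_upper_normal=16):
    """Rough triage of a base deficit using the anion gap (teaching sketch only)."""
    if base_excess >= -2:
        return "no significant base deficit"
    if anion_gap > gap_upper_normal:
        return "base deficit with elevated anion gap: suggests added acid (e.g. ketoacidosis)"
    return "base deficit with normal anion gap: suggests bicarbonate loss (e.g. diarrhea)"

print(classify_base_deficit(base_excess=-8, anion_gap=24))
print(classify_base_deficit(base_excess=-8, anion_gap=12))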
See
Acid–base homeostasis
Metabolic acidosis / Metabolic alkalosis
Arterial blood gas
References
External links
acid-base.com
Anthology on Base Excess (O.Siggaard-Andersen)
Emedicine: Lactic Acidosis
Chemical pathology
Diagnostic intensive care medicine | Base excess | [
"Chemistry",
"Biology"
] | 1,036 | [
"Biochemistry",
"Chemical pathology"
] |
16,644,505 | https://en.wikipedia.org/wiki/List%20of%20RNAs | Ribonucleic acid (RNA) occurs in different forms within organisms and serves many different roles. Listed here are the types of RNA, grouped by role. Abbreviations for the different types of RNA are listed and explained.
By role
RNA abbreviations
See also
List of cis-regulatory RNA elements
RNA: Types of RNA
Non-coding RNA
References
External links
Rfam database — a collection of RNA families
European ribosomal RNA database
RNA
"Chemistry"
] | 91 | [
"Molecular-biology-related lists",
"Molecular biology"
] |
16,646,139 | https://en.wikipedia.org/wiki/Dash%20Express | The Dash Express was an Internet-enabled personal navigation device manufactured by Dash Navigation. The Dash Express transmitted information back to Dash Navigation over a GPRS connection in order to enhance traffic routing, and also used Wi-Fi for device updates. At the time of its availability, the Dash Express was only available for use in the US.
In June 2009, Research in Motion acquired Dash Navigation and discontinued service and support of the Dash Express product effective June 30, 2010.
Hardware
The hardware of the Dash Express was developed by the Taiwanese hardware manufacturer FIC (First International Computer), in its Openmoko division. It was developed under the code name "Dash Cavalier" with the model number HXD8v2.
References
External links
Official site of Dash.net
Global Positioning System | Dash Express | [
"Technology",
"Engineering"
] | 160 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
16,646,920 | https://en.wikipedia.org/wiki/Koenig%27s%20manometric%20flame%20apparatus | Koenig's manometric flame apparatus was a laboratory instrument invented in 1862 by the German physicist Rudolph Koenig, and used to visualize sound waves. It was the nearest equivalent of the modern oscilloscope in the late nineteenth and early twentieth centuries.
Description
The manometric flame apparatus consisted of a chamber which acted in the same way as a modern microphone. Sound from the source to be measured was concentrated by means of a horn or tube into one half of the capsule chamber. The chamber was divided in two by an elastic diaphragm, usually rubber. The sound caused the diaphragm to vibrate which modulated a flow of flammable illumination gas passing through the other half of the chamber. The illumination gas was passed to a Bunsen burner, the flame of which would then increase or decrease in size at the same frequency as the sound source.
The change in flame size was too fast to be easily seen with the naked eye, and a stroboscope — usually in the form of a rotating many sided mirror — was used to view the flame. The frequency of the sound could then be calculated from the apparent distance between the flame images in the mirror and the known speed of its rotation.
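The arithmetic behind that calculation can be sketched as follows; the mirror geometry and tongue count below are invented for illustration, and the formula assumes the simplification that the whole of one mirror face's sweep is visible to the observer:

# With an N-faced rotating mirror turning at R revolutions per second, the flame
# image is swept across the field of view N*R times per second. If k bright
# "tongues" are counted in one sweep, the sound frequency is roughly k * N * R.
faces = 4               # four-sided mirror (assumed)
rev_per_s = 12.0        # rotation rate (assumed)
tongues_per_sweep = 9   # counted flame images in one sweep (assumed)

frequency_hz = tongues_per_sweep * faces * rev_per_s
print(frequency_hz)     # 432.0 Hz under these assumed numbers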
Alexander Graham Bell used this type of equipment to study the performance of his microphones and demonstrated it in his display at the 1876 Philadelphia Centennial Exhibition. He replaced the rubber diaphragm with an iron disc which was driven by an electromagnet with current fed from a microphone. This apparatus was capable of giving quantitative measures of the performance of his microphones.
A type of Fourier analyzer can be constructed by connecting a number of manometric flame capsules each to a Helmholtz resonator tuned to either the fundamental frequency of the sound to be analyzed, or one of its harmonics. The flames produced from each capsule are then an indication of the strength of each of the Fourier components of the sound.
Notes
References
1. The Koinge manometric flame apparatus Jim & Rhoda Morris at SciTechAntiques. Accessed March 2008
2.Manometric Flame Apparatus Kenyon College. Gambier, Ohio. Accessed March 2008
3.Fourier Analysis Kenyon College. Gambier, Ohio. Accessed March 2008
4.Flame manometer Case Western Reserve University Physics Department. Accessed March 2008
Measuring instruments
Laboratory equipment
History of physics | Koenig's manometric flame apparatus | [
"Technology",
"Engineering"
] | 481 | [
"Measuring instruments"
] |
16,647,678 | https://en.wikipedia.org/wiki/Segmented%20spindle | A segmented spindle, also known by the trademark Kataka, is a specialized mechanical linear actuator conceived by the Danish mechanical engineer Jens Joerren Soerensen during the mid-1990s. The actuator forms a telescoping tubular column, or spindle, from linked segments resembling curved parallelograms. The telescoping linear actuator has a lifting capacity up to 200 kg (~440 pounds) for a travel of 400 mm (~15.75 inches).
A short elongated housing forms the base of the actuator and includes an electrical gear drive and a storage magazine for the spindle segments. The drive spins a helically grooved wheel that engages the similarly grooved inside face of the spindle segments. As the wheel spins, it simultaneously pulls the segments from their horizontal arrangement in the magazine and stacks them along the vertical path of a helix into a rigid tubular column. The reverse process lowers the column.
See also
Helical band actuator
Rigid belt actuator
Rigid chain actuator
References
External links
Kataka web site
Actuators
Hardware (mechanical)
Gears | Segmented spindle | [
"Physics",
"Technology",
"Engineering"
] | 230 | [
"Physical systems",
"Machines",
"Hardware (mechanical)",
"Construction"
] |
16,649,410 | https://en.wikipedia.org/wiki/Pushforward%20%28homology%29 | In algebraic topology, the pushforward of a continuous function f : X → Y between two topological spaces is a homomorphism f∗ : Hn(X) → Hn(Y) between the homology groups for n ≥ 0.
Homology is a functor which converts a topological space X into a sequence of homology groups Hn(X). (Often, the collection of all such groups is referred to using the notation H∗(X); this collection has the structure of a graded ring.) In any category, a functor must induce a corresponding morphism. The pushforward is the morphism corresponding to the homology functor.
Definition for singular and simplicial homology
We build the pushforward homomorphism as follows (for singular or simplicial homology):
First, the map f induces a homomorphism f# between the singular or simplicial chain complexes Cn(X) and Cn(Y), defined by composing each singular n-simplex σ : Δn → X with f to obtain a singular n-simplex of Y, f#(σ) = f ∘ σ : Δn → Y, and extending this linearly via f#(Σ nσ σ) = Σ nσ f#(σ).
The maps f# satisfy f# ∘ ∂ = ∂ ∘ f#, where ∂ is the boundary operator between chain groups, so f# defines a chain map.
Therefore, f# takes cycles to cycles, since ∂α = 0 implies ∂(f#(α)) = f#(∂α) = 0. Also f# takes boundaries to boundaries since f#(∂β) = ∂(f#(β)).
Hence f# induces a homomorphism f∗ : Hn(X) → Hn(Y) between the homology groups for n ≥ 0.
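In symbols, the chain-map property and the resulting induced map on homology used above can be summarized as follows (a standard restatement in the usual notation, with [α] the homology class of a cycle α):

\partial \circ f_\# = f_\# \circ \partial,
\qquad
f_*([\alpha]) = [f_\#(\alpha)] \quad \text{for every } n\text{-cycle } \alpha \in C_n(X).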
Properties and homotopy invariance
Two basic properties of the push-forward are:
(g ∘ f)∗ = g∗ ∘ f∗ for the composition of maps f : X → Y and g : Y → Z.
(idX)∗ = id, where idX : X → X refers to the identity function of X and id refers to the identity isomorphism of the homology groups Hn(X).
(This shows the functoriality of the pushforward.)
A main result about the push-forward is the homotopy invariance: if two maps f, g : X → Y are homotopic, then they induce the same homomorphism f∗ = g∗ : Hn(X) → Hn(Y).
This immediately implies (by the above properties) that the homology groups of homotopy equivalent spaces are isomorphic: the maps f∗ : Hn(X) → Hn(Y) induced by a homotopy equivalence f : X → Y are isomorphisms for all n.
See also
Pullback (cohomology)
References
Allen Hatcher, Algebraic topology. Cambridge University Press, 2002.
Topology
Homology theory | Pushforward (homology) | [
"Physics",
"Mathematics"
] | 393 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
16,652,317 | https://en.wikipedia.org/wiki/Linear%20motion | Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position x, which varies with t (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is motion in a single dimension, the distance traveled by an object in a particular direction is the same as its displacement. The SI unit of displacement is the metre. If x1 is the initial position of an object and x2 is the final position, then mathematically the displacement is given by: Δx = x2 − x1
The equivalent of displacement in rotational motion is the angular displacement measured in radians.
The displacement of an object cannot be greater than the distance because it is also a distance but the shortest one. Consider a person travelling to work daily. Overall displacement when he returns home is zero, since the person ends up back where he started, but the distance travelled is clearly not zero.
Velocity
Velocity refers to a displacement in one direction with respect to an interval of time. It is defined as the rate of change of displacement over change in time. Velocity is a vector quantity, representing a direction and a magnitude of movement. The magnitude of a velocity is called speed. The SI unit of speed is the metre per second (m/s).
Average velocity
The average velocity of a moving body is its total displacement divided by the total time needed to travel from the initial point to the final point. It is an overall measure of the velocity for the distance travelled. Mathematically, it is given by: v_avg = Δx/Δt = (x2 − x1)/(t2 − t1)
where:
t1 is the time at which the object was at position x1 and
t2 is the time at which the object was at position x2
The magnitude of the average velocity is called an average speed.
Instantaneous velocity
In contrast to an average velocity, referring to the overall motion in a finite time interval, the instantaneous velocity of an object describes the state of motion at a specific point in time. It is defined by letting the length of the time interval Δt tend to zero; that is, the velocity is the time derivative of the displacement as a function of time, v = dx/dt = lim Δt→0 (Δx/Δt).
The magnitude of the instantaneous velocity is called the instantaneous speed. The instantaneous velocity equation comes from finding the limit as t approaches 0 of the average velocity. The instantaneous velocity shows the position function with respect to time. From the instantaneous velocity the instantaneous speed can be derived by getting the magnitude of the instantaneous velocity.
Acceleration
Acceleration is defined as the rate of change of velocity with respect to time. Acceleration is the second derivative of displacement, i.e. acceleration can be found by differentiating position with respect to time twice or differentiating velocity with respect to time once. The SI unit of acceleration is m/s2, or metre per second squared.
If a_avg is the average acceleration and Δv is the change in velocity over the time interval Δt, then mathematically, a_avg = Δv/Δt
The instantaneous acceleration is the limit, as Δt approaches zero, of the ratio of Δv and Δt, i.e., a = dv/dt = d2x/dt2
Jerk
The rate of change of acceleration, the third derivative of displacement, is known as jerk. The SI unit of jerk is m/s3. In the UK jerk is also referred to as jolt.
Jounce
The rate of change of jerk, the fourth derivative of displacement, is known as jounce. The SI unit of jounce is m/s4, which can be pronounced as metres per quartic second.
Formulation
In the case of constant acceleration, the four physical quantities acceleration, velocity, time and displacement can be related by using the equations of motion: v = u + at, s = ut + at2/2, v2 = u2 + 2as, and s = (u + v)t/2.
Here,
is the initial velocity
is the final velocity
is acceleration
is displacement
is time
These relationships can be demonstrated graphically. The gradient of a line on a displacement time graph represents the velocity. The gradient of the velocity time graph gives the acceleration while the area under the velocity time graph gives the displacement. The area under a graph of acceleration versus time is equal to the change in velocity.
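As a quick illustration of those relationships, the sketch below evaluates the constant-acceleration equations for an invented example (the initial velocity, acceleration and time are made-up numbers) and checks that the standard expressions agree with each other:

def uniformly_accelerated(u, a, t):
    """Return (final velocity, displacement) after time t for constant acceleration a."""
    v = u + a * t                  # v = u + a t
    s = u * t + 0.5 * a * t ** 2   # s = u t + (1/2) a t^2
    return v, s

u, a, t = 3.0, 2.0, 4.0            # made-up values: m/s, m/s^2, s
v, s = uniformly_accelerated(u, a, t)
print(v, s)                        # 11.0 m/s, 28.0 m

# Consistency checks with the other two standard equations:
assert abs(v**2 - (u**2 + 2 * a * s)) < 1e-9      # v^2 = u^2 + 2 a s
assert abs(s - 0.5 * (u + v) * t) < 1e-9          # s = (u + v) t / 2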
Comparison to circular motion
The following table refers to rotation of a rigid body about a fixed axis: s is arc length, r is the distance from the axis to any point, and a_t is the tangential acceleration, which is the component of the acceleration that is parallel to the motion. In contrast, the centripetal acceleration, a_c = v2/r = ω2r, is perpendicular to the motion. The component of the force parallel to the motion, or equivalently, perpendicular to the line connecting the point of application to the axis, is F⊥. The sum is over j from 1 to N particles and/or points of application.
The following table shows the analogy in derived SI units:
See also
Angular motion
Centripetal force
Inertial frame of reference
Linear actuator
Linear bearing
Linear motor
Motion graphs and derivatives
Reciprocating motion
Rectilinear propagation
Uniformly accelerated linear motion
References
Further reading
Resnick, Robert and Halliday, David (1966), Physics, Chapter 3 (Vol I and II, Combined edition), Wiley International Edition, Library of Congress Catalog Card No. 66-11527
Tipler P.A., Mosca G., "Physics for Scientists and Engineers", Chapter 2 (5th edition), W. H. Freeman and company: New York and Basing stoke, 2003.
External links
Classical mechanics | Linear motion | [
"Physics"
] | 1,311 | [
"Physical phenomena",
"Classical mechanics",
"Motion (physics)",
"Mechanics",
"Linear motion"
] |
16,653,044 | https://en.wikipedia.org/wiki/Landau%E2%80%93Lifshitz%20model | In solid-state physics, the Landau–Lifshitz equation (LLE), named for Lev Landau and Evgeny Lifshitz, is a partial differential equation describing time evolution of magnetism in solids, depending on 1 time variable and 1, 2, or 3 space variables.
Landau–Lifshitz equation
The LLE describes an anisotropic magnet. The equation is described as follows: it is an equation for a vector field S, in other words a function on R1+n taking values in R3. The equation depends on a fixed symmetric 3-by-3 matrix J, usually assumed to be diagonal; that is, J = diag(J1, J2, J3). The LLE is then given by Hamilton's equation of motion for the Hamiltonian
(where J(S) is the quadratic form of J applied to the vector S)
which is
In 1+1 dimensions, this equation is
In 2+1 dimensions, this equation takes the form
which is the (2+1)-dimensional LLE. For the (3+1)-dimensional case, the LLE looks like
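For orientation, a commonly quoted form of the anisotropic LLE is displayed below; sign and normalization conventions vary between sources, so this should be read as a reference sketch rather than the particular normalization used in this article:

\frac{\partial \mathbf{S}}{\partial t}
  = \mathbf{S} \wedge \sum_{i=1}^{n} \frac{\partial^{2} \mathbf{S}}{\partial x_i^{2}}
  \; + \; \mathbf{S} \wedge J\mathbf{S},
  \qquad n = 1, 2, 3.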
Integrable reductions
In the general case LLE (2) is nonintegrable, but it admits two integrable reductions:
a) in 1+1 dimensions, that is Eq. (3), it is integrable
b) in the isotropic case, when the anisotropy term vanishes (for example J = 0). In this case the (1+1)-dimensional LLE (3) turns into the continuous classical Heisenberg ferromagnet equation (see e.g. Heisenberg model (classical)), which is already integrable.
See also
Nonlinear Schrödinger equation
Heisenberg model (classical)
Spin wave
Micromagnetism
Ishimori equation
Magnet
Ferromagnetism
References
Kosevich A.M., Ivanov B.A., Kovalev A.S. Nonlinear magnetization waves. Dynamical and topological solitons. – Kiev: Naukova Dumka, 1988. – 192 p.
Magnetic ordering
Partial differential equations
Lev Landau | Landau–Lifshitz model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 420 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
2,334,098 | https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20Riemann%20zeta%20function | In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. It is often denoted ζ(s) and is named after the mathematician Bernhard Riemann. When the argument s is a real number greater than one, the zeta function satisfies the equation ζ(s) = 1/1^s + 1/2^s + 1/3^s + ⋯, the sum of n^(−s) over all positive integers n.
It can therefore provide the sum of various convergent infinite series, such as ζ(2) = 1/1^2 + 1/2^2 + 1/3^2 + ⋯. Explicit or numerically efficient formulae exist for ζ(s) at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments.
The same equation as above also holds when s is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation, except for a simple pole at s = 1. The complex derivative exists in this more general region, making the zeta function a meromorphic function. The above equation no longer applies for these extended values of s, for which the corresponding summation would diverge. For example, the full zeta function exists at s = 0 (and is therefore finite there), but the corresponding series would be 1 + 1 + 1 + 1 + ⋯, whose partial sums would grow indefinitely large.
The zeta function values listed below include function values at the negative even numbers (s = −2, −4, −6, ...), for which ζ(s) = 0 and which make up the so-called trivial zeros. The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis.
The Riemann zeta function at 0 and 1
At zero, one has ζ(0) = −1/2.
At 1 there is a pole, so ζ(1) is not finite, but the left and right limits are: lim s→1− ζ(s) = −∞ and lim s→1+ ζ(s) = +∞.
Since it is a pole of first order, it has a complex residue lim s→1 (s − 1)ζ(s) = 1.
Positive integers
Even positive integers
For the even positive integers 2n, one has the relationship to the Bernoulli numbers B2n: ζ(2n) = (−1)^(n+1) (2π)^(2n) B2n / (2(2n)!).
The computation of ζ(2) is known as the Basel problem. The value of ζ(4) is related to the Stefan–Boltzmann law and Wien approximation in physics. The first few values are given by: ζ(2) = π^2/6 ≈ 1.6449, ζ(4) = π^4/90 ≈ 1.0823, ζ(6) = π^6/945 ≈ 1.0173, and ζ(8) = π^8/9450 ≈ 1.0041.
Taking the limit n → ∞, one obtains ζ(∞) = 1.
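These closed forms are easy to check symbolically; the sketch below (assuming the SymPy library is available) compares ζ(2n) with the Bernoulli-number expression quoted above:

from sympy import zeta, bernoulli, pi, factorial, simplify

# zeta(2n) = (-1)**(n+1) * B_{2n} * (2*pi)**(2n) / (2 * (2n)!)
for n in range(1, 5):
    closed_form = (-1)**(n + 1) * bernoulli(2 * n) * (2 * pi)**(2 * n) / (2 * factorial(2 * n))
    assert simplify(zeta(2 * n) - closed_form) == 0
    print(2 * n, zeta(2 * n))   # pi**2/6, pi**4/90, pi**6/945, pi**8/9450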
The relationship between zeta at the positive even integers and powers of pi may be written as
where and are coprime positive integers for all . These are given by the integer sequences and , respectively, in OEIS. Some of these values are reproduced below:
If we let be the coefficient of as above,
then we find recursively,
This recurrence relation may be derived from that for the Bernoulli numbers.
Also, there is another recurrence:
which can be proved, using that
The values of the zeta function at non-negative even integers have the generating function:
Since
The formula also shows that for ,
Odd positive integers
The sum of the harmonic series is infinite.
The value ζ(3) ≈ 1.2020569 is also known as Apéry's constant and has a role in the electron's gyromagnetic ratio.
The value also appears in Planck's law.
These and additional values are:
It is known that ζ(3) is irrational (Apéry's theorem) and that infinitely many of the numbers ζ(2n + 1), n a positive integer, are irrational. There are also results on the irrationality of values of the Riemann zeta function at the elements of certain subsets of the positive odd integers; for example, at least one of ζ(5), ζ(7), ζ(9), ζ(11) is irrational.
Values of the zeta function at the positive odd integers appear in physics, specifically in correlation functions of the antiferromagnetic XXX spin chain.
Most of the identities following below are provided by Simon Plouffe. They are notable in that they converge quite rapidly, giving almost three digits of precision per iteration, and are thus useful for high-precision calculations.
Plouffe stated the following identities without proof. Proofs were later given by other authors.
ζ(5)
ζ(7)
Note that the sum is in the form of a Lambert series.
ζ(2n + 1)
By defining the quantities
a series of relationships can be given in the form
where an, bn, cn and dn are positive integers. Plouffe gives a table of values:
These integer constants may be expressed as sums over Bernoulli numbers, as given in (Vepstas, 2006) below.
A fast algorithm for the calculation of Riemann's zeta function for any integer argument is given by E. A. Karatsuba.
Negative integers
In general, for negative integers (and also zero), one has ζ(−n) = (−1)^n B(n+1)/(n + 1) for n ≥ 0, where the Bk are the Bernoulli numbers (with the convention B1 = −1/2).
The so-called "trivial zeros" occur at the negative even integers:
The first few values for negative odd integers are ζ(−1) = −1/12 (the value associated with the Ramanujan summation of 1 + 2 + 3 + 4 + ⋯), ζ(−3) = 1/120, ζ(−5) = −1/252, and ζ(−7) = 1/240.
However, just like the Bernoulli numbers, these do not stay small for increasingly negative odd values. For details on the first value, see 1 + 2 + 3 + 4 + · · ·.
So ζ(m) can be used as the definition of all (including those for index 0 and 1) Bernoulli numbers.
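These values are easy to verify symbolically; a minimal sketch assuming the SymPy library, checking ζ(−n) = −B(n+1)/(n + 1) for n ≥ 1 (the n = 0 case is omitted because conventions for B1 differ between sources):

from sympy import zeta, bernoulli, simplify

# zeta(-n) = -B_{n+1} / (n + 1) for n >= 1; for odd Bernoulli indices > 1 the
# right-hand side is zero, matching the trivial zeros at the negative even integers.
for n in range(1, 8):
    assert simplify(zeta(-n) + bernoulli(n + 1) / (n + 1)) == 0

print(zeta(-1), zeta(-3), zeta(-5))   # -1/12, 1/120, -1/252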
Derivatives
The derivative of the zeta function at the negative even integers is given by ζ′(−2n) = (−1)^n (2n)! ζ(2n + 1) / (2(2π)^(2n)).
The first few values of which are ζ′(−2) = −ζ(3)/(4π^2), ζ′(−4) = 3ζ(5)/(4π^4), and ζ′(−6) = −45ζ(7)/(8π^6).
One also has ζ′(0) = −(1/2) ln(2π) and ζ′(−1) = 1/12 − ln A,
where A is the Glaisher–Kinkelin constant. The first of these identities implies that the regularized product of the reciprocals of the positive integers is 1/√(2π), thus the amusing "equation" ∞! = √(2π).
From the logarithmic derivative of the functional equation,
Series involving ζ(n)
The following sums can be derived from the generating function:
where is the digamma function.
Series related to the Euler–Mascheroni constant (denoted by ) are
and using the principal value
which of course affects only the value at 1, these formulae can be stated as
and show that they depend on the principal value of
Nontrivial zeros
Zeros of the Riemann zeta except negative even integers are called "nontrivial zeros". The Riemann hypothesis states that the real part of every nontrivial zero must be 1/2. In other words, all known nontrivial zeros of the Riemann zeta are of the form z = 1/2 + yi where y is a real number. The following table contains the decimal expansion of Im(z) for the first few nontrivial zeros:
Andrew Odlyzko computed the first 2 million nontrivial zeros accurate to within 4, and the first 100 zeros accurate within 1000 decimal places. See their website for the tables and bibliographies.
A table of about 103 billion zeros with high precision (of ±2−102≈±2·10−31) is available for interactive access and download (although in a very inconvenient compressed format) via LMFDB.
Ratios
Although evaluating particular values of the zeta function is difficult, often certain ratios can be found by inserting particular values of the gamma function into the functional equation ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s).
We have simple relations for half-integer arguments
Other examples follow for more complicated evaluations and relations of the gamma function. For example a consequence of the relation
is the zeta ratio relation
where AGM is the arithmetic–geometric mean. In a similar vein, it is possible to form radical relations, such as from
the analogous zeta relation is
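As a numerical sanity check on the functional equation quoted earlier in this section, the following sketch assumes the mpmath library is available; the test point s = −1/2 is an arbitrary choice:

import mpmath as mp

mp.mp.dps = 30
s = mp.mpf(-0.5)
lhs = mp.zeta(s)
rhs = 2**s * mp.pi**(s - 1) * mp.sin(mp.pi * s / 2) * mp.gamma(1 - s) * mp.zeta(1 - s)
print(lhs, rhs)              # both approximately -0.20788622...
assert mp.almosteq(lhs, rhs)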
References
Further reading
Simon Plouffe, "Identities inspired from Ramanujan Notebooks ", (1998).
Simon Plouffe, "Identities inspired by Ramanujan Notebooks part 2 PDF " (2006).
Nontrivial zeros reference by Andrew Odlyzko:
Bibliography
Tables
Mathematical constants
Zeta and L-functions
Irrational numbers | Particular values of the Riemann zeta function | [
"Mathematics"
] | 1,533 | [
"Irrational numbers",
"Mathematical objects",
"nan",
"Mathematical constants",
"Numbers"
] |
2,334,148 | https://en.wikipedia.org/wiki/Formwork | Formwork is the system of molds into which concrete or similar materials are either precast or cast in place. In the context of concrete construction, the falsework supports the shuttering molds. In specialty applications formwork may be permanently incorporated into the final structure, adding insulation or helping reinforce the finished structure.
Types
Formwork may be made of wood, metal, plastic, or composite materials:
Traditional timber formwork. The formwork is built on site out of timber and plywood or moisture-resistant particleboard. It is easy to produce but time-consuming for larger structures, and the plywood facing has a relatively short lifespan. It is still used extensively where the labour costs are lower than the costs for procuring reusable formwork. It is also the most flexible type of formwork, so even where other systems are in use, complicated sections may use it.
Engineered Formwork System. This formwork is built out of prefabricated modules with a metal frame (usually steel or aluminium) and covered on the application (concrete) side with material having the desired surface structure (steel, aluminium, timber, etc.). The two major advantages of formwork systems, compared to traditional timber formwork, are speed of construction (modular systems pin, clip, or screw together quickly) and lower life-cycle costs (barring major force, the frame is almost indestructible, while the covering, if made of wood, may have to be replaced after a few - or a few dozen - uses, but if the covering is made of steel or aluminium the form can achieve up to two thousand uses, depending on care and the application). Metal formwork systems are better protected against rot and fire than traditional timber formwork.
Re-usable plastic formwork. These interlocking and modular systems are used to build widely variable, but relatively simple, concrete structures. The panels are lightweight and very robust. They are especially suited for similar structure projects and low-cost, mass housing schemes. To get an added layer of protection against destructive weather, galvanized roofs will help by eliminating the risk of corrosion and rust. These types of modular enclosures can have load-bearing roofs to maximize space by stacking on top of one another. They can either be mounted on an existing roof, or constructed without a floor and lifted onto existing enclosures using a crane.
Permanent Insulated Formwork. This formwork is assembled on site, usually out of insulating concrete forms (ICF). The formwork stays in place after the concrete has cured, and may provide advantages in terms of speed, strength, superior thermal and acoustic insulation, space to run utilities within the EPS layer, and integrated furring strip for cladding finishes.
Stay-In-Place structural formwork systems. This formwork is assembled on site, usually out of prefabricated fiber-reinforced plastic forms. These are in the shape of hollow tubes, and are usually used for columns and piers. The formwork stays in place after the concrete has cured and acts as axial and shear reinforcement, as well as serving to confine the concrete and prevent against environmental effects, such as corrosion and freeze-thaw cycles.
Flexible formwork. In contrast to the rigid moulds described above, flexible formwork is a system that uses lightweight, high strength sheets of fabric to take advantage of the fluidity of concrete and create highly optimised, architecturally interesting, building forms. Using flexible formwork it is possible to cast optimised structures that use significantly less concrete than an equivalent strength prismatic section, thereby offering the potential for significant embodied energy savings in new concrete structures.
Slab formwork (deck formwork)
History
Some of the earliest examples of concrete slabs were built by Roman engineers. Because concrete is quite strong in resisting compressive loads, but has relatively poor tensile or torsional strength, these early structures consisted of compression-resistant arches, vaults and domes. The most notable concrete structure from this period is the Pantheon in Rome. To mould this structure, temporary scaffolding and formwork or falsework was built in the future shape of the structure. These building techniques were not isolated to pouring concrete, but were and are widely used in masonry construction. Because of the complexity and the limited production capacity of the building material, concrete's rise as a favored building material did not occur until the invention of Portland cement and reinforced concrete.
Timber beam slab formwork
Similar to the traditional method, but stringers and joists are typically replaced with engineered wood beams and supports are replaced with adjustable metal props. This makes this method more systematic and reusable.
Traditional slab formwork
On the dawn of the revival of concrete in slab structures, building techniques for the temporary structures were derived again from masonry and carpentry. The traditional slab formwork technique consists of supports out of lumber or young tree trunks, that support rows of stringers assembled roughly 3 to 6 feet or 1 to 2 metres apart, depending on thickness of slab. Between these stringers, joists are positioned roughly apart, upon which boards or plywood are placed. The stringers and joists are usually 4 by 4 inch or 4 by 6 inch lumber. The most common imperial plywood thickness is inch and the most common metric thickness is 18 mm.
Metal beam slab formwork
Similar to the traditional method, but stringers and joists are replaced with aluminium forming systems or steel beams, and supports are replaced with metal props. This also makes the method more systematic and reusable. Aluminium beams are fabricated as telescoping units, which allows them to span supports located at varying distances apart. Telescoping aluminium beams can be used and reused in the construction of structures of varying size.
Modular slab formwork
These systems consist of prefabricated timber, steel or aluminum beams and formwork modules. Modules are often no larger than 3 to 6 feet or 1 to 2 metres in size. The beams and formwork are typically set by hand and pinned, clipped, or screwed together. The advantages of a modular system are: does not require a crane to place the formwork, speed of construction with unskilled labor, formwork modules can be removed after concrete sets leaving only beams in place prior to achieving design strength.
Table or flying form systems
These systems consist of slab formwork "tables" that are reused on multiple stories of a building without being dismantled. The assembled sections are either lifted by elevator or "flown" by crane from one story to the next. Once in position, the gaps between the tables, or between a table and a wall, are filled with temporary formwork. Table forms vary in shape and size as well as in their building material, with some supported by integral trusses. The use of these systems can greatly reduce the time and manual labor involved in setting and striking (or "stripping") the formwork. Their advantages are best realized in large, simple structures. It is also common for architects and engineers to design a building around one of these systems.
Structure
A table is built in much the same way as beam formwork, but the individual parts of the system are connected together in a way that makes them transportable. The most common sheathing is plywood, but steel and fiberglass are also used. The joists are made from timber, engineered lumber (often in the form of I-beams), aluminium or steel. The stringers are sometimes made of wood I-beams but usually from steel channels. These are fastened together (screwed, welded or bolted) to become a "deck". These decks are usually rectangular but can also be other shapes.
Support
All support systems have to be height-adjustable to allow the formwork to be placed at the correct height and to be removed after the concrete has cured. Normally, adjustable metal props similar to (or the same as) those used for beam slab formwork are used to support these systems. Some systems combine stringers and supports into steel or aluminum trusses. Yet other systems use metal-frame shoring towers to which the decks are attached. Another common method is to attach the formwork decks to previously cast walls or columns, thus eliminating the need for vertical props altogether. In this method, adjustable support shoes are bolted through holes (sometimes tie holes) or attached to cast anchors.
Size
The size of these tables can vary considerably. There are two general approaches in this system:
Crane handled: this approach consists of assembling or producing the tables with a large formwork area that can only be moved up a level by crane. Typical widths can be 15, 18 or 20 feet, or 5 to 7 metres, but their width can be limited, so that it is possible to transport them assembled, without having to pay for an oversize load. The length might vary and can be up to 100 feet (or more) depending on the crane capacity. After the concrete is cured, the decks are lowered and moved with rollers or trolleys to the edge of the building. From then on the protruding side of the table is lifted by crane while the rest of the table is rolled out of the building. After the centre of gravity is outside of the building the table is attached to another crane and flown to the next level or position.
This technique is fairly common in the United States and East Asian countries. The advantages of this approach are the further reduction of manual labour time and cost per unit area of slab and a simple and systematic building technique. The disadvantages of this approach are the necessary high lifting capacity of building site cranes, additional expensive crane time, higher material costs and little flexibility.
Crane fork or elevator handled:
By this approach the tables are limited in size and weight. Typical widths are between , typical lengths are between , though tables may vary in size and form. The major distinction of this approach is that the tables are lifted either with a crane transport fork or by material platform elevators attached to the side of the building. They are usually moved horizontally to the elevator or crane lifting platform single-handedly with shifting trolleys, depending on their size and construction. Final positioning adjustments can be made by trolley. This technique enjoys popularity in the US, Europe and generally in countries with high labor costs. The advantages of this approach in comparison to beam formwork or modular formwork are a further reduction of labor time and cost. Smaller tables are generally easier to customize around geometrically complicated (round or non-rectangular) buildings or to form around columns than their larger counterparts. The disadvantages of this approach are the higher material costs and increased crane time (if lifted with a crane fork).
Tunnel forms
Tunnel forms are large, room-sized forms that allow walls and floors to be cast in a single pour. With multiple forms, the entire floor of a building can be done in a single pour. Tunnel forms require sufficient space outside the building for the entire form to be slipped out and hoisted up to the next level. A section of the walls is left uncast so that the forms can be removed. Castings are typically done on a four-day cycle. Tunnel forms are most suited to buildings that have the same or similar cells, allowing re-use of the forms within a floor and from one floor to the next, and to regions with high labor costs. Tunnel formwork saves both time and cost.
See structural coffer.
Concrete-form oil
The main purpose of concrete-form oil is to reduce adhesion between the form structure and the concrete mixture poured into it. It also reduces the possibility of cracks and chips occurring as the concrete dries out or is overstressed. Without concrete-form oil, which reduces the adhesion between the surfaces, it becomes virtually impossible to remove the formwork without damaging the foundation, wall or bulkhead. The risk also increases with the size of the tier.
Climbing formwork
Climbing forms are commonly used on:
Skyscrapers
Bridge pylons
Concrete columns
Airport control towers
High rise buildings
Elevator shafts
Silos
Flexible formwork
There is an increasing focus on sustainability in design, backed up by carbon dioxide emissions reduction targets. The low embodied energy of concrete by volume is offset by its rate of consumption, which makes the manufacture of cement responsible for some 5% of global emissions.
Concrete is a fluid that offers the opportunity to economically create structures of almost any geometry - concrete can be poured into a mould of almost any shape. The result, however, is material-intensive structures with large carbon footprints.
By replacing conventional moulds with a flexible system composed primarily of low cost fabric sheets, flexible formwork takes advantage of the fluidity of concrete to create highly optimised, architecturally interesting building forms. Significant material savings can be achieved. The optimised section provides ultimate limit state capacity while reducing embodied carbon, thus improving the life cycle performance of the entire structure.
Control of the flexibly formed beam cross section is key to achieving low-material use design. The basic assumption is that a sheet of flexible permeable fabric is held in a system of falsework before reinforcement and concrete are added. By varying the geometry of the fabric mould with distance along the beam, the optimised shape is created. Flexible formwork therefore has the potential to facilitate the change in design and construction philosophy that will be required for a move towards a less material intensive, more sustainable, construction industry.
Fabric formwork is a small niche in concrete technology. It uses soft, flexible materials as formwork against the fresh concrete, normally with some sort of strong tension textile or plastic material. The International Society of Fabric Forming conducts research on fabric formwork.
Iron sheet formwork
A design from the Russian NPO-22 factory (trademarked as Proster, with model 21 designed to serve as formwork) uses perforated iron "sheets" which, if necessary, can be bent to form a curve. The sheet-based formwork with V-shaped rails keeps its shape in one direction (vertically) but can be bent before being reinforced with steel beams. Multiple sheets can be fixed together in the same manner as fences made of iron sheets.
A circle can be made from a single sheet of "21" formwork, allowing cylindrical columns to be poured.
Usage
For removable forms, once the concrete has been poured into formwork and has set (or cured), the formwork is struck or stripped to expose the finished concrete. The time between pouring and stripping depends on the job specifications, which include the cure required, and whether the form is supporting any weight; it is usually at least 24 hours after the pour is completed. For example, the California Department of Transportation requires the forms to be in place for 1–7 days after pouring, while the Washington State Department of Transportation requires the forms to stay in place for 3 days with a damp blanket on the outside.
Spectacular accidents have occurred when the forms were either removed too soon or had been under-designed to carry the load imposed by the weight of the uncured concrete. "Form blowouts" also occur when under-designed formwork bends or breaks during the concrete pour (especially if filled with a high-pressure concrete pump). Consequences can vary from minor leaks, easily patched during the pour, to catastrophic form failure, even death.
Concrete exerts less pressure against the forms as it hardens. The hardening is an asymptotic process, meaning that most of the final strength will be achieved after a short time, with further hardening over time reflecting the cement type, admixtures, and pour conditions such as temperature and ambient moisture.
Wet concrete also applies hydrostatic pressure to formwork. The pressure at the bottom of the form is therefore greater than at the top, causing most blowouts to occur low in the formwork. In the illustration of the column formwork above, the 'column clamps' are closer together at the bottom. Note that the column is braced with steel adjustable 'formwork props' and uses 20 mm 'through bolts' to further support the long side of the column.
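The hydrostatic behaviour described above is easy to quantify. The sketch below is a minimal, illustrative calculation in Python, assuming fresh concrete acts as a fluid of density about 2,400 kg/m³ over the full pour height (the conservative worst case, ignoring any stiffening); the pour height and panel width are made-up example values, not design guidance.

```python
# Minimal sketch: full hydrostatic pressure of fresh concrete on a form face.
# Assumes fresh concrete behaves as a fluid of density ~2400 kg/m^3 over the
# whole pour height (no stiffening), which is the conservative worst case.
RHO_CONCRETE = 2400.0   # kg/m^3, typical fresh concrete (assumed)
G = 9.81                # m/s^2

def hydrostatic_pressure(depth_m: float) -> float:
    """Lateral pressure (Pa) at a given depth below the top of the pour."""
    return RHO_CONCRETE * G * depth_m

def force_on_panel(pour_height_m: float, panel_width_m: float) -> float:
    """Total lateral force (N) on a full-height face: average pressure x area."""
    p_max = hydrostatic_pressure(pour_height_m)
    return 0.5 * p_max * pour_height_m * panel_width_m

h, w = 3.0, 1.2  # hypothetical 3 m column pour, 1.2 m wide face
print(f"pressure at base : {hydrostatic_pressure(h)/1000:.1f} kPa")   # ~70.6 kPa
print(f"force on the face: {force_on_panel(h, w)/1000:.1f} kN")       # ~127 kN
```

The triangular pressure distribution is why blowouts tend to occur low in the formwork and why clamps are spaced more closely near the bottom.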
Some models of "permanent formwork" also can serve as extra reinforcement of the structure.
Gallery
See also
Cast in place concrete
Climbing formwork, formwork that climbs up the rising building during the construction
Concrete cover, depth of the concrete between reinforcing steel and outer surface
Precast concrete
Slip forming, construction method in which concrete is poured into a continuously moving form
Literature
Matthias Dupke: Einsatzgebiete der Gleitschalung und der Kletter-Umsetz-Schalung: Ein Vergleich der Systeme. Diplomarbeiten Agentur, Hamburg 2010.
The Concrete Society, Formwork: A guide to good practice
References
External links
Stripping time of formwork as per Indian standards
An illustrated glossary of the terms used in temporary types of construction work. Formwork, scaffolding etc.
Concrete
Concrete buildings and structures
Building materials
Building engineering
Articles containing video clips | Formwork | [
"Physics",
"Engineering"
] | 3,434 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Civil engineering",
"Concrete",
"Matter",
"Building materials"
] |
2,336,108 | https://en.wikipedia.org/wiki/Hilbert%27s%20twenty-third%20problem | Hilbert's twenty-third problem is the last of Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. In contrast with Hilbert's other 22 problems, his 23rd is not so much a specific "problem" as an encouragement towards further development of the calculus of variations. His statement of the problem is a summary of the state-of-the-art (in 1900) of the theory of calculus of variations, with some introductory comments decrying the lack of work that had been done of the theory in the mid to late 19th century.
Original statement
The problem statement begins with the following paragraph:
So far, I have generally mentioned problems as definite and special as possible.... Nevertheless, I should like to close with a general problem, namely with the indication of a branch of mathematics repeatedly mentioned in this lecture-which, in spite of the considerable advancement lately given it by Weierstrass, does not receive the general appreciation which, in my opinion, it is due—I mean the calculus of variations.
Calculus of variations
Calculus of variations is a field of mathematical analysis that deals with maximizing or minimizing functionals, which are mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. The interest is in extremal functions that make the functional attain a maximum or minimum value – or stationary functions – those where the rate of change of the functional is zero.
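As a concrete illustration of the extremal functions just described (not part of Hilbert's own statement), the following minimal Python sketch uses SymPy's euler_equations helper on the arc-length functional, whose extremals between two points are straight lines; the choice of functional is purely illustrative.

```python
# Minimal sketch: the Euler-Lagrange equation for the arc-length functional
#   J[y] = integral sqrt(1 + y'(x)^2) dx,
# whose extremals (shortest paths between two points) are straight lines.
from sympy import Function, symbols, sqrt, simplify
from sympy.calculus.euler import euler_equations

x = symbols('x')
y = Function('y')

L = sqrt(1 + y(x).diff(x)**2)          # integrand of the functional
eq = euler_equations(L, y(x), x)[0]    # d/dx (dL/dy') - dL/dy = 0
print(simplify(eq))                    # reduces to y''(x) = 0  ->  y = a*x + b
```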
Progress
Following the problem statement, David Hilbert, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions to the calculus of variations. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an alternative to the calculus of variations.
References
Further reading
23 | Hilbert's twenty-third problem | [
"Mathematics"
] | 404 | [
"Hilbert's problems",
"Mathematical problems"
] |
2,336,109 | https://en.wikipedia.org/wiki/Frame%20fields%20in%20general%20relativity | A frame field in general relativity (also called a tetrad or vierbein) is a set of four pointwise-orthonormal vector fields, one timelike and three spacelike, defined on a Lorentzian manifold that is physically interpreted as a model of spacetime. The timelike unit vector field is often denoted by and the three spacelike unit vector fields by . All tensorial quantities defined on the manifold can be expressed using the frame field and its dual coframe field.
Frame fields were introduced into general relativity by Albert Einstein in 1928 and by Hermann Weyl in 1929.
The index notation for tetrads is explained in tetrad (index notation).
Physical interpretation
Frame fields of a Lorentzian manifold always correspond to a family of ideal observers immersed in the given spacetime; the integral curves of the timelike unit vector field are the worldlines of these observers, and at each event along a given worldline, the three spacelike unit vector fields specify the spatial triad carried by the observer. The triad may be thought of as defining the spatial coordinate axes of a local laboratory frame, which is valid very near the observer's worldline.
In general, the worldlines of these observers need not be timelike geodesics. If any of the worldlines bends away from a geodesic path in some region, we can think of the observers as test particles that accelerate by using ideal rocket engines with a thrust equal to the magnitude of their acceleration vector. Alternatively, if our observer is attached to a bit of matter in a ball of fluid in hydrostatic equilibrium, this bit of matter will in general be accelerated outward by the net effect of pressure holding up the fluid ball against the attraction of its own gravity. Other possibilities include an observer attached to a free charged test particle in an electrovacuum solution, which will of course be accelerated by the Lorentz force, or an observer attached to a spinning test particle, which may be accelerated by a spin–spin force.
It is important to recognize that frames are geometric objects. That is, vector fields make sense (in a smooth manifold) independently of choice of a coordinate chart, and (in a Lorentzian manifold), so do the notions of orthogonality and length. Thus, just like vector fields and other geometric quantities, frame fields can be represented in various coordinate charts. Computations of the components of tensorial quantities, with respect to a given frame, will always yield the same result, whichever coordinate chart is used to represent the frame.
These fields are required to write the Dirac equation in curved spacetime.
Specifying a frame
To write down a frame, a coordinate chart on the Lorentzian manifold needs to be chosen. Then, every vector field on the manifold can be written down as a linear combination of the four coordinate basis vector fields:
Here, the Einstein summation convention is used, and the vector fields are thought of as first order linear differential operators, and the components are often called contravariant components. This follows the standard notational conventions for sections of a tangent bundle. Alternative notations for the coordinate basis vector fields in common use are
In particular, the vector fields in the frame can be expressed this way:
In "designing" a frame, one naturally needs to ensure, using the given metric, that the four vector fields are everywhere orthonormal.
More modern texts adopt the notation for and or for . This permits the visually clever trick of writing the spacetime metric as the outer product of the coordinate tangent vectors:
and the flat-space Minkowski metric as the product of the gammas:
The choice of for the notation is an intentional conflation with the notation used for the Dirac matrices; it allows the to be taken not only as vectors, but as elements of an algebra, the spacetime algebra. Appropriately used, this can simplify some of the notation used in writing a spin connection.
Once a signature is adopted, by duality every vector of a basis has a dual covector in the cobasis and conversely. Thus, every frame field is associated with a unique coframe field, and vice versa; a coframe field is a set of four orthogonal sections of the cotangent bundle.
Specifying the metric using a coframe
Alternatively, the metric tensor can be specified by writing down a coframe in terms of a coordinate basis and stipulating that the metric tensor is given by
where denotes tensor product.
This is just a fancy way of saying that the coframe is orthonormal. Whether this is used to obtain the metric tensor after writing down the frame (and passing to the dual coframe), or starting with the metric tensor and using it to verify that a frame has been obtained by other means, it must always hold true.
Relationship with metric tensor, in a coordinate basis
The vierbein field, , has two kinds of indices: labels the general spacetime coordinate and labels the local Lorentz spacetime or local laboratory coordinates.
The vierbein field or frame fields can be regarded as the "matrix square root" of the metric tensor, , since in a coordinate basis,
where is the Lorentz metric.
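A minimal numerical sketch of this "matrix square root" relation, assuming the standard identity g_{mu nu} = e^a_mu e^b_nu eta_{ab} and using an arbitrary diagonal metric purely for illustration:

```python
# Minimal sketch: checking g_{mu nu} = e^a_mu eta_{ab} e^b_nu numerically
# for a simple diagonal example (the coefficients are arbitrary/illustrative).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])            # local Lorentz metric
A, B, C, D = 0.8, 1.25, 4.0, 2.5                # arbitrary positive coefficients
g = np.diag([-A, B, C, D])                      # a diagonal spacetime metric

# A diagonal vierbein e^a_mu compatible with this metric:
e = np.diag([np.sqrt(A), np.sqrt(B), np.sqrt(C), np.sqrt(D)])

g_reconstructed = e.T @ eta @ e                 # e^a_mu eta_ab e^b_nu
print("metric recovered from the vierbein:", np.allclose(g_reconstructed, g))
```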
Local Lorentz indices are raised and lowered with the Lorentz metric in the same way as general spacetime coordinates are raised and lowered with the metric tensor. For example:
The vierbein field enables conversion between spacetime and local Lorentz indices. For example:
The vierbein field itself can be manipulated in the same fashion:
, since
And these can combine.
A few more examples: Spacetime and local Lorentz coordinates can be mixed together:
The local Lorentz coordinates transform differently from the general spacetime coordinates. Under a general coordinate transformation we have:
whilst under a local Lorentz transformation we have:
Comparison with coordinate basis
Coordinate basis vectors have the special property that their pairwise Lie brackets vanish. Except in locally flat regions, at least some Lie brackets of vector fields from a frame will not vanish. The resulting baggage needed to compute with them is acceptable, as components of tensorial objects with respect to a frame (but not with respect to a coordinate basis) have a direct interpretation in terms of measurements made by the family of ideal observers corresponding to the frame.
Coordinate basis vectors can be null, which, by definition, cannot happen for frame vectors.
Nonspinning and inertial frames
Some frames are nicer than others. Particularly in vacuum or electrovacuum solutions, the physical experience of inertial observers (who feel no forces) may be of particular interest. The mathematical characterization of an inertial frame is very simple: the integral curves of the timelike unit vector field must define a geodesic congruence, or in other words, its acceleration vector must vanish:
It is also often desirable to ensure that the spatial triad carried by each observer does not rotate. In this case, the triad can be viewed as being gyrostabilized. The criterion for a nonspinning inertial (NSI) frame is again very simple:
This says that as we move along the worldline of each observer, their spatial triad is parallel-transported. Nonspinning inertial frames hold a special place in general relativity, because they are as close as we can get in a curved Lorentzian manifold to the Lorentz frames used in special relativity (these are special nonspinning inertial frames in the Minkowski vacuum).
More generally, if the acceleration of our observers is nonzero, , we can replace the covariant derivatives
with the (spatially projected) Fermi–Walker derivatives to define a nonspinning frame.
Given a Lorentzian manifold, we can find infinitely many frame fields, even if we require additional properties such as inertial motion. However, a given frame field might very well be defined on only part of the manifold.
Example: Static observers in Schwarzschild vacuum
It will be instructive to consider in some detail a few simple examples. Consider the famous Schwarzschild vacuum that models spacetime outside an isolated nonspinning spherically symmetric massive object, such as a star. In most textbooks one finds the metric tensor written in terms of a static polar spherical chart, as follows:
More formally, the metric tensor can be expanded with respect to the coordinate cobasis as
A coframe can be read off from this expression:
To see that this coframe really does correspond to the Schwarzschild metric tensor, just plug this coframe into
The dual frame is the inverse of the coframe, as below (the dual frame is also transposed to keep the local index in the same position):
(The plus sign on ensures that is future pointing.) This is the frame that models the experience of static observers who use rocket engines to "hover" over the massive object.
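As a quick check of the coframe read off above (whose explicit components are not rendered here), the following SymPy sketch assumes the standard static Schwarzschild coframe in geometric units (G = c = 1) and verifies that it reproduces the Schwarzschild line element:

```python
# Minimal sketch (geometric units G = c = 1): check that the static coframe
#   sigma^0 = sqrt(1-2m/r) dt,  sigma^1 = dr/sqrt(1-2m/r),
#   sigma^2 = r dtheta,         sigma^3 = r sin(theta) dphi
# reproduces the Schwarzschild line element
#   ds^2 = -(1-2m/r) dt^2 + dr^2/(1-2m/r) + r^2 dtheta^2 + r^2 sin^2(theta) dphi^2.
from sympy import symbols, sqrt, sin, simplify, expand

m, r, theta = symbols('m r theta', positive=True)
dt, dr, dtheta, dphi = symbols('dt dr dtheta dphi')   # coordinate differentials

f = 1 - 2*m/r
sigma0 = sqrt(f) * dt
sigma1 = dr / sqrt(f)
sigma2 = r * dtheta
sigma3 = r * sin(theta) * dphi

ds2_from_coframe = -sigma0**2 + sigma1**2 + sigma2**2 + sigma3**2
ds2_schwarzschild = -f*dt**2 + dr**2/f + r**2*dtheta**2 + r**2*sin(theta)**2*dphi**2

print(simplify(expand(ds2_from_coframe - ds2_schwarzschild)))   # prints 0
```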
The thrust they require to maintain their position is given by the magnitude of the acceleration vector
This is radially inward pointing, since the observers need to accelerate away from the object to avoid falling toward it. On the other hand, the spatially projected Fermi derivatives of the spatial basis vectors (with respect to ) vanish, so this is a nonspinning frame.
The components of various tensorial quantities with respect to our frame and its dual coframe can now be computed.
For example, the tidal tensor for our static observers is defined using tensor notation (for a coordinate basis) as
where we write to avoid cluttering the notation. Its only non-zero components with respect to our coframe turn out to be
The corresponding coordinate basis components are
(A quick note concerning notation: many authors put carets over abstract indices referring to a frame. When writing down specific components, it is convenient to denote frame components by 0,1,2,3 and coordinate components by . Since an expression like doesn't make sense as a tensor equation, there should be no possibility of confusion.)
Compare the tidal tensor of Newtonian gravity, which is the traceless part of the Hessian of the gravitational potential . Using tensor notation for a tensor field defined on three-dimensional euclidean space, this can be written
The reader may wish to crank this through (notice that the trace term actually vanishes identically when U is harmonic) and compare results with the following elementary approach:
we can compare the gravitational forces on two nearby observers lying on the same radial line:
Because in discussing tensors we are dealing with multilinear algebra, we retain only first order terms, so . Similarly, we can compare the gravitational force on two nearby observers lying on the same sphere . Using some elementary trigonometry and the small angle approximation, we find that the force vectors differ by a vector tangent to the sphere which has magnitude
By using the small angle approximation, we have ignored all terms of order , so the tangential components are . Here, we are referring to the obvious frame obtained from the polar spherical chart for our three-dimensional euclidean space:
Plainly, the coordinate components computed above don't even scale the right way, so they clearly cannot correspond to what an observer will measure even approximately. (By coincidence, the Newtonian tidal tensor components agree exactly with the relativistic tidal tensor components we wrote out above.)
Example: Lemaître observers in the Schwarzschild vacuum
To find an inertial frame, we can boost our static frame in the direction by an undetermined boost parameter (depending on the radial coordinate), compute the acceleration vector of the new undetermined frame, set this equal to zero, and solve for the unknown boost parameter. The result will be a frame which we can use to study the physical experience of observers who fall freely and radially toward the massive object. By appropriately choosing an integration constant, we obtain the frame of Lemaître observers, who fall in from rest at spatial infinity. (This phrase doesn't make sense, but the reader will no doubt have no difficulty in understanding our meaning.) In the static polar spherical chart, this frame is obtained from Lemaître coordinates and can be written as
Note that
, and that "leans inwards", as it should, since its integral curves are timelike geodesics representing the world lines of infalling observers. Indeed, since the covariant derivatives of all four basis vectors (taken with respect to ) vanish identically, our new frame is a nonspinning inertial frame.
If our massive object is in fact a (nonrotating) black hole, we probably wish to follow the experience of the Lemaître observers as they fall through the event horizon at . Since the static polar spherical coordinates have a coordinate singularity at the horizon, we'll need to switch to a more appropriate coordinate chart. The simplest possible choice is to define a new time coordinate by
This gives the Painlevé chart. The new line element is
With respect to the Painlevé chart, the Lemaître frame is
Notice that their spatial triad looks exactly like the frame for three-dimensional euclidean space which we mentioned above (when we computed the Newtonian tidal tensor). Indeed, the spatial hyperslices turn out to be locally isometric to flat three-dimensional euclidean space! (This is a remarkable and rather special property of the Schwarzschild vacuum; most spacetimes do not admit a slicing into flat spatial sections.)
The tidal tensor taken with respect to the Lemaître observers is
where we write to avoid cluttering the notation. This is a different tensor from the one we obtained above, because it is defined using a different family of observers. Nonetheless, its nonvanishing components look familiar: . (This is again a rather special property of the Schwarzschild vacuum.)
Notice that there is simply no way of defining static observers on or inside the event horizon. On the other hand, the Lemaître observers are not defined on the entire exterior region covered by the static polar spherical chart either, so in these examples, neither the Lemaître frame nor the static frame are defined on the entire manifold.
Example: Hagihara observers in the Schwarzschild vacuum
In the same way that we found the Lemaître observers, we can boost our static frame in the direction by an undetermined parameter (depending on the radial coordinate), compute the acceleration vector, and require that this vanish in the equatorial plane . The new Hagihara frame describes the physical experience of observers in stable circular orbits around our massive object. It was apparently first discussed by the astronomer Yusuke Hagihara.
In the static polar spherical chart, the Hagihara frame is
which in the equatorial plane becomes
The tidal tensor where turns out to be given (in the equatorial plane) by
Thus, compared to a static observer hovering at a given coordinate radius,
a Hagihara observer in a stable circular orbit with the same coordinate radius will measure radial tidal forces which are slightly larger in magnitude, and transverse tidal forces which are no longer isotropic (but slightly larger orthogonal to the direction of motion).
Note that the Hagihara frame is only defined on the region . Indeed, stable circular orbits only exist on , so the frame should not be used inside this locus.
Computing Fermi derivatives shows that the frame field just given is in fact spinning with respect to a gyrostabilized frame. The principal reason why is easy to spot: in this frame, each Hagihara observer keeps his spatial vectors radially aligned, so rotate about as the observer orbits around the central massive object. However, after correcting for this observation, a small precession of the spin axis of a gyroscope carried by a Hagihara observer still remains; this is the de Sitter precession effect (also called the geodetic precession effect).
Generalizations
This article has focused on the application of frames to general relativity, and particularly on their physical interpretation. Here we very briefly outline the general concept. In an n-dimensional Riemannian manifold or pseudo-Riemannian manifold, a frame field is a set of orthonormal vector fields which forms a basis for the tangent space at each point in the manifold. This is possible globally in a continuous fashion if and only if the manifold is parallelizable. As before, frames can be specified in terms of a given coordinate basis, and in a non-flat region, some of their pairwise Lie brackets will fail to vanish.
In fact, given any inner-product space , we can define a new space consisting of all tuples of orthonormal bases for . Applying this construction to each tangent space yields the orthonormal frame bundle of a (pseudo-)Riemannian manifold and a frame field is a section of this bundle. More generally still, we can consider frame bundles associated to any vector bundle, or even arbitrary principal fiber bundles. The notation becomes a bit more involved because it is harder to avoid distinguishing between indices referring to the base, and indices referring to the fiber. Many authors speak of internal components when referring to components indexed by the fiber.
See also
Exact solutions in general relativity
Georges Lemaître
Karl Schwarzschild
Moving frame
Paul Painlevé
Tetrad formalism
Yusuke Hagihara
References
See Chapter IV for frames in E3, then see Chapter VIII for frame fields in Riemannian manifolds. This book doesn't really cover Lorentzian manifolds, but with this background in hand the reader is well prepared for the next citation.
In this book, a frame field (coframe field) is called an anholonomic basis of vectors (covectors). Essential information is widely scattered about, but can be easily found using the extensive index.
In this book, a frame field is called a tetrad (not to be confused with the now standard term NP tetrad used in the Newman–Penrose formalism). See Section 98.
See Chapter 4 for frames and coframes. If you ever need more information about frame fields, this might be a good place to look!
Frames of reference
Mathematical methods in general relativity
General relativity | Frame fields in general relativity | [
"Physics",
"Mathematics"
] | 3,702 | [
"Frames of reference",
"Classical mechanics",
"Theory of relativity",
"General relativity",
"Coordinate systems"
] |
2,336,224 | https://en.wikipedia.org/wiki/Hilbert%27s%20eighteenth%20problem | Hilbert's eighteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by mathematician David Hilbert. It asks three separate questions about lattices and sphere packing in Euclidean space.
Symmetry groups in dimensions
The first part of the problem asks whether there are only finitely many essentially different space groups in -dimensional Euclidean space. This was answered affirmatively by Bieberbach.
Anisohedral tiling in 3 dimensions
The second part of the problem asks whether there exists a polyhedron which tiles 3-dimensional Euclidean space but is not the fundamental region of any space group; that is, which tiles but does not admit an isohedral (tile-transitive) tiling. Such tiles are now known as anisohedral. In asking the problem in three dimensions, Hilbert was probably assuming that no such tile exists in two dimensions; this assumption later turned out to be incorrect.
The first such tile in three dimensions was found by Karl Reinhardt in 1928. The first example in two dimensions was found by Heesch in 1935. The related einstein problem asks for a shape that can tile space but not with an infinite cyclic group of symmetries.
Sphere packing
The third part of the problem asks for the densest sphere packing or packing of other specified shapes. Although it expressly includes shapes other than spheres, it is generally taken as equivalent to the Kepler conjecture.
In 1998, American mathematician Thomas Callister Hales gave a computer-aided proof of the Kepler conjecture. It shows that the most space-efficient way to pack spheres is in a pyramid shape.
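The optimal density established by Hales' proof is that of the face-centred cubic (and hexagonal close-packed) arrangements, pi/sqrt(18), roughly 0.7405; the following trivial Python check just evaluates that constant.

```python
# Minimal sketch: the packing density of the face-centred cubic arrangement,
# pi / sqrt(18) ~ 0.7405, which Hales' proof shows is optimal for equal spheres.
import math

density = math.pi / math.sqrt(18)
print(f"FCC packing density: {density:.4f}")   # 0.7405
```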
Notes
References
18
Tessellation
Geometry problems | Hilbert's eighteenth problem | [
"Physics",
"Mathematics"
] | 325 | [
"Geometry problems",
"Tessellation",
"Euclidean plane geometry",
"Hilbert's problems",
"Geometry",
"Planes (geometry)",
"Mathematical problems",
"Symmetry"
] |
2,336,592 | https://en.wikipedia.org/wiki/Flight%20envelope | In aerodynamics, the flight envelope, service envelope, or performance envelope of an aircraft or spacecraft refers to the capabilities of a design in terms of airspeed and load factor or atmospheric density, often simplified to altitude.
The term is somewhat loosely applied, and can also refer to other measurements such as maneuverability. For example, when a plane is pushed, for instance by diving it at high speeds, it is said to be flown "outside the envelope", something considered rather dangerous. During vehicle test programs, flight envelope simply means that part of the aircraft or spacecraft's design capabilities that have already been successfully tested, and have therefore moved from theoretical or designed capability into a demonstrated/certified capability.
Flight envelope is one of a number of related terms that are used in a similar fashion. It is perhaps the most common term because it is the oldest, first being used in the early days of test flight. It is closely related to more modern terms known as extra power and a doghouse plot which are different ways of describing the flight envelope of an aircraft. In addition, the term has been widened in scope outside the field of engineering, to refer to the strict limits in which an event will take place or more generally to the predictable behavior of a given phenomenon or situation, and hence, its "flight envelope".
Extra power
Extra power, or specific excess power, is a very basic method of determining an aircraft's flight envelope. It is easily calculated but as a downside does not tell very much about the actual performance of the aircraft at different altitudes.
Choosing any particular set of parameters will generate the needed power for a particular aircraft for those conditions. For instance, a Cessna 150 at a typical cruise altitude and speed needs about 60 hp to fly straight and level. The C150 is normally equipped with a 100 hp engine, so in this particular case the plane has 40 hp of extra power. In overall terms this is very little extra power; 60% of the engine's output is already used up just keeping the plane in the air. The leftover 40 hp is all that the aircraft has to maneuver with, meaning it can climb, turn, or speed up only a small amount. To put this in perspective, the C150 could not maintain a 2g (20 m/s²) turn, which would require a minimum of under the same conditions.
For the same conditions, a fighter aircraft might require considerably more power because its wings are designed for high speed, high agility, or both; it could need several times as much power to achieve similar performance. However, modern jet engines can provide a great deal of power, so a large surplus is not atypical. With this amount of extra power the aircraft can achieve a very high maximum rate of climb, even climb straight up, make powerful continual maneuvers, or fly at very high speeds.
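A minimal sketch of the bookkeeping behind "extra power", using illustrative numbers for a light trainer (not figures for any specific type) and the conventional 33,000 ft·lbf/min per horsepower to turn excess power into an ideal climb rate:

```python
# Minimal sketch of the "extra power" idea: whatever power is left over after
# straight-and-level flight is what remains for climbing or manoeuvring.
# Numbers are illustrative; 1 hp = 33,000 ft*lbf/min is used for the climb rate.
def extra_power(power_available_hp: float, power_required_hp: float) -> float:
    return power_available_hp - power_required_hp

def ideal_climb_rate_fpm(extra_hp: float, weight_lbf: float) -> float:
    """Rate of climb (ft/min) if all excess power went into climbing."""
    return extra_hp * 33000.0 / weight_lbf

# Light trainer, illustrative values only:
avail, req, weight = 100.0, 60.0, 1600.0
xs = extra_power(avail, req)
print(f"extra power      : {xs:.0f} hp")
print(f"ideal climb rate : {ideal_climb_rate_fpm(xs, weight):.0f} ft/min")
```

Real climb rates are lower because propeller efficiency and trim drag absorb part of the excess power; the sketch only illustrates the accounting.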
Doghouse plot
A doghouse plot generally shows the relation between speed at level flight and altitude, although other variables are also possible. It takes more effort to make than an extra power calculation, but in turn provides much more information such as ideal flight altitude. The plot typically looks something like an upside-down U and is commonly referred to as a doghouse plot due to its resemblance to a kennel (sometimes known as a 'doghouse' in American English). The diagram on the right shows a very simplified plot which shall be used to explain the general shape of the plot.
The outer edges of the diagram, the envelope, show the possible conditions that the aircraft can reach in straight and level flight. For instance, the aircraft described by the black altitude envelope on the right can fly at altitudes up to about , at which point the thinner air means it can no longer climb. The aircraft can also fly at up to Mach 1.1 at sea level, but no faster. This outer surface of the curve represents the zero-extra-power condition. All of the area under the curve represents conditions that the plane can fly at with power to spare, for instance, this aircraft can fly at Mach 0.5 at while using less than full power.
In the case of high-performance aircraft, including fighters, this "1-g" line showing straight-and-level flight is augmented with additional lines showing the maximum performance at various g loadings. In the diagram at right, the green line represents, 2-g, the blue line 3-g, and so on. The F-16 Fighting Falcon has a very small area just below Mach 1 and close to sea level where it can maintain a 9-g turn.
Flying outside the envelope is possible, since it represents the straight-and-level condition only. For instance diving the aircraft allows higher speeds, using gravity as a source of additional power. Likewise higher altitude can be reached by first speeding up and then going ballistic, a maneuver known as a zoom climb.
Stalling speed
All fixed-wing aircraft have a minimum speed at which they can maintain level flight, the stall speed (left limit line in the diagram). As the aircraft gains altitude the stall speed increases; since the wing is not growing any larger the only way to support the aircraft's weight with less air is to increase speed. While the exact numbers will vary widely from aircraft to aircraft, the nature of this relationship is typically the same; plotted on a graph of speed (x-axis) vs. altitude (y-axis) it forms a diagonal line.
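A minimal sketch of this relationship, assuming the usual level-flight stall speed formula V_s = sqrt(2W/(rho S CL_max)) and the ISA troposphere density model; the weight, wing area and maximum lift coefficient below are illustrative only.

```python
# Minimal sketch of why stall speed rises with altitude: thinner air means the
# wing must move faster to generate the same lift, V_s = sqrt(2 W / (rho S CLmax)).
# Aircraft parameters below are illustrative, not for any particular type.
import math

def isa_density(alt_m: float) -> float:
    """ISA troposphere density (kg/m^3), valid up to ~11 km."""
    T0, p0, L, R, g = 288.15, 101325.0, 0.0065, 287.05, 9.80665
    T = T0 - L * alt_m
    p = p0 * (T / T0) ** (g / (R * L))
    return p / (R * T)

def stall_speed(weight_n: float, wing_area_m2: float, cl_max: float, alt_m: float) -> float:
    rho = isa_density(alt_m)
    return math.sqrt(2.0 * weight_n / (rho * wing_area_m2 * cl_max))

W, S, CLMAX = 7000.0, 15.0, 1.6   # illustrative light aircraft
for alt in (0, 3000, 6000, 9000):
    print(f"{alt:>5} m : stall speed ~ {stall_speed(W, S, CLMAX, alt):.1f} m/s")
```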
Service ceiling
Inefficiencies in the wings also make this line "tilt over" with increased altitude, until it becomes horizontal and additional speed will not result in increased altitude. This maximum altitude is known as the service ceiling (top limit line in the diagram), and is often quoted for aircraft performance. The area where the altitude for a given speed can no longer be increased at level flight is known as zero rate of climb and is caused by the lift of the aircraft getting smaller at higher altitudes, until it no longer exceeds gravity.
Top speed
The right side of the graph represents the maximum speed of the aircraft. This is typically sloped in the same manner as the stall line due to air resistance getting lower at higher altitudes, up to the point where an increase in altitude no longer increases the maximum speed due to lack of oxygen to feed the engines.
The power needed varies almost linearly with altitude, but the nature of drag means that it varies with the square of speed—in other words it is typically easier to go higher than faster, up to the altitude where lack of oxygen for the engines starts to play a significant role.
Velocity vs. load factor chart
A chart of velocity versus load factor (or V-n diagram) is another way of showing limits of aircraft performance. It shows how much load factor can be safely achieved at different airspeeds.
At higher temperatures, air is less dense and planes must fly faster to generate the same amount of lift. High heat may reduce the amount of cargo a plane can carry, increase the length of runway a plane needs to take off,
and make it more difficult to avoid obstacles such as mountains. In unusual weather conditions this may make it unsafe or uneconomical to fly, occasionally resulting in the cancellation of commercial flights.
Side notes
Although it is easy to compare aircraft on simple numbers such as maximum speed or service ceiling, an examination of the flight envelope will reveal far more information. Generally a design with a larger area under the curve will have better all-around performance. This is because when the plane is not flying at the edges of the envelope, its extra power will be greater, and that means more power for things like climbing or maneuvering. General aviation aircraft have very small flight envelopes, with speeds ranging from perhaps 50 to 200 mph, whereas the extra power available to modern fighter aircraft result in huge flight envelopes with many times the area. As a trade-off however, military aircraft often have a higher stalling speed. As a result of this, the landing speed is also higher.
"Pushing the envelope"
This phrase is used to refer to an aircraft being taken to or beyond its designated altitude and speed limits. By extension, this phrase may be used to mean testing other limits, either within aerospace or in other fields e.g. Plus ultra (motto).
See also
Coffin corner (aviation)
Helicopter height–velocity diagram
Küssner effect
Maneuvering speed
Corner case
Notes
Aerospace engineering
Aircraft aerodynamics
Aircraft wing design
Aircraft performance | Flight envelope | [
"Engineering"
] | 1,698 | [
"Aerospace engineering"
] |
2,339,273 | https://en.wikipedia.org/wiki/Best%20available%20technology | The best available technology or best available techniques (BAT) is the technology approved by legislators or regulators for meeting output standards for a particular process, such as pollution abatement. Similar terms are best practicable means or best practicable environmental option. BAT is a moving target on practices, since developing societal values and advancing techniques may change what is currently regarded as "reasonably achievable", "best practicable" and "best available".
A literal understanding will connect it with a "spare no expense" doctrine which prescribes the acquisition of the best state of the art technology available, without regard for traditional cost-benefit analysis. In practical use, the cost aspect is also taken into account. See also discussions on the topic of the precautionary principle which, along with considerations of best available technologies and cost-benefit analyses, is also involved in discussions leading to formulation of environmental policies and regulations (or opposition to same).
History
Best practicable means was used for the first time in UK national primary legislation in section 5 of the Salmon Fishery Act 1861; another early use is found in the Alkali Act Amendment Act 1874, though the phrase had appeared even earlier, in the Leeds Act of 1848.
Best available techniques not entailing excessive costs (BATNEEC), sometimes referred to as best available technology, was introduced in 1984 into European Economic Community law with Directive 84/360/EEC.
The BAT concept was first used in the 1992 OSPAR Convention for the protection of the marine environment of the North-East Atlantic, covering all types of industrial installations (for instance, chemical plants).
Some legal doctrine considers that it has already acquired the status of customary law.
In the United States, BAT or similar terminology is used in the Clean Air Act and Clean Water Act.
European Union directives
Best available techniques not entailing excessive costs (BATNEEC), sometimes referred to as best available technology, was introduced in 1984 with Directive 84/360/EEC and applied to air pollution emissions from large industrial installations.
In 1996, Directive 84/360/EEC was superseded by the Integrated Pollution Prevention and Control (IPPC) Directive 96/61/EC, which applied the framework concept of Best Available Techniques (BAT) to, amongst others, the integrated control of pollution to the three media air, water and soil. The concept is also part of the directive's recast in 2008 (Directive 2008/1/EC) and its successor directive, the Industrial Emissions Directive 2010/75/EU published in 2010. A list, with "Adopted Documents", of industries which are subject to the IPPC directive contains more than 30 entries, including everything from the ceramic manufacturing industry to the wood-based panels production industry.
BAT for a given industrial sector are described in reference documents called BREFs (Best Available Techniques Reference documents), as defined in article 3(11) of the Industrial Emissions Directive. BREFs are the result of an exchange of information between European Union Member States, the industries concerned, non-governmental organizations promoting environmental protection and the European Commission pursuant to article 13 of the directive. This exchange of information is referred to as the Sevilla process because it is steered by the European IPPC Bureau within the Institute for Prospective Technological Studies of the European Commissions' Joint Research Centre, which is based in Seville. The process is codified into law by Commission Implementing Decision 2012/119/EU. The most important chapter of the BREFs, the BAT conclusions, are published as implementing decisions of the European Commission in the Official Journal of the European Union. According to article 14(3) of the Industrial Emissions Directive, the BAT conclusions shall be the reference for setting permit conditions of large industrial installations.
Pollution control
According to article 15(2) of the Industrial Emissions Directive, emission limit values and the equivalent parameters and technical measures in permits shall be based on the best available techniques, without prescribing the use of any technique or specific technology.
The directive includes a definition of best available techniques in article 3(10):
"best available techniques" means the most effective and advanced stage in the development of activities and their methods of operation which indicates the practical suitability of particular techniques for providing the basis for emission limit values and other permit conditions designed to prevent and, where that is not practicable, to reduce emissions and the impact on the environment as a whole:
- "techniques" includes both the technology used and the way in which the installation is designed, built, maintained, operated and decommissioned;
- "available" means those developed on a scale which allows implementation in the relevant industrial sector, under economically and technically viable conditions, taking into consideration the costs and advantages, whether or not the techniques are used or produced inside the Member State in question, as long as they are reasonably accessible to the operator;
- "best" means most effective in achieving a high general level of protection of the environment as a whole.
Food, drink and milk industries
A Reference Document on Best Available Techniques (BREF) in the food, drink and milk industries of the European Union was published in August 2006, and reflected an information exchange carried out according to Article 16.2 of Council Directive 96/61/EC. It runs to more than 600 pages, and is replete with tables and flowchart diagrams. The 2006 BREF on these industries was superseded by another published in January 2017, which runs to more than 1000 pages.
United States environmental law
Clean Air Act
The Clean Air Act Amendments of 1990 require that certain facilities employ Best Available Control Technology to limit emissions.
...an emission limitation based on the maximum degree of reduction of each pollutant subject to regulation under this Act emitted from or which results from any major emitting facility, which the permitting authority, on a case-by-case basis, taking into account energy, environmental, and economic impacts and other costs, determines is achievable for such facility through application of production processes and available methods, systems, and techniques, including fuel cleaning, clean fuels, or treatment or innovative fuel combustion techniques for control of each such pollutant.
Clean Water Act
The Clean Water Act (CWA) requires issuance of national industrial wastewater discharge regulations (called "effluent guidelines"), which are based on BAT and several related standards.
...effluent limitations for categories and classes of point sources,... which (i) shall require application of the best available technology economically achievable for such category or class, which will result in reasonable further progress toward the national goal of eliminating the discharge of all pollutants. ...Factors relating to the assessment of best available technology shall take into account the age of equipment and facilities involved, the process employed, the engineering aspects of the application of various types of control techniques, process changes, the cost of achieving such effluent reduction, non-water quality environmental impact (including energy requirements), and such other factors as the Administrator deems appropriate.
In the development of the effluent standards, the BAT concept is a "model" technology rather than a specific regulatory requirement. The U.S. Environmental Protection Agency (EPA) identifies a particular model technology for an industry, and then writes a regulatory performance standard based on the model. The performance standard is typically expressed as a numeric effluent limit measured at the discharge point. The industrial facility may use any technology that meets the performance standard.
A related CWA provision for cooling water intake structures requires standards based on "best technology available."
...the location, design, construction, and capacity of cooling water intake structures reflect the best technology available for minimizing adverse environmental impact.
International conventions
The concept of BAT is also used in a number of international conventions such as the Minamata Convention on Mercury, the Stockholm Convention on Persistent Organic Pollutants, or the OSPAR Convention for the protection of the marine environment of the North-East Atlantic.
See also
Appropriate technology
Best Available Control Technology
Lowest Achievable Emissions Rate
References
External links
BAT reference documents and BAT conclusions of the European Union
OECD overview on BAT and similar concepts worldwide
Air pollution
European Union directives
European Union food law
Pollution control technologies
United States federal environmental legislation
Water pollution
Industrial ecology
History of agriculture in the United Kingdom
Agricultural health and safety
Food law | Best available technology | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,688 | [
"Industrial engineering",
"Pollution control technologies",
"Water pollution",
"Environmental engineering",
"Industrial ecology"
] |
17,833,137 | https://en.wikipedia.org/wiki/Gravitational%20lensing%20formalism | In general relativity, a point mass deflects a light ray with impact parameter by an angle approximately equal to
where G is the gravitational constant, M the mass of the deflecting object and c the speed of light. A naive application of Newtonian gravity can yield exactly half this value, where the light ray is assumed as a massed particle and scattered by the gravitational potential well. This approximation is good when is small.
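A minimal numerical sketch of the point-mass deflection formula, evaluated at the limb of the Sun (the physical constants below are rounded values):

```python
# Minimal sketch: light deflection by a point mass, alpha = 4 G M / (c^2 b),
# evaluated at the solar limb, where it gives the classic ~1.75 arcseconds.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m (impact parameter ~ solar radius)

def deflection_rad(mass_kg: float, impact_m: float) -> float:
    return 4.0 * G * mass_kg / (C**2 * impact_m)

alpha = deflection_rad(M_SUN, R_SUN)
print(f"{math.degrees(alpha) * 3600:.2f} arcsec")   # ~1.75
```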
In situations where general relativity can be approximated by linearized gravity, the deflection due to a spatially extended mass can be written simply as a vector sum over point masses. In the continuum limit, this becomes an integral over the density , and if the deflection is small we can approximate the gravitational potential along the deflected trajectory by the potential along the undeflected trajectory, as in the Born approximation in quantum mechanics. The deflection is then
where is the line-of-sight coordinate, and is the vector impact parameter of the actual ray path from the infinitesimal mass located at the coordinates .
Thin lens approximation
In the limit of a "thin lens", where the distances between the source, lens, and observer are much larger than the size of the lens (this is almost always true for astronomical objects), we can define the projected mass density
where is a vector in the plane of the sky. The deflection angle is then
As shown in the diagram on the right, the difference between the unlensed angular position and the observed position is this deflection angle, reduced by a ratio of distances, described as the lens equation
where is the distance from the lens to the source, is the distance from the observer to the source, and is the distance from the observer to the lens. For extragalactic lenses, these must be angular diameter distances.
In strong gravitational lensing, this equation can have multiple solutions, because a single source at can be lensed into multiple images.
Convergence and deflection potential
The reduced deflection angle can be written as
where we define the convergence
and the critical surface density (not to be confused with the critical density of the universe)
We can also define the deflection potential
such that the scaled deflection angle is just the gradient of the potential and the convergence is half the Laplacian of the potential:
The deflection potential can also be written as a scaled projection of the Newtonian gravitational potential of the lens
Lensing Jacobian
The Jacobian between the unlensed and lensed coordinate systems is
where is the Kronecker delta. Because the matrix of second derivatives must be symmetric, the Jacobian can be decomposed into a diagonal term involving the convergence and a trace-free term involving the shear
where is the angle between and the x-axis. The term involving the convergence magnifies the image by increasing its size while conserving surface brightness. The term involving the shear stretches the image tangentially around the lens, as discussed in weak lensing observables.
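A minimal sketch of this decomposition, assuming the sign convention commonly used in weak lensing in which the Jacobian is A = [[1 - kappa - gamma1, -gamma2], [-gamma2, 1 - kappa + gamma1]] and the magnification is 1/det(A); the numerical values are illustrative.

```python
# Minimal sketch of the lensing Jacobian in the usual weak-lensing convention
#   A = [[1 - kappa - gamma1, -gamma2], [-gamma2, 1 - kappa + gamma1]],
# and the magnification mu = 1 / det(A). Values are illustrative.
import numpy as np

def lensing_jacobian(kappa: float, gamma1: float, gamma2: float) -> np.ndarray:
    return np.array([[1.0 - kappa - gamma1, -gamma2],
                     [-gamma2, 1.0 - kappa + gamma1]])

def magnification(kappa: float, gamma1: float, gamma2: float) -> float:
    return 1.0 / np.linalg.det(lensing_jacobian(kappa, gamma1, gamma2))

kappa, g1, g2 = 0.3, 0.1, 0.05
print(lensing_jacobian(kappa, g1, g2))
print(f"magnification: {magnification(kappa, g1, g2):.3f}")
```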
The shear defined here is not equivalent to the shear traditionally defined in mathematics, though both stretch an image non-uniformly.
Fermat surface
There is an alternative way of deriving the lens equation, starting from the photon arrival time (Fermat surface)
where is the time to travel an infinitesimal line element along the source-observer straight line in vacuum, which is
then corrected by the factor
to get the line element along the bent path with a varying small pitch angle and the refraction index
for the "aether", i.e., the gravitational field. The last can be obtained from the fact that a photon travels on a null geodesic of a weakly perturbed static Minkowski universe
where the uneven gravitational potential drives a change in the speed of light
So the refraction index
The refraction index is greater than unity because the gravitational potential is negative.
Putting these together and keeping the leading terms, we have the arrival time surface
The first term is the straight path travel time, the second term is the extra geometric path, and the third is the gravitational delay.
Make the triangle approximation that for the path between the observer and the lens,
and for the path between the lens and the source.
The geometric delay term becomes
(How? There is no on the left. Angular diameter distances don't add in a simple way, in general.)
So the Fermat surface becomes
where is so-called dimensionless time delay, and the 2D lensing potential
The images lie at the extrema of this surface, so the variation of with is zero,
which is the lens equation. Take the Poisson's equation for 3D potential
and we find the 2D lensing potential
Here we assumed the lens is a collection of point masses at angular coordinates and distances
Use for very small we find
One can compute the convergence by applying the 2D Laplacian of the 2D lensing potential
in agreement with earlier definition as the ratio of projected density with the critical density.
Here we used and
We can also confirm the previously defined reduced deflection angle
where is the so-called Einstein angular radius of a point lens . For a single point lens at the origin we recover the standard result
that there will be two images at the two solutions of the essentially quadratic equation
The amplification matrix can be obtained by double derivatives of the dimensionless time delay
where we have defined the derivatives
which take on the meaning of convergence and shear.
where a positive means either a maxima or a minima, and a negative means a saddle point in the arrival surface.
For a single point lens, one can show (albeit through a lengthy calculation) that
So the amplification of a point lens is given by
Note that A diverges for images at the Einstein radius.
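A minimal sketch of the point-lens case just described, assuming the standard results theta_± = (beta ± sqrt(beta² + 4 theta_E²))/2 for the two image positions and mu = 1/(1 - (theta_E/theta)⁴) for their amplifications; angles are expressed in units of the Einstein radius, and the source position is an arbitrary example.

```python
# Minimal sketch: images of a point-mass lens. The lens equation
#   beta = theta - theta_E**2 / theta
# has two solutions, each with amplification mu = 1/(1 - (theta_E/theta)**4).
# Angles are in units of the Einstein radius (theta_E = 1), purely illustrative.
import math

def point_lens_images(beta: float, theta_e: float = 1.0):
    disc = math.sqrt(beta**2 + 4.0 * theta_e**2)
    thetas = [(beta + disc) / 2.0, (beta - disc) / 2.0]
    mags = [1.0 / (1.0 - (theta_e / t)**4) for t in thetas]
    return list(zip(thetas, mags))

images = point_lens_images(beta=0.3)
for theta, mu in images:
    print(f"theta = {theta:+.3f} theta_E,  mu = {mu:+.3f}")
print("total |mu| =", round(sum(abs(m) for _, m in images), 3))   # ~3.445
```

The negative amplification of the inner image signals a saddle point of the arrival surface, i.e. a parity-flipped image.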
In cases where there are multiple point lenses plus a smooth background of (dark) particles of surface density , the arrival time surface is
To compute the amplification, e.g., at the origin (0,0), due to identical point masses distributed at
we have to add up the total shear and include the convergence of the smooth background,
This generally creates a network of critical curves, lines connecting image points of infinite amplification.
General weak lensing
In weak lensing by large-scale structure, the thin-lens approximation may break down, and low-density extended structures may not be well approximated by multiple thin-lens planes. In this case, the deflection can be derived by instead assuming that the gravitational potential is slowly varying everywhere (for this reason, this approximation is not valid for strong lensing).
This approach assumes the universe is well described by a Newtonian-perturbed FRW metric, but it makes no other assumptions about the distribution of the lensing mass.
As in the thin-lens case, the effect can be written as a mapping from the unlensed angular position $\vec\beta$ to the lensed position $\vec\theta$. The Jacobian of the transform can be written as an integral over the gravitational potential $\Phi$ along the line of sight,
$$A_{ij} = \frac{\partial\beta_i}{\partial\theta_j} = \delta_{ij} - \frac{2}{c^2}\int_0^{\chi_\infty} d\chi\; g(\chi)\,\frac{\partial^2\Phi}{\partial x_i\,\partial x_j},$$
where $\chi$ is the comoving distance, $x_i = \chi\,\theta_i$ are the transverse distances, and
$$g(\chi) = \chi\int_\chi^{\chi_\infty} d\chi'\,\left(1-\frac{\chi}{\chi'}\right)n(\chi')$$
is the lensing kernel, which defines the efficiency of lensing for a distribution of sources $n(\chi)$.
The Jacobian can be decomposed into convergence and shear terms just as with the thin-lens case, and in the limit of a lens that is both thin and weak, their physical interpretations are the same.
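The kernel integral above can be evaluated numerically. The sketch below is a minimal illustration assuming the kernel form written above, with a made-up Gaussian source distribution and comoving-distance grid; none of the numbers come from the article.

```python
# Minimal numerical sketch of g(chi) = chi * Int d(chi') n(chi') (1 - chi/chi'),
# using an illustrative Gaussian source distribution on a toy distance grid.
import numpy as np

chi = np.linspace(0.0, 3000.0, 601)                 # comoving distance grid [Mpc], illustrative
n_src = np.exp(-0.5 * ((chi - 1500.0) / 300.0) ** 2)
n_src /= np.trapz(n_src, chi)                       # normalize the source distribution

def lensing_kernel(chi_grid, n_of_chi):
    g = np.zeros_like(chi_grid)
    for k, c in enumerate(chi_grid[1:], start=1):
        mask = chi_grid >= c                        # only sources behind the lens plane contribute
        w = n_of_chi[mask] * (1.0 - c / chi_grid[mask])
        g[k] = c * np.trapz(w, chi_grid[mask])
    return g

g = lensing_kernel(chi, n_src)
print(chi[np.argmax(g)])   # the kernel peaks roughly midway between observer and sources
```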
Weak lensing observables
In weak gravitational lensing, the Jacobian is mapped out by observing the effect of the shear on the ellipticities of background galaxies. This effect is purely statistical; the shape of any galaxy will be dominated by its random, unlensed shape, but lensing will produce a spatially coherent distortion of these shapes.
Measures of ellipticity
In most fields of astronomy, the ellipticity is defined as $1-q$, where $q = b/a$ is the axis ratio of the ellipse. In weak gravitational lensing, two different definitions are commonly used, and both are complex quantities which specify both the axis ratio $q$ and the position angle $\phi$:
$$\chi = \frac{1-q^2}{1+q^2}\,e^{2i\phi},\qquad \epsilon = \frac{1-q}{1+q}\,e^{2i\phi}.$$
Like the traditional ellipticity, the magnitudes of both of these quantities range from 0 (circular) to 1 (a line segment). The position angle is encoded in the complex phase, but because of the factor of 2 in the trigonometric arguments, ellipticity is invariant under a rotation of 180 degrees. This is to be expected; an ellipse is unchanged by a 180° rotation. Taken as imaginary and real parts, the real part of the complex ellipticity describes the elongation along the coordinate axes, while the imaginary part describes the elongation at 45° from the axes.
The ellipticity is often written as a two-component vector instead of a complex number, though it is not a true vector with regard to transforms:
$$\chi = (\chi_1,\chi_2) = \big(|\chi|\cos 2\phi,\;|\chi|\sin 2\phi\big).$$
Real astronomical background sources are not perfect ellipses. Their ellipticities can be measured by finding a best-fit elliptical model to the data, or by measuring the second moments of the image about some centroid $(\bar x,\bar y)$,
$$Q_{xx} = \frac{\int I(x,y)\,(x-\bar x)^2\,dx\,dy}{\int I(x,y)\,dx\,dy},\qquad Q_{yy} = \frac{\int I(x,y)\,(y-\bar y)^2\,dx\,dy}{\int I(x,y)\,dx\,dy},\qquad Q_{xy} = \frac{\int I(x,y)\,(x-\bar x)(y-\bar y)\,dx\,dy}{\int I(x,y)\,dx\,dy}.$$
The complex ellipticities are then
$$\chi = \frac{Q_{xx}-Q_{yy}+2iQ_{xy}}{Q_{xx}+Q_{yy}},\qquad \epsilon = \frac{Q_{xx}-Q_{yy}+2iQ_{xy}}{Q_{xx}+Q_{yy}+2\sqrt{Q_{xx}Q_{yy}-Q_{xy}^2}}.$$
This can be used to relate the second moments to traditional ellipse parameters; up to an overall normalization set by the radial profile,
$$Q_{xx} \propto a^2\cos^2\phi + b^2\sin^2\phi,\qquad Q_{yy} \propto a^2\sin^2\phi + b^2\cos^2\phi,\qquad Q_{xy} \propto (a^2-b^2)\sin\phi\cos\phi,$$
and in reverse,
$$\tan 2\phi = \frac{2Q_{xy}}{Q_{xx}-Q_{yy}},\qquad \frac{b^2}{a^2} = \frac{Q_{xx}+Q_{yy}-\sqrt{(Q_{xx}-Q_{yy})^2+4Q_{xy}^2}}{Q_{xx}+Q_{yy}+\sqrt{(Q_{xx}-Q_{yy})^2+4Q_{xy}^2}}.$$
The unweighted second moments above are problematic in the presence of noise, neighboring objects, or extended galaxy profiles, so it is typical to use apodized moments instead,
$$Q_{xx} = \frac{\int I(x,y)\,w(x,y)\,(x-\bar x)^2\,dx\,dy}{\int I(x,y)\,w(x,y)\,dx\,dy},$$
and similarly for $Q_{yy}$ and $Q_{xy}$. Here $w(x,y)$ is a weight function that typically goes to zero, or quickly approaches zero, at some finite radius.
Image moments cannot generally be used to measure the ellipticity of galaxies without correcting for observational effects, particularly the point spread function.
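For concreteness, here is a hypothetical snippet that measures the complex ellipticities $\chi$ and $\epsilon$ from (optionally weighted) second moments of a pixelated image, following the definitions above; the elliptical Gaussian test image and all numbers are invented for illustration.

```python
# Hypothetical sketch: complex ellipticities from second image moments.
import numpy as np

def complex_ellipticities(image, weight=None):
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    w = image if weight is None else image * weight
    norm = w.sum()
    xc, yc = (w * x).sum() / norm, (w * y).sum() / norm
    qxx = (w * (x - xc) ** 2).sum() / norm
    qyy = (w * (y - yc) ** 2).sum() / norm
    qxy = (w * (x - xc) * (y - yc)).sum() / norm
    chi = (qxx - qyy + 2j * qxy) / (qxx + qyy)
    eps = (qxx - qyy + 2j * qxy) / (qxx + qyy + 2.0 * np.sqrt(qxx * qyy - qxy ** 2))
    return chi, eps

# elliptical Gaussian test image, axis ratio q = 0.5, major axis along x
ny = nx = 129
y, x = np.mgrid[0:ny, 0:nx].astype(float)
img = np.exp(-0.5 * (((x - 64.0) / 12.0) ** 2 + ((y - 64.0) / 6.0) ** 2))
chi, eps = complex_ellipticities(img)
print(chi.real, eps.real)   # roughly (1-q^2)/(1+q^2) = 0.6 and (1-q)/(1+q) = 0.33
```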
Shear and reduced shear
Recall that the lensing Jacobian can be decomposed into the shear $\gamma$ and the convergence $\kappa$.
Acting on a circular background source with radius $R$, lensing generates an ellipse with major and minor axes
$$a = \frac{R}{1-\kappa-|\gamma|},\qquad b = \frac{R}{1-\kappa+|\gamma|},$$
as long as the shear and convergence do not change appreciably over the size of the source (in that case, the lensed image is not an ellipse). Galaxies are not intrinsically circular, however, so it is necessary to quantify the effect of lensing on a non-zero ellipticity.
We can define the complex shear in analogy to the complex ellipticities defined above,
$$\gamma \equiv \gamma_1 + i\gamma_2 = |\gamma|\,e^{2i\phi},$$
as well as the reduced shear
$$g \equiv \frac{\gamma}{1-\kappa}.$$
The lensing Jacobian can now be written as
$$A = (1-\kappa)\begin{pmatrix}1-g_1 & -g_2\\ -g_2 & 1+g_1\end{pmatrix}.$$
For a reduced shear $g$ and unlensed complex ellipticities $\chi_s$ and $\epsilon_s$, the lensed ellipticities are
$$\chi = \frac{\chi_s + 2g + g^2\chi_s^*}{1 + |g|^2 + 2\,\mathrm{Re}(g\,\chi_s^*)},\qquad \epsilon = \frac{\epsilon_s + g}{1 + g^*\epsilon_s}.$$
In the weak lensing limit, $\kappa \ll 1$ and $|\gamma| \ll 1$, so
$$\chi \approx \chi_s + 2g \approx \chi_s + 2\gamma,\qquad \epsilon \approx \epsilon_s + g \approx \epsilon_s + \gamma.$$
If we can assume that the sources are randomly oriented, their complex ellipticities average to zero, so
$$\langle\chi\rangle = 2g \approx 2\gamma$$
and $\langle\epsilon\rangle = g \approx \gamma$.
This is the principal equation of weak lensing: the average ellipticity of background galaxies is a direct measure of the shear induced by foreground mass.
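A toy demonstration of this estimator, with invented numbers: draw isotropic intrinsic ellipticities, apply the $\epsilon$-transformation quoted above for a small reduced shear, and verify that the mean recovers $g$ up to shape noise.

```python
# Toy sketch of the weak-lensing estimator <epsilon> ~ g; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
g_true = 0.03 + 0.01j                         # reduced shear to recover
n_gal = 200_000

# random intrinsic ellipticities (Gaussian components, clipped so |eps| < 1)
eps_int = 0.2 * (rng.standard_normal(n_gal) + 1j * rng.standard_normal(n_gal))
eps_int = np.where(np.abs(eps_int) < 1.0, eps_int, 0.99 * eps_int / np.abs(eps_int))

# lensing transformation for |g| < 1: eps_obs = (eps + g) / (1 + conj(g) * eps)
eps_obs = (eps_int + g_true) / (1.0 + np.conj(g_true) * eps_int)

print(eps_obs.mean())   # close to g_true, with shape noise ~ sigma_eps / sqrt(n_gal)
```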
Magnification
While gravitational lensing preserves surface brightness, as dictated by Liouville's theorem, lensing does change the apparent solid angle of a source. The amount of magnification is given by the ratio of the image area to the source area. For a circularly symmetric lens, the magnification factor μ is given by
$$\mu = \frac{\theta}{\beta}\,\frac{d\theta}{d\beta}.$$
In terms of convergence and shear,
$$\mu = \frac{1}{\det A} = \frac{1}{(1-\kappa)^2 - |\gamma|^2}.$$
For this reason, the Jacobian is also known as the "inverse magnification matrix".
The reduced shear is invariant under scaling of the Jacobian $A$ by a scalar $\lambda$, which is equivalent to the transformations
$$1-\kappa' = \lambda\,(1-\kappa)$$
and
$$\gamma' = \lambda\,\gamma.$$
Thus, $\kappa$ can only be determined up to the transformation $\kappa \to \lambda\kappa + (1-\lambda)$, which is known as the "mass sheet degeneracy." In principle, this degeneracy can be broken if an independent measurement of the magnification is available, because the magnification is not invariant under the aforementioned degeneracy transformation. Specifically, $\mu$ scales with $\lambda$ as $\mu \to \lambda^{-2}\mu$.
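A small sketch, with made-up values, of the two statements above: the reduced shear is unchanged by the mass-sheet transformation while the magnification rescales as $\lambda^{-2}$.

```python
# Minimal check: mu = 1/((1-kappa)**2 - |gamma|**2); the transformation
# kappa -> lam*kappa + (1-lam), gamma -> lam*gamma leaves g = gamma/(1-kappa)
# unchanged and rescales mu by 1/lam**2. Values are illustrative.
def mu(kappa, gamma):
    return 1.0 / ((1.0 - kappa) ** 2 - abs(gamma) ** 2)

def reduced_shear(kappa, gamma):
    return gamma / (1.0 - kappa)

kappa, gamma, lam = 0.2, 0.1 + 0.05j, 0.7
kappa2, gamma2 = lam * kappa + (1.0 - lam), lam * gamma

print(reduced_shear(kappa, gamma), reduced_shear(kappa2, gamma2))   # identical
print(mu(kappa2, gamma2) * lam ** 2, mu(kappa, gamma))              # identical
```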
References
Astrophysics
Effects of gravity
Gravitational lensing | Gravitational lensing formalism | ["Physics", "Astronomy"] | 2,353 | ["Astronomical sub-disciplines", "Astrophysics"] |
17,841,907 | https://en.wikipedia.org/wiki/Malthusian%20equilibrium | A population is in Malthusian equilibrium when all of its production is used only for subsistence. The Malthusian equilibrium is a dynamic equilibrium and is locally stable.
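As a toy illustration only (none of this is from the article), a minimal model with diminishing returns to labor converges to the equilibrium where output per person equals subsistence; the functional form and every parameter value are assumptions.

```python
# Toy Malthusian dynamics: income per person falls as population grows, and the
# population grows only while income exceeds subsistence. All values are illustrative.
def simulate(pop=10.0, A=100.0, alpha=0.5, subsistence=1.0, rate=0.05, steps=2000):
    for _ in range(steps):
        income = A * pop ** (-alpha)                         # diminishing returns to labor
        pop *= 1.0 + rate * (income - subsistence) / subsistence
    return pop, A * pop ** (-alpha)

pop, income = simulate()
print(pop, income)   # income per person settles at the subsistence level (here 1.0)
```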
See also
Thomas Malthus — See this article for further exposition.
An Essay on the Principle of Population
Malthusian growth model
Malthusian trap
Population dynamics
References
Population
Mathematical modeling | Malthusian equilibrium | ["Mathematics"] | 69 | ["Applied mathematics", "Mathematical modeling"] |
17,842,616 | https://en.wikipedia.org/wiki/Keith%20Campbell%20%28biologist%29 | Keith Henry Stockman Campbell (23 May 1954 – 5 October 2012) was a British biologist who was a member of the team at Roslin Institute that in 1996 first cloned a mammal, a Finnish Dorset lamb named Dolly, from fully differentiated adult mammary cells. He was Professor of Animal Development at the University of Nottingham. In 2008, he received the Shaw Prize for Medicine and Life Sciences jointly with Ian Wilmut and Shinya Yamanaka for "their works on the cell differentiation in mammals".
Education
Campbell was born in Birmingham, England, to an English mother and Scottish father. He started his education in Perth, Scotland, but, when he was eight years old, his family returned to Birmingham, where he attended King Edward VI Camp Hill School for Boys. He obtained his Bachelor of Science degree in microbiology from the Queen Elizabeth College, University of London (now part of King's College London). In 1983 Campbell was awarded the Marie Curie Research Scholarship, which led to postgraduate studies and later his PhD from the University of Sussex (Brighton, England, UK).
Research and career
Campbell's interest in cloning mammals was inspired by work done by Karl Illmensee and John Gurdon. Working at the Roslin Institute since 1991, Campbell became involved with the cloning efforts led by Ian Wilmut. In July 1995 Keith Campbell and Bill Ritchie succeeded in producing a pair of lambs, Megan and Morag from embryonic cells, which had differentiated in culture.
In 1996, a team led by Ian Wilmut with Keith Campbell as the main contributor, used the same technique and shocked the world by successfully cloning a sheep from adult mammary cells. Dolly, a Finn Dorset sheep named after the singer Dolly Parton, was born in 1996 and lived to be six years old (dying from a viral infection and not old age, as has been suggested). Campbell had a key role in the creation of Dolly, as he had the crucial idea of co-ordinating the stages of the "cell cycle" of the donor somatic cells and the recipient eggs and using diploid quiescent or "G0" arrested somatic cells as nuclear donors. In 2006, Ian Wilmut admitted that Campbell deserved "66 per cent" of the credit.
In 1997, Ritchie and Campbell in collaboration with PPL (Pharmaceutical Proteins Limited) created another sheep named "Polly", created from genetically altered skin cells containing a human gene. In 2000, after joining PPL Ltd, Campbell and his PPL team (based in North America) were successful in producing the world's first piglets by Somatic-cell nuclear transfer (SCNT), the so-called cloning technique. Furthermore, the PPL teams based in Roslin, Scotland and Blacksburg (USA) used the technique to produce the first gene targeted domestic animals as well as a range of animals producing human therapeutic proteins in their milk.
From November 1999, Campbell held the post of Professor of Animal Development, Division of Animal Physiology, School of Biosciences at the University of Nottingham where he continued to study embryo growth and differentiation. He supported the use of SCNT for the production of personalised stem cell therapies and for the study of human diseases and the use of cybrid embryo production to overcome the lack of human eggs available for research. Stem cells can be isolated from embryonic, fetal and adult derived material and more recently by overexpression of certain genes for the production of "induced pluripotent cells". Campbell believed all potential stem cell populations should be used for both basic and applied research which may provide basic scientific knowledge and lead to the development of cell therapies.
Awards and honours
In 2008, he received the Shaw Prize for Medicine and Life Sciences jointly with Ian Wilmut and Shinya Yamanaka. He was awarded the Pioneer Award from the International Embryo Transfer Society posthumously in 2015.
Personal life
Campbell died on 5 October 2012, aged 58, after accidentally hanging himself in his bedroom at his Ingleby, Derbyshire home, whilst heavily intoxicated. It was determined at the inquest that he had been behaving erratically at the time and had no actual intention to kill himself; the verdict was a death by misadventure. He was buried at Bretby Crematorium, Derbyshire. He is survived by his wife, Kathy, and two daughters, Claire and Lauren.
References
1954 births
2012 deaths
20th-century British biologists
20th-century British inventors
21st-century British biologists
Academics of the University of Nottingham
Accidental deaths in England
Alcohol-related deaths in England
Alumni of the University of London
Alumni of the University of Sussex
Cloning
Deaths by hanging
People from Perth, Scotland
Scientists from Birmingham, West Midlands | Keith Campbell (biologist) | ["Engineering", "Biology"] | 963 | ["Cloning", "Genetic engineering"] |
14,942,693 | https://en.wikipedia.org/wiki/RAP6 | RAP6 is the abbreviation for Rab5-activating protein 6, a novel endosomal protein with a role in endocytosis. RAP6 was discovered by Alejandro Barbieri and his group of researchers (Christine Hunker, Adriana Galvis, Ivan Kruk, Hugo Giambini, Lina Torres and Maria Luisa Veisaga) working at Florida International University.
This novel human protein has been reported to be involved in membrane trafficking. It has been shown that RAP6 has a guanine nucleotide exchange factor (GEF) activity specific to Rab5 and a GTPase activating protein (GAP) activity specific to RAS.
The original GenBank identifiers (GIs) have been published in the NCBI Nucleotide database as GIs 77176718 and 77176720. Since then, several names have been coined for the validated protein, such as RabGEF1 (GeneID: 27342). RAP6 belongs to the GAPVD1 family (GeneID: 26130).
References
Cellular processes
Transport proteins | RAP6 | ["Biology"] | 222 | ["Cellular processes"] |
14,943,400 | https://en.wikipedia.org/wiki/Free%20ideal%20ring | In mathematics, especially in the field of ring theory, a (right) free ideal ring, or fir, is a ring in which all right ideals are free modules with unique rank. A ring such that all right ideals with at most n generators are free and have unique rank is called an n-fir. A semifir is a ring in which all finitely generated right ideals are free modules of unique rank. (Thus, a ring is semifir if it is n-fir for all n ≥ 0.) The semifir property is left-right symmetric, but the fir property is not.
Properties and examples
It turns out that a left and right fir is a domain. Furthermore, a commutative fir is precisely a principal ideal domain, while a commutative semifir is precisely a Bézout domain. These last facts are not generally true for noncommutative rings, however.
Every principal right ideal domain R is a right fir, since every nonzero principal right ideal of a domain is isomorphic to R. In the same way, a right Bézout domain is a semifir.
Since all right ideals of a right fir are free, they are projective. So, any right fir is a right hereditary ring, and likewise a right semifir is a right semihereditary ring. Because projective modules over local rings are free, and because local rings have invariant basis number, it follows that a local, right hereditary ring is a right fir, and a local, right semihereditary ring is a right semifir.
Unlike a principal right ideal domain, a right fir is not necessarily right Noetherian; in the commutative case, however, R is a Dedekind domain since it is a hereditary domain, and so is necessarily Noetherian.
Other important and motivating examples of free ideal rings are the free associative (unital) k-algebras over division rings k, also called non-commutative polynomial rings.
Semifirs have invariant basis number and every semifir is a Sylvester domain.
References
Further reading
Ring theory | Free ideal ring | ["Mathematics"] | 436 | ["Fields of abstract algebra", "Ring theory"] |
14,948,065 | https://en.wikipedia.org/wiki/Color%E2%80%93flavor%20locking | Color–flavor locking (CFL) is a phenomenon that is expected to occur in ultra-high-density strange matter, a form of quark matter. The quarks form Cooper pairs, whose color properties are correlated with their flavor properties in a one-to-one correspondence between three color pairs and three flavor pairs. According to the Standard Model of particle physics, the color-flavor-locked phase is the highest-density phase of three-flavor colored matter.
Color-flavor-locked Cooper pairing
If each quark is represented as $q_i^\alpha$, with color index $\alpha$ taking values 1, 2, 3 corresponding to red, green, and blue, and flavor index $i$ taking values 1, 2, 3 corresponding to up, down, and strange, then the color-flavor-locked pattern of Cooper pairing is
$$\langle q_i^\alpha\, C\gamma_5\, q_j^\beta\rangle \propto \Delta\,\epsilon^{\alpha\beta N}\epsilon_{ijN} = \Delta\left(\delta_i^\alpha\delta_j^\beta - \delta_j^\alpha\delta_i^\beta\right).$$
This means that a Cooper pair of an up quark and a down quark must have colors red and green, and so on. This pairing pattern is special because it leaves a large unbroken symmetry group.
Physical properties
The CFL phase has several remarkable properties.
It breaks chiral symmetry.
It is a superfluid.
It is an electromagnetic insulator, in which there is a "rotated" photon, containing a small admixture of one of the gluons.
It has the same symmetries as sufficiently dense hyperonic matter.
There are several variants of the CFL phase, representing distortions of the pairing structure in response to external stresses such as a difference between the mass of the strange quark and the mass of the up and down quarks.
See also
Color superconductivity
References
Quark matter
Quantum chromodynamics
Phases of matter | Color–flavor locking | ["Physics", "Chemistry"] | 337 | ["Quark matter", "Phases of matter", "Astrophysics", "Nuclear physics", "Matter"] |
14,948,458 | https://en.wikipedia.org/wiki/IEC%2060038 | International Standard IEC 60038, IEC standard voltages, defines a set of standard voltages for use in low voltage and high voltage AC and DC electricity supply systems.
Low voltage
Where two voltages are given below separated by "/", the first is the root-mean-square voltage between a phase and the neutral connector, whereas the second is the corresponding root-mean-square voltage between two phases (exception: the category shown below called "One Phase", where 240 V is the root-mean-square voltage between the two legs of a split phase). The three-phase voltages are for use in either four-wire (with neutral) or three-wire (without neutral) systems.
Three-phase 50 Hz
230 V / 400 V (formerly 220/380 V)
400 V / 690 V (formerly 380/660 V)
1000 V phase to phase (3 wire)
Suppliers using 220 V / 380 V or 240 V / 415 V systems were expected by the standard to migrate to the recommended value of 230 V / 400 V by the year 2003. This migration has already been largely completed, at least within the European Union.
Voltage conversion schedule
Three-phase 60 Hz
120 V / 208 V
240 V
230 V / 400 V
277 V / 480 V
480 V
347 V / 600 V
600 V / 1000 V
One-phase, three-wire 60 Hz (American split-phase)
120 V / 240 V
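The listed low-voltage systems can be collected into a small lookup table. The sketch below is a hypothetical helper, not part of the standard: the tolerance percentage is a caller-supplied assumption rather than a value quoted from IEC 60038.

```python
# Hypothetical helper: the nominal low-voltage systems listed above, plus a simple
# tolerance check. The 10 % default tolerance is an assumption, not a quoted value.
NOMINAL_LV_SYSTEMS = {
    "three-phase 50 Hz": ["230/400 V", "400/690 V", "1000 V"],
    "three-phase 60 Hz": ["120/208 V", "240 V", "230/400 V", "277/480 V",
                          "480 V", "347/600 V", "600/1000 V"],
    "split-phase 60 Hz": ["120/240 V"],
}

def within_tolerance(measured_volts, nominal_volts, tolerance=0.10):
    """True if a measured voltage lies within +/- `tolerance` of the nominal value."""
    return abs(measured_volts - nominal_volts) <= tolerance * nominal_volts

print(within_tolerance(247.0, 230.0))   # True: 247 V is within a +/-10 % band around 230 V
print(within_tolerance(206.0, 230.0))   # False: just outside a -10 % band
```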
Table 3 1 kV to 35 kV
Table 3 of IEC 60038 lists nominal voltages above 1 kV and not exceeding 35 kV. There are two series; one runs from 3 kV up to 35 kV.
Table 4 35 kV - 230 kV
Table 4 shows nominal voltages above 35 kV and not exceeding 230 kV.
Table 5 245 - 1,200 kV
Table 5 is systematically different, as the highest voltage for equipment is the characteristic value exceeding 245 kV. The enumeration begins at 300 kV and ends with 1200 kV.
See also
Mains electricity by country
References
External links
Definition of Voltage ranges as per IEC 60038 – WIKI - Electrical Installation Guide
https://webstore.iec.ch/preview/info_iec60038%7Bed7.0%7Db.pdf
Electric power distribution
60038
Electrical wiring | IEC 60038 | ["Physics", "Technology", "Engineering"] | 459 | ["Electrical systems", "Building engineering", "Computer standards", "IEC standards", "Physical systems", "Electrical engineering", "Electrical wiring"] |
14,949,005 | https://en.wikipedia.org/wiki/Dry%20lubricant | Dry lubricants or solid lubricants are materials that, despite being in the solid phase, are able to reduce friction between two surfaces sliding against each other without the need for a liquid oil medium.
The two main dry lubricants are graphite and molybdenum disulfide. They offer lubrication at temperatures higher than liquid and oil-based lubricants operate. Dry lubricants are often used in applications such as locks or dry lubricated bearings. Such materials can operate up to 350 °C (662 °F) in oxidizing environments and even higher in reducing / non-oxidizing environments (molybdenum disulfide up to 1100 °C, 2012 °F). The low-friction characteristics of most dry lubricants are attributed to a layered structure on the molecular level with weak bonding between layers. Such layers are able to slide relative to each other with minimal applied force, thus giving them their low friction properties.
However, a layered crystal structure alone is not necessarily sufficient for lubrication. In fact, there are some solids with non-lamellar structures that function well as dry lubricants in some applications. These include certain soft metals (indium, lead, silver, tin), polytetrafluoroethylene, some solid oxides, rare-earth fluorides, and even diamond.
Limited interest has been shown in low friction properties of compacted oxide glaze layers formed at several hundred degrees Celsius in metallic sliding systems. However, practical use is still many years away due to their physically unstable nature.
The four most commonly used solid lubricants are:
Graphite. Used in air compressors, food industry, railway track joints, brass instrument valves, piano actions, open gear, ball bearings, machine-shop works, etc. It is also very common for lubricating locks, since a liquid lubricant allows particles to get stuck in the lock worsening the problem. It is often used to lubricate the internal moving parts of firearms in sandy environments.
Molybdenum disulfide (MoS2). Used in CV joints and space vehicles. Does lubricate in vacuum.
Hexagonal boron nitride. Used in space vehicles. Also called "white graphite."
Tungsten disulfide. Similar usage as molybdenum disulfide, but due to the high cost only found in some dry lubricated bearings.
Graphite and molybdenum disulfide are the predominant materials used as dry lubricants.
Structure-function relationship
The lubricity of many solids is attributable to a lamellar structure. The lamellae orient parallel to the surface in the direction of motion and slide easily over each other resulting in low friction and preventing contact between sliding components even under high loads. Large particles perform best on rough surfaces at low speed, finer particles on smoother surfaces and at higher speeds. These materials may be added in the form of dry powder to liquid lubricants to modify or enhance their properties.
Other components that are useful solid lubricants include boron nitride, polytetrafluoroethylene (PTFE), talc, calcium fluoride, cerium fluoride, and tungsten disulfide.
Applications
Solid lubricants are useful for conditions when conventional lubricants are inadequate, such as:
Reciprocating motion. A typical application is a sliding or reciprocating motion that requires lubrication to minimize wear, as, for example, in gear and chain lubrication. Liquid lubricants will squeeze out while solid lubricants do not escape, preventing fretting, corrosion, and galling.
Ceramics. Another application is for cases where chemically active lubricant additives have not been found for a particular surface, such as polymers and ceramics.
High temperature. Graphite and MoS2 act as lubricants at high temperature and in oxidizing atmosphere environments, where liquid lubricants typically will not survive. A typical application involves fasteners that are easily tightened and unscrewed after a long stay at high temperatures.
Extreme contact pressures. The lamellar structure orients parallel to the sliding surface, resulting in high bearing-load combined with a low shear stress. Most applications in metal forming that involve plastic deformation use solid lubricants.
Graphite
Graphite is structurally composed of planes of polycyclic carbon atoms that are hexagonal in orientation. The distance of carbon atoms between planes is longer and, therefore, the bonding is weaker.
Graphite is best suited for lubrication in air. Water vapor is a necessary component for graphite lubrication. The adsorption of water reduces the bonding energy between the hexagonal planes of the graphite to a lower level than the adhesion energy between a substrate and the graphite. Because water vapor is a requirement for lubrication, graphite is not effective in vacuum. Because it is electrically conductive, graphite can promote galvanic corrosion. In an oxidative atmosphere, graphite is effective at high temperatures up to 450 °C continuously and can withstand much higher temperature peaks.
Graphite is characterized by two main groups: natural and synthetic.
Synthetic graphite is a high temperature sintered product and is characterized by its high purity of carbon (99.5−99.9%). Primary grade synthetic graphite can approach the good lubricity of quality natural graphite.
Natural graphite is derived from mining. The quality of natural graphite varies as a result of the ore quality and its post-mining processing. The end product is graphite with a content of carbon (high grade graphite 96−98% carbon), sulfur, SiO2, and ash. The higher the carbon content and the degree of graphitization (high crystalline) the better the lubricity and resistance to oxidation.
For applications where only a minor lubricity is needed and a more thermally insulating coating is required, then amorphous graphite would be chosen (80% carbon).
Molybdenum disulfide
MoS2 is mined from some sulfide-rich deposits and refined to achieve a purity suitable for lubricants. Like graphite, MoS2 has a hexagonal crystal structure with the intrinsic property of easy shear. MoS2 lubrication performance often exceeds that of graphite and is effective in vacuum as well, whereas graphite is not. The temperature limitation of MoS2 at 400 °C is restricted by oxidation. Particle size and film thickness are important parameters that should be matched to the surface roughness of the substrate. Large particles may result in excessive wear by abrasion caused by impurities in the MoS2, and small particles may result in accelerated oxidation.
Boron nitride
Hexagonal boron nitride is a ceramic powder lubricant. The most interesting lubricant feature is its high temperature resistance of 1200 °C service temperature in an oxidizing atmosphere. Furthermore, boron nitride has a high thermal conductivity. (Cubic boron nitride is very hard and used as an abrasive and cutting tool component.)
Polytetrafluoroethylene
Polytetrafluoroethylene (PTFE) is widely used as an additive in lubricating oils and greases. Due to the low surface energy of PTFE, stable unflocculated dispersions of PTFE in oil or water can be produced. Contrary to the other solid lubricants discussed, PTFE does not have a layered structure. The macro molecules of PTFE slip easily along each other, similar to lamellar structures. PTFE shows one of the smallest coefficients of static and dynamic friction, down to 0.04. Operating temperatures are limited to about 260 °C.
Application methods
Spraying/dipping/brushing
Dispersion of solid lubricant as an additive in oil, water, or grease is most commonly used. For parts that are inaccessible for lubrication after assembly, a dry film lubricant can be sprayed. After the solvent evaporates, the coating cures at room temperature to form a solid lubricant. Pastes are grease-like lubricants containing a high percentage of solid lubricants used for assembly and lubrication of highly loaded, slow-moving parts. Black pastes generally contain MoS2. For high temperatures above 500 °C, pastes are composed on the basis of metal powders to protect metal parts from oxidation necessary to facilitate disassembly of threaded connections and other assemblies.
Free powders
Dry-powder tumbling is an effective application method. The bonding can be improved by prior phosphating of the substrate. Use of free powders has its limitations, since adhesion of the solid particles to the substrate is usually insufficient to provide any service life in continuous applications. However, to improve running-in conditions or in metal-forming processes, a short duration of the improved slide conditions may suffice.
Anti-friction coatings
Anti-friction (AF) coatings are "lubricating paints" consisting of fine particles of lubricating pigments, such as molydisulfide, PTFE or graphite, blended with a binder. After application and proper curing, these "slippery" or dry lubricants bond to the metal surface and form a dark gray solid film. Many dry film lubricants contain special rust inhibitors which offer exceptional corrosion protection. Most long-wearing films are of the bonded type but are still restricted to applications where sliding distances are not too long. AF coatings are applied where fretting and galling is a problem (such as splines, universal joints and keyed bearings), where operating pressures exceed the load-bearing capacities of ordinary oils and greases, where smooth running in is desired (piston, camshaft), where clean operation is desired (AF coatings will not collect dirt and debris like greases and oils), and where parts may be stored for long periods.
Composites
Self-lubricating composites: Solid lubricants such as PTFE, graphite, MoS2 and some other anti-friction and anti-wear additives are often compounded in polymers and all kinds of sintered materials. MoS2, for example, is compounded in materials for sleeve bearings, elastomer O-rings, carbon brushes, etc. Solid lubricants are compounded in plastics to form a "self-lubricating" or "internally lubricated" thermoplastic composite. For example, PTFE particles compounded in the plastic form a PTFE film over the mating surface, resulting in a reduction of friction and wear. MoS2 compounded in nylon reduces wear, friction and stick-slip. Furthermore, it acts as a nucleating agent, resulting in a very fine crystalline structure. The primary use of graphite lubricated thermoplastics is in applications operating in aqueous environments.
References
Further reading
Sliney, Harold E., Solid Lubricants, NASA Technical Memorandum TM-103803, 1991. Available at hdl.handle.net/2060/19910013083.
Lubricants
Tribology | Dry lubricant | ["Chemistry", "Materials_science", "Engineering"] | 2,355 | ["Tribology", "Mechanical engineering", "Surface science", "Materials science"] |
9,821,563 | https://en.wikipedia.org/wiki/Metal%E2%80%93organic%20framework | Metal–organic frameworks (MOFs) are a class of porous polymers consisting of metal clusters (also known as Secondary Building Units - SBUs) coordinated to organic ligands to form one-, two- or three-dimensional structures. The organic ligands included are sometimes referred to as "struts" or "linkers", one example being 1,4-benzenedicarboxylic acid (BDC).
More formally, a metal–organic framework is a potentially porous extended structure made from metal ions and organic linkers. An extended structure is a structure whose sub-units occur in a constant ratio and are arranged in a repeating pattern. MOFs are a subclass of coordination networks, which is a coordination compound extending, through repeating coordination entities, in one dimension, but with cross-links between two or more individual chains, loops, or spiro-links, or a coordination compound extending through repeating coordination entities in two or three dimensions. Coordination networks including MOFs further belong to coordination polymers, which is a coordination compound with repeating coordination entities extending in one, two, or three dimensions. Most of the MOFs reported in the literature are crystalline compounds, but there are also amorphous MOFs, and other disordered phases.
In most cases for MOFs, the pores are stable during the elimination of the guest molecules (often solvents) and could be refilled with other compounds. Because of this property, MOFs are of interest for the storage of gases such as hydrogen and carbon dioxide. Other possible applications of MOFs are in gas purification, in gas separation, in water remediation, in catalysis, as conducting solids and as supercapacitors.
The synthesis and properties of MOFs constitute the primary focus of the discipline called reticular chemistry (from Latin reticulum, "small net"). In contrast to MOFs, covalent organic frameworks (COFs) are made entirely from light elements (H, B, C, N, and O) with extended structures.
Structure
MOFs are composed of two main components: an inorganic metal cluster (often referred to as a secondary-building unit or SBU) and an organic molecule called a linker. For this reason, the materials are often referred to as hybrid organic-inorganic materials. The organic units are typically mono-, di-, tri-, or tetravalent ligands. The choice of metal and linker dictates the structure and hence properties of the MOF. For example, the metal's coordination preference influences the size and shape of pores by dictating how many ligands can bind to the metal, and in which orientation.
To describe and organize the structures of MOFs, a system of nomenclature has been developed. Subunits of a MOF, called secondary building units (SBUs), can be described by topologies common to several structures. Each topology, also called a net, is assigned a symbol, consisting of three lower-case letters in bold. MOF-5, for example, has a pcu net.
Attached to the SBUs are bridging ligands. For MOFs, typical bridging ligands are di- and tricarboxylic acids. These ligands typically have rigid backbones. Examples are benzene-1,4-dicarboxylic acid (BDC or terephthalic acid), biphenyl-4,4-dicarboxylic acid (BPDC), and the tricarboxylic acid trimesic acid.
Synthesis
General synthesis
The study of MOFs has roots in coordination chemistry and solid-state inorganic chemistry, but it developed into a new field. In addition, MOFs are constructed from bridging organic ligands that remain intact throughout the synthesis. Zeolite synthesis often makes use of a "template". Templates are ions that influence the structure of the growing inorganic framework. Typical templating ions are quaternary ammonium cations, which are removed later. In MOFs, the framework is templated by the SBU (secondary building unit) and the organic ligands. A templating approach that is useful for MOFs intended for gas storage is the use of metal-binding solvents such as N,N-diethylformamide and water. In these cases, metal sites are exposed when the solvent is evacuated, allowing hydrogen to bind at these sites.
Four developments were particularly important in advancing the chemistry of MOFs. (1) The geometric principle of construction where metal-containing units were kept in rigid shapes. Early MOFs contained single atoms linked to ditopic coordinating linkers. The approach not only led to the identification of a small number of preferred topologies that could be targeted in designed synthesis, but was the central point to achieve a permanent porosity. (2) The use of the isoreticular principle where the size and the nature of a structure changes without changing its topology led to MOFs with ultrahigh porosity and unusually large pore openings. (3) Post- synthetic modification of MOFs increased their functionality by reacting organic units and metal-organic complexes with linkers. (4) Multifunctional MOFs incorporated multiple functionalities in a single framework.
Since ligands in MOFs typically bind reversibly, the slow growth of crystals often allows defects to be redissolved, resulting in a material with millimeter-scale crystals and a near-equilibrium defect density. Solvothermal synthesis is useful for growing crystals suitable to structure determination, because crystals grow over the course of hours to days. However, the use of MOFs as storage materials for consumer products demands an immense scale-up of their synthesis. Scale-up of MOFs has not been widely studied, though several groups have demonstrated that microwaves can be used to nucleate MOF crystals rapidly from solution. This technique, termed "microwave-assisted solvothermal synthesis", is widely used in the zeolite literature, and produces micron-scale crystals in a matter of seconds to minutes, in yields similar to the slow growth methods.
Some MOFs, such as the mesoporous MIL-100(Fe), can be obtained under mild conditions at room temperature and in green solvents (water, ethanol) through scalable synthesis methods.
A solvent-free synthesis of a range of crystalline MOFs has been described. Usually the metal acetate and the organic proligand are mixed and ground up with a ball mill. Cu3(BTC)2 can be quickly synthesised in this way in quantitative yield. In the case of Cu3(BTC)2 the morphology of the solvent free synthesised product was the same as the industrially made Basolite C300. It is thought that localised melting of the components due to the high collision energy in the ball mill may assist the reaction. The formation of acetic acid as a by-product in the reactions in the ball mill may also help in the reaction having a solvent effect in the ball mill. It has been shown that the addition of small quantities of ethanol for the mechanochemical synthesis of Cu3(BTC)2 significantly reduces the amounts of structural defects in the obtained material.
A recent advancement in the solvent-free preparation of MOF films and composites is their synthesis by chemical vapor deposition. This process, MOF-CVD, was first demonstrated for ZIF-8 and consists of two steps. In a first step, metal oxide precursor layers are deposited. In the second step, these precursor layers are exposed to sublimed ligand molecules, that induce a phase transformation to the MOF crystal lattice. Formation of water during this reaction plays a crucial role in directing the transformation. This process was successfully scaled up to an integrated cleanroom process, conforming to industrial microfabrication standards.
Numerous methods have been reported for the growth of MOFs as oriented thin films. However, these methods are suitable only for the synthesis of a small number of MOF topologies. One such example being the vapor-assisted conversion (VAC) which can be used for the thin film synthesis of several UiO-type MOFs.
High-throughput synthesis
High-throughput (HT) methods are a part of combinatorial chemistry and a tool for increasing efficiency. There are two synthetic strategies within the HT-methods: In the combinatorial approach, all reactions take place in one vessel, which leads to product mixtures. In the parallel synthesis, the reactions take place in different vessels. Furthermore, a distinction is made between thin films and solvent-based methods.
Solvothermal synthesis can be carried out conventionally in a teflon reactor in a convection oven or in glass reactors in a microwave oven (high-throughput microwave synthesis). The use of a microwave oven changes, in part dramatically, the reaction parameters.
In addition to solvothermal synthesis, there have been advances in using supercritical fluid as a solvent in a continuous flow reactor. Supercritical water was first used in 2012 to synthesize copper and nickel-based MOFs in just seconds. In 2020, supercritical carbon dioxide was used in a continuous flow reactor along the same time scale as the supercritical water-based method, but the lower critical point of carbon dioxide allowed for the synthesis of the zirconium-based MOF UiO-66.
High-throughput solvothermal synthesis
In high-throughput solvothermal synthesis, a solvothermal reactor with (e.g.) 24 cavities for teflon reactors is used. Such a reactor is sometimes referred to as a multiclav. The reactor block or reactor insert is made of stainless steel and contains 24 reaction chambers, which are arranged in four rows. With the miniaturized teflon reactors, volumes of up to 2 mL can be used. The reactor block is sealed in a stainless steel autoclave; for this purpose, the filled reactors are inserted into the bottom of the reactor, the teflon reactors are sealed with two teflon films and the reactor top side is put on. The autoclave is then closed in a hydraulic press. The sealed solvothermal reactor can then be subjected to a temperature-time program. The reusable teflon film serves to withstand the mechanical stress, while the disposable teflon film seals the reaction vessels. After the reaction, the products can be isolated and washed in parallel in a vacuum filter device. On the filter paper, the products are then present separately in a so-called sample library and can subsequently be characterized by automated X-ray powder diffraction. The informations obtained are then used to plan further syntheses.
Pseudomorphic replication
Pseudomorphic mineral replacement events occur whenever a mineral phase comes into contact with a fluid with which it is out of equilibrium. Re-equilibration will tend to take place to reduce the free energy and transform the initial phase into a more thermodynamically stable phase, involving dissolution and reprecipitation subprocesses.
Inspired by such geological processes, MOF thin films can be grown through the combination of atomic layer deposition (ALD) of aluminum oxide onto a suitable substrate (e.g. FTO) and subsequent solvothermal microwave synthesis. The aluminum oxide layer serves both as an architecture-directing agent and as a metal source for the backbone of the MOF structure. The construction of the porous 3D metal-organic framework takes place during the microwave synthesis, when the atomic layer deposited substrate is exposed to a solution of the requisite linker in a DMF/H2O 3:1 mixture (v/v) at elevated temperature. Analogous, Kornienko and coworkers described in 2015 the synthesis of a cobalt-porphyrin MOF (Al2(OH)2TCPP-Co; TCPP-H2=4,4,4,4‴-(porphyrin-5,10,15,20-tetrayl)tetrabenzoate), the first MOF catalyst constructed for the electrocatalytic conversion of aqueous to CO.
Post-synthetic modification
Although the three-dimensional structure and internal environment of the pores can be in theory controlled through proper selection of nodes and organic linking groups, the direct synthesis of such materials with the desired functionalities can be difficult due to the high sensitivity of MOF systems. Thermal and chemical sensitivity, as well as high reactivity of reaction materials, can make forming desired products challenging to achieve. The exchange of guest molecules and counter-ions and the removal of solvents allow for some additional functionality but are still limited to the integral parts of the framework. The post-synthetic exchange of organic linkers and metal ions is an expanding area of the field and opens up possibilities for more complex structures, increased functionality, and greater system control.
Ligand exchange
Post-synthetic modification techniques can be used to exchange an existing organic linking group in a prefabricated MOF with a new linker by ligand exchange or partial ligand exchange. This exchange allows for the pores and, in some cases the overall framework of MOFs, to be tailored for specific purposes. Some of these uses include fine-tuning the material for selective adsorption, gas storage, and catalysis. To perform ligand exchange prefabricated MOF crystals are washed with solvent and then soaked in a solution of the new linker. The exchange often requires heat and occurs on the time scale of a few days. Post-synthetic ligand exchange also enables the incorporation of functional groups into MOFs that otherwise would not survive MOF synthesis, due to temperature, pH, or other reaction conditions, or hinder the synthesis itself by competition with donor groups on the loaning ligand.
Metal exchange
Post-synthetic modification techniques can also be used to exchange an existing metal ion in a prefabricated MOF with a new metal ion by metal ion exchange. The complete metal metathesis from an integral part of the framework has been achieved without altering the framework or pore structure of the MOF. Similarly to post-synthetic ligand exchange, post-synthetic metal exchange is performed by washing prefabricated MOF crystals with solvent and then soaking the crystal in a solution of the new metal. Post-synthetic metal exchange allows for a simple route to the formation of MOFs with the same framework yet different metal ions.
Stratified synthesis
In addition to modifying the functionality of the ligands and metals themselves, post-synthetic modification can be used to expand upon the structure of the MOF. Using post-synthetic modification MOFs can be converted from a highly ordered crystalline material toward a heterogeneous porous material. Using post-synthetic techniques, it is possible for the controlled installation of domains within a MOF crystal which exhibit unique structural and functional characteristics. Core-shell MOFs and other layered MOFs have been prepared where layers have unique functionalization but in most cases are crystallographically compatible from layer to layer.
Open coordination sites
In some cases MOF metal nodes have an unsaturated environment, and it is possible to modify this environment using different techniques. If the size of the ligand matches the size of the pore aperture, it is possible to install additional ligands to existing MOF structure. Sometimes metal nodes have a good binding affinity for inorganic species. For instance, it was shown that metal nodes can perform an extension, and create a bond with the uranyl cation.
Composite materials
Another approach to increasing adsorption in MOFs is to alter the system in such a way that chemisorption becomes possible. This functionality has been introduced by making a composite material, which contains a MOF and a complex of platinum with activated carbon. In an effect known as hydrogen spillover, H2 can bind to the platinum surface through a dissociative mechanism which cleaves the hydrogen molecule into two hydrogen atoms and enables them to travel down the activated carbon onto the surface of the MOF. This innovation produced a threefold increase in the room-temperature storage capacity of a MOF; however, desorption can take upwards of 12 hours, and reversible desorption is sometimes observed for only two cycles. The relationship between hydrogen spillover and hydrogen storage properties in MOFs is not well understood but may prove relevant to hydrogen storage.
Catalysis
MOFs have potential as heterogeneous catalysts, although applications have not been commercialized. Their high surface area, tunable porosity, diversity in metal and functional groups make them especially attractive for use as catalysts. Zeolites are extraordinarily useful in catalysis. Zeolites are limited by the fixed tetrahedral coordination of the Si/Al connecting points and the two-coordinated oxide linkers. Fewer than 200 zeolites are known. In contrast with this limited scope, MOFs exhibit more diverse coordination geometries, polytopic linkers, and ancillary ligands (F−, OH−, H2O among others). It is also difficult to obtain zeolites with pore sizes larger than 1 nm, which limits the catalytic applications of zeolites to relatively small organic molecules (typically no larger than xylenes). Furthermore, mild synthetic conditions typically employed for MOF synthesis allow direct incorporation of delicate functionalities into the framework structures. Such a process would not be possible with zeolites or other microporous crystalline oxide-based materials because of the harsh conditions typically used for their synthesis (e.g., calcination at high temperatures to remove organic templates). Metal–organic framework MIL-101 is one of the most used MOFs for catalysis incorporating different transition metals such as Cr. However, the stability of some MOF photocatalysts in aqueous medium and under strongly oxidizing conditions is very low.
Zeolites still cannot be obtained in enantiopure form, which precludes their applications in catalytic asymmetric synthesis, e.g., for the pharmaceutical, agrochemical, and fragrance industries. Enantiopure chiral ligands or their metal complexes have been incorporated into MOFs to lead to efficient asymmetric catalysts. Even some MOF materials may bridge the gap between zeolites and enzymes when they combine isolated polynuclear sites, dynamic host–guest responses, and a hydrophobic cavity environment. MOFs might be useful for making semi-conductors. Theoretical calculations show that MOFs are semiconductors or insulators with band gaps between 1.0 and 5.5 eV which can be altered by changing the degree of conjugation in the ligands indicating its possibility for being photocatalysts.
Design
Like other heterogeneous catalysts, MOFs may allow for easier post-reaction separation and recyclability than homogeneous catalysts. In some cases, they also give a highly enhanced catalyst stability. Additionally, they typically offer substrate-size selectivity. Nevertheless, while clearly important for reactions in living systems, selectivity on the basis of substrate size is of limited value in abiotic catalysis, as reasonably pure feedstocks are generally available.
Metal ions or metal clusters
Among the earliest reports of MOF-based catalysis was the cyanosilylation of aldehydes by a 2D MOF (layered square grids) of formula Cd(4,4-bpy)2(NO3)2. This investigation centered mainly on size- and shape-selective clathration. A second set of examples was based on a two-dimensional, square-grid MOF containing single Pd(II) ions as nodes and 2-hydroxypyrimidinolates as struts. Despite initial coordinative saturation, the palladium centers in this MOF catalyze alcohol oxidation, olefin hydrogenation, and Suzuki C–C coupling. At a minimum, these reactions necessarily entail redox oscillations of the metal nodes between Pd(II) and Pd(0) intermediates accompanying by drastic changes in coordination number, which would certainly lead to destabilization and potential destruction of the original framework if all the Pd centers are catalytically active. The observation of substrate shape- and size-selectivity implies that the catalytic reactions are heterogeneous and are indeed occurring within the MOF. Nevertheless, at least for hydrogenation, it is difficult to rule out the possibility that catalysis is occurring at the surface of MOF-encapsulated palladium clusters/nanoparticles (i.e., partial decomposition sites) or defect sites, rather than at transiently labile, but otherwise intact, single-atom MOF nodes. "Opportunistic" MOF-based catalysis has been described for the cubic compound, MOF-5. This material comprises coordinatively saturated Zn4O nodes and fully complexed BDC struts (see above for abbreviation); yet it apparently catalyzes the Friedel–Crafts tert-butylation of both toluene and biphenyl. Furthermore, para alkylation is strongly favored over ortho alkylation, a behavior thought to reflect the encapsulation of reactants by the MOF.
Functional struts
The porous-framework material [Cu3(btc)2(H2O)3], also known as HKUST-1, contains large cavities having windows of diameter ~6 Å. The coordinated water molecules are easily removed, leaving open Cu(II) sites. Kaskel and co-workers showed that these Lewis acid sites could catalyze the cyanosilylation of benzaldehyde or acetone. The anhydrous version of HKUST-1 is an acid catalyst. Compared to Brønsted vs. Lewis acid-catalyzed pathways, the product selectivity are distinctive for three reactions: isomerization of α-pinene oxide, cyclization of citronellal, and rearrangement of α-bromoacetals, indicating that indeed [Cu3(btc)2] functions primarily as a Lewis acid catalyst. The product selectivity and yield of catalytic reactions (e.g. cyclopropanation) have also been shown to be impacted by defective sites, such as Cu(I) or incompletely deprotonated carboxylic acid moities of the linkers.
MIL-101, a large-cavity MOF having the formula [Cr3F(H2O)2O(BDC)3], is a cyanosilylation catalyst. The coordinated water molecules in MIL-101 are easily removed to expose Cr(III) sites. As one might expect, given the greater Lewis acidity of Cr(III) vs. Cu(II), MIL-101 is much more active than HKUST-1 as a catalyst for the cyanosilylation of aldehydes. Additionally, the Kaskel group observed that the catalytic sites of MIL-101, in contrast to those of HKUST-1, are immune to unwanted reduction by benzaldehyde. The Lewis-acid-catalyzed cyanosilylation of aromatic aldehydes has also been carried out by Long and co-workers using a MOF of the formula Mn3[(Mn4Cl)3BTT8(CH3OH)10]. This material contains a three-dimensional pore structure, with the pore diameter equaling 10 Å. In principle, either of the two types of Mn(II) sites could function as a catalyst. Noteworthy features of this catalyst are high conversion yields (for small substrates) and good substrate-size-selectivity, consistent with channellocalized catalysis.
Encapsulated catalysts
The MOF encapsulation approach invites comparison to earlier studies of oxidative catalysis by zeolite-encapsulated Fe(porphyrin) as well as Mn(porphyrin) systems. The zeolite studies generally employed iodosylbenzene (PhIO), rather than TPHP as oxidant. The difference is likely mechanistically significant, thus complicating comparisons. Briefly, PhIO is a single oxygen atom donor, while TBHP is capable of more complex behavior. In addition, for the MOF-based system, it is conceivable that oxidation proceeds via both oxygen transfer from a manganese oxo intermediate as well as a manganese-initiated radical chain reaction pathway. Regardless of mechanism, the approach is a promising one for isolating and thereby stabilizing the porphyrins against both oxo-bridged dimer formation and oxidative degradation.
Metal-free organic cavity modifiers
Most examples of MOF-based catalysis make use of metal ions or atoms as active sites. Among the few exceptions are two nickel- and two copper-containing MOFs synthesized by Rosseinsky and co-workers. These compounds employ amino acids (L- or D-aspartate) together with dipyridyls as struts. The coordination chemistry is such that the amine group of the aspartate cannot be protonated by added HCl, but one of the aspartate carboxylates can. Thus, the framework-incorporated amino acid can exist in a form that is not accessible for the free amino acid. While the nickel-based compounds are marginally porous, on account of tiny channel dimensions, the copper versions are clearly porous.
The Rosseinsky group showed that the carboxylic acids behave as Brønsted acidic catalysts, facilitating (in the copper cases) the ring-opening methanolysis of a small, cavity-accessible epoxide at up to 65% yield. Superior homogeneous catalysts exist however.
Kitagawa and co-workers have reported the synthesis of a catalytic MOF having the formula [Cd(4-BTAPA)2(NO3)2]. The MOF is three-dimensional, consisting of an identical catenated pair of networks, yet still featuring pores of molecular dimensions. The nodes consist of single cadmium ions, octahedrally ligated by pyridyl nitrogens. From a catalysis standpoint, however, the most interesting feature of this material is the presence of guest-accessible amide functionalities. The amides are capable of base-catalyzing the Knoevenagel condensation of benzaldehyde with malononitrile. Reactions with larger nitriles, however, are only marginally accelerated, implying that catalysis takes place chiefly within the material's channels rather than on its exterior. A noteworthy finding is the lack of catalysis by the free strut in homogeneous solution, evidently due to intermolecular H-bonding between bptda molecules. Thus, the MOF architecture elicits catalytic activity not otherwise encountered.
In an interesting alternative approach, Férey and coworkers were able to modify the interior of MIL-101 via Cr(III) coordination of one of the two available nitrogen atoms of each of several ethylenediamine molecules. The free non-coordinated ends of the ethylenediamines were then used as Brønsted basic catalysts, again for Knoevenagel condensation of benzaldehyde with nitriles.
A third approach has been described by Kim Kimoon and coworkers. Using a pyridine-functionalized derivative of tartaric acid and a Zn(II) source they were able to synthesize a 2D MOF termed POST-1. POST-1 possesses 1D channels whose cross sections are defined by six trinuclear zinc clusters and six struts. While three of the six pyridines are coordinated by zinc ions, the remaining three are protonated and directed toward the channel interior. When neutralized, the noncoordinated pyridyl groups are found to catalyze transesterification reactions, presumably by facilitating deprotonation of the reactant alcohol. The absence of significant catalysis when large alcohols are employed strongly suggests that the catalysis occurs within the channels of the MOF.
Achiral catalysis
Metals as catalytic sites
The metals in the MOF structure often act as Lewis acids. The metals in MOFs often coordinate to labile solvent molecules or counter ions which can be removed after activation of the framework. The Lewis acidic nature of such unsaturated metal centers can activate the coordinated organic substrates for subsequent organic transformations. The use of unsaturated metal centers was demonstrated in the cyanosilylation of aldehydes and imines by Fujita and coworkers in 2004. They reported MOF of composition {[Cd(4,4-bpy)2(H2O)2] • (NO3)2 • 4H2O} which was obtained by treating linear bridging ligand 4,4-bipyridine (bpy) with . The Cd(II) centers in this MOF possess a distorted octahedral geometry having four pyridines in the equatorial positions, and two water molecules in the axial positions to form a two-dimensional infinite network. On activation, two water molecules were removed leaving the metal centers unsaturated and Lewis acidic. The Lewis acidic character of metal center was tested on cyanosilylation reactions of imine where the imine gets attached to the Lewis-acidic metal centre resulting in higher electrophilicity of imines. For the cyanosilylation of imines, most of the reactions were complete within 1 h affording aminonitriles in quantitative yield. Kaskel and coworkers carried out similar cyanosilylation reactions with coordinatively unsaturated metals in three-dimensional (3D) MOFs as heterogeneous catalysts. The 3D framework [Cu3(btc)2(H2O)3] (btc: benzene-1,3,5-tricarboxylate) (HKUST-1) used in this study was first reported by Williams et al. The open framework of [Cu3(btc)2(H2O)3] is built from dimeric cupric tetracarboxylate units (paddle-wheels) with aqua molecules coordinating to the axial positions and btc bridging ligands. The resulting framework after removal of two water molecules from axial positions possesses porous channel. This activated MOF catalyzes the trimethylcyanosilylation of benzaldehydes with a very low conversion (<5% in 24 h) at 293 K. As the reaction temperature was raised to 313 K, a good conversion of 57% with a selectivity of 89% was obtained after 72 h. In comparison, less than 10% conversion was observed for the background reaction (without MOF) under the same conditions. But this strategy suffers from some problems like 1) the decomposition of the framework with increase of the reaction temperature due to the reduction of Cu(II) to Cu(I) by aldehydes; 2) strong solvent inhibition effect; electron donating solvents such as THF competed with aldehydes for coordination to the Cu(II) sites, and no cyanosilylation product was observed in these solvents; 3) the framework instability in some organic solvents. Several other groups have also reported the use of metal centres in MOFs as catalysts. Again, electron-deficient nature of some metals and metal clusters makes the resulting MOFs efficient oxidation catalysts. Mori and coworkers reported MOFs with Cu2 paddle wheel units as heterogeneous catalysts for the oxidation of alcohols. The catalytic activity of the resulting MOF was examined by carrying out alcohol oxidation with H2O2 as the oxidant. It also catalyzed the oxidation of primary alcohol, secondary alcohol and benzyl alcohols with high selectivity. Hill et al. have demonstrated the sulfoxidation of thioethers using a MOF based on vanadium-oxo cluster V6O13 building units.
Functional linkers as catalytic sites
Functional linkers can also be utilized as catalytic sites. A 3D MOF {[Cd(4-BTAPA)2(NO3)2] • 6H2O • 2DMF} (4-BTAPA = 1,3,5-benzene tricarboxylic acid tris[N-(4-pyridyl)amide], DMF = N,N-dimethylformamide), constructed from tridentate amide linkers and a cadmium salt, catalyzes the Knoevenagel condensation reaction. The pyridine groups on the ligand 4-BTAPA act as ligands binding to the octahedral cadmium centers, while the amide groups can provide the functionality for interaction with the incoming substrates. Specifically, the −NH moiety of the amide group can act as an electron acceptor whereas the C=O group can act as an electron donor to activate organic substrates for subsequent reactions. Férey et al. reported a robust and highly porous MOF [Cr3(μ3-O)F(H2O)2(BDC)3] (BDC: benzene-1,4-dicarboxylate) in which, instead of directly using the unsaturated Cr(III) centers as catalytic sites, the authors grafted ethylenediamine (ED) onto the Cr(III) sites. The uncoordinated ends of ED can act as base catalytic sites. The ED-grafted MOF was investigated for Knoevenagel condensation reactions. A significant increase in conversion was observed for the ED-grafted MOF compared to the untreated framework (98% vs. 36%). Another example of linker modification to generate catalytic sites is the iodo-functionalization of the well-known Al-based MOFs MIL-53 and DUT-5 and the Zr-based MOFs UiO-66 and UiO-67 for the catalytic oxidation of diols.
Entrapment of catalytically active noble metal nanoparticles
The entrapment of catalytically active noble metals can be accomplished by grafting functional groups onto the unsaturated metal sites of MOFs. Ethylenediamine (ED) has been shown to graft onto Cr metal sites and can be further modified to encapsulate noble metals such as Pd. The entrapped Pd has catalytic activity similar to that of Pd/C in the Heck reaction. Ruthenium nanoparticles entrapped in the MOF-5 framework show catalytic activity in a number of reactions. This Ru-encapsulated MOF catalyzes the oxidation of benzyl alcohol to benzaldehyde, although degradation of the MOF occurs. The same catalyst was used in the hydrogenation of benzene to cyclohexane. In another example, Pd nanoparticles embedded within a defective HKUST-1 framework enable the generation of tunable Lewis basic sites. This multifunctional Pd/MOF composite is therefore able to perform stepwise benzyl alcohol oxidation and Knoevenagel condensation.
Reaction hosts with size selectivity
MOFs might prove useful for both photochemical and polymerization reactions due to the tunability of the size and shape of their pores. A 3D MOF {[Co(bpdc)3(bpy)] • 4DMF • H2O} (bpdc: biphenyldicarboxylate, bpy: 4,4′-bipyridine) was synthesized by Li and coworkers. Using this MOF, the photochemistry of o-methyl dibenzyl ketone (o-MeDBK) was extensively studied. The molecule was found to undergo a variety of photochemical reactions, including the production of cyclopentanol. MOFs have also been used to study polymerization in the confined space of MOF channels. Polymerization reactions in confined space may have different properties than polymerization in open space. Styrene, divinylbenzene, substituted acetylenes, methyl methacrylate, and vinyl acetate have all been studied by Kitagawa and coworkers as possible activated monomers for radical polymerization. By varying the linker size, the MOF channel cross-section could be tuned on the order of roughly 25 to 100 Å2. The channels were shown to stabilize propagating radicals and suppress termination reactions when used as radical polymerization sites.
Asymmetric catalysis
Several strategies exist for constructing homochiral MOFs. Crystallization of homochiral MOFs via self-resolution from achiral linker ligands is one way to accomplish this goal. However, the resulting bulk samples contain both enantiomorphs and are racemic. Aoyama and coworkers successfully obtained homochiral MOFs in the bulk from achiral ligands by carefully controlling nucleation in the crystal growth process. Zheng and coworkers reported the synthesis of homochiral MOFs from achiral ligands by chemically manipulating the statistical fluctuation of the formation of enantiomeric pairs of crystals. Growing MOF crystals under chiral influences is another approach to obtaining homochiral MOFs from achiral linker ligands. Rosseinsky and coworkers introduced a chiral coligand to direct the formation of homochiral MOFs by controlling the handedness of the helices during crystal growth. Morris and coworkers utilized an ionic liquid with chiral cations as the reaction medium for synthesizing MOFs and obtained homochiral MOFs. The most straightforward and rational strategy for synthesizing homochiral MOFs, however, is to use readily available chiral linker ligands for their construction.
Homochiral MOFs with interesting functionalities and reagent-accessible channels
Homochiral MOFs have been made by Lin and coworkers using 2,2′-bis(diphenylphosphino)-1,1′-binaphthyl (BINAP) and 1,1′-bi-2-naphthol (BINOL) as chiral ligands. These ligands can coordinate with catalytically active metal sites to enhance the enantioselectivity. A variety of linking groups, such as pyridine, phosphonic acid, and carboxylic acid, can be selectively introduced at the 3,3′, 4,4′, and 6,6′ positions of the 1,1′-binaphthyl moiety. Moreover, by changing the length of the linker ligands, the porosity and framework structure of the MOF can be selectively tuned.
Postmodification of homochiral MOFs
Lin and coworkers have shown that postmodification of MOFs can produce enantioselective homochiral MOFs for use as catalysts. The resulting 3D homochiral MOF {[Cd3(L)3Cl6] • 4DMF • 6MeOH • 3H2O} (L = (R)-6,6′-dichloro-2,2′-dihydroxyl-1,1′-binaphthyl-bipyridine) synthesized by Lin was shown to have a catalytic efficiency for the diethylzinc addition reaction similar to that of the homogeneous analogue when it was pretreated with Ti(OiPr)4 to generate the grafted Ti-BINOLate species. The catalytic activity of MOFs can vary depending on the framework structure. Lin and others found that MOFs synthesized from the same materials could have drastically different catalytic activities depending on the framework structure present.
Homochiral MOFs with precatalysts as building blocks
Another approach to constructing catalytically active homochiral MOFs is to incorporate chiral metal complexes, which are either active catalysts or precatalysts, directly into the framework structures. For example, Hupp and coworkers combined a chiral ligand and bpdc (bpdc: biphenyldicarboxylate) with a metal salt and obtained twofold interpenetrating 3D networks. The orientation of the chiral ligand in the frameworks makes all Mn(III) sites accessible through the channels. The resulting open frameworks showed catalytic activity toward asymmetric olefin epoxidation reactions. No significant decrease of catalyst activity was observed during the reaction, and the catalyst could be recycled and reused several times. Lin and coworkers have reported zirconium phosphonate-derived Ru-BINAP systems. Zirconium phosphonate-based chiral porous hybrid materials containing the Ru(BINAP)(diamine)Cl2 precatalysts showed excellent enantioselectivity (up to 99.2% ee) in the asymmetric hydrogenation of aromatic ketones.
Biomimetic design and photocatalysis
Some MOF materials may resemble enzymes when they combine isolated polynuclear sites, dynamic host–guest responses, and a hydrophobic cavity environment, which are characteristics of enzymes. Well-known examples of cooperative catalysis involving two metal ions in biological systems include the diiron sites in methane monooxygenase, dicopper in cytochrome c oxidase, and tricopper oxidases, which have analogues among the polynuclear clusters found in 0D coordination polymers, such as the binuclear Cu2 paddlewheel units found in MOP-1 and [Cu3(btc)2] (btc = benzene-1,3,5-tricarboxylate) in HKUST-1, or trinuclear units such as those in MIL-88 and IRMOP-51. Thus, 0D MOFs have accessible biomimetic catalytic centers. In enzymatic systems, protein units show "molecular recognition", a high affinity for specific substrates. Molecular recognition effects appear limited in zeolites by the rigid zeolite structure. In contrast, dynamic features and guest-shape response make MOFs more similar to enzymes. Indeed, many hybrid frameworks contain organic parts that can rotate in response to stimuli such as light and heat. The porous channels in MOF structures can be used as photocatalysis sites. In photocatalysis, the use of mononuclear complexes is usually limited either because they undergo only single-electron processes or because they require high-energy irradiation. In this case, binuclear systems have a number of attractive features for the development of photocatalysts. For 0D MOF structures, polycationic nodes can act as semiconductor quantum dots which can be activated upon photostimuli, with the linkers serving as photon antennae. Theoretical calculations show that MOFs are semiconductors or insulators with band gaps between 1.0 and 5.5 eV which can be altered by changing the degree of conjugation in the ligands. Experimental results show that the band gap of IRMOF-type samples can be tuned by varying the functionality of the linker. An integrated MOF nanozyme was developed for anti-inflammation therapy.
Mechanical properties
Implementing MOFs in industry necessitates a thorough understanding of their mechanical properties, since most processing techniques (e.g. extrusion and pelletization) expose the MOFs to substantial mechanical compressive stresses. The mechanical response of porous structures is of interest because these structures can exhibit unusual responses to high pressures. While zeolites (microporous aluminosilicate minerals) can give some insight into the mechanical response of MOFs, the presence of organic linkers, which zeolites lack, gives rise to novel mechanical responses. MOFs are structurally very diverse, so it is challenging to classify all of their mechanical properties. Additionally, batch-to-batch variability in MOFs and extreme experimental conditions (diamond anvil cells) mean that experimental determination of the mechanical response to loading is limited; however, many computational models have been built to determine structure–property relationships. The main MOF systems that have been explored are zeolitic imidazolate frameworks (ZIFs), carboxylate MOFs, and zirconium-based MOFs, among others. Generally, MOFs undergo three processes under compressive loading (which is relevant in a processing context): amorphization, hyperfilling, and/or pressure-induced phase transitions. During amorphization, linkers buckle and the internal porosity within the MOF collapses. During hyperfilling, a MOF that is hydrostatically compressed in a liquid (typically solvent) will expand rather than contract, owing to filling of the pores with the loading medium. Finally, pressure-induced phase transitions, in which the crystal structure is altered during loading, are possible. The response of the MOF is predominantly dependent on the linker species and the inorganic nodes.
Zeolitic imidazolate frameworks (ZIFs)
Several different mechanical phenomena have been observed in zeolitic imidazolate frameworks (ZIFs), the most widely studied MOFs for mechanical properties owing to their many similarities to zeolites. A general trend for the ZIF family is that the Young's modulus and hardness decrease as the accessible pore volume increases. The bulk moduli of the ZIF-62 series increase with increasing benzimidazolate (bim−) content. ZIF-62 shows a continuous phase transition from an open-pore (op) to a closed-pore (cp) phase when the bim− content exceeds 0.35 per formula unit. The accessible pore size and volume of ZIF-62-bim0.35 can be precisely tuned by applying adequate pressure. Another study has shown that under hydrostatic loading in solvent the ZIF-8 material expands rather than contracting, a result of hyperfilling of the internal pores with solvent. A computational study demonstrated that ZIF-4 and ZIF-8 materials undergo a shear-softening mechanism and amorphize (at ~0.34 GPa) under hydrostatic loading, while still possessing a bulk modulus on the order of 6.5 GPa. Additionally, the ZIF-4 and ZIF-8 MOFs are subject to many pressure-dependent phase transitions.
Carboxylate-based MOFs
Carboxylate MOFs come in many forms and have been widely studied. Herein, HKUST-1, MOF-5, and the MIL series are discussed as representative examples of the carboxylate MOF class.
HKUST-1
HKUST-1 consists of a dimeric Cu paddlewheel and possesses two pore types. Under pelletization, MOFs such as HKUST-1 exhibit pore collapse. Although most carboxylate MOFs have a negative thermal expansion (they densify during heating), the hardness and Young's modulus were found to decrease unexpectedly with increasing temperature owing to disordering of the linkers. It was also found computationally that a more mesoporous structure has a lower bulk modulus. However, an increased bulk modulus was observed in systems with a few large mesopores versus many small mesopores, even though both pore size distributions had the same total pore volume. HKUST-1 shows a "hyperfilling" phenomenon similar to that of the ZIF structures under hydrostatic loading.
MOF-5
MOF-5 has tetranuclear nodes in an octahedral configuration with an overall cubic structure. MOF-5 has a compressibility and Young's modulus (~14.9 GPa) comparable to wood, as confirmed with density functional theory (DFT) and nanoindentation. Although MOF-5 can demonstrate the hyperfilling phenomenon within a loading medium of solvent, these MOFs are very sensitive to pressure and undergo amorphization/pressure-induced pore collapse at a pressure of 3.5 MPa when there is no fluid in the pores.
MIL-53
MIL-53 MOFs possess a "wine rack" structure. These MOFs have been explored for the anisotropy of their Young's modulus arising from this flexibility, and for potential negative linear compressibility, since compression along one direction can cause the wine-rack motif to open along another.
Zirconium-based MOFs
Zirconium-based MOFs such as UiO-66 are a very robust class of MOFs (attributed to the strong hexanuclear Zr6 metallic nodes) with increased resistance to heat, solvents, and other harsh conditions, which makes them of interest in terms of mechanical properties. Shear-modulus determinations and pelletization studies have shown that the UiO-66 MOFs are very mechanically robust and have a high tolerance for pore collapse compared with ZIFs and carboxylate MOFs. Although UiO-66 shows increased stability under pelletization, UiO-66 MOFs amorphize fairly rapidly under ball-milling conditions owing to destruction of the coordination between linkers and inorganic nodes.
Applications
Hydrogen storage
Molecular hydrogen has the highest specific energy of any fuel. However, unless the hydrogen gas is compressed, its volumetric energy density is very low, so the transportation and storage of hydrogen require energy-intensive compression and liquefaction processes. Therefore, the development of new hydrogen storage methods which decrease the pressure required for a practical volumetric energy density is an active area of research. MOFs attract attention as materials for adsorptive hydrogen storage because of their high specific surface areas and surface-to-volume ratios, as well as their chemically tunable structures.
Compared to an empty gas cylinder, a MOF-filled gas cylinder can store more hydrogen at a given pressure because hydrogen molecules adsorb to the surface of MOFs. Furthermore, MOFs are free of dead-volume, so there is almost no loss of storage capacity as a result of space-blocking by non-accessible volume. Also, because the hydrogen uptake is based primarily on physisorption, many MOFs have a fully reversible uptake-and-release behavior. No large activation barriers are required when liberating the adsorbed hydrogen. The storage capacity of a MOF is limited by the liquid-phase density of hydrogen because the benefits provided by MOFs can be realized only if the hydrogen is in its gaseous state.
The extent to which a gas can adsorb to a MOF's surface depends on the temperature and pressure of the gas. In general, adsorption increases with decreasing temperature and increasing pressure (until a maximum is reached, typically 20–30 bar, after which the adsorption capacity decreases). However, MOFs to be used for hydrogen storage in automotive fuel cells need to operate efficiently at ambient temperature and pressures between 1 and 100 bar, as these are the values that are deemed safe for automotive applications.
The U.S. Department of Energy (DOE) has published a list of yearly technical system targets for on-board hydrogen storage for light-duty fuel cell vehicles which guide researchers in the field (5.5 wt %/40 g L−1 by 2017; 7.5 wt %/70 g L−1 ultimate). Materials with high porosity and high surface area such as MOFs have been designed and synthesized in an effort to meet these targets. These adsorptive materials generally work via physical adsorption rather than chemisorption, due to the large HOMO-LUMO gap and low HOMO energy level of molecular hydrogen. A benchmark material to this end is MOF-177, which was found to store hydrogen at 7.5 wt % with a volumetric capacity of 32 g L−1 at 77 K and 70 bar. MOF-177 consists of [Zn4O]6+ clusters interconnected by 1,3,5-benzenetribenzoate organic linkers and has a measured BET surface area of 4630 m2 g−1. Another exemplary material is PCN-61, which exhibits a hydrogen uptake of 6.24 wt % and 42.5 g L−1 at 35 bar and 77 K and 2.25 wt % at atmospheric pressure. PCN-61 consists of [Cu2]4+ paddle-wheel units connected through 5,5′,5″-benzene-1,3,5-triyltris(1-ethynyl-2-isophthalate) organic linkers and has a measured BET surface area of 3000 m2 g−1. Despite these promising MOF examples, the classes of synthetic porous materials with the highest performance for practical hydrogen storage are activated carbon and covalent organic frameworks (COFs).
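As an illustrative cross-check of the figures quoted above, the short Python sketch below compares the cited MOF-177 and PCN-61 uptakes against the DOE targets. The numbers are copied from this section; note that material-level uptakes measured at 77 K are not directly equivalent to system-level targets, so this is only a rough comparison.

```python
# Illustrative comparison of the uptake figures quoted in this section against
# the DOE system targets cited above. Values are taken from the text; material
# uptakes at 77 K are not directly comparable to system-level targets.

DOE_TARGETS = {
    "2017 target": {"wt_percent": 5.5, "g_per_L": 40.0},
    "ultimate target": {"wt_percent": 7.5, "g_per_L": 70.0},
}

MOF_UPTAKES = {
    "MOF-177 (77 K, 70 bar)": {"wt_percent": 7.5, "g_per_L": 32.0},
    "PCN-61 (77 K, 35 bar)": {"wt_percent": 6.24, "g_per_L": 42.5},
}

def reaches(mof: dict, target: dict) -> bool:
    """True only if both the gravimetric and the volumetric figure reach the target."""
    return mof["wt_percent"] >= target["wt_percent"] and mof["g_per_L"] >= target["g_per_L"]

for mof_name, uptake in MOF_UPTAKES.items():
    for target_name, target in DOE_TARGETS.items():
        verdict = "reaches" if reaches(uptake, target) else "falls short of"
        print(f"{mof_name} {verdict} the {target_name}")
```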
Design principles
Practical applications of MOFs for hydrogen storage are met with several challenges. For hydrogen adsorption near room temperature, the hydrogen binding energy would need to be increased considerably. Several classes of MOFs have been explored, including carboxylate-based MOFs, heterocyclic azolate-based MOFs, metal-cyanide MOFs, and covalent organic frameworks. Carboxylate-based MOFs have by far received the most attention because
they are either commercially available or easily synthesized,
they have high acidity (pKa ~ 4) allowing for facile in situ deprotonation,
the metal-carboxylate bond formation is reversible, facilitating the formation of well-ordered crystalline MOFs, and
the bridging bidentate coordination ability of carboxylate groups favors the high degree of framework connectivity and strong metal-ligand bonds necessary to maintain MOF architecture under the conditions required to evacuate the solvent from the pores.
The most common transition metals employed in carboxylate-based frameworks are Cu2+ and Zn2+. Lighter main-group metal ions have also been explored. Be12(OH)12(BTB)4, the first successfully synthesized and structurally characterized MOF consisting of a light main group metal ion, shows high hydrogen storage capacity, but it is too toxic to be employed practically. There is considerable effort being put forth in developing MOFs composed of other light main group metal ions, such as magnesium in Mg4(BDC)3.
The following is a list of several MOFs that are considered to have the best properties for hydrogen storage as of May 2012 (in order of decreasing hydrogen storage capacity). While each MOF described has its advantages, none of these MOFs reach all of the standards set by the U.S. DOE. Therefore, it is not yet known whether materials with high surface areas, small pores, or di- or trivalent metal clusters produce the most favorable MOFs for hydrogen storage.
Structural impacts on hydrogen storage capacity
To date, hydrogen storage in MOFs at room temperature is a battle between maximizing storage capacity and maintaining reasonable desorption rates, while conserving the integrity of the adsorbent framework (e.g. completely evacuating pores, preserving the MOF structure, etc.) over many cycles. There are two major strategies governing the design of MOFs for hydrogen storage:
1) to increase the theoretical storage capacity of the material, and
2) to bring the operating conditions closer to ambient temperature and pressure. Rowsell and Yaghi have identified several directions to these ends in some of the early papers.
Surface area
The general trend in MOFs used for hydrogen storage is that the greater the surface area, the more hydrogen the MOF can store. High surface area materials tend to exhibit increased micropore volume and inherently low bulk density, allowing for more hydrogen adsorption to occur.
Hydrogen adsorption enthalpy
A high hydrogen adsorption enthalpy is also important. Theoretical studies have shown that interactions of 22–25 kJ/mol are ideal for hydrogen storage at room temperature, as they are strong enough to adsorb H2 but weak enough to allow for quick desorption. The interaction between hydrogen and uncharged organic linkers is not this strong, and so considerable work has gone into the synthesis of MOFs with exposed metal sites, to which hydrogen adsorbs with an enthalpy of 5–10 kJ/mol. Synthetically, this may be achieved by using ligands whose geometries prevent the metal from being fully coordinated, by removing volatile metal-bound solvent molecules over the course of synthesis, and by post-synthetic impregnation with additional metal cations. Certain frameworks provide striking examples of increased binding energy due to open metal coordination sites; however, their high metal–hydrogen bond dissociation energies result in a tremendous release of heat upon loading with hydrogen, which is not favorable for fuel cells. MOFs, therefore, should avoid orbital interactions that lead to such strong metal–hydrogen bonds and employ simple charge-induced dipole interactions, as demonstrated in Mn3[(Mn4Cl)3(BTT)8]2.
An association energy of 22–25 kJ/mol is typical of charge-induced dipole interactions, and so there is interest in the use of charged linkers and metals. The metal–hydrogen bond strength is diminished in MOFs, probably due to charge diffusion, so 2+ and 3+ metal ions are being studied to strengthen this interaction even further. A problem with this approach is that MOFs with exposed metal surfaces have lower concentrations of linkers; this makes them difficult to synthesize, as they are prone to framework collapse. This may diminish their useful lifetimes as well.
Sensitivity to airborne moisture
MOFs are frequently sensitive to moisture in the air. In particular, IRMOF-1 degrades in the presence of small amounts of water at room temperature. Studies on metal analogues have revealed the ability of metals other than Zn to withstand higher water concentrations at high temperatures.
To compensate for this, specially constructed storage containers are required, which can be costly. Strong metal-ligand bonds, such as in metal-imidazolate, -triazolate, and -pyrazolate frameworks, are known to decrease a MOF's sensitivity to air, reducing the expense of storage.
Pore size
In a microporous material where physisorption and weak van der Waals forces dominate adsorption, the storage density is greatly dependent on the size of the pores. Calculations of idealized homogeneous materials, such as graphitic carbons and carbon nanotubes, predict that a microporous material with 7 Å-wide pores will exhibit maximum hydrogen uptake at room temperature. At this width, exactly two layers of hydrogen molecules adsorb on opposing surfaces with no space left in between. 10 Å-wide pores are also of ideal size because at this width, exactly three layers of hydrogen can exist with no space in between. (A hydrogen molecule has a bond length of 0.74 Å with a van der Waals radius of 1.17 Å for each atom; therefore, its effective van der Waals length is 3.08 Å.)
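The geometric argument above can be checked with a few lines of arithmetic. The sketch below uses the bond length and van der Waals radius quoted in this paragraph, reproduces the 3.08 Å effective length, and shows that two and three stacked layers correspond roughly to the 7 Å and 10 Å pore widths.

```python
# Arithmetic check of the pore-size argument above, using the H2 bond length
# and per-atom van der Waals radius quoted in the text (values in Angstrom).

BOND_LENGTH = 0.74   # H-H bond length
VDW_RADIUS = 1.17    # van der Waals radius of each H atom

# One H2 molecule spans the bond length plus one van der Waals radius on each end.
effective_length = BOND_LENGTH + 2 * VDW_RADIUS
print(f"Effective van der Waals length of H2: {effective_length:.2f} A")   # 3.08 A

# Stacking two or three such layers with no space in between gives pore widths
# roughly consistent with the 7 A and 10 A figures quoted above.
for layers in (2, 3):
    print(f"{layers} layers -> pore width ~ {layers * effective_length:.1f} A")
```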
Structural defects
Structural defects also play an important role in the performance of MOFs. Room-temperature hydrogen uptake via bridged spillover is mainly governed by structural defects, which can have two effects:
1) a partially collapsed framework can block access to pores; thereby reducing hydrogen uptake, and
2) lattice defects can create an intricate array of new pores and channels causing increased hydrogen uptake.
Structural defects can also leave metal-containing nodes incompletely coordinated. This enhances the performance of MOFs used for hydrogen storage by increasing the number of accessible metal centers. Finally, structural defects can affect the transport of phonons, which affects the thermal conductivity of the MOF.
Hydrogen adsorption
Adsorption is the process of trapping atoms or molecules that are incident on a surface; therefore the adsorption capacity of a material increases with its surface area. In three dimensions, the maximum surface area will be obtained by a structure which is highly porous, such that atoms and molecules can access internal surfaces. This simple qualitative argument suggests that the highly porous metal-organic frameworks (MOFs) should be excellent candidates for hydrogen storage devices.
Adsorption can be broadly classified as being one of two types: physisorption or chemisorption. Physisorption is characterized by weak van der Waals interactions, and bond enthalpies typically less than 20 kJ/mol. Chemisorption, alternatively, is defined by stronger covalent and ionic bonds, with bond enthalpies between 250 and 500 kJ/mol. In both cases, the adsorbate atoms or molecules (i.e. the particles which adhere to the surface) are attracted to the adsorbent (solid) surface because of the surface energy that results from unoccupied bonding locations at the surface. The degree of orbital overlap then determines if the interactions will be physisorptive or chemisorptive.
Adsorption of molecular hydrogen in MOFs is physisorptive. Since molecular hydrogen only has two electrons, dispersion forces are weak, typically 4–7 kJ/mol, and are only sufficient for adsorption at temperatures below 298 K.
A complete explanation of the H2 sorption mechanism in MOFs was achieved by statistical averaging in the grand canonical ensemble, exploring a wide range of pressures and temperatures.
Determining hydrogen storage capacity
Two hydrogen-uptake measurement methods are used for the characterization of MOFs as hydrogen storage materials: gravimetric and volumetric. To obtain the total amount of hydrogen in the MOF, both the amount of hydrogen adsorbed on its surface and the amount of hydrogen residing in its pores should be considered. To calculate the absolute absorbed amount (Nabs), the surface excess amount (Nex) is added to the product of the bulk density of hydrogen (ρbulk) and the pore volume of the MOF (Vpore), as shown in the following equation: Nabs = Nex + ρbulk × Vpore
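A minimal helper expressing this relation is sketched below; the function and variable names are illustrative, and the example values are placeholders rather than measured data. Any consistent set of units may be used.

```python
# Minimal helper for the relation stated above:
# absolute adsorbed amount = surface excess + bulk hydrogen density * pore volume.
# Names and example numbers are placeholders; any consistent units may be used.

def absolute_adsorbed_amount(n_excess: float, rho_bulk: float, v_pore: float) -> float:
    """
    n_excess : surface excess amount of hydrogen (e.g. g H2 per g MOF)
    rho_bulk : bulk density of gaseous hydrogen at the measurement p, T (e.g. g/cm^3)
    v_pore   : pore volume of the MOF (e.g. cm^3 per g MOF)
    """
    return n_excess + rho_bulk * v_pore

# Purely illustrative example values:
print(absolute_adsorbed_amount(n_excess=0.05, rho_bulk=0.02, v_pore=1.5))
```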
Gravimetric method
The increased mass of the MOF due to the stored hydrogen is directly calculated by a highly sensitive microbalance. Due to buoyancy, the detected mass of adsorbed hydrogen decreases again when a sufficiently high pressure is applied to the system because the density of the surrounding gaseous hydrogen becomes more and more important at higher pressures. Thus, this "weight loss" has to be corrected using the volume of the MOF's frame and the density of hydrogen.
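The buoyancy correction described above can be sketched as follows; the variable names and example numbers are illustrative only, and a real instrument would use a measured skeletal volume and a real-gas hydrogen density.

```python
# Sketch of the buoyancy correction described above: the balance reading is
# corrected using the skeletal (frame) volume of the MOF and the density of the
# surrounding hydrogen at the applied pressure and temperature.
# Variable names and numbers are illustrative, not instrument output.

def buoyancy_corrected_uptake(measured_mass_gain: float,
                              frame_volume: float,
                              gas_density: float) -> float:
    """
    measured_mass_gain : apparent mass increase read from the microbalance (g)
    frame_volume       : skeletal volume of the MOF sample (cm^3)
    gas_density        : density of the surrounding hydrogen gas (g/cm^3)
    """
    return measured_mass_gain + frame_volume * gas_density

print(buoyancy_corrected_uptake(measured_mass_gain=0.010,
                                frame_volume=0.50,
                                gas_density=0.005))
```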
Volumetric method
The change in the amount of hydrogen stored in the MOF is measured by detecting the change in hydrogen pressure at constant volume. The volume of adsorbed hydrogen in the MOF is then calculated by subtracting the volume of hydrogen in the free space from the total volume of dosed hydrogen.
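A rough sketch of this bookkeeping is shown below. For simplicity it estimates the gas remaining in the free volume with the ideal gas law; actual volumetric (Sieverts-type) measurements use a real-gas equation of state for hydrogen, and the example numbers are placeholders.

```python
# Sketch of the volumetric bookkeeping described above: adsorbed hydrogen is
# the dosed hydrogen minus what remains in the free (non-sample) volume.
# The ideal gas law is used here only for simplicity.

R = 8.314  # J/(mol K)

def moles_in_free_space(pressure_pa: float, free_volume_m3: float, temperature_k: float) -> float:
    """Ideal-gas estimate of the hydrogen left in the free volume of the cell."""
    return pressure_pa * free_volume_m3 / (R * temperature_k)

def adsorbed_moles(dosed_moles: float, pressure_pa: float,
                   free_volume_m3: float, temperature_k: float) -> float:
    return dosed_moles - moles_in_free_space(pressure_pa, free_volume_m3, temperature_k)

# Illustrative numbers only:
print(adsorbed_moles(dosed_moles=0.010, pressure_pa=5e5,
                     free_volume_m3=2e-5, temperature_k=77.0))
```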
Other methods of hydrogen storage
There are six possible methods that can be used for the reversible storage of hydrogen with a high volumetric and gravimetric density, which are summarized in the following table (where ρm is the gravimetric density, ρv is the volumetric density, T is the working temperature, and P is the working pressure):
Of these, high-pressure gas cylinders and liquid hydrogen in cryogenic tanks are the least practical ways to store hydrogen for the purpose of fuel due to the extremely high pressure required for storing hydrogen gas or the extremely low temperature required for storing hydrogen liquid. The other methods are all being studied and developed extensively.
Electrocatalysis
The high surface area and atomically dispersed metal sites of MOFs make them suitable candidates for electrocatalysts, especially energy-related ones.
To date, MOFs have been used extensively as electrocatalysts for water splitting (the hydrogen evolution reaction and oxygen evolution reaction), carbon dioxide reduction, and the oxygen reduction reaction. Currently there are two routes: 1. using MOFs as precursors to prepare electrocatalysts with carbon support, and 2. using MOFs directly as electrocatalysts. However, some results have shown that some MOFs are not stable in electrochemical environments. The electrochemical conversion of MOFs during electrocatalysis may produce the real catalyst materials, with the MOFs serving as precatalysts under such conditions. Therefore, claiming MOFs as the true electrocatalysts requires in situ techniques coupled with electrocatalysis.
Biological imaging and sensing
A potential application for MOFs is biological imaging and sensing via photoluminescence. A large subset of luminescent MOFs use lanthanides in the metal clusters. Lanthanide photoluminescence has many unique properties that make it ideal for imaging applications, such as characteristically sharp and generally non-overlapping emission bands in the visible and near-infrared (NIR) regions of the spectrum, resistance to photobleaching or "blinking", and long luminescence lifetimes. However, lanthanide emission is difficult to sensitize directly because it relies on Laporte-forbidden f–f transitions. Indirect sensitization of lanthanide emission can be accomplished by employing the "antenna effect", in which the organic linkers act as antennae, absorb the excitation energy, transfer the energy to the excited state of the lanthanide, and yield lanthanide luminescence upon relaxation. A prime example of the antenna effect is demonstrated by MOF-76, which combines trivalent lanthanide ions and 1,3,5-benzenetricarboxylate (BTC) linkers to form infinite-rod SBUs coordinated into a three-dimensional lattice. As demonstrated by multiple research groups, the BTC linker can effectively sensitize the lanthanide emission, resulting in a MOF with variable emission wavelengths depending on the lanthanide identity. Additionally, the Yan group has shown that Eu3+- and Tb3+-MOF-76 can be used for the selective detection of acetophenone over other volatile monoaromatic hydrocarbons. Upon acetophenone uptake, the MOF shows a very sharp decrease, or quenching, of the luminescence intensity.
For use in biological imaging, however, two main obstacles must be overcome:
MOFs must be synthesized on the nanoscale so as not to affect the target's normal interactions or behavior
The absorption and emission wavelengths must occur in regions with minimal overlap with sample autofluorescence and other absorbing species, and with maximum tissue penetration.
Regarding the first point, nanoscale MOF (NMOF) synthesis has been mentioned in an earlier section. The latter obstacle addresses the limitation of the antenna effect. Smaller linkers tend to improve MOF stability, but have higher-energy absorptions, predominantly in the ultraviolet (UV) and high-energy visible regions. A design strategy for MOFs with redshifted absorption properties has been accomplished by using large, chromophoric linkers. These linkers are often composed of polyaromatic species, leading to large pore sizes and thus decreased stability. To circumvent the use of large linkers, other methods are required to redshift the absorbance of the MOF so that lower-energy excitation sources can be used. Post-synthetic modification (PSM) is one promising strategy. Luo et al. introduced a new family of lanthanide MOFs with functionalized organic linkers. The MOFs, named MOF-1114, MOF-1115, MOF-1130, and MOF-1131, are composed of octahedral SBUs bridged by amino-functionalized dicarboxylate linkers. The amino groups on the linkers served as sites for covalent PSM reactions with either salicylaldehyde or 3-hydroxynaphthalene-2-carboxaldehyde. Both of these reactions extend the π-conjugation of the linker, causing a redshift in the absorbance wavelength from 450 nm to 650 nm. The authors also propose that this technique could be adapted to similar MOF systems and that, by increasing pore volumes with increasing linker lengths, larger π-conjugated reactants could be used to further redshift the absorption wavelengths. Biological imaging using MOFs has been realized by several groups, namely Foucault-Collet and co-workers. In 2013, they synthesized a NIR-emitting Yb3+-NMOF using phenylenevinylene dicarboxylate (PVDC) linkers. They observed cellular uptake in both HeLa cells and NIH-3T3 cells using confocal, visible, and NIR spectroscopy. Although low quantum yields persist in water and HEPES buffer solution, the luminescence intensity is still strong enough to image cellular uptake in both the visible and NIR regimes.
Nuclear wasteform materials
The development of new pathways for efficient nuclear waste administration is essential in the wake of increased public concern about radioactive contamination due to nuclear plant operation and nuclear weapons decommissioning. The synthesis of novel materials capable of selective actinide sequestration and separation is one of the current challenges acknowledged in the nuclear waste sector. Metal–organic frameworks (MOFs) are a promising class of materials to address this challenge due to their porosity, modularity, crystallinity, and tunability. Every building block of a MOF structure can incorporate actinides. First, a MOF can be synthesized starting from actinide salts; in this case the metal nodes are actinides. In addition, the metal nodes can be extended, or cation exchange can replace the metals with actinides. Organic linkers can be functionalized with groups capable of actinide uptake. Lastly, the porosity of MOFs can be used to incorporate guest molecules and trap them in the structure by installation of additional or capping linkers.
Drug delivery systems
The synthesis, characterization, and drug-related studies of low-toxicity, biocompatible MOFs have shown that they have potential for medical applications. Many groups have synthesized various low-toxicity MOFs and have studied their use in loading and releasing various therapeutic drugs for potential medical applications. A variety of methods exist for inducing drug release, such as pH response, magnetic response, ion response, temperature response, and pressure response.
In 2010 Smaldone et al., an international research group, synthesized a biocompatible MOF termed CD-MOF-1 from cheap edible natural products. CD-MOF-1 consists of repeating base units of 6 γ-cyclodextrin rings bound together by potassium ions. γ-cyclodextrin (γ-CD) is a symmetrical cyclic oligosaccharide that is mass-produced enzymatically from starch and consists of eight asymmetric α-1,4-linked D-glucopyranosyl residues. The molecular structure of these glucose derivatives, which approximates a truncated cone, bucket, or torus, generates a hydrophilic exterior surface and a nonpolar interior cavity. Cyclodextrins can interact with appropriately sized drug molecules to yield an inclusion complex. Smaldone's group proposed a cheap and simple synthesis of the CD-MOF-1 from natural products. They dissolved sugar (γ-cyclodextrin) and an alkali salt (KOH, KCl, potassium benzoate) in distilled bottled water and allowed 190 proof grain alcohol (Everclear) to vapor diffuse into the solution for a week. The synthesis resulted in a cubic (γ-CD)6 repeating motif with a pore size of approximately 1 nm. Subsequently, in 2017 Hartlieb et al. at Northwestern did further research with CD-MOF-1 involving the encapsulation of ibuprofen. The group studied different methods of loading the MOF with ibuprofen as well as performing related bioavailability studies on the ibuprofen-loaded MOF. They investigated two different methods of loading CD-MOF-1 with ibuprofen; crystallization using the potassium salt of ibuprofen as the alkali cation source for production of the MOF, and absorption and deprotonation of the free-acid of ibuprofen into the MOF. From there the group performed in vitro and in vivo studies to determine the applicability of CD-MOF-1 as a viable delivery method for ibuprofen and other NSAIDs. In vitro studies showed no toxicity or effect on cell viability up to 100 μM. In vivo studies in mice showed the same rapid uptake of ibuprofen as the ibuprofen potassium salt control sample with a peak plasma concentration observed within 20 minutes, and the cocrystal has the added benefit of double the half-life in blood plasma samples. The increase in half-life is due to CD-MOF-1 increasing the solubility of ibuprofen compared to the pure salt form.
Since these developments many groups have done further research into drug delivery with water-soluble, biocompatible MOFs involving common over-the-counter drugs. In March 2018 Sara Rojas and her team published their research on drug incorporation and delivery with various biocompatible MOFs other than CD-MOF-1 through simulated cutaneous administration. The group studied the loading and release of ibuprofen (hydrophobic) and aspirin (hydrophilic) in three biocompatible MOFs (MIL-100(Fe), UiO-66(Zr), and MIL-127(Fe)). Under simulated cutaneous conditions (aqueous media at 37 °C) the six different combinations of drug-loaded MOFs fulfilled "the requirements to be used as topical drug delivery systems, such as released payload between 1 and 7 days" and delivering a therapeutic concentration of the drug of choice without causing unwanted side effects. The group discovered that the drug uptake is "governed by the hydrophilic/hydrophobic balance between cargo and matrix" and "the accessibility of the drug through the framework". The "controlled release under cutaneous conditions follows different kinetics profiles depending on: (i) the structure of the framework, with either a fast delivery from the very open structure MIL-100 or a slower drug release from the narrow 1D pore system of MIL-127 or (ii) the hydrophobic/hydrophilic nature of the cargo, with a fast (Aspirin) and slow (Ibuprofen) release from the UiO-66 matrix." Moreover, a simple ball milling technique is used to efficiently encapsulate the model drugs 5-fluorouracil, caffeine, para-aminobenzoic acid, and benzocaine. Both computational and experimental studies confirm the suitability of [Zn4O(dmcapz)3] to incorporate high loadings of the studied bioactive molecules.
Recent research involving MOFs as drug delivery vehicles includes more than just the encapsulation of everyday drugs like ibuprofen and aspirin. In early 2018, Chen et al. published work detailing the use of the MOF ZIF-8 (zeolitic imidazolate framework-8) in antitumor research "to control the release of an autophagy inhibitor, 3-methyladenine (3-MA), and prevent it from dissipating in a large quantity before reaching the target." The group performed in vitro studies and determined that "the autophagy-related proteins and autophagy flux in HeLa cells treated with 3-MA@ZIF-8 NPs show that the autophagosome formation is significantly blocked, which reveals that the pH-sensitive dissociation increases the efficiency of autophagy inhibition at the equivalent concentration of 3-MA." This shows promise for future research and applicability of MOFs as drug delivery vehicles in the fight against cancer.
Semiconductors
In 2014 researchers demonstrated that they could create electrically conductive thin films of MOFs (Cu3(BTC)2, also known as HKUST-1; BTC, benzene-1,3,5-tricarboxylic acid) infiltrated with the molecule 7,7,8,8-tetracyanoquinodimethane, which could be used in applications including photovoltaics, sensors, and electronic materials and offer a path toward creating semiconductors. The team demonstrated tunable, air-stable electrical conductivity with values as high as 7 siemens per meter, comparable to bronze.
Ni3(2,3,6,7,10,11-hexaiminotriphenylene)2 was shown to be a metal-organic graphene analogue that has a natural band gap, making it a semiconductor, and is able to self-assemble. It is an example of a conductive metal-organic framework and represents a family of similar compounds. Because of the symmetry and geometry of 2,3,6,7,10,11-hexaiminotriphenylene (HITP), the overall organometallic complex has an almost fractal nature that allows it to self-organize perfectly. By contrast, graphene must be doped to give it the properties of a semiconductor. Ni3(HITP)2 pellets had a conductivity of 2 S/cm, a record for a metal-organic compound.
In 2018 researchers synthesized a two-dimensional semiconducting MOF, Fe3(THT)2(NH4)3 (THT: 2,3,6,7,10,11-triphenylenehexathiol), and showed that it exhibits high electrical mobility at room temperature. In 2020 the same material was integrated into a photodetecting device covering a broad wavelength range from UV to NIR (400–1575 nm). This was the first time a two-dimensional semiconducting MOF was demonstrated in optoelectronic devices.
Cu3(HHTP)2 is a 2D MOF; there are few examples of materials that are simultaneously intrinsically conductive, porous, and crystalline. Layered 2D MOFs have porous crystalline structures showing electrical conductivity. These materials are constructed from trigonal linker molecules (phenylene- or triphenylene-based) bearing six functional groups (–OH, –NH2, or –SH). The trigonal linker molecules and square-planar coordinated metal ions such as Cu2+, Ni2+, Co2+, and Pt2+ form layers with hexagonal structures that resemble graphene on a larger scale. Stacking of these layers can build one-dimensional pore systems. Graphene-like 2D MOFs have shown decent conductivities, which makes them good candidates as electrode materials for hydrogen evolution from water, oxygen reduction reactions, supercapacitors, and sensing of volatile organic compounds (VOCs). Among these MOFs, Cu3(HHTP)2 has exhibited the lowest conductivity but also the strongest response in sensing of VOCs.
Bio-mimetic mineralization
Biomolecules can be incorporated during the MOF crystallization process. Biomolecules including proteins, DNA, and antibodies could be encapsulated within ZIF-8. Enzymes encapsulated in this way were stable and active even after being exposed to harsh conditions (e.g. aggressive solvents and high temperature). ZIF-8, MIL-88A, HKUST-1, and several luminescent MOFs containing lanthanide metals were used for the biomimetic mineralization process.
Carbon capture
Adsorbent
MOFs' small, tunable pore sizes and high void fractions make them promising adsorbents for CO2 capture. MOFs could provide a more efficient alternative to traditional amine solvent-based methods for CO2 capture from coal-fired power plants.
MOFs could be employed in each of the main three carbon capture configurations for coal-fired power plants: pre-combustion, post-combustion, and oxy-combustion. The post-combustion configuration is the only one that can be retrofitted to existing plants, drawing the most interest and research. The flue gas would be fed through a MOF in a packed-bed reactor setup. Flue gas is generally 40 to 60 °C with a partial pressure of CO2 at 0.13 – 0.16 bar. CO2 can bind to the MOF surface through either physisorption (via Van der Waals interactions) or chemisorption (via covalent bond formation).
Once the MOF is saturated, the CO2 is extracted from the MOF through either a temperature swing or a pressure swing. This process is known as regeneration. In a temperature swing regeneration, the MOF would be heated until CO2 desorbs. To achieve working capacities comparable to the amine process, the MOF must be heated to around 200 °C. In a pressure swing, the pressure would be decreased until CO2 desorbs.
Another relevant MOF property is their low heat capacity. Monoethanolamine (MEA) solutions, the leading capture method, have a heat capacity between 3 and 4 J/(g⋅K) since they are mostly water. This high heat capacity contributes to the energy penalty in the solvent regeneration step, i.e. when the adsorbed CO2 is removed from the MEA solution. MOF-177, a MOF designed for CO2 capture, has a heat capacity of 0.5 J/(g⋅K) at ambient temperature.
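A back-of-the-envelope comparison of the sensible-heat portion of the regeneration penalty, using the heat capacities quoted above and a swing from roughly flue-gas temperature to about 200 °C, is sketched below. It neglects the heat of desorption and water evaporation, so it only illustrates why a low sorbent heat capacity matters.

```python
# Sensible-heat estimate for temperature-swing regeneration, using the heat
# capacities quoted above (MEA solution ~3.5 J/(g K), MOF-177 ~0.5 J/(g K)) and
# a swing from roughly flue-gas temperature to about 200 C. Heats of desorption
# and water evaporation are deliberately ignored.

def sensible_heat_per_gram(cp_j_per_g_k: float, t_start_c: float, t_end_c: float) -> float:
    return cp_j_per_g_k * (t_end_c - t_start_c)

for sorbent, cp in (("MEA solution (~3.5 J/(g K))", 3.5), ("MOF-177 (0.5 J/(g K))", 0.5)):
    q = sensible_heat_per_gram(cp, t_start_c=50.0, t_end_c=200.0)
    print(f"{sorbent}: ~{q:.0f} J of sensible heat per gram of sorbent per cycle")
```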
MOFs adsorb 90% of the CO2 using a vacuum pressure swing process. The MOF Mg(dobdc) has a 21.7 wt% CO2 loading capacity. Applied to a large-scale power plant, the cost of energy would increase by 65%, while a U.S. NETL baseline amine-based system would cause an increase of 81% (the goal is 35%). The capture cost would be $57/ton, while for the amine system the cost is estimated to be $72/ton. At that rate, the capital required to implement such a project in a 580 MW power plant would be $354 million.
Catalyst
A MOF loaded with propylene oxide can act as a catalyst, converting CO2 into cyclic carbonates (ring-shaped molecules with many applications). They can also remove carbon from biogas. This MOF is based on lanthanides, which provide chemical stability; this is especially important because the gases the MOF will be exposed to are hot, high in humidity, and acidic. Triaminoguanidinium-based POFs and Zn/POFs are new multifunctional materials for environmental remediation and biomedical applications.
Desalination/ion separation
MOF membranes can achieve substantial ion selectivity due to their small, regular pore structures. This offers potential for use in desalination and water treatment. As of 2020, reverse osmosis supplied more than two-thirds of global desalination capacity and constitutes the last stage of most water treatment processes. Reverse osmosis does not exploit the dehydration of ions or the selective ion transport found in biological channels, and it is not energy efficient. The mining industry uses membrane-based processes to reduce water pollution and to recover metals. MOFs could be used to extract metals such as lithium from seawater and waste streams.
MOF membranes such as ZIF-8 and UiO-66 membranes with uniform subnanometer pores consisting of angstrom-scale windows and nanometer-scale cavities displayed ultrafast selective transport of alkali metal ions. The windows acted as ion selectivity filters for alkali metal ions, while the cavities functioned as pores for transport. The ZIF-8 and UiO-66 membranes showed a LiCl/RbCl selectivity of ~4.6 and ~1.8, respectively, much higher than the 0.6 to 0.8 selectivity in traditional membranes. A 2020 study suggested that a new MOF called PSP-MIL-53 could be used along with sunlight to purify water in just half an hour.
Gas separation
MOFs are also predicted, on the basis of computational high-throughput screening of their adsorption and gas breakthrough/diffusion properties, to be very effective media for separating gases at low energy cost. One example is NbOFFIVE-1-Ni, also referred to as KAUST-7, which can separate propane and propylene via diffusion at nearly 100% selectivity. The molecule-specific selectivity provided by growth of the Cu-BDC surface-mounted metal-organic framework (SURMOF-2) on an alumina layer on top of a back-gated graphene field-effect transistor (GFET) can provide a sensor that is sensitive to ethanol but not to methanol or isopropanol.
Water vapor capture and dehumidification
MOFs have been demonstrated to capture water vapor from the air. In 2021, under humid conditions, a polymer-MOF lab prototype yielded 17 liters (4.5 gal) of water per kg per day without added energy.
MOFs could also be used to increase energy efficiency in room temperature space cooling applications.
When cooling outdoor air, a cooling unit must deal with both the air's sensible heat and latent heat. Typical vapor-compression air-conditioning (VCAC) units manage the latent heat in air through cooling fins held below the dew point temperature of the moist air at the intake. These fins condense the water, dehydrating the air and thus substantially reducing the air's heat content. The cooler's energy usage is highly dependent on the cooling coil's temperature and would be improved greatly if the temperature of this coil could be raised above the dew point. This makes it desirable to handle dehumidification through means other than condensation. One such means is by adsorbing the water from the air into a desiccant coated onto the heat exchangers, using the waste heat exhausted from the unit to desorb the water from the sorbent and thus regenerate the desiccant for repeated usage. This is accomplished by having two condenser/evaporator units through which the flow of refrigerant can be reversed once the desiccant on the condenser is saturated, thus making the condenser the evaporator and vice versa.
MOFs' extremely high surface areas and porosities have made them the subject of much research in water adsorption applications. Chemistry can help tune the optimal relative humidity for adsorption/desorption, and the sharpness of the water uptake.
Ferroelectrics and multiferroics
Some MOFs also exhibit spontaneous electric polarization, which occurs due to the ordering of electric dipoles (polar linkers or guest molecules) below a certain phase transition temperature. If this long-range dipolar order can be controlled by an external electric field, the MOF is called ferroelectric. Some ferroelectric MOFs also exhibit magnetic ordering, making them single-structural-phase multiferroics. This material property is of great interest for the construction of memory devices with high information density. In the type-I molecular multiferroic [(CH3)2NH2][Ni(HCOO)3], the coupling mechanism is indirect coupling mediated by spontaneous elastic strain.
See also
BET theory
Conjugated microporous polymer
Coordination chemistry
Coordination polymers
Covalent organic framework
Cryogenics
Electrocatalyst
Flexible metal-organic framework
Gérard Férey
Hydrogen economy
Hydrogen
Hydrogen-bonded organic framework
Liquid hydrogen
Macromolecular assembly
Metal–inorganic framework
Omar M. Yaghi
Organometallic chemistry
Crystal nets (periodic graphs)
Solid sorbents for carbon capture
Susumu Kitagawa
United States Department of Energy
X-ray Crystallography
Zeolitic imidazolate frameworks
References
External links
MOF pore characterizations
Designed metal-organic framework composites for metal-ion batteries and metal-ion capacitors – Gaurav Tatrari, Rong An, Faiz Ullah Shah, Coordination Chemistry Reviews, Volume 512, 215876
Hypothetical MOFs Database
MOF physical property calculator | Metal–organic framework | [
"Chemistry",
"Materials_science"
] | 18,311 | [
"Porous polymers",
"Metal-organic frameworks"
] |
9,825,116 | https://en.wikipedia.org/wiki/Diffraction%20from%20slits | Diffraction processes affecting waves are amenable to quantitative description and analysis. Such treatments are applied to a wave passing through one or more slits whose width is specified as a proportion of the wavelength. Numerical approximations may be used, including the Fresnel and Fraunhofer approximations.
General diffraction
Because diffraction is the result of addition of all waves (of given wavelength) along all unobstructed paths, the usual procedure is to consider the contribution of an infinitesimally small neighborhood around a certain path (this contribution is usually called a wavelet) and then integrate over all paths (= add all wavelets) from the source to the detector (or given point on a screen).
Thus in order to determine the pattern produced by diffraction, the phase and the amplitude of each of the wavelets is calculated. That is, at each point in space we must determine the distance to each of the simple sources on the incoming wavefront. If the distance to each of the simple sources differs by an integer number of wavelengths, all the wavelets will be in phase, resulting in constructive interference. If the distance to each source is an integer plus one half of a wavelength, there will be complete destructive interference. Usually, it is sufficient to determine these minima and maxima to explain the observed diffraction effects.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case, as water waves propagate only on the surface of the water. For light, we can often neglect one dimension if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes we will have to take into account the full three-dimensional nature of the problem.
Several qualitative observations can be made of diffraction in general:
The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: the smaller the diffracting object, the wider the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.)
The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The fourth figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing between the center of one slit and the next.
Approximations
The problem of calculating what a diffracted wave looks like is the problem of determining the phase of each of the simple sources on the incoming wave front. It is mathematically easier to consider the case of far-field or Fraunhofer diffraction, where the point of observation is far from the diffracting obstruction; this involves less complex mathematics than the more general case of near-field or Fresnel diffraction. To make this statement more quantitative, consider a diffracting object at the origin that has a size a. For definiteness, let us say we are diffracting light and we are interested in what the intensity looks like on a screen a distance L away from the object. At some point x on the screen the path length to one side of the object is given by the Pythagorean theorem
S = √(L² + (x + a/2)²)
If we now consider the situation where L ≫ (x + a/2), the path length becomes
S ≈ L + (x + a/2)²/(2L) = L + x²/(2L) + xa/(2L) + a²/(8L)
This is the Fresnel approximation. To simplify further: if the diffracting object is much smaller than the distance L, the last term will contribute much less than a wavelength to the path length, and will then not change the phase appreciably. That is, a²/(8L) < λ. The result is the Fraunhofer approximation, which is only valid very far away from the object:
S ≈ L + x²/(2L) + xa/(2L)
Depending on the size of the diffracting object, the distance to the object, and the wavelength of the wave, the Fresnel approximation, the Fraunhofer approximation, or neither approximation may be valid. As the distance between the point of observation and the obstruction increases, the predicted diffraction patterns converge towards those of Fraunhofer diffraction, which is more often observed in nature due to the extremely small wavelength of visible light.
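A simple way to make this regime distinction operational is through the dimensionless ratio a²/(Lλ), commonly called the Fresnel number. The sketch below applies the rule of thumb that the Fraunhofer approximation holds when this ratio is much less than 1; the numerical thresholds used are conventional choices, not sharp limits.

```python
# Rule-of-thumb regime classifier based on the ratio a^2 / (L * lambda)
# (the Fresnel number): when it is much less than 1, the term dropped in the
# Fraunhofer approximation contributes far less than a wavelength to the path.
# The numeric thresholds below are conventional choices, not sharp limits.

def diffraction_regime(size_a: float, distance_l: float, wavelength: float) -> str:
    fresnel_number = size_a ** 2 / (distance_l * wavelength)
    if fresnel_number < 0.1:
        return "Fraunhofer (far-field) approximation is reasonable"
    if fresnel_number < 10:
        return "Fresnel (near-field) approximation is more appropriate"
    return "neither approximation is reliable; use the full treatment"

# Example: a 0.1 mm aperture, 500 nm light, screen 1 m away.
print(diffraction_regime(size_a=1e-4, distance_l=1.0, wavelength=500e-9))
```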
Multiple narrow slits
A simple quantitative description
Multiple-slit arrangements can be mathematically considered as multiple simple wave sources, if the slits are narrow enough. For light, a slit is an opening that is infinitely extended in one dimension, and this has the effect of reducing a wave problem in 3D-space to a simpler problem in 2D-space.
The simplest case is that of two narrow slits, spaced a distance d apart. To determine the maxima and minima in the amplitude, we must determine the path difference to the first slit and to the second one. In the Fraunhofer approximation, with the observer far away from the slits, the difference in path length to the two slits can be seen from the image to be
ΔS = d sin θ
Maxima in the intensity occur if this path length difference is an integer number of wavelengths:
d sin θ = mλ
where
m is an integer that labels the order of each maximum,
λ is the wavelength,
d is the distance between the slits, and
θ is the angle at which constructive interference occurs.
The corresponding minima are at path differences of an integer number plus one half of the wavelength:
d sin θ = (m + 1/2)λ
For an array of slits, positions of the minima and maxima are not changed, the fringes visible on a screen however do become sharper, as can be seen in the image.
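The maximum and minimum conditions above can be evaluated directly; the sketch below lists the angles of the first few orders for an assumed, illustrative slit spacing and wavelength (not values from the text).

```python
import math

def double_slit_angles(d, wavelength, orders=range(0, 4)):
    """Angles (in degrees) of the maxima d*sin(theta) = m*lambda and the
    minima d*sin(theta) = (m + 1/2)*lambda; None if the order does not exist."""
    rows = []
    for m in orders:
        s_max = m * wavelength / d
        s_min = (m + 0.5) * wavelength / d
        theta_max = math.degrees(math.asin(s_max)) if abs(s_max) <= 1 else None
        theta_min = math.degrees(math.asin(s_min)) if abs(s_min) <= 1 else None
        rows.append((m, theta_max, theta_min))
    return rows

# Illustrative values: slit spacing 10 micrometres, wavelength 600 nm.
for m, theta_max, theta_min in double_slit_angles(d=10e-6, wavelength=600e-9):
    print(f"m={m}: maximum at {theta_max}, minimum at {theta_min} (degrees)")
```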
Mathematical description
To calculate this intensity pattern, one needs to introduce some more sophisticated methods. The mathematical representation of a radial wave is given by
$$\psi(r,t) = \frac{A}{r}\cos(kr - \omega t + \phi),$$
where $k = \frac{2\pi}{\lambda}$, $\lambda$ is the wavelength, $\omega$ is the frequency of the wave and $\phi$ is the phase of the wave at the slits at time t = 0. The wave at a screen some distance away from the plane of the slits is given by the sum of the waves emanating from each of the slits.
To make this problem a little easier, we introduce the complex wave $\Psi$, the real part of which is equal to $\psi$:
$$\Psi(r,t) = \frac{A}{r}\, e^{i(kr - \omega t + \phi)}.$$
The absolute value of this function gives the wave amplitude, and the complex phase of the function corresponds to the phase of the wave. $\Psi$ is referred to as the complex amplitude.
With $N$ slits, the total wave at point $x$ on the screen is
$$\Psi_{\text{total}} = \sum_{n=0}^{N-1} \frac{A}{r_n}\, e^{i(k r_n - \omega t + \phi)},$$
where $r_n$ is the distance from the $n$-th slit to the point on the screen. Since we are for the moment only interested in the amplitude and relative phase, we can ignore any overall phase factors that are not dependent on $x$ or $n$. We approximate $r_n \approx L + \frac{(x - nd)^2}{2L}$. In the Fraunhofer limit we can neglect terms of order $\frac{(nd)^2}{2L}$ in the exponential, and any terms involving $x$ or $nd$ in the denominator. The sum becomes
$$\Psi = \frac{A}{L}\, e^{i\left(k\left(L + \frac{x^2}{2L}\right) - \omega t + \phi\right)} \sum_{n=0}^{N-1} e^{-i\,\frac{k x n d}{L}}.$$
The sum has the form of a geometric sum and can be evaluated to give
$$\Psi = \frac{A}{L}\, e^{i\left(k\left(L + \frac{x^2}{2L} - \frac{(N-1)xd}{2L}\right) - \omega t + \phi\right)} \frac{\sin\!\left(\frac{N k x d}{2L}\right)}{\sin\!\left(\frac{k x d}{2L}\right)}.$$
The intensity is given by the absolute value of the complex amplitude squared
$$I(x) = \Psi\,\Psi^{*} = \left|\Psi\right|^{2} = \frac{A^{2}}{L^{2}} \left[\frac{\sin\!\left(\frac{N k x d}{2L}\right)}{\sin\!\left(\frac{k x d}{2L}\right)}\right]^{2},$$
where $\Psi^{*}$ denotes the complex conjugate of $\Psi$.
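A minimal numerical sketch of this behaviour: evaluating the ratio-of-sines intensity for N = 2 and N = 5 slits shows that the principal maxima stay at the same angles while becoming sharper as N grows, consistent with the statement above about arrays of slits. The slit spacing and wavelength below are illustrative assumptions.

```python
import numpy as np

def multi_slit_intensity(theta, N, d, wavelength):
    """Relative intensity [sin(N*beta)/sin(beta)]**2 with beta = pi*d*sin(theta)/wavelength,
    normalized so that the principal maxima equal 1."""
    beta = np.pi * d * np.sin(theta) / wavelength
    num, den = np.sin(N * beta), np.sin(beta)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(np.abs(den) < 1e-12, N, num / den)   # 0/0 limit -> N
    return (ratio / N) ** 2

theta = np.linspace(-0.2, 0.2, 5)          # illustrative screen angles (radians)
for N in (2, 5):                           # compare a double slit with five slits
    print(N, np.round(multi_slit_intensity(theta, N, d=10e-6, wavelength=600e-9), 3))
```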
Single slit
As an example, an exact equation can now be derived for the intensity of the diffraction pattern as a function of angle in the case of single-slit diffraction.
A mathematical representation of Huygens' principle can be used to start an equation.
Consider a monochromatic complex plane wave of wavelength λ incident on a slit of width a.
If the slit lies in the x′-y′ plane, with its center at the origin, then it can be assumed that diffraction generates a complex wave ψ, traveling radially in the r direction away from the slit, and this is given by:
Let $(x', y', 0)$ be a point inside the slit over which it is being integrated. If $(x, 0, z)$ is the location at which the intensity of the diffraction pattern is being computed, the slit extends from $x' = -\frac{a}{2}$ to $+\frac{a}{2}$, and from $y' = -\infty$ to $\infty$.
The distance r from the slit is:
$$r = \sqrt{(x - x')^2 + y'^2 + z^2}$$
Assuming Fraunhofer diffraction will result in the conclusion . In other words, the distance to the target is much larger than the diffraction width on the target.
By the binomial expansion rule, ignoring terms quadratic and higher, the quantity on the right can be estimated to be:
It can be seen that 1/r in front of the equation is non-oscillatory, i.e. its contribution to the magnitude of the intensity is small compared to our exponential factors. Therefore, we will lose little accuracy by approximating it as 1/z.
To make things cleaner, a placeholder C is used to denote constants in the equation. It is important to keep in mind that C can contain imaginary numbers, thus the wave function will be complex. However, at the end, the ψ will be bracketed, which will eliminate any imaginary components.
Now, in Fraunhofer diffraction, is small, so (note that participates in this exponential and it is being integrated).
In contrast the term can be eliminated from the equation, since when bracketed it gives 1.
(For the same reason we have also eliminated the term )
Taking results in:
It can be noted through Euler's formula and its derivatives that $\operatorname{sinc}(x) = \frac{\sin(x)}{x} = \frac{e^{ix} - e^{-ix}}{2ix}$
and from the geometry that
$\sin\theta = \frac{x}{z}$.
Therefore, we have
$$\psi = C a\,\operatorname{sinc}\!\left(\frac{a k \sin\theta}{2}\right),$$
where the (unnormalized) sinc function is defined by $\operatorname{sinc}(x) \equiv \frac{\sin(x)}{x}$.
Now, substituting in $\frac{2\pi}{\lambda} = k$, the intensity (squared amplitude) $I$ of the diffracted waves at an angle θ is given by:
$$I(\theta) = I_0\,\operatorname{sinc}^{2}\!\left(\frac{\pi a}{\lambda}\sin\theta\right)$$
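The closed-form single-slit result is easy to evaluate; the sketch below computes the relative intensity and locates the first minimum at sin θ = λ/a, using an illustrative (assumed) slit width and wavelength.

```python
import numpy as np

def single_slit_intensity(theta, a, wavelength):
    """Relative intensity I/I0 = sinc(pi*a*sin(theta)/wavelength)**2,
    with the unnormalized sinc(x) = sin(x)/x."""
    x = np.pi * a * np.sin(theta) / wavelength
    with np.errstate(divide="ignore", invalid="ignore"):
        sinc = np.where(np.abs(x) < 1e-12, 1.0, np.sin(x) / x)
    return sinc ** 2

a, lam = 50e-6, 600e-9                       # illustrative slit width and wavelength
theta_first_min = np.arcsin(lam / a)         # first zero of the pattern, sin(theta) = lambda/a
print("first minimum at", np.degrees(theta_first_min), "degrees")
print("I/I0 at that angle:", single_slit_intensity(theta_first_min, a, lam))
```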
Multiple slits
Let us again start with the mathematical representation of Huygens' principle.
Consider slits in the prime plane of equal size and spacing spread along the axis. As above, the distance from slit 1 is:
To generalize this to slits, we make the observation that while and remain constant, shifts by
Thus
and the sum of all contributions to the wave function is:
Again noting that is small, so , we have:
Now, we can use the following identity
Substituting into our equation, we find:
We now make our substitution as before and represent all non-oscillating constants by the variable as in the 1-slit diffraction and bracket the result. Remember that
This allows us to discard the tailing exponent and we have our answer:
General case for far field
In the far field, where $r$ is essentially constant, the equation:
$$\Psi(\theta) \propto \int_{\text{slit}} e^{-i k x' \sin\theta}\, dx'$$
is equivalent to doing a Fourier transform on the gaps in the barrier.
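Numerically, this statement means that the far-field amplitude can be obtained by taking the discrete Fourier transform of the sampled aperture transmission function. The sketch below does this for a single slit and compares the result with the sinc² prediction; all sampling parameters and dimensions are illustrative assumptions.

```python
import numpy as np

# Sample a one-dimensional aperture: transmission 1 inside a slit of width a, 0 elsewhere.
a, wavelength = 50e-6, 600e-9            # illustrative values
n, window = 2**14, 2e-3                  # number of samples and total sampled width (m)
x = np.linspace(-window / 2, window / 2, n, endpoint=False)
aperture = (np.abs(x) < a / 2).astype(float)

# Far-field amplitude ~ Fourier transform of the aperture;
# spatial frequency f corresponds to sin(theta) / wavelength.
amplitude = np.fft.fftshift(np.fft.fft(aperture))
f = np.fft.fftshift(np.fft.fftfreq(n, d=window / n))
intensity = np.abs(amplitude) ** 2
intensity /= intensity.max()

# Compare with the sinc**2 prediction near the first side lobe (sin(theta) ~ 1.43 * lambda / a).
sin_theta = 1.43 * wavelength / a
idx = np.argmin(np.abs(f - sin_theta / wavelength))
print("FFT intensity near first side lobe :", float(intensity[idx]))
print("sinc**2 prediction                 :", float(np.sinc(a * sin_theta / wavelength) ** 2))
```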
See also
Diffraction grating
Envelope (waves)
Fourier analysis
N-slit interferometer
Radio telescopes
References
Equations of physics
Wave mechanics | Diffraction from slits | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 2,148 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Equations of physics",
"Mathematical objects",
"Classical mechanics",
"Equations",
"Waves",
"Wave mechanics",
"Diffraction",
"Crystallography",
"Spectroscopy"
] |
9,827,398 | https://en.wikipedia.org/wiki/Peierls%20stress | Peierls stress (or Peierls-Nabarro stress, also known as the lattice friction stress) is the force (first described by Rudolf Peierls and modified by Frank Nabarro) needed to move a dislocation within a plane of atoms in the unit cell. The magnitude varies periodically as the dislocation moves within the plane. Peierls stress depends on the size and width of a dislocation and the distance between planes. Because of this, Peierls stress decreases with increasing distance between atomic planes. Yet since the distance between planes increases with planar atomic density, slip of the dislocation is preferred on closely packed planes.
Peierls–Nabarro stress proportionality
$$\tau_{\mathrm{PN}} \propto G\, e^{-\frac{2\pi W}{b}}$$
Where:
$W = \dfrac{a}{1 - \nu}$ is the dislocation width
$G$ = shear modulus
$\nu$ = Poisson's ratio
$b$ = slip distance or Burgers vector
$a$ = interplanar spacing
A numerical sketch of this relation follows below.
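The sketch treats the prefactor as simply G (a common simplification and an assumption here) and uses illustrative, roughly fcc-aluminium-like input values rather than data from the text.

```python
import math

def peierls_stress(G, nu, b, a):
    """Estimate the Peierls-Nabarro stress, tau ~ G * exp(-2*pi*W / b), with W = a / (1 - nu).

    G  : shear modulus (Pa)
    nu : Poisson's ratio
    b  : slip distance / Burgers vector (m)
    a  : interplanar spacing (m)
    """
    W = a / (1 - nu)              # dislocation width
    return G * math.exp(-2 * math.pi * W / b)

# Illustrative inputs only: G = 26 GPa, nu = 0.33, b = 0.286 nm, a = 0.233 nm.
# A larger interplanar spacing a gives a wider dislocation and a lower stress,
# which is why slip prefers closely packed planes.
tau = peierls_stress(G=26e9, nu=0.33, b=0.286e-9, a=0.233e-9)
print(f"estimated Peierls stress ~ {tau / 1e6:.1f} MPa")
```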
The Peierls stress and yield strength temperature sensitivity
The Peierls stress also relates to the temperature sensitivity of the yield strength of a material because it depends strongly on both short-range atomic order and atomic bond strength. As temperature increases, the vibration of atoms increases, and thus both the Peierls stress and the yield strength decrease as a result of weaker atomic bond strength at high temperatures.
References
Hertzberg, Richard W. Deformation and Fracture Mechanics of Engineering Materials 4th Edition
Crystallographic defects | Peierls stress | [
"Chemistry",
"Materials_science",
"Engineering"
] | 277 | [
"Materials science stubs",
"Crystallographic defects",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Materials degradation"
] |
9,830,701 | https://en.wikipedia.org/wiki/List%20of%20tallest%20statues | This list of tallest statues includes completed statues that are at least tall. The height values in this list are measured to the highest part of the human (or animal) figure, but exclude the height of any pedestal (plinth), or other base platform as well as any mast, spire, or other structure that extends higher than the tallest figure in the monument.
The definition of for this list is a free-standing sculpture (as opposed to a relief), representing one or more people or animals (real or mythical), in their entirety or partially (such as a bust). Heights stated are those of the statue itself and (separately) the total height of the monument that includes structures the statue is standing on or holding. Monuments that contain statues are included in this list only if the statue fulfills these and the height criteria.
Existing
By country/region
Destroyed
Proposed or under construction
See also
List of statues
List of tallest bridges
List of tallest buildings
List of tallest structures
List of the tallest statues in India
List of the tallest statues in Mexico
List of the tallest statues in Sri Lanka
List of the tallest statues in the United States
List of tallest Hindu statues
List of colossal sculpture in situ
List of largest monoliths
New 7 Wonders of the World
Notes
References
External links
Top 10 highest monuments – Architecture Portal News
Top highest monuments in the World
The 13 great Buddha statues of China (中國13尊大佛)
The tallest statues in the world – Video By Top 10 Hindi
Statues
Statues | List of tallest statues | [
"Physics",
"Mathematics"
] | 290 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
9,833,509 | https://en.wikipedia.org/wiki/CX3C%20motif%20chemokine%20receptor%201 | CX3C motif chemokine receptor 1 (CX3CR1), also known as the fractalkine receptor or G-protein coupled receptor 13 (GPR13), is a transmembrane protein of the G protein-coupled receptor 1 (GPCR1) family and the only known member of the CX3C chemokine receptor subfamily.
As the name suggests, this receptor binds the inflammatory chemokine CX3CL1 (also called neurotactin in mice or fractalkine in humans). This endogenous ligand binds solely to the CX3CR1 receptor. Interaction of CX3CR1 with CX3CL1 can mediate migration, adhesion and retention of leukocytes, because fractalkine exists as a membrane-anchored protein (mCX3CL1) as well as a cleaved soluble molecule (sCX3CL1) produced by proteolysis by metalloproteinases (MMPs). The shed soluble form carries out the typical function of conventional chemokines, chemotaxis, while the membrane-bound protein behaves as an adhesion molecule facilitating diapedesis.
Both partners of CX3CL1-CX3CR1 axis are present on numerous cell types from hematopoietic and nonhematopoietic cells throughout the body. Moreover, their distinct cell expression is dependent on specific tissues and organs, which provides broad sphere of biological activity. Hence, considering their various functional activity, they are also linked with multiple neurodegenerative and inflammatory disorders as well as with tumorigenesis.
Genetics
The coding gene for CX3CR1 is now officially named identically to its protein, CX3CR1, but may still be referred to by older names such as V28, CCRL1, GPR13, CMKDR1, GPRV28 and CMKBRL1. In humans the gene is located on the short arm of chromosome 3, at 3p22.2. It is composed of four exons (only one contains the coding region) and three intronic elements. Expression of the genomic sequence is regulated via three promoters.
Two missense mutations in the CX3CR1 gene, variants of single nucleotide polymorphism (SNP) of the receptor, are responsible for functional change of the protein. Names of these variants are derived from the given substitution and its position: valine to isoleucine (V249I) and threonine to methionine (T280M). Polymorphism of CX3CR1 has been linked to diseases of the cardiovascular system (e.g. atherosclerosis), the nervous system (e.g. Alzheimer's disease, sclerosis) and to infections (e.g. systemic candidiasis).
Orthologs of CX3CR1 gene are found among animals, especially in mammals with high functional similarity, namely chimpanzee, dog, cat, mouse and rat. Orthologs are located on chromosome 9qF4 in the mouse genome and in the rat 8th chromosome on position 8q32.
Expression
CX3CR1 is expressed constitutively or in inflammatory response in various cells from hematopoietic lineage: T lymphocytes, natural killer (NK) cells, dendritic cells, B lymphocytes, mast cells, monocytes, macrophages, neutrophils, microglia, osteoclasts and thrombocytes. Furthermore, this receptor can be also found in nonhematopoietic tissues such as endothelial cells, epithelial cells, myocytes and astrocytes. Considering the CX3CR1 abundance in the body, it was also found to be expressed by some types of malignant cells.
Function
The CX3CR1 receptor is part of the G-protein chemokine receptor family with the metabotropic function. Its intracellular signalling cascades are responsible for modulating cell activity rather towards higher active state as in survival, migration and proliferation.
During inflammation, the main function of the CX3CL1-CX3CR1 axis in the bloodstream is the recruitment of immune cells by migration through chemotaxis and diapedesis. As a part of the inflammatory immune response against pathogens, this role is considered protective. However, as with most immune cells and proteins, in inflammatory or autoimmune diseases CX3CR1 signalling is associated with some disease pathophysiology.
Expression of this receptor appears to be associated with lymphocytes. CX3CR1 is also expressed by monocytes and plays a major role in the survival of monocytes. Communication in blood vessels through the CX3CL1-CX3CR1 axis between endothelial cells and monocytes is responsible for formation of extracellular matrix and angiogenesis. It has been shown that CX3CR1 can influence monocytes already in bone marrow by means of retention and release. Moreover in bone marrow, CX3CR1 influences bone remodeling through role in differentiation of osteoclasts and osteoblasts.
The CX3CL1/CX3CR1 axis role in the nervous system is to mediate communication between microglia, neuroglia and neurons for regulation of microglia activity, hence this axis plays a neurodegenerative and neuroprotective function based on the physiological state.
Fractalkine signaling has also recently been discovered to play a developmental role in the migration of microglia in the central nervous system to their synaptic targets, where phagocytosis and synaptic refinement occur. CX3CR1 knockout mice had more synapses on hippocampal neurons than wild-type mice.
Structure
CX3CR1 is an integral membrane protein formed by 355 amino acids with a molecular weight of around 40 kDa, which consists of three distinguishable segments: an extracellular, a transmembrane and an intracellular part. As a member of the largest class of the GPCR family, the rhodopsin-like receptors, the intracellular part of the receptor, comprising the C-terminus of the polypeptide and three intracellular loops, forms the binding site, with the conserved DRYLAIV motif, for the heterotrimeric G protein. This family is also known as seven-transmembrane receptors (7-TM) because of the 7 α-helices of the transmembrane protein, which are alternately located in the cell's cytoplasmic membrane. The extracellular side of CX3CR1 consists of the N-terminus of the polypeptide chain and three extracellular loops, forming a binding place for its main ligand CX3CL1, but also for CCL26 (eotaxin-3, which has a lower binding affinity compared to fractalkine), immunoglobulins and infectious agents.
Signalling cascade
Signalling of the CX3CL1-CX3CR1 axis commences with activation of the receptor by binding of its agonist. This is followed by a conformational change and dissociation of the components of the heterotrimeric G complex, which consists of three subunits: α (alpha), β (beta) and γ (gamma). Several important signalling pathways are triggered by the separated parts of the G protein (Gα and Gβγ), such as the PLC/PKC pathway, the PI3K/AKT/NFκB pathway, the Ras/Raf/MEK/ERK (MAPK) pathway (or p38 and JNK) and the CREB pathway. All of these signalling cascades are responsible for diverse cellular behaviours and regulations, in terms of increased proliferation, survival and cell growth, metabolic regulation, induction of migration, apoptosis resistance and secretion of hormones and inflammatory cytokines. Products of CX3CR1 signalling cascades are important in the immune response of CX3CR1-positive hematopoietic cells.
Clinical significance
CX3CR1 and immune cells are strongly connected due to its abundant cell surface expression. Therefore, clinical meaning of CX3CR1 can be found in diseases connected with immunity. CX3CR1 is able to increase accumulation of immune cells in the affected body part, which results in disease aggravation. Few examples: allergies, Rheumatoid arthritis, Renal diseases, Chronic liver disease or Crohn's disease.
CX3CR1 is also a coreceptor for HIV-1, and some variations in this gene lead to increased susceptibility to HIV-1 infection and rapid progression to AIDS.
Since CX3CR1 plays a major role for interaction between endothelial cells and immune cells, it can aid vascular build up on the artery walls (plaque), thus it has been associated with Atherosclerosis. In addition, this may lead to thrombosis, other cardiovascular diseases or even cerebral ischemia.
CX3CL1-CX3CR1 axis has an ability to control neurological inflammation through activation of microglia. Its role in brain pathologies can be therefore protective but also detrimental. There are connections between microglia and neurodegenerative disorders like Alzheimer's disease, Parkinson's disease or even with neurocognitive HIV-dementia. Moreover, CX3CR1 variants have been described to modify the survival time and the progression rate of patients with amyotrophic lateral sclerosis.
Mutations in CX3CR1 are associated to dysplasia of the hip.
Homozygous CX3CR1-M280 mutation impairs human monocyte survival and deteriorates the outcome of human systemic candidiasis.
As mentioned before, this receptor and its ligand are important for the metabolism of the bone tissue in terms of differentiation of osteoclasts and osteoblasts. Overactivation of osteoclasts as well as accumulation of other immune cells has been linked to Osteoporosis.
CX3CR1 with Fractalkine have a meaningful place also in many various types of cancer (e.g. Neuroblastoma, Prostate cancer, Gastric adenocarcinoma or B cell lymphomas) where CX3CL1-CX3CR1 axis is a double agent, providing antitumoral effects (stimulating and recruiting immune cells to target neoplasm) and protumoral effects (stimulating important activity in malignant cells like: invasion, proliferation and apoptosis resistance, for facilitating metastasis). Therefore, it has a lot of potential as therapeutical target in cancer.
References
Further reading
External links
Cytokines
Receptors
Integral membrane proteins | CX3C motif chemokine receptor 1 | [
"Chemistry"
] | 2,268 | [
"Receptors",
"Cytokines",
"Signal transduction"
] |
4,373,936 | https://en.wikipedia.org/wiki/Heat%20deflection%20temperature | The heat deflection temperature or heat distortion temperature (HDT, HDTUL, or DTUL) is the temperature at which a polymer or plastic sample deforms under a specified load. This property of a given plastic material is applied in many aspects of product design, engineering and manufacture of products using thermoplastic components.
Determination
The heat distortion temperature is determined by the following test procedure outlined in ASTM D648. The test specimen is loaded in three-point bending in the edgewise direction. The outer fiber stress used for testing is either 0.455 MPa or 1.82 MPa, and the temperature is increased at 2 °C/min until the specimen deflects 0.25 mm. This is similar to the test procedure defined in the ISO 75 standard.
A limitation associated with the determination of the HDT is that the sample is not thermally isotropic and, in thick samples in particular, will contain a temperature gradient. The HDT of a particular material can also be very sensitive to the stress experienced by the component, which depends on the component's dimensions. The specified deflection of 0.25 mm (which corresponds to 0.2% additional strain) is chosen arbitrarily and has no particular physical significance.
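The load applied in the test setup follows from the standard three-point-bending relation σ = 3PL/(2bd²) between outer fiber stress and load. The sketch below inverts this relation for the two specified test stresses; the specimen dimensions are illustrative assumptions, not values prescribed by the standard.

```python
def three_point_bend_load(stress, span, width, depth):
    """Load P (in N) needed to produce a given outer fiber stress in three-point bending.

    Uses the standard beam relation  stress = 3*P*span / (2*width*depth**2),
    i.e.                              P = 2*stress*width*depth**2 / (3*span).
    All lengths in metres, stress in Pa.
    """
    return 2 * stress * width * depth**2 / (3 * span)

# Illustrative specimen only (assumed): 100 mm span, 13 mm width, 3.2 mm depth,
# loaded at the two outer fiber stress levels used in the test.
for stress in (0.455e6, 1.82e6):
    P = three_point_bend_load(stress, span=0.100, width=0.013, depth=0.0032)
    print(f"outer fiber stress {stress / 1e6:.3f} MPa -> applied load {P:.2f} N")
```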
Application in injection molding
An injection molded plastic part is considered "safe" to remove from its mold once it is near or below the HDT. This means that part deformation will be held within acceptable limits after removal. The molding of plastics by necessity occurs at high temperatures (routinely 200 °C or higher), because only at such temperatures is the viscosity of the melt low enough for the plastic to flow (this issue can be addressed to some extent by the addition of plasticizers to the melt, which is a secondary function of a plasticizer). Once plastic is in the mold, it must be cooled to a temperature at which little or no dimensional change will occur after removal. In general, plastics do not conduct heat well and so will take quite a while to cool to room temperature. One way to mitigate this is to use a cold mold (thereby increasing heat loss from the part). Even so, the cooling of the part to room temperature can limit the mass production of parts.
Choosing a resin with a higher heat deflection temperature (and therefore closer to melting temperature) can allow manufacturers to achieve a much faster molding process than they would otherwise while maintaining dimensional changes within certain limits.
See also
Vicat softening point
References
Polymer chemistry
Threshold temperatures | Heat deflection temperature | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 508 | [
"Physical phenomena",
"Phase transitions",
"Threshold temperatures",
"Materials science",
"Polymer chemistry"
] |
4,376,459 | https://en.wikipedia.org/wiki/Spin%20ice | A spin ice is a magnetic substance that does not have a single minimal-energy state. It has magnetic moments (i.e. "spin") as elementary degrees of freedom which are subject to frustrated interactions. By their nature, these interactions prevent the moments from exhibiting a periodic pattern in their orientation down to a temperature much below the energy scale set by the said interactions. Spin ices show low-temperature properties, residual entropy in particular, closely related to those of common crystalline water ice. The most prominent compounds with such properties are dysprosium titanate (Dy2Ti2O7) and holmium titanate (Ho2Ti2O7). The orientation of the magnetic moments in spin ice resembles the positional organization of hydrogen atoms (more accurately, ionized hydrogen, or protons) in conventional water ice (see figure 1).
Experiments have found evidence for the existence of deconfined magnetic monopoles in these materials, with properties resembling those of the hypothetical magnetic monopoles postulated to exist in vacuum.
Technical description
In 1935, Linus Pauling noted that the hydrogen atoms in water ice would be expected to remain disordered even at absolute zero. That is, even upon cooling to zero temperature, water ice is expected to have residual entropy, i.e., intrinsic randomness. This is due to the fact that the hexagonal crystalline structure of common water ice contains oxygen atoms with four neighboring hydrogen atoms. In ice, for each oxygen atom, two of the neighboring hydrogen atoms are near (forming the traditional H2O molecule), and two are further away (being the hydrogen atoms of two neighboring water molecules). Pauling noted that the number of configurations conforming to this "two-near, two-far" ice rule grows exponentially with the system size, and, therefore, that the zero-temperature entropy of ice was expected to be extensive. Pauling's findings were confirmed by specific heat measurements, though pure crystals of water ice are particularly hard to create.
Spin ices are materials that consist of regular corner-linked tetrahedra of magnetic ions, each of which has a non-zero magnetic moment, often abridged to "spin", which must satisfy in their low-energy state a "two-in, two-out" rule on each tetrahedron making the crystalline structure (see figure 2). This is highly analogous to the two-near, two far rule in water ice (see figure 1). Just as Pauling showed that the ice rule leads to an extensive entropy in water ice, so does the two-in, two-out rule in the spin ice systems – these exhibit the same residual entropy properties as water ice. Be that as it may, depending on the specific spin ice material, it is generally much easier to create large single crystals of spin ice materials than water ice crystals. Additionally, the ease to induce interaction of the magnetic moments with an external magnetic field in a spin ice system makes the spin ices more suitable than water ice for exploring how the residual entropy can be affected by external influences.
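Pauling's counting argument can be illustrated with a tiny enumeration: of the 2⁴ = 16 Ising configurations of a single tetrahedron, exactly 6 obey the two-in, two-out rule, which leads to the estimate S ≈ (R/2) ln(3/2) ≈ 1.68 J/(mol·K) of residual entropy per mole of spins. The sketch below reproduces this standard arithmetic (it is the textbook Pauling estimate, not a calculation taken from the text above).

```python
from itertools import product
import math

# Enumerate the 2**4 = 16 Ising configurations of one tetrahedron
# (+1 = moment pointing "in", -1 = pointing "out") and count those
# obeying the two-in, two-out ice rule.
configs = list(product((+1, -1), repeat=4))
ice_rule_states = [c for c in configs if sum(c) == 0]
print(len(ice_rule_states), "of", len(configs), "configurations satisfy two-in, two-out")

# Pauling estimate: for N spins there are N/2 tetrahedra, each constraint is
# satisfied with probability 6/16, so W ~ 2**N * (6/16)**(N/2) = (3/2)**(N/2)
# and the residual entropy per mole of spins is (R/2) * ln(3/2).
R = 8.314  # J / (mol K)
S_residual = 0.5 * R * math.log(1.5)
print(f"Pauling residual entropy ~ {S_residual:.2f} J/(mol K) per mole of spins")
```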
While Philip Anderson had already noted in 1956 the connection between the problem of the frustrated Ising antiferromagnet on a (pyrochlore) lattice of corner-shared tetrahedra and Pauling's water ice problem, real spin ice materials were only discovered forty years later. The first materials identified as spin ices were the pyrochlores Dy2Ti2O7 (dysprosium titanate), Ho2Ti2O7 (holmium titanate). In addition, compelling evidence has been reported that Dy2Sn2O7 (dysprosium stannate) and Ho2Sn2O7 (holmium stannate) are spin ices. These four compounds belong to the family of rare-earth pyrochlore oxides. CdEr2Se4, a spinel in which the magnetic Er3+ ions sit on corner-linked tetrahedra, also displays spin ice behavior.
Spin ice materials are characterized by a random disorder in the orientation of the moment of the magnetic ions, even when the material is at very low temperatures. Alternating current (AC) magnetic susceptibility measurements find evidence for a dynamic freezing of the magnetic moments as the temperature is lowered somewhat below the temperature at which the specific heat displays a maximum. The broad maximum in the heat capacity does not correspond to a phase transition. Rather, the temperature at which the maximum occurs, about 1K in Dy2Ti2O7, signals a rapid change in the number of tetrahedra where the two-in, two-out rule is violated. Tetrahedra where the rule is violated are sites where the aforementioned monopoles reside. Mathematically, spin ice configurations can be described by closed Eulerian paths.
Spin ices and magnetic monopoles
Spin ices are geometrically frustrated magnetic systems. While frustration is usually associated with triangular or tetrahedral arrangements of magnetic moments coupled via antiferromagnetic exchange interactions, as in Anderson's Ising model, spin ices are frustrated ferromagnets. It is the very strong local magnetic anisotropy from the crystal field forcing the magnetic moments to point either in or out of a tetrahedron that renders ferromagnetic interactions frustrated in spin ices. Most importantly, it is the long-range magnetostatic dipole–dipole interaction, and not the nearest-neighbor exchange, that causes the frustration and the consequential two-in, two-out rule that leads to the spin ice phenomenology.
For a tetrahedron in a two-in, two-out state, the magnetization field is divergent-free; there is as much "magnetization intensity" entering a tetrahedron as there is leaving (see figure 3). In such a divergent-free situation, there exists no source or sink for the field. According to Gauss' theorem (also known as Ostrogradsky's theorem), a nonzero divergence of a field is caused, and can be characterized, by a real number called "charge". In the context of spin ice, such charges characterizing the violation of the two-in, two-out magnetic moment orientation rule are the aforementioned monopoles.
In Autumn 2009, researchers reported experimental observation of low-energy quasiparticles resembling the predicted monopoles in spin ice. A single crystal of the dysprosium titanate spin ice candidate was examined in the temperature range of 0.6–2.0K. Using neutron scattering, the magnetic moments were shown to align in the spin ice material into interwoven tube-like bundles resembling Dirac strings. At the defect formed by the end of each tube, the magnetic field looks like that of a monopole. Using an applied magnetic field, the researchers were able to control the density and orientation of these strings. A description of the heat capacity of the material in terms of an effective gas of these quasiparticles was also presented.
The effective charge of a magnetic monopole, Q (see figure 3) in both the dysprosium and holmium titanate spin ice compounds is approximately (Bohr magnetons per angstrom). The elementary magnetic constituents of spin ice are magnetic dipoles, so the emergence of monopoles is an example of the phenomenon of fractionalization.
The microscopic origin of the atomic magnetic moments in magnetic materials is quantum mechanical; the Planck constant enters explicitly in the equation defining the magnetic moment of an electron, along with its charge and its mass. Yet, the magnetic moments in the dysprosium titanate and the holmium titanate spin ice materials are effectively described by classical statistical mechanics, and not quantum statistical mechanics, over the experimentally relevant and reasonably accessible temperature range (between 0.05K and 2K) where the spin ice phenomena manifest themselves. Although the weakness of quantum effects in these two compounds is rather unusual, it is believed to be understood. There is current interest in the search of quantum spin ices, materials in which the laws of quantum mechanics now become needed to describe the behavior of the magnetic moments. Magnetic ions other than dysprosium (Dy) and holmium (Ho) are required to generate a quantum spin ice, with praseodymium (Pr), terbium (Tb) and ytterbium (Yb) being possible candidates. One reason for the interest in quantum spin ice is the belief that these systems may harbor a quantum spin liquid, a state of matter where magnetic moments continue to wiggle (fluctuate) down to absolute zero temperature. The theory describing the low-temperature and low-energy properties of quantum spin ice is akin to that of vacuum quantum electrodynamics, or QED. This constitutes an example of the idea of emergence.
Artificial spin ices
Artificial spin ices are metamaterials consisting of coupled nanomagnets arranged on periodic and aperiodic lattices.
These systems have enabled the experimental investigation of a variety of phenomena such as frustration, emergent magnetic monopoles, and phase transitions. In addition, artificial spin ices show potential as reprogrammable magnonic crystals and have been studied for their fast dynamics. A variety of geometries have been explored, including quasicrystalline systems and 3D structures, as well as different magnetic materials to modify anisotropies and blocking temperatures.
For example, polymer magnetic composites comprising 2D lattices of droplets of solid-liquid phase change material, with each droplet containing a single magnetic dipole particle, form an artificial spin ice above the droplet melting point and, after cooling, a spin glass state with low bulk remanence. Spontaneous emergence of 2D magnetic vortices was observed in such spin ices, and the vortex geometries were correlated with the external bulk remanence.
Future work in this field includes further developments in fabrication and characterization methods, exploration of new geometries and material combinations, and potential applications in computation, data storage, and reconfigurable microwave circuits.
In 2021 a study demonstrated neuromorphic reservoir computing using artificial spin ice, solving a range of computational tasks using the complex magnetic dynamics of the artificial spin ice.
In 2022, another study achieved an artificial kagome spin ice which could potentially be used in the future for novel high-speed computers with low power consumption.
See also
Lieb's square ice constant
Spin glass
Magnetic monopole
Magnetricity
References
Magnetic ordering
Condensed matter physics
Ice | Spin ice | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,184 | [
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Matter"
] |
4,376,926 | https://en.wikipedia.org/wiki/Long%20Josephson%20junction | In superconductivity, a long Josephson junction (LJJ) is a Josephson junction which has one or more dimensions longer than the Josephson penetration depth . This definition is not strict.
In terms of the underlying model, a short Josephson junction is characterized by the Josephson phase $\phi(t)$, which is only a function of time, but not of coordinates, i.e. the Josephson junction is assumed to be point-like in space. In contrast, in a long Josephson junction the Josephson phase can be a function of one or two spatial coordinates, i.e., $\phi(x,t)$ or $\phi(x,y,t)$.
Simple model: the sine-Gordon equation
The simplest and the most frequently used model which describes the dynamics of the Josephson phase in LJJ is the so-called perturbed sine-Gordon equation. For the case of 1D LJJ it looks like:
$$\lambda_J^2\,\phi_{xx} - \omega_p^{-2}\,\phi_{tt} - \sin\phi = \omega_c^{-1}\,\phi_t - \frac{j}{j_c},$$
where subscripts $x$ and $t$ denote partial derivatives with respect to $x$ and $t$, $\lambda_J$ is the Josephson penetration depth, $\omega_p$ is the Josephson plasma frequency, $\omega_c$ is the so-called characteristic frequency and $j/j_c$ is the bias current density normalized to the critical current density $j_c$. In the above equation, the r.h.s. is considered as perturbation.
Usually for theoretical studies one uses the normalized sine-Gordon equation:
$$\phi_{xx} - \phi_{tt} - \sin\phi = \alpha\,\phi_t - \gamma,$$
where the spatial coordinate is normalized to the Josephson penetration depth $\lambda_J$ and time is normalized to the inverse plasma frequency $\omega_p^{-1}$. The parameter $\alpha = 1/\sqrt{\beta_c}$ is the dimensionless damping parameter ($\beta_c$ is the McCumber-Stewart parameter), and, finally, $\gamma = j/j_c$ is a normalized bias current.
Important solutions
Small amplitude plasma waves $\phi(x,t) = A\,e^{i(kx - \omega t)}$ with the dispersion relation $\omega^2 = 1 + k^2$ (in normalized units).
Soliton (aka fluxon, Josephson vortex):
$$\phi(x,t) = 4\arctan\exp\!\left(\pm\frac{x - ut}{\sqrt{1 - u^2}}\right)$$
Here $x$, $t$ and $u$ are the normalized coordinate, normalized time and normalized velocity. The physical velocity is normalized to the so-called Swihart velocity $\bar{c}_0 = \lambda_J \omega_p$, which represents a typical unit of velocity and is equal to the unit of space $\lambda_J$ divided by the unit of time $\omega_p^{-1}$.
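The fluxon solution can be tabulated directly in normalized units; the sketch below evaluates the kink profile and checks that the phase winds by 2π across it (corresponding to one flux quantum). The chosen velocity and grid are illustrative assumptions.

```python
import numpy as np

def fluxon(x, t, u, polarity=+1):
    """Sine-Gordon kink (fluxon) in normalized units:
    phi(x, t) = 4 * arctan(exp(+/- (x - u*t) / sqrt(1 - u**2))), with |u| < 1
    the velocity in units of the Swihart velocity, x in units of lambda_J and
    t in units of 1/omega_p."""
    xi = (x - u * t) / np.sqrt(1.0 - u**2)
    return 4.0 * np.arctan(np.exp(polarity * xi))

# Illustrative parameters: a fluxon moving at half the Swihart velocity.
x = np.linspace(-10.0, 10.0, 9)
phi = fluxon(x, t=0.0, u=0.5)
print(np.round(phi, 3))
print("phase winding across the fluxon:",
      round(float(phi[-1] - phi[0]) / np.pi, 2), "pi")
```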
References
Superconductivity
Josephson effect | Long Josephson junction | [
"Physics",
"Materials_science",
"Engineering"
] | 395 | [
"Josephson effect",
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
4,379,833 | https://en.wikipedia.org/wiki/Chromate%20conversion%20coating | Chromate conversion coating or alodine coating is a type of conversion coating used to passivate steel, aluminium, zinc, cadmium, copper, silver, titanium, magnesium, and tin alloys. The coating serves as a corrosion inhibitor, as a primer to improve the adherence of paints and adhesives, as a decorative finish, or to preserve electrical conductivity. It also provides some resistance to abrasion and light chemical attack (such as dirty fingers) on soft metals.
Chromate conversion coatings are commonly applied to items such as screws, hardware and tools. They usually impart a distinctively iridescent, greenish-yellow color to otherwise white or gray metals. The coating has a complex composition including chromium salts, and a complex structure.
The process is sometimes called alodine coating, a term used specifically in reference to the trademarked Alodine process of Henkel Surface Technologies.
Process
Chromate conversion coatings are usually applied by immersing the part in a chemical bath until a film of the desired thickness has formed, removing the part, rinsing it and letting it dry. The process is usually carried out at room temperature, with a few minutes of immersion. Alternatively, the solution can be sprayed, or the part can be briefly dipped in the bath, in which case the coating reactions take place while the part is still wet.
The coating is soft and gelatinous when first applied, but hardens and becomes hydrophobic as it dries, typically in 24 hours or less. Curing can be accelerated by heating to , but higher temperature will gradually damage the coating on steel.
Bath composition
The composition of the bath varies greatly, depending on the material to be coated and the desired effect. Most bath formulae are proprietary.
The formulations typically contain hexavalent chromium compounds, such as chromates and dichromates.
The widely used Cronak process for zinc and cadmium consists of 5–10 seconds of immersion in a room-temperature solution consisting of 182 g/L sodium dichromate (Na2Cr2O7 · 2H2O) and 6 mL/L concentrated sulfuric acid.
Chemistry
The chromate coating process starts with a redox reaction between the hexavalent chromium and the metal. In the case of aluminum, for example,
Cr⁶⁺ + Al⁰ → Cr³⁺ + Al³⁺
The resulting trivalent cations react with hydroxide ions in water to form the corresponding hydroxides, or a solid solution of both hydroxides:
Cr³⁺ + 3 OH⁻ → Cr(OH)₃
Al³⁺ + 3 OH⁻ → Al(OH)₃
Under appropriate conditions, these hydroxides condense with elimination of water to form a colloidal sol of very small particles, that are deposited as a hydrogel on the metal's surface. The gel consists of a three-dimensional solid skeleton of oxides and hydroxides, with nanoscale elements and voids, enclosing a liquid phase. The structure of the gel depends on metal ion concentration, pH, and other ingredients of the solution, such as chelating agents and counterions.
The gel film contracts as it dries, compressing the skeleton and causing it to stiffen. Eventually shrinkage stops, and further drying leaves the pores open but dry, turning the film into a xerogel. In the case of aluminum, the dry coating consists mostly of chromium(III) oxide Cr₂O₃, or mixed chromium(III)/(VI) oxide, with very little . Typically the process variables are adjusted to give a dry coating that is 200-300 nm thick.
The coating contracts as it dries, which causes it to crack into many microscopic scales, described as "dried mud" pattern. The trapped solution keeps reacting with any metal that gets exposed in the cracks, so that the final coating is continuous and covers the entire surface.
Although the main reactions turn most of the chromium(VI) anions (chromates and dichromates) in the deposited gel into insoluble chromium(III) compounds, a small quantity of them remains un-reacted in the dried-out coating. For example, in the coating formed on aluminum by a commercial bath, about 23% of the chromium atoms were found to be hexavalent, except in a region close to the metal. These chromium(VI) residues can migrate when the coating is wetted, and are believed to play a role in preventing corrosion in the finished part—specifically, by restoring the coating in any new microscopic cracks where corrosion could start.
Substrates
Zinc
Chromating is often performed on galvanized parts to make them more durable. The chromate coating acts as paint does, protecting the zinc from white corrosion, thus making the part considerably more durable, depending on the chromate layer's thickness.
The protective effect of chromate coatings on zinc is indicated by color, progressing from clear/blue to yellow, gold, olive drab and black. Darker coatings generally provide more corrosion resistance. The coating color can also be changed with dyes, so color is not a complete indicator of the process used.
ISO 4520 specifies chromate conversion coatings on electroplated zinc and cadmium coatings. ASTM B633 Type II and III specify zinc plating plus chromate conversion on iron and steel parts. Recent revisions of ASTM B633 defer to ASTM F1941 for zinc plating mechanical fasteners, like bolts, nuts, etc. 2019 is the current revision for ASTM B633 (superseded the revision from 2015), which raised required tensile thresholds when confronting hydrogen embrittlement issues and addressed embrittlement concerns in a new appendix.
Aluminium and its alloys
For aluminum, the chromate conversion bath can be simply a solution of chromic acid. The process is rapid (1–5 min), requires a single ambient temperature process tank and associated rinse, and is relatively trouble free.
As of 1995, Henkel's Alodine 1200s commercial formula for aluminum consisted of 50-60% chromic anhydride (CrO₃), 20-30% potassium tetrafluoroborate (KBF₄), 10-15% potassium ferricyanide (K₃Fe(CN)₆), 5-10% potassium hexafluorozirconate (K₂ZrF₆), and 5-10% sodium fluoride (NaF) by weight. The formula was meant to be dissolved in water at a concentration of 9.0 g/L, giving a bath with pH = 1.5. It yielded a light gold color after 1 min, and a golden-brown film after 3 min. The average thickness ranged between 200 and 1000 nm.
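As a small worked example of bath make-up arithmetic, the sketch below converts the quoted weight-percentage ranges and the 9.0 g/L make-up concentration into grams of each ingredient for a given tank volume. Using the midpoints of the ranges and the chosen tank volume are assumptions made only for illustration.

```python
# Weight-percentage ranges quoted for the powder blend; midpoints are used
# purely for illustration (they need not sum exactly to 100%).
composition = {
    "chromic anhydride (CrO3)": (50, 60),
    "potassium tetrafluoroborate (KBF4)": (20, 30),
    "potassium ferricyanide (K3Fe(CN)6)": (10, 15),
    "potassium hexafluorozirconate (K2ZrF6)": (5, 10),
    "sodium fluoride (NaF)": (5, 10),
}

make_up_g_per_L = 9.0          # quoted make-up concentration
bath_volume_L = 100.0          # assumed, illustrative tank volume
total_powder_g = make_up_g_per_L * bath_volume_L

for name, (lo, hi) in composition.items():
    midpoint_fraction = (lo + hi) / 2.0 / 100.0
    print(f"{name:42s} ~ {total_powder_g * midpoint_fraction:6.1f} g")
```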
Iridite 14-2 is a chromate conversion bath for aluminum. Its ingredients include chromium(VI) oxide, barium nitrate, sodium silicofluoride and ferricyanide. In the aluminum industry, the process is also called chemical film or yellow iridite. Commercial trademarked names include Iridite and Bonderite (formerly known as Alodine, or Alocrom in the UK). The main standards for chromate conversion coating of aluminium are MIL-DTL-5541 in the US, and Def Stan 03/18 in the UK.
Magnesium
Alodine may also refer to chromate-coating magnesium alloys.
Steel
Steel and iron cannot be chromated directly. Steel plated with zinc or zinc-aluminum alloy may be chromated. Chromating zinc plated steel does not enhance zinc's cathodic protection of the underlying steel from rust.
Phosphate coatings
Chromate conversion coatings can be applied over the phosphate conversion coatings often used on ferrous substrates. The process is used to enhance the phosphate coating.
Safety
Hexavalent chromium compounds have been the topic of intense workplace and public health concern for their carcinogenicity, and have become highly regulated.
In particular, concerns about the exposure of workers to chromates and dichromates while handling the immersion bath and the wet parts, as well as the small residues of those anions that remain trapped in the coating, have motivated the development of alternative commercial bath formulations that do not contain hexavalent chromium; for instance, by replacing the chromates by trivalent chromium salts, which are considerably less toxic and provide as good or better corrosion resistance than traditional hexavalent chromate conversion.
In Europe, the RoHS and REACH Directives encourage elimination of hexavalent chromium in a broad range of industrial applications and products, including chromate conversion coating processes.
References
External links
Yellow and green chromating chemistry on aluminium
Coatings
Corrosion prevention
Chromium | Chromate conversion coating | [
"Chemistry"
] | 1,783 | [
"Corrosion prevention",
"Coatings",
"Corrosion"
] |
4,380,384 | https://en.wikipedia.org/wiki/Sulindac | Sulindac is a nonsteroidal anti-inflammatory drug (NSAID) of the arylalkanoic acid class that is marketed as Clinoril. Imbaral (not to be confused with mebaral) is another name for this drug. Its name is derived from sul(finyl)+ ind(ene)+ ac(etic acid)
It was patented in 1969 and approved for medical use in 1976.
Medical uses
Like other NSAIDs, it is useful in the treatment of acute or chronic inflammatory conditions. Sulindac is a prodrug, derived from sulfinylindene, that is converted in the body to the active NSAID. More specifically, the agent is converted by liver enzymes to a sulfide that is excreted in the bile and then reabsorbed from the intestine. This is thought to help maintain constant blood levels with reduced gastrointestinal side effects. Some studies have shown sulindac to be relatively less irritating to the stomach than other NSAIDs except for drugs of the COX-2 inhibitor class. The exact mechanism of its NSAID properties is unknown, but it is thought to act on enzymes COX-1 and COX-2, inhibiting prostaglandin synthesis.
Its usual dosage is 150-200 milligrams twice per day, with food. It should not be used by persons with a history of major allergic reactions (urticaria or anaphylaxis) to aspirin or other NSAIDs, and should be used with caution by persons having pre-existing peptic ulcer disease. Sulindac is much more likely than other NSAIDs to cause damage to the liver or pancreas, though it is less likely to cause kidney damage than other NSAIDs.
Sulindac seems to have a property, independent of COX-inhibition, of reducing the growth of polyps and precancerous lesions in the colon, especially in association with familial adenomatous polyposis, and may have other anti-cancer properties.
Adverse effects
In October 2020, the U.S. Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. They recommend avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy.
Society and culture
Litigation
In September 2010 a federal jury in New Hampshire awarded $21 million to Karen Bartlett, a woman who developed Stevens–Johnson syndrome/Toxic epidermal necrolysis as a result of taking a generic brand of sulindac manufactured by Mutual Pharmaceuticals for her shoulder pain. Ms. Bartlett sustained severe injuries including the loss of over 60% of her surface skin and permanent near-blindness. The case had been appealed to the United States Supreme Court, where the main issue was whether federal law preempts Ms. Bartlett's claim. On June 24, 2013, the Supreme Court ruled 5–4 in favor of Mutual Pharmaceuticals, throwing out the earlier $21 million jury verdict.
Synthesis
Reaction of p-fluorobenzyl chloride (1) with the anion of diethyl methylmalonate (2) gives the intermediate diester (3), saponification of which and subsequent decarboxylation leads to 4. {Alternatively it can be formed by a Perkin reaction between p-fluorobenzaldehyde and propionic anhydride in the presence of NaOAc, followed by catalytic hydrogenation of the olefinic bond using a palladium on carbon catalyst.}
Polyphosphoric acid (PPA) cyclization leads to 5-fluoro-2-methyl-3-indanone (4). A Reformatsky reaction with zinc amalgam and bromoacetic ester leads to carbinol (5), which is then dehydrated with tosic acid to indene 6. {Alternatively, this step can be performed in a Knoevenagel condensation with cyanoacetic acid, which is then further decarboxylated.}
The active methylene group is condensed with p-methylthiobenzaldehyde, using sodium methoxide as catalyst, and then saponified to give the (Z)-alkene (7), which is in turn oxidized with sodium metaperiodate to the sulfoxide 8, the antiinflammatory agent sulindac.
References
External links
RxList information on Sulindac
Drug Profile
Jury Awards $21 Million
Nonsteroidal anti-inflammatory drugs
Prodrugs
Hepatotoxins
Indenes
Fluoroarenes
Carboxylic acids
Sulfoxides | Sulindac | [
"Chemistry"
] | 973 | [
"Chemicals in medicine",
"Carboxylic acids",
"Functional groups",
"Prodrugs"
] |
4,380,587 | https://en.wikipedia.org/wiki/Nuclear%20explosion | A nuclear explosion is an explosion that occurs as a result of the rapid release of energy from a high-speed nuclear reaction. The driving reaction may be nuclear fission or nuclear fusion or a multi-stage cascading combination of the two, though to date all fusion-based weapons have used a fission device to initiate fusion, and a pure fusion weapon remains a hypothetical device. Nuclear explosions are used in nuclear weapons and nuclear testing.
Nuclear explosions are extremely destructive compared to conventional (chemical) explosives, because of the vastly greater energy density of nuclear fuel compared to chemical explosives. They are often associated with mushroom clouds, since any large atmospheric explosion can create such a cloud. Nuclear explosions produce high levels of ionizing radiation and radioactive debris that is harmful to humans and can cause moderate to severe skin burns, eye damage, radiation sickness, radiation-induced cancer and possible death depending on how far a person is from the blast radius. Nuclear explosions can also have detrimental effects on the climate, lasting from months to years. A small-scale nuclear war could release enough particles into the atmosphere to cause the planet to cool and cause crops, animals, and agriculture to disappear across the globe—an effect named nuclear winter.
History
The beginning (fission explosions)
The first man-made nuclear explosion occurred on July 16, 1945, at 5:29 a.m. on the Trinity test site near Alamogordo, New Mexico, in the United States, an area now known as the White Sands Missile Range. The event involved the full-scale testing of an implosion-type fission atomic bomb. In a memorandum to the U.S. Secretary of War, General Leslie Groves describes the yield as equivalent to 15,000 to 20,000 tons of TNT. Following this test, a uranium gun-type nuclear bomb (Little Boy) was dropped on the Japanese city of Hiroshima on August 6, 1945, with a blast yield of 15 kilotons; and a plutonium implosion-type bomb (Fat Man) on Nagasaki on August 9, 1945, with a blast yield of 21 kilotons. Fat Man and Little Boy are the only instances in history of nuclear weapons being used as an act of war.
On August 29, 1949, the USSR became the second country to successfully test a nuclear weapon. RDS-1, dubbed "First Lightning" by the Soviets and "Joe-1" by the US, produced a 20 kiloton explosion and was essentially a copy of the American Fat Man plutonium implosion design.
Thermonuclear Era (fusion explosions)
The United States' first thermonuclear weapon, Ivy Mike, was detonated on 1 November 1952 at Enewetak Atoll and yielded 10 Megatons of explosive force. The first thermonuclear weapon tested by the USSR, RDS-6s (Joe-4), was detonated on August 12, 1953, at the Semipalatinsk Test Site in Kazakhstan and yielded about 400 kilotons. RDS-6s' design, nicknamed the Sloika, was remarkably similar to a version designed for the U.S. by Edward Teller nicknamed the "Alarm Clock", in that the nuclear device was a two-stage weapon: the first explosion was triggered by fission and the second more powerful explosion by fusion. The Sloika core consisted of a series of concentric spheres with alternating materials to help boost the explosive yield.
Proliferation Era
In the years following World War II, eight countries have conducted nuclear tests with 2475 devices fired in 2120 tests. In 1963, the United States, Soviet Union, and United Kingdom signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground tests. Many other non-nuclear nations acceded to the Treaty following its entry into force; however, France and China (both nuclear weapons states) have not.
The primary application to date has been military (i.e. nuclear weapons), and the remainder of explosions include the following:
Nuclear pulse propulsion, including using a nuclear explosion as asteroid deflection strategy.
Power generation; see PACER
Peaceful nuclear explosions
Nuclear weapons
Two nuclear weapons have been deployed in combat—both by the United States against Japan in World War II. The first event occurred on the morning of 6 August 1945, when the United States Army Air Forces dropped a uranium gun-type device, code-named "Little Boy", on the city of Hiroshima, killing 70,000 people, including 20,000 Japanese combatants and 20,000 Korean slave laborers. The second event occurred three days later when the United States Army Air Forces dropped a plutonium implosion-type device, code-named "Fat Man", on the city of Nagasaki. It killed 39,000 people, including 27,778 Japanese munitions employees, 2,000 Korean slave laborers, and 150 Japanese combatants. In total, around 109,000 people were killed in these bombings. Nuclear weapons are largely seen as a 'deterrent' by most governments; the sheer scale of the destruction caused by nuclear weapons has discouraged their use in warfare.
Nuclear testing
Since the Trinity test and excluding combat use, countries with nuclear weapons have detonated roughly 1,700 nuclear explosions, all but six as tests. Of these, six were peaceful nuclear explosions. Nuclear tests are experiments carried out to determine the effectiveness, yield and explosive capability of nuclear weapons. Throughout the 20th century, most nations that have developed nuclear weapons had a staged test of them. Testing nuclear weapons can yield information about how the weapons work, as well as how the weapons behave under various conditions and how structures behave when subjected to a nuclear explosion. Additionally, nuclear testing has often been used as an indicator of scientific and military strength, and many tests have been overtly political in their intention; most nuclear weapons states publicly declared their nuclear status by means of a nuclear test. Nuclear tests have taken place at more than 60 locations across the world; some in secluded areas and others more densely populated. Detonation of nuclear weapons (in a test or during war) releases radioactive fallout that concerned the public in the 1950s. This led to the Limited Test Ban Treaty of 1963 signed by the United States, Great Britain, and the Soviet Union. This treaty banned nuclear weapons testing in the atmosphere, outer space, and under water.
Effects of nuclear explosions
Shockwaves and radiation
The dominant effect of a nuclear weapon (the blast and thermal radiation) are the same physical damage mechanisms as conventional explosives, but the energy produced by a nuclear explosive is millions of times more per gram and the temperatures reached are in the tens of megakelvin. Nuclear weapons are quite different from conventional weapons because of the huge amount of explosive energy that they can put out and the different kinds of effects they make, like high temperatures and ionizing radiation.
The devastating impact of the explosion does not stop after the initial blast, as with conventional explosives. A cloud of nuclear radiation travels from the hypocenter of the explosion, causing an impact to life forms even after the heat waves have ceased. The health effects on humans from nuclear explosions comes from the initial shockwave, the radiation exposure, and the fallout. The initial shockwave and radiation exposure come from the immediate blast which has different effects on the health of humans depending on the distance from the center of the blast. The shockwave can rupture eardrums and lungs, can also throw people back, and cause buildings to collapse. Radiation exposure is delivered at the initial blast and can continue for an extended amount of time in the form of nuclear fallout. The main health effect of nuclear fallout is cancer and birth defects because radiation causes changes in cells that can either kill or make them abnormal. Any nuclear explosion (or nuclear war) would have wide-ranging, long-term, catastrophic effects. Radioactive contamination would cause genetic mutations and cancer across many generations.
Nuclear winter
Another potential devastating effect of nuclear war is termed nuclear winter. The idea became popularized in mainstream culture during the 1980s, when Richard P. Turco, Owen Toon, Thomas P. Ackerman, James B. Pollack and Carl Sagan collaborated and produced a scientific study which suggested the Earth's weather and climate can be severely impacted by nuclear war. The main idea is that once a conflict begins and the aggressors start detonating nuclear weapons, the explosions will eject small particles from the Earth's surface into the atmosphere as well as nuclear particles. It is also assumed that fires will break out and become widespread, similar to what happened at Hiroshima and Nagasaki at the end of WWII, which will cause soot and other harmful particles to also be introduced into the atmosphere. Once these harmful particles are lofted, strong upper-level winds in the troposphere can transport them thousands of kilometers, spreading nuclear fallout and also altering the Earth's radiation budget. Once enough small particles are in the atmosphere, they can act as cloud condensation nuclei, which causes global cloud coverage to increase, which in turn blocks incoming solar insolation and starts a global cooling period. This is not unlike one of the leading theories about the extinction of most dinosaur species, in which a large impact ejected small particulate matter into the atmosphere and resulted in a global catastrophe characterized by cooler temperatures and acid rain, and left behind the K–T boundary layer.
See also
Lists of nuclear disasters and radioactive incidents
Soviet nuclear well collapses
Visual depictions of nuclear explosions in fiction
References
External links
Video – Nuclear Explosion Power Comparison
NUKEMAP2.7 (modelling effects of nuclear explosion of various yield in various cities)
Nuclear physics
Nuclear chemistry
Nuclear weapon design
Nuclear accidents and incidents
Articles containing video clips | Nuclear explosion | [
"Physics",
"Chemistry"
] | 1,963 | [
"Nuclear accidents and incidents",
"Nuclear chemistry",
"nan",
"Nuclear physics",
"Radioactivity"
] |
9,118,440 | https://en.wikipedia.org/wiki/Dvoretzky%27s%20theorem | In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.
A new proof found by Vitali Milman in the 1970s was one of the starting points for the development of asymptotic geometric analysis (also called asymptotic functional analysis or the local theory of Banach spaces).
Original formulations
For every natural number k ∈ N and every ε > 0 there exists a natural number N(k, ε) ∈ N such that if (X, ‖·‖) is any normed space of dimension N(k, ε), there exists a subspace E ⊂ X of dimension k and a positive definite quadratic form Q on E such that the corresponding Euclidean norm
$$|x| = \sqrt{Q(x)}$$
on E satisfies:
$$|x| \le \|x\| \le (1+\varepsilon)\,|x| \quad \text{for every } x \in E.$$
In terms of the multiplicative Banach-Mazur distance d the theorem's conclusion can be formulated as:
$$d\big(E,\, \ell_2^k\big) \le 1 + \varepsilon,$$
where $\ell_2^k$ denotes the standard k-dimensional Euclidean space.
Since the unit ball of every normed vector space is a bounded, symmetric, convex set and the unit ball of every Euclidean space is an ellipsoid, the theorem may also be formulated as a statement about ellipsoid sections of convex sets.
Further developments
In 1971, Vitali Milman gave a new proof of Dvoretzky's theorem, making use of the concentration of measure on the sphere to show that a random k-dimensional subspace satisfies the above inequality with probability very close to 1. The proof gives the sharp dependence on k:
where the constant C(ε) only depends on ε.
We can thus state: for every ε > 0 there exists a constant C(ε) > 0 such that for every normed space (X, ‖·‖) of dimension N, there exists a subspace E ⊂ X of dimension
k ≥ C(ε) log N and a Euclidean norm |⋅| on E such that
More precisely, let SN − 1 denote the unit sphere with respect to some Euclidean structure Q on X, and let σ be the invariant probability measure on SN − 1. Then:
there exists such a subspace E with
For any X one may choose Q so that the term in the brackets will be at most
Here c1 is a universal constant. For given X and ε, the largest possible k is denoted k*(X) and called the Dvoretzky dimension of X.
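A minimal numerical sketch of the random-subspace phenomenon behind Milman's proof (the dimensions, sample counts, and the choice of the ℓ1-norm below are illustrative assumptions, not taken from the theorem): draw a random k-dimensional subspace of R^N, sample points on its Euclidean unit sphere, and compare the largest and smallest ℓ1-norms observed; for large N and small k the ratio is close to 1.

```python
import numpy as np

def l1_distortion(N=1000, k=5, samples=5000, seed=0):
    """Draw a random k-dimensional subspace of R^N, sample points on its
    Euclidean unit sphere, and return the ratio of the largest to the
    smallest l1-norm seen (a ratio near 1 means 'nearly Euclidean')."""
    rng = np.random.default_rng(seed)
    # Orthonormal basis of a uniformly random k-dimensional subspace.
    basis, _ = np.linalg.qr(rng.standard_normal((N, k)))      # shape (N, k)
    # Uniform random directions on the Euclidean unit sphere of the subspace.
    coeffs = rng.standard_normal((samples, k))
    coeffs /= np.linalg.norm(coeffs, axis=1, keepdims=True)
    points = coeffs @ basis.T                                  # shape (samples, N)
    l1 = np.abs(points).sum(axis=1)
    return l1.max() / l1.min()

print(l1_distortion())   # typically close to 1 for large N and small k
```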
The dependence on ε was studied by Yehoram Gordon, who showed that k*(X) ≥ c2 ε2 log N. Another proof of this result was given by Gideon Schechtman.
Noga Alon and Vitali Milman showed that the logarithmic bound on the dimension of the subspace in Dvoretzky's theorem can be significantly improved, if one is willing to accept a subspace that is close either to a Euclidean space or to a Chebyshev space. Specifically, for some constant c, every n-dimensional space has a subspace of dimension k ≥ exp(c√(log n)) that is close either to ℓ2^k or to ℓ∞^k.
Important related results were proved by Tadeusz Figiel, Joram Lindenstrauss and Milman.
References
Further reading
Banach spaces
Asymptotic geometric analysis
Theorems in functional analysis | Dvoretzky's theorem | [
"Mathematics"
] | 736 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
9,121,625 | https://en.wikipedia.org/wiki/Band%20offset | Band offset describes the relative alignment of the energy bands at a semiconductor heterojunction.
Introduction
At semiconductor heterojunctions, the energy bands of two different materials come together, leading to an interaction. The two band structures are positioned discontinuously with respect to each other, and they align close to the interface so that the Fermi energy level stays continuous throughout the two semiconductors. This alignment is caused by the discontinuity between the band structures of the two semiconductors and by the interaction of the two surfaces at the interface. The relative alignment of the energy bands at such semiconductor heterojunctions is called the band offset.
The band offsets can be determined by both intrinsic properties, that is, those determined by properties of the bulk materials, and non-intrinsic properties, namely, specific properties of the interface. Depending on the type of the interface, the offsets can be very accurately considered intrinsic, or can be modified by manipulating the interfacial structure. Isovalent heterojunctions are generally insensitive to manipulation of the interfacial structure, whilst heterovalent heterojunctions can be influenced in their band offsets by the geometry, the orientation, and the bonds of the interface and the charge transfer between the heterovalent bonds. The band offsets, especially those at heterovalent heterojunctions, depend significantly on the distribution of interface charge.
The band offsets are determined by two kinds of factors for the interface, the band discontinuities and the built-in potential. These discontinuities are caused by the difference in band gaps of the semiconductors and are distributed between two band discontinuities, the valence-band discontinuity, and the conduction-band discontinuity. The built-in potential is caused by the bands which bend close at the interface due to a charge imbalance between the two semiconductors, and can be described by Poisson's equation.
Semiconductor types
The behaviour of semiconductor heterojunctions depend on the alignment of the energy bands at the interface and thus on the band offsets. The interfaces of such heterojunctions can be categorized in three types: straddling gap (referred to as type I), staggered gap (type II), and broken gap (type III).
These representations do not take into account the band bending, which is a reasonable assumption if you only look at the interface itself, as band bending exerts its influence on a length scale of generally hundreds of angström. For a more accurate picture of the situation at hand, the inclusion of band bending is important.
Experimental methods
Two kinds of experimental techniques are used to describe band offsets. The first is an older approach, the first technique used to probe the heterojunction built-in potential and band discontinuities. These methods are generally called transport methods. They consist of two classes, either capacitance-voltage (C-V) or current-voltage (I-V) techniques. These older techniques were used to extract the built-in potential by assuming a square-root dependence of the capacitance C on (φbi − qV), with φbi the built-in potential, q the electron charge, and V the applied voltage. If the band extrema away from the interface, as well as their distance from the Fermi level, are known parameters (known a priori from bulk doping), it becomes possible to obtain the conduction band offset and the valence band offset. This square-root dependence corresponds to an ideally abrupt transition at the interface and it may or may not be a good approximation of the real junction behaviour.
The second kind of technique consists of optical methods. Photon absorption is used effectively as the conduction band and valence band discontinuities define quantum wells for the electrons and the holes. Optical techniques can be used to probe the direct transitions between sub-bands within the quantum wells, and with a few parameters known, such as the geometry of the structure and the effective mass, the transition energy measured experimentally can be used to probe the well depth. Band offset values are usually estimated using the optical response as a function of certain geometrical parameters or the intensity of an applied magnetic field. Light scattering could also be used to determine the size of the well depth.
Alignment
Prediction of the band alignment is at face value dependent on the heterojunction type, as well as whether or not the heterojunction in question is heterovalent or isovalent. However, quantifying this alignment proved a difficult task for a long time. Anderson's rule is used to construct energy band diagrams at heterojunctions between two semiconductors. It states that during the construction of an energy band diagram, the vacuum levels of the semiconductors on either side of the heterojunction should be equal.
Anderson's rule states that when we construct the heterojunction, we need to have both semiconductors on an equal vacuum energy level. This ensures that the energy bands of both the semiconductors are being held to the same reference point, from which ΔEc and ΔEv, the conduction band offset and valence band offset can be calculated. By having the same reference point for both semiconductors, ΔEc becomes equal to the built-in potential, Vbi = Φ1 - Φ2, and the behaviour of the bands at the interface can be predicted as can be seen at the picture above.
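A rough numerical sketch of this bookkeeping, assuming Anderson's rule with the vacuum level as the common reference (the material parameters below are approximate, illustrative values, and sign conventions vary between texts, so only magnitudes are reported):

```python
def anderson_offsets(chi_a, eg_a, chi_b, eg_b):
    """Band offsets (in eV) predicted by Anderson's rule for a heterojunction
    between materials A and B, referenced to a common vacuum level.
    Magnitudes are returned for simplicity; sign conventions vary."""
    delta_ec = abs(chi_a - chi_b)                    # conduction-band offset from electron affinities
    delta_ev = abs((chi_a + eg_a) - (chi_b + eg_b))  # valence-band offset from affinities + band gaps
    return delta_ec, delta_ev

# Illustrative, approximate room-temperature values (eV) for GaAs and Ge.
dEc, dEv = anderson_offsets(chi_a=4.07, eg_a=1.42, chi_b=4.0, eg_b=0.66)
print(f"Delta Ec ~ {dEc:.2f} eV, Delta Ev ~ {dEv:.2f} eV")
```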
Anderson's rule fails to predict real band offsets. This is primarily due to the fact that Anderson's model implies that the materials are assumed to behave the same as if they were separated by a large vacuum distance, however at these heterojunctions consisting of solids filling the space, there is no vacuum, and the use of the electron affinities at vacuum leads to wrong results. Anderson's rule ignores actual chemical bonding effects that occur on small vacuum separation or non-existent vacuum separation, which leads to wrong predictions about the band offsets.
A better theory for predicting band offsets has been linear-response theory. In this theory, interface dipoles have a significant impact on the lining up of the bands of the semiconductors. These interface dipoles however are not ions, rather they are mathematical constructs based upon the difference of charge density between the bulk and the interface. Linear-response theory is based on first-principles calculations, which are calculations aimed at solving the quantum-mechanical equations, without input from experiment. In this theory, the band offset is the sum of two terms, the first term is intrinsic and depends solely on the bulk properties, the second term, which vanishes for isovalent and abrupt non-polar heterojunctions, depends on the interface geometry, and can easily be calculated once the geometry is known, as well as certain quantities (such as the lattice parameters).
The goal of the model is to attempt to model the difference between the two semiconductors, that is, the difference with respect to a chosen optimal average (whose contribution to the band offset should vanish). An example would be GaAs-AlAs, constructing it from a virtual crystal of Al0.5Ga0.5As, then introducing an interface. After this a perturbation is added to turn the crystal into pure GaAs, whilst on the other side, the perturbation transforms the crystal in pure AlAs. These perturbations are sufficiently small so that they can be handled by linear-response theory and the electrostatic potential lineup across the interface can then be obtained up to the first order from the charge density response to those localized perturbations. Linear response theory works well for semiconductors with similar potentials (such as GaAs-AlAs) as well as dissimilar potentials (such as GaAs-Ge), which was doubted at first. However predictions made by linear response theory coincide exactly with those of self-consistent first principle calculations. If interfaces are polar however, or nonabrupt nonpolar oriented, additional effects must be taken into account. These are additional terms which require simple electrostatics, which is within the linear response approach.
References
See also
Physics
Semiconductor structures
Electronic band structures | Band offset | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,676 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
9,122,349 | https://en.wikipedia.org/wiki/Indium%20%28111In%29%20altumomab%20pentetate | Indium (111In) altumomab pentetate (INN) (USP, indium In 111 altumomab pentetate; trade name Hybri-ceaker) is a mouse monoclonal antibody linked to pentetate which acts as a chelating agent for the radioisotope indium-111. The drug is used for the diagnosis of colorectal cancer but has not been approved for use.
References
Monoclonal antibodies for tumors
Indium compounds
Antibody-drug conjugates
Radiopharmaceuticals | Indium (111In) altumomab pentetate | [
"Chemistry",
"Biology"
] | 117 | [
"Antibody-drug conjugates",
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
9,122,581 | https://en.wikipedia.org/wiki/Young%E2%80%93Laplace%20equation | In physics, the Young–Laplace equation () is an algebraic equation that describes the capillary pressure difference sustained across the interface between two static fluids, such as water and air, due to the phenomenon of surface tension or wall tension, although use of the latter is only applicable if assuming that the wall is very thin. The Young–Laplace equation relates the pressure difference to the shape of the surface or wall and it is fundamentally important in the study of static capillary surfaces. It is a statement of normal stress balance for static fluids meeting at an interface, where the interface is treated as a surface (zero thickness):
Δp = −γ∇·n̂ = 2γH = γ(1/R1 + 1/R2), where Δp is the Laplace pressure, the pressure difference across the fluid interface (the exterior pressure minus the interior pressure), γ is the surface tension (or wall tension), n̂ is the unit normal pointing out of the surface, H is the mean curvature, and R1 and R2 are the principal radii of curvature. Note that only normal stress is considered, because a static interface is possible only in the absence of tangential stress.
The equation is named after Thomas Young, who developed the qualitative theory of surface tension in 1805, and Pierre-Simon Laplace who completed the mathematical description in the following year. It is sometimes also called the Young–Laplace–Gauss equation, as Carl Friedrich Gauss unified the work of Young and Laplace in 1830, deriving both the differential equation and boundary conditions using Johann Bernoulli's virtual work principles.
Soap films
If the pressure difference is zero, as in a soap film without gravity, the interface will assume the shape of a minimal surface.
Emulsions
The equation also explains the energy required to create an emulsion. To form the small, highly curved droplets of an emulsion, extra energy is required to overcome the large pressure that results from their small radius.
The Laplace pressure, which is greater for smaller droplets, causes the diffusion of molecules out of the smallest droplets in an emulsion and drives emulsion coarsening via Ostwald ripening.
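A minimal sketch of this scaling, assuming the spherical-droplet form of the Laplace pressure, Δp = 2γ/R (the interfacial tension used below is an illustrative value):

```python
def laplace_pressure_sphere(gamma, radius):
    """Pressure excess inside a spherical droplet of radius R: delta_p = 2*gamma/R."""
    return 2.0 * gamma / radius

gamma_interface = 0.03            # N/m, an illustrative oil-water interfacial tension
for r in (1e-6, 1e-7):            # 1 micrometre vs 100 nm droplet radius
    dp = laplace_pressure_sphere(gamma_interface, r)
    print(f"R = {r*1e9:.0f} nm -> Laplace pressure = {dp/1e3:.0f} kPa")
```

The smaller droplet sustains a tenfold larger internal pressure, which is the driving force behind Ostwald ripening.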
Capillary pressure in a tube
In a sufficiently narrow (i.e., low Bond number) tube of circular cross-section (radius a), the interface between two fluids forms a meniscus that is a portion of the surface of a sphere with radius R. The pressure jump across this surface is related to the radius and the surface tension γ by Δp = 2γ/R.
This may be shown by writing the Young–Laplace equation in spherical form with a contact angle boundary condition and also a prescribed height boundary condition at, say, the bottom of the meniscus. The solution is a portion of a sphere, and the solution will exist only for the pressure difference shown above. This is significant because there isn't another equation or law to specify the pressure difference; existence of solution for one specific value of the pressure difference prescribes it.
The radius of the sphere will be a function only of the contact angle, θ, which in turn depends on the exact properties of the fluids and the container material with which the fluids in question are contacting/interfacing: R = a / cos θ,
so that the pressure difference may be written as: Δp = 2γ cos θ / a.
In order to maintain hydrostatic equilibrium, the induced capillary pressure is balanced by a change in height, h, which can be positive or negative, depending on whether the wetting angle is less than or greater than 90°. For a fluid of density ρ: h = 2γ cos θ / (ρ g a),
where g is the gravitational acceleration. This is sometimes known as Jurin's law or Jurin height after James Jurin who studied the effect in 1718.
For a water-filled glass tube in air at sea level:
γ = 0.0728 J/m2 at 20 °C
θ = 20° (0.35 rad)
ρ = 1000 kg/m3
g = 9.8 m/s2
and so the height of the water column is given by: h ≈ (1.4 × 10^-5 m^2) / a, with the tube radius a in metres.
Thus for a 2 mm wide (1 mm radius) tube, the water would rise 14 mm. However, for a capillary tube with radius 0.1 mm, the water would rise 14 cm (about 6 inches).
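A minimal Python check of these figures, assuming the Jurin relation h = 2γ cos θ / (ρga) given above (the script layout and function name are illustrative):

```python
import math

def jurin_height(gamma, theta_deg, rho, g, radius):
    """Capillary rise h = 2*gamma*cos(theta) / (rho*g*a) for a tube of radius a (metres)."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * radius)

gamma = 0.0728   # N/m, water-air surface tension at 20 C
theta = 20.0     # degrees, contact angle
rho = 1000.0     # kg/m^3, density of water
g = 9.8          # m/s^2

for a in (1e-3, 1e-4):   # 1 mm and 0.1 mm tube radii
    h = jurin_height(gamma, theta, rho, g, a)
    print(f"radius {a*1e3:.1f} mm -> rise {h*100:.1f} cm")
# radius 1.0 mm -> rise 1.4 cm (14 mm); radius 0.1 mm -> rise 14.0 cm
```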
Capillary action and gravity
When including also the effects of gravity, for a free surface and for a pressure difference between the fluids equal to Δp at the level h=0, there is a balance, when the interface is in equilibrium, between Δp, the hydrostatic pressure and the effects of surface tension. The Young–Laplace equation becomes:
Note that the mean curvature of the fluid-fluid interface now depends on h.
The equation can be non-dimensionalised in terms of its characteristic length-scale, the capillary length: Lc = √(γ / (ρg)),
and characteristic pressure: pc = γ / Lc = √(γρg).
For clean water at standard temperature and pressure, the capillary length is ~2 mm.
The non-dimensional equation then becomes:
Thus, the surface shape is determined by only one parameter, the over pressure of the fluid, Δp* and the scale of the surface is given by the capillary length. The solution of the equation requires an initial condition for position, and the gradient of the surface at the start point.
Axisymmetric equations
The (nondimensional) shape, r(z) of an axisymmetric surface can be found by substituting general expressions for principal curvatures to give the hydrostatic Young–Laplace equations:
Application in medicine
In medicine it is often referred to as the Law of Laplace, used in the context of cardiovascular physiology, and also respiratory physiology, though the latter use is often erroneous.
History
Francis Hauksbee performed some of the earliest observations and experiments in 1709 and these were repeated in 1718 by James Jurin who observed that the height of fluid in a capillary column was a function only of the cross-sectional area at the surface, not of any other dimensions of the column.
Thomas Young laid the foundations of the equation in his 1804 paper An Essay on the Cohesion of Fluids where he set out in descriptive terms the principles governing contact between fluids (along with many other aspects of fluid behaviour). Pierre Simon Laplace followed this up in Mécanique Céleste with the formal mathematical description given above, which reproduced in symbolic terms the relationship described earlier by Young.
Laplace accepted the idea propounded by Hauksbee in his book Physico-mechanical Experiments (1709), that the phenomenon was due to a force of attraction that was insensible at sensible distances. The part which deals with the action of a solid on a liquid and the mutual action of two liquids was not worked out thoroughly, but ultimately was completed by Carl Friedrich Gauss. Franz Ernst Neumann (1798-1895) later filled in a few details.
References
Further reading
Batchelor, G. K. (1967) An Introduction To Fluid Dynamics, Cambridge University Press
Tadros T. F. (1995) Surfactants in Agrochemicals, Surfactant Science series, vol.54, Dekker
Fluid dynamics
Physiology
Partial differential equations
Mathematics in medicine
Respiratory therapy
Equations of fluid dynamics | Young–Laplace equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Biology"
] | 1,423 | [
"Equations of fluid dynamics",
"Equations of physics",
"Physiology",
"Applied mathematics",
"Chemical engineering",
"Piping",
"Mathematics in medicine",
"Fluid dynamics"
] |
9,123,511 | https://en.wikipedia.org/wiki/Electronics%20Letters | Electronics Letters is a peer-reviewed scientific journal published biweekly by the Institution of Engineering and Technology. It specializes in the rapid publication of short communications on all areas of electronic engineering, including optical, communication, and biomedical engineering, as well as electronic circuits and signal processing.
In 2010 Electronics Letters was relaunched with a new section at the start of each issue. This section focuses on selected papers within the issue, providing expanded context and background to the research reported, through magazine-style news articles and interviews with the researchers behind the work. The articles are designed to be accessible to a general engineering audience and were made available free of charge, without a subscription, from the journal's website.
In 2013 a hybrid open-access model was introduced providing authors whose papers have been accepted for publication with an open access publication option.
History
In 1965, the British engineer and professor Peter Clarricoats, together with the Institution of Electrical Engineers, pioneered a peer-reviewed platform out of the need to rapidly disseminate the latest research in the field of electrical and electronic engineering. He became the first editor-in-chief of Electronics Letters. At present, professor Ian H. White, Head of Photonics Research at the University of Cambridge, and professor Chris Toumazou of Imperial College London are the editors-in-chief of Electronics Letters.
References
External links
Electrical and electronic engineering journals
Biweekly journals
English-language journals
Academic journals established in 1965
Academic journals published by learned and professional societies of the United Kingdom
Institution of Engineering and Technology academic journals
Electronics journals | Electronics Letters | [
"Engineering"
] | 315 | [
"Institution of Engineering and Technology",
"Institution of Engineering and Technology academic journals",
"Electronic engineering",
"Electrical engineering",
"Electrical and electronic engineering journals"
] |
9,124,553 | https://en.wikipedia.org/wiki/Generalized%20assignment%20problem | In applied mathematics, the maximum generalized assignment problem is a problem in combinatorial optimization. This problem is a generalization of the assignment problem in which both tasks and agents have a size. Moreover, the size of each task might vary from one agent to the other.
This problem in its most general form is as follows: There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost and profit that may vary depending on the agent-task assignment. Moreover, each agent has a budget and the sum of the costs of tasks assigned to it cannot exceed this budget. It is required to find an assignment in which all agents do not exceed their budget and total profit of the assignment is maximized.
In special cases
In the special case in which all the agents' budgets and all tasks' costs are equal to 1, this problem reduces to the assignment problem. When the costs and profits of all tasks do not vary between different agents, this problem reduces to the multiple knapsack problem. If there is a single agent, then this problem reduces to the knapsack problem.
Explanation of definition
In the following, we have n kinds of items, numbered 1 through n, and m kinds of bins, numbered 1 through m. Each bin i is associated with a budget t(i). For a bin i, each item j has a profit p(i,j) and a weight w(i,j). A solution is an assignment from items to bins. A feasible solution is a solution in which for each bin i the total weight of assigned items is at most t(i). The solution's profit is the sum of profits for each item-bin assignment. The goal is to find a maximum profit feasible solution.
Mathematically the generalized assignment problem can be formulated as an integer program: maximize the total profit Σi Σj p(i,j)·x(i,j), subject to Σj w(i,j)·x(i,j) ≤ t(i) for every bin i, Σi x(i,j) = 1 for every item j, and x(i,j) ∈ {0, 1}, where x(i,j) = 1 means that item j is assigned to bin i.
Complexity
The generalized assignment problem is NP-hard. However, there are linear-programming relaxations which give constant-factor approximations.
Greedy approximation algorithm
For the problem variant in which not every item must be assigned to a bin, there is a family of algorithms for solving the GAP by using a combinatorial translation of any algorithm for the knapsack problem into an approximation algorithm for the GAP.
Using any α-approximation algorithm ALG for the knapsack problem, it is possible to construct a (1 + α)-approximation for the generalized assignment problem in a greedy manner using a residual profit concept, as sketched in code after the algorithm below.
The algorithm constructs a schedule in m iterations, where during iteration i a tentative selection of items for bin i is made.
The selection for bin i might change, as items might be reselected in a later iteration for other bins.
The residual profit of an item j for bin i is p(i,j) if j is not currently selected for any other bin, or p(i,j) − p(i',j) if j is currently selected for another bin i'.
Formally: We use a vector T of length n to indicate the tentative schedule during the algorithm. Specifically, T[j] = i means that item j is scheduled on bin i, and T[j] = −1 means that item j is not scheduled. The residual profit in iteration i is denoted by P, where P(j) = p(i,j) if item j is not scheduled (i.e. T[j] = −1) and P(j) = p(i,j) − p(T[j],j) if item j is scheduled on some bin (i.e. T[j] ≠ −1).
Formally:
Set T[j] = −1 for all j = 1, …, n.
For i = 1, …, m do:
Call ALG to find a solution to bin i using the residual profit function P. Denote the selected items by S(i).
Update T using S(i), i.e., T[j] = i for all j in S(i).
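A compact Python sketch of the scheme above, assuming an exact dynamic-programming knapsack as ALG and non-negative integer weights (both simplifying assumptions made here; the example data are made up):

```python
def knapsack(profits, weights, capacity):
    """Exact 0/1 knapsack over integer weights; returns indices of chosen items."""
    n = len(profits)
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if w <= c and best[i - 1][c - w] + p > best[i][c]:
                best[i][c] = best[i - 1][c - w] + p
    chosen, c = [], capacity
    for i in range(n, 0, -1):                 # backtrack to recover the chosen items
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return chosen

def greedy_gap(profit, weight, budget):
    """profit[i][j], weight[i][j]: profit and weight of item j in bin i;
    budget[i]: capacity of bin i. Returns T with T[j] = bin of item j, or -1."""
    m, n = len(budget), len(profit[0])
    T = [-1] * n
    for i in range(m):
        # Residual profit of each item for bin i; items with no gain are skipped.
        residual = [profit[i][j] - (profit[T[j]][j] if T[j] != -1 else 0.0) for j in range(n)]
        cand = [j for j in range(n) if residual[j] > 0]
        picked = knapsack([residual[j] for j in cand],
                          [weight[i][j] for j in cand], budget[i])
        for k in picked:
            T[cand[k]] = i       # items may be taken away from earlier bins
    return T

# Tiny made-up example: 2 bins, 3 items.
profit = [[6, 4, 3], [5, 6, 2]]
weight = [[2, 3, 1], [3, 2, 2]]
budget = [4, 4]
print(greedy_gap(profit, weight, budget))    # e.g. [0, 1, 0]
```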
See also
Assignment problem
References
Further reading
NP-complete problems
Combinatorial optimization | Generalized assignment problem | [
"Mathematics"
] | 658 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
5,816,930 | https://en.wikipedia.org/wiki/Parts%20Manufacturer%20Approval | Parts Manufacturer Approval (PMA) is an approval granted by the United States Federal Aviation Administration (FAA) to a manufacturer of aircraft parts.
Approval
It is generally illegal in the United States to install replacement or modification parts on a certificated aircraft without an airworthiness release such as a Supplemental Type Certificate (STC) or Parts Manufacturing Approval (PMA). There are a number of other methods of compliance, including parts manufactured to government or industry standards, parts manufactured under technical standard order authorization (TSO), owner-/operator-produced parts, experimental aircraft, field approvals, etc.
PMA-holding manufacturers are permitted to make replacement parts for aircraft, even though they are not the original manufacturer of the aircraft. The process is analogous to 'after-market' parts for automobiles, except that the United States aircraft parts production market remains tightly regulated by the FAA.
An applicant for a PMA applies for approval from the FAA. The FAA prioritizes its review of a new application based on its internal process called Project Prioritization.
The FAA Order covering the application for PMA is Order 8110.42 revision D. This document is worded as instructions to the FAA reviewing personnel. An accompanying Advisory Circular (AC) 21.303-4 is intended to address the applicant; revision 8110.42C addressed both the applicant and the reviewer. Per the order, application for a PMA can be made in the following ways. Identicality, in which the applicant attempts to convince the FAA that the PMA part is identical to the OAH (Original Approval Holder) part. Identicality by Licensure, accomplished by providing evidence to the FAA that the applicant has licensed the part data from the OAH; this evidence is usually in the form of an Assist Letter provided to the applicant by the OAH. PMA may also be granted based upon prior approval of an STC. As an example: if an STC were granted to alter an existing aircraft design, then that approval would also apply to the parts needed to make that modification; a PMA would be required, however, to manufacture the parts. The last method to obtain a PMA is Test & Computation. This approach consists of one or a combination of two methods: General Analysis and Comparative Analysis. General Analysis compares the proposed part to the functional requirements of that part when installed; Comparative Analysis compares the function of the proposed part to that of the OAH part. As an example: if a PMA application for flight control cables were to show that the PMA part exceeds the pull-strength requirements of the aircraft system it is meant for, that is general analysis; to show that it exceeds that of the OAH part is comparative analysis. The modern trend is to use a variety of techniques in combination in order to obtain approval of complicated parts - relying on the techniques that are most accurate and best able to provide the proof of airworthiness desired. The cognizant regional FAA Aircraft Certification Office (ACO) determines if the applicant has shown compliance with all relevant airworthiness regulations and is thus entitled to design approval.
The second step in the application process is to apply to the FAA Manufacturing Inspection Divisional Office (MIDO) to obtain approval of the manufacturing quality assurance system (known as production approval). Production approval will be granted when the FAA is satisfied that the system will not permit parts to leave the system until the parts have been verified to meet the requirements of the approved design, and the system otherwise meets the requirements of the FAA quality system regulations. A Production Approval Holder (PAH) will typically already have satisfied this requirement before PMA application is made.
PMA applications based upon licensure or STC do not require ACO approval (since the data has already been approved) and can go straight to the MIDO.
History
Under the Civil Air Regulations (CARs), the government had the authority to approve aircraft parts in a predecessor to the PMA rules. This authority was found in each of the sets of airworthiness standards published in the Civil Air Regulations. CAR 3.31, for example, permitted the Administrator to approve aircraft parts as early as 1947.
In 1952, the Civil Aeronautics Board adjusted the location of the parts production authority from the ".31" regulations to the ".18" regulations. For example, the CAR 3 authority for modification and replacement parts could be found in section 3.18 after 1952.
In 1955, the Civil Aeronautics Board separated the parts authority out of the airworthiness standards, and placed it in a more general location so that one standard would apply to replacement and modification parts for all different forms of aircraft.
In 1965 CAR 1.55 became Federal Aviation Regulation section 21.303.
The 1965 regulatory change also imposed specific obligations on the PMA holder related to the Fabrication Inspection System.
Amendment 21-38 of Part 21 was published May 26, 1972. This was the next rule change to affect PMAs. This rule eliminated the incorporation by reference of type certification requirements in favor of PMA-specific data submission requirements. This change established the separate process and separate requirements for data that must be submitted by an applicant for a PMA (prior to this there was no explicit distinction between the application data requirements for type certificated products and the data requirements for PMAed articles).
The aircraft parts aftermarket expanded greatly in the 1980s as airlines sought to reduce the costs of spares by finding alternative sources of parts. During this time period, though, many manufacturers failed to obtain PMA approvals from the FAA.
In the 1990s, the FAA engaged in an "Enhanced Enforcement" program that educated the industry about the importance of approval and as a consequence a huge number of parts were approved through formal FAA mechanisms. Under this program, companies that had previously manufactured aircraft parts without PMAs could apply for PMAs in order to bring their manufacturing operations into full compliance with the regulations. This movement brought an explosion of PMA parts to the marketplace.
2009 rule change
The FAA published a significant revision to the U.S. manufacturing regulations on October 16, 2009. This new rule eliminates some of the legal distinctions between forms of production approval issued by the FAA, which should have the effect of further demonstrating the FAA's support of the quality systems implemented by PMA manufacturers. Specifically, instead of having a separate body of regulations for a PMA Fabrication Inspection System (FIS), as was the case in prior regulations, the PMA regulations now include a cross reference to the 14 C.F.R. § 21.137, which is the regulation defining the elements of a quality system for all production approval holders. In practice, all production approval holders were held to the same production quality standards before the rule change – this will now be more obvious in the FAA's regulations. Accomplishing this harmonization of standards was an important goal of the Modification and Replacement Parts Association (MARPA).
The new rule became effective April 16, 2011. The FAA's FAQ on Part 21 stated that PMA quality systems would be evaluated for compliance by the FAA during certificate management activity after the compliance date of the rule. Today, all FAA production approvals – whether for complete aircraft or for piece parts – rely on a common set of quality assurance system elements. E.g. 14 C.F.R. §§ 21.137 (quality system requirements for production certificates), 21.307 (requiring PMA holders to establish a quality system that meets the requirements of § 21.137), 21.607 (requiring TSOA holders to establish a quality system that meets the requirements of § 21.137).
Relationship to repair
The FAA is also working on new policies concerning parts fabricated in the course of repair. This practice has historically been confused with PMA manufacturing, although the two are actually quite different practices supported by different FAA regulations. Today, FAA Advisory Circular 43.18 provides guidance for the fabrication of parts to be consumed purely during a maintenance operation, and additional guidance is expected to be released in the near future. One of the key features of FAC 43.18 is that it recommends implementation of a quality assurance system quite similar to the fabrication inspection systems that PMA manufacturers are required to have.
Industry association
The trade association representing the PMA industry is the Modification and Replacement Parts Association (MARPA). MARPA works closely with the FAA and other agencies to promote PMA safety.
Developments outside the United States
The United States has Bilateral Aviation Safety Agreements (BASA) with most of its major trading partners, and the standard language of these BASAs requires the trading partner to treat FAA-PMA as an importable aircraft part that is airworthy and eligible for installation on aircraft registered in the importing jurisdiction. This process has been facilitated by the International Air Transport Association (IATA) which has published a book on accepting PMA parts.
Although the PMA industry began in the United States, several countries have begun promoting production of approved aircraft parts within their own borders. These jurisdictions include:
Australia
China
The European Union (which produces them as "EPA Parts")
Other jurisdictions have established PMA regulations and are working with trading partners to achieve acceptance of their PMA industries, and thus should be expected to enter the PMA marketplace in the near future. For example, Japan has PMA regulations and has secured a bilateral agreement with the United States that authorizes the export of these parts to the United States as airworthy aircraft parts.
References
Title 14 § 21.303
External links
MARPA
YouTube Video: "What is PMA?"
Aerospace engineering | Parts Manufacturer Approval | [
"Engineering"
] | 1,933 | [
"Aerospace engineering"
] |
5,817,455 | https://en.wikipedia.org/wiki/Ant%20mimicry | Ant mimicry or myrmecomorphy is mimicry of ants by other organisms; it has evolved over 70 times. Ants are abundant all over the world, and potential predators that rely on vision to identify their prey, such as birds and wasps, normally avoid them, because they are either unpalatable or aggressive. Some arthropods mimic ants to escape predation (Batesian mimicry), while some predators of ants, especially spiders, mimic them anatomically and behaviourally in aggressive mimicry. Ant mimicry has existed almost as long as ants themselves; the earliest ant mimics in the fossil record appear in the mid-Cretaceous alongside the earliest ants.
In myrmecophily, mimic and model live commensally together; in the case of ants, the mimic is an inquiline in the ants' nest. Such mimics may in addition be Batesian or aggressive mimics. To overcome ants' powerful defences, mimics may imitate ants chemically with ant-like pheromones, visually, or by imitating an ant's surface microstructure to defeat the ants' tactile inspections.
Types
Batesian mimicry
Batesian mimics lack strong defences of their own, and make use of their resemblance to a well-defended model, in this case ants, to avoid being attacked by their predators. A special case is where the predator is itself an ant, so that only two species are involved. The mimicry can be extremely close: for instance, Dipteran flies in the genus Syringogaster "strikingly" resemble Pseudomyrmex and are hard even for experts to distinguish "until they take flight". Insects that do not share the narrow-waisted body plan of ants are sometimes elaborately camouflaged to improve their resemblance. For example, the thick waist of the Mirid ant bug Myrmecoris gracilis has white markings at the front of its abdomen and the back of its thorax, making it look ant-waisted.
Over 300 spider species mimic the social behaviours, morphological features and predatory behaviour of ants. Many genera of jumping spiders (Salticidae) mimic ants. Jumping spiders in the genus Myrmarachne are Batesian mimics which resemble the morphological and behavioural properties of ants to near perfection. These spiders mimic the behavioural features of ants such as adopting their zig-zag locomotion pattern. Further, they create an antennal illusion by waving their first or second pair of legs in the air. The slender bodies of these spiders make them more agile, allowing them to easily escape from predators. Studies on this genus have revealed that the major selection force is the avoidance of ants by predators such as spider wasps and other larger jumping spiders. Ant mimicry has a cost, given the body plan of spiders: the body of spider myrmecomorphs is much narrower than non-mimics, reducing the number of eggs per eggsac, compared to non-mimetic spiders of similar size. They seem to compensate by laying more eggsacs over their lifetimes. A study of three species of mantises suggested that they innately avoided ants as prey, and that this aversion extends to ant-mimicking jumping spiders.
Batesian mimicry of ants appears to have evolved even in certain plants, as a visual anti-herbivory strategy. Passiflora flowers of at least 22 species, such as P. incarnata, have dark dots and stripes on their flowers for this purpose.
Myrmecophily
Some arthropods are myrmecophiles, mimicking ants by non-visual means, including touch, behaviour, and pheromones. Many groups of myrmecophiles have convergently evolved similar features. They are not necessarily visual mimics of ants. The mimicry allows them to live unharmed within ant nests, some beetles even marching with the aggressive Eciton burchellii army ants. The Jesuit priest Erich Wasmann, who discovered ant mimicry, listed 1,177 myrmecophiles in 1894; many more such species have been discovered since then.
The cricket Myrmecophilus acervorum was one of the earliest myrmecophiles to be studied; its relationship with ants was first described by the Italian naturalist Paolo Savi in 1819. It has many ant species as hosts, and occurs in large and small morphs suited to large hosts like Formica and Myrmica, and the small workers of species such as Lasius. On first arriving in an ants' nest, the crickets are attacked by the workers, and are killed if they do not run fast enough. Within a few days, however, they adjust their movements to match those of their hosts, and are then tolerated. Mimicry appears to be achieved by a combination of social releasers (signals), whether by imitating the ants' solicitation (begging) signals with suitable behaviour or ant pheromones with suitable chemicals; Hölldobler and Wilson propose that Wasmannian mimicry, where the mimic lives alongside the model, be redefined to permit any such combination, making it essentially a synonym for myrmecophily.
Mites are among the most speciose mimics of ants, and can occur in large numbers in an ant colony. A single colony of Eciton burchellii army ants may contain some 20,000 inquiline mites. The phoretic mite Planodiscus (Uropodidae) attaches itself to the tibia of its host ant, Eciton hamatum. The cuticular sculpturing of the mite's body as seen under the electron microscope strongly resembles the sculpturing of the ant's leg, as do the arrangements and number of the bristles (setae). Presumably, the effect is that when the ant grooms its leg, the tactile sensation is as it would be in mite-free grooming.
The snail Allopeas myrmekophilos lives in colonies of the army ant Leptogenys distinguenda. The snails live in bivouacs of the ants except when the colony migrates, during which the ants carry along the snails. A. myrmekophilos feeds on the meat of animals killed by the ants.
Lycaenid butterflies
Some 75% of lycaenid butterfly species are myrmecophiles, their larvae and pupae living as social parasites in ant nests. These lycaenids mimic the brood pheromone and the alarm call of ants so they can integrate themselves into the nest. In Aloeides dentatis the tubercles release the mimicking pheromone which deceives its host, the ant Acantholepis caprensis, into caring for the mimics as they would their own brood. In these relationships, worker ants give the same preference to the lycaenids as they do to their own brood, demonstrating that chemical signals produced by the mimic are indistinguishable to the ant. Larvae of the mountain Alcon blue, Phengaris rebeli, similarly mimic Myrmica ants and feed on their brood.
Parasitoid wasps
The parasitoid wasp Gelis agilis (Ichneumonidae) shares many similarities with the ant Lasius niger. G. agilis is a wingless wasp which exhibits multi-trait mimicry of garden ants, imitating the ant's morphology, behaviour, and surface chemicals that serve as pheromones, cuticular hydrocarbons. When threatened it releases a toxic chemical similar to the ant's alarm pheromone. This multi-trait mimicry serves to protect G. agilis both from ants and (in Batesian mimicry) from ground predators such as wolf spiders.
Aggressive mimicry
Aggressive mimics are predators which resemble ants sufficiently to be able to approach their prey successfully. Some spiders, such as the Zodariidae and those in the genus Myrmarachne, use their disguise to hunt ants. These ant hunters often do not visually resemble ants very closely. Among the many spiders which are aggressive mimics of ants, Aphantochilus rogersi mimics its sole prey, Cephalotini ants. Like many other ant-mimicking spiders, it is also a Batesian mimic, gaining protection from predators such as spider-hunting wasps.
Special protection for young insects
Multiple groups of insects have evolved ant mimicry for their young, while their adults are protected in different ways, either being camouflaged or have conspicuous warning coloration.
The young instars of some mantids, such as Odontomantis pulchra and Tarachodes afzelii are Batesian mimics of ants. Bigger instars and adults of these mantids are not ant mimics, but are well-camouflaged predators, and in the case of Tarachodes, that eat ants.
Young instars of some bush crickets in the genus Macroxiphus, have an "uncanny resemblance" to ants, extending to their black coloration, remarkably perfect antlike shape, and convincingly antlike behaviour. Their long antennae are camouflaged to appear short, being black only at the base, and they are vibrated like ant antennae. Larger instars suddenly change into typical-looking katydids, and are entirely nocturnal, while the adult has bright warning coloration.
The phasmid Extatosoma tiaratum, resembling dried thorny leaves as an adult, hatches from the egg as a replica of a Leptomyrmex ant, with a red head and black body. The long end is curled to make the body shape appear ant-like, and the movement is erratic, while the adults move differently, if at all. In some species the eggs resemble ant-dispersed (myrmecochoric) plant seeds, complete with a mimic oil body (a "capitulum"). These eggs are collected by the ants, deceived in a different way, and taken to their nests. The capitulum is removed and eaten, leaving the eggs viable.
Taxonomic range
Ant mimicry has a wide taxonomic range, including some 2000 species of terrestrial arthropods in more than 200 genera. It has evolved over 70 times, including some 15 clades of spiders, 10 clades of plant-sucking bugs, and 7 clades of staphylinid rove beetles. Outside the arthropods, ant mimics include snails, snakes, and flowering plants.
References
External links
Pictures of Coleosoma acutiventer
Pictures of ant spiders
Myrmecology
Mimicry
Spiders
Mimicry | Ant mimicry | [
"Biology"
] | 2,216 | [
"Mimicry",
"Biological defense mechanisms"
] |
5,817,831 | https://en.wikipedia.org/wiki/Plastarch%20material | Plastarch Material (PSM) is a biodegradable, thermoplastic resin. It is composed of starch combined with several other biodegradable materials. The starch is modified in order to obtain heat-resistant properties, making PSM one of few bioplastics capable of withstanding high temperatures. PSM began to be commercially available in 2005.
PSM is stable in the atmosphere, but biodegradable in compost, wet soil, fresh water, seawater, and activated sludge where microorganisms exist. It has a softening temperature of 257 °F (125 °C) and a melting temperature of 313 °F (156 °C).
It is also hygroscopic. The material has to be dried in a material dryer at 150 °F (66 °C) for five hours or 180 °F (82 °C) for three hours. For injection molding and extrusion the barrel temperatures should be at 340° +/- 10 °F (171 °C) with the nozzle/die at 360 °F (182 °C).
Due to how similar PSM is to other plastics (such as polypropylene and CPET), PSM can run on many existing thermoforming and injection molding lines. PSM is currently used for a wide variety of applications in the plastic market, such as food packaging and utensils, personal care items, plastic bags, temporary construction tubing, industrial foam packaging, industrial and agricultural film, window insulation, construction stakes, and horticulture planters.
Since PSM is derived from a renewable resource (corn starch), it has become an attractive alternative to petrochemical-derived products. Unlike plastic, PSM can also be disposed of through incineration, resulting in non-toxic smoke and a white residue which can be used as fertilizer. However, concerns have been expressed about the impact of such technologies on food prices.
Biodegradability concerns
Some PSM products - such as cutlery - contain a mix of PSM and plastics. These plastics prevent the PSM from degrading, making the entire product non-biodegradable.
References
Polymer chemistry
Thermoplastics
Transparent materials | Plastarch material | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 457 | [
"Physical phenomena",
"Materials science",
"Optical phenomena",
"Materials",
"Transparent materials",
"Polymer chemistry",
"Matter"
] |
7,592,567 | https://en.wikipedia.org/wiki/Introduction%20to%20entropy | In thermodynamics, entropy is a numerical quantity that shows that many physical processes can go in only one direction in time. For example, cream and coffee can be mixed together, but cannot be "unmixed"; a piece of wood can be burned, but cannot be "unburned". The word 'entropy' has entered popular usage to refer to a lack of order or predictability, or of a gradual decline into disorder. A more physical interpretation of thermodynamic entropy refers to spread of energy or matter, or to extent and diversity of microscopic motion.
If a movie that shows coffee being mixed or wood being burned is played in reverse, it would depict processes highly improbable in reality. Mixing coffee and burning wood are "irreversible". Irreversibility is described by a law of nature known as the second law of thermodynamics, which states that in an isolated system (a system not connected to any other system) which is undergoing change, entropy increases over time.
Entropy does not increase indefinitely. A body of matter and radiation eventually will reach an unchanging state, with no detectable flows, and is then said to be in a state of thermodynamic equilibrium. Thermodynamic entropy has a definite value for such a body and is at its maximum value. When bodies of matter or radiation, initially in their own states of internal thermodynamic equilibrium, are brought together so as to intimately interact and reach a new joint equilibrium, then their total entropy increases. For example, a glass of warm water with an ice cube in it will have a lower entropy than that same system some time later when the ice has melted leaving a glass of cool water. Such processes are irreversible: A glass of cool water will not spontaneously turn into a glass of warm water with an ice cube in it. Some processes in nature are almost reversible. For example, the orbiting of the planets around the Sun may be thought of as practically reversible: A movie of the planets orbiting the Sun which is run in reverse would not appear to be impossible.
While the second law, and thermodynamics in general, accurately predicts the intimate interactions of complex physical systems, scientists are not content with simply knowing how a system behaves, they also want to know why it behaves the way it does. The question of why entropy increases until equilibrium is reached was answered in 1877 by physicist Ludwig Boltzmann. The theory developed by Boltzmann and others, is known as statistical mechanics. Statistical mechanics explains thermodynamics in terms of the statistical behavior of the atoms and molecules which make up the system. The theory not only explains thermodynamics, but also a host of other phenomena which are outside the scope of thermodynamics.
Explanation
Thermodynamic entropy
The concept of thermodynamic entropy arises from the second law of thermodynamics. This law of entropy increase quantifies the reduction in the capacity of an isolated compound thermodynamic system to do thermodynamic work on its surroundings, or indicates whether a thermodynamic process may occur. For example, whenever there is a suitable pathway, heat spontaneously flows from a hotter body to a colder one.
Thermodynamic entropy is measured as a change in entropy (ΔS) to a system containing a sub-system which undergoes heat transfer to its surroundings (inside the system of interest). It is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs summed over the boundary of that sub-system.
Following the formalism of Clausius, the basic calculation can be mathematically stated as: ΔS = δq / T,
where ΔS is the increase or decrease in entropy, δq is the heat added to the system or subtracted from it, and T is temperature. The 'equals' sign and the symbol δ imply that the heat transfer should be so small and slow that it scarcely changes the temperature T.
If the temperature is allowed to vary, the equation must be integrated over the temperature path. This calculation of entropy change does not allow the determination of absolute value, only differences. In this context, the Second Law of Thermodynamics may be stated that for heat transferred over any valid process for any system, whether isolated or not, ΔS ≥ ∫ δq / T.
According to the first law of thermodynamics, which deals with the conservation of energy, the loss of heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of decrease in internal energy and the corresponding increase in internal energy of the surroundings at a given temperature. In many cases, a visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. When applicable, entropy increase is the quantitative measure of that kind of a spontaneous process: how much energy has been effectively lost or become unavailable, by dispersing itself, or spreading itself out, as assessed at a specific temperature. For this assessment, when the temperature is higher, the amount of energy dispersed is assessed as 'costing' proportionately less. This is because a hotter body is generally more able to do thermodynamic work, other factors, such as internal energy, being equal. This is why a steam engine has a hot firebox.
The second law of thermodynamics deals only with changes of entropy (). The absolute entropy (S) of a system may be determined using the third law of thermodynamics, which specifies that the entropy of all perfectly crystalline substances is zero at the absolute zero of temperature. The entropy at another temperature is then equal to the increase in entropy on heating the system reversibly from absolute zero to the temperature of interest.
Statistical mechanics and information entropy
Thermodynamic entropy bears a close relationship to the concept of information entropy (H). Information entropy is a measure of the "spread" of a probability density or probability mass function. Thermodynamics makes no assumptions about the atomistic nature of matter, but when matter is viewed in this way, as a collection of particles constantly moving and exchanging energy with each other, and which may be described in a probabilistic manner, information theory may be successfully applied to explain the results of thermodynamics. The resulting theory is known as statistical mechanics.
An important concept in statistical mechanics is the idea of the microstate and the macrostate of a system. If we have a container of gas, for example, and we know the position and velocity of every molecule in that system, then we know the microstate of that system. If we only know the thermodynamic description of that system, the pressure, volume, temperature, and/or the entropy, then we know the macrostate of that system. Boltzmann realized that there are many different microstates that can yield the same macrostate, and, because the particles are colliding with each other and changing their velocities and positions, the microstate of the gas is always changing. But if the gas is in equilibrium, there seems to be no change in its macroscopic behavior: No changes in pressure, temperature, etc. Statistical mechanics relates the thermodynamic entropy of a macrostate to the number of microstates that could yield that macrostate. In statistical mechanics, the entropy of the system is given by Ludwig Boltzmann's equation:
S = kB ln W, where S is the thermodynamic entropy, W is the number of microstates that may yield the macrostate, and kB is the Boltzmann constant. The natural logarithm of the number of microstates (ln W) is known as the information entropy of the system. This can be illustrated by a simple example:
If you flip two coins, you can have four different results. If H is heads and T is tails, we can have (H,H), (H,T), (T,H), and (T,T). We can call each of these a "microstate" for which we know exactly the results of the process. But what if we have less information? Suppose we only know the total number of heads. This can be either 0, 1, or 2. We can call these "macrostates". Only microstate (T,T) will give macrostate zero, (H,T) and (T,H) will give macrostate 1, and only (H,H) will give macrostate 2. So we can say that the information entropy of macrostates 0 and 2 is ln(1), which is zero, but the information entropy of macrostate 1 is ln(2), which is about 0.69. Of all the microstates, macrostate 1 accounts for half of them.
It turns out that if you flip a large number of coins, the macrostates at or near half heads and half tails accounts for almost all of the microstates. In other words, for a million coins, you can be fairly sure that about half will be heads and half tails. The macrostates around a 50–50 ratio of heads to tails will be the "equilibrium" macrostate. A real physical system in equilibrium has a huge number of possible microstates and almost all of them are the equilibrium macrostate, and that is the macrostate you will almost certainly see if you wait long enough. In the coin example, if you start out with a very unlikely macrostate (like all heads, for example with zero entropy) and begin flipping one coin at a time, the entropy of the macrostate will start increasing, just as thermodynamic entropy does, and after a while, the coins will most likely be at or near that 50–50 macrostate, which has the greatest information entropy – the equilibrium entropy.
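A short Python sketch of this counting argument (the coin counts chosen below are illustrative):

```python
import math

def macrostate_table(n_coins):
    """For n fair coins, the macrostate is the number of heads.  Return, for
    each macrostate, the number of microstates W and its entropy ln(W)."""
    return {heads: (math.comb(n_coins, heads), math.log(math.comb(n_coins, heads)))
            for heads in range(n_coins + 1)}

# Two coins reproduce the example above: ln(1) = 0 for 0 or 2 heads, ln(2) ~ 0.69 for 1 head.
print(macrostate_table(2))

# For many coins, macrostates near half heads account for almost all microstates
# and carry the largest entropy ln(W) -- the 'equilibrium' macrostate.
n = 10_000
print(math.log(math.comb(n, n // 2)))    # entropy of the 50-50 macrostate
```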
The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates.
The concept of information entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used. When it is applied to the problem of a large number of interacting particles, along with some other constraints, like the conservation of energy, and the assumption that all microstates are equally likely, the resultant theory of statistical mechanics is extremely successful in explaining the laws of thermodynamics.
Example of increasing entropy
Ice melting provides an example in which entropy increases in a small 'universe', a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice and water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat (δQ) from the warmer surroundings at 298 K (25 °C; 77 °F) transfers to the cooler system of ice and water at its constant temperature (T) of 273 K (0 °C; 32 °F), the melting temperature of ice. The entropy of the system, which is δQ/T, increases by δQ/273 K. The heat δQ for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. ΔH for ice fusion.
The entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K and therefore the ratio (entropy change) of δQ/298 K for the surroundings is smaller than the ratio (entropy change) of δQ/273 K for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than the initial entropy.
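As a brief numerical check of this example (an added sketch, not part of the original article, using only the figures quoted here: 6008 J per mole of melting ice, 273 K for the ice and 298 K for the room), the entropy bookkeeping can be written in a few lines of Python:

q = 6008.0                    # heat absorbed by one mole of melting ice, in joules
dS_ice_water = q / 273.0      # entropy gained by the ice and water system, ~+22.0 J/K
dS_room = -q / 298.0          # entropy lost by the warmer room, ~-20.2 J/K
dS_net = dS_ice_water + dS_room
print(dS_ice_water, dS_room, dS_net)   # net change is positive, ~+1.8 J/K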
As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the δQ/T over the continuous range, "at many increments", in the initially cool to finally warm water can be found by calculus. The entire miniature 'universe', i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that 'universe' than when the glass of ice and water was introduced and became a 'system' within it.
Origins and uses
Originally, entropy was named to describe the "waste heat", or more accurately, energy loss, from heat engines and other mechanical devices which could never run with 100% efficiency in converting energy into work. Later, the term came to acquire several additional descriptions, as more was understood about the behavior of molecules on the microscopic level. In the late 19th century, the word "disorder" was used by Ludwig Boltzmann in developing statistical views of entropy using probability theory to describe the increased molecular movement on the microscopic level. That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed. Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and statistical mechanics.
For most of the 20th century, textbooks tended to describe entropy as "disorder", following Boltzmann's early conceptualisation of the "motional" (i.e. kinetic) energy of molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy dispersal. Entropy can also involve the dispersal of particles, which are themselves energetic. Thus there are instances where both particles and energy disperse at different rates when substances are mixed together.
The mathematics developed in statistical thermodynamics was found to be applicable in other disciplines. In particular, the information sciences developed the concept of information entropy, which lacks the Boltzmann constant inherent in thermodynamic entropy.
Classical calculation of entropy
When the word 'entropy' was first defined and used in 1865, the very existence of atoms was still controversial, though it had long been speculated that temperature was due to the motion of microscopic constituents and that "heat" was the transferring of that motion from one place to another. Entropy change, ΔS, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, ΔS = q_rev/T, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening:
ΔS is the change in entropy of a system (some physical substance of interest) after some motional energy ("heat") has been transferred to it by fast-moving molecules. So, ΔS = S_final − S_initial.
Then, ΔS = q_rev/T, the quotient of the motional energy ("heat") q that is transferred "reversibly" (rev) to the system from the surroundings (or from another system in contact with the first system) divided by T, the absolute temperature at which the transfer occurs.
"Reversible" or "reversibly" (rev) simply means that T, the temperature of the system, has to stay (almost) exactly the same while any energy is being transferred to or from it. That is easy in the case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds between the molecules before it can change to a liquid or a gas. For example, in the melting of ice at 273.15 K, no matter what temperature the surroundings are – from 273.20 K to 500 K or even higher, the temperature of the ice will stay at 273.15 K until the last molecules in the ice are changed to liquid water, i.e., until all the hydrogen bonds between the water molecules in ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 joules at 273 K. Therefore, the entropy change per mole is , or 22 J/K.
When the temperature is not at the melting or boiling point of a substance no intermolecular bond-breaking is possible, and so any motional molecular energy ("heat") from the surroundings transferred to a system raises its temperature, making its molecules move faster and faster. As the temperature is constantly rising, there is no longer a particular value of "T" at which energy is transferred. However, a "reversible" energy transfer can be measured at a very small temperature increase, and a cumulative total can be found by adding each of many small temperature intervals or increments. For example, to find the entropy change from 300 K to 310 K, measure the amount of energy transferred at dozens or hundreds of temperature increments, say from 300.00 K to 300.01 K and then 300.01 to 300.02 and so on, dividing the q by each T, and finally adding them all.
Calculus can be used to make this calculation easier if the effect of energy input to the system is linearly dependent on the temperature change, as in simple heating of a system at moderate to relatively high temperatures. Thus, the energy being transferred "per incremental change in temperature" (the heat capacity, Cp), multiplied by the integral of dT/T from T_initial to T_final, gives the entropy change directly: ΔS = Cp ln(T_final/T_initial).
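As an illustrative sketch (not from the original article), the closed form above can be compared with the increment-by-increment sum described earlier; the heat capacity value, roughly that of liquid water, and the temperature range are assumptions chosen only for this example.

import math

Cp = 75.3            # assumed constant heat capacity, J/(mol*K), roughly liquid water
T1, T2 = 300.0, 310.0

# Closed form from integrating Cp dT / T between T1 and T2:
dS_exact = Cp * math.log(T2 / T1)

# The same result approximated by summing q/T over many small temperature increments:
steps = 100000
dT = (T2 - T1) / steps
dS_sum = sum(Cp * dT / (T1 + (i + 0.5) * dT) for i in range(steps))

print(dS_exact, dS_sum)   # both ~2.47 J/(mol*K)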
Alternate explanations of entropy
Thermodynamic entropy
A measure of energy unavailable for work: This is an often-repeated phrase which, although it is true, requires considerable clarification to be understood. It is only true for cyclic reversible processes, and is in this sense misleading. By "work" is meant moving an object, for example, lifting a weight, or bringing a flywheel up to speed, or carrying a load up a hill. To convert heat into work, using a coal-burning steam engine, for example, one must have two systems at different temperatures, and the amount of work you can extract depends on how large the temperature difference is, and how large the systems are. If one of the systems is at room temperature, and the other system is much larger, and near absolute zero temperature, then almost ALL of the energy of the room temperature system can be converted to work. If they are both at the same room temperature, then NONE of the energy of the room temperature system can be converted to work. Entropy is then a measure of how much energy cannot be converted to work, given these conditions. More precisely, for an isolated system comprising two closed systems at different temperatures, in the process of reaching equilibrium the amount of entropy lost by the hot system, multiplied by the temperature of the hot system, is the amount of energy that cannot be converted to work.
An indicator of irreversibility: fitting closely with the 'unavailability of energy' interpretation is the 'irreversibility' interpretation. Spontaneous thermodynamic processes are irreversible, in the sense that they do not spontaneously undo themselves. Thermodynamic processes artificially imposed by agents in the surroundings of a body also have irreversible effects on the body. For example, when James Prescott Joule used a device that delivered a measured amount of mechanical work from the surroundings through a paddle that stirred a body of water, the energy transferred was received by the water as heat. There was scarce expansion of the water doing thermodynamic work back on the surroundings. The body of water showed no sign of returning the energy by stirring the paddle in reverse. The work transfer appeared as heat, and was not recoverable without a suitably cold reservoir in the surroundings. Entropy gives a precise account of such irreversibility.
Dispersal: Edward A. Guggenheim proposed an ordinary language interpretation of entropy that may be rendered as "dispersal of modes of microscopic motion throughout their accessible range". Later, along with a criticism of the idea of entropy as 'disorder', the dispersal interpretation was advocated by Frank L. Lambert, and is used in some student textbooks.
The interpretation properly refers to dispersal in abstract microstate spaces, but it may be loosely visualised in some simple examples of spatial spread of matter or energy. If a partition is removed from between two different gases, the molecules of each gas spontaneously disperse as widely as possible into their respectively newly accessible volumes; this may be thought of as mixing. If a partition, that blocks heat transfer between two bodies of different temperatures, is removed so that heat can pass between the bodies, then energy spontaneously disperses or spreads as heat from the hotter to the colder.
Beyond such loose visualizations, in a general thermodynamic process, considered microscopically, spontaneous dispersal occurs in abstract microscopic phase space. According to Newton's and other laws of motion, phase space provides a systematic scheme for the description of the diversity of microscopic motion that occurs in bodies of matter and radiation. The second law of thermodynamics may be regarded as quantitatively accounting for the intimate interactions, dispersal, or mingling of such microscopic motions. In other words, entropy may be regarded as measuring the extent of diversity of motions of microscopic constituents of bodies of matter and radiation in their own states of internal thermodynamic equilibrium.
Information entropy and statistical mechanics
As a measure of disorder: Traditionally, 20th century textbooks have introduced entropy as order and disorder so that it provides "a measurement of the disorder or randomness of a system". It has been argued that ambiguities in, and arbitrary interpretations of, the terms used (such as "disorder" and "chaos") contribute to widespread confusion and can hinder comprehension of entropy for most students. On the other hand, in a convenient though arbitrary interpretation, "disorder" may be sharply defined as the Shannon entropy of the probability distribution of microstates given a particular macrostate, in which case the connection of "disorder" to thermodynamic entropy is straightforward, but arbitrary and not immediately obvious to anyone unfamiliar with information theory.
Missing information: The idea that information entropy is a measure of how much one does not know about a system is quite useful.
If, instead of using the natural logarithm to define information entropy, we instead use the base 2 logarithm, then the information entropy is roughly equal to the average number of (carefully chosen) yes/no questions that would have to be asked to get complete information about the system under study. In the introductory example of two flipped coins, for the macrostate which contains one head and one tail, one would only need one question to determine its exact state (e.g. "is the first one heads?"), and instead of expressing the entropy as ln(2) one could say, equivalently, that it is log2(2), which equals the number of binary questions we would need to ask: one. When measuring entropy using the natural logarithm (ln), the unit of information entropy is called a "nat", but when it is measured using the base-2 logarithm, the unit of information entropy is called a "shannon" (alternatively, "bit"). This is just a difference in units, much like the difference between inches and centimeters (1 nat = log2(e) ≈ 1.44 shannons). Thermodynamic entropy is equal to the Boltzmann constant times the information entropy expressed in nats. The information entropy expressed with the unit shannon (Sh) is equal to the number of yes–no questions that need to be answered in order to determine the microstate from the macrostate.
The concepts of "disorder" and "spreading" can be analyzed with this information entropy concept in mind. For example, if we take a new deck of cards out of the box, it is arranged in "perfect order" (spades, hearts, diamonds, clubs, each suit beginning with the ace and ending with the king), we may say that we then have an "ordered" deck with an information entropy of zero. If we thoroughly shuffle the deck, the information entropy will be about 225.6 shannons: We will need to ask about 225.6 questions, on average, to determine the exact order of the shuffled deck. We can also say that the shuffled deck has become completely "disordered" or that the ordered cards have been "spread" throughout the deck. But information entropy does not say that the deck needs to be ordered in any particular way. If we take our shuffled deck and write down the names of the cards, in order, then the information entropy becomes zero. If we again shuffle the deck, the information entropy would again be about 225.6 shannons, even if by some miracle it reshuffled to the same order as when it came out of the box, because even if it did, we would not know that. So the concept of "disorder" is useful if, by order, we mean maximal knowledge and by disorder we mean maximal lack of knowledge. The "spreading" concept is useful because it gives a feeling to what happens to the cards when they are shuffled. The probability of a card being in a particular place in an ordered deck is either 0 or 1, in a shuffled deck it is 1/52. The probability has "spread out" over the entire deck. Analogously, in a physical system, entropy is generally associated with a "spreading out" of mass or energy.
The connection between thermodynamic entropy and information entropy is given by Boltzmann's equation, which says that S = kB ln W. If we take the base-2 logarithm of W, it will yield the average number of questions we must ask about the microstate of the physical system in order to determine its macrostate.
See also
Entropy (classical thermodynamics)
Entropy (energy dispersal)
Second law of thermodynamics
Statistical mechanics
Thermodynamics
List of textbooks on thermodynamics and statistical mechanics
References
Further reading
Chapters 4–12 touch on entropy.
Thermodynamic entropy | Introduction to entropy | [
"Physics"
] | 5,407 | [
"Statistical mechanics",
"Entropy",
"Physical quantities",
"Thermodynamic entropy"
] |
7,593,055 | https://en.wikipedia.org/wiki/Optical%20sine%20theorem | In optics, the optical sine theorem states that the products of the index, height, and sine of the slope angle of a ray in object space and its corresponding ray in image space are equal. That is: n y sin α = n′ y′ sin α′, where the primed quantities refer to image space.
External links
http://physics.tamuk.edu/~suson/html/4323/aberatn.html#Optical%20Sine
Sine theorem
Physics theorems | Optical sine theorem | [
"Physics"
] | 85 | [
"Equations of physics",
"Physics theorems"
] |
1,101,579 | https://en.wikipedia.org/wiki/Dynamic%20Monte%20Carlo%20method | In chemistry, dynamic Monte Carlo (DMC) is a Monte Carlo method for modeling the dynamic behaviors of molecules by comparing the rates of individual steps with random numbers. It is essentially the same as Kinetic Monte Carlo. Unlike the Metropolis Monte Carlo method, which has been employed to study systems at equilibrium, the DMC method is used to investigate non-equilibrium systems such as a reaction, diffusion, and so-forth (Meng and Weinberg 1994). This method is mainly applied to analyze adsorbates' behavior on surfaces.
There are several well-known methods for performing DMC simulations, including the First Reaction Method (FRM) and the Random Selection Method (RSM). Although the FRM and RSM give the same results from a given model, the computational resources required differ depending on the system to which they are applied.
In the FRM, the reaction whose time is minimum on the event list is advanced. In the event list, the tentative times for all possible reactions are stored. After the selection of one event, the system time is advanced to the reaction time, and the event list is recalculated. This method is efficient in computation time because a reaction occurs at every event. On the other hand, it consumes a lot of computer memory because of the event list. Therefore, it is difficult to apply to large-scale systems.
The RSM decides whether the reaction of the selected molecule proceeds or not by comparing the transition probability with a random number. In this method, a reaction does not necessarily proceed at every event, so it needs significantly more computation time than the FRM. However, this method saves computer memory because it does not use an event list, so large-scale systems can be simulated with it.
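As an illustrative sketch only (not from the original article), a toy random-selection step for adsorbates hopping on a one-dimensional lattice might look like the following Python; the lattice size, occupancies and hop probability are all assumptions made for the example.

import random

def rsm_step(lattice, hop_probability):
    # Random Selection Method: pick one site at random and accept or reject a
    # hop attempt by comparing its transition probability with a random number.
    i = random.randrange(len(lattice))
    if lattice[i] == 1 and random.random() < hop_probability:
        j = (i + random.choice([-1, 1])) % len(lattice)   # a neighbouring site
        if lattice[j] == 0:
            lattice[i], lattice[j] = 0, 1                 # the adsorbate hops

# Toy example: 20 sites, 5 adsorbates, hop probability 0.3 per attempt.
lattice = [1] * 5 + [0] * 15
random.shuffle(lattice)
for _ in range(1000):
    rsm_step(lattice, 0.3)
print(lattice)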
See also
Hybrid Monte Carlo
References
(Meng and Weinberg 1994): B. Meng and W. H. Weinberg, J. Chem. Phys. 100, 5280 (1994)
(Meng and Weinberg 1996): B. Meng, W.H. Weinberg, Surface Science 364 (1996) 151-163.
Monte Carlo methods
Computational chemistry | Dynamic Monte Carlo method | [
"Physics",
"Chemistry"
] | 428 | [
"Theoretical chemistry stubs",
"Monte Carlo methods",
"Computational physics",
"Theoretical chemistry",
"Computational chemistry",
"Computational chemistry stubs",
"Physical chemistry stubs"
] |
1,101,984 | https://en.wikipedia.org/wiki/Soft%20systems%20methodology | Soft systems methodology (SSM) is an organised way of thinking applicable to problematic social situations and in the management of change by using action. It was developed in England by academics at the Lancaster Systems Department on the basis of a ten-year action research programme.
Overview
The Soft Systems Methodology was developed primarily by Peter Checkland, through 10 years of research with colleagues such as Brian Wilson. The method was derived from numerous earlier systems engineering processes, primarily because traditional 'hard' systems thinking was not able to account for larger organisational issues with many complex relationships. SSM is mainly used in the analysis of these complex situations, where there are divergent views about the definition of the problem.
These complex situations are known as "soft problems". They are usually real-world problems where the goals and purposes are themselves problematic. Examples of soft problems include: How to improve the delivery of health services? and How to manage homelessness among young people? Soft approaches take it as given that people's views of the world, and their preferences within it, change over time.
Depending on the circumstances of a situation, agreeing on the problem may be difficult, as there may be many factors to take into consideration, including the different kinds of methods used to tackle such problems. Peter Checkland moved away from the idea of 'obvious' problems and started working with situations, building conceptual models and using them as a source of questions about the problem; soft systems methodology thus emerged as an organised learning system.
Purposeful activity models are built according to declared worldviews, meaning they are never models of real-world action; rather, they are relevant to discourse and argument about real-world action, which led to them being called epistemological devices that could be used for discourse and debate. The distinction between the everyday world and systems thinking was to draw attention to the conscious use of systems language in developing intellectual devices which were used to structure debates or an exploration of the problem situation being addressed.
In its 'classic' form the methodology consists of seven steps, with initial appreciation of the problem situation leading to the modelling of several human activity systems that might be thought relevant to it. The relevant decision-makers are brought together to discuss and explore the definition of the problem; only then are they likely to arrive at a mutual agreement, settling arguments over exactly what kinds of changes would be systemically desirable and feasible in the situation at hand.
Later explanations of the ideas give a more sophisticated view of this systemic method and give more attention to locating the methodology with respect to its philosophical underpinnings. It is the earlier classical view which is most widely used in practice (created by Peter Checkland). A common criticism of this earlier methodology is that it follows an approach that is too linear. Checkland himself agreed that the earlier methodology is 'rather bald'. Most advanced SSM analysts will agree, though, that the classical view is an easy way for inexperienced analysts to learn the SSM methodology.
SSM has been successfully used as a business analysis methodology in various fields. Real-world examples of SSM's wide range of applicability include research applying SSM in the sugar industry leading to improvements in business partner relationships, successful use as an approach in project management by directly involving stakeholders or aiding in business management by improving communication between stakeholders. It has proven to be a useful analysis approach to teaching and learning processes, as it does not require a specific problem to be identified as its starting point – which has led to "outside of the box" suggestions for improvement. SSM was even used by the UK government as part of the revaluation of their Structured Systems Analysis and Design Method (SSADM) system development methodology.
Even professional researchers, who might be expected to question their own structures of thinking, show the same tendency to distort their perceptions of the world rather than change the mental structures by which we take our bearings. The failure of classic systems engineering in rich 'management' problem situations during the research programme led to an examination of the adequacy of systems thinking itself.
The methodology has been described in several books and many academic articles.
SSM remains the most widely used and practical application of systems thinking, and other systems approaches such as critical systems thinking have incorporated many of its ideas.
Representation evolution
SSM had a gradual development process of the methodology as a whole from 1972 to 1990. During this period of time, four different representations of SSM were designed, becoming more sophisticated and at the same time less structured and broader in scope.
Blocks and arrows (1972)
The first studies in the research programme were carried out in 1969, and the first account of what became SSM was published three years later in a paper titled "Towards a systems-based methodology for real-world problem solving" (Checkland 1972). In this paper, soft systems methodology is presented as a sequence of stages with iteration back to previous stages. The sequence was as follows: analysis, root definition of relevant systems, conceptualisation, comparison and definition of changes, selection of change to implement, design of change and implementation and appraisal.
The overall aim, to implement change instead of introducing or enhancing a system, implies that the thinking was still evolving as a result of these early experiences, even if the straight arrows in the diagrams and the rectangular blocks in some of the models can now be misleading.
Seven stages (1981)
Soft systems methodology (SSM) is utilised to analyse very complex organisational and systemic problems that do not have an obvious solution. The methodology incorporates seven steps to arrive at a viable solution for the problem defined. The seven steps are:
Enter the situation in which a problem situation has been identified
Address the issue at hand
Formulate root definitions of relevant systems of purposeful activity
Build conceptual models of the systems named in the root definitions: This methodology comes into play by raising concerns and capturing problems within an organisation and looking into ways in which they can be solved. Defining the root definition also describes the root purpose of a system.
The comparison stage: The systems thinker is to compare the perceived conceptual models against an intuitive perception of a real-world situation or scenario. Checkland defines this stage as the comparison of Stage 4 with Stage 2, formally, "Comparison of 4 with 2". Parts of the problem situation analysed in Stage 2 are to be examined alongside the conceptual model(s) created in Stage 4, this helps to achieve a "complete" comparison.
Identified problems should now be accompanied by feasible and desirable changes that will clearly help the problem situation in the given system. Human activity systems and other aspects of the system should be considered so that soft systems thinking, and Mumford's needs, can be addressed by the potential changes. These potential changes should not be acted on until step 7, but they should be feasible enough to act upon to improve the problem situation.
Take action to improve the problem situation
Two streams (1988)
The two-stream model of SSM recognizes the crucially important role of history in human affairs, and for a given group of people their history determines what will be noticed as significant and how it will be judged. This expression of SSM is presented as an approach embodying not only a logic-based stream of analysis (via activity models) but also a cultural and political stream which enable judgements to be made about the accommodations between conflicting interests which might be reachable by the people concerned and which would enable action to be taken.
This particular expression of SSM removes the dividing line between the world of the problem situation and the systems thinking world.
Four main activities (1990)
The four-activities model is iconic rather than descriptive and subsumes the cultural stream of analysis within the four activities. The seven-stage model gave an approach which applies to real-world situations, both large and small, in the public and private sectors. The four main activities were created as a way to capture the more flexible use of SSM and to include more of the cultural aspect of the workplace in the concept of SSM. The four activities are used to show that SSM does not have to be used rigidly; it is meant to reflect real life rather than constrain it. The four activities are:
Finding out about a problem situation, including culturally/politically
Formulating some relevant purposeful activity models: Creating and drawing specific diagrammatic illustrations of activity processes that occur in an organisation, which show the relevant processes that take place in a structured order and depict any problem situation visually by showing the flow from one action to another. An example of this would be a Soft Systems Methodology diagram such as a 'Conceptual Model', which is a representation of a system's human actions, or an 'Architecture System Map', which is a visual representation of the implementation of sections of a software system.
Debating the situation, using the models, seeking from that debate both:
changes which would improve the situation and are regarded as both desirable and (culturally) feasible, and
the accommodations between conflicting interests which will enable action
Taking action in the situation to bring about improvement
CATWOE
In 1975, David Smyth, a researcher in Checkland's department, observed that SSM was most successful when the root definition included certain elements. These elements, captured in the mnemonic CATWOE, identified the people, processes and environment that contribute to a situation, issue or problem that required analyzing.
This is used to prompt thinking about what the business is trying to achieve. In further detail, CATWOE helps explore a system by underlining the roots which involve turning inputs into outputs. CATWOE helps businesses as it analyses the gap between current and useful systems. Business perspectives help the business analyst to consider the impact of any proposed solution on the people involved. This mainly involves stakeholders, allowing them to test assumptions they have made, since stakeholders will have differing opinions about particular problems and opportunities. The CATWOE method helps achieve better and more attainable results, as well as avoiding additional problems, by using six elements.
The six elements of CATWOE are:
Customers – Who are the beneficiaries of the highest level business process and how does the issue affect them?
Actors - The person or people directly involved in the transformation (T) part of CATWOE (Checkland & Scholes, 1999, p. 35). Implementation and involvement by the actors allows for the input to be transformed into an output (Checkland & Scholes, 1999, p. 35). Actors are also stakeholders as their actions can affect the transformation process and the system as a whole. As actors are directly involved, they also have a 'holon' by which they interpret the world outside (Checkland & Scholes, 1999, p. 19) and so how they view the situation would impact their work and success.
Transformation process – The conversion of input into output that lies at the heart of the system: transforming grapes into wine, transforming unsold goods into sold goods, transforming a societal need into a societal need met. The purpose behind the transformation is what gives the change its value; for example, when converting grapes into wine, the purpose of the change is to supply grape consumers with more value from the product, thus sustaining the product's value.
Weltanschauung (or Worldview) – What is the big picture and what are the wider impacts of the issue? "The word Weltanschauung is a German word that has no real English equivalent. It refers to "all the things that you take for granted" and is related to our values". The closest translation would be "world view", the collective summary of the stakeholders' beliefs that gives meaning to the root definition and to the model of the human activity system as a whole.
Owner – Who owns the process or situation being investigated and what role will they play in the solution?
Environmental constraints – What are the constraints and limitations that will impact the solution and its success?
CATWOE can also be related to holistic multi-benefit analysis, due to the multiple perspectives that are taken into consideration. It further helps in understanding the perspectives and concerns of the different stakeholders involved in the human activity systems, adhering to the core values of soft systems thinking by allowing multiple perspectives to be appreciated with good knowledge management.
Human activity system
A human activity system can be defined as "notional system (i.e. not existing in any tangible form) where human beings are undertaking some activities that achieve some purpose".
Within most systems there will be many human activity systems integrated within it to form the whole system. Human activity systems can be used in SSM to establish worldviews (Weltanschauung) for people involved in problematic situations. The assumption with all human activity systems is that all actors within them will act accordingly with their own worldviews.
See also
Enterprise modelling
Hard systems
Holism
List of thought processes
Problem structuring methods
Rich picture
Structured systems analysis and design method
Systems theory
Systems philosophy
References
Further reading
Books
Avison, D., & Fitzgerald, G. (2006). Information Systems Development. methodologies, techniques & tools (4th ed.). McGraw-Hill Education.
Wilson, B. and van Haperen, K. (2015) Soft Systems Thinking, Methodology and the Management of Change (including the history of the systems engineering department at Lancaster University), London: Palgrave MacMillan. .
Checkland, P.B. and J. Scholes (2001) Soft Systems Methodology in Action, in J. Rosenhead and J. Mingers (eds), Rational Analysis for a Problematic World Revisited. Chichester: Wiley
Checkland, P.B. & Poulter, J. (2006) Learning for Action: A short definitive account of Soft Systems Methodology and its use for Practitioners, teachers and Students, Wiley, Chichester.
Checkland, P.B. Systems Thinking, Systems Practice, John Wiley & Sons Ltd. 1981, 1998.
Checkland, P.B. and S. Holwell Information, Systems and Information Systems, John Wiley & Sons Ltd. 1998.
Wilson, B. Systems: Concepts, Methodologies and Applications, John Wiley & Sons Ltd. 1984, 1990.
Wilson, B. Soft Systems Methodology, John Wiley & Sons Ltd. 2001.
Articles
Dale Couprie et al. (2007) Soft Systems Methodology Department of Computer Science, University of Calgary.
Mark P. Mobach, Jos J. van der Werf & F.J. Tromp (2000). The art of modelling in SSM, in papers ISSS meeting 2000.
Ian Bailey (2008) MODAF and Soft Systems. white paper.
Ivanov, K. (1991). Critical systems thinking and information technology. - In J. of Applied Systems Analysis, 18, 39-55. (ISSN 0308-9541). A review of soft systems methodology as related to critical systems thinking.
Michael Rada (2015-12-01) . white paper, INDUSTRY 5.0 launch.
Michael Rada (2015-02-03) . white paper, INDUSTRY 5.0 DEFINITION.
External links
Peter Checkland homepage.
Models for Change Soft Systems Methodology . Business Process Transformation, 1996.
Soft systems methodology Action research and evaluation on line, 2007.
Checkland and Smyth's CATWOE and Soft Systems Methodology, Business Open Learning Archive 2007.
Systems theory
Methodology
Enterprise modelling
Problem structuring methods | Soft systems methodology | [
"Engineering"
] | 3,340 | [
"Systems engineering",
"Enterprise modelling"
] |
1,102,240 | https://en.wikipedia.org/wiki/Marx%20generator | A Marx generator is an electrical circuit first described by Erwin Otto Marx in 1924. Its purpose is to generate a high-voltage pulse from a low-voltage DC supply. Marx generators are used in high-energy physics experiments, as well as to simulate the effects of lightning on power-line gear and aviation equipment. A bank of 36 Marx generators is used by Sandia National Laboratories to generate X-rays in their Z Machine.
Principle of operation
The circuit generates a high-voltage pulse by charging a number of capacitors in parallel, then suddenly connecting them in series. See the circuit diagram on the right. At first, n capacitors (C) are charged in parallel to a voltage VC by a DC power supply through the resistors (RC). The spark gaps used as switches have the voltage VC across them, but the gaps have a breakdown voltage greater than VC, so they all behave as open circuits while the capacitors charge. The last gap isolates the output of the generator from the load; without that gap, the load would prevent the capacitors from charging. To create the output pulse, the first spark gap is caused to break down (triggered); the breakdown effectively shorts the gap, placing the first two capacitors in series, applying a voltage of about 2VC across the second spark gap. Consequently, the second gap breaks down to add the third capacitor to the "stack", and the process continues to sequentially break down all of the gaps. This process of the spark gaps connecting the capacitors in series to create the high voltage is called erection. The last gap connects the output of the series "stack" of capacitors to the load. Ideally, the output voltage will be nVC, the number of capacitors times the charging voltage, but in practice the value is less. Note that none of the charging resistors Rc are subjected to more than the charging voltage even when the capacitors have been erected. The charge available is limited to the charge on the capacitors, so the output is a brief pulse as the capacitors discharge through the load. At some point, the spark gaps stop conducting, and the low-voltage supply begins charging the capacitors again.
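As a back-of-the-envelope illustration of the ideal relations described above (an added sketch, not part of the original article; the stage count and component values are arbitrary assumptions), the erected output voltage, series capacitance and stored energy of an idealised Marx bank can be computed as follows:

n = 10            # number of stages
C = 100e-9        # capacitance per stage, in farads
Vc = 50e3         # charging voltage per stage, in volts

V_out = n * Vc                   # ideal erected output voltage: 500 kV
C_erected = C / n                # capacitance of the erected series stack: 10 nF
E_stored = 0.5 * n * C * Vc**2   # total stored energy: 1250 J

print(V_out, C_erected, E_stored)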
The principle of multiplying voltage by charging capacitors in parallel and discharging them in series is also used in the voltage multiplier circuit, used to produce high voltages for laser printers and cathode-ray tube television sets, which has similarities to this circuit. One difference is that the voltage multiplier is powered with alternating current and produces a steady DC output voltage, whereas the Marx generator produces a pulse.
Optimization
Proper performance depends upon capacitor selection and the timing of the discharge. Switching times can be improved by doping of the electrodes with radioactive isotopes caesium 137 or nickel 63, and by orienting the spark gaps so that ultraviolet light from a firing spark gap switch illuminates the remaining open spark gaps. Insulation of the high voltages produced is often accomplished by immersing the Marx generator in transformer oil or a high pressure dielectric gas such as sulfur hexafluoride (SF6).
Note that the less resistance there is between the capacitor and the charging power supply, the faster it will charge. Thus, in this design, those closer to the power supply will charge quicker than those farther away. If the generator is allowed to charge long enough, all capacitors will attain the same voltage.
In the ideal case, the closing of the switch closest to the charging power supply applies a voltage 2V to the second switch. This switch will then close, applying a voltage 3V to the third switch. This switch will then close, resulting in a cascade down the generator that produces nV at the generator output (again, only in the ideal case).
The first switch may be allowed to spontaneously break down (sometimes called a self break) during charging if the absolute timing of the output pulse is unimportant. However, it is usually intentionally triggered once all the capacitors in the Marx bank have reached full charge, either by reducing the gap distance, by pulsing an additional trigger electrode (such as a Trigatron), by ionising the air in the gap using a pulsed laser, or by reducing the air pressure within the gap.
The charging resistors, Rc, need to be properly sized for both charging and discharging. They are sometimes replaced with inductors for improved efficiency and faster charging. In many generators the resistors are made from plastic or glass tubing filled with dilute copper sulfate solution. These liquid resistors overcome many of the problems experienced by more-conventional solid resistive materials, which have a tendency to lower their resistance over time under high voltage conditions.
Short pulses
The Marx generator is also used to generate short high-power pulses for Pockels cells, driving a TEA laser, ignition of the conventional explosive of a nuclear weapon, and radar pulses.
Shortness is relative, as the switching time of even high-speed versions is not less than 1 ns, and thus many low-power electronic devices are faster. In the design of high-speed circuits, electrodynamics is important, and the Marx generator supports this insofar as it uses short thick leads between its components, but the design is nevertheless essentially an electrostatic one. When the first gap breaks down, pure electrostatic theory predicts that the voltage across all stages rises. However, stages are coupled capacitively to ground and serially to each other, and thus each stage encounters a voltage rise that is increasingly weaker the further the stage is from the switching one; the adjacent stage to the switching one therefore encounters the largest voltage rise, and thus switches in turn. As more stages switch, the voltage rise to the remainder increases, which speeds up their operation. Thus a voltage rise fed into the first stage becomes amplified and steepened at the same time.
In electrodynamic terms, when the first stage breaks down it creates a spherical electromagnetic wave whose electric field vector is opposed to the static high voltage. This moving electromagnetic field has the wrong orientation to trigger the next stage, and may even reach the load; such noise in front of the edge is undesirable in many switching applications. If the generator is inside a tube of (say) 1 m diameter, it requires around 10 wave reflections for the field to settle to static conditions, which restricts pulse leading edge width to 30 ns or more. Smaller devices are of course faster.
The speed of a switch is determined by the speed of the charge carriers, which gets higher with higher voltage, and by the current available to charge the inevitable parasitic capacitance. In solid-state avalanche devices, a high voltage automatically leads to high current. Because the high voltage is applied only for a short time, solid-state switches will not heat up excessively. As compensation for the higher voltages encountered, the later stages have to carry lower charge too. Stage cooling and capacitor recharging also go well together.
Stage variants
Avalanche diodes can replace a spark gap for stage voltages less than 500 volts. The charge carriers easily leave the electrodes, so no extra ionisation is needed and jitter is low. The diodes also have a longer lifetime than spark gaps.
A speedy switching device is an NPN avalanche transistor fitted with a coil between base and emitter. The transistor is initially switched off and about 300 volts exists across its collector-base junction. This voltage is high enough that a charge carrier in this region can create more carriers by impact ionisation, but the probability is too low to form a proper avalanche; instead a somewhat noisy leakage current flows. When the preceding stage switches, the emitter-base junction is pushed into forward bias and the collector-base junction enters full avalanche mode, so charge carriers injected into the collector-base region multiply in a chain reaction. Once the Marx generator has completely fired, voltages everywhere drop, each switch avalanche stops, its matched coil puts its base-emitter junction into reverse bias, and the low static field allows remaining charge carriers to drain out of its collector-base junction.
Applications
One application is so-called boxcar switching of a Pockels cell. Four Marx generators are used, each of the two electrodes of the Pockels cell being connected to a positive pulse generator and a negative pulse generator. Two generators of opposite polarity, one on each electrode, are first fired to charge the Pockels cell into one polarity. This will also partly charge the other two generators but not trigger them, because they have been only partly charged beforehand. Leakage through the Marx resistors needs to be compensated by a small bias current through the generator. At the trailing edge of the boxcar, the two other generators are fired to "reverse" the cell.
Marx generators are used to provide high-voltage pulses for the testing of insulation of electrical apparatus such as large power transformers, or insulators used for supporting power transmission lines. Voltages applied may exceed two million volts for high-voltage apparatus.
In food industry Marx generators are used for Pulsed Electric Fields processing to induce cutting improvement or drying acceleration for potato and other fruits and vegetables.
See also
ATLAS-I
Cockcroft–Walton generator – a similar circuit with the same "ladder" structure; CW generator produce rectified DC from an AC input
Vector inversion generator – a transmission-line device using a similar charge in parallel discharge in series approach
Explosively pumped flux compression generator – a solution to the dual problem of creating high current pulses
Ignition coil
Induction coil
Istra High Voltage Research Center
Tesla coil
References
Further reading
Bauer, G. (June 1, 1968) "A low-impedance high-voltage nanosecond pulser", Journal of Scientific Instruments, London, UK. vol. 1, pp. 688–689.
Graham et al. (1997) "Compact 400 kV Marx Generator With Common Switch Housing", Pulsed Power Conference, 11th Annual Digest of Technical Papers, vol. 2, pp. 1519–1523.
Ness, R. et al. (1991) "Compact, Megavolt, Rep-Rated Marx Generators", IEEE Transactions on Electron Devices, vol. 38, No. 4, pp. 803–809.
Obara, M. (June 3–5, 1980) "Strip-Line Multichannel-Surface-Spark-Gap-Type Marx Generator for Fast Discharge Lasers", IEEE Conference Record of the 1980 Fourteenth Pulse Power Modulator Symposium, pp. 201–208.
Shkaruba et al. (May–June 1985) "Arkad'ev-Mark Generator with Capacitive Coupling", Instrum Exp Tech vol. 28, No. 3, part 2, pp. 625–628, XP002080293.
Sumerville, I. C. (June 11–24, 1989) "A Simple Compact 1 MV, 4 kJ Marx", Proceedings of the Pulsed Power Conference, Monterey, California conf. 7, pp. 744–746, XP000138799.
Turnbull, S. M. (1998) "Development of a High Voltage, High PRF PFN Marx Generator", Conference Record of the 1998 23rd International Power Modulation Symposium, pp. 213–16.
External links
"Marx Generator". ecse.rpi.edu. (ed. explains the Febetron 2020 pulser experimented within the RPI Plasma Dynamics Laboratory)
Jochen Kronjaeger, ""Marx generator". Jochen's High Voltage Page, 2003.
Jim Lux, "Marx Generators ", High Voltage Experimenter's Handbook, 3 May 1998.
"The 'Quick & Dirty' Marx generator". Mike's Electric Stuff, May 2003.
Electrical circuits
Electric power conversion
Pulsed power
Electronic test equipment
Laboratory equipment | Marx generator | [
"Physics",
"Technology",
"Engineering"
] | 2,447 | [
"Physical quantities",
"Electronic test equipment",
"Measuring instruments",
"Power (physics)",
"Electronic engineering",
"Electrical engineering",
"Pulsed power",
"Electrical circuits"
] |
1,102,395 | https://en.wikipedia.org/wiki/Wojciech%20H.%20Zurek | Wojciech Hubert Zurek (; born 1951) is a Polish and American theoretical physicist and a leading authority on quantum theory, especially decoherence and non-equilibrium dynamics of symmetry breaking and resulting defect generation (known as the Kibble–Zurek mechanism).
Education
He attended the I Liceum Ogólnokształcące im. Mikołaja Kopernika (1st Secondary High School of Mikołaj Kopernik) in Bielsko-Biała.
Zurek earned his M.Sc. in physics at AGH University of Science and Technology, Kraków, Poland in 1974 and completed his Ph.D. under advisor William C. Schieve at the University of Texas at Austin in 1979. He spent two years at Caltech as a Tolman Fellow, and started at LANL as a J. Oppenheimer Fellow.
Career
He was the leader of the Theoretical Astrophysics Group at Los Alamos from 1991 until he was made a laboratory fellow in the theory division in 1996. Zurek is currently a foreign associate of the Cosmology Program at the Canadian Institute for Advanced Research. He served as a member of the external faculty of the Santa Fe Institute, and has been a visiting professor at the University of California, Santa Barbara. Zurek co-organized the programs Quantum Coherence and Decoherence and Quantum Computing and Chaos at UCSB's Institute for Theoretical Physics.
He researches decoherence, physics of quantum and classical information, non-equilibrium dynamics of defect generation, and astrophysics. He is also the co-author, along with William Wootters and Dennis Dieks, of a proof stating that a single quantum cannot be cloned (see the no cloning theorem). He also coined the terms einselection and quantum discord.
Zurek with his colleague Tom W. B. Kibble pioneered a paradigmatic framework for understanding defect generation in non-equilibrium processes, particularly, for understanding topological defects generated when a second-order phase transition point is crossed at a finite rate. The paradigm covers phenomena of enormous varieties and scales, ranging from structure formation in the early Universe to vortex generation in superfluids. The key mechanism of critical defect generation is known as the Kibble–Zurek mechanism, and the resulting scaling laws known as the Kibble–Zurek scaling laws.
He pointed out the fundamental role of environment in determining a set of special basis states immune to environmental decoherence (pointer basis) which defines a classical measuring apparatus unambiguously. His work on decoherence paves a way towards the understanding of emergence of the classical world from the quantum mechanical one, getting rid of ad hoc demarcations between the two, like the one imposed by Niels Bohr in the famous Copenhagen interpretation of quantum mechanics. The underlying mechanism proposed and developed by Zurek and his collaborators is known as quantum Darwinism. His work also has a lot of potential benefit to the emerging field of quantum computing.
He is a pioneer in information physics, edited an influential book on "Complexity, Entropy and the Physics of Information", and spearheaded the efforts that finally exorcised Maxwell's demon. Zurek showed that the demon can extract energy from its environment for "free" as long as it (a) is able to find structure in the environment, and (b) is able to compress this pattern (whereas the remaining code is more succinct than the brute-force description of the structure). In this way the demon can exploit thermal fluctuations. However, he showed that in thermodynamic equilibrium (the most likely state of the environment), the demon can at best break even, even if the information about the environment is compressed. As a result of his exploration, Zurek suggested redefining entropy and distinguishing between two parts: the part that we already know about the environment (measured in Kolmogorov complexity), and, conditioned on our knowledge, the remaining uncertainty (measured in Shannon entropy).
He is a staff scientist at Los Alamos National Laboratory and also a laboratory fellow (a prestigious distinction for a US National Laboratory scientist). Zurek was awarded the Albert Einstein Professorship Prize by the Foundation of the University of Ulm in Germany in 2010.
Honors
1996 Laboratory Fellow at the Los Alamos National Laboratory
2004 Phi Beta Kappa Visiting Lecturer
2005 Alexander von Humboldt Prize
2009 Fellow of the American Physical Society
2009 Marian Smoluchowski Medal, highest prize of the Polish Physical Society
2010 Albert-Einstein Professorship, honorary professorship at the University of Ulm
2012 Order of Polonia Restituta, the Commander's Cross - one of Poland's highest Orders
2014 Los Alamos Medal, the highest honor bestowed by the Los Alamos National Laboratory
Books
as editor with John Wheeler: Quantum theory of measurement. Princeton University Press 1983 ; 2014 edition
as editor with A. van der Merwe, W. A. Miller: Between Quantum and Cosmos. Princeton University Press, 1988
as editor: Complexity, Entropy and Physics of Information. Addison-Wesley 1990;
as editor with J. J. Halliwell, J. Pérez-Mercader: Physical Origins of Time Asymmetry. Cambridge Univ. Press, Cambridge, 1994
as editor with H. Arodz and J. Dziarmaga: Patterns of Symmetry Breaking, NATO ASI series volume (Kluwer Academic, Dordrecht, 2003) ; e-book
References
External links
Wojciech H. Zurek's webpage
1951 births
Living people
20th-century American physicists
21st-century American physicists
Los Alamos National Laboratory personnel
Polish emigrants to the United States
Quantum information scientists
Quantum physicists
Santa Fe Institute people
Fellows of the American Physical Society | Wojciech H. Zurek | [
"Physics"
] | 1,168 | [
"Quantum physicists",
"Quantum mechanics"
] |
1,103,376 | https://en.wikipedia.org/wiki/Planimeter | A planimeter, also known as a platometer, is a measuring instrument used to determine the area of an arbitrary two-dimensional shape.
Construction
There are several kinds of planimeters, but all operate in a similar way. The precise way in which they are constructed varies, with the main types of mechanical planimeter being polar, linear, and Prytz or "hatchet" planimeters. The Swiss mathematician Jakob Amsler-Laffon built the first modern planimeter in 1854, the concept having been pioneered by Johann Martin Hermann in 1818. Many developments followed Amsler's famous planimeter, including electronic versions.
The Amsler (polar) type consists of a two-bar linkage. At the end of one link is a pointer, used to trace around the boundary of the shape to be measured. The other end of the linkage pivots freely on a weight that keeps it from moving. Near the junction of the two links is a measuring wheel of calibrated diameter, with a scale to show fine rotation, and worm gearing for an auxiliary turns counter scale. As the area outline is traced, this wheel rolls on the surface of the drawing. The operator sets the wheel, turns the counter to zero, and then traces the pointer around the perimeter of the shape. When the tracing is complete, the scales at the measuring wheel show the shape's area.
When the planimeter's measuring wheel moves perpendicular to its axis, it rolls, and this movement is recorded. When the measuring wheel moves parallel to its axis, the wheel skids without rolling, so this movement is ignored. That means the planimeter measures the distance that its measuring wheel travels, projected perpendicularly to the measuring wheel's axis of rotation. The area of the shape is proportional to the number of turns through which the measuring wheel rotates.
The polar planimeter is restricted by design to measuring areas within limits determined by its size and geometry. However, the linear type has no restriction in one dimension, because it can roll. Its wheels must not slip, because the movement must be constrained to a straight line.
Developments of the planimeter can establish the position of the first moment of area (center of mass), and even the second moment of area.
The images show the principles of a linear and a polar planimeter. The pointer M at one end of the planimeter follows the contour C of the surface S to be measured. For the linear planimeter the movement of the "elbow" E is restricted to the y-axis. For the polar planimeter the "elbow" is connected to an arm with its other endpoint O at a fixed position. Connected to the arm ME is the measuring wheel with its axis of rotation parallel to ME. A movement of the arm ME can be decomposed into a movement perpendicular to ME, causing the wheel to rotate, and a movement parallel to ME, causing the wheel to skid, with no contribution to its reading.
Principle
The working of the linear planimeter may be explained by measuring the area of a rectangle ABCD (see image). Moving with the pointer from A to B the arm EM moves through the yellow parallelogram, with area equal to PQ×EM. This area is also equal to the area of the parallelogram A"ABB". The measuring wheel measures the distance PQ (perpendicular to EM). Moving from C to D the arm EM moves through the green parallelogram, with area equal to the area of the rectangle D"DCC". The measuring wheel now moves in the opposite direction, subtracting this reading from the former. The movements along BC and DA are the same but opposite, so they cancel each other with no net effect on the reading of the wheel. The net result is the measuring of the difference of the yellow and green areas, which is the area of ABCD.
Mathematical derivation
The operation of a linear planimeter can be justified by applying Green's theorem to the components of the vector field N, given by N(x, y) = (b − y, x),
where b is the y-coordinate of the elbow E.
This vector field is perpendicular to the measuring arm EM: with E = (0, b) and M = (x, y), the arm vector is EM = (x, y − b), so EM · N = x(b − y) + (y − b)x = 0.
It also has a constant size, equal to the length m of the measuring arm: |N| = √((b − y)² + x²) = |EM| = m.
Then:
∮_C (N_x dx + N_y dy) = ∬_S (∂N_y/∂x − ∂N_x/∂y) dx dy = ∬_S dx dy = A,
because:
∂N_y/∂x − ∂N_x/∂y = 2 − ∂b/∂y = 1, since the arm constraint x² + (y − b)² = m² makes b − y a function of x alone, so ∂b/∂y = 1.
The left hand side of the above equation, which is equal to the area A enclosed by the contour, is proportional to the distance measured by the measuring wheel, with proportionality factor m, the length of the measuring arm.
The justification for the above derivation lies in noting that the linear planimeter only records movement perpendicular to its measuring arm, that is, movement for which
N · (dx, dy) = (b − y) dx + x dy
is non-zero. When this quantity is integrated over the closed curve C, Green's theorem and the area follow.
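As a quick numerical check of the area formula behind this derivation (this sketch is not part of the original article), one can discretize the Green's-theorem area integral A = ½ ∮ (x dy − y dx) over the vertices of a closed polygon; the result is the shoelace formula listed under "See also". The class and method names below are illustrative.
// Hypothetical helper: discretizes A = (1/2) * Σ (x_i * y_{i+1} − x_{i+1} * y_i)
class GreenArea {
    // Returns the area enclosed by the closed polygon with the given vertices.
    static double area(double[] xs, double[] ys) {
        double sum = 0.0;
        int n = xs.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;                  // next vertex, wrapping around to the start
            sum += xs[i] * ys[j] - xs[j] * ys[i]; // contribution of edge i -> j to the line integral
        }
        return Math.abs(sum) / 2.0;
    }
    public static void main(String[] args) {
        // Unit square traced counter-clockwise: expected area 1.0
        double[] xs = {0, 1, 1, 0};
        double[] ys = {0, 0, 1, 1};
        System.out.println(area(xs, ys));
    }
}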
Polar coordinates
The connection with Green's theorem can be understood in terms of integration in polar coordinates: in polar coordinates, area is computed by the integral A = ½ ∫ r² dθ, where the form being integrated is quadratic in r, meaning that the rate at which area changes with respect to change in angle varies quadratically with the radius.
For a parametric equation in polar coordinates, where both r and θ vary as a function of time, this becomes A = ½ ∫ r(t)² θ′(t) dt.
For a polar planimeter the total rotation of the wheel is proportional to ∫ r(t) θ′(t) dt, as the rotation is proportional to the distance traveled, which at any point in time is proportional to the radius and to the change in angle, as in the circumference of a circle (2πr).
This last integrand can be recognized as the derivative of the earlier integrand (with respect to r), and shows that a polar planimeter computes the area integral in terms of the derivative, which is reflected in Green's theorem, which equates a line integral of a function on a (1-dimensional) contour to the (2-dimensional) integral of the derivative.
See also
Curvimeter
Dot planimeter
Mathematical instrument
Integraph
Shoelace formula
References
Sources
External links
Hatchet Planimeter
P. Kunkel: Whistleralley site, The Planimeter
Larry's Planimeter Platter
Wuerzburg Planimeter Page
Robert Foote's planimeter page
Computer model of a planimeter
Tanya Leise's planimeter explanations and As the Planimeter’s Wheel Turns
Make a simple planimeter
Photo: Geographers using planimeters (1940–1941)
O. Knill and D. Winter: Green's Theorem and the Planimeter
Dimensional instruments
Technical drawing tools
Mathematical tools
Measuring instruments
Area | Planimeter | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,312 | [
"Scalar physical quantities",
"Physical quantities",
"Dimensional instruments",
"Applied mathematics",
"Quantity",
"Measuring instruments",
"Size",
"Mathematical tools",
"History of computing",
"nan",
"Wikipedia categories named after physical quantities",
"Area"
] |
1,103,407 | https://en.wikipedia.org/wiki/DAMA/NaI | The DAMA/NaI experiment investigated the presence of dark matter particles in the galactic halo by exploiting the model-independent annual modulation signature. Based on the Earth's orbit around the Sun and the solar system's speed with respect to the center of the galaxy (which on short time scales can be considered constant), the Earth should be exposed to a higher flux of dark matter particles around June 1, when its orbital speed is added to the one of the solar system with respect to the galaxy and to a smaller one around December 2, when the two velocities are subtracted. The annual modulation signature is distinctive since the effect induced by dark matter particles must simultaneously satisfy many requirements.
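In the analyses used by DAMA and other direct-detection experiments, this signature is commonly parameterized (the formula below is a standard convention added here for illustration, not quoted from this text) by writing the rate in a given energy interval as
R(t) = S_0 + S_m cos[ω(t − t_0)],
where S_0 is the constant part of the rate, S_m the modulation amplitude, ω = 2π/(1 year), and t_0 the phase, expected to fall around the beginning of June when the Earth's orbital velocity adds to the Sun's velocity through the galactic halo.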
Description
The experimental set-up was located deep underground in the Laboratori Nazionali del Gran Sasso in Italy.
The experimental set-up consisted of nine 9.70 kg low-radioactivity scintillating thallium-doped sodium iodide crystals [NaI(Tl)]. Each crystal was faced by two low-background photomultipliers through 10 cm light guides. The detectors were installed inside a sealed copper box flushed with highly pure nitrogen in order to isolate them from air, which contains trace amounts of radon, a radioactive gas. To reduce the natural environmental background, the copper box was enclosed inside a multicomponent, multi-ton passive shield made of copper, lead, polyethylene/paraffin, and cadmium foil. A plexiglas box enclosed the whole shield and was also kept in a highly pure nitrogen atmosphere. A concrete neutron moderator 1 m thick largely surrounded the set-up. The experiment followed the proposal of Pierluigi Belli (then a Ph.D. student, now a research director of the Italian National Institute of Nuclear Physics), which his research group then followed up on.
Results
The DAMA/NaI set-up observed the annual modulation signature over 7 annual cycles (1995–2002). The presence of a model independent positive evidence in the data of DAMA/NaI was first reported by the DAMA collaboration in fall 1997 and published beginning of 1998. The final paper with the full results was published in 2003 after the end of experiment in July 2002. Various corollary investigations are continuing and have also been published.
The model-independent evidence is compatible with a wide set of scenarios regarding the nature of the dark matter candidate and related astrophysical, nuclear and particle physics, for example: neutralinos, inelastic dark matter, self-interacting dark matter, and heavy 4th generation neutrinos.
A careful quantitative investigation of possible sources of systematic and side reactions has been regularly carried out and published at the time of each data release. No systematic effect or side reaction able to account for the observed modulation amplitude and to simultaneously satisfy all the requirements of the signature has been found.
The experiment has also obtained and published many results on other processes and approaches.
Skepticism
Negative results from the XENON Dark Matter Search Experiment seem to contradict DAMA/NaI's results.
The COSINE-100 collaboration has been working in Korea towards confirming or refuting the DAMA-signal. They are using a similar experimental setup to DAMA (NaI(Tl)-crystals). They published their results in December 2018 in the journal Nature; their conclusion was that their "result rules out WIMP–nucleon interactions as the cause of the annual modulation observed by the DAMA collaboration".
It has also been pointed out that the reported modulation could originate from the data-analysis procedure itself: a yearly subtraction of the constant component can give rise to a sawtooth-shaped residual in the presence of a slower time dependence. This hypothesis gained new support in August 2022, when COSINE-100 applied an analysis method similar to the one used by DAMA/LIBRA and found a similar annual modulation, suggesting the signal could be a statistical artifact.
In May 2021, the ANAIS dark-matter direct-detection experiment, after acquiring data for three years at the Canfranc Underground Laboratory in Spain, reported no evidence for annual modulation in 112.5 kg of NaI(Tl) crystals, a result incompatible with DAMA/NaI and DAMA/LIBRA. In November of that year, new results from the COSINE-100 experiment, after 1.7 years of data collection, also failed to replicate the DAMA signal.
Follow-up
DAMA/NaI has been replaced by the new generation experiment, DAMA/LIBRA. These experiments are carried out by Italian and Chinese researchers.
References
External links
The DAMA Project
Experiments for dark matter search | DAMA/NaI | [
"Physics"
] | 951 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
1,103,898 | https://en.wikipedia.org/wiki/DD-Transpeptidase | DD-Transpeptidase (, DD-peptidase, DD-transpeptidase, DD-carboxypeptidase, D-alanyl-D-alanine carboxypeptidase, D-alanyl-D-alanine-cleaving-peptidase, D-alanine carboxypeptidase, D-alanyl carboxypeptidase, and serine-type D-Ala-D-Ala carboxypeptidase.) is a bacterial enzyme that catalyzes the transfer of the R-L-αα-D-alanyl moiety of R-L-αα-D-alanyl-D-alanine carbonyl donors to the γ-OH of their active-site serine and from this to a final acceptor. It is involved in bacterial cell wall biosynthesis, namely, the transpeptidation that crosslinks the peptide side chains of peptidoglycan strands.
The antibiotic penicillin irreversibly binds to and inhibits the activity of the transpeptidase enzyme by forming a highly stable penicilloyl-enzyme intermediate. Because of the interaction between penicillin and transpeptidase, this enzyme is also known as penicillin-binding protein (PBP).
Mechanism
DD-Transpeptidase is mechanistically similar to the proteolytic reactions of the trypsin protein family.
Crosslinking of peptidyl moieties of adjacent glycan strands is a two-step reaction. The first step involves the cleavage of the D-alanyl-D-alanine bond of a peptide unit precursor acting as carbonyl donor, the release of the carboxyl-terminal D-alanine, and the formation of the acyl-enzyme. The second step involves the breakdown of the acyl-enzyme intermediate and the formation of a new peptide bond between the carbonyl of the D-alanyl moiety and the amino group of another peptide unit.
Most discussion of DD-peptidase mechanisms revolves around the catalysts of proton transfer. During formation of the acyl-enzyme intermediate, a proton must be removed from the active site serine hydroxyl group and one must be added to the amine leaving group. A similar proton movement must be facilitated in deacylation. The identity of the general acid and base catalysts involved in these proton transfers has not yet been elucidated. However, catalytic triads of tyrosine, lysine, and serine, as well as of serine, lysine, and serine, have been proposed.
Structure
Transpeptidases are members of the penicilloyl-serine transferase superfamily, which has a signature SxxK conserved motif. With "x" denoting a variable amino acid residue, the transpeptidases of this superfamily show a trend in the form of three motifs: SxxK, SxN (or analogue), and KTG (or analogue). These motifs occur at equivalent places, and are roughly equally spaced, along the polypeptide chain. The folded protein brings these motifs close to each other at the catalytic center between an all-α domain and an α/β domain.
The structure of the Streptomyces K15 DD-transpeptidase has been studied, and consists of a single polypeptide chain organized into two domains. One domain contains mainly α-helices, and the second one is of α/β-type. The center of the catalytic cleft is occupied by the Ser35-Thr36-Thr37-Lys38 tetrad, which includes the nucleophilic Ser35 residue at the amino-terminal end of helix α2. One side of the cavity is defined by the Ser96-Gly97-Cys98 loop connecting helices α4 and α5. The Lys213-Thr214-Gly215 triad lies on strand β3 on the opposite side of the cavity. The backbone NH group of the essential Ser35 residue and that of Ser216 downstream from the motif Lys213-Thr214-Gly215 occupy positions that are compatible with the oxyanion hole function required for catalysis.
The enzyme is classified as a DD-transpeptidase because the susceptible peptide bond of the carbonyl donor extends between two carbon atoms with the D-configuration.
Biological function
All bacteria possess at least one, most often several, monofunctional serine DD-peptidases.
Disease relevance
This enzyme is an excellent drug target because it is essential, is accessible from the periplasm, and has no equivalent in mammalian cells. DD-Transpeptidase is the target protein of β-lactam antibiotics (e.g. penicillin). This is because the structure of the β-lactam closely resembles the D-ala-D-ala residue.
β-Lactams exert their effect by competitively inactivating the serine DD-transpeptidase catalytic site. Penicillin is a cyclic analogue of the D-Ala-D-Ala terminated carbonyl donors, therefore in the presence of this antibiotic, the reaction stops at the level of the serine ester-linked penicilloyl enzyme. Thus β-lactam antibiotics force these enzymes to behave like penicillin binding proteins.
Kinetically, the interaction between the DD-peptidase and β-lactams is a three-step reaction:
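The three-step scheme itself is not reproduced in this text. A standard way to write it for a serine DD-peptidase E reacting with a β-lactam I (this is a reconstruction from general penicillin-binding-protein kinetics, not quoted from the source) is:
E + I ⇌ E·I → E-I* → E + P
where E·I is the initial non-covalent complex, E-I* is the covalently acylated (penicilloyl) enzyme referred to below, and P is the slowly released degradation product.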
β-Lactams may form an adduct E-I* of high stability. The half-life of this adduct is on the order of hours, whereas the half-life of the normal reaction is on the order of milliseconds.
The interference with the enzyme processes responsible for cell wall formation results in cellular lysis and death due to the triggering of the autolytic system in the bacteria.
See also
Vancomycin, an antibiotic that binds the D-ala-D-ala residues, inhibiting elongation via glycosyltransferase
References
External links
The MEROPS online database for peptidases and their inhibitors: S11.001
EC 3.4.16
Bacteria
Microbial metabolism | DD-Transpeptidase | [
"Chemistry",
"Biology"
] | 1,339 | [
"Microbial metabolism",
"Prokaryotes",
"Bacteria",
"Microorganisms",
"Metabolism"
] |
1,104,369 | https://en.wikipedia.org/wiki/Organic%20semiconductor | Organic semiconductors are solids whose building blocks are pi-bonded molecules or polymers made up by carbon and hydrogen atoms and – at times – heteroatoms such as nitrogen, sulfur and oxygen. They exist in the form of molecular crystals or amorphous thin films. In general, they are electrical insulators, but become semiconducting when charges are injected from appropriate electrodes or are introduced by doping or photoexcitation.
General properties
In molecular crystals the energetic separation between the top of the valence band and the bottom of the conduction band, i.e. the band gap, is typically 2.5–4 eV, while in inorganic semiconductors the band gaps are typically 1–2 eV. This implies that molecular crystals are, in fact, insulators rather than semiconductors in the conventional sense. They become semiconducting only when charge carriers are either injected from the electrodes or generated by intentional or unintentional doping.
Charge carriers can also be generated in the course of optical excitation. It is important to realize, however, that the primary optical excitations are neutral excitons with a Coulomb binding energy of typically 0.5–1.0 eV. The reason is that dielectric constants in organic semiconductors are as low as 3–4. This impedes efficient photogeneration of charge carriers in neat systems in the bulk. Efficient photogeneration can only occur in binary systems due to charge transfer between donor and acceptor moieties. Otherwise neutral excitons decay radiatively to the ground state – thereby emitting photoluminescence – or non-radiatively. The optical absorption edge of organic semiconductors is typically 1.7–3 eV, equivalent to a spectral range from 700 to 400 nm (which corresponds to the visible spectrum).
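The correspondence between the quoted energies and wavelengths follows from the photon energy relation (a standard formula, added here for clarity):
E = hc/λ ≈ 1240 eV·nm / λ,
so 700 nm corresponds to roughly 1.8 eV and 400 nm to roughly 3.1 eV, consistent with the 1.7–3 eV absorption edge quoted above.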
History
Early history
In 1862, Henry Letheby obtained a partly conductive material by anodic oxidation of aniline in sulfuric acid. The material was probably polyaniline. In the 1950s, researchers discovered that polycyclic aromatic compounds formed semi-conducting charge-transfer complex salts with halogens. In particular, high conductivity of 0.12 S/cm was reported in perylene–iodine complex in 1954. This finding indicated that organic compounds could carry current.
The fact that organic semiconductors are, in principle, insulators but become semiconducting when charge carriers are injected from the electrode(s) was discovered by Kallmann and Pope. They found that a hole current can flow through an anthracene crystal contacted with a positively biased electrolyte containing iodine that can act as a hole injector. This work was stimulated by the earlier discovery by Akamatu et al. that aromatic hydrocarbons become conductive when blended with molecular iodine because a charge-transfer complex is formed. Since it was readily realized that the crucial parameter that controls injection is the work function of the electrode, it was straightforward to replace the electrolyte by a solid metallic or semiconducting contact with an appropriate work function. When both electrons and holes are injected from opposite contacts, they can recombine radiatively and emit light (electroluminescence). It was observed in organic crystals in 1965 by Sano et al.
In 1972, researchers found metallic conductivity in the charge-transfer complex TTF-TCNQ. Superconductivity in charge-transfer complexes was first reported in the Bechgaard salt (TMTSF)2PF6 in 1980.
In 1973 Dr. John McGinness produced the first device incorporating an organic semiconductor. This occurred roughly eight years before the next such device was created. The "melanin (polyacetylenes) bistable switch" currently is part of the chips collection of the Smithsonian Institution.
In 1977, Shirakawa et al. reported high conductivity in oxidized and iodine-doped polyacetylene. They received the 2000 Nobel prize in Chemistry for "The discovery and development of conductive polymers". Similarly, highly conductive polypyrrole was rediscovered in 1979.
Organic LEDs, solar cells and FETs
Rigid-backbone organic semiconductors are now used as active elements in optoelectronic devices such as organic light-emitting diodes (OLED), organic solar cells, organic field-effect transistors (OFET), electrochemical transistors and recently in biosensing applications. Organic semiconductors have many advantages, such as easy fabrication, mechanical flexibility, and low cost.
The discovery by Kallman and Pope paved the way for applying organic solids as active elements in semiconducting electronic devices, such as organic light-emitting diodes (OLEDs) that rely on the recombination of electrons and holes injected from "ohmic" electrodes, i.e. electrodes with unlimited supply of charge carriers. The next major step towards the technological exploitation of the phenomenon of electron and hole injection into a non-crystalline organic semiconductor was the work by Tang and Van Slyke. They showed that efficient electroluminescence can be generated in a vapor-deposited thin amorphous bilayer of an aromatic diamine (TAPC) and Alq3 sandwiched between an indium-tin-oxide (ITO) anode and an Mg:Ag cathode. Another milestone towards the development of organic light-emitting diodes (OLEDs) was the recognition that also conjugated polymers can be used as active materials. The efficiency of OLEDs was greatly improved when realizing that phosphorescent states (triplet excitons) may be used for emission when doping an organic semiconductor matrix with a phosphorescent dye, such as complexes of iridium with strong spin–orbit coupling.
Work on conductivity of anthracene crystals contacted with an electrolyte showed that optically excited dye molecules adsorbed at the surface of the crystal inject charge carriers. The underlying phenomenon is called sensitized photoconductivity. It occurs when photo-exciting a dye molecule with appropriate oxidation/reduction potential adsorbed at the surface or incorporated in the bulk. This effect revolutionized electrophotography, which is the technological basis of today's office copying machines. It is also the basis of organic solar cells (OSCs), in which the active element is an electron donor, and an electron acceptor material is combined in a bilayer or a bulk heterojunction.
Doping with strong electron donors or acceptors can render organic solids conductive even in the absence of light. Examples are doped polyacetylene and doped light-emitting diodes.
Materials
Amorphous molecular films
Amorphous molecular films are produced by evaporation or spin-coating. They have been investigated for device applications such as OLEDs, OFETs, and OSCs. Illustrative materials are tris(8-hydroxyquinolinato)aluminium, C60, phenyl-C61-butyric acid methyl ester (PCBM), pentacene, carbazoles, and phthalocyanine.
Molecularly doped polymers
Molecularly doped polymers are prepared by spreading a film of an electrically inert polymer, e.g. polycarbonate, doped with typically 30% of charge transporting molecules, on a base electrode. Typical materials are the triphenylenes. They have been investigated for use as photoreceptors in electrophotography. This requires films to have a thickness of several micrometers, which can be prepared using the doctor-blade technique.
Molecular crystals
In the early days of fundamental research into organic semiconductors the prototypical materials were free-standing single crystals of the acene family, e.g. anthracene and tetracene.
The advantage of employing molecular crystals instead of amorphous film is that their charge carrier mobilities are much larger. This is of particular advantage for OFET applications. Examples are thin films of crystalline rubrene prepared by hot wall epitaxy.
Neat polymer films
They are usually processed from solution employing variable deposition techniques including simple spin-coating, ink-jet deposition or industrial reel-to-reel coating which allows preparing thin films on a flexible substrate. The materials of choice are conjugated polymers such as poly-thiophene, poly-phenylenevinylene, and copolymers of alternating donor and acceptor units such as members of the poly(carbazole-dithiophene-benzothiadiazole (PCDTBT) family. For solar cell applications they can be blended with C60 or PCBM as electron acceptors.
Aromatic short peptides self-assemblies
Aromatic short peptides self-assemblies are a kind of promising candidate for bioinspired and durable nanoscale semiconductors. The highly ordered and directional intermolecular π-π interactions and hydrogen-bonding network allow the formation of quantum confined structures within the peptide self-assemblies, thus decreasing the band gaps of the superstructures into semiconductor regions. As a result of the diverse architectures and ease of modification of peptide self-assemblies, their semiconductivity can be readily tuned, doped, and functionalized. Therefore, this family of electroactive supramolecular materials may bridge the gap between the inorganic semiconductor world and biological systems.
Characterization
Organic semiconductors can be characterized by UV-photoemission spectroscopy. The equivalent technique for electron states is inverse photoemission.
To measure the mobility of charge carriers, the traditional technique is the so-called time of flight (TOF) method. This technique requires relatively thick samples; it is not applicable to thin films. Alternatively, one can extract the charge carrier mobility from the current in a field effect transistor as a function of both the source-drain and the gate voltage. Other ways to determine the charge carrier mobility involve measuring space-charge-limited current (SCLC) flow and carrier extraction by linearly increasing voltage (CELIV).
In order to characterize the morphology of semiconductor films, one can apply atomic force microscopy (AFM), scanning electron microscopy (SEM), and grazing-incidence small-angle scattering (GISAS).
Charge transport
In contrast to organic crystals investigated in the 1960-70s, organic semiconductors that are nowadays used as active media in optoelectronic devices are usually more or less disordered. Combined with the fact that the structural building blocks are held together by comparatively weak van der Waals forces this precludes charge transport in delocalized valence and conduction bands. Instead, charge carriers are localized at molecular entities, e.g. oligomers or segments of a conjugated polymer chain, and move by incoherent hopping among adjacent sites with statistically variable energies. Quite often the site energies feature a Gaussian distribution. Also the hopping distances can vary statistically (positional disorder).
A consequence of the energetic broadening of the density of states (DOS) distribution is that charge motion is both temperature and field dependent and the charge carrier mobility can be several orders of magnitude lower than in an equivalent crystalline system. This disorder effect on charge carrier motion is diminished in organic field-effect transistors because current flow is confined in a thin layer. Therefore, the tail states of the DOS distribution are already filled so that the activation energy for charge carrier hopping is diminished. For this reason the charge carrier mobility inferred from FET experiments is always higher than that determined from TOF experiments.
In organic semiconductors, charge carriers couple to vibrational modes and are referred to as polarons. Therefore, the activation energy for hopping motion contains an additional term due to structural site relaxation upon charging a molecular entity. It turns out, however, that usually the disorder contribution to the temperature dependence of the mobility dominates over the polaronic contribution.
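As an illustration of how a Gaussian density of states translates into a temperature-dependent mobility, the widely used Gaussian disorder model predicts, at low electric fields (quoted here as a standard result, not derived in this article),
μ(T) ≈ μ_0 exp[ −(2σ / 3k_B T)² ],
where σ is the width (standard deviation) of the Gaussian DOS and μ_0 the mobility in the hypothetical disorder-free limit. A typical disorder of σ ≈ 0.1 eV therefore suppresses the room-temperature mobility by several orders of magnitude relative to μ_0.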
Mechanical properties
Elastic modulus
The elastic modulus can be measured through tensile testing, which captures the material's stress-strain response. Additionally, the buckling method, employing buckling equations and measured wavelengths, can be used to determine the mechanical modulus of film materials. The elastic modulus significantly impacts the applications of organic semiconductors; lower moduli are preferable for wearable and flexible electronics to ensure flexibility, while higher moduli are required for devices needing greater resistance to mechanical stresses and enhanced structural integrity.
Yield point
The yield point of organic semiconductors is the stress or strain level at which the material starts to deform permanently. After this point, the material loses its elasticity and undergoes permanent deformation. Yield strength is usually measured by conducting tensile testing. Understanding and regulating the yield point of organic semiconductors is essential to designing devices that can endure operational stress without permanent deformation. This helps maintain the device's functionality and prolong its lifetime.
Viscoelasticity
As polymers, organic semiconductors exhibit viscoelasticity, meaning they exhibit both viscous and elastic characteristics during deformation. Viscoelasticity allows materials to return to their original shape after being deformed and to exhibit strain that varies over time. Viscoelasticity is typically measured using dynamic mechanical analysis (DMA). Viscoelasticity is crucial for wearable devices, which are subjected to stretching and bending during use. The viscoelastic properties help the materials absorb energy during these processes, enhancing durability and ensuring long-term functionality under continuous physical stress.
See also
Conductive polymer
Dinaphthylene dioxide
Molecular electronics
Organic electronics
Organic field-effect transistor (OFET)
Organic laser
Organic light-emitting diode (OLED)
Organic photonics
Organic photovoltaic cell (OPVC)
References
Further reading
Electronic Processes in Organic Semiconductors: An Introduction by Anna Köhler and Heinz Bässler, Wiley-VCH, 2015
Electronic processes in organic crystals and polymers by M. Pope and C.E. Swenberg, Oxford Science Publications, 2nd edition, 1999.
Organic Photoreceptors for Xerography by P.M. Borsenberger and D.S. Weiss, Marcel Dekker, New York, 1998.
External links
Conductive polymers
Molecular electronics
Semiconductor material types | Organic semiconductor | [
"Chemistry",
"Materials_science"
] | 2,912 | [
"Molecular physics",
"Semiconductor materials",
"Molecular electronics",
"Nanotechnology",
"Semiconductor material types",
"Conductive polymers",
"Organic semiconductors"
] |
1,104,704 | https://en.wikipedia.org/wiki/Covariance%20and%20contravariance%20%28computer%20science%29 | Many programming language type systems support subtyping. For instance, if the type Cat is a subtype of Animal, then an expression of type Cat should be substitutable wherever an expression of type Animal is used.
Variance is how subtyping between more complex types relates to subtyping between their components. For example, how should a list of Cats relate to a list of Animals? Or how should a function that returns Cat relate to a function that returns Animal?
Depending on the variance of the type constructor, the subtyping relation of the simple types may be either preserved, reversed, or ignored for the respective complex types. In the OCaml programming language, for example, "list of Cat" is a subtype of "list of Animal" because the list type constructor is covariant. This means that the subtyping relation of the simple types is preserved for the complex types.
On the other hand, "function from Animal to String" is a subtype of "function from Cat to String" because the function type constructor is contravariant in the parameter type. Here, the subtyping relation of the simple types is reversed for the complex types.
A programming language designer will consider variance when devising typing rules for language features such as arrays, inheritance, and generic datatypes. By making type constructors covariant or contravariant instead of invariant, more programs will be accepted as well-typed. On the other hand, programmers often find contravariance unintuitive, and accurately tracking variance to avoid runtime type errors can lead to complex typing rules.
In order to keep the type system simple and allow useful programs, a language may treat a type constructor as invariant even if it would be safe to consider it variant, or treat it as covariant even though that could violate type safety.
Formal definition
Suppose A and B are types, and I<U> denotes application of a type constructor I with type argument U.
Within the type system of a programming language, a typing rule for a type constructor I is:
covariant if it preserves the ordering of types (≤), which orders types from more specific to more generic: If A ≤ B, then I<A> ≤ I<B>;
contravariant if it reverses this ordering: If A ≤ B, then I<B> ≤ I<A>;
bivariant if both of these apply (i.e., if A ≤ B, then I<A> ≡ I<B>);
variant if covariant, contravariant or bivariant;
invariant or nonvariant if not variant.
The article considers how this applies to some common type constructors.
C# examples
For example, in C#, if Cat is a subtype of Animal, then:
IEnumerable<Cat> is a subtype of IEnumerable<Animal>. The subtyping is preserved because IEnumerable<T> is covariant on T.
Action<Animal> is a subtype of Action<Cat>. The subtyping is reversed because Action<T> is contravariant on T.
Neither IList<Cat> nor IList<Animal> is a subtype of the other, because IList<T> is invariant on T.
The variance of a C# generic interface is declared by placing the out (covariant) or in (contravariant) attribute on (zero or more of) its type parameters. The above interfaces are declared as IEnumerable<out T>, Action<in T>, and IList<T>. Types with more than one type parameter may specify different variances on each type parameter. For example, the delegate type Func<in T, out TResult> represents a function with a contravariant input parameter of type T and a covariant return value of type TResult. The compiler checks that all types are defined and used consistently with their annotations, and otherwise signals a compilation error.
The typing rules for interface variance ensure type safety. For example, an Action<Cat> represents a first-class function expecting an argument of type Cat, and a function that can handle any type of animal can always be used instead of one that can only handle cats.
Arrays
Read-only data types (sources) can be covariant; write-only data types (sinks) can be contravariant. Mutable data types which act as both sources and sinks should be invariant. To illustrate this general phenomenon, consider the array type. For the type Animal we can make the type Animal[], which is an "array of animals". For the purposes of this example, this array supports both reading and writing elements.
We have the option to treat this as either:
covariant: a Cat[] is an Animal[];
contravariant: an Animal[] is a Cat[];
invariant: an Animal[] is not a Cat[] and a Cat[] is not an Animal[].
If we wish to avoid type errors, then only the third choice is safe. Clearly, not every Animal[] can be treated as if it were a Cat[], since a client reading from the array will expect a Cat, but an Animal[] may contain e.g. a Dog. So, the contravariant rule is not safe.
Conversely, a Cat[] cannot be treated as an Animal[]. It should always be possible to put a Dog into an Animal[]. With covariant arrays this cannot be guaranteed to be safe, since the backing store might actually be an array of cats. So, the covariant rule is also not safe—the array constructor should be invariant. Note that this is only an issue for mutable arrays; the covariant rule is safe for immutable (read-only) arrays. Likewise, the contravariant rule would be safe for write-only arrays.
Covariant arrays in Java and C#
Early versions of Java and C# did not include generics, also termed parametric polymorphism. In such a setting, making arrays invariant rules out useful polymorphic programs.
For example, consider writing a function to shuffle an array, or a function that tests two arrays for equality using the . method on the elements. The implementation does not depend on the exact type of element stored in the array, so it should be possible to write a single function that works on all types of arrays. It is easy to implement functions of type:
boolean equalArrays(Object[] a1, Object[] a2);
void shuffleArray(Object[] a);
However, if array types were treated as invariant, it would only be possible to call these functions on an array of exactly the type Object[]. One could not, for example, shuffle an array of strings.
Therefore, both Java and C# treat array types covariantly.
For instance, in Java String[] is a subtype of Object[], and in C# string[] is a subtype of object[].
As discussed above, covariant arrays lead to problems with writes into the array. Java and C# deal with this by marking each array object with a type when it is created. Each time a value is stored into an array, the execution environment will check that the run-time type of the value is equal to the run-time type of the array. If there is a mismatch, an ArrayStoreException (Java) or ArrayTypeMismatchException (C#) is thrown:
// a is a single-element array of String
String[] a = new String[1];
// b is an array of Object
Object[] b = a;
// Assign an Integer to b. This would be possible if b really were
// an array of Object, but since it really is an array of String,
// we will get a java.lang.ArrayStoreException.
b[0] = 1;
In the above example, one can read from the array (b) safely. It is only trying to write to the array that can lead to trouble.
One drawback to this approach is that it leaves the possibility of a run-time error that a stricter type system could have caught at compile-time. Also, it hurts performance because each write into an array requires an additional run-time check.
With the addition of generics, Java and C# now offer ways to write this kind of polymorphic function without relying on covariance. The array comparison and shuffling functions can be given the parameterized types
<T> boolean equalArrays(T[] a1, T[] a2);
<T> void shuffleArray(T[] a);
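A minimal sketch of how such generic helpers might be implemented (the method bodies below are illustrative and not taken from the article; only the signatures appear above):
import java.util.Objects;
import java.util.Random;

class ArrayUtils {
    // Element-wise comparison; it never stores anything into the arrays,
    // so it works for any element type T.
    static <T> boolean equalArrays(T[] a1, T[] a2) {
        if (a1.length != a2.length) return false;
        for (int i = 0; i < a1.length; i++) {
            if (!Objects.equals(a1[i], a2[i])) return false;
        }
        return true;
    }

    // Fisher–Yates shuffle; it only moves elements that are already in the
    // array, so no array covariance (and no unchecked cast) is needed.
    static <T> void shuffleArray(T[] a) {
        Random rnd = new Random();
        for (int i = a.length - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            T tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }
}
Both methods can be called directly on a String[] or any other reference array without relying on array covariance.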
Alternatively, to enforce that a C# method accesses a collection in a read-only way, one can use the interface IEnumerable<object> instead of passing it an array object[].
Function types
Languages with first-class functions have function types like "a function expecting a Cat and returning an Animal" (written Cat -> Animal in OCaml syntax or Func<Cat,Animal> in C# syntax).
Those languages also need to specify when one function type is a subtype of another—that is, when it is safe to use a function of one type in a context that expects a function of a different type.
It is safe to substitute a function f for a function g if f accepts a more general type of argument and returns a more specific type than g. For example, functions of type Animal → Cat, Cat → Cat, and Animal → Animal can be used wherever a Cat → Animal was expected. (One can compare this to the robustness principle of communication: "be liberal in what you accept and conservative in what you produce.") The general rule is:
S1 → S2 ≤ T1 → T2 if T1 ≤ S1 and S2 ≤ T2.
Using inference rule notation the same rule can be written as:
  T1 ≤ S1    S2 ≤ T2
  ───────────────────
  S1 → S2 ≤ T1 → T2
In other words, the → type constructor is contravariant in the parameter (input) type and covariant in the return (output) type. This rule was first stated formally by John C. Reynolds, and further popularized in a paper by Luca Cardelli.
When dealing with functions that take functions as arguments, this rule can be applied several times. For example, by applying the rule twice, we see that (A → B) → C ≤ (A′ → B) → C if A ≤ A′. In other words, the type (A → B) → C is covariant in the position of A. For complicated types it can be confusing to mentally trace why a given type specialization is or isn't type-safe, but it is easy to calculate which positions are co- and contravariant: a position is covariant if it is on the left side of an even number of arrows applying to it.
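Java has no built-in subtyping between function types, but the same rule can be expressed with wildcards on the standard java.util.function.Function interface. The following sketch is illustrative (the Cat/Animal hierarchy and the feedStray method are invented for this example):
import java.util.function.Function;

class FunctionVariance {
    static class Animal {}
    static class Cat extends Animal {}

    // Accepts any function that can consume a Cat (contravariant parameter)
    // and produces at least an Animal (covariant result).
    static Animal feedStray(Function<? super Cat, ? extends Animal> f, Cat stray) {
        return f.apply(stray);
    }

    public static void main(String[] args) {
        // A function from Animal to Cat is usable where "Cat to Animal" is expected:
        Function<Animal, Cat> groom = a -> new Cat();
        Animal result = feedStray(groom, new Cat());
        System.out.println(result);
    }
}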
Inheritance in object-oriented languages
When a subclass overrides a method in a superclass, the compiler must check that the overriding method has the right type. While some languages require that the type exactly matches the type in the superclass (invariance), it is also type safe to allow the overriding method to have a "better" type. By the usual subtyping rule for function types, this means that the overriding method should return a more specific type (return type covariance) and accept a more general argument (parameter type contravariance). In UML notation, the possibilities are as follows (where Class B is the subclass that extends Class A which is the superclass):
For a concrete example, suppose we are writing a class to model an animal shelter. We assume that Cat is a subclass of Animal, and that we have a base class (using Java syntax)
class AnimalShelter {
Animal getAnimalForAdoption() {
// ...
}
void putAnimal(Animal animal) {
//...
}
}
Now the question is: if we subclass AnimalShelter, what types are we allowed to give to getAnimalForAdoption and putAnimal?
Covariant method return type
In a language which allows covariant return types, a derived class can override the method to return a more specific type:
class CatShelter extends AnimalShelter {
Cat getAnimalForAdoption() {
return new Cat();
}
}
Among mainstream OO languages, Java, C++ and C# (as of version 9.0) support covariant return types. Adding the covariant return type was one of the first modifications of the C++ language approved by the standards committee in 1998. Scala and D also support covariant return types.
Contravariant method parameter type
Similarly, it is type safe to allow an overriding method to accept a more general argument than the method in the base class:
class CatShelter extends AnimalShelter {
void putAnimal(Object animal) {
// ...
}
}
Only a few object-oriented languages actually allow this (for example, Python when typechecked with mypy). C++, Java and most other languages that support overloading and/or shadowing would interpret this as a method with an overloaded or shadowed name.
However, Sather supported both covariance and contravariance. Calling conventions for overridden methods are covariant with out parameters and return values, and contravariant with normal parameters (with the mode in).
Covariant method parameter type
A couple of mainstream languages, Eiffel and Dart, allow the parameters of an overriding method to have a more specific type than the method in the superclass (parameter type covariance). Thus, the following Dart code would type check, with putAnimal overriding the method in the base class:
class CatShelter extends AnimalShelter {
void putAnimal(covariant Cat animal) {
// ...
}
}
This is not type safe. By up-casting a CatShelter to an AnimalShelter, one can try to place a dog in a cat shelter. That does not meet the parameter restrictions of CatShelter and will result in a runtime error. The lack of type safety (known as the "catcall problem" in the Eiffel community, where "cat" or "CAT" is a Changed Availability or Type) has been a long-standing issue. Over the years, various combinations of global static analysis, local static analysis, and new language features have been proposed to remedy it, and these have been implemented in some Eiffel compilers.
Despite the type safety problem, the Eiffel designers consider covariant parameter types crucial for modeling real world requirements. The cat shelter illustrates a common phenomenon: it is a kind of animal shelter but has additional restrictions, and it seems reasonable to use inheritance and restricted parameter types to model this. In proposing this use of inheritance, the Eiffel designers reject the Liskov substitution principle, which states that objects of subclasses should always be less restricted than objects of their superclass.
One other instance of a mainstream language allowing covariance in method parameters is PHP in regards to class constructors. In the following example, the __construct() method is accepted, despite the method parameter being covariant to the parent's method parameter. Were this method anything other than __construct(), an error would occur:
interface AnimalInterface {}
interface DogInterface extends AnimalInterface {}
class Dog implements DogInterface {}
class Pet
{
public function __construct(AnimalInterface $animal) {}
}
class PetDog extends Pet
{
public function __construct(DogInterface $dog)
{
parent::__construct($dog);
}
}
Another example where covariant parameters seem helpful is so-called binary methods, i.e. methods where the parameter is expected to be of the same type as the object the method is called on. An example is the compareTo method: a.compareTo(b) checks whether a comes before or after b in some ordering, but the way to compare, say, two rational numbers will be different from the way to compare two strings. Other common examples of binary methods include equality tests, arithmetic operations, and set operations like subset and union.
In older versions of Java, the comparison method was specified as an interface Comparable:
interface Comparable {
int compareTo(Object o);
}
The drawback of this is that the method is specified to take an argument of type Object. A typical implementation would first down-cast this argument (throwing an error if it is not of the expected type):
class RationalNumber implements Comparable {
int numerator;
int denominator;
// ...
public int compareTo(Object other) {
RationalNumber otherNum = (RationalNumber)other;
return Integer.compare(numerator * otherNum.denominator,
otherNum.numerator * denominator);
}
}
In a language with covariant parameters, the argument to compareTo could be directly given the desired type RationalNumber, hiding the typecast. (Of course, this would still give a runtime error if compareTo was then called on e.g. a String.)
Avoiding the need for covariant parameter types
Other language features can provide the apparent benefits of covariant parameters while preserving Liskov substitutability.
In a language with generics (a.k.a. parametric polymorphism) and bounded quantification, the previous examples can be written in a type-safe way. Instead of defining AnimalShelter, we define a parameterized class Shelter<T>. (One drawback of this is that the implementer of the base class needs to foresee which types will need to be specialized in the subclasses.)
class Shelter<T extends Animal> {
T getAnimalForAdoption() {
// ...
}
void putAnimal(T animal) {
// ...
}
}
class CatShelter extends Shelter<Cat> {
Cat getAnimalForAdoption() {
// ...
}
void putAnimal(Cat animal) {
// ...
}
}
Similarly, in recent versions of Java the Comparable interface has been parameterized, which allows the downcast to be omitted in a type-safe way:
class RationalNumber implements Comparable<RationalNumber> {
int numerator;
int denominator;
// ...
public int compareTo(RationalNumber otherNum) {
return Integer.compare(numerator * otherNum.denominator,
otherNum.numerator * denominator);
}
}
Another language feature that can help is multiple dispatch. One reason that binary methods are awkward to write is that in a call like a.compareTo(b), selecting the correct implementation of compareTo really depends on the runtime type of both a and b, but in a conventional OO language only the runtime type of a is taken into account. In a language with Common Lisp Object System (CLOS)-style multiple dispatch, the comparison method could be written as a generic function where both arguments are used for method selection.
Giuseppe Castagna observed that in a typed language with multiple dispatch, a generic function can have some parameters which control dispatch and some "left-over" parameters which do not. Because the method selection rule chooses the most specific applicable method, if a method overrides another method, then the overriding method will have more specific types for the controlling parameters. On the other hand, to ensure type safety the language still must require the left-over parameters to be at least as general. Using the previous terminology, types used for runtime method selection are covariant while types not used for runtime method selection are contravariant. Conventional single-dispatch languages like Java also obey this rule: only one argument is used for method selection (the receiver object, passed along to a method as the hidden argument this), and indeed the type of this is more specialized inside overriding methods than in the superclass.
Castagna suggests that examples where covariant parameter types are superior (particularly, binary methods) should be handled using multiple dispatch, which is naturally covariant.
However, most programming languages do not support multiple dispatch.
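To see the limitation in a single-dispatch language, the following Java sketch (illustrative, not from the article) shows that the receiver is dispatched on its runtime type while an overloaded argument is resolved from its static type:
class SingleDispatchDemo {
    static class Animal {
        String meet(Animal other) { return "animal meets animal"; }
        String meet(Cat other)    { return "animal meets cat"; }
    }
    static class Cat extends Animal {
        @Override String meet(Animal other) { return "cat meets animal"; }
        @Override String meet(Cat other)    { return "cat meets cat"; }
    }

    public static void main(String[] args) {
        Animal a = new Cat();   // runtime type Cat, static type Animal
        Animal b = new Cat();   // runtime type Cat, static type Animal
        // The receiver a is dispatched on its runtime type (Cat), but the
        // overload for the argument b is chosen from its static type (Animal):
        System.out.println(a.meet(b));  // prints "cat meets animal"
    }
}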
Summary of variance and inheritance
The following table summarizes the rules for overriding methods in the languages discussed above.
Generic types
In programming languages that support generics (a.k.a. parametric polymorphism), the programmer can extend the type system with new constructors. For example, a C# interface like IEnumerator<T> makes it possible to construct new types like IEnumerator<Animal> or IEnumerator<Cat>. The question then arises what the variance of these type constructors should be.
There are two main approaches. In languages with declaration-site variance annotations (e.g., C#), the programmer annotates the definition of a generic type with the intended variance of its type parameters. With use-site variance annotations (e.g., Java), the programmer instead annotates the places where a generic type is instantiated.
Declaration-site variance annotations
The most popular languages with declaration-site variance annotations are C# and Kotlin (using the keywords out and in), and Scala and OCaml (using the annotations + and -). C# only allows variance annotations for interface types, while Kotlin, Scala and OCaml allow them for both interface types and concrete data types.
Interfaces
In C#, each type parameter of a generic interface can be marked covariant (out), contravariant (in), or invariant (no annotation). For example, we can define an interface IEnumerator<T> of read-only iterators, and declare it to be covariant (out) in its type parameter.
interface IEnumerator<out T>
{
T Current { get; }
bool MoveNext();
}
With this declaration, IEnumerator will be treated as covariant in its type parameter, e.g. IEnumerator<Cat> is a subtype of IEnumerator<Animal>.
The type checker enforces that each method declaration in an interface only mentions the type parameters in a way consistent with the in/out annotations. That is, a parameter that was declared covariant must not occur in any contravariant positions (where a position is contravariant if it occurs under an odd number of contravariant type constructors). The precise rule is that the return types of all methods in the interface must be valid covariantly and all the method parameter types must be valid contravariantly, where valid S-ly is defined as follows:
Non-generic types (classes, structs, enums, etc.) are valid both co- and contravariantly.
A type parameter is valid covariantly if it was not marked in, and valid contravariantly if it was not marked out.
An array type T[] is valid S-ly if T is. (This is because C# has covariant arrays.)
A generic type G<A1, ..., An> is valid S-ly if for each parameter Ai,
Ai is valid S-ly, and the ith parameter to G is declared covariant, or
Ai is valid (not S)-ly, and the ith parameter to G is declared contravariant, or
Ai is valid both covariantly and contravariantly, and the ith parameter to G is declared invariant.
As an example of how these rules apply, consider the IList<T> interface.
interface IList<T>
{
void Insert(int index, T item);
IEnumerator<T> GetEnumerator();
}
The parameter type T of Insert must be valid contravariantly, i.e. the type parameter T must not be tagged out. Similarly, the result type IEnumerator<T> of GetEnumerator must be valid covariantly, i.e. (since IEnumerator is a covariant interface) the type T must be valid covariantly, i.e. the type parameter T must not be tagged in. This shows that the IList interface is not allowed to be marked either co- or contravariant.
In the common case of a generic data structure such as IList<T>, these restrictions mean that an out parameter can only be used for methods getting data out of the structure, and an in parameter can only be used for methods putting data into the structure, hence the choice of keywords.
Data
C# allows variance annotations on the parameters of interfaces, but not the parameters of classes. Because fields in C# classes are always mutable, variantly parameterized classes in C# would not be very useful. But languages which emphasize immutable data can make good use of covariant data types. For example, in all of Scala, Kotlin and OCaml the immutable list type is covariant: List[Cat] is a subtype of List[Animal].
Scala's rules for checking variance annotations are essentially the same as C#'s. However, there are some idioms that apply to immutable datastructures in particular. They are illustrated by the following (excerpt from the) definition of the List[A] class.
sealed abstract class List[+A] extends AbstractSeq[A] {
def head: A
def tail: List[A]
/** Adds an element at the beginning of this list. */
def ::[B >: A] (x: B): List[B] =
new scala.collection.immutable.::(x, this)
/** ... */
}
First, class members that have a variant type must be immutable. Here, head has the type A, which was declared covariant (+), and indeed head was declared as a method (def). Trying to declare it as a mutable field (var) would be rejected as type error.
Second, even if a data structure is immutable, it will often have methods where the parameter type occurs contravariantly. For example, consider the method :: which adds an element to the front of a list. (The implementation works by creating a new object of the similarly named class ::, the class of nonempty lists.) The most obvious type to give it would be
def :: (x: A): List[A]
However, this would be a type error, because the covariant parameter A appears in a contravariant position (as a function parameter). But there is a trick to get around this problem. We give :: a more general type, which allows adding an element of any type B
as long as B is a supertype of A. Note that this relies on List being covariant, since
this has type List[A] and we treat it as having type List[B]. At first glance it may not be obvious that the generalized type is sound, but if the programmer starts out with the simpler type declaration, the type errors will point out the place that needs to be generalized.
Inferring variance
It is possible to design a type system where the compiler automatically infers the best possible variance annotations for all datatype parameters. However, the analysis can get complex for several reasons. First, the analysis is nonlocal since the variance of an interface depends on the variance of all interfaces that mentions. Second, in order to get unique best solutions the type system must allow bivariant parameters (which are simultaneously co- and contravariant). And finally, the variance of type parameters should arguably be a deliberate choice by the designer of an interface, not something that just happens.
For these reasons most languages do very little variance inference. C# and Scala do not infer any variance annotations at all. OCaml can infer the variance of parameterized concrete datatypes, but the programmer must explicitly specify the variance of abstract types (interfaces).
For example, consider an OCaml datatype which wraps a function
type ('a, 'b) t = T of ('a -> 'b)
The compiler will automatically infer that is contravariant in the first parameter, and covariant in the second. The programmer can also provide explicit annotations, which the compiler will check are satisfied. Thus the following declaration is equivalent to the previous one:
type (-'a, +'b) t = T of ('a -> 'b)
Explicit annotations in OCaml become useful when specifying interfaces. For example, the standard library interface for association tables includes an annotation saying that the map type constructor is covariant in the result type.
module type S =
sig
type key
type (+'a) t
val empty: 'a t
val mem: key -> 'a t -> bool
...
end
This ensures that e.g. is a subtype of .
Use-site variance annotations (wildcards)
One drawback of the declaration-site approach is that many interface types must be made invariant. For example, we saw above that needed to be invariant, because it contained both and . In order to expose more variance, the API designer could provide additional interfaces which provide subsets of the available methods (e.g. an "insert-only list" which only provides ). However this quickly becomes unwieldy.
Use-site variance means the desired variance is indicated with an annotation at the specific site in the code where the type will be used. This gives users of a class more opportunities for subtyping without requiring the designer of the class to define multiple interfaces with different variance. Instead, at the point a generic type is instantiated to an actual parameterized type, the programmer can indicate that only a subset of its methods will be used. In effect, each definition of a generic class also makes available interfaces for the covariant and contravariant parts of that class.
Java provides use-site variance annotations through wildcards, a restricted form of bounded existential types. A parameterized type can be instantiated by a wildcard together with an upper or lower bound, e.g. List<? extends Animal> or List<? super Animal>. An unbounded wildcard like List<?> is equivalent to List<? extends Object>. Such a type represents List<X> for some unknown type X which satisfies the bound. For example, if l has type List<? extends Animal>, then the type checker will accept
Animal a = l.get(3);
because the unknown element type X is known to be a subtype of Animal, but
l.add(new Animal());
will be rejected as a type error since an Animal is not necessarily an X. In general, given some interface I<T>, a reference to an I<? extends T> forbids using methods from the interface where T occurs contravariantly in the type of the method. Conversely, if l had type List<? super Animal>, one could call l.add(new Animal()), but l.get would only be known to return an Object.
While non-wildcard parameterized types in Java are invariant (e.g. there is no subtyping relationship between List<Cat> and List<Animal>), wildcard types can be made more specific by specifying a tighter bound. For example, List<? extends Cat> is a subtype of List<? extends Animal>. This shows that wildcard types are covariant in their upper bounds (and also contravariant in their lower bounds). In total, given a wildcard type like List<? extends Animal>, there are three ways to form a subtype: by specializing the class List, by specifying a tighter bound ? extends Cat, or by replacing the wildcard with a specific type (see figure).
By applying two of the above three forms of subtyping, it becomes possible to, for example, pass an argument of type List<Cat> to a method expecting a List<? extends Animal>. This is the kind of expressiveness that results from covariant interface types. The type List<? extends Animal> acts as an interface type containing only the covariant methods of List, but the implementer of List did not have to define it ahead of time.
In the common case of a generic data structure such as List<T>, covariant parameters are used for methods getting data out of the structure, and contravariant parameters for methods putting data into the structure. The mnemonic Producer Extends, Consumer Super (PECS), from the book Effective Java by Joshua Bloch, gives an easy way to remember when to use covariance and contravariance.
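A compact illustration of the PECS rule is a hypothetical helper (similar in spirit to java.util.Collections.copy, but written here purely for illustration) that reads from a producer and writes into a consumer:
import java.util.ArrayList;
import java.util.List;

class Pecs {
    static class Animal {}
    static class Cat extends Animal {}

    // src is a producer of T (extends), dst is a consumer of T (super).
    static <T> void copyAll(List<? super T> dst, List<? extends T> src) {
        for (T item : src) {
            dst.add(item);
        }
    }

    public static void main(String[] args) {
        List<Cat> cats = new ArrayList<>();
        cats.add(new Cat());
        List<Animal> animals = new ArrayList<>();
        // Type-checks because cats can produce Cats and animals can consume them.
        copyAll(animals, cats);
        System.out.println(animals.size());  // prints 1
    }
}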
Wildcards are flexible, but there is a drawback. While use-site variance means that API designers need not consider variance of type parameters to interfaces, they must often instead use more complicated method signatures. A common example involves the Comparable interface. Suppose we want to write a function that finds the biggest element in a collection. The elements need to implement the compareTo method, so a first try might be
<T extends Comparable<T>> T max(Collection<T> coll);
However, this type is not general enough—one can find the max of a Collection<Calendar>, but not a Collection<GregorianCalendar>. The problem is that GregorianCalendar does not implement Comparable<GregorianCalendar>, but instead the (better) interface Comparable<Calendar>. In Java, unlike in C#, Comparable<Calendar> is not considered a subtype of Comparable<GregorianCalendar>. Instead the type of max has to be modified:
<T extends Comparable<? super T>> T max(Collection<T> coll);
The bounded wildcard ? super T conveys the information that max calls only contravariant methods from the Comparable interface. This particular example is frustrating because all the methods in Comparable are contravariant, so that condition is trivially true. A declaration-site system could handle this example with less clutter by annotating only the definition of Comparable.
The max method can be changed even further by using an upper bounded wildcard for the method parameter:
<T extends Comparable<? super T>> T max(Collection<? extends T> coll);
Comparing declaration-site and use-site annotations
Use-site variance annotations provide additional flexibility, allowing more programs to type check. However, they have been criticized for the complexity they add to the language, leading to complicated type signatures and error messages.
One way to assess whether the extra flexibility is useful is to see if it is used in existing programs. A survey of a large set of Java libraries found that 39% of wildcard annotations could have been directly replaced by declaration-site annotations. Thus the remaining 61% is an indication of places where Java benefits from having the use-site system available.
In a declaration-site language, libraries must either expose less variance, or define more interfaces. For example, the Scala Collections library defines three separate interfaces for classes which employ covariance: a covariant base interface containing common methods, an invariant mutable version which adds side-effecting methods, and a covariant immutable version which may specialize the inherited implementations to exploit structural sharing. This design works well with declaration-site annotations, but the large number of interfaces carry a complexity cost for clients of the library. And modifying the library interface may not be an option—in particular, one goal when adding generics to Java was to maintain binary backwards compatibility.
On the other hand, Java wildcards are themselves complex. In a conference presentation Joshua Bloch criticized them as being too hard to understand and use, stating that when adding support for closures "we simply cannot afford another wildcards". Early versions of Scala used use-site variance annotations but programmers found them difficult to use in practice, while declaration-site annotations were found to be very helpful when designing classes. Later versions of Scala added Java-style existential types and wildcards; however, according to Martin Odersky, if there were no need for interoperability with Java then these would probably not have been included.
Ross Tate argues that part of the complexity of Java wildcards is due to the decision to encode use-site variance using a form of existential types. The original proposals used special-purpose syntax for variance annotations, writing List<+Animal> instead of Java's more verbose List<? extends Animal>.
Since wildcards are a form of existential types, they can be used for more things than just variance. A type like List<?> ("a list of unknown type") lets objects be passed to methods or stored in fields without exactly specifying their type parameters. This is particularly valuable for classes such as java.lang.Class, where most of the methods do not mention the type parameter.
However, type inference for existential types is a difficult problem. For the compiler implementer, Java wildcards raise issues with type checker termination, type argument inference, and ambiguous programs. In general it is undecidable whether a Java program using generics is well-typed or not, so any type checker will have to go into an infinite loop or time out for some programs. For the programmer, it leads to complicated type error messages. Java type checks wildcard types by replacing the wildcards with fresh type variables (so-called capture conversion). This can make error messages harder to read, because they refer to type variables that the programmer did not directly write. For example, trying to add a Cat to a List<? extends Animal> will give an error like
method List.add (capture#1) is not applicable
(actual argument Cat cannot be converted to capture#1 by method invocation conversion)
where capture#1 is a fresh type-variable:
capture#1 extends Animal from capture of ? extends Animal
Since both declaration-site and use-site annotations can be useful, some type systems provide both.
Etymology
These terms come from the notion of covariant and contravariant functors in category theory. Consider the category whose objects are types and whose morphisms represent the subtype relationship ≤. (This is an example of how any partially ordered set can be considered as a category.) Then for example the function type constructor takes two types p and r and creates a new type p → r; so it takes a pair of objects of this category to another object of the category. By the subtyping rule for function types this operation reverses ≤ for the first parameter and preserves it for the second, so it is a contravariant functor in the first parameter and a covariant functor in the second.
See also
Polymorphism (computer science)
Inheritance (object-oriented programming)
Liskov substitution principle
References
External links
Fabulous Adventures in Coding: An article series about implementation concerns surrounding co/contravariance in C#
Contra Vs Co Variance (note this article is not updated about C++)
Closures for the Java 7 Programming Language (v0.5)
The theory behind covariance and contravariance in C# 4
Object-oriented programming
Type theory
Polymorphism (computer science) | Covariance and contravariance (computer science) | [
"Mathematics"
] | 7,760 | [
"Polymorphism (computer science)",
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Type theory"
] |
1,104,705 | https://en.wikipedia.org/wiki/Calvin%20cycle | The Calvin cycle, light-independent reactions, biosynthetic phase, dark reactions, or photosynthetic carbon reduction (PCR) cycle of photosynthesis is a series of chemical reactions that convert carbon dioxide and hydrogen-carrier compounds into glucose. The Calvin cycle is present in all photosynthetic eukaryotes and also many photosynthetic bacteria. In plants, these reactions occur in the stroma, the fluid-filled region of a chloroplast outside the thylakoid membranes. These reactions take the products (ATP and NADPH) of light-dependent reactions and perform further chemical processes on them. The Calvin cycle uses the chemical energy of ATP and the reducing power of NADPH from the light-dependent reactions to produce sugars for the plant to use. These substrates are used in a series of reduction-oxidation (redox) reactions to produce sugars in a step-wise process; there is no direct reaction that converts several molecules of CO2 to a sugar. There are three phases to the light-independent reactions, collectively called the Calvin cycle: carboxylation, reduction reactions, and ribulose 1,5-bisphosphate (RuBP) regeneration.
Though it is also called the "dark reaction", the Calvin cycle does not occur in the dark or during nighttime. This is because the process requires NADPH, which is short-lived and comes from light-dependent reactions. In the dark, plants instead release sucrose into the phloem from their starch reserves to provide energy for the plant. The Calvin cycle thus happens when light is available independent of the kind of photosynthesis (C3 carbon fixation, C4 carbon fixation, and crassulacean acid metabolism (CAM)); CAM plants store malic acid in their vacuoles every night and release it by day to make this process work.
Coupling to other metabolic pathways
The reactions of the Calvin cycle are closely coupled to the thylakoid electron transport chain, as the energy required to reduce the carbon dioxide is provided by NADPH produced during the light dependent reactions. The process of photorespiration, also known as C2 cycle, is also coupled to the Calvin cycle, as it results from an alternative reaction of the RuBisCO enzyme, and its final byproduct is another glyceraldehyde-3-P molecule.
Calvin cycle
The Calvin cycle, Calvin–Benson–Bassham (CBB) cycle, reductive pentose phosphate cycle (RPP cycle) or C3 cycle is a series of biochemical redox reactions that take place in the stroma of chloroplast in photosynthetic organisms. The cycle was discovered in 1950 by Melvin Calvin, James Bassham, and Andrew Benson at the University of California, Berkeley by using the radioactive isotope carbon-14.
Photosynthesis occurs in two stages in a cell. In the first stage, light-dependent reactions capture the energy of light and use it to make the energy-storage molecule ATP and the moderate-energy hydrogen carrier NADPH. The Calvin cycle uses these compounds to convert carbon dioxide and water into organic compounds that can be used by the organism (and by animals that feed on it). This set of reactions is also called carbon fixation. The key enzyme of the cycle is called RuBisCO. In the following biochemical equations, the chemical species (phosphates and carboxylic acids) exist in equilibria among their various ionized states as governed by the pH.
The enzymes in the Calvin cycle are functionally equivalent to most enzymes used in other metabolic pathways such as gluconeogenesis and the pentose phosphate pathway, but the enzymes in the Calvin cycle are found in the chloroplast stroma instead of the cell cytosol, separating the reactions. They are activated in the light (which is why the name "dark reaction" is misleading), and also by products of the light-dependent reaction. These regulatory functions prevent the Calvin cycle from being respired to carbon dioxide. Energy (in the form of ATP) would be wasted in carrying out these reactions when they have no net productivity.
The sum of reactions in the Calvin cycle is the following:
3 CO2 + 6 NADPH + 9 ATP + 5 H2O → glyceraldehyde-3-phosphate (G3P) + 6 NADP+ + 9 ADP + 8 Pi (Pi = inorganic phosphate)
Hexose (six-carbon) sugars are not products of the Calvin cycle. Although many texts list a product of photosynthesis as C6H12O6, this is mainly for convenience to match the equation of aerobic respiration, where six-carbon sugars are oxidized in mitochondria. The carbohydrate products of the Calvin cycle are three-carbon sugar phosphate molecules, or "triose phosphates", namely, glyceraldehyde-3-phosphate (G3P).
Steps
In the first stage of the Calvin cycle, a CO2 molecule is incorporated into one of two three-carbon molecules (glyceraldehyde 3-phosphate or G3P), where it uses up two molecules of ATP and two molecules of NADPH, which had been produced in the light-dependent stage. The three steps involved are:
The enzyme RuBisCO catalyses the carboxylation of ribulose-1,5-bisphosphate, RuBP, a 5-carbon compound, by carbon dioxide (a total of 6 carbons) in a two-step reaction. The product of the first step is an enediol-enzyme complex that can capture CO2 or O2. Thus, the enediol-enzyme complex is the real carboxylase/oxygenase. The CO2 that is captured by the enediol in the second step produces an unstable six-carbon compound called 2-carboxy-3-keto-1,5-biphosphoribotol (CKABP) (or 3-keto-2-carboxyarabinitol 1,5-bisphosphate) that immediately splits into 2 molecules of 3-phosphoglycerate (also written as 3-phosphoglyceric acid, PGA, 3PGA, or 3-PGA), a 3-carbon compound.
The enzyme phosphoglycerate kinase catalyses the phosphorylation of 3-PGA by ATP (which was produced in the light-dependent stage). 1,3-Bisphosphoglycerate (glycerate-1,3-bisphosphate) and ADP are the products. (However, note that two 3-PGAs are produced for every CO2 that enters the cycle, so this step utilizes two ATP per CO2 fixed.)
The enzyme glyceraldehyde 3-phosphate dehydrogenase catalyses the reduction of 1,3BPGA by NADPH (which is another product of the light-dependent stage). Glyceraldehyde 3-phosphate (also called G3P, GP, TP, PGAL, GAP) is produced, and the NADPH itself is oxidized and becomes NADP+. Again, two NADPH are utilized per CO2 fixed.
The next stage in the Calvin cycle is to regenerate RuBP. Five G3P molecules produce three RuBP molecules, using up three molecules of ATP. Since each CO2 molecule produces two G3P molecules, three CO2 molecules produce six G3P molecules, of which five are used to regenerate RuBP, leaving a net gain of one G3P molecule per three CO2 molecules (as would be expected from the number of carbon atoms involved).
The regeneration stage can be broken down into a series of steps.
Triose phosphate isomerase converts one of the G3P reversibly into dihydroxyacetone phosphate (DHAP), also a 3-carbon molecule.
Aldolase and fructose-1,6-bisphosphatase convert a G3P and a DHAP into fructose 6-phosphate (6C). A phosphate ion is lost into solution.
Then fixation of another CO2 generates two more G3P.
F6P has two carbons removed by transketolase, giving erythrose-4-phosphate (E4P). The two carbons on transketolase are added to a G3P, giving the ketose xylulose-5-phosphate (Xu5P).
E4P and a DHAP (formed from one of the G3P from the second fixation) are converted into sedoheptulose-1,7-bisphosphate (7C) by aldolase enzyme.
Sedoheptulose-1,7-bisphosphatase (one of only three enzymes of the Calvin cycle that are unique to plants) cleaves sedoheptulose-1,7-bisphosphate into sedoheptulose-7-phosphate, releasing an inorganic phosphate ion into solution.
Fixation of a third CO2 generates two more G3P. The ketose S7P has two carbons removed by transketolase, giving ribose-5-phosphate (R5P), and the two carbons remaining on transketolase are transferred to one of the G3P, giving another Xu5P. This leaves one G3P as the product of fixation of 3 CO2, with generation of three pentoses that can be converted to Ru5P.
R5P is converted into ribulose-5-phosphate (Ru5P, RuP) by phosphopentose isomerase. Xu5P is converted into RuP by phosphopentose epimerase.
Finally, phosphoribulokinase (another plant-unique enzyme of the pathway) phosphorylates RuP into RuBP, ribulose-1,5-bisphosphate, completing the Calvin cycle. This requires the input of one ATP.
Thus, of six G3P produced, five are used to make three RuBP (5C) molecules (totaling 15 carbons), with only one G3P available for subsequent conversion to hexose. This requires nine ATP molecules and six NADPH molecules per three CO2 molecules. The overall equation of the Calvin cycle is given above.
RuBisCO also reacts competitively with O2 instead of CO2 in photorespiration. The rate of photorespiration is higher at high temperatures. Photorespiration turns RuBP into 3-PGA and 2-phosphoglycolate, a 2-carbon molecule that can be converted via glycolate and glyoxylate to glycine. Via the glycine cleavage system and tetrahydrofolate, two glycines are converted into serine plus CO2. Serine can be converted back to 3-phosphoglycerate. Thus, only 3 of 4 carbons from two phosphoglycolates can be converted back to 3-PGA. It can be seen that photorespiration has very negative consequences for the plant, because, rather than fixing CO2, this process leads to loss of CO2. C4 carbon fixation evolved to circumvent photorespiration, but can occur only in certain plants native to very warm or tropical climates—corn, for example. Furthermore, RuBisCOs catalyzing the light-independent reactions of photosynthesis generally exhibit an improved specificity for CO2 relative to O2, in order to minimize the oxygenation reaction. This improved specificity evolved after RuBisCO incorporated a new protein subunit.
Products
The immediate products of one turn of the Calvin cycle are 2 glyceraldehyde-3-phosphate (G3P) molecules, 3 ADP, and 2 NADP+. (ADP and NADP+ are not really "products". They are regenerated and later used again in the light-dependent reactions). Each G3P molecule is composed of 3 carbons. For the Calvin cycle to continue, RuBP (ribulose 1,5-bisphosphate) must be regenerated. So, 5 out of 6 carbons from the 2 G3P molecules are used for this purpose. Therefore, there is only 1 net carbon produced to play with for each turn. To create 1 surplus G3P requires 3 carbons, and therefore 3 turns of the Calvin cycle. To make one glucose molecule (which can be created from 2 G3P molecules) would require 6 turns of the Calvin cycle. Surplus G3P can also be used to form other carbohydrates such as starch, sucrose, and cellulose, depending on what the plant needs.
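As a quick arithmetic check (this simply scales the overall equation quoted earlier and restates the carbon bookkeeping above; it is not an additional experimental result), doubling the three-CO2 equation gives the inputs for the six turns needed to supply one six-carbon sugar: 6 CO2 + 12 NADPH + 18 ATP + 10 H2O → 2 G3P + 12 NADP+ + 18 ADP + 16 Pi, and the two G3P molecules (2 × 3 carbons) provide the six carbons of one glucose.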
Light-dependent regulation
These reactions do not occur in the dark or at night. There is a light-dependent regulation of the cycle enzymes, as the third step requires NADPH.
There are two regulation systems at work when the cycle must be turned on or off: the thioredoxin/ferredoxin activation system, which activates some of the cycle enzymes; and the RuBisCo enzyme activation, active in the Calvin cycle, which involves its own activase.
The thioredoxin/ferredoxin system activates the enzymes glyceraldehyde-3-P dehydrogenase, glyceraldehyde-3-P phosphatase, fructose-1,6-bisphosphatase, sedoheptulose-1,7-bisphosphatase, and ribulose-5-phosphate kinase (phosphoribulokinase), which are key points of the process. This happens when light is available, as the ferredoxin protein is reduced in the photosystem I complex of the thylakoid electron chain when electrons are circulating through it. Ferredoxin then binds to and reduces the thioredoxin protein, which activates the cycle enzymes by severing a cystine bond found in all these enzymes. This is a dynamic process as the same bond is formed again by other proteins that deactivate the enzymes. The implications of this process are that the enzymes remain mostly activated by day and are deactivated in the dark when there is no more reduced ferredoxin available.
The enzyme RuBisCo has its own, more complex activation process. It requires that a specific lysine amino acid be carbamylated to activate the enzyme. This lysine binds to RuBP and leads to a non-functional state if left uncarbamylated. A specific activase enzyme, called RuBisCo activase, helps this carbamylation process by removing one proton from the lysine and making the binding of the carbon dioxide molecule possible. Even then the RuBisCo enzyme is not yet functional, as it needs a magnesium ion bound to the lysine to function. This magnesium ion is released from the thylakoid lumen when the inner pH drops due to the active pumping of protons from the electron flow. RuBisCo activase itself is activated by increased concentrations of ATP in the stroma caused by its phosphorylation.
See also
References
Further reading
Rubisco Activase, from the Plant Physiology Online website
Thioredoxins, from the Plant Physiology Online website
External links
The Biochemistry of the Calvin Cycle at Rensselaer Polytechnic Institute
The Calvin Cycle and the Pentose Phosphate Pathway from Biochemistry, Fifth Edition by Jeremy M. Berg, John L. Tymoczko and Lubert Stryer. Published by W. H. Freeman and Company (2002).
Biochemical reactions
Carbohydrate metabolism
Photosynthesis | Calvin cycle | [
"Chemistry",
"Biology"
] | 3,257 | [
"Carbohydrate metabolism",
"Photosynthesis",
"Biochemical reactions",
"Carbohydrate chemistry",
"Biochemistry",
"Metabolism"
] |
1,105,247 | https://en.wikipedia.org/wiki/Alarm%20clock | An alarm clock or alarm is a clock that is designed to alert an individual or group of people at a specified time. The primary function of these clocks is to awaken people from their night's sleep or short naps; they can sometimes be used for other reminders as well. Most alarm clocks make sounds; some make light or vibration. Some have sensors to identify when a person is in a light stage of sleep, in order to avoid waking someone who is deeply asleep, which causes tiredness, even if the person has had adequate sleep. To turn off the sound or light, a button or handle on the clock is pressed; most clocks automatically turn off the alarm if left unattended long enough. A classic analog alarm clock has an extra hand or inset dial that is used to show the time at which the alarm will ring. Alarm clock functions are also used in mobile phones, watches, and computers.
Many alarm clocks have radio receivers that can be set to start playing at specified times, and are known as clock radios. Additionally, some alarm clocks can set multiple alarms. A progressive alarm clock can have different alarms for different times (see next-generation alarms) and play music of the user's choice. Most modern televisions, computers, mobile phones and digital watches have alarm functions that automatically turn on or sound alerts at a specific time.
Types
Traditional analogue clocks
Traditional mechanical alarm clocks have one or two bells that ring by means of a mainspring that powers a gear to quickly move a hammer back and forth between the two bells, or between the internal sides of a single bell. In some models, the metal cover at back of the clock itself also functions as the bell. In an electronically operated bell-type alarm clock, the bell is rung by an electromagnetic circuit with an armature that turns the circuit on and off repeatedly.
Digital
Digital alarm clocks can make other noises. Simple battery-powered alarm clocks make a loud buzzing, ringing or beeping sound to wake a sleeper, while novelty alarm clocks can speak, laugh, sing, or play sounds from nature.
History
The ancient Greek philosopher Plato (428–348 BCE) was said to possess a large water clock with an unspecified alarm signal similar to the sound of a water organ; he used it at night, possibly for signaling the beginning of his lectures at dawn (Athenaeus 4.174c). The Hellenistic engineer and inventor Ctesibius (fl. 285–222 BCE) fitted his clepsydras with dial and pointer for indicating the time, and added elaborate "alarm systems, which could be made to drop pebbles on a gong, or blow trumpets (by forcing bell-jars down into water and taking the compressed air through a beating reed) at pre-set times" (Vitruv 11.11).
The late Roman statesman Cassiodorus (c. 485–585) advocated in his rulebook for monastic life the water clock as a useful alarm for the "soldiers of Christ" (Cassiod. Inst. 30.4 f.). The Christian rhetorician Procopius described in detail prior to 529 a complex public striking clock in his home town Gaza which featured an hourly gong and figures moving mechanically day and night.
In China, a striking clock was devised by the Buddhist monk and inventor Yi Xing (683–727). The Chinese engineers Zhang Sixun and Su Song integrated striking clock mechanisms in astronomical clocks in the 10th and 11th centuries, respectively. A striking clock outside of China was the water-powered clock tower near the Umayyad Mosque in Damascus, Syria, which struck once every hour. It is the subject of a book, On the Construction of Clocks and their Use (1203), by Riḍwān ibn al-Sāʿātī, the son of a clockmaker. In 1235, an early monumental water-powered alarm clock that "announced the appointed hours of prayer and the time both by day and by night" was completed in the entrance hall of the Mustansiriya Madrasah in Baghdad.
From the 14th century, some clock towers in Western Europe were also capable of chiming at a fixed time every day; the earliest of these was described by the Florentine writer Dante Alighieri in 1319. The most famous original striking clock tower still standing is possibly that of St Mark's Clocktower in St Mark's Square, Venice. The St Mark's Clock was assembled in 1493 by the famous clockmaker Gian Carlo Rainieri from Reggio Emilia, where his father Gian Paolo Rainieri had already constructed another famous device in 1481. In 1497, Simone Campanato moulded the great bell (height 1.56 m, diameter 1.27 m), which was put on the top of the tower, where it is alternately struck by the Due Mori (Two Moors), two bronze statues (height 2.60 m) wielding hammers.
User-settable mechanical alarm clocks date back at least to 15th-century Europe. These early alarm clocks had a ring of holes in the clock dial and were set by placing a pin in the appropriate hole.
The first American alarm clock was created in 1787 by Levi Hutchins in Concord, New Hampshire. This device he made only for himself, however, and it only rang at 4 am, in order to wake him for his job. The French inventor Antoine Redier was the first to patent an adjustable mechanical alarm clock, in 1847.
Alarm clocks, like almost all other consumer goods in the United States, ceased production in the spring of 1942, as the factories which made them were converted over to war work during World War II, but they were one of the first consumer items to resume manufacture for civilian use, in November 1944. By that time, a critical shortage of alarm clocks had developed due to older clocks wearing out or breaking down. Workers were late for, or missed completely, their scheduled shifts in jobs critical to the war effort. In a pooling arrangement overseen by the Office of Price Administration, several clock companies were allowed to start producing new clocks, some of which were continuations of pre-war designs, and some of which were new designs, thus becoming among the first "postwar" consumer goods to be made, before the war had even ended. The price of these "emergency" clocks was, however, still strictly regulated by the Office of Price Administration.
The first radio alarm clock was invented by James F. Reynolds, in the 1940s and another design was also invented by Paul L. Schroth Sr.
Clock radio
A clock radio is an alarm clock and radio receiver integrated in one device. The clock may turn on the radio at a designated time to wake the user, and usually includes a buzzer alarm. Typically, clock radios are placed on the bedside stand. Some models offer dual alarm for awakening at different times and "snooze", usually a large button on the top that silences the alarm and sets it to resume sounding a few minutes later. Some clock radios also have a "sleep" timer, which turns the radio on for a set amount of time (usually around one hour). This is useful for people who like to fall asleep while listening to the radio.
Newer clock radios are available with other music sources such as iPod, iPhone, and/or audio CD. When the alarm is triggered, it can play a set radio station or the music from a selected music source to awaken the sleeper. Some models come with a dock for iPod/iPhone that also charges the device while it is docked. They can play AM/FM radio, iPod/iPhone or CD like a typical music player as well (without being triggered by the alarm function). A few popular models offer "nature sounds" like rain, forest, wind, sea, waterfall etc., in place of the buzzer.
Clock radios are powered by AC power from the wall socket. In the event of a power interruption, older electronic digital models used to reset the time to midnight (00:00) and lose alarm settings. This would cause failure to trigger the alarm even if the power is restored, such as in the event of a power outage. Many newer clock radios feature a battery backup to maintain the time and alarm settings. Some advanced radio clocks (not to be confused with clocks with AM/FM radios) have a feature which sets the time automatically using signals from atomic clock-synced time signal radio stations such as WWV, making the clock accurate and immune to time reset due to power interruptions.
Alarms in technology
Computer alarms
Alarm clock software programs have been developed for personal computers. There are Web-based alarm clocks, some of which may allow a virtually unlimited number of alarm times (i.e. Personal information manager) and personalized tones. However, unlike mobile phone alarms, online alarm clocks have some limitations. They do not work when the computer is shut off or in sleep mode. Native applications, however, can wake the computer up from sleep using the built-in real-time clock alarm chip or even power it back on after it had been shut down.
Mobile phone alarms
Many modern mobile phones feature built-in alarm clocks that do not need the phone to be switched on for the alarm to ring off. Some of these mobile phones feature the ability for the user to set the alarm's ringtone, and in some cases music can be downloaded to the phone and then chosen to play for waking.
Next-generation alarms
Scientific studies on sleep have shown that sleep stage at awakening is an important factor in amplifying sleep inertia. Alarm clocks involving sleep stage monitoring appeared on the market in 2005. The alarm clocks use sensing technologies such as EEG electrodes and accelerometers to wake people from sleep. Dawn simulators are another technology meant to mitigate these effects.
Sleepers can become accustomed to the sound of their alarm clock if it has been used for a period of time, making it less effective. Because progressive alarm clocks use a more complex waking procedure, they can deter this adaptation, since the body must respond to more stimuli than a simple sound alert.
Alarm signals for impaired hearing
The deaf and hard of hearing are often unable to perceive auditory alarms when asleep. They may use specialized alarms, including alarms with flashing lights instead of or in addition to noise. Alarms which can connect to vibrating devices (small ones inserted into pillows, or larger ones placed under bedposts to shake the bed) also exist.
Time switches
Time switches can be used to turn on anything that will awaken a sleeper, and can therefore be used as alarms. Lights, bells, and radio and TV sets can easily be used. More elaborate devices have also been used, such as machines that automatically prepare tea or coffee. A sound is produced when the drink is ready, so the sleeper awakes to find the freshly brewed drink waiting.
See also
Delayed sleep phase syndrome
Digital clock
Knocker-up
Light therapy
Teasmade
Timer
Wake-up call
References
Sources
External links
American inventions
Alarms
Clock designs
Counting instruments
Sleep | Alarm clock | [
"Mathematics",
"Technology",
"Engineering",
"Biology"
] | 2,260 | [
"Behavior",
"Alarms",
"Counting instruments",
"Measuring instruments",
"Numeral systems",
"Warning systems",
"Sleep"
] |
3,221,992 | https://en.wikipedia.org/wiki/Banked%20turn | A banked turn (or banking turn) is a turn or change of direction in which the vehicle banks or inclines, usually towards the inside of the turn. For a road or railroad this is usually due to the roadbed having a transverse down-slope towards the inside of the curve. The bank angle is the angle at which the vehicle is inclined about its longitudinal axis with respect to the horizontal.
Turn on flat surfaces
If the bank angle is zero, the surface is flat and the normal force is vertically upward. The only force keeping the vehicle turning on its path is friction, or traction. This must be large enough to provide the centripetal force, a relationship that can be expressed as an inequality, assuming the car is driving in a circle of radius r: μmg ≥ mv²/r
The expression on the right hand side is the centripetal acceleration multiplied by mass, the force required to turn the vehicle. The left hand side is the maximum frictional force, which equals the coefficient of friction μ multiplied by the normal force. Rearranging, the maximum cornering speed is v = √(μrg).
Note that μ can be the coefficient for static or dynamic friction. In the latter case, where the vehicle is skidding around a bend, the friction is at its limit and the inequality becomes an equation. This also ignores effects such as downforce, which can increase the normal force and cornering speed.
Frictionless banked turn
As opposed to a vehicle riding along a flat circle, inclined edges add an additional force that keeps the vehicle in its path and prevents a car from being "dragged into" or "pushed out of" the circle (or a railroad wheel from moving sideways so as to nearly rub on the wheel flange). This force is the horizontal component of the vehicle's normal force (N). In the absence of friction, the normal force is the only one acting on the vehicle in the direction of the center of the circle. Therefore, as per Newton's second law, we can set the horizontal component of the normal force equal to mass multiplied by centripetal acceleration: N sin θ = mv²/r, where θ is the bank angle, m the vehicle's mass, v its speed and r the radius of the turn.
Because there is no motion in the vertical direction, the sum of all vertical forces acting on the system must be zero. Therefore, we can set the vertical component of the vehicle's normal force equal to its weight: N cos θ = mg
Solving the above equation for the normal force and substituting this value into our previous equation, we get: mg tan θ = mv²/r
This is equivalent to: tan θ = v²/(rg)
Solving for velocity we have: v = √(rg tan θ)
This gives the velocity that, in the absence of friction and for a given angle of incline and radius of curvature, will ensure that the vehicle remains on its designated path. The magnitude of this velocity is also known as the "rated speed" (or "balancing speed" for railroads) of a turn or curve. Notice that the rated speed of the curve is the same for all massive objects, and a curve that is not inclined will have a rated speed of 0.
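As a worked example (the numbers are chosen for illustration and are not taken from the cited sources): a curve of radius r = 100 m banked at θ = 20° has a rated speed of v = √(rg tan θ) = √(100 × 9.81 × tan 20°) ≈ 18.9 m/s, or roughly 68 km/h.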
Banked turn with friction
When considering the effects of friction on the system, once again we need to note which way the friction force is pointing. When calculating a maximum velocity for our automobile, friction will point down the incline and towards the center of the circle. Therefore, we must add the horizontal component of friction to that of the normal force. The sum of these two forces is our new net force in the direction of the center of the turn (the centripetal force): N sin θ + μs N cos θ = mv²/r, where μs is the coefficient of static friction.
Once again, there is no motion in the vertical direction, allowing us to set all opposing vertical forces equal to one another. These forces include the vertical component of the normal force pointing upwards and both the car's weight and the vertical component of friction pointing downwards: N cos θ = mg + μs N sin θ
By solving the above equation for mass and substituting this value into our previous equation we get: v²/(rg) = (sin θ + μs cos θ)/(cos θ − μs sin θ)
Solving for v we get: v_max = √(rg (sin θ + μs cos θ)/(cos θ − μs sin θ)) = √(rg (tan θ + μs)/(1 − μs tan θ))
where θ is less than the critical angle for which tan θ = 1/μs; above that angle the denominator vanishes and the vehicle cannot slide outward at any speed. This equation provides the maximum velocity for the automobile with the given angle of incline, coefficient of static friction and radius of curvature. By a similar analysis of minimum velocity, the following equation is rendered: v_min = √(rg (tan θ − μs)/(1 + μs tan θ))
Notice that the minimum-velocity expression is real only when tan θ ≥ μs; for shallower banking the vehicle can negotiate the curve at arbitrarily low speed, or remain at rest, without sliding inward.
The difference in the latter analysis comes when considering the direction of friction for the minimum velocity of the automobile (towards the outside of the circle). Consequently, opposite operations are performed when inserting friction into equations for forces in the centripetal and vertical directions.
Improperly banked road curves increase the risk of run-off-road and head-on crashes. A 2% deficiency in superelevation (say, 4% superelevation on a curve that should have 6%) can be expected to increase crash frequency by 6%, and a 5% deficiency will increase it by 15%. Up until now, highway engineers have been without efficient tools to identify improperly banked curves and to design relevant mitigating road actions. A modern profilograph can provide data of both road curvature and cross slope (angle of incline). A practical demonstration of how to evaluate improperly banked turns was developed in the EU Roadex III project. See the linked referenced document below.
Banked turn in aeronautics
When a fixed-wing aircraft is making a turn (changing its direction) the aircraft must roll to a banked position so that its wings are angled towards the desired direction of the turn. When the turn has been completed the aircraft must roll back to the wings-level position in order to resume straight flight.
When any moving vehicle is making a turn, it is necessary for the forces acting on the vehicle to add up to a net inward force, to cause centripetal acceleration. In the case of an aircraft making a turn, the force causing centripetal acceleration is the horizontal component of the lift acting on the aircraft.
In straight, level flight, the lift acting on the aircraft acts vertically upwards to counteract the weight of the aircraft which acts downwards. If the aircraft is to continue in level flight (i.e. at constant altitude), the vertical component must continue to equal the weight of the aircraft and so the pilot must pull back on the stick to apply the elevators to pitch the nose up, and therefore increase the angle of attack, generating an increase in the lift of the wing. The total (now angled) lift is greater than the weight of the aircraft. The excess lift is the horizontal component of the total lift, which is the net force causing the aircraft to accelerate inward and execute the turn.
The centripetal acceleration is a = v²/r.
During a balanced turn where the angle of bank is θ, the lift acts at an angle θ away from the vertical. It is useful to resolve the lift into a vertical component and a horizontal component.
Newton's second law in the horizontal direction can be expressed mathematically as: L sin θ = mv²/r
where:
L is the lift acting on the aircraft
θ is the angle of bank of the aircraft
m is the mass of the aircraft
v is the true airspeed of the aircraft
r is the radius of the turn
In straight level flight, lift is equal to the aircraft weight. In turning flight the lift exceeds the aircraft weight, and is equal to the weight of the aircraft (mg) divided by the cosine of the angle of bank: L = mg / cos θ
where g is the gravitational field strength.
The radius of the turn can now be calculated: r = v² / (g tan θ)
This formula shows that the radius of turn is proportional to the square of the aircraft's true airspeed. With a higher airspeed the radius of turn is larger, and with a lower airspeed the radius is smaller.
This formula also shows that the radius of turn decreases with the angle of bank. With a higher angle of bank the radius of turn is smaller, and with a lower angle of bank the radius is greater.
In a banked turn at constant altitude, the load factor is equal to 1/cos θ. We can see that the load factor in straight and level flight is 1, since cos 0 = 1, and to generate sufficient lift to maintain constant altitude, the load factor must approach infinity as the bank angle approaches 90° and cos θ approaches zero. This is physically impossible, because structural limitations of the aircraft or physical endurance of the occupants will be exceeded well before then.
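For illustration (the figures are hypothetical, not taken from the cited handbooks): at a 60° bank the load factor is 1/cos 60° = 2, so the wings must generate twice the aircraft's weight in lift; and an aircraft holding a 45° bank at a true airspeed of 100 m/s turns with radius r = v²/(g tan 45°) = 10,000/9.81 ≈ 1,020 m.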
Banked turn in athletics
Most indoor track and field venues have banked turns since the tracks are smaller than outdoor tracks. The tight turns on these small tracks are usually banked to allow athletes to lean inward and neutralize the centrifugal force as they race around the curve; the lean is especially noticeable on sprint events.
See also
Camber angle
Cant (road/rail)
Coriolis force (perception)
Centripetal force
g-force
Oval track racing
References
Further reading
Surface vehicles
Serway, Raymond. Physics for Scientists and Engineers. Cengage Learning, 2010.
Health and Safety Issues, the EU Roadex III project on health and safety issues raised by poorly maintained road networks.
Aeronautics
Kermode, A.C. (1972) Mechanics of Flight, Chapter 8, 10th Edition, Longman Group Limited, London
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London
Hurt, H.H. Jr, (1960), Aerodynamics for Naval Aviators, A National Flightshop Reprint, Florida
External links
Surface vehicles
http://hyperphysics.phy-astr.gsu.edu/hbase/mechanics/imgmech/carbank.gif
https://web.archive.org/web/20051222173550/http://whitts.alioth.net/
http://www.batesville.k12.in.us/physics/PHYNET/Mechanics/Circular%20Motion/banked_no_friction.htm
Aeronautics
NASA: Guidance on banking turns
aerospaceweb.org: Bank Angle and G's (math)
Pilot’s Handbook of Aeronautical Knowledge
Aerodynamics
Aerial maneuvers
Mechanics
Transportation engineering
https://edu-physics.com/2021/05/08/how-banking-of-road-will-help-the-vehicle-to-travel-along-a-circular-path-2/ | Banked turn | [
"Physics",
"Chemistry",
"Engineering"
] | 2,019 | [
"Industrial engineering",
"Aerodynamics",
"Transportation engineering",
"Mechanics",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering",
"Fluid dynamics"
] |
3,222,200 | https://en.wikipedia.org/wiki/Homonuclear%20molecule | In chemistry, homonuclear molecules, or elemental molecules, or homonuclear species, are molecules composed of only one element. Homonuclear molecules may consist of various numbers of atoms. The size of the molecule an element can form depends on the element's properties, and some elements form molecules of more than one size. The most familiar homonuclear molecules are diatomic molecules, which consist of two atoms, although not all diatomic molecules are homonuclear. Homonuclear diatomic molecules include hydrogen (), oxygen (), nitrogen () and all of the halogens. Ozone () is a common triatomic homonuclear molecule. Homonuclear tetratomic molecules include arsenic () and phosphorus ().
Allotropes are different chemical forms of the same element (not containing any other element). In that sense, allotropes are all homonuclear. Many elements have multiple allotropic forms. In addition to the most common form of gaseous oxygen, O2, and ozone, there are other allotropes of oxygen. Sulfur forms several allotropes containing different numbers of sulfur atoms, including diatomic, triatomic, hexatomic and octatomic (S8) forms, though the first three are rare. The element carbon is known to have a number of homonuclear molecules, including diamond and graphite.
Sometimes a cluster of atoms of a single kind of metallic element is considered a single molecule.
See also
Heteronuclear molecule
:Category:Homonuclear diatomic molecules
:Category:Homonuclear triatomic molecules
References
External links
Sets of chemical elements | Homonuclear molecule | [
"Physics",
"Chemistry"
] | 354 | [
"Molecules",
"Homonuclear molecules",
"Matter"
] |
3,222,625 | https://en.wikipedia.org/wiki/Pauling%27s%20rules | Pauling's rules are five rules published by Linus Pauling in 1929 for predicting and rationalizing the crystal structures of ionic compounds.
First rule: the radius ratio rule
For typical ionic solids, the cations are smaller than the anions, and each cation is surrounded by coordinated anions which form a polyhedron. The sum of the ionic radii determines the cation-anion distance, while the cation-anion radius ratio r+/r− determines the coordination number (C.N.) of the cation, as well as the shape of the coordinated polyhedron of anions.
For the coordination numbers and corresponding polyhedra listed below, Pauling mathematically derived the minimum radius ratio for which the cation is in contact with the given number of anions (considering the ions as rigid spheres): C.N. 3 (triangular), 0.155; C.N. 4 (tetrahedral), 0.225; C.N. 6 (octahedral), 0.414; C.N. 8 (cubic), 0.732. If the cation is smaller, it will not be in contact with the anions, which results in instability leading to a lower coordination number.
Consider octahedral coordination, with a coordination number of six: four anions lie in a plane around the cation, with one more above and one below this plane. At the minimal radius ratio the anions are in mutual contact: the cation and any two adjacent anions form a right triangle whose two shorter sides are cation-anion contacts of length r+ + r− and whose hypotenuse is the anion-anion contact of length 2r−, so √2(r+ + r−) = 2r−. Then r+/r− = √2 − 1 ≈ 0.414. Similar geometrical proofs yield the minimum radius ratios for the highly symmetrical cases C.N. = 3, 4 and 8.
For C.N. = 6 and a radius ratio greater than the minimum, the crystal is more stable since the cation is still in contact with six anions, but the anions are further from each other so that their mutual repulsion is reduced. An octahedron may then form with a radius ratio greater than or equal to 0.414, but as the ratio rises above 0.732, a cubic geometry becomes more stable. This explains why Na+ in NaCl, with a radius ratio of 0.55, has octahedral coordination, whereas Cs+ in CsCl, with a radius ratio of 0.93, has cubic coordination.
If the radius ratio is less than the minimum, two anions will tend to depart and the remaining four will rearrange into a tetrahedral geometry where they are all in contact with the cation.
The radius ratio rules are a first approximation which have some success in predicting coordination numbers, but many exceptions do exist. In a set of over 5000 oxides, only 66% of coordination environments agree with Pauling's first rule. Oxides formed with alkali or alkali-earth metal cations that contain multiple cation coordinations are common deviations from this rule.
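As an illustrative sketch of how the first rule is applied (the thresholds are the rigid-sphere limits quoted above; the class and method names are invented, and real structures deviate from these predictions as just noted), the rule amounts to a simple lookup from radius ratio to predicted coordination number:

class RadiusRatioRule {
    // Coordination number predicted by Pauling's first rule for a cation/anion radius ratio,
    // using the rigid-sphere limits 0.155 / 0.225 / 0.414 / 0.732.
    static int predictedCoordinationNumber(double ratio) {
        if (ratio >= 0.732) return 8;   // cubic
        if (ratio >= 0.414) return 6;   // octahedral
        if (ratio >= 0.225) return 4;   // tetrahedral
        if (ratio >= 0.155) return 3;   // triangular
        return 2;                       // below the triangular limit, only linear coordination remains
    }

    public static void main(String[] args) {
        System.out.println(predictedCoordinationNumber(0.55)); // NaCl -> 6
        System.out.println(predictedCoordinationNumber(0.93)); // CsCl -> 8
    }
}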
Second rule: the electrostatic valence rule
For a given cation, Pauling defined the electrostatic bond strength to each coordinated anion as s = z/ν, where z is the cation charge and ν is the cation coordination number. A stable ionic structure is arranged to preserve local electroneutrality, so that the sum of the strengths of the electrostatic bonds to an anion equals the charge on that anion: |z⁻| = Σᵢ sᵢ,
where z⁻ is the anion charge and the summation is over the adjacent cations. For simple solids, the sᵢ are equal for all cations coordinated to a given anion, so that the anion coordination number is the anion charge divided by each electrostatic bond strength. For example, in NaCl each Na+ (charge +1, C.N. 6) contributes a bond strength of 1/6, so each Cl− (charge −1) is coordinated by 6 sodium ions; in rutile (TiO2) each Ti4+ (C.N. 6) contributes 2/3, so each O2− is coordinated by 3 titanium ions.
Pauling showed that this rule is useful in limiting the possible structures to consider for more complex crystals such as the aluminosilicate mineral orthoclase, KAlSi3O8, with three different cations. However, an analysis of oxide data from the Inorganic Crystal Structure Database (ICSD) showed that only 20% of all oxygen atoms matched the prediction from the second rule (using a cutoff of 0.01).
Third rule: sharing of polyhedron corners, edges and faces
The sharing of edges and particularly faces by two anion polyhedra decreases the stability of an ionic structure. Sharing of corners does not decrease stability as much, so (for example) octahedra may share corners with one another.
The decrease in stability is due to the fact that sharing edges and faces places cations in closer proximity to each other, so that cation-cation electrostatic repulsion is increased. The effect is largest for cations with high charge and low C.N. (especially when r+/r- approaches the lower limit of the polyhedral stability). Generally, smaller elements fulfill the rule better.
As one example, Pauling considered the three mineral forms of titanium dioxide, each with a coordination number of 6 for the cations. The most stable (and most abundant) form is rutile, in which the coordination octahedra are arranged so that each one shares only two edges (and no faces) with adjoining octahedra. The other two, less stable, forms are brookite and anatase, in which each octahedron shares three and four edges respectively with adjoining octahedra.
Fourth rule: crystals containing different cations
In a crystal containing different cations, those of high valency and small coordination number tend not to share polyhedron elements with one another. This rule tends to increase the distance between highly charged cations, so as to reduce the electrostatic repulsion between them.
One of Pauling's examples is olivine, M2SiO4, where M is a mixture of Mg2+ at some sites and Fe2+ at others. The structure contains distinct SiO4 tetrahedra which do not share any oxygens (at corners, edges or faces) with each other. The lower-valence Mg2+ and Fe2+ cations are surrounded by polyhedra which do share oxygens.
Fifth rule: the rule of parsimony
The number of essentially different kinds of constituents in a crystal tends to be small. The repeating units will tend to be identical because each atom in the structure is most stable in a specific environment. There may be two or three types of polyhedra, such as tetrahedra or octahedra, but there will not be many different types.
Limitation
In a study of 5000 oxides, only 13% of them satisfy all of the last 4 rules, indicating limited universality of Pauling's rules.
See also
Goldschmidt tolerance factor
Octet rule
References
Molecular geometry
Crystals
Atomic radius
Coordination chemistry
Empirical laws
Eponymous chemical rules | Pauling's rules | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,296 | [
"Molecular geometry",
"Molecules",
"Coordination chemistry",
"Stereochemistry",
"Atomic radius",
"Crystallography",
"Crystals",
"Atoms",
"Matter"
] |
3,223,264 | https://en.wikipedia.org/wiki/Hydristor | Hydristor is a joining of the words 'hydraulic' and 'transistor'. The
device invented by Tom Kasmer in 1996 and is based on the dual pressure balanced hydraulic vane pump invented by Harry F. Vickers in 1925.
Vane pump details
The Vickers design included an elliptic chamber which confined the radial motion of the vanes nested in the rotor slots. As the rotor and vanes turn, each vane is first pushed radially inward, followed by a maximum radial extension, and that happens twice per revolution. The displacement of the fixed device is calculated by determining the difference in vane extension between minimum and maximum, times the axial length of the vanes and rotor. This multiplies to an area subject to the hydraulic pressure in the device, whether used as a motor or a pump. An average of the minimum and maximum extensions then establishes a 'radius of motion' for the equivalent pressure/force area of each vane as it passes one of the 4 axial sealing areas. This equivalent area patch travels through the circumference, or the equivalent linear distance resulting from rotating the 'radius of motion' through one complete revolution of 360 degrees.
The elliptic chamber is called a 'cam ring' by the industry. As a vane moves into, say, a maximum extension, and then rotates into a minimum extension region of the cam ring, it passes through a gradual transition from maximum to minimum followed by a gradual transition
back to maximum and this happens twice per revolution. In order to prevent oil under pressure from bypassing the vanes, 4 sealing areas
are created by means of 4 kidney-shaped ports located in the transition areas between minimum and maximum. The spaces between the kidney ports are called the sealing areas, and this port system is located at either or both axial ends of the rotor and vanes. The configuration of the ports and sealing areas is such that the space between any two adjacent vanes is slightly less than the coverage of
the sealing area. In other words, as the vanes rotate through the sealing area, for a small amount of rotation, both adjacent vanes are
within the sealing area. As the rotation continues, the first vane in line leaves the sealing area, but not before the next vane in succession is firmly in the sealing region.
The effect is to prevent the exchange of oil between any two adjacent chambers located on either side of a given sealing area; the oil can only be interchanged by the actual rotation of the rotor and vanes. This is the pumping mechanism of the historical vane pump or motor. The term 'pressure balanced' comes from the fact that the pressure in any chamber is matched by the same pressure in the diametrically opposite chamber, so the hydraulic radial side thrusts (calculated from a 'side view' area) are equal and opposite and cancel; hence the name 'pressure balanced'.
Hydristor details
There are several problems with the historical design. The vane tips radially contact the cam ring elliptic surface and cause significant friction as the rotor and vanes turn. This friction is both pressure dependent and speed-squared dependent due to RPM-squared centripetal forces. The speed is limited to about 6-7,000 RPM and the pressure is limited to about 2,500 PSI. Another pressure-related problem is that the pressure forces oil into the axial clearance between the rotor and the stationary kidney endplate and buckles the device ends, thus increasing fluid blowby and reducing the volumetric efficiency. Typically, vane pumps and motors have two external ports but there are actually two separate sets of chambers which form two separate pumps and motors. The internal plumbing is y-connected to create only two external ports.
For the Hydristor, a 'concentric nesting of endless metal belts' replaces the fixed elliptic cam ring, and all the vane tips contact the inner surface of the belt. The historical friction of the vane tips now causes the belt set to rotate at approximately the same speed as the rotor and vanes, but there is a very slight 'walking behind' of the vane contact area and a very slight speed slippage, which spreads the inner belt wear out and results in much longer belt life. The belt set also confines the pressure and speed-squared forces like a pressure vessel, so the potential speed of operation is very much higher. The result of all this is to raise both the operating pressure and the operating speed, which amounts to a 10 times increase in hydraulic packaging density and a similar decrease in weight per unit power.
Related patents
There are 4 US and international patents on this device:
US6022201 - Hydraulic vane pump with flexible band control - filed May 14, 1997
US6527525 - Hydristor control means - filed Feb 8, 2001
US6612117 - Hydristor heat pump - filed Feb 20, 2002
US7484944 - Rotary vane pump seal - filed Aug 11, 2004
Hydristor efficiency
The fixed relationships of the elliptic cam ring are replaced by 4 curved surface (cupped) movable pistons located at the 4 sealing areas, at 12,3,6, and 9 o'clock like the face of a clock. The curvature of each piston rides on a 'hydrodynamic oil bearing' similar to hydroplaning tires in the wet and this virtually eliminates metal-to-metal contact and friction. The first Hydristor achieved almost 95% efficiency overall and the present designs are in the 97+% range. If the 4 pistons are positioned equidistant from the center of rotation, no oil is expressed or accepted by any of the kidney ports. This is called 'neutral'. For a clockwise rotation, if 3 and 9 pistons are moved inward with 6 and 12 moving outward, all moving an equal amount, then a device displacement in proportion to the piston movement is created. If the 6 and 12 pistons were moved in with 3 and 9 moving equally out, then all the oil flows reverse. Since the piston positions are infinitely variable, any possible displacement between zero and + or - maximum displacement can be created. If two such Hydristor units are packaged face-to-face with the 4 port kidney plate between them, an infinitely variable transmission is formed. This transmission can select any ratio in the forward direction and in the reverse direction without the need for any gears.
Hydristor value
A 2006 article for COE NewsNet discusses a few details related to the design and test of the Hydristor. In this article, the Appendix provides some overarching concepts that are important for evaluation of the Hydristor and related technologies. As an infinitely variable transmission, the Hydristor could help extend car longevity in the same way any other infinitely variable transmission can - by lowering the engine speed to a necessary minimum. As an example, recent Honda Civic IVT needs only 1400 engine RPM to travel at highway speeds.
Example use
Because the Hydristor is more easily packaged as a thin, large diameter device, it is easy to create a torque converter shape
which, with the proper adapters, can fit any existing vehicle. Thus a few 'standard' Hydristors can be made which, with adapters will fit everything making the technology completely retrofittable into the entire highway fleet. As engine rpm is now variably decoupled from wheel speed, the engine can run at its most efficient point at all times. With the very fast response time of the Hydristor, a change in demand allows the engine to quickly hit the desired point without interrupting power flow.
Regenerative braking and hybrid vehicles
The Hydristor torque converter can also accomplish total hydraulic braking and energy storage. Once a cruising speed has been achieved with front and rear Hydristors at some appropriate relative displacements, hydraulic braking is achieved by first simultaneously reducing both front and rear to zero displacement, then leaving the front Hydristor at zero (thus hydro mechanically disconnecting the engine from the torque converter hydraulic circuit and finally beginning to increase rear displacement as a braking function with the braking pressure and flow being directed to a hydraulic accumulator pressure tank. The decaying vehicle speed (kinetic energy), the rising tank pressure and the desired rate of deceleration determined by the driver all are variables which are easily managed by the Hydristor system. The stored braking energy can then be re-used for subsequent re-acceleration. With hydraulic storage capability, the acceleration at highway speeds can result in wheel spin.
The installation of a Hydristor torque converter into a typical car or truck already on the highways will create a hybrid vehicle which will out-perform the current crop of hybrids, thus adding other alternatives to that technology. One benefit of this approach is that the existing fleet can be re-configured, thereby incurring monetary and natural resource savings.
Criticism
There are no independent tests to verify these claims for the Hydristor, and problems of excessive parts wear have yet to be overcome, making practical applications of this device unlikely.
Death of inventor
Thomas E. Kasmer died on October 27, 2011, from a heart attack. The Hydristor web site was shut down effective December 31, 2012, by the person who had been hosting it for him pro bono.
See also
Hydraulics
Hydraulic bicycle
References
External links
Hydristor Corp
Audio (13 May 2006, America's Car Show)
ENR Podcast 20 February 2007 (Engineering News-Record, McGraw Hill)
" CATIA Makes Possible a Solution to Global Warming" COE Article March, 2007
Audio (26 May 2007, America's Car Show), Update
Mechanisms (engineering)
Pumps
Automotive transmission technologies | Hydristor | [
"Physics",
"Chemistry",
"Engineering"
] | 1,964 | [
"Pumps",
"Turbomachinery",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
3,223,727 | https://en.wikipedia.org/wiki/Nipple%20%28plumbing%29 | In plumbing and piping, a nipple is a fitting, consisting of a short piece of pipe, usually provided with a male pipe thread at each end, for connecting two other fittings.
The length of the nipple is usually specified by the overall length including thread. It may have a hexagonal section in the center for a wrench to grasp (sometimes referred to as a "hex nipple"), or it may simply be made from a short piece of pipe (sometimes referred to as a "barrel nipple" or "pipe nipple"). A "close nipple" has no unthreaded area; when screwed tightly between two female fittings, very little of the nipple remains exposed. A close nipple can only be unscrewed by gripping one threaded end with a pipe wrench, which will damage the threads and necessitate replacing the nipple, or by using a specialty tool known as a nipple wrench (also known as an internal pipe wrench), which grips the inside of the pipe and leaves the threads undamaged. When the ends are of two different sizes, it is called a reducer nipple or unequal nipple.
Threads used on nipples are BSP, BSPT, NPT, NPSM and Metric.
Chase nipple
A chase nipple is a short pipe fitting, which creates a path for wires between two electrical boxes. A chase nipple has male threads on one end only. The other end is a hexagon. The chase nipple passes through the knockouts of two boxes, and is secured by an internally threaded ring called a lock nut.
The Chase-Shawmut Company of Boston was the first company to produce chase nipples.
See also
Coupling (piping)
Piping and plumbing fitting
Street elbow
References
Further reading
ASTM A733-03 Standard Specification for Welded and Seamless Carbon Steel and Austenitic Stainless Steel Pipe Nipples.
ASTM B687-99(2005)e1 Standard Specification for Brass, Copper, and Chromium-Plated Pipe Nipples.
ASME B1.20.7 Hose Coupling Screw Threads, Inch. (Quote: The normal sequence of connections, in relation to the direction of flow, is from an externally threaded nipple into an internally threaded coupling)
External links
Plumbing
Piping
ja:ニップル (機械)
ru:Ниппель
tr:Nipel | Nipple (plumbing) | [
"Chemistry",
"Engineering"
] | 479 | [
"Building engineering",
"Chemical engineering",
"Plumbing",
"Construction",
"Mechanical engineering",
"Piping"
] |
3,223,960 | https://en.wikipedia.org/wiki/Allais%20paradox | The Allais paradox is a choice problem designed by to show an inconsistency of actual observed choices with the predictions of expected utility theory. The Allais paradox demonstrates that individuals rarely make rational decisions consistently when required to do so immediately. The independence axiom of expected utility theory, which requires that the preferences of an individual should not change when altering two lotteries by equal proportions, was proven to be violated by the paradox.
Statement of the problem
The Allais paradox arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:

Experiment 1: Gamble 1A pays $1 million with certainty; gamble 1B pays $1 million with probability 89%, nothing with probability 1%, and $5 million with probability 10%.

Experiment 2: Gamble 2A pays nothing with probability 89% and $1 million with probability 11%; gamble 2B pays nothing with probability 90% and $5 million with probability 10%.
Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone, as the expected average outcomes (in millions) are 1.00 for gamble 1A, 1.39 for 1B, 0.11 for 2A and 0.50 for 2B.
However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B. Expected payouts (in millions) are 1.11 for 1A+2A combination, 1.89 for 1B+2B combination, 1.50 for 1A+2B combination and 1.50 for 1B+2A combination.
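The expected-value arithmetic above is easy to reproduce. The short Python sketch below (payoffs in millions of dollars; the lottery encoding is chosen purely for illustration) recomputes the figures quoted in the two preceding paragraphs.

```python
# Expected values of the four Allais gambles (payoffs in millions of dollars).
gambles = {
    "1A": [(1.00, 1.0)],                               # list of (probability, payoff)
    "1B": [(0.89, 1.0), (0.01, 0.0), (0.10, 5.0)],
    "2A": [(0.89, 0.0), (0.11, 1.0)],
    "2B": [(0.90, 0.0), (0.10, 5.0)],
}

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

for name, lottery in gambles.items():
    print(name, round(expected_value(lottery), 2))      # 1.0, 1.39, 0.11, 0.5

# Expected payouts for the four possible combinations of choices:
for pair in [("1A", "2A"), ("1B", "2B"), ("1A", "2B"), ("1B", "2A")]:
    total = sum(expected_value(gambles[g]) for g in pair)
    print("+".join(pair), round(total, 2))              # 1.11, 1.89, 1.5, 1.5
```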
The inconsistency stems from the fact that in expected utility theory, equal outcomes (e.g. $1 million for all gambles) added to each of the two choices should have no effect on the relative desirability of one gamble over the other; equal outcomes should "cancel out". In each experiment the two gambles give the same outcome 89% of the time (both 1A and 1B give an outcome of $1 million with 89% probability, and both 2A and 2B give an outcome of nothing with 89% probability). If this 89% ‘common consequence’ is disregarded, then in each experiment the choice between gambles is the same: an 11% chance of $1 million versus a 10% chance of $5 million.
After re-writing the payoffs and disregarding the 89% chance of winning (equalising the outcome), 1B is left offering a 1% chance of winning nothing and a 10% chance of winning $5 million, while 2B is also left offering a 1% chance of winning nothing and a 10% chance of winning $5 million. Hence, choices 1B and 2B can be seen as the same choice. In the same manner, 1A and 2A can also be seen as the same choice, as the sketch below makes explicit.
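A minimal sketch of this cancellation, using the same illustrative lottery encoding as above, shows that stripping the shared 89% component leaves 1A identical to 2A and 1B identical to 2B.

```python
# After removing the 89% "common consequence" shared by both gambles in an experiment,
# the two experiments present the same underlying choice (illustrative sketch).

def strip_common(lottery, shared_prob, shared_payoff):
    """Remove a (probability, payoff) component that both gambles have in common."""
    reduced = []
    for p, payoff in lottery:
        if payoff == shared_payoff and p >= shared_prob:
            p = round(p - shared_prob, 2)
        if p > 0:
            reduced.append((p, payoff))
    return reduced

one_a = [(1.00, 1.0)]
one_b = [(0.89, 1.0), (0.01, 0.0), (0.10, 5.0)]
two_a = [(0.89, 0.0), (0.11, 1.0)]
two_b = [(0.90, 0.0), (0.10, 5.0)]

print(strip_common(one_a, 0.89, 1.0))  # [(0.11, 1.0)]             same as reduced 2A
print(strip_common(two_a, 0.89, 0.0))  # [(0.11, 1.0)]
print(strip_common(one_b, 0.89, 1.0))  # [(0.01, 0.0), (0.1, 5.0)]  same as reduced 2B
print(strip_common(two_b, 0.89, 0.0))  # [(0.01, 0.0), (0.1, 5.0)]
```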
Allais presented his paradox as a counterexample to the independence axiom.
Independence means that if an agent is indifferent between simple lotteries L1 and L2, the agent is also indifferent between L1 mixed with an arbitrary simple lottery L3 with probability p and L2 mixed with L3 with the same probability p. Violating this principle is known as the "common consequence" problem (or "common consequence" effect). The idea of the common consequence problem is that as the prize offered by L3 increases, L1 and L2 become consolation prizes, and the agent will modify preferences between the two lotteries so as to minimize risk and disappointment in case they do not win the higher prize offered by L3.
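Written formally with the notation just introduced (this is the standard textbook statement of the axiom, not a quotation from Allais), the independence axiom says that for any simple lotteries L1, L2, L3 and any mixing probability p in (0, 1]:

$$ L_1 \sim L_2 \;\Longrightarrow\; p\,L_1 + (1-p)\,L_3 \;\sim\; p\,L_2 + (1-p)\,L_3 . $$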
Difficulties such as this gave rise to a number of alternatives to, and generalizations of, the theory, notably including prospect theory, developed by Daniel Kahneman and Amos Tversky, weighted utility (Chew), rank-dependent expected utility by John Quiggin, and regret theory. The point of these models was to allow a wider range of behavior than was consistent with expected utility theory. Michael Birnbaum performed experimental dissections of the paradox and showed that the results violated the theories of Quiggin, Kahneman, Tversky, and others, but could be explained by his configural weight theory that violates the property of coalescing.
The main point Allais wished to make is that the independence axiom of expected utility theory may not be a valid axiom. The independence axiom states that two identical outcomes within a gamble should be treated as irrelevant to the analysis of the gamble as a whole. However, this overlooks the notion of complementarities: the fact that your choice in one part of a gamble may depend on the possible outcome in the other part of the gamble. In the above choice, 1B, there is a 1% chance of getting nothing. However, this 1% chance of getting nothing also carries with it a great sense of disappointment if you were to pick that gamble and lose, knowing you could have won with 100% certainty if you had chosen 1A. This feeling of disappointment, however, is contingent on the outcome in the other portion of the gamble (i.e. the feeling of certainty). Hence, Allais argues that it is not possible to evaluate portions of gambles or choices independently of the other choices presented, as the independence axiom requires, and thus expected utility theory is a poor judge of our rational action (1B cannot be valued independently of 1A, as the independence or sure-thing principle requires of us). We don't act irrationally when choosing 1A and 2B; rather, expected utility theory is not robust enough to capture such "bounded rationality" choices that in this case arise because of complementarities.
Intuition behind the Allais paradox
Zero effect vs certainty effect
The most common explanation of the Allais paradox is that individuals prefer certainty over a risky outcome even if this defies the expected utility axiom. The certainty effect was popularised by Kahneman and Tversky (1979), and further discussed in Wakker (2010). The certainty effect highlights the appeal of a zero-variance lottery. Recent studies have indicated an alternate explanation to the certainty effect called the zero effect.
The zero effect is a slight adjustment to the certainty effect: it states that individuals will prefer the lottery that carries no possibility of winning nothing (aversion to zero). In prior Allais-style tasks involving two experiments with four lotteries, the only lottery without a possible outcome of zero was the zero-variance lottery, making it impossible to distinguish the impact these two effects have on decision making. Running two additional lotteries allowed the two effects to be distinguished and, hence, their statistical significance to be tested.
In the two-stage experiment, if an individual selected lottery 1A over 1B and then selected lottery 2B over 2A, they conform to the paradox and violate the expected utility axiom. The third-experiment choices of participants who had already violated expected utility theory (in the first two experiments) highlighted the underlying effect causing the Allais paradox. Participants who chose 3B over 3A provided evidence of the certainty effect, while those who chose 3A over 3B showed evidence of the zero effect. Participants who chose (1A, 2B, 3B) deviated from the rational choice only when presented with a zero-variance lottery. Participants who chose (1A, 2B, 3A) deviated from the rational lottery choice to avoid the risk of winning nothing (aversion to zero).
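A toy classification of these response patterns, following the mapping just described, might look like the sketch below. The function name and data layout are illustrative assumptions, not the coding scheme used in the cited study.

```python
# Classify a participant's three choices according to the description above.
# The tuple layout of chosen gambles is an assumption made for illustration.

def classify(choices):
    """choices: a tuple such as ("1A", "2B", "3B")."""
    if choices[:2] != ("1A", "2B"):
        return "does not exhibit the Allais pattern"
    if choices[2] == "3B":
        return "certainty effect: deviates only when a zero-variance lottery is offered"
    if choices[2] == "3A":
        return "zero effect: deviates to avoid any chance of winning nothing"
    return "unclassified"

print(classify(("1A", "2B", "3B")))
print(classify(("1A", "2B", "3A")))
print(classify(("1B", "2B", "3B")))
```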
Findings of the six-lottery experiment indicated that the zero effect was statistically significant, with a p-value < 0.01. The certainty effect was found to be statistically insignificant and not the intuitive explanation for individuals deviating from expected utility theory.
Mathematical proof of inconsistency
Using the values above and a utility function U(W), where W is wealth, we can demonstrate exactly how the paradox manifests.
Because the typical individual prefers 1A to 1B and 2B to 2A, we can conclude that the expected utility of each preferred gamble is greater than the expected utility of the corresponding alternative, that is:
Experiment 1: U($1M) > 0.89 U($1M) + 0.01 U($0) + 0.10 U($5M)

Experiment 2: 0.89 U($0) + 0.11 U($1M) < 0.90 U($0) + 0.10 U($5M)

We can rewrite the latter inequality (Experiment 2) by subtracting 0.89 U($0) from both sides and then adding 0.89 U($1M) to both sides, giving

U($1M) < 0.89 U($1M) + 0.01 U($0) + 0.10 U($5M),

which contradicts the first inequality (Experiment 1), which says that the player prefers the sure thing over the gamble.
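The contradiction can also be checked numerically: no increasing assignment of utilities to $0, $1 million and $5 million satisfies both observed preferences at once. The brute-force sketch below is a hypothetical check written for this article, not part of any published analysis.

```python
# Numerically confirm that no utility assignment U($0) <= U($1M) <= U($5M)
# can satisfy both observed preferences (1A over 1B, and 2B over 2A).
import itertools

grid = [i / 20 for i in range(21)]                     # candidate utility values in [0, 1]
found = False
for u0, u1, u5 in itertools.product(grid, repeat=3):
    if not (u0 <= u1 <= u5):                           # utility increasing in wealth
        continue
    prefers_1A = u1 > 0.89 * u1 + 0.01 * u0 + 0.10 * u5           # Experiment 1
    prefers_2B = 0.89 * u0 + 0.11 * u1 < 0.90 * u0 + 0.10 * u5    # Experiment 2
    if prefers_1A and prefers_2B:
        found = True

print("Any utility function satisfying both preferences?", found)   # expected: False
```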
History
The Allais paradox was first introduced in 1952, when Maurice Allais presented various choice sets to an audience of economists at the Colloques Internationaux du Centre National de la Recherche Scientifique, an economics conference in Paris. Similar to the choice sets above, the audience's decisions were inconsistent with expected utility theory. Despite this result, the audience was not convinced of the validity of Allais's finding and dismissed the paradox as a simple irregularity. Nevertheless, in 1953 Allais published his finding in Econometrica, a peer-reviewed economics journal.
Allais's work received little consideration in the field of behavioural economics until the 1980s; the gradual appearance of the Allais paradox in the literature can be traced through JSTOR.
Historian Floris Heukelom attributes this unpopularity to four distinct reasons. Firstly, Allais's work was not translated from French into English until 1979, when he produced Expected Utility Hypotheses and the Allais Paradox. This 700-page book consists of five parts: an editorial introduction; The 1952 Allais Theory of Choice involving Risk; The neo-Bernoullian Position versus the 1952 Allais Theory; Contemporary Views on the neo-Bernoullian Theory and the Allais Paradox; and Allais' rejoinder: theory and empirical evidence. Various economists and researchers from relevant fields contributed to it, including Oskar Morgenstern, economist and co-founder of the mathematical field of game theory.
Secondly, the field of economics in a behavioural sense was scarcely studied in the 1950s and 60s. The Von Neumann-Morgenstern utility theorem, which assumes that individuals make decisions that maximise utility, had been proven 6 years prior to the Allais paradox, in 1947.
Thirdly, in 1979 Allais's work was noticed and cited by Amos Tversky and Daniel Kahneman in the paper introducing prospect theory, titled Prospect Theory: An Analysis of Decision under Risk. Critiquing expected utility theory and postulating that individuals perceive the prospect of a loss differently from that of a gain, Kahneman and Tversky's research credited the Allais paradox as the "best known counterexample to expected utility theory". Furthermore, Kahneman and Tversky's article became one of the most cited articles in Econometrica, adding to the popularity of the Allais paradox. The Allais paradox was again presented in Tversky and Kahneman's Thinking, Fast and Slow (2011), a New York Times Best Seller.
Finally, Allais's prominence was further promoted when he received the Nobel Prize in Economic Sciences in 1988 for "his pioneering contributions to the theory of markets and efficient utilization of resources", thus bolstering the recognition of the paradox.
Criticisms
Whilst the Allais paradox is considered a counterexample to expected utility theory, Luc Wathieu, Professor of Marketing at Georgetown University, argued that the Allais paradox demonstrates the need for a modified utility function, and is not paradoxical in nature. In A Critique of the Allais Paradox (1993), Wathieu contends that the paradox "does not constitute a valid test of the independence axiom" that is required in expected utility theory. This is because the paradox involves the comparison of preferences between two separate cases, rather than the preferences in one choice set.
Applications
The mismatch between human behaviour and classical economics highlighted by the Allais paradox indicates the need for a remodelled expected utility function that accounts for the violation of the independence axiom. Yoshimura et al. (2013) modified the standard utility function of expected utility theory by including a variable that depends on the state of the individual, terming the result the "dynamic utility function". The findings of this experiment suggested that the switching of preferences apparent in the Allais paradox is due to the state of the individual, including bankruptcy and wealth.
List & Haigh (2005) tests the appearance of the Allais paradox in the behaviours of professional traders through an experiment and compares the results with those of university students. By providing two lotteries similar to those used to prove the Allais paradox, the researchers concluded that those who were professional traders less frequently make choices that are inconsistent with expected utility, as opposed to students.
See also
Ellsberg paradox
Priority heuristic
St. Petersburg paradox
References
Further reading
review
Lewis, Michael. (2017). The Undoing Project: A Friendship That Changed Our Minds. New York: Norton.
Behavioral economics
Behavioral finance
Decision-making paradoxes
Paradoxes in utility theory
fr:Maurice Allais#Le paradoxe d'Allais | Allais paradox | [
"Biology"
] | 2,639 | [
"Behavior",
"Behavioral economics",
"Human behavior",
"Behaviorism",
"Behavioral finance"
] |