Dataset fields: id (int64, 39 to 79M), url (string, 32 to 168 characters), text (string, 7 to 145k characters), source (string, 2 to 105 characters), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
8,223,772
https://en.wikipedia.org/wiki/NASU%20Institute%20of%20Cryobiology%20and%20Cryomedicine%20Issues
The Institute for Problems of Cryobiology and Cryomedicine in Kharkiv is one of the institutes of the National Academy of Sciences of Ukraine, and is the largest institute devoted to cryobiology research in the world. Background Established in 1972, the institute focuses its research on cryoinjury, cryosurgery, cryopreservation, lyophilization and hypothermia. Since 1985 the Institute has published the open access peer-reviewed scientific journal Problems of Cryobiology and Cryomedicine. See also Cryobiology National Academy of Sciences of Ukraine References External links Institute for Problems of Cryobiology and Cryomedicine Problems of Cryobiology and Cryomedicine (journal) Cryobiology Research institutes in Ukraine Universities and institutes established in the Soviet Union Medical research institutes in the Soviet Union Biological research institutes Medical and health organizations based in Ukraine
NASU Institute of Cryobiology and Cryomedicine Issues
[ "Physics", "Chemistry", "Biology" ]
184
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
8,231,140
https://en.wikipedia.org/wiki/Methanol%20reformer
A methanol reformer is a device used in chemical engineering, especially in the area of fuel cell technology, which can produce pure hydrogen gas and carbon dioxide by reacting a methanol and water (steam) mixture. Methanol is transformed into hydrogen and carbon dioxide by pressure and heat and interaction with a catalyst. Technology A mixture of water and methanol with a molar concentration ratio (water:methanol) of 1.0 to 1.5 is pressurized to approximately 20 bar, vaporized and heated to a temperature of 250–360 °C. The hydrogen that is created is separated through the use of pressure swing adsorption or a hydrogen-permeable membrane made of polymer or a palladium alloy. There are two basic methods of conducting this process. The water-methanol mixture is introduced into a tube-shaped reactor where it makes contact with the catalyst. Hydrogen is then separated from the other reactants and products in a later chamber, either by pressure swing adsorption (PSA), or through use of a membrane where the majority of the hydrogen passes through. This method is typically used for larger, non-mobile units. The other process features an integrated reaction chamber and separation membrane, a membrane reactor. In this relatively new approach, the reaction chamber is made to contain high-temperature, hydrogen-permeable membranes that can be formed of refractory metals, palladium alloys, or a PdAg-coated ceramic. The hydrogen is thereby separated out of the reaction chamber as the reaction proceeds. This purifies the hydrogen and, as the reaction continues, increases both the reaction rate and the amount of hydrogen extracted. With either design, not all of the hydrogen is removed from the product gases (raffinate). Since the remaining gas mixture still contains a significant amount of chemical energy, it is often mixed with air and burned to provide heat for the endothermic reforming reaction. Advantages and disadvantages Methanol reformers are used as a component of stationary fuel cell systems or hydrogen fuel cell-powered vehicles (see Reformed methanol fuel cell). A prototype car, the NECAR 5, was introduced by DaimlerChrysler in the year 2000. The primary advantage of a vehicle with a reformer is that it does not need a pressurized gas tank to store hydrogen fuel; instead methanol is stored as a liquid. The logistical implications of this are significant; pressurized hydrogen is difficult to store and produce. Also, this could help ease the public's concern over the danger of hydrogen and thereby make fuel cell-powered vehicles more attractive. However, methanol, like gasoline, is toxic and flammable. The cost of the PdAg membrane and its susceptibility to damage by temperature changes provide obstacles to adoption. While hydrogen power produces energy without CO2, a methanol reformer creates the gas as a byproduct. Methanol (prepared from natural gas) that is used in an efficient fuel cell, however, releases less CO2 into the atmosphere than gasoline in a net analysis. References Emonts, B. et al.: Compact methanol reformer test for fuel-cell-powered light-duty vehicles, J. Power Sources 71 (1998) 288-293 Wiese, W. et al.: Methanol steam reforming in a fuel cell drive system, J. Power Sources 84 (1999) 187-193 Peters, R. et al.: Investigation of a methanol concept considering the particular impact of dynamics and long-term stability for use in a fuel-cell-powered passenger car, J. 
Power Sources 86 (1999) 507-514 See also Steam reforming Partial oxidation PROX Reformed methanol fuel cell Methanol economy Organic solution assisted water electrolysis Hydrogen production Fuel cells Chemical equipment Membrane technology Industrial gases
Methanol reformer
[ "Chemistry", "Engineering" ]
793
[ "Separation processes", "Chemical equipment", "Membrane technology", "Industrial gases", "nan", "Chemical process engineering" ]
472,429
https://en.wikipedia.org/wiki/Non-equilibrium%20thermodynamics
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be described in terms of macroscopic quantities (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions. Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Many systems and processes can, however, be considered to be in equilibrium locally, thus allowing description by currently known equilibrium thermodynamics. Nevertheless, some natural systems and processes remain beyond the scope of equilibrium thermodynamic methods due to the existence of non-variational dynamics, where the concept of free energy is lost. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty in defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium. However, it can be done locally, and the macroscopic entropy will then be given by the integral of the locally defined entropy density. It has been found that many systems far outside global equilibrium still obey the concept of local equilibrium. Scope Difference between equilibrium and non-equilibrium thermodynamics A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail. Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics. This conceptual issue is overcome under the assumption of local equilibrium, which entails that the relationships that hold between macroscopic state variables at equilibrium hold locally, also outside equilibrium. Throughout the past decades, the assumption of local equilibrium has been tested, and found to hold, under increasingly extreme conditions, such as in the shock front of violent explosions, on reacting surfaces, and under extreme thermal gradients. Thus, non-equilibrium thermodynamics provides a consistent framework for modelling not only the initial and final states of a system, but also the evolution of the system in time. Together with the concept of entropy production, this provides a powerful tool in process optimisation, and provides a theoretical foundation for exergy analysis. 
Non-equilibrium state variables The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. When the system is in local equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables. Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. When the system is in local equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables. In reality, these requirements, although strict, have been shown to be fulfilled even under extreme conditions, such as during phase transitions, at reacting interfaces, and in plasma droplets surrounded by ambient air. There are, however, situations where there are appreciable non-linear effects even at the local scale. Overview Some concepts of particular importance for non-equilibrium thermodynamics include time rate of dissipation of energy (Rayleigh 1873, Onsager 1931, also), time rate of entropy production (Onsager 1931), thermodynamic fields, dissipative structure, and non-linear dynamical structure. One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables. One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'. There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics, and generalized thermodynamics, but they are hardly touched on in the present article. Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions According to Wildt (see also Essex), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can be practically nearly ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored. Local equilibrium thermodynamics The terms 'classical irreversible thermodynamics' and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. 
The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure. Even within the thought-frame of classical irreversible thermodynamics, care is needed in choosing the independent variables for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables. Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption (see also Keizer (1987)). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed spatial variation from infinitesimal volume element to adjacent infinitesimal volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. While these demands may appear severely constrictive, it has been found that the assumptions of local equilibrium hold for a wide variety of systems, including reacting interfaces, on the surfaces of catalysts, in confined systems such as zeolites, under extremely large temperature gradients, and even in shock fronts moving at up to six times the speed of sound. In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Lars Onsager in the twentieth. These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation. Local equilibrium thermodynamics with materials with "memory" A further extension of local equilibrium thermodynamics is to allow that materials may have "memory", so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state. Extended irreversible thermodynamics Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy and eventually higher order fluxes. The formalism is well-suited for describing high-frequency processes and materials at small length scales. 
Basic concepts There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (with a molecular laser, vibrational and rotational molecular motion), the requirement for two component 'temperatures' in the one small region of space, precluding local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems. The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy ($A = U - TS$), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy ($G = U + PV - TS$), where the system's properties are determined both by the temperature and by the pressure. Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. While free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as the second law of thermodynamics does for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential. By definition, the entropy ($S$) is a function of the collection of extensive quantities $E_i$. Each extensive quantity $E_i$ has a conjugate intensive variable $I_i$ (a restricted definition of intensive variable is used here, by comparison with the usual definition) so that: $I_i = \partial S / \partial E_i$. We then define the extended Massieu function as follows: $k_{\rm B} M = S - \sum_i I_i E_i$, where $k_{\rm B}$ is the Boltzmann constant, whence $k_{\rm B}\, dM = -\sum_i E_i\, dI_i$. The independent variables are the intensities. Intensities are global values, valid for the system as a whole. When boundaries impose on the system different local conditions (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system. 
It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not. Stationary states, fluctuations, and stability In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state includes the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process. If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323). The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system. If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy. Local thermodynamic equilibrium The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium. Ponderable matter Local thermodynamic equilibrium of matter (see also Keizer (1987)) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; and in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables. One can think here of two 'relaxation times' separated by orders of magnitude. The longer relaxation time is of the order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning and other approaches have to be proposed, see for instance Extended irreversible thermodynamics. 
For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate. Milne's definition in terms of radiative equilibrium Edward A. Milne, thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons. Entropy in evolving systems It is pointed out by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system, and is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking. This point of view shares many points in common with the concept and the use of entropy in continuum thermomechanics, which evolved completely independently of statistical mechanics and maximum-entropy principles. Entropy in non-equilibrium To describe deviation of the thermodynamic system from equilibrium, in addition to constitutive variables that are used to fix the equilibrium state, as was described above, a set of variables called internal variables has been introduced. The equilibrium state is considered to be stable and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable: $\frac{d\xi_i}{dt} = -\frac{1}{\tau_i}\left(\xi_i - \xi_i^{(0)}\right)$, where $\tau_i$ is the relaxation time of the corresponding variable. It is convenient to take the initial values $\xi_i^{(0)}$ equal to zero. The above equation is valid for small deviations from equilibrium; the dynamics of internal variables in the general case is considered by Pokrovskii. The entropy of the system in non-equilibrium is a function of the total set of variables, including the internal variables $\xi_i$. An essential contribution to the thermodynamics of non-equilibrium systems was made by the Nobel Prize winner Ilya Prigogine when he and his collaborators investigated systems of chemically reacting substances. The stationary states of such systems exist due to exchange of both particles and energy with the environment. In section 8 of the third chapter of his book, Prigogine has specified three contributions to the variation of entropy of the considered system at a given volume and constant temperature $T$. 
The increment of entropy can be calculated according to the formula $dS = \frac{\Delta Q}{T} + \frac{1}{T}\sum_j \Xi_j\,\Delta\xi_j + \frac{1}{T}\sum_\alpha \mu_\alpha\,\Delta N_\alpha \;(1)$. The first term on the right hand side of the equation presents a stream of thermal energy into the system; the last term presents a part of a stream of energy coming into the system with the stream of particles of substances $\Delta N_\alpha$ that can be positive or negative, where $\mu_\alpha$ is the chemical potential of substance $\alpha$. The middle term in (1) depicts energy dissipation (entropy production) due to the relaxation of internal variables $\xi_j$. In the case of chemically reacting substances, which was investigated by Prigogine, the internal variables appear to be measures of incompleteness of chemical reactions, that is measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalised to consider any deviation from the equilibrium state as an internal variable, so that we consider the set of internal variables $\xi_j$ in equation (1) to consist of the quantities defining not only degrees of completeness of all chemical reactions occurring in the system, but also the structure of the system, gradients of temperature, difference of concentrations of substances and so on. Flows and forces The fundamental relation of classical equilibrium thermodynamics expresses the change in entropy of a system as a function of the intensive quantities temperature $T$, pressure $p$ and chemical potential $\mu$ and of the differentials of the extensive quantities energy $U$, volume $V$ and particle number $N$: $dS = \frac{1}{T}dU + \frac{p}{T}dV - \frac{\mu}{T}dN$. Following Onsager (1931, I), let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities $U$, $V$ and $N$ and of the intensive macroscopic quantities $T$, $p$ and $\mu$. For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities. Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations. Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities ($J_i$) may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities. In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below. One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described. 
Onsager reciprocal relations Following Section III of Rayleigh (1873), Onsager (1931, I) showed that in the regime where both the flows ($J_i$) are small and the thermodynamic forces ($F_i$) vary slowly, the rate of creation of entropy $\sigma$ is linearly related to the flows: $\sigma = \sum_i J_i \frac{\partial F_i}{\partial x_i}$, and the flows are related to the gradient of the forces, parametrized by a matrix of coefficients conventionally denoted $L$: $J_i = \sum_j L_{ij} \frac{\partial F_j}{\partial x_j}$, from which it follows that: $\sigma = \sum_{i,j} L_{ij} \frac{\partial F_i}{\partial x_i} \frac{\partial F_j}{\partial x_j}$. The second law of thermodynamics requires that the matrix $L$ be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix $L$ is symmetric. This fact is called the Onsager reciprocal relations. The generalization of the above equations for the rate of creation of entropy was given by Pokrovskii. Speculated extremal principles for non-equilibrium processes Until recently, prospects for useful extremal principles in this area have seemed clouded. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy: Chapter 12 of Grandy (2008) is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of this subject. Other writers have also felt that prospects for general global extremal principles are clouded. Such writers include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vásquez (2008), and Šilhavý (1997). There is good experimental evidence that heat convection does not obey extremal principles for time rate of entropy production. Theoretical analysis shows that chemical reactions do not obey extremal principles for the second differential of time rate of entropy production. The development of a general extremal principle seems infeasible in the current state of knowledge. Applications Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes. It is also used to give a description of the dynamics of nanoparticles, which can be out of equilibrium in systems where catalysis and electrochemical conversion are involved. Also, ideas from non-equilibrium thermodynamics and the information theory of entropy have been adapted to describe general economic systems. See also Time crystal Dissipative system Entropy production Extremal principles in non-equilibrium thermodynamics Self-organization Autocatalytic reactions and order creation Self-organizing criticality Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations Boltzmann equation Vlasov equation Maxwell's demon Information entropy Spontaneous symmetry breaking Autopoiesis Maximum power principle References Sources Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . Eu, B.C. (2002). Generalized Thermodynamics. 
The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, . Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations, Wiley-Interscience, London, 1971, . Grandy, W.T. Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. . Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the Hungarian (1967) by E. Gyarmati and W.F. Heinz, Springer, Berlin. Lieb, E.H., Yngvason, J. (1999). 'The physics and mathematics of the second law of thermodynamics', Physics Reports, 310: 1–96. See also this. Further reading Ziegler, Hans (1977): An introduction to Thermomechanics. North Holland, Amsterdam. . Second edition (1983) . Kleidon, A., Lorenz, R.D., editors (2005). Non-equilibrium Thermodynamics and the Production of Entropy, Springer, Berlin. . Prigogine, I. (1955/1961/1967). Introduction to Thermodynamics of Irreversible Processes. 3rd edition, Wiley Interscience, New York. Zubarev D. N. (1974): Nonequilibrium Statistical Thermodynamics. New York, Consultants Bureau. ; . Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, . Zubarev D. N., Morozov V., Ropke G. (1996): Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory. John Wiley & Sons. . Zubarev D. N., Morozov V., Ropke G. (1997): Statistical Mechanics of Nonequilibrium Processes: Relaxation and Hydrodynamic Processes. John Wiley & Sons. . Tuck, Adrian F. (2008). Atmospheric turbulence : a molecular dynamics perspective. Oxford University Press. . Grandy, W.T. Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. . Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, Chichester. . de Groot S.R., Mazur P. (1984). Non-Equilibrium Thermodynamics (Dover). Ramiro Augusto Salazar La Rotta. (2011). The Non-Equilibrium Thermodynamics, Perpetual External links Stephan Herminghaus' Dynamics of Complex Fluids Department at the Max Planck Institute for Dynamics and Self Organization Non-equilibrium Statistical Thermodynamics applied to Fluid Dynamics and Laser Physics - 1992- book by Xavier de Hemptinne. Nonequilibrium Thermodynamics of Small Systems - PhysicsToday.org Into the Cool - 2005 book by Dorion Sagan and Eric D. Schneider, on nonequilibrium thermodynamics and evolutionary theory. "Thermodynamics "beyond" local equilibrium" Branches of thermodynamics
Non-equilibrium thermodynamics
[ "Physics", "Chemistry", "Mathematics" ]
6,211
[ "Non-equilibrium thermodynamics", "Branches of thermodynamics", "Thermodynamics", "Dynamical systems" ]
472,834
https://en.wikipedia.org/wiki/Natural%20nuclear%20fission%20reactor
A natural nuclear fission reactor is a uranium deposit where self-sustaining nuclear chain reactions occur. The idea of a nuclear reactor existing in situ within an ore body moderated by groundwater was briefly explored by Paul Kuroda in 1956. The existence of an extinct or fossil nuclear fission reactor, where self-sustaining nuclear reactions have occurred in the past, is established by analysis of isotope ratios of uranium and of the fission products (and the stable daughter nuclides of those fission products). The first such fossil reactor was discovered in 1972 in Oklo, Gabon, by researchers from the French Alternative Energies and Atomic Energy Commission (CEA) when chemists performing quality control for the French nuclear industry noticed sharp depletions of fissile uranium-235 in gaseous uranium made from Gabonese ore. Oklo is the only location where this phenomenon is known to have occurred, and consists of 16 sites with patches of centimeter-sized ore layers. There, self-sustaining nuclear fission reactions are thought to have taken place approximately 1.7 billion years ago, during the Statherian period of the Paleoproterozoic. Fission in the ore at Oklo continued off and on for a few hundred thousand years and probably never exceeded 100 kW of thermal power. Life on Earth at this time consisted largely of sea-bound algae and the first eukaryotes, living under a 2% oxygen atmosphere. However, even this meager oxygen was likely essential to the concentration of uranium into fissionable ore bodies, as uranium dissolves in water only in the presence of oxygen. Before the planetary-scale production of oxygen by the early photosynthesizers, groundwater-moderated natural nuclear reactors are not thought to have been possible. Discovery of the Oklo fossil reactors In May 1972, at the Tricastin uranium enrichment site at Pierrelatte, France, routine mass spectrometry comparing UF6 samples from the Oklo mine showed a discrepancy in the amount of the uranium-235 isotope. Where the usual concentration of uranium-235 was 0.72%, the Oklo samples showed only 0.60%. This was a significant difference: the samples bore 17% less uranium-235 than expected. This discrepancy required explanation, as all civilian uranium handling facilities must meticulously account for all fissionable isotopes to ensure that none are diverted into the construction of unsanctioned nuclear weapons. Further, as fissile material is the reason for mining uranium in the first place, the missing 17% was also of direct economic concern. Thus the French Alternative Energies and Atomic Energy Commission (CEA) began an investigation. A series of measurements of the relative abundances of the two most significant isotopes of uranium mined at Oklo showed anomalous results compared to those obtained for uranium from other mines. Further investigations into this uranium deposit discovered uranium ore with a uranium-235 concentration as low as 0.44% (almost 40% below the normal value). Subsequent examination of isotopes of fission products such as neodymium and ruthenium also showed anomalies, as described in more detail below. However, the trace radioisotope uranium-234 did not deviate significantly in its concentration from other natural samples. Both depleted uranium and reprocessed uranium will usually have uranium-234 concentrations significantly different from the secular equilibrium value of 55 ppm relative to uranium-238. This is due to uranium-234 being enriched together with uranium-235 and due to it being both consumed by neutron capture and produced from uranium-235 by fast neutron induced (n,2n) reactions in nuclear reactors. 
In Oklo, any possible deviation of uranium-234 concentration present at the time the reactor was active would have long since decayed away. Uranium-236 must have also been present in higher than usual ratios while the reactor was operating, but due to its half life of about 23 million years being almost two orders of magnitude shorter than the time elapsed since the reactor operated, it has decayed back to roughly its original value, essentially nothing and below the detection abilities of current equipment. This loss in uranium-235 is exactly what happens in a nuclear reactor. A possible explanation was that the uranium ore had operated as a natural fission reactor in the distant geological past. Other observations led to the same conclusion, and on 25 September 1972 the CEA announced their finding that self-sustaining nuclear chain reactions had occurred on Earth about 2 billion years ago. Later, other natural nuclear fission reactors were discovered in the region. Fission product isotope signatures Neodymium The neodymium found at Oklo has a different isotopic composition to that of natural neodymium: the latter contains 27% neodymium-142, while that of Oklo contains less than 6%. Neodymium-142 is not produced by fission; the ore contains both fission-produced and natural neodymium. From this content, we can subtract the natural neodymium and gain access to the isotopic composition of neodymium produced by the fission of uranium-235. The two isotopes neodymium-143 and neodymium-145 lead to the formation of neodymium-144 and neodymium-146 by neutron capture. This excess must be corrected (see above) to obtain agreement between this corrected isotopic composition and that deduced from fission yields. Ruthenium Similar investigations into the isotopic ratios of ruthenium at Oklo found a much higher ruthenium-99 concentration than otherwise naturally occurring (27–30% vs. 12.7%). This anomaly could be explained by the decay of technetium-99 to ruthenium-99. In the bar chart, the normal natural isotope signature of ruthenium is compared with that for fission product ruthenium which is the result of the fission of uranium-235 with thermal neutrons. The fission ruthenium has a different isotope signature. The level of ruthenium-100 in the fission product mixture is low because fission produces neutron-rich isotopes which subsequently beta decay, and ruthenium-100 would only be produced in appreciable quantities by double beta decay of the very long-lived (half life on the order of 10^19 years) molybdenum isotope molybdenum-100. On the timescale of when the reactors were in operation, very little (about 0.17 ppb) decay to ruthenium-100 will have occurred. Other pathways of ruthenium-100 production, like neutron capture in ruthenium-99 or technetium-99 (quickly followed by beta decay), can only have occurred during high neutron flux and thus ceased when the fission chain reaction stopped. Mechanism The natural nuclear reactor at Oklo formed when a uranium-rich mineral deposit became inundated with groundwater, which could act as a moderator for the neutrons produced by nuclear fission. A chain reaction took place, producing heat that caused the groundwater to boil away; without a moderator that could slow the neutrons, however, the reaction slowed or stopped. The reactor thus had a negative void coefficient of reactivity, something employed as a safety mechanism in human-made light water reactors. After cooling of the mineral deposit, the water returned, and the reaction restarted, completing a full cycle every 3 hours. The fission reaction cycles continued for hundreds of thousands of years and ended when the ever-decreasing fissile materials, coupled with the build-up of neutron poisons, no longer could sustain a chain reaction. 
Fission of uranium normally produces five known isotopes of the fission-product gas xenon; all five have been found trapped in the remnants of the natural reactor, in varying concentrations. The concentrations of xenon isotopes, found trapped in mineral formations 2 billion years later, make it possible to calculate the specific time intervals of reactor operation: approximately 30 minutes of criticality followed by 2 hours and 30 minutes of cooling down (exponentially decreasing residual decay heat) to complete a 3-hour cycle. Xenon-135 is the strongest known neutron poison. However, it is not produced directly in appreciable amounts but rather as a decay product of iodine-135 (or one of its parent nuclides). Xenon-135 itself is unstable and decays to caesium-135 if not allowed to absorb neutrons. While caesium-135 is relatively long lived, all caesium-135 produced by the Oklo reactor has since decayed further to stable barium-135. Meanwhile, xenon-136, the product of neutron capture in xenon-135, decays extremely slowly via double beta decay, and thus scientists were able to determine the neutronics of this reactor by calculations based on those isotope ratios almost two billion years after it stopped fissioning uranium. A key factor that made the reaction possible was that, at the time the reactor went critical 1.7 billion years ago, the fissile isotope uranium-235 made up about 3.1% of the natural uranium, which is comparable to the amount used in some of today's reactors. (The remaining 96.9% was uranium-238 and roughly 55 ppm uranium-234, neither of which is fissile by slow or moderated neutrons.) Because uranium-235 has a shorter half-life than uranium-238, and thus decays more rapidly, the current abundance of uranium-235 in natural uranium is only 0.72%. A natural nuclear reactor is therefore no longer possible on Earth without heavy water or graphite. The Oklo uranium ore deposits are the only known sites in which natural nuclear reactors existed. Other rich uranium ore bodies would also have had sufficient uranium to support nuclear reactions at that time, but the combination of uranium, water, and physical conditions needed to support the chain reaction was unique, as far as is currently known, to the Oklo ore bodies. It is also possible that other natural nuclear fission reactors were once operating but have since been geologically disturbed so much as to be unrecognizable, possibly even "diluting" the uranium so far that the isotope ratio would no longer serve as a "fingerprint". Only a small part of the continental crust and no part of the oceanic crust reaches the age of the deposits at Oklo or an age during which isotope ratios of natural uranium would have allowed a self-sustaining chain reaction with water as a moderator. Another factor which probably contributed to the start of the Oklo natural nuclear reactor at 2 billion years, rather than earlier, was the increasing oxygen content in the Earth's atmosphere. Uranium is naturally present in the rocks of the earth, and the abundance of fissile uranium-235 was at least 3% or higher at all times prior to reactor startup. Uranium is soluble in water only in the presence of oxygen. Therefore, increasing oxygen levels during the aging of the Earth may have allowed uranium to be dissolved and transported with groundwater to places where a high enough concentration could accumulate to form rich uranium ore bodies. Without the new aerobic environment available on Earth at the time, these concentrations probably could not have taken place. 
It is estimated that nuclear reactions in the uranium in centimeter- to meter-sized veins consumed about five tons of uranium-235 and elevated temperatures there to a few hundred degrees Celsius. Most of the non-volatile fission products and actinides have only moved centimeters in the veins during the last 2 billion years. Studies have suggested this as a useful natural analogue for nuclear waste disposal. The overall mass defect from the fission of five tons of uranium-235 amounts to a few kilograms, roughly 0.1% of the fissioned mass; over its lifetime the reactor released the corresponding amount of energy as heat and neutrinos. If one ignores fission of plutonium (which makes up roughly a third of fission events over the course of normal burnup in modern human-made light water reactors), then the fission products included substantial quantities of technetium-99 (since decayed to ruthenium-99), zirconium-93 (since decayed to niobium-93), caesium-135 (since decayed to barium-135, but the real value is probably lower as its parent nuclide, xenon-135, is a strong neutron poison and will have absorbed neutrons before decaying to caesium-135 in some cases), palladium-107 (since decayed to silver), strontium-90 (long since decayed to zirconium), and caesium-137 (long since decayed to barium). Relation to the atomic fine-structure constant The natural reactor of Oklo has been used to check if the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, samarium-149 captures a neutron to become samarium-150, and since the rate of neutron capture depends on the value of α, the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago. Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies that α was the same too. See also Deep geological repository Geology of Gabon Mounana References Sources External links The natural nuclear reactor at Oklo: A comparison with modern nuclear reactors, Radiation Information Network, April 2005 Oklo Fossil Reactors Nature's Nuclear Reactor (in Hebrew) Nuclear reactors Geography of Gabon Nuclear physics Nuclear fission Radioactive waste repositories Uranium Nuclear reactors by type Nuclear chemistry
Natural nuclear fission reactor
[ "Physics", "Chemistry" ]
2,637
[ "Nuclear fission", "Nuclear chemistry", "nan", "Nuclear physics" ]
472,950
https://en.wikipedia.org/wiki/Surrogate%20key
A surrogate key (or synthetic key, pseudokey, entity identifier, factless key, or technical key) in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data, unlike a natural (or business) key. Definition There are at least two definitions of a surrogate: Surrogate (1) – Hall, Owlett and Todd (1976) A surrogate represents an entity in the outside world. The surrogate is internally generated by the system but is nevertheless visible to the user or application. Surrogate (2) – Wieringa and De Jonge (1991) A surrogate represents an object in the database itself. The surrogate is internally generated by the system and is invisible to the user or application. The Surrogate (1) definition relates to a data model rather than a storage model and is used throughout this article. See Date (1998). An important distinction between a surrogate and a primary key depends on whether the database is a current database or a temporal database. Since a current database stores only currently valid data, there is a one-to-one correspondence between a surrogate in the modeled world and the primary key of the database. In this case the surrogate may be used as a primary key, resulting in the term surrogate key. In a temporal database, however, there is a many-to-one relationship between primary keys and the surrogate. Since there may be several objects in the database corresponding to a single surrogate, we cannot use the surrogate as a primary key; another attribute is required, in addition to the surrogate, to uniquely identify each object. Although Hall et al. (1976) say nothing about this, others have argued that a surrogate should have the following characteristics: the value is never reused the value is system generated the value is not manipulable by the user or application the value contains no semantic meaning the value is not visible to the user or application the value is not composed of several values from different domains. Surrogates in practice In a current database, the surrogate key can be the primary key, generated by the database management system and not derived from any application data in the database. The only significance of the surrogate key is to act as the primary key. It is also possible that the surrogate key exists in addition to the database-generated UUID (for example, an HR number for each employee other than the UUID of each employee). A surrogate key is frequently a sequential number (e.g. a Sybase or SQL Server "identity column", a PostgreSQL or Informix serial, an Oracle or SQL Server SEQUENCE or a column defined with AUTO_INCREMENT in MySQL). Some databases provide UUID/GUID as a possible data type for surrogate keys (e.g. PostgreSQL UUID or SQL Server UNIQUEIDENTIFIER). Having the key independent of all other columns insulates the database relationships from changes in data values or database design (making the database more agile) and guarantees uniqueness. In a temporal database, it is necessary to distinguish between the surrogate key and the business key. Every row would have both a business key and a surrogate key. The surrogate key identifies one unique row in the database, the business key identifies one unique entity of the modeled world. One table row represents a slice of time holding all the entity's attributes for a defined timespan. Those slices depict the whole lifespan of one business entity. 
For example, a table EmployeeContracts may hold temporal information to keep track of contracted working hours. The business key for one contract will be identical (non-unique) in both rows; however, the surrogate key for each row is unique. Some database designers use surrogate keys systematically regardless of the suitability of other candidate keys, while others will use a key already present in the data, if there is one. Some of the alternate names ("system-generated key") describe the way of generating new surrogate values rather than the nature of the surrogate concept. Approaches to generating surrogates include: Universally Unique Identifiers (UUIDs) Globally Unique Identifiers (GUIDs) Object Identifiers (OIDs) Sybase or SQL Server identity column IDENTITY or IDENTITY(n,n) Oracle SEQUENCE, or GENERATED AS IDENTITY (starting from version 12.1) SQL Server SEQUENCE (starting from SQL Server 2012) PostgreSQL or IBM Informix serial MySQL AUTO_INCREMENT SQLite INTEGER PRIMARY KEY (if AUTOINCREMENT is used it will prevent the reuse of numbers that have already been used but are available) AutoNumber data type in Microsoft Access AS IDENTITY GENERATED BY DEFAULT in IBM Db2 and PostgreSQL. Identity column (implemented in DDL) in Teradata Table Sequence when the sequence is calculated by a procedure and a sequence table with fields: id, sequenceName, sequenceValue and incrementValue Advantages Stability Surrogate keys typically do not change while the row exists. This has the following advantages: Applications cannot lose their reference to a row in the database (since the identifier does not change). The primary or natural key data can always be modified, even with databases that do not support cascading updates across related foreign keys. Requirement changes Attributes that uniquely identify an entity might change, which might invalidate the suitability of natural keys. Consider the following example: An employee's network user name is chosen as a natural key. Upon merging with another company, new employees must be inserted. Some of the new network user names create conflicts because their user names were generated independently (when the companies were separate). In these cases, generally a new attribute must be added to the natural key (for example, an original_company column). With a surrogate key, only the table that defines the surrogate key must be changed. With natural keys, all tables (and possibly other, related software) that use the natural key will have to change. Some problem domains do not clearly identify a suitable natural key. Surrogate keys avoid choosing a natural key that might be incorrect. Performance Surrogate keys tend to be a compact data type, such as a four-byte integer. This allows the database to query the single key column faster than it could multiple columns (which are often text, which is slower still). Furthermore, a non-redundant distribution of keys causes the resulting b-tree index to be completely balanced. Surrogate keys are also less expensive to join (fewer columns to compare) than a compound key. Compatibility While using several database application development systems, drivers, and object–relational mapping systems, such as Ruby on Rails or Hibernate, it is much easier to use integer or GUID surrogate keys for every table instead of natural keys in order to support database-system-agnostic operations and object-to-row mapping. 
Uniformity When every table has a uniform surrogate key, some tasks can be easily automated by writing the code in a table-independent way. Validation It is possible to design key-values that follow a well-known pattern or structure which can be automatically verified. For instance, the keys that are intended to be used in some column of some table might be designed to "look differently from" those that are intended to be used in another column or table, thereby simplifying the detection of application errors in which the keys have been misplaced. However, this characteristic of the surrogate keys should never be used to drive any of the logic of the applications themselves, as this would violate the principles of database normalization. Simplicity of Relationships Surrogate keys simplify the creation of foreign key relationships because they only require a single column (as opposed to composite keys - which require multiple columns). When creating a query on the database, forgetting to include all the columns in a composite foreign key when joining tables can lead to unexpected results in the form of an undesired cartesian product. Disadvantages Disassociation The values of generated surrogate keys have no relationship to the real-world meaning of the data held in a row. When inspecting a row holding a foreign key reference to another table using a surrogate key, the meaning of the surrogate key's row cannot be discerned from the key itself. Every foreign key must be joined to see the related data item. If appropriate database constraints have not been set, or data imported from a legacy system where referential integrity was not employed, it is possible to have a foreign-key value that does not correspond to a primary-key value and is therefore invalid. (In this regard, C.J. Date regards the meaninglessness of surrogate keys as an advantage.) To discover such errors, one must perform a query that uses a left outer join between the table with the foreign key and the table with the primary key, showing both key fields in addition to any fields required to distinguish the record; all invalid foreign-key values will have the primary-key column as NULL. The need to perform such a check is so common that Microsoft Access actually provides a "Find Unmatched Query" wizard that generates the appropriate SQL after walking the user through a dialog. (It is, however, not too difficult to compose such queries manually.) "Find Unmatched" queries are typically employed as part of a data cleansing process when inheriting legacy data. Surrogate keys are unnatural for data that is exported and shared. A particular difficulty is that tables from two otherwise identical schemas (for example, a test schema and a development schema) can hold records that are equivalent in a business sense, but have different keys. This can be mitigated by not exporting surrogate keys, except as transient data (most obviously, in executing applications that have a "live" connection to the database). When surrogate keys supplant natural keys, then domain specific referential integrity will be compromised. For example, in a customer master table, the same customer may have multiple records under separate customer IDs, even though the natural key (a combination of customer name, date of birth, and e-mail address) would be unique. To prevent compromise, the natural key of the table must not be supplanted: it must be preserved as a unique constraint, which is implemented as a unique index on the combination of natural-key fields. 
Query optimization Relational databases assume a unique index is applied to a table's primary key. The unique index serves two purposes: (i) to enforce entity integrity, since primary key data must be unique across rows and (ii) to quickly search for rows when queried. Since surrogate keys replace a table's identifying attributes—the natural key—and since the identifying attributes are likely to be those queried, then the query optimizer is forced to perform a full table scan when fulfilling likely queries. The remedy to the full table scan is to apply indexes on the identifying attributes, or sets of them. Where such sets are themselves a candidate key, the index can be a unique index. These additional indexes, however, will take up disk space and slow down inserts and deletes. Normalization Surrogate keys can result in duplicate values in any natural keys. To prevent duplication, one must preserve the role of the natural keys as unique constraints when defining the table using either SQL's CREATE TABLE statement or ALTER TABLE ... ADD CONSTRAINT statement, if the constraints are added as an afterthought. Business process modeling Because surrogate keys are unnatural, flaws can appear when modeling the business requirements. Business requirements, relying on the natural key, then need to be translated to the surrogate key. A strategy is to draw a clear distinction between the logical model (in which surrogate keys do not appear) and the physical implementation of that model, to ensure that the logical model is correct and reasonably well normalised, and to ensure that the physical model is a correct implementation of the logical model. Inadvertent disclosure Proprietary information can be leaked if surrogate keys are generated sequentially. By subtracting a previously generated sequential key from a recently generated sequential key, one could learn the number of rows inserted during that time period. This could expose, for example, the number of transactions or new accounts per period. For example see German tank problem. There are a few ways to overcome this problem: increase the sequential number by a random amount; generate a random key such as a UUID. Inadvertent assumptions Sequentially generated surrogate keys can imply that events with a higher key value occurred after events with a lower value. This is not necessarily true, because such values do not guarantee time sequence as it is possible for inserts to fail and leave gaps which may be filled at a later time. If chronology is important then date and time must be separately recorded. See also Natural key Object identifier Persistent object identifier References Citations Sources Engles, R.W.: (1972), A Tutorial on Data-Base Organization, Annual Review in Automatic Programming, Vol.7, Part 1, Pergamon Press, Oxford, pp. 1–64. Langefors, B (1968). Elementary Files and Elementary File Records, Proceedings of File 68, an IFIP/IAG International Seminar on File Organisation, Amsterdam, November, pp. 89–96. Data modeling Database management systems
Surrogate key
[ "Engineering" ]
2,783
[ "Data modeling", "Data engineering" ]
473,514
https://en.wikipedia.org/wiki/Generalized%20coordinates
In analytical mechanics, generalized coordinates are a set of parameters used to represent the state of a system in a configuration space. These parameters must uniquely define the configuration of the system relative to a reference state. The generalized velocities are the time derivatives of the generalized coordinates of the system. The adjective "generalized" distinguishes these parameters from the traditional use of the term "coordinate" to refer to Cartesian coordinates. An example of a generalized coordinate would be to describe the position of a pendulum using the angle of the pendulum relative to vertical, rather than by the x and y position of the pendulum. Although there may be many possible choices for generalized coordinates for a physical system, they are generally selected to simplify calculations, such as the solution of the equations of motion for the system. If the coordinates are independent of one another, the number of independent generalized coordinates is defined by the number of degrees of freedom of the system. Generalized coordinates are paired with generalized momenta to provide canonical coordinates on phase space. Constraints and degrees of freedom Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent, which means that they are related by one or more constraint equations. Holonomic constraints For a system of N particles in 3D real coordinate space, the position vector of each particle can be written as a 3-tuple in Cartesian coordinates, r_k = (x_k, y_k, z_k). Any of the position vectors can be denoted r_k, where k = 1, 2, ..., N labels the particles. A holonomic constraint is a constraint equation of the form f(r_k, t) = 0 for particle k, which connects all the 3 spatial coordinates of that particle together, so they are not independent. The constraint may change with time, so time will appear explicitly in the constraint equations. At any instant of time, any one coordinate will be determined from the other coordinates, e.g. if x_k and y_k are given, then so is z_k. One constraint equation counts as one constraint. If there are C constraints, each has an equation, so there will be C constraint equations. There is not necessarily one constraint equation for each particle, and if there are no constraints on the system then there are no constraint equations. So far, the configuration of the system is defined by 3N quantities, but C coordinates can be eliminated, one coordinate from each constraint equation. The number of independent coordinates is n = 3N − C. (In D dimensions, the original configuration would need ND coordinates, and the reduction by constraints means n = ND − C.) It is ideal to use the minimum number of coordinates needed to define the configuration of the entire system, while taking advantage of the constraints on the system. These quantities are known as generalized coordinates in this context, denoted q_j(t) for j = 1, 2, ..., n. It is convenient to collect them into an n-tuple q(t) = (q_1(t), q_2(t), ..., q_n(t)), which is a point in the configuration space of the system. They are all independent of one another, and each is a function of time. Geometrically they can be lengths along straight lines, or arc lengths along curves, or angles; not necessarily Cartesian coordinates or other standard orthogonal coordinates. There is one for each degree of freedom, so the number of generalized coordinates equals the number of degrees of freedom, n. 
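For reference, the standard relations behind this construction, in the notation just introduced (with m_k the mass of particle k), are

\[
\mathbf{r}_k = \mathbf{r}_k(q_1, q_2, \ldots, q_n, t), \qquad k = 1, 2, \ldots, N,
\]
\[
\mathbf{v}_k = \frac{\mathrm{d}\mathbf{r}_k}{\mathrm{d}t}
             = \sum_{j=1}^{n} \frac{\partial \mathbf{r}_k}{\partial q_j}\,\dot{q}_j
             + \frac{\partial \mathbf{r}_k}{\partial t},
\qquad
T = \frac{1}{2}\sum_{k=1}^{N} m_k\, \mathbf{v}_k \cdot \mathbf{v}_k,
\]

where \(\dot{q}_j = \mathrm{d}q_j/\mathrm{d}t\) are the generalized velocities. These expressions underlie the discussion of velocities and kinetic energy that follows.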
A degree of freedom corresponds to one quantity that changes the configuration of the system, for example the angle of a pendulum, or the arc length traversed by a bead along a wire. If it is possible to find from the constraints as many independent variables as there are degrees of freedom, these can be used as generalized coordinates. The position vector of particle is a function of all the generalized coordinates (and, through them, of time), and the generalized coordinates can be thought of as parameters associated with the constraint. The corresponding time derivatives of are the generalized velocities, (each dot over a quantity indicates one time derivative). The velocity vector is the total derivative of with respect to time and so generally depends on the generalized velocities and coordinates. Since we are free to specify the initial values of the generalized coordinates and velocities separately, the generalized coordinates and velocities can be treated as independent variables. Non-holonomic constraints A mechanical system can involve constraints on both the generalized coordinates and their derivatives. Constraints of this type are known as non-holonomic. First-order non-holonomic constraints have the form An example of such a constraint is a rolling wheel or knife-edge that constrains the direction of the velocity vector. Non-holonomic constraints can also involve next-order derivatives such as generalized accelerations. Physical quantities in generalized coordinates Kinetic energy The total kinetic energy of the system is the energy of the system's motion, defined as in which · is the dot product. The kinetic energy is a function only of the velocities , not the coordinates themselves. By contrast an important observation is which illustrates the kinetic energy is in general a function of the generalized velocities, coordinates, and time if the constraints also vary with time, so . In the case the constraints on the particles are time-independent, then all partial derivatives with respect to time are zero, and the kinetic energy is a homogeneous function of degree 2 in the generalized velocities. Still for the time-independent case, this expression is equivalent to taking the line element squared of the trajectory for particle , and dividing by the square differential in time, , to obtain the velocity squared of particle . Thus for time-independent constraints it is sufficient to know the line element to quickly obtain the kinetic energy of particles and hence the Lagrangian. It is instructive to see the various cases of polar coordinates in 2D and 3D, owing to their frequent appearance. In 2D polar coordinates , in 3D cylindrical coordinates , in 3D spherical coordinates , Generalized momentum The generalized momentum "canonically conjugate to" the coordinate is defined by If the Lagrangian does not depend on some coordinate , then it follows from the Euler–Lagrange equations that the corresponding generalized momentum will be a conserved quantity, because the time derivative is zero implying the momentum is a constant of the motion; Examples Bead on a wire For a bead sliding on a frictionless wire subject only to gravity in 2d space, the constraint on the bead can be stated in the form , where the position of the bead can be written , in which is a parameter, the arc length along the curve from some point on the wire. This is a suitable choice of generalized coordinate for the system. 
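For instance, if the wire is parameterized by its arc length s, the bead example just introduced takes the form

\[
f(\mathbf{r}) = 0, \qquad \mathbf{r} = \mathbf{r}(s) = \big(x(s),\, y(s)\big),
\]

and, because s is an arc length, \(\left|\mathrm{d}\mathbf{r}/\mathrm{d}s\right| = 1\), so the velocity and kinetic energy reduce to

\[
\mathbf{v} = \dot{s}\,\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}, \qquad
T = \tfrac{1}{2}\, m\, \dot{s}^{\,2}.
\]

(This is a sketch in the notation above; for a wire that flexes with time the position becomes \(\mathbf{r} = \mathbf{r}(s, t)\) and the partial time derivative enters the velocity as in the general expression.)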
Only one coordinate is needed instead of two, because the position of the bead can be parameterized by one number, , and the constraint equation connects the two coordinates and ; either one is determined from the other. The constraint force is the reaction force the wire exerts on the bead to keep it on the wire, and the non-constraint applied force is gravity acting on the bead. Suppose the wire changes its shape with time, by flexing. Then the constraint equation and position of the particle are respectively which now both depend on time due to the changing coordinates as the wire changes its shape. Notice time appears implicitly via the coordinates and explicitly in the constraint equations. Simple pendulum The relationship between the use of generalized coordinates and Cartesian coordinates to characterize the movement of a mechanical system can be illustrated by considering the constrained dynamics of a simple pendulum. A simple pendulum consists of a mass hanging from a pivot point so that it is constrained to move on a circle of radius . The position of the mass is defined by the coordinate vector measured in the plane of the circle such that is in the vertical direction. The coordinates and are related by the equation of the circle that constrains the movement of . This equation also provides a constraint on the velocity components, Now introduce the parameter , that defines the angular position of from the vertical direction. It can be used to define the coordinates and , such that The use of to define the configuration of this system avoids the constraint provided by the equation of the circle. Notice that the force of gravity acting on the mass is formulated in the usual Cartesian coordinates, where is the acceleration due to gravity. The virtual work of gravity on the mass as it follows the trajectory is given by The variation can be computed in terms of the coordinates and , or in terms of the parameter , Thus, the virtual work is given by Notice that the coefficient of is the -component of the applied force. In the same way, the coefficient of is known as the generalized force along generalized coordinate , given by To complete the analysis consider the kinetic energy of the mass, using the velocity, so, D'Alembert's form of the principle of virtual work for the pendulum in terms of the coordinates and are given by, This yields the three equations in the three unknowns, , and . Using the parameter , those equations take the form which becomes, or This formulation yields one equation because there is a single parameter and no constraint equation. This shows that the parameter is a generalized coordinate that can be used in the same way as the Cartesian coordinates and to analyze the pendulum. Double pendulum The benefits of generalized coordinates become apparent with the analysis of a double pendulum. For the two masses , let define their two trajectories. These vectors satisfy the two constraint equations, and The formulation of Lagrange's equations for this system yields six equations in the four Cartesian coordinates and the two Lagrange multipliers that arise from the two constraint equations. Now introduce the generalized coordinates that define the angular position of each mass of the double pendulum from the vertical direction. In this case, we have The force of gravity acting on the masses is given by, where is the acceleration due to gravity. 
Therefore, the virtual work of gravity on the two masses as they follow the trajectories is given by The variations can be computed to be Thus, the virtual work is given by and the generalized forces are Compute the kinetic energy of this system to be Euler–Lagrange equation yield two equations in the unknown generalized coordinates given by and The use of the generalized coordinates provides an alternative to the Cartesian formulation of the dynamics of the double pendulum. Spherical pendulum For a 3D example, a spherical pendulum with constant length free to swing in any angular direction subject to gravity, the constraint on the pendulum bob can be stated in the form where the position of the pendulum bob can be written in which are the spherical polar angles because the bob moves in the surface of a sphere. The position is measured along the suspension point to the bob, here treated as a point particle. A logical choice of generalized coordinates to describe the motion are the angles . Only two coordinates are needed instead of three, because the position of the bob can be parameterized by two numbers, and the constraint equation connects the three coordinates so any one of them is determined from the other two. Generalized coordinates and virtual work The principle of virtual work states that if a system is in static equilibrium, the virtual work of the applied forces is zero for all virtual movements of the system from this state, that is, for any variation . When formulated in terms of generalized coordinates, this is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is . Let the forces on the system be be applied to points with Cartesian coordinates , then the virtual work generated by a virtual displacement from the equilibrium position is given by where denote the virtual displacements of each point in the body. Now assume that each depends on the generalized coordinates then and The terms are the generalized forces acting on the system. Kane shows that these generalized forces can also be formulated in terms of the ratio of time derivatives, where is the velocity of the point of application of the force . In order for the virtual work to be zero for an arbitrary virtual displacement, each of the generalized forces must be zero, that is See also Canonical coordinates Hamiltonian mechanics Virtual work Orthogonal coordinates Curvilinear coordinates Mass matrix Stiffness matrix Generalized forces Notes References Bibliography of cited references Dynamical systems Rigid bodies Mechanical quantities
Generalized coordinates
[ "Physics", "Mathematics" ]
2,434
[ "Mechanical quantities", "Physical quantities", "Quantity", "Classical mechanics", "Lagrangian mechanics", "Mechanics", "Dynamical systems" ]
474,105
https://en.wikipedia.org/wiki/Inerting%20system
An inerting system decreases the probability of combustion of flammable materials stored in a confined space. The most common such system is a fuel tank containing a combustible liquid, such as gasoline, diesel fuel, aviation fuel, jet fuel, or rocket propellant. After being fully filled, and during use, there is a space above the fuel, called the ullage, that contains evaporated fuel mixed with air, which contains the oxygen necessary for combustion. Under the right conditions this mixture can ignite. An inerting system replaces the air with a gas that cannot support combustion, such as nitrogen. Principle of operation Three elements are required to initiate and sustain combustion in the ullage: an ignition source (heat), fuel, and oxygen. Combustion may be prevented by reducing any one of these three elements. In many cases there is no ignition source, e.g. storage tanks. If the presence of an ignition source can not be prevented, as is the case with most tanks that feed fuel to internal combustion engines, then the tank may be made non-ignitable by progressively adding an inert gas to the ullage as the fuel is consumed. At present carbon dioxide or nitrogen are used almost exclusively, although some systems use nitrogen-enriched air, or steam. Using these inert gases reduces the oxygen concentration of the ullage to below the combustion threshold. Oil tankers Oil tankers fill the empty space above the oil cargo with inert gas to prevent fire or explosion of hydrocarbon vapors. Oil vapors cannot burn in air with less than 11% oxygen content. The inert gas may be supplied by cooling and scrubbing the flue gas produced by the ship's boilers. Where diesel engines are used, the exhaust gas may contain too much oxygen so fuel-burning inert gas generators may be installed. One-way valves are installed in process piping to the tanker spaces to prevent volatile hydrocarbon vapors or mist from entering other equipment. Inert gas systems have been required on oil tankers since the SOLAS regulations of 1974. The International Maritime Organization (IMO) publishes technical standard IMO-860 describing the requirements for inert gas systems. Other types of cargo such as bulk chemicals may also be carried in inerted tanks, but the inerting gas must be compatible with the chemicals used. Aircraft Fuel tanks for combat aircraft have long been inerted, as well as being self-sealing, but those for military cargo aircraft and civilian transport category aircraft usually were not. Early applications using nitrogen were on the Handley Page Halifax III and VIII, Short Stirling, and Avro Lincoln B.II, which incorporated inerting systems from around 1944. Cleve Kimmel first proposed an inerting system to passenger airlines in the early 1960s. His proposed system for passenger aircraft would have used nitrogen. However, the US Federal Aviation Administration (FAA) did not mandate installation of an inerting system at that time. Early versions of Kimmel's system weighed 2,000 pounds. The FAA focused on keeping ignition sources out of the fuel tanks. The FAA did not formally propose lightweight inerting systems for commercial jets until the 1996 crash of TWA Flight 800, a Boeing 747. The crash was caused by an explosion in the center wing fuel tank. This tank is normally used only on very long flights, and little fuel was present in the tank at the time of the explosion. A small amount of fuel in a tank is more dangerous than a large amount, since it takes less heat to raise the temperature of the remaining fuel. 
This causes the ullage fuel-to-air ratio to increase and exceed the lower flammability limit. A small amount of fuel in the tank leaves pumps on the floor of the tank exposed to the air-fuel mixture, and an electric pump is a potential ignition source. The explosion of a Thai Airways International Boeing 737 in 2001 and a Philippine Airlines 737 in 1990 also occurred in tanks that had a small amount of residual fuel. These three explosions occurred on warm days, in the center wing tank (CWT) that is within the contours of the fuselage. These fuel tanks are located in the vicinity of external equipment that inadvertently heats the fuel tanks. The National Transportation Safety Board's (NTSB) final report on the crash of the TWA 747 concluded "The fuel air vapor in the ullage of the TWA flight 800 CWT was flammable at the time of the accident". NTSB identified "Elimination of Explosive Mixture in Fuel tanks in Transport Category Aircraft" as Number 1 item on its Most Wanted List in 1997. After the TWA Flight 800 crash, a 2001 report by an FAA committee stated that U.S. airlines would have to spend US$35 billion to retrofit their existing aircraft fleets with inerting systems that might prevent such explosions. However, another FAA group developed a nitrogen-enriched air (NEA) based inerting system prototype that operated on compressed air supplied by the aircraft's propulsive engines. Also, the FAA determined that the fuel tank could be rendered inert by reducing the ullage oxygen concentration to 12% rather than the previously accepted threshold of 9 to 10%. Boeing commenced testing a derivative system of their own, performing successful test flights in 2003 with several Boeing 747 aircraft. The new, simplified inerting system was originally suggested to the FAA through public comment. It uses a hollow fiber membrane material that separates supplied air into nitrogen-enriched air (NEA) and oxygen enriched air (OEA). This technology is extensively used for generating oxygen-enriched air for medical purposes. It uses a membrane that preferentially allows the nitrogen molecule (molecular weight 28) to pass through it but not the oxygen molecule (molecular weight 32). Unlike the inerting systems on military aircraft, this inerting system runs continuously to reduce fuel vapor flammability whenever the aircraft's engines are running. The goal is to reduce oxygen content within the fuel tank to 12%, lower than normal atmospheric oxygen content of 21%, but higher than that of inerted military aircraft fuel tanks, which have a target of 9% oxygen. Inerting in military aircraft is typically accomplished by ventilating fuel-vapor laden ullage gas out of the tank and into the atmosphere. FAA rules After what it said was seven years of investigation, the FAA proposed a rule in November 2005, in response to an NTSB recommendation, which would require airlines to "reduce the flammability levels of fuel tank vapors on the ground and in the air". This was a shift from the previous 40 years of policy in which the FAA focused only on reducing possible sources of ignition of fuel tank vapors. The FAA issued the final rule on 21 July 2008. The rule amends regulations applicable to the design of new airplanes (14CFR§25.981), and introduces new regulations for continued safety (14CFR§26.31–39), Operating Requirements for Domestic Operations (14CFR§121.1117) and Operating Requirements for Foreign Air Carriers (14CFR§129.117). 
The regulations apply to airplanes certificated after 1 January 1958 of passenger capacity of 30 or more or payload capacity of greater than 7500 pounds. The regulations are performance based and do not require the implementation of a particular method. The proposed rule would affect all future fixed-wing aircraft designs (passenger capacity greater than 30), and require a retrofit of more than 3,200 Airbus and Boeing aircraft with center wing fuel tanks, over nine years. The FAA had initially planned to also order installation on cargo aircraft, but this was removed from the order by the Bush administration. Additionally, regional jets and smaller commuter planes would not be subject to the rule, because the FAA does not consider them at high risk for a fuel-tank explosion. The FAA estimated the cost of the program at US$808 million over the next 49 years, including US$313 million to retrofit the existing fleet. It compared this cost to an estimated US$1.2 billion "cost to society" from a large airliner exploding in mid-air. The proposed rule came at a time when nearly half of the U.S. airlines' capacity was on carriers that were in bankruptcy. The order affects aircraft whose air conditioning units have a possibility of heating up what can be considered a normally empty center wing fuel tank. Some Airbus A320 and Boeing 747 aircraft are slated for "early action". Regarding new aircraft designs, the Airbus A380 does not have a center wing fuel tank and is therefore exempt, and the Boeing 787 has a fuel tank safety system that already complies with the proposed rule. The FAA has stated that there have been four fuel tank explosions in the previous 16 years—two on the ground, and two in the air—and that based on this statistic and on the FAA's estimate that one such explosion would happen every 60 million hours of flight time, about 9 such explosions will probably occur in the next 50 years. The inerting systems will probably prevent 8 of those 9 probable explosions, the FAA said. Before the inerting system rule was proposed, Boeing stated that it would install its own inerting system on airliners it manufactures beginning in 2005. Airbus had argued that its planes' electrical wiring made the inerting system an unnecessary expense. , the FAA had a pending rule to increase the standards of on board inerting systems again. New technologies are being developed by others to provide fuel tank inerting: The On-Board Inert Gas Generation System (OBIGGS) system, tested in 2004 by the FAA and NASA, with an opinion written by the FAA in 2005. This system is currently in use by many military aircraft types, including the C-17. This system provides the level of safety that the proposed increase in standards by the proposed FAA rules has been written around. Critics of this system cite the high maintenance cost reported by the military. Three independent research and development firms have proposed new technologies in response to Research & Development grants by the FAA and SBA. The focus of these grants is to develop a system that is superior to OBIGGS that can replace classic inerting methods. None of these approaches has been validated in the general scientific community, nor have these efforts produced commercially available products. All the firms have issued press releases or given non-peer reviewed talks. Other methods Another method in current use to inert fuel tanks is an ullage system. 
The FAA has decided that the added weight of an ullage system makes it impractical for implementation in the aviation field. Some U.S. military aircraft still use nitrogen based foam inerting systems, and some companies will ship containers of fuel with an ullage system across rail transportation routes. See also Dilution (equation) TWA Flight 800 Oxygen reduction system Tank blanketing Hollow fiber membrane References Sources "FAA to Order Long-Delayed Fixes To Cut Airliner Fuel-Tank Danger", Wall Street Journal, 15 November 2005, page D5 External links Hollow Fiber Gas Separation Aviation safety Explosion protection Safety equipment Industrial gases
Inerting system
[ "Chemistry", "Engineering" ]
2,278
[ "Explosion protection", "Combustion engineering", "Industrial gases", "Explosions", "Chemical process engineering" ]
474,363
https://en.wikipedia.org/wiki/Glycosylation
Glycosylation is the reaction in which a carbohydrate (or 'glycan'), i.e. a glycosyl donor, is attached to a hydroxyl or other functional group of another molecule (a glycosyl acceptor) in order to form a glycoconjugate. In biology (but not always in chemistry), glycosylation usually refers to an enzyme-catalysed reaction, whereas glycation (also 'non-enzymatic glycation' and 'non-enzymatic glycosylation') may refer to a non-enzymatic reaction. Glycosylation is a form of co-translational and post-translational modification. Glycans serve a variety of structural and functional roles in membrane and secreted proteins. The majority of proteins synthesized in the rough endoplasmic reticulum undergo glycosylation. Glycosylation is also present in the cytoplasm and nucleus as the O-GlcNAc modification. Aglycosylation is a feature of engineered antibodies to bypass glycosylation. Five classes of glycans are produced: N-linked glycans attached to a nitrogen of asparagine or arginine side-chains. N-linked glycosylation requires participation of a special lipid called dolichol phosphate. O-linked glycans attached to the hydroxyl oxygen of serine, threonine, tyrosine, hydroxylysine, or hydroxyproline side-chains, or to oxygens on lipids such as ceramide. Phosphoglycans linked through the phosphate of a phosphoserine. C-linked glycans, a rare form of glycosylation where a sugar is added to a carbon on a tryptophan side-chain. Aloin is one of the few naturally occurring C-glycosides. Glypiation, which is the addition of a GPI anchor that links proteins to lipids through glycan linkages. Purpose Glycosylation is the process by which a carbohydrate is covalently attached to a target macromolecule, typically proteins and lipids. This modification serves various functions. For instance, some proteins do not fold correctly unless they are glycosylated. In other cases, proteins are not stable unless they contain oligosaccharides linked at the amide nitrogen of certain asparagine residues. The influence of glycosylation on the folding and stability of glycoproteins is twofold. Firstly, the highly soluble glycans may have a direct physicochemical stabilisation effect. Secondly, N-linked glycans mediate a critical quality control check point in glycoprotein folding in the endoplasmic reticulum. Glycosylation also plays a role in cell-to-cell adhesion (a mechanism employed by cells of the immune system) via sugar-binding proteins called lectins, which recognize specific carbohydrate moieties. Glycosylation is an important parameter in the optimization of many glycoprotein-based drugs such as monoclonal antibodies. Glycosylation also underpins the ABO blood group system. It is the presence or absence of glycosyltransferases which dictates which blood group antigens are presented and hence what antibody specificities are exhibited. This immunological role may well have driven the diversification of glycan heterogeneity and creates a barrier to zoonotic transmission of viruses. In addition, glycosylation is often used by viruses to shield the underlying viral protein from immune recognition. A significant example is the dense glycan shield of the envelope spike of the human immunodeficiency virus. Overall, glycosylation needs to be understood in light of the likely evolutionary selection pressures that have shaped it. In one model, diversification can be considered purely as a result of endogenous functionality (such as cell trafficking). 
However, it is more likely that diversification is driven by evasion of pathogen infection mechanisms (e.g. Helicobacter attachment to terminal saccharide residues) and that diversity within the multicellular organism is then exploited endogenously. Glycosylation can also modulate the thermodynamic and kinetic stability of proteins. Glycoprotein diversity Glycosylation increases diversity in the proteome, because almost every aspect of glycosylation can be modified, including: Glycosidic bond—the site of glycan linkage Glycan composition—the types of sugars that are linked to a given protein Glycan structure—can be unbranched or branched chains of sugars Glycan length—can be short- or long-chain oligosaccharides Mechanisms There are various mechanisms for glycosylation, although most share several common features: Glycosylation, unlike glycation, is an enzymatic process. Indeed, glycosylation is thought to be the most complex post-translational modification, because of the large number of enzymatic steps involved. The donor molecule is often an activated nucleotide sugar. The process is non-templated (unlike DNA transcription or protein translation); instead, the cell relies on segregating enzymes into different cellular compartments (e.g., endoplasmic reticulum, cisternae in Golgi apparatus). Therefore, glycosylation is a site-specific modification. Types N-linked glycosylation N-linked glycosylation is a very prevalent form of glycosylation and is important for the folding of many eukaryotic glycoproteins and for cell–cell and cell–extracellular matrix attachment. The N-linked glycosylation process occurs in eukaryotes in the lumen of the endoplasmic reticulum and widely in archaea, but very rarely in bacteria. In addition to their function in protein folding and cellular attachment, the N-linked glycans of a protein can modulate a protein's function, in some cases acting as an on/off switch. O-linked glycosylation O-linked glycosylation is a form of glycosylation that occurs in eukaryotes in the Golgi apparatus, but also occurs in archaea and bacteria. Phosphoserine glycosylation Xylose, fucose, mannose, and GlcNAc phosphoserine glycans have been reported in the literature. Fucose and GlcNAc have been found only in Dictyostelium discoideum, mannose in Leishmania mexicana, and xylose in Trypanosoma cruzi. Mannose has recently been reported in a vertebrate, the mouse, Mus musculus, on the cell-surface laminin receptor alpha dystroglycan. It has been suggested this rare finding may be linked to the fact that alpha dystroglycan is highly conserved from lower vertebrates to mammals. C-mannosylation A mannose sugar is added to the first tryptophan residue in the sequence W–X–X–W (W indicates tryptophan; X is any amino acid). A C-C bond is formed between the first carbon of the alpha-mannose and the second carbon of the tryptophan. However, not all the sequences that have this pattern are mannosylated. It has been established that, in fact, only two thirds are mannosylated and that there is a clear preference for the second amino acid to be one of the polar ones (Ser, Ala, Gly and Thr) in order for mannosylation to occur. Recently there has been a breakthrough in predicting whether or not a sequence will have a mannosylation site, with an accuracy of 93%, as opposed to the 67% accuracy obtained if only the WXXW motif is considered. Thrombospondins are among the proteins most commonly modified in this way. 
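As a small illustration of motif-based prediction, the sketch below scans a protein sequence for candidate C-mannosylation sites matching the W–X–X–W pattern described above; the sample sequence is made up, and, as noted above, a real predictor would weigh additional sequence context rather than the motif alone.

    import re

    def candidate_c_mannosylation_sites(sequence: str):
        """Return 0-based positions of tryptophans that start a W-X-X-W motif."""
        # A zero-width lookahead is used so that overlapping motifs are all reported.
        return [m.start() for m in re.finditer(r"(?=W..W)", sequence)]

    protein = "MKTWAAWGSWLWPRW"  # hypothetical sequence
    print(candidate_c_mannosylation_sites(protein))
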
However, there is another group of proteins that undergo C-mannosylation, type I cytokine receptors. C-mannosylation is unusual because the sugar is linked to a carbon rather than a reactive atom such as nitrogen or oxygen. In 2011, the first crystal structure of a protein containing this type of glycosylation was determined—that of human complement component 8. Currently it is established that 18% of secreted and transmembrane human proteins undergo C-mannosylation. Numerous studies have shown that this process plays an important role in the secretion of thrombospondin type 1 repeat-containing proteins, which are retained in the endoplasmic reticulum if they do not undergo C-mannosylation. This explains why one type I cytokine receptor, the erythropoietin receptor, remained in the endoplasmic reticulum if it lacked C-mannosylation sites. Formation of GPI anchors (glypiation) Glypiation is a special form of glycosylation that features the formation of a GPI anchor. In this kind of glycosylation a protein is attached to a lipid anchor, via a glycan chain. (See also prenylation.) Chemical glycosylation Glycosylation can also be effected using the tools of synthetic organic chemistry. Unlike the biochemical processes, synthetic glycochemistry relies heavily on protecting groups (e.g. the 4,6-O-benzylidene) in order to achieve desired regioselectivity. The other challenge of chemical glycosylation is stereoselectivity: each glycosidic linkage has two possible stereo-outcomes, α/β or cis/trans. Generally, the α- or cis-glycoside is more challenging to synthesize. New methods have been developed based on solvent participation or the formation of bicyclic sulfonium ions as chiral-auxiliary groups. Non-enzymatic glycosylation Non-enzymatic glycosylation is also known as glycation or non-enzymatic glycation. It is a spontaneous reaction and a type of post-translational modification of proteins, meaning it alters their structure and biological activity. It is the covalent attachment between the carbonyl group of a reducing sugar (mainly glucose and fructose) and the amino acid side chain of the protein. In this process the intervention of an enzyme is not needed. It takes place across and close to the water channels and the protruding tubules. At first, the reaction forms temporary molecules which later undergo different reactions (Amadori rearrangements, Schiff base reactions, Maillard reactions, crosslinking, ...) and form permanent residues known as advanced glycation end-products (AGEs). AGEs accumulate in long-lived extracellular proteins such as collagen, which is the most glycated and structurally abundant protein, especially in humans. Also, some studies have shown lysine may trigger spontaneous non-enzymatic glycosylation. Role of AGEs AGEs have a number of effects. These molecules play an important role especially in nutrition, where they are responsible for the brownish color and the aromas and flavors of some foods. It has been demonstrated that cooking at high temperature results in various food products having high levels of AGEs. Having elevated levels of AGEs in the body has a direct impact on the development of many diseases. It is directly implicated in type 2 diabetes mellitus, which can lead to many complications such as cataracts, renal failure, and heart damage. AGEs also reduce skin elasticity, which is an important symptom of aging. 
They are also the precursors of many hormones and regulate and modify their receptor mechanisms at the DNA level. Deglycosylation There are different enzymes that remove glycans from proteins or remove part of the sugar chain. α2-3,6,8,9-Neuraminidase (from Arthrobacter ureafaciens): cleaves all non-reducing terminal branched and unbranched sialic acids. β1,4-Galactosidase (from Streptococcus pneumoniae): releases only β1,4-linked, nonreducing terminal galactose from complex carbohydrates and glycoproteins. β-N-Acetylglucosaminidase (from Streptococcus pneumoniae): cleaves all non-reducing terminal β-linked N-acetylglucosamine residues from complex carbohydrates and glycoproteins. endo-α-N-Acetylgalactosaminidase (O-glycosidase from Streptococcus pneumoniae): removes O-glycosylation. This enzyme cleaves serine- or threonine-linked unsubstituted Galβ1,3GalNAc. PNGase F: cleaves asparagine-linked oligosaccharides unless α1,3-core fucosylated. Regulation of Notch signalling Notch signalling is a cell signalling pathway whose role is, among many others, to control the cell differentiation process in equivalent precursor cells. This means it is crucial in embryonic development, to the point that it has been shown in mice that the removal of glycans from Notch proteins can result in embryonic death or malformations of vital organs such as the heart. Some of the specific modulators that control this process are glycosyltransferases located in the endoplasmic reticulum and the Golgi apparatus. The Notch proteins go through these organelles in their maturation process and can be subject to different types of glycosylation: N-linked glycosylation and O-linked glycosylation (more specifically: O-linked glucose and O-linked fucose). All of the Notch proteins are modified by an O-fucose, because they share a common trait: O-fucosylation consensus sequences. One of the modulators that intervene in this process is Fringe, a glycosyltransferase that modifies the O-fucose to activate or deactivate parts of the signalling, acting as a positive or negative regulator, respectively. Clinical There are three types of glycosylation disorders sorted by the type of alterations that are made to the glycosylation process: congenital alterations, acquired alterations and non-enzymatic acquired alterations. Congenital alterations: Over 40 congenital disorders of glycosylation (CDGs) have been reported in humans. These can be divided into four groups: disorders of protein N-glycosylation, disorders of protein O-glycosylation, disorders of lipid glycosylation and disorders of other glycosylation pathways and of multiple glycosylation pathways. No effective treatment is known for any of these disorders. 80% of these affect the nervous system. Acquired alterations: In this second group the main disorders are infectious diseases, autoimmune illnesses or cancer. In these cases, the changes in glycosylation are the cause of certain biological events. For example, in rheumatoid arthritis (RA), the patient's body produces antibodies against the lymphocyte enzyme galactosyltransferase, which inhibits the glycosylation of IgG. Therefore, the changes in N-glycosylation produce the immunodeficiency involved in this illness. In this second group we can also find disorders caused by mutations in the enzymes that control the glycosylation of Notch proteins, such as Alagille syndrome. 
Non-enzymatic acquired alterations: Non-enzymatic disorders are also acquired, but they are due to the lack of enzymes that attach oligosaccharides to the protein. In this group the illnesses that stand out are Alzheimer's disease and diabetes. All these diseases are difficult to diagnose because they do not affect only one organ; they affect many organs, and in different ways. As a consequence, they are also hard to treat. However, thanks to the many advances that have been made in next-generation sequencing, scientists can now better understand these disorders and have discovered new CDGs. Effects on therapeutic efficacy It has been reported that mammalian glycosylation can improve the therapeutic efficacy of biotherapeutics. For example, the therapeutic efficacy of recombinant human interferon gamma, expressed in the HEK 293 platform, was improved against drug-resistant ovarian cancer cell lines. See also References External links GlycoEP GlyProt: In-silico N-glycosylation of proteins on the web NetNGlyc: The NetNglyc server predicts N-glycosylation sites in human proteins using artificial neural networks that examine the sequence context of Asn-Xaa-Ser/Thr sequons. Supplementary Material of the Book "The Sugar Code" Additional information on glycosylation and figures Post-translational modification Organic reactions Carbohydrates Carbohydrate chemistry Biochemistry Congenital disorders of glycosylation
Glycosylation
[ "Chemistry", "Biology" ]
3,705
[ "Biomolecules by chemical classification", "Carbohydrates", "Gene expression", "Congenital disorders of glycosylation", "Biochemical reactions", "Organic compounds", "Organic reactions", "Post-translational modification", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Biochemistry", ...
3,543,062
https://en.wikipedia.org/wiki/Hydrodesulfurization
Hydrodesulfurization or hydrodesulphurisation (Commonwealth English; see spelling differences) (HDS), also called hydrotreatment or hydrotreating, is a catalytic chemical process widely used to remove sulfur (S) from natural gas and from refined petroleum products, such as gasoline or petrol, jet fuel, kerosene, diesel fuel, and fuel oils. The purpose of removing the sulfur, and creating products such as ultra-low-sulfur diesel, is to reduce the sulfur dioxide () emissions that result from using those fuels in automotive vehicles, aircraft, railroad locomotives, ships, gas or oil burning power plants, residential and industrial furnaces, and other forms of fuel combustion. Another important reason for removing sulfur from the naphtha streams within a petroleum refinery is that sulfur, even in extremely low concentrations, poisons the noble metal catalysts (platinum and rhenium) in the catalytic reforming units that are subsequently used to upgrade the octane rating of the naphtha streams. The industrial hydrodesulfurization processes include facilities for the capture and removal of the resulting hydrogen sulfide () gas. In petroleum refineries, the hydrogen sulfide gas is then subsequently converted into byproduct, sulfur (S) or sulfuric acid (). In fact, the vast majority of the 64,000,000 metric tons of sulfur produced worldwide in 2005 was byproduct sulfur from refineries and other hydrocarbon processing plants. An HDS unit in the petroleum refining industry is also often referred to as a hydrotreater. History Although some reactions involving catalytic hydrogenation of organic substances were already known, the property of finely divided nickel to catalyze the fixation of hydrogen on hydrocarbon (ethylene, benzene) double bonds was discovered by the French chemist Paul Sabatier in 1897. Through this work, he found that unsaturated hydrocarbons in the vapor phase could be converted into saturated hydrocarbons by using hydrogen and a catalytic metal, laying the foundation of the modern catalytic hydrogenation process. Soon after Sabatier's work, a German chemist, Wilhelm Normann, found that catalytic hydrogenation could be used to convert unsaturated fatty acids or glycerides in the liquid phase into saturated ones. He was awarded a patent in Germany in 1902 and in Britain in 1903, which was the beginning of what is now a worldwide industry. In the mid-1950s, the first noble metal catalytic reforming process (the Platformer process) was commercialized. At the same time, the catalytic hydrodesulfurization of the naphtha feed to such reformers was also commercialized. In the decades that followed, various proprietary catalytic hydrodesulfurization processes, such as the one depicted in the flow diagram below, have been commercialized. Currently, virtually all of the petroleum refineries worldwide have one or more HDS units. By 2006, miniature microfluidic HDS units had been implemented for treating JP-8 jet fuel to produce clean feed stock for a fuel cell hydrogen reformer. By 2007, this had been integrated into an operating 5 kW fuel cell generation system. Process chemistry Hydrogenation is a class of chemical reactions in which the net result is the addition of hydrogen (H). Hydrogenolysis is a type of hydrogenation and results in the cleavage of the C-X chemical bond, where C is a carbon atom and X is a sulfur (S), nitrogen (N) or oxygen (O) atom. The net result of a hydrogenolysis reaction is the formation of C-H and H-X chemical bonds. 
Thus, hydrodesulfurization is a hydrogenolysis reaction. Using ethanethiol (), a sulfur compound present in some petroleum products, as an example, the hydrodesulfurization reaction can be simply expressed as \overset{Ethanethiol}{C2H5SH} + \overset{Hydrogen}{H2} -> \overset{Ethane}{C2H6} + \overset{Hydrogen\ sulfide}{H2S} For the mechanistic aspects of, and the catalysts used in this reaction see the section catalysts and mechanisms. Process description In an industrial hydrodesulfurization unit, such as in a refinery, the hydrodesulfurization reaction takes place in a fixed-bed reactor at elevated temperatures ranging from 300 to 400 °C and elevated pressures ranging from 30 to 130 atmospheres of absolute pressure, typically in the presence of a catalyst consisting of an alumina base impregnated with cobalt and molybdenum (usually called a CoMo catalyst). Occasionally, a combination of nickel and molybdenum (called NiMo) is used, in addition to the CoMo catalyst, for specific difficult-to-treat feed stocks, such as those containing a high level of chemically bound nitrogen. The image below is a schematic depiction of the equipment and the process flow streams in a typical refinery HDS unit. The liquid feed (at the bottom left in the diagram) is pumped up to the required elevated pressure and is joined by a stream of hydrogen-rich recycle gas. The resulting liquid-gas mixture is preheated by flowing through a heat exchanger. The preheated feed then flows through a fired heater where the feed mixture is totally vaporized and heated to the required elevated temperature before entering the reactor and flowing through a fixed-bed of catalyst where the hydrodesulfurization reaction takes place. The hot reaction products are partially cooled by flowing through the heat exchanger where the reactor feed was preheated and then flows through a water-cooled heat exchanger before it flows through the pressure controller (PC) and undergoes a pressure reduction down to about 3 to 5 atmospheres. The resulting mixture of liquid and gas enters the gas separator pressure vessel at about 35 °C and 3 to 5 atmospheres of absolute pressure. Most of the hydrogen-rich gas from the gas separator vessel is recycle gas, which is routed through an amine contactor for removal of the reaction product that it contains. The -free hydrogen-rich gas is then recycled back for reuse in the reactor section. Any excess gas from the gas separator vessel joins the sour gas from the stripping of the reaction product liquid. The liquid from the gas separator vessel is routed through a reboiled stripper distillation tower. The bottoms product from the stripper is the final desulfurized liquid product from hydrodesulfurization unit. The overhead sour gas from the stripper contains hydrogen, methane, ethane, hydrogen sulfide, propane, and, perhaps, some butane and heavier components. That sour gas is sent to the refinery's central gas processing plant for removal of the hydrogen sulfide in the refinery's main amine gas treating unit and through a series of distillation towers for recovery of propane, butane and pentane or heavier components. The residual hydrogen, methane, ethane, and some propane is used as refinery fuel gas. The hydrogen sulfide removed and recovered by the amine gas treating unit is subsequently converted to elemental sulfur in a Claus process unit or to sulfuric acid in a wet sulfuric acid process or in the conventional Contact Process. 
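As a back-of-the-envelope illustration of the chemistry above, the sketch below estimates the hydrogen consumed and hydrogen sulfide produced when desulfurizing a feed, assuming (as a simplification) one mole of hydrogen consumed per mole of sulfur removed, as in the ethanethiol reaction; thiophenic sulfur and any accompanying olefin or aromatic saturation would raise the real figure considerably, and the feed numbers are purely illustrative.

    # Rough stoichiometric estimate: R-S-H + H2 -> R-H + H2S  (1 mol H2 per mol S removed)
    M_S, M_H2, M_H2S = 32.06, 2.016, 34.08   # molar masses, g/mol

    feed_tonnes = 100.0          # illustrative feed quantity, metric tons
    sulfur_in_ppmw = 5000.0      # sulfur in the feed, parts per million by weight
    sulfur_out_ppmw = 15.0       # target level, e.g. ultra-low-sulfur diesel

    sulfur_removed_kg = feed_tonnes * 1000.0 * (sulfur_in_ppmw - sulfur_out_ppmw) / 1e6
    mol_s = sulfur_removed_kg * 1000.0 / M_S            # moles of sulfur removed
    h2_consumed_kg = mol_s * M_H2 / 1000.0              # minimum hydrogen consumed
    h2s_produced_kg = mol_s * M_H2S / 1000.0            # hydrogen sulfide sent to the amine unit

    print(f"S removed: {sulfur_removed_kg:.0f} kg, "
          f"H2 consumed (minimum): {h2_consumed_kg:.0f} kg, "
          f"H2S produced: {h2s_produced_kg:.0f} kg")
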
Note that the above description assumes that the HDS unit feed contains no olefins. If the feed does contain olefins (for example, the feed is a naphtha derived from a refinery fluid catalytic cracker (FCC) unit), then the overhead gas from the HDS stripper may also contain some ethene, propene, butenes and pentenes, or heavier components. The amine solution to and from the recycle gas contactor comes from and is returned to the refinery's main amine gas treating unit. Sulfur compounds in refinery HDS feedstocks The refinery HDS feedstocks (naphtha, kerosene, diesel oil, and heavier oils) contain a wide range of organic sulfur compounds, including thiols, thiophenes, organic sulfides and disulfides, and many others. These organic sulfur compounds are products of the degradation of sulfur containing biological components, present during the natural formation of the fossil fuel, petroleum crude oil. When the HDS process is used to desulfurize a refinery naphtha, it is necessary to remove the total sulfur down to the parts per million range or lower in order to prevent poisoning the noble metal catalysts in the subsequent catalytic reforming of the naphthas. When the process is used for desulfurizing diesel oils, the latest environmental regulations in the United States and Europe, requiring what is referred to as ultra-low-sulfur diesel (ULSD), in turn requires that very deep hydrodesulfurization is needed. In the very early 2000s, the governmental regulatory limits for highway vehicle diesel was within the range of 300 to 500 ppm by weight of total sulfur. As of 2006, the total sulfur limit for highway diesel is in the range of 15 to 30 ppm by weight. Thiophenes A family of substrates that are particularly common in petroleum are the aromatic sulfur-containing heterocycles called thiophenes. Many kinds of thiophenes occur in petroleum ranging from thiophene itself to more condensed derivatives, benzothiophenes and dibenzothiophenes. Thiophene itself and its alkyl derivatives are easier to hydrogenolyse, whereas dibenzothiophene, especially 4,6-dimethyldibenzothiophene is considered the most challenging substrates. Benzothiophenes are midway between the simple thiophenes and dibenzothiophenes in their susceptibility to HDS. Catalysts and mechanisms The main HDS catalysts are based on molybdenum disulfide () together with smaller amounts of other metals. The nature of the sites of catalytic activity remains an active area of investigation, but it is generally assumed basal planes of the structure are not relevant to catalysis, rather the edges or rims of these sheet. At the edges of the crystallites, the molybdenum centre can stabilize a coordinatively unsaturated site (CUS), also known as an anion vacancy. Substrates, such as thiophene, bind to this site and undergo a series of reactions that result in both C-S scission and C=C hydrogenation. Thus, the hydrogen serves multiple roles—generation of anion vacancy by removal of sulfide, hydrogenation, and hydrogenolysis. A simplified diagram for the cycle is shown: Catalysts Most metals catalyse HDS, but it is those at the middle of the transition metal series that are most active. Although not practical, ruthenium disulfide appears to be the single most active catalyst, but binary combinations of cobalt and molybdenum are also highly active. Aside from the basic cobalt-modified MoS2 catalyst, nickel and tungsten are also used, depending on the nature of the feed. For example, Ni-W catalysts are more effective for hydrodenitrogenation. 
Supports Metal sulfides are supported on materials with high surface areas. A typical support for HDS catalyst is γ-alumina. The support allows the more expensive catalyst to be more widely distributed, giving rise to a larger fraction of the that is catalytically active. The interaction between the support and the catalyst is an area of intense interest, since the support is often not fully inert but participates in the catalysis. Other uses The basic hydrogenolysis reaction has a number of uses other than hydrodesulfurization. Hydrodenitrogenation The hydrogenolysis reaction is also used to reduce the nitrogen content of a petroleum stream in a process referred to as hydrodenitrogenation (HDN). The process flow is the same as that for an HDS unit. Using pyridine (), a nitrogen compound present in some petroleum fractionation products, as an example, the hydrodenitrogenation reaction has been postulated as occurring in three steps: \overset{Pyridine}{C5H5N} + \overset{Hydrogen}{5H2} -> \overset{Piperdine}{C5H11N} + \overset{Hydrogen}{2H2} -> \overset{Amylamine}{C5H11NH2} + \overset{Hydrogen}{H2} -> \overset{Pentane}{C5H12} + \overset{Ammonia}{NH3} and the overall reaction may be simply expressed as: \overset{Pyridine}{C5H5N} + \overset{Hydrogen}{5H2} -> \overset{Pentane}{C5H12} + \overset{Ammonia}{NH3} Many HDS units for desulfurizing naphthas within petroleum refineries are actually simultaneously denitrogenating to some extent as well. Saturation of olefins The hydrogenolysis reaction may also be used to saturate or convert alkenes into alkanes. The process used is the same as for an HDS unit. As an example, the saturation of the olefin pentene can be simply expressed as: \overset{Pentene}{C5H10} + \overset{Hydrogen}{H2} -> \overset{Pentane}{C5H12} Some hydrogenolysis units within a petroleum refinery or a petrochemical plant may be used solely for the saturation of olefins or they may be used for simultaneously desulfurizing as well as denitrogenating and saturating olefins to some extent. See also Claus process Hydrogen pinch Timeline of hydrogen technologies References External links Criterion Catalysts (Hydroprocessing Catalyst Supplier) Haldor Topsoe (Catalyzing Your Business) Albemarle Catalyst Company (Petrochemical catalysts supplier) UOP-Honeywell (Engineering design and construction of large-scale, industrial HDS plants) Hydrogenation for Low Trans and High Conjugated Fatty Acids by E.S. Jang, M.Y. Jung, D.B. Min, Comprehensive Reviews in Food Science and Food Safety, Vol.1, 2005 Oxo Alcohols (Engineered and constructed by Aker Kvaerner) Catalysts and technology for Oxo-Alcohols Oil refining Desulfurization Natural gas technology
Hydrodesulfurization
[ "Chemistry" ]
3,030
[ "Desulfurization", "Separation processes", "Petroleum technology", "Chemical processes", "Natural gas technology", "Oil refining", "nan", "Chemical process engineering" ]
3,543,381
https://en.wikipedia.org/wiki/Kato%27s%20conjecture
Kato's conjecture is a mathematical problem named after mathematician Tosio Kato, of the University of California, Berkeley. Kato initially posed the problem in 1953. Kato asked whether the square roots of certain elliptic operators, defined via functional calculus, are analytic. The full statement of the conjecture as given by Auscher et al. is: "the domain of the square root of a uniformly complex elliptic operator $L = -\mathrm{div}(A\nabla)$ with bounded measurable coefficients in $\mathbb{R}^n$ is the Sobolev space $H^1(\mathbb{R}^n)$ in any dimension with the estimate $\|\sqrt{L}\,f\|_2 \sim \|\nabla f\|_2$". The problem remained unresolved for nearly a half-century, until in 2001 it was jointly solved in the affirmative by Pascal Auscher, Steve Hofmann, Michael Lacey, Alan McIntosh, and Philippe Tchamitchian. References Differential operators Operator theory Conjectures that have been proved
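As an illustrative rendering (my own, based on the statement quoted above; the operator form $L = -\operatorname{div}(A\nabla)$ with bounded, measurable, uniformly elliptic complex coefficients $A$ is the standard setting in Auscher et al.), the conjecture can be typeset compactly:

```latex
% Sketch of the statement, under the standard assumptions:
% L = -div(A grad), with A complex-valued, bounded, measurable, uniformly elliptic.
\[
  \mathcal{D}\bigl(\sqrt{L}\bigr) = H^{1}(\mathbb{R}^{n}),
  \qquad
  \bigl\|\sqrt{L}\,f\bigr\|_{L^{2}(\mathbb{R}^{n})}
  \simeq \bigl\|\nabla f\bigr\|_{L^{2}(\mathbb{R}^{n})}
  \quad\text{for all } f \in H^{1}(\mathbb{R}^{n}).
\]
```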
Kato's conjecture
[ "Mathematics" ]
164
[ "Mathematical analysis", "Mathematical analysis stubs", "Conjectures that have been proved", "Mathematical problems", "Mathematical theorems", "Differential operators" ]
3,543,720
https://en.wikipedia.org/wiki/Particle%20number%20operator
In quantum mechanics, for systems where the total number of particles may not be preserved, the number operator is the observable that counts the number of particles. The following is in bra–ket notation: The number operator acts on Fock space. Let
$$|\Psi\rangle_\nu = |\phi_1, \phi_2, \cdots, \phi_n\rangle_\nu$$
be a Fock state, composed of single-particle states $|\phi_i\rangle$ drawn from a basis of the underlying Hilbert space of the Fock space. Given the corresponding creation and annihilation operators $a^{\dagger}(\phi_i)$ and $a(\phi_i)$, we define the number operator by
$$\hat{N}_i \ \stackrel{\mathrm{def}}{=}\ a^{\dagger}(\phi_i)\,a(\phi_i)$$
and we have
$$\hat{N}_i\,|\Psi\rangle_\nu = N_i\,|\Psi\rangle_\nu,$$
where $N_i$ is the number of particles in state $|\phi_i\rangle$. The above equality can be proven by noting that
$$a(\phi_i)\,|\Psi\rangle_\nu = \sqrt{N_i}\;|\Psi'\rangle_\nu \quad\text{and}\quad a^{\dagger}(\phi_i)\,|\Psi'\rangle_\nu = \sqrt{N_i}\;|\Psi\rangle_\nu,$$
where $|\Psi'\rangle_\nu$ denotes the Fock state with one particle removed from state $|\phi_i\rangle$; then
$$\hat{N}_i\,|\Psi\rangle_\nu = a^{\dagger}(\phi_i)\,\sqrt{N_i}\;|\Psi'\rangle_\nu = N_i\,|\Psi\rangle_\nu.$$
See also Harmonic oscillator Quantum harmonic oscillator Second quantization Quantum field theory Thermodynamics (-1)F References Second quantization notes by Fradkin Quantum mechanics
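A quick numerical sketch of these relations (my own illustration; it assumes a single bosonic mode and truncates the Fock space at six levels):

```python
# Build the annihilation operator a on a truncated Fock space, then verify
# that N = a† a is diagonal with eigenvalues 0, 1, 2, ...
import numpy as np

n_max = 6                      # truncation of the Fock space
n = np.arange(1, n_max)
a = np.diag(np.sqrt(n), k=1)   # a |n> = sqrt(n) |n-1>
N = a.conj().T @ a             # number operator N = a† a

print(np.allclose(N, np.diag(np.arange(n_max))))  # True

# Acting on the Fock state |3> returns 3 |3>:
ket3 = np.zeros(n_max)
ket3[3] = 1.0
print(N @ ket3)                # [0. 0. 0. 3. 0. 0.]
```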
Particle number operator
[ "Physics" ]
161
[ "Quantum operators", "Quantum mechanics" ]
3,544,145
https://en.wikipedia.org/wiki/9%2C10-Dithioanthracene
9,10-Dithioanthracene (DTA) is an organic molecule and a derivative of anthracene with two thiol groups. In 2004, DTA molecules were demonstrated to be able to "walk" in a straight line (reportedly a first) on a metal surface by, in effect, mimicking the bipedal motion of a human being. The sulfur-bearing functional groups on either side (referred to as "linkers") serve as the molecule's "feet". When the compound is heated on a flat copper surface, the linkers raise up, alternating from side to side, and propel the molecule forward. During testing at UC Riverside's Center for Nanoscale Science and Engineering, the molecule took about 10,000 unassisted nano-scale steps, moving in a straight line without requiring the assistance of nano-rails or nano-grooves for guidance. As described by one of the researchers, "Similar to a human walking, where one foot is kept on the ground while the other moves forward and propels the body, our molecule always has one linker on a flat surface, which prevents the molecule from stumbling to the side or veering off course." Researchers believe the project could lead to the development of molecular computers in which DTA or other similar molecules would function as nano-abacuses. References Thiols Nanomaterials Anthracenes
9,10-Dithioanthracene
[ "Chemistry", "Materials_science" ]
290
[ "Organic compounds", "Nanotechnology", "Thiols", "Nanomaterials" ]
3,544,225
https://en.wikipedia.org/wiki/Nano-abacus
A nano-abacus is a nano-sized abacus that performs basic arithmetic computations using various forms of nanotechnology including photonics and lateral mechanical stimulation of molecular motion with a scanning tunneling microscope (STM) tip by repulsion. The nano-abacus has the potential to be used in a variety of nanotechnological inventions such as the nano-computer. History IBM Nano-abacus The first nano-abacus was developed on November 13, 1997 by physicist James Gimzewski at an IBM research laboratory in Zürich, Switzerland. Gimzewski's initial idea for the device was inspired by the Japanese soroban. The creation of the nano-abacus was sponsored by the Swiss Federal Office of Education and Science within the European Strategic Program for Research in Information Technology (ESPRIT) of the European Union as part of IBM's "PRONANO" (processing on the nanometer scale) project. Gimzewski's nano-abacus consists of stable rows containing ten molecules acting as railings. The beads are made up of buckminsterfullerene constrained by one-atom-high ridges on a copper sheet and are pushed around by the tip of a scanning tunneling microscope at room temperature to perform a calculation and allow it to be viewed when operated in imaging mode. Gimzewski, along with physicists Maria Teresa Cuberes and Reto R. Schlittler, found that their device was capable of controllably repositioning C60 molecules with a scanning tunneling microscope tip along Cu(111) mono-atomic steps at room temperature. Chip-scale all-optical abacus A similar nanoscopic optical abacus was developed in 2017 by a team of international researchers led by Professor C. David Wright from the University of Exeter. The team's chip-scale all-optical abacus uses picosecond light pulses to perform arithmetic computations. The device has proved successful in calculating with multi-digit numbers using equivalent photonic phase-change cells. References Nanotechnology
Nano-abacus
[ "Materials_science", "Engineering" ]
417
[ "Nanotechnology", "Materials science stubs", "Nanotechnology stubs", "Materials science" ]
3,545,648
https://en.wikipedia.org/wiki/Probability%20current
In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current (i.e. the probability current density) is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation. The concept of probability current is also used outside of quantum mechanics, when dealing with probability density functions that change over time, for instance in Brownian motion and the Fokker–Planck equation. The relativistic equivalent of the probability current is known as the probability four-current. Definition (non-relativistic 3-current) Free spin-0 particle In non-relativistic quantum mechanics, the probability current of the wave function $\Psi$ of a particle of mass $m$ in one dimension is defined as
$$j = \frac{\hbar}{2mi}\left(\Psi^* \frac{\partial \Psi}{\partial x} - \Psi \frac{\partial \Psi^*}{\partial x}\right) = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^* \frac{\partial \Psi}{\partial x}\right),$$
where $\hbar$ is the reduced Planck constant; $\Psi^*$ denotes the complex conjugate of the wave function; $\operatorname{Re}$ denotes the real part; $\operatorname{Im}$ denotes the imaginary part. Note that the probability current is proportional to a Wronskian $W(\Psi, \Psi^*)$. In three dimensions, this generalizes to
$$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^* \nabla\Psi - \Psi\nabla\Psi^*\right) = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right),$$
where $\nabla$ denotes the del or gradient operator. This can be simplified in terms of the kinetic momentum operator, $\hat{\mathbf{p}} = -i\hbar\nabla$, to obtain
$$\mathbf{j} = \frac{1}{2m}\left(\Psi^*\hat{\mathbf{p}}\Psi - \Psi\hat{\mathbf{p}}\Psi^*\right) = \frac{1}{m}\operatorname{Re}\left(\Psi^*\hat{\mathbf{p}}\Psi\right).$$
These definitions use the position basis (i.e. for a wavefunction in position space), but momentum space is possible. Spin-0 particle in an electromagnetic field The above definition should be modified for a system in an external electromagnetic field. In SI units, a charged particle of mass $m$ and electric charge $q$ includes a term due to the interaction with the electromagnetic field;
$$\mathbf{j} = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q}{m}\mathbf{A}\,|\Psi|^2,$$
where $\mathbf{A}$ is the magnetic vector potential. The term $q\mathbf{A}$ has dimensions of momentum. Note that $\hat{\mathbf{p}} = -i\hbar\nabla$ used here is the canonical momentum and is not gauge invariant, unlike the kinetic momentum operator $\hat{\mathbf{p}} - q\mathbf{A}$. In Gaussian units:
$$\mathbf{j} = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q}{mc}\mathbf{A}\,|\Psi|^2,$$
where $c$ is the speed of light. Spin-s particle in an electromagnetic field If the particle has spin, it has a corresponding magnetic moment, so an extra term needs to be added incorporating the spin interaction with the electromagnetic field. According to Landau-Lifschitz's Course of Theoretical Physics the electric current density is in Gaussian units:
$$\mathbf{j}_e = \frac{q\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q^2}{mc}\mathbf{A}\,|\Psi|^2 + \frac{\mu_S c}{s\hbar}\nabla\times\left(\Psi^*\mathbf{S}\Psi\right).$$
And in SI units:
$$\mathbf{j}_e = \frac{q\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q^2}{m}\mathbf{A}\,|\Psi|^2 + \frac{\mu_S}{s\hbar}\nabla\times\left(\Psi^*\mathbf{S}\Psi\right).$$
Hence the probability current (density) is in SI units:
$$\mathbf{j} = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q}{m}\mathbf{A}\,|\Psi|^2 + \frac{\mu_S}{s\hbar}\nabla\times\left(\Psi^*\mathbf{S}\Psi\right),$$
where $\mathbf{S}$ is the spin vector of the particle with corresponding spin magnetic moment $\mu_S$ and spin quantum number $s$. It is doubtful if this formula is valid for particles with an interior structure. The neutron has zero charge but non-zero magnetic moment, so a spin magnetic moment tied to the charge in this way would be impossible (except that $\mu_S$, and with it the spin term, would also be zero in this case). For composite particles with a non-zero charge – like the proton which has spin quantum number s=1/2 and μS= 2.7927·μN or the deuteron (H-2 nucleus) which has s=1 and μS=0.8574·μN – it is mathematically possible but doubtful. Connection with classical mechanics The wave function can also be written in the complex exponential (polar) form:
$$\Psi = R\,e^{iS/\hbar},$$
where $R$ and $S$ are real functions of $\mathbf{r}$ and $t$.
Written this way, the probability density is $\rho = \Psi^*\Psi = R^2$ and the probability current is:
$$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right) = \frac{\hbar}{2mi}\left(R e^{-iS/\hbar}\,\nabla\!\left(R e^{iS/\hbar}\right) - R e^{iS/\hbar}\,\nabla\!\left(R e^{-iS/\hbar}\right)\right).$$
The exponentials and $R\nabla R$ terms cancel:
$$\mathbf{j} = \frac{\hbar}{2mi}\,R^2\left(\frac{2i}{\hbar}\nabla S\right).$$
Finally, combining and cancelling the constants, and replacing $R^2$ with $\rho$,
$$\mathbf{j} = \rho\,\frac{\nabla S}{m}.$$
Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. If we take the familiar formula for the mass flux in hydrodynamics:
$$\mathbf{j} = \rho\,\mathbf{v},$$
where $\rho$ is the mass density of the fluid and $\mathbf{v}$ is its velocity (also the group velocity of the wave), then in the classical limit we can associate the velocity with $\nabla S / m$, which is the same as equating $\nabla S$ with the classical momentum $\mathbf{p} = m\mathbf{v}$; however, it does not represent a physical velocity or momentum at a point, since simultaneous measurement of position and velocity violates the uncertainty principle. This interpretation fits with Hamilton–Jacobi theory, in which the momentum in Cartesian coordinates is given by $\mathbf{p} = \nabla S$, where $S$ is Hamilton's principal function. The de Broglie-Bohm theory equates the velocity with $\nabla S / m$ in general (not only in the classical limit), so it is always well defined. It is an interpretation of quantum mechanics. Motivation Continuity equation for quantum mechanics The definition of probability current and Schrödinger's equation can be used to derive the continuity equation, which has exactly the same forms as those for hydrodynamics and electromagnetism. For some wave function $\Psi$, let
$$\rho = |\Psi|^2 = \Psi^*\Psi$$
be the probability density (probability per unit volume, $*$ denotes complex conjugate). Then,
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_V \rho\,\mathrm{d}V = -\oint_{\partial V} \mathbf{j}\cdot\mathrm{d}\mathbf{A},$$
where $V$ is any volume and $\partial V$ is the boundary of $V$. This is the conservation law for probability in quantum mechanics; in this integral form, $\mathbf{j}$ is the probability current or probability flux (flow per unit area). Here, equating the terms inside the integral gives the continuity equation for probability:
$$\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j} = 0,$$
and the integral equation can also be restated using the divergence theorem as:
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_V \rho\,\mathrm{d}V + \int_V \nabla\cdot\mathbf{j}\,\mathrm{d}V = 0.$$
In particular, if $\Psi$ is a wavefunction describing a single particle, the integral in the first term of the preceding equation, sans time derivative, is the probability of obtaining a value within $V$ when the position of the particle is measured. The second term is then the rate at which probability is flowing out of the volume $V$. Altogether the equation states that the time derivative of the probability of the particle being measured in $V$ is equal to the rate at which probability flows into $V$. By taking the limit of the volume integral to include all regions of space, a well-behaved wavefunction that goes to zero at infinities in the surface integral term implies that the time derivative of the total probability is zero, i.e. the normalization condition is conserved. This result is in agreement with the unitary nature of time evolution operators, which preserve the length of the vector by definition. Transmission and reflection through potentials In regions where a step potential or potential barrier occurs, the probability current is related to the transmission and reflection coefficients, respectively $T$ and $R$; they measure the extent the particles reflect from the potential barrier or are transmitted through it. Both satisfy:
$$T + R = 1,$$
where $T$ and $R$ can be defined by:
$$T = \frac{|\mathbf{j}_{\mathrm{trans}}|}{|\mathbf{j}_{\mathrm{inc}}|}, \qquad R = \frac{|\mathbf{j}_{\mathrm{ref}}|}{|\mathbf{j}_{\mathrm{inc}}|},$$
where $\mathbf{j}_{\mathrm{inc}}$, $\mathbf{j}_{\mathrm{ref}}$ and $\mathbf{j}_{\mathrm{trans}}$ are the incident, reflected and transmitted probability currents respectively, and the vertical bars indicate the magnitudes of the current vectors. The relation between $T$ and $R$ can be obtained from probability conservation:
$$\mathbf{j}_{\mathrm{trans}} + \mathbf{j}_{\mathrm{ref}} = \mathbf{j}_{\mathrm{inc}}.$$
In terms of a unit vector $\mathbf{n}$ normal to the barrier, these are equivalently:
$$T = \left|\frac{\mathbf{j}_{\mathrm{trans}}\cdot\mathbf{n}}{\mathbf{j}_{\mathrm{inc}}\cdot\mathbf{n}}\right|, \qquad R = \left|\frac{\mathbf{j}_{\mathrm{ref}}\cdot\mathbf{n}}{\mathbf{j}_{\mathrm{inc}}\cdot\mathbf{n}}\right|,$$
where the absolute values are required to prevent $T$ and $R$ being negative.
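A quick numerical check of these definitions (a sketch of my own, using the textbook scattering amplitudes for a potential step of height V0 below the particle energy E, which are not derived in this article):

```python
# Check T + R = 1 using the probability-current definitions above.
import numpy as np

hbar, m = 1.0, 1.0                   # natural units (assumption)
E, V0 = 2.0, 1.0                     # energy above a step of height V0
k1 = np.sqrt(2*m*E)/hbar             # wavenumber before the step
k2 = np.sqrt(2*m*(E - V0))/hbar      # wavenumber after the step

r = (k1 - k2)/(k1 + k2)              # reflection amplitude
t = 2*k1/(k1 + k2)                   # transmission amplitude

# each plane-wave component carries current j = |amplitude|^2 * (hbar k / m)
j_inc   = hbar*k1/m
j_ref   = abs(r)**2 * hbar*k1/m
j_trans = abs(t)**2 * hbar*k2/m

R = j_ref/j_inc
T = j_trans/j_inc
print(R, T, R + T)                   # R + T == 1 up to floating point
```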
Examples Plane wave For a plane wave propagating in space:
$$\Psi(\mathbf{r}, t) = A\,e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$$
the probability density is constant everywhere,
$$\rho = |A|^2$$
(that is, plane waves are stationary states), but the probability current is nonzero – the square of the absolute amplitude of the wave times the particle's speed:
$$\mathbf{j} = |A|^2\,\frac{\hbar\mathbf{k}}{m},$$
illustrating that the particle may be in motion even if its spatial probability density has no explicit time dependence. Particle in a box For a particle in a box, in one spatial dimension and of length $L$, confined to the region $0 < x < L$, the energy eigenstates are
$$\Psi_n = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right)$$
and zero elsewhere. The associated probability currents are
$$j_n = 0,$$
since
$$\Psi_n = \Psi_n^*.$$
Discrete definition For a particle in one dimension on $\ell^2(\mathbb{Z})$ we have the Hamiltonian $H = -\Delta + V$, where $\Delta \equiv S + S^* - 2$ is the discrete Laplacian, with $S$ being the right shift operator on $\ell^2(\mathbb{Z})$. Then the probability current is defined as
$$j(x) \equiv \operatorname{Re}\{\bar{\Psi}(x)\,(v\Psi)(x)\},$$
with $v$ the velocity operator, equal to $v = -i[X, H]$, and $X$ the position operator on $\ell^2(\mathbb{Z})$. Since $V$ is usually a multiplication operator on $\ell^2(\mathbb{Z})$, we get to safely write $-i[X, H] = i[X, \Delta] = i(S - S^*)$. As a result, we find:
$$j(x) = \operatorname{Im}\{\bar{\Psi}(x)\,\Psi(x+1) - \bar{\Psi}(x)\,\Psi(x-1)\}.$$
References Further reading Quantum mechanics
Probability current
[ "Physics" ]
1,502
[ "Theoretical physics", "Quantum mechanics" ]
3,546,095
https://en.wikipedia.org/wiki/Pneumatic%20cylinder
A pneumatic cylinder, also known as an air cylinder, is a mechanical device which uses the power of compressed gas to produce a force in a reciprocating linear motion. Like in a hydraulic cylinder, something forces a piston to move in the desired direction. The piston is a disc or cylinder, and the piston rod transfers the force it develops to the object to be moved. Engineers sometimes prefer to use pneumatics because they are quieter, cleaner, and do not require large amounts of space for fluid storage. Because the operating fluid is a gas, leakage from a pneumatic cylinder will not drip out and contaminate the surroundings, making pneumatics more desirable where cleanliness is a requirement. For example, in the mechanical puppets of the Disney Tiki Room, pneumatics are used to prevent fluid from dripping onto people below the puppets. Operation General Once actuated, compressed air enters into the tube at one end of the piston and imparts force on the piston. Consequently, the piston becomes displaced. Compressibility of gases One major issue engineers come across working with pneumatic cylinders is the compressibility of a gas. Many studies have been completed on how the precision of a pneumatic cylinder can be affected as the load acting on the cylinder tries to further compress the gas used. Under a vertical load, a case where the cylinder takes on the full load, the precision of the cylinder is affected the most. A study at the National Cheng Kung University in Taiwan concluded that the accuracy is about ± 30 nm, which is still within a satisfactory range but shows that the compressibility of air has an effect on the system. Fail safe mechanisms Pneumatic systems are often found in settings where even rare and brief system failure is unacceptable. In such situations, locks can sometimes serve as a safety mechanism in case of loss of air supply (or its pressure falling) and thus remedy or abate any damage arising in such a situation. Leakage of air from the input or output reduces the output pressure. Types Although pneumatic cylinders will vary in appearance, size and function, they generally fall into one of the specific categories shown below. However, there are also numerous other types of pneumatic cylinder available, many of which are designed to fulfill specific and specialized functions. Single-acting cylinders A single-acting cylinder (SAC) has one port, which allows compressed air to enter and the rod to move in one direction only. The high pressure of the compressed air causes the rod to extend as the cylinder chamber continues to fill. When the compressed air leaves the cylinder through the same port, the rod is returned to its original position. Double-acting cylinders Double-acting cylinders (DAC) use the force of air to move in both extend and retract strokes. They have two ports to allow air in, one for outstroke and one for instroke. Stroke length for this design is not limited; however, the piston rod is more vulnerable to buckling and bending, so additional calculations should be performed as well. Multi-stage, telescoping cylinder Telescoping cylinders, also known as telescopic cylinders, can be either single or double-acting. The telescoping cylinder incorporates a piston rod nested within a series of hollow stages of increasing diameter. Upon actuation, the piston rod and each succeeding stage "telescopes" out as a segmented piston.
The main benefit of this design is the allowance for a notably longer stroke than would be achieved with a single-stage cylinder of the same collapsed (retracted) length. One cited drawback to telescoping cylinders is the increased potential for piston flexion due to the segmented piston design. Consequently, telescoping cylinders are primarily utilized in applications where the piston bears minimal side loading. Other types Although SACs and DACs are the most common types of pneumatic cylinder, the following types are not particularly rare: Through rod air cylinders: piston rod extends through both sides of the cylinder, allowing for equal forces and speeds on either side. Cushion end air cylinders: cylinders with regulated air exhaust to avoid impacts between the piston rod and the cylinder end cover. Rotary air cylinders: actuators that use air to impart a rotary motion. Rodless air cylinders: These have no piston rod. They are actuators that use a mechanical or magnetic coupling to impart force, typically to a table or other body that moves along the length of the cylinder body, but does not extend beyond it. Tandem air cylinder: two cylinders assembled in series. Impact air cylinder: high velocity cylinders with specially designed end covers that withstand the impact of extending or retracting piston rods. Rodless cylinders Rodless cylinders have no rod, only a relatively long piston. Cable cylinders retain openings at one or both ends, but pass a flexible cable rather than a rod. This cable has a smooth plastic jacket for sealing purposes. Of course, a single cable has to be kept in tension. Other rodless cylinders close off both ends, coupling the piston either magnetically or mechanically to an actuator that runs along the outside of the cylinder. In the magnetic type, the cylinder is thin-walled and of a non-magnetic material, the piston is a powerful magnet, and it pulls along a magnetic traveller on the outside. In the mechanical type, part of the piston extends to the outside through a slot cut down the length of the cylinder. The slot is then sealed by flexible metal sealing bands on the inside (to prevent gas escape) and outside (to prevent contamination). The piston itself has two end seals, and between them, camming surfaces to "peel off" the seals ahead of the projecting linkage and to replace them behind. The interior of the piston, then, is at atmospheric pressure. One well-known application of the mechanical type (albeit steam-powered) is the catapults used on many modern aircraft carriers. Design Construction Depending on the job specification, there are multiple forms of body constructions available: Tie rod cylinders: The most common cylinder construction, usable with many types of loads; it has been proven to be the safest form. Flanged-type cylinders: Fixed flanges are added to the ends of the cylinder; however, this form of construction is more common in hydraulic cylinder construction. One-piece welded cylinders: Ends are welded or crimped to the tube; this form is inexpensive but makes the cylinder non-serviceable. Threaded end cylinders: Ends are screwed onto the tube body. The reduction of material can weaken the tube and may introduce thread concentricity problems to the system. Material The material is chosen to suit the job specification. Materials range from nickel-plated brass to aluminum, and even steel and stainless steel. Depending on the level of loads, humidity, temperature, and stroke lengths specified, the appropriate material may be selected.
Mounts Depending on the location of the application and machinability, there exist different kinds of mounts for attaching pneumatic cylinders. Sizes Air cylinders are available in a variety of sizes and can typically range from a small air cylinder, which might be used for picking up a small transistor or other electronic component, to large-diameter air cylinders which would impart enough force to lift a car. Some pneumatic cylinders reach still larger diameters, and are used in place of hydraulic cylinders for special circumstances where leaking hydraulic oil could impose an extreme hazard. Pressure, radius, area and force relationships Rod stresses Due to the forces acting on the cylinder, the piston rod is the most stressed component and has to be designed to withstand high amounts of bending, tensile and compressive forces. Depending on how long the piston rod is, stresses can be calculated differently. If the rod's length is less than 10 times the diameter, then it may be treated as a rigid body which has compressive or tensile forces acting on it, in which case the relationship is:
$$\sigma = \frac{F}{A},$$
where $F$ is the compressive or tensile force, $A$ is the cross-sectional area of the piston rod, and $\sigma$ is the stress. However, if the length of the rod exceeds 10 times the diameter, then the rod needs to be treated as a column and buckling needs to be calculated as well. Instroke and outstroke Although the diameter of the piston and the force exerted by a cylinder are related, they are not directly proportional to one another. Additionally, the typical mathematical relationship between the two assumes that the air supply does not become saturated. Due to the effective cross-sectional area being reduced by the area of the piston rod, the instroke force is less than the outstroke force when both are powered pneumatically by the same supply of compressed gas. The relationship between the force, radius, and pressure can be derived from the simple distributed load equation:
$$F = p\,A,$$
where $F$ is the resultant force, $p$ is the pressure or distributed load on the surface, and $A$ is the effective cross-sectional area the load is acting on. Outstroke Using the distributed load equation provided, $A$ can be replaced with the area of the piston surface where the pressure is acting on it:
$$F = p\,\pi r^2,$$
where $F$ represents the resultant force, $r$ represents the radius of the piston, and $\pi$ is pi, approximately equal to 3.14159. Instroke On instroke, the same relationship between force exerted, pressure and effective cross-sectional area applies as discussed above for outstroke. However, since the cross-sectional area is less than the piston area, the relationship between force, pressure and radius is different. The calculation isn't more complicated though, since the effective cross-sectional area is merely that of the piston surface minus the cross-sectional area of the piston rod. For instroke, therefore, the relationship between force exerted, pressure, radius of the piston, and radius of the piston rod, is as follows:
$$F = p\,\pi\left(r_{\mathrm{piston}}^2 - r_{\mathrm{rod}}^2\right),$$
where $F$ represents the resultant force, $r_{\mathrm{piston}}$ represents the radius of the piston, $r_{\mathrm{rod}}$ represents the radius of the piston rod, and $\pi$ is pi, approximately equal to 3.14159. See also Fluid dynamics Fluid power Hydraulics Hydraulic cylinder Pneumatic motor Pneumatics Tubular linear motor References Pneumatic actuators Fluid dynamics Pneumatics
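A short numerical sketch of the outstroke and instroke relations above; the pressure, bore and rod sizes are made-up example values, not taken from the article:

```python
# Outstroke vs. instroke force for a pneumatic cylinder.
import math

p = 6e5            # supply pressure in Pa (6 bar, assumed)
r_piston = 0.025   # piston radius in m (50 mm bore, assumed)
r_rod = 0.010      # piston-rod radius in m (20 mm rod, assumed)

F_out = p * math.pi * r_piston**2                  # F = p * pi * r^2
F_in  = p * math.pi * (r_piston**2 - r_rod**2)     # rod reduces the area

print(f"outstroke force: {F_out:.0f} N")   # ~1178 N
print(f"instroke force:  {F_in:.0f} N")    # ~990 N
```

As expected, the instroke force is smaller because the rod occupies part of the pressurized area.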
Pneumatic cylinder
[ "Chemistry", "Engineering" ]
2,054
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
3,548,289
https://en.wikipedia.org/wiki/Penning%20mixture
A Penning mixture is a mixture of gases that is used in electric gas-discharge lamps. It is defined as a mixture of one inert gas with a minute amount of another gas, one that has a lower ionization voltage than the main constituent. It is named after Frans Michel Penning. The well-known neon lighting and neon lamps and displays are filled not with pure neon, but with a Penning mixture. Explanation The other gas, called a quenching gas, has to have a lower ionization energy than the first excited state of the noble gas. The energy of the excited, but neutral, noble gas atoms then can ionize the quench gas particles by energy transfer via collisions, a process known as the Penning effect. A very common Penning mixture of about 98–99.5% of neon with 0.5–2% argon is used in some neon lamps, especially those rated for 120 volts. The mixture is easier to ionize than either neon or argon alone, and lowers the breakdown voltage at which the tube becomes conductive and starts producing light. The optimal level of argon is about 0.25%, but some of it gets adsorbed onto the borosilicate glass used for the tubes, so higher concentrations are used to take the losses into account; higher argon content is used in higher-power tubes, as hotter glass adsorbs more argon. The argon changes the color of the "neon light", making it slightly more yellowish. The neon gas used in some nixie tubes includes a small amount of mercury vapor (for various reasons), which glows blue. A Penning mixture of neon and argon is also used as a starter gas in sodium vapor lamps, where it is responsible for the faint pinkish glow before the sodium emission begins. The Penning mixture used in plasma displays is usually helium or neon with a small percentage of xenon, at several hundred torr. Penning mixtures with the formulas of argon–xenon, neon–argon, argon–acetylene, and xenon–TMA are used as filler gases in gaseous ionization detectors. Other kinds of Penning mixture include helium–xenon. See also Geiger–Müller tube Paschen's law References Further reading Industrial gases Neon lighting Noble gases Plasma technology and applications
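A tiny numerical illustration of the Penning condition for the neon–argon mixture; the two energy values are standard literature figures assumed for this sketch, not taken from the article:

```python
# Penning ionization is energetically allowed when the excited neutral
# noble-gas atom carries more energy than the quench gas needs to ionize.
ne_metastable_eV = 16.62   # approx. energy of a metastable excited Ne atom
ar_ionization_eV = 15.76   # approx. ionization energy of Ar

print(ne_metastable_eV > ar_ionization_eV)   # True: Ne* can ionize Ar
print(f"excess energy: {ne_metastable_eV - ar_ionization_eV:.2f} eV")
```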
Penning mixture
[ "Physics", "Chemistry", "Materials_science" ]
485
[ "Noble gases", "Plasma physics", "Plasma technology and applications", "Nonmetals", "Industrial gases", "Chemical process engineering" ]
4,772,239
https://en.wikipedia.org/wiki/Pao%20%28unit%29
The pao is a unit of dry measure (mass) which is used in South Asia. The name may come from the Punjabi ਪਾਓ páo, which was a traditional charge of one quarter of a seer on every maund of grain that was weighed, converted into a tax by Sawan Mal. Turner also cites a Sindhi word pāu meaning a quarter of a seer. The pao was recorded in the Bengal Presidency in 1850, but was not considered to be an integral part of the local system of weights. It was equal to four chitaks, and hence a quarter of a seer: the equivalent Imperial weight at the time was given as 7 oz. 10 dwt. Troy (233.3 grams). The use of a quarter-seer weight in Ahmedabad had also been noted in a British East India Company survey of South Asian metrology carried out in 1821: the name of the unit was not recorded, but it would have been equivalent to 4 oz. 3 dr. 17 gr. avoirdupois (119.8 grams) based on the measurement of the Ahmedabad seer. It is still occasionally used in northern India. In Nepal, the pao was one-twelfth of a dharni, and equivalent to about 194.4 grams in 1966. Convenient "pau" units of both 200 grams and 250 grams are in current use in retail sales in different parts of the country. In Pakistan, the pao was slightly heavier, at 233.3 grams. In Afghanistan, it was reported in 1950 that 1 pao ≈ 1 lb (about 450 grams) in Kabul, with four paos to one charak and sixteen paos to a seer. References External links Sizes.com Pao to Gram Calculator Units of mass Customary units in India Obsolete units of measurement
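For illustration, the regional values quoted above can be collected into a small converter; the region keys are my own labels, and the gram values come from the text:

```python
# Grams per pao for the regional standards mentioned in the article.
PAO_GRAMS = {
    "bengal_1850": 233.3,
    "nepal_1966": 194.4,
    "nepal_retail_small": 200.0,
    "nepal_retail_large": 250.0,
    "pakistan": 233.3,
    "kabul_1950": 450.0,   # "about 1 lb"
}

def pao_to_grams(quantity, region):
    """Convert a quantity in pao to grams for a given regional standard."""
    return quantity * PAO_GRAMS[region]

print(pao_to_grams(4, "nepal_1966"))   # 777.6
```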
Pao (unit)
[ "Physics", "Mathematics" ]
371
[ "Obsolete units of measurement", "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
4,772,764
https://en.wikipedia.org/wiki/Mount%20Mulligan%20mine%20disaster
The Mount Mulligan mine disaster occurred on 19 September 1921 in Mount Mulligan, Far North Queensland, Australia. A series of explosions in the local coal mine, audible as much as 30 km away, rocked the close-knit township. Seventy-five workers were killed by the disaster, making it the third-worst coal mining accident in Australia in terms of human lives lost. Four of the dead had been at the mouth of the pit at the time of the explosion. Seventy-four bodies were recovered by the time the Royal Commission ended; the last body was recovered five months after the disaster, after the mine had reopened. The disaster affected people in cities and towns all over the country. The mine, which had operated for six years at the time of the accident, was widely considered safe and had no previous indications of gas leaks. The miners hence worked using open flame lights instead of safety lamps. Public inquiry A Royal Commission into the accident confirmed that the disaster was caused by the accidental or negligent firing of an explosive charge on top of a block of coal, apparently in order to split it. No methane was ever detected in the mine and candles and naked flames were used throughout its history (Royal Commission: 1921). The investigation found that explosives were used, stored, distributed and carried underground in a careless manner. It was also determined that the lack of appropriate means to render the coal dust safe in the mine was a violation of law. The coal seams at Mt Mulligan are conspicuously dry, leading to the ignition of coal dust from the firing of the charge. The disaster was also the impetus for the passing of a Coal Mining Act in Queensland that would ban the use of open flames in underground coal mines. Aftermath The mine was reopened after four months and had suffered surprisingly little damage from the explosion. In 1923, the Queensland Government bought it from the operators. It was in operation until 1957, although it was heavily subsidised after World War II. The mine's final demise occurred with the completion of the Tully Falls hydro electricity scheme. Soon after, the town was sold and most of the buildings were removed. See also Blantyre mining disaster Mount Kembla Mine disaster References Page XXXIII, 1921 Report of the Royal Commission appointed to inquire into and report upon the recent disaster at Mount Mulligan Coal Mine, and also into the methods of mining carried on at such mine, and further, to make such recommendations as may tend to prevent the recurrence of accidents of a like nature. Cairns Post, 13 February 1922. The Australian Journal of Emergency Management Vol 18, No. 3, August 2003. Walkabout.com.au Mount Mulligan history External links Mount Mulligan Mine Disaster – John Oxley Library Blog, State Library of Queensland 1920s in Queensland 1921 in Australia 1921 mining disasters Coal mining disasters in Australia Disasters in Queensland Dust explosions Far North Queensland Mining in Queensland Public inquiries in Australia
Mount Mulligan mine disaster
[ "Chemistry" ]
596
[ "Dust explosions", "Explosions" ]
4,772,897
https://en.wikipedia.org/wiki/Promethium%28III%29%20oxide
Promethium(III) oxide is a compound with the formula Pm2O3. It is the most common form of promethium. Crystal structure Promethium oxide exists in three major crystalline forms: a low-temperature cubic form, a monoclinic form, and a hexagonal form; each is characterized by its lattice parameters (a, b and c), the number of formula units per unit cell (Z), and a density calculated from X-ray data. The low-temperature cubic form converts to the monoclinic structure upon heating to 750–800 °C, and this transition can only be reversed by melting the oxide. The transition from the monoclinic to hexagonal form occurs at 1740 °C. References Promethium compounds Sesquioxides
Promethium(III) oxide
[ "Chemistry" ]
136
[ "Inorganic compounds", "Inorganic compound stubs" ]
4,773,615
https://en.wikipedia.org/wiki/Aerary
Aerary is a room in a building that was used to contain something precious, such as treasure. An example is the aerary porch in St. George's Chapel at Windsor Castle, which was built in 1353–1354. It was used as the entrance to a new college being established there by Edward III. References Rooms
Aerary
[ "Engineering" ]
71
[ "Rooms", "Architecture" ]
4,775,145
https://en.wikipedia.org/wiki/Aeon%20%28Thelema%29
In the esoteric philosophy of Thelema, founded by Aleister Crowley in the early 20th century, an Aeon is a period of time defined by distinct spiritual and cultural characteristics, each accompanied by its own forms of magical and religious expression. Thelemites believe that the history of humanity is divided into a series of these Aeons, each governed by a particular deity or archetype that embodies the spiritual formula of the era. The first of these was the Aeon of Isis, associated with prehistory, a time when humanity revered a Great Goddess, symbolised by the ancient Egyptian deity Isis. This was followed by the Aeon of Osiris, spanning the classical and medieval periods, during which the worship of a singular male god, represented by Osiris, dominated, reflecting patriarchal values. The current Aeon, known as the Aeon of Horus, is believed to have begun in 1904 with the reception of The Book of the Law (Liber AL vel Legis), which Crowley maintained was dictated to him by a praeterhuman intelligence named Aiwass. The Aeon of Horus, frequently referred to as simply the Aeon and symbolised by the child god Horus, is seen as a time of greater consciousness, individual sovereignty, and spiritual awakening. Thelemites believe that this Aeon represents a departure from the constraints and dogmas of the previous Aeon, particularly the influence of the Abrahamic religions, and heralds an era of self-actualisation and the realisation of human potential. Within Thelema, each Aeon is characterised by its own specific magical formula, which is fundamental to the practice and understanding of Thelemic Magick. The transition between these Aeons is understood not merely as a change in religious or cultural practices, but as a profound shift in the underlying spiritual paradigm that governs human existence. Aeons Aeon of Isis The first Aeon, of Isis, was maternal. The female aspect of the Godhead was revered due to a mostly matriarchal society and the idea that "Mother Earth" nourished, clothed and housed man closed in the womb of Matrix. It was characterised by pagan worship of the Mother and Nature. In his Equinox of the Gods Crowley describes this period as "simple, quiet, easy, and pleasant; the material ignores the spiritual." Lon Milo DuQuette remarked that this aeon was "the Age of the Great Goddess", and that it had originated in prehistory, reaching its zenith at "approximately 2400 B.C." Continuing with this idea, he remarked that this period was when "the cult of the Great Goddess was truly universal. She was worshipped by countless cultures under myriad names and forms. It would also be a mistake for us to conclude that the magical formula of this period manifested exclusively through the worship of any particular anthropomorphic female deity. For, like every aeon, the magical formula of the Aeon of Isis was founded upon mankind's interpretation of the 'perceived facts' of nature, and our Isian-age progenitors perceived nature as a continuous process of spontaneous growth." Aeon of Osiris The classical and medieval Aeon of Osiris is considered to be dominated by the paternal principle and the formula of the Dying God. This Aeon was characterized by self-sacrifice and submission to the Father God while man spoke of his father and mother.
Crowley discusses this Aeon both in his Heart of the Master and in The Equinox of the Gods. Aeon of Horus The Aeon of Horus, identified by Crowley as beginning in 1904 with the reception of The Book of the Law, marks the current era in Thelemic philosophy. This aeon emphasizes self-realization, individualism, and the pursuit of one's True Will, symbolized by the child god Horus representing new beginnings and potential growth. Crowley described it as a time of the Crowned and Conquering Child, focusing on spiritual awakening and personal freedom. He also stated, "every man and every woman is a star", highlighting the unique and divine nature of each individual. Key figures such as Israel Regardie and Kenneth Grant highlight the transformative nature of this aeon, encouraging individuals to embrace their True Will and move beyond previous constraints. Regardie saw it as a shift towards new spiritual and psychological paradigms, while Grant emphasized the break from the restrictions of prior aeons. DuQuette elaborates on the Aeon of Horus as a period of growing individual consciousness and the realization of one's spiritual potential, contrasting it with the Age of Aquarius, which he sees as a smaller aspect of a greater spiritual age. Gunther interprets the Aeon as a time of significant spiritual evolution, driven by the awakening of individual consciousness and the unfolding of the True Will. The Thelemic calendar uses a unique dating system incorporating Tarot trumps and astrological positions, aligning significant events with corresponding Tarot cards and the positions of the Sun and Moon, reflecting the Thelemic emphasis on synchronizing personal and cosmic cycles. Crowley detailed the practice of recording magical work in his writings on the magical record, emphasizing the importance of documenting spiritual progress. Aeon of Ma'at Aleister Crowley believed that the Aeon of Ma'at will succeed the present one. However, Crowley suggested that the succession of the aeons is not bound to the precession of the equinoxes in his 'Old Comment' to Liber AL chapter III, verse 34, where he states, "Following him [Horus] will arise the Equinox of Ma, the Goddess of Justice, it may be a hundred or ten thousand years from now; for the Computation of Time is not here as There." According to one of Crowley's early students, Charles Stansfeld Jones (a.k.a. Frater Achad), the Aeon of Ma'at has already arrived or overlaps the present Aeon of Horus. See also References Citations Works cited Further reading Latin words and phrases New Testament Greek words and phrases Thelema Time in religion Units of time
Aeon (Thelema)
[ "Physics", "Mathematics" ]
1,275
[ "Physical quantities", "Time", "Units of time", "Quantity", "Time in religion", "Spacetime", "Units of measurement" ]
4,777,013
https://en.wikipedia.org/wiki/6-Monoacetylmorphine
6-Monoacetylmorphine (6-MAM, 6-acetylmorphine, or 6-AM) is an opioid and also one of three active metabolites of heroin (diacetylmorphine), the others being morphine and the much less active 3-monoacetylmorphine (3-MAM). Pharmacology 6-MAM occurs as a metabolite of heroin. Once it has passed first-pass metabolism, 6-MAM is then metabolized into morphine or excreted in urine. Heroin is rapidly metabolized by esterase enzymes in the brain and has an extremely short half-life. It also has a relatively weak affinity for μ-opioid receptors, because the 3-hydroxy group, essential for effective binding to the receptor, is masked by the acetyl group. Therefore, heroin acts as a pro-drug, serving as a lipophilic transporter for the systemic delivery of morphine, which actively binds with μ-opioid receptors. 6-MAM already has a free 3-hydroxy group and shares the high lipophilicity of heroin, so it penetrates the brain just as quickly and does not need to be deacetylated at the 6-position in order to be bioactivated; this makes 6-MAM somewhat more potent than heroin. Availability 6-MAM is rarely encountered in an isolated form due to the difficulty in selectively acetylating morphine at the 6-position without also acetylating the 3-position. However, it is found in significant amounts in black tar heroin along with heroin itself. Synthesis The production of black tar heroin results in significant amounts of 6-MAM in the final product. 6-MAM is approximately 30 percent more active than diacetylmorphine itself, which is why, despite a lower heroin content, black tar heroin may be more potent than some other forms of heroin. 6-MAM can be synthesized from morphine using glacial acetic acid with an organic base as a catalyst. The acetic acid must be of a high purity (97–99 per cent) for the acid to properly acetylate the morphine at the 6-position, effectively creating 6-MAM. Acetic acid is used rather than acetic anhydride, as acetic acid is not strong enough to acetylate the phenolic 3-hydroxy group but is able to acetylate the 6-hydroxy group, thus selectively producing 6-MAM rather than heroin. Acetic acid is a convenient way to produce 6-MAM, as it is not a watched chemical, being the main component of vinegar. Chemistry Detection in bodily fluids Since 6-MAM is a metabolite unique to heroin, its presence in the urine confirms heroin use. This is significant because a urine immunoassay drug screen typically tests for morphine, which is a metabolite of a number of legal and illegal opiates/opioids such as codeine, morphine sulfate, and heroin. Trace amounts of 6-MAM are excreted in urine for approximately 6–8 hours following heroin use. 6-MAM is naturally found in trace amounts in rat and cow brains. See also M3G, morphine-3-glucuronide, an inactive metabolite of morphine, much as 3-MAM is the less active metabolite of heroin (notably, morphine is itself an active secondary metabolite of heroin, with 6-monoacetylmorphine being the intermediate stage) M6G, morphine-6-glucuronide, the active metabolite of morphine closely related to 6-MAM: each is the more active member of a twin pair of metabolites (M3G and M6G for morphine, 3-MAM and 6-MAM for heroin) References Acetate esters 4,5-Epoxymorphinans Heroin Opioid metabolites Morphine Mu-opioid receptor agonists Prodrugs Hydroxyarenes Semisynthetic opioids Recreational drug metabolites Human drug metabolites
6-Monoacetylmorphine
[ "Chemistry" ]
877
[ "Chemicals in medicine", "Prodrugs", "Human drug metabolites" ]
23,834,912
https://en.wikipedia.org/wiki/Arithmetic%20topology
Arithmetic topology is an area of mathematics that is a combination of algebraic number theory and topology. It establishes an analogy between number fields and closed, orientable 3-manifolds. Analogies The following are some of the analogies used by mathematicians between number fields and 3-manifolds: A number field corresponds to a closed, orientable 3-manifold. Ideals in the ring of integers correspond to links, and prime ideals correspond to knots. The field Q of rational numbers corresponds to the 3-sphere. Expanding on the last two examples, there is an analogy between knots and prime numbers in which one considers "links" between primes. The triple of primes (13, 61, 937) are "linked" modulo 2 (the Rédei symbol is −1) but are "pairwise unlinked" modulo 2 (the Legendre symbols are all 1). Therefore these primes have been called a "proper Borromean triple modulo 2" or "mod 2 Borromean primes". History In the 1960s topological interpretations of class field theory were given by John Tate based on Galois cohomology, and also by Michael Artin and Jean-Louis Verdier based on étale cohomology. Then David Mumford (and independently Yuri Manin) came up with an analogy between prime ideals and knots which was further explored by Barry Mazur. In the 1990s Reznikov and Kapranov began studying these analogies, coining the term arithmetic topology for this area of study. See also Arithmetic geometry Arithmetic dynamics Topological quantum field theory Langlands program Notes Further reading Masanori Morishita (2011), Knots and Primes, Springer, Masanori Morishita (2009), Analogies Between Knots And Primes, 3-Manifolds And Number Rings Christopher Deninger (2002), A note on arithmetic topology and dynamical systems Adam S. Sikora (2001), Analogies between group actions on 3-manifolds and number fields Curtis T. McMullen (2003), From dynamics on surfaces to rational points on curves Chao Li and Charmaine Sia (2012), Knots and Primes External links Mazur's knotty dictionary Algebraic number theory 3-manifolds Knot theory
Arithmetic topology
[ "Mathematics" ]
455
[ "Algebraic number theory", "Number theory" ]
23,835,696
https://en.wikipedia.org/wiki/Gordan%27s%20lemma
Gordan's lemma is a lemma in convex geometry and algebraic geometry. It can be stated in several ways. Let $A$ be a matrix of integers. Let $M$ be the set of non-negative integer solutions of $A \cdot x = 0$. Then there exists a finite subset of vectors in $M$, such that every element of $M$ is a linear combination of these vectors with non-negative integer coefficients. The semigroup of integral points in a rational convex polyhedral cone is finitely generated. An affine toric variety is an algebraic variety (this follows from the fact that the prime spectrum of the semigroup algebra of such a semigroup is, by definition, an affine toric variety). The lemma is named after the mathematician Paul Gordan (1837–1912). Some authors have misspelled it as "Gordon's lemma". Proofs There are topological and algebraic proofs. Topological proof Let $\sigma$ be the dual cone of the given rational polyhedral cone. Let $u_1, \dots, u_r$ be integral vectors so that $\sigma = \{x \mid \langle u_i, x \rangle \ge 0,\ 1 \le i \le r\}$. Then the $u_i$'s generate the dual cone $\sigma^\vee$; indeed, writing $C$ for the cone generated by the $u_i$'s, we have $\sigma^\vee \supset C$, which must be the equality. Now, if $x$ is in the semigroup $\sigma^\vee \cap \mathbb{Z}^d$, then it can be written as
$$x = \sum_i n_i u_i + \sum_i r_i u_i,$$
where $n_i$ are nonnegative integers and $0 \le r_i \le 1$. But since $x$ and the first sum on the right-hand side are integral, the second sum is a lattice point in a bounded region, and so there are only finitely many possibilities for the second sum (the topological reason). Hence, $\sigma^\vee \cap \mathbb{Z}^d$ is finitely generated. Algebraic proof The proof is based on a fact that a semigroup $S$ is finitely generated if and only if its semigroup algebra $\mathbb{C}[S]$ is a finitely generated algebra over $\mathbb{C}$. To prove Gordan's lemma, by induction (cf. the proof above), it is enough to prove the following statement: for any unital subsemigroup $S$ of $\mathbb{Z}^d$, if $S$ is finitely generated, then $S^+ = \{x \in S \mid \langle x, v \rangle \ge 0\}$, $v$ an integral vector, is finitely generated. Put $A = \mathbb{C}[S]$, which has a basis $\chi^a,\ a \in S$. It has a $\mathbb{Z}$-grading given by $\deg \chi^a = \langle a, v \rangle$. By assumption, $A$ is finitely generated and thus is Noetherian. It follows from the algebraic lemma below that $\bigoplus_{n \ge 0} A_n$ is a finitely generated algebra over $A_0$. Now, the semigroup $S_0 = \{a \in S \mid \langle a, v \rangle = 0\}$ is the image of $S$ under a linear projection, thus finitely generated, and so $A_0 = \mathbb{C}[S_0]$ is finitely generated. Hence, $S^+$ is finitely generated then. Lemma: Let $A$ be a $\mathbb{Z}_{\ge 0}$-graded ring. If $A$ is a Noetherian ring, then $A$ is a finitely generated $A_0$-algebra. Proof: Let $I$ be the ideal of $A$ generated by all homogeneous elements of $A$ of positive degree. Since $A$ is Noetherian, $I$ is actually generated by finitely many $f_i$'s, homogeneous of positive degree. If $f$ is homogeneous of positive degree, then we can write $f = \sum_i g_i f_i$ with $g_i$ homogeneous. If $f$ has sufficiently large degree, then each $g_i$ has degree positive and strictly less than that of $f$. Also, each degree piece $A_n$ is a finitely generated $A_0$-module. (Proof: Let $N_1 \subset N_2 \subset \cdots$ be an increasing chain of finitely generated $A_0$-submodules of $A_n$ with union $A_n$. Then the chain of the ideals $N_i A$ stabilizes in finite steps; so does the chain $N_1 \subset N_2 \subset \cdots$.) Thus, by induction on degree, we see $A$ is a finitely generated $A_0$-algebra. Applications A multi-hypergraph over a certain set $V$ is a multiset of subsets of $V$ (it is called a "multi-hypergraph" since each hyperedge may appear more than once). A multi-hypergraph is called regular if all vertices have the same degree. It is called decomposable if it has a proper nonempty subset that is regular too. For any integer $n$, let $D(n)$ be the maximum degree of an indecomposable multi-hypergraph on $n$ vertices. Gordan's lemma implies that $D(n)$ is finite. Proof: for each subset $S$ of vertices, define a variable $x_S$ (a non-negative integer). Define another variable $d$ (a non-negative integer).
Consider the following set of $n$ equations (one equation per vertex $v$):
$$\sum_{S:\, v \in S} x_S = d \qquad \text{for every vertex } v.$$
Every solution $(x, d)$ denotes a regular multi-hypergraph on $V$, where $x$ defines the hyperedges and $d$ is the degree. By Gordan's lemma, the set of solutions is generated by a finite set of solutions, i.e., there is a finite set $B$ of multi-hypergraphs, such that each regular multi-hypergraph is a linear combination of some elements of $B$ with non-negative integer coefficients. Every indecomposable multi-hypergraph must be in $B$ (since, by definition, it cannot be generated by other multi-hypergraphs). Hence, the set of indecomposable multi-hypergraphs is finite. See also Birkhoff algorithm is an algorithm that, given a bistochastic matrix (a matrix which solves a particular set of equations), finds a decomposition of it into integral matrices. It is related to Gordan's lemma in that it shows that the set of these matrices is generated by a finite set of integral matrices. References See also Dickson's lemma Lemmas Theorems in convex geometry Algebraic geometry
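A brute-force illustration of the first formulation (my own toy example with the single equation x + y − 2z = 0; a real Hilbert-basis computation would use dedicated software such as Normaliz or 4ti2):

```python
# Find the minimal generators of the semigroup of nonnegative integer
# solutions of x + y - 2z = 0 by searching a bounded box for solutions
# that are not sums of two smaller nonzero solutions.
from itertools import product

BOUND = 6
sols = [v for v in product(range(BOUND + 1), repeat=3)
        if v != (0, 0, 0) and v[0] + v[1] - 2 * v[2] == 0]
sol_set = set(sols)

def decomposable(v):
    """True if v = u + w for nonzero solutions u, w."""
    for u in sols:
        w = tuple(a - b for a, b in zip(v, u))
        if w != (0, 0, 0) and all(c >= 0 for c in w) and w in sol_set:
            return True
    return False

generators = [v for v in sols if not decomposable(v)]
print(generators)   # [(0, 2, 1), (1, 1, 1), (2, 0, 1)]
```

Every solution in the box is a nonnegative integer combination of these three vectors, exactly as the lemma guarantees.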
Gordan's lemma
[ "Mathematics" ]
1,032
[ "Theorems in convex geometry", "Fields of abstract algebra", "Theorems in geometry", "Algebraic geometry", "Mathematical problems", "Mathematical theorems", "Lemmas" ]
23,836,847
https://en.wikipedia.org/wiki/Ranklet
In statistics, a ranklet is an orientation-selective non-parametric feature which is based on the computation of Mann–Whitney–Wilcoxon (MWW) rank-sum test statistics. Ranklets achieve a response similar to Haar wavelets, as they share the same pattern of orientation-selectivity, multi-scale nature and a suitable notion of completeness. They were invented by Fabrizio Smeraldi in 2002. Rank-based (non-parametric) features have become popular in the field of image processing for their robustness in detecting outliers and invariance to monotonic transformations such as brightness, contrast changes and gamma correction. The MWW is a combination of the Wilcoxon rank-sum test and the Mann–Whitney U-test. It is a non-parametric alternative to the t-test used to test the hypothesis for the comparison of two independent distributions. It assesses whether two samples of observations, usually referred to as Treatment T and Control C, come from the same distribution, but they do not have to be normally distributed. Pooling the $m$ Treatment and $n$ Control values and ranking them from smallest to largest, the Wilcoxon rank-sum statistic $W_s$ is determined as the sum of the ranks assigned to the Treatment values:
$$W_s = \sum_{i \in T} \operatorname{rank}(i).$$
Subsequently, let $MW$ be the Mann–Whitney statistic defined by:
$$MW = W_s - \frac{m(m+1)}{2},$$
where $m$ is the number of Treatment values. A ranklet $R$ is defined as the normalization of $MW$ to the range $[-1, +1]$:
$$R = \frac{MW}{mn/2} - 1,$$
where a positive value means that the Treatment region is brighter than the Control region, and a negative value otherwise. Example Suppose, as an illustration, $T = \{2, 4, 5, 9\}$ and $C = \{3, 6, 7, 10\}$; then the pooled ranks of the Treatment values are 1, 3, 4 and 7, so $W_s = 15$, $MW = 15 - 10 = 5$, and $R = 5/8 - 1 = -0.375$. Hence, in the above example the Control region was a little bit brighter than the Treatment region. Method Since Ranklets are non-linear filters, they can only be applied in the spatial domain. Filtering with Ranklets involves dividing an image window W into Treatment and Control regions. Subsequently, Wilcoxon rank-sum test statistics are computed in order to determine the intensity variations among conveniently chosen regions (according to the required orientation) of the samples in W. The intensity values of both regions are then replaced by the respective ranking scores. These ranking scores determine a pairwise comparison between the T and C regions. This means that a ranklet essentially counts the number of T×C pairs which are brighter in the T set. Hence a positive value means that the Treatment values are brighter than the Control values, and vice versa. References External links Matlab RankletFilter.m -> source file to decompose an image into Intensity Ranklets Nonlinear filters Nonparametric statistics Spatial analysis
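A minimal sketch of the ranklet computation described above (my own code; it uses simple integer ranks and ignores ties for brevity):

```python
# Compute a ranklet R in [-1, +1] from Treatment and Control samples.
def ranklet(treatment, control):
    pooled = sorted(treatment + control)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks start at 1
    m, n = len(treatment), len(control)
    Ws = sum(rank[v] for v in treatment)             # Wilcoxon rank sum
    MW = Ws - m * (m + 1) / 2                        # Mann-Whitney statistic
    return MW / (m * n / 2) - 1                      # normalize to [-1, +1]

print(ranklet([2, 4, 5, 9], [3, 6, 7, 10]))   # -0.375 (Control brighter)
```

Running it on the worked example above reproduces R = −0.375.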
Ranklet
[ "Physics" ]
499
[ "Spacetime", "Space", "Spatial analysis" ]
10,617,563
https://en.wikipedia.org/wiki/Inter-domain%20routing
Inter-domain routing is data flow control and interaction between Primary Domain Controller (PDC) computers. This type of computer uses various computer protocols and services to operate. It is most commonly used to multicast between internet domains. Internet use An Internet service provider (ISP) is assigned a unique domain name for access, which corresponds to a unique number; the number for each ISP is stored within DNS servers. The DNS servers interpret the ISP's domain name and provide the appropriate IP address number. The domain is under the control of a specialized computer called a PDC (primary domain controller). This computer holds records of all the user accounts within the domain, their rights to access information, and lists of approved system operatives. The PDC is backed up by an SDC (secondary domain controller); this computer synchronises itself with the PDC and takes over the role in the event of a PDC failure. Multiple replication servers connect to these control computers and are routed to the Internet backbone to provide the requested data to and from the domain. Communication protocols Internet protocols that are focused on inter-domain functions include: Border Gateway Multicast Protocol, Classless Inter-Domain Routing, Multicast Source Discovery Protocol, and Protocol Independent Multicast. Services A PDC uses a number of special computer programs to announce its presence to other domain controllers. It uses the Windows Internet Naming Service (WINS) and Browser services to allow other computers to gain access to digital information that it has control over. Other The opposite of inter-domain routing is intra-domain routing, routing within a domain or an autonomous system. References See also List of routing protocols Network architecture
Inter-domain routing
[ "Engineering" ]
344
[ "Network architecture", "Computer networks engineering" ]
10,617,707
https://en.wikipedia.org/wiki/Sewer%20gas%20destructor%20lamp
A sewer gas destructor lamp is a type of lamp used to remove sewer gases and their hazards. Background Biogas forming in sewers via anaerobic digestion can be a potentially foul-smelling and explosive hazard (chiefly due to chemical spills). Unlike ordinary gas lamps used for street lighting, sewer gas destructor lamps serve mainly to remove sewer gases and their hazards. Joseph Edmund Webb of Birmingham patented a sewer gas destructor lamp. Many of these lamps were installed in the UK in towns and cities including Sheffield, Winchester, Durham, Whitley Bay, Monkseaton and Blyth, Northumberland. With a flame generated by burning town gas, sewer gases were drawn from the sewer below and discharged above the heads of passers-by to dissipate odours. The flame in the lamp does not actually generate sufficient thermal energy to combust any of the odour compounds in the air. Improvements JE Webb addressed a number of problems with the lamps with further patents. His patent GB189408193, approved 2 March 1895, stated: It has also been found that when the gases are drawn out from the sewer by the burning of ordinary gas a sudden flushing of the sewer might prevent any sewer gas from escaping, and thus momentarily cause the gas jets to be extinguished. In order to solve this problem the patent specifies an arrangement of burners, air supply and heat reflection designed to produce an intense heat at the point of combustion. Sheffield The lamps were installed at places where sewer gases were likely to collect, such as at the tops of hills. The city of Sheffield, being a hilly area, had many sewer gas destructor lamps and many remain. Sheffield on the Net has a section on the old gas lamps, which states: Eighty-four of these street lamps were erected in Sheffield between 1914 and 1935, the largest number in any British town, due mainly to the many hills in the area where gas could be trapped. The Sheffield Star newspaper reported a local survey of the lamps by W Jessop. This survey found 24 remaining lamps in Sheffield. Twenty of these are grade II listed. In 2016 Sheffield residents campaigned for the lamps to be restored when the city council's replacement of every lamppost began, as part of the 25-year Streets Ahead road improvement programme. Sheffield Council plans to repaint the lamps and convert them to solar power with LED lights to replicate the original lighting. Sheffield's four gas-powered lamps will remain so after their restoration. It is planned that the lamps will be restored by December 2017. London Only one working sewer gas destructor lamp remains in London; however, due to a traffic accident, the original lamp was damaged and has been replaced with a replica. This lamp is currently in use and can be found burning day and night down the side street of the Savoy Hotel in London. The story of this lamp has given rise to locals referring to Carting Lane as 'Farting Lane'. Current justification of the lamps Although many of the existing lamps in Sheffield and elsewhere are now disused, the lamps still have a use today in reducing odours. They do not prevent explosions as the concentration of methane in sewer gas is below the lower explosion limit (LEL) for methane. If the methane concentration were over the explosive limit (≈ 50,000 ppmv) the open flames in the lamps would burn like flares. References Humphreys, G.W.
“Main Drainage of London”, London County Council, 1930 External links Photo of lamp in Seaton Delaval, Northumberland, UK History of the Webb Lamp Co Ltd Anaerobic digestion Street furniture
Sewer gas destructor lamp
[ "Chemistry", "Engineering" ]
743
[ "Water technology", "Anaerobic digestion", "Environmental engineering" ]
10,620,457
https://en.wikipedia.org/wiki/Lambek%E2%80%93Moser%20theorem
The Lambek–Moser theorem is a mathematical description of partitions of the natural numbers into two complementary sets. For instance, it applies to the partition of numbers into even and odd, or into prime and non-prime (one and the composite numbers). There are two parts to the Lambek–Moser theorem. One part states that any two non-decreasing integer functions that are inverse, in a certain sense, can be used to split the natural numbers into two complementary subsets, and the other part states that every complementary partition can be constructed in this way. When a formula is known for the nth natural number in a set, the Lambek–Moser theorem can be used to obtain a formula for the nth number not in the set. The Lambek–Moser theorem belongs to combinatorial number theory. It is named for Joachim Lambek and Leo Moser, who published it in 1954, and should be distinguished from an unrelated theorem of Lambek and Moser, later strengthened by Wild, on the number of primitive Pythagorean triples. It extends Rayleigh's theorem, which describes complementary pairs of Beatty sequences, the sequences of rounded multiples of irrational numbers. From functions to partitions Let f be any function from positive integers to non-negative integers that is both non-decreasing (each value in the sequence is at least as large as any earlier value) and unbounded (it eventually increases past any fixed value). The sequence of its values may skip some numbers, so it might not have an inverse function with the same properties. Instead, define a non-decreasing and unbounded integer function f* that is as close as possible to the inverse in the sense that, for all positive integers m and n, f(m) < n if and only if m ≤ f*(n). Equivalently, f*(n) may be defined as the number of values m for which f(m) < n. It follows from either of these definitions that f** = f. If the two functions f and f* are plotted as histograms, they form mirror images of each other across the diagonal line y = x. From these two functions f and f*, define two more functions F and G, from positive integers to positive integers, by F(n) = f(n) + n and G(n) = f*(n) + n. Then the first part of the Lambek–Moser theorem states that each positive integer occurs exactly once among the values of either F or G. That is, the values obtained from F and the values obtained from G form two complementary sets of positive integers. More strongly, each of these two functions maps its argument n to the nth member of its set in the partition. As an example of the construction of a partition from a function, let f(n) = n², the function that squares its argument. Then its inverse is the square root function, whose closest integer approximation (in the sense used for the Lambek–Moser theorem) is f*(n) = ⌈√n⌉ − 1. These two functions give F(n) = n² + n and G(n) = n + ⌈√n⌉ − 1. For n = 1, 2, 3, ... the values of F are the pronic numbers 2, 6, 12, 20, 30, 42, 56, 72, 90, 110, ... while the values of G are 1, 3, 4, 5, 7, 8, 9, 10, 11, 13, 14, .... These two sequences are complementary: each positive integer belongs to exactly one of them. The Lambek–Moser theorem states that this phenomenon is not specific to the pronic numbers, but rather it arises for any choice of f with the appropriate properties. From partitions to functions The second part of the Lambek–Moser theorem states that this construction of partitions from inverse functions is universal, in the sense that it can explain any partition of the positive integers into two infinite parts. If F and G are any two complementary increasing sequences of integers, one may construct a pair of functions f and f* from which this partition may be derived using the Lambek–Moser theorem. To do so, define f(n) = F(n) − n and f*(n) = G(n) − n.
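The construction described above is easy to check by direct computation. The following is a minimal Python sketch (not part of the original article; the function names and the bound of 500 are chosen only for illustration) that builds F and G from the squaring example and verifies that their values partition the positive integers up to that bound:

def f(n):
    return n * n                                          # the squaring function from the example

def f_star(n):
    return sum(1 for m in range(1, n + 1) if f(m) < n)    # number of m with f(m) < n

def F(n):
    return f(n) + n                                       # nth member of one set (the pronic numbers)

def G(n):
    return f_star(n) + n                                  # nth member of the complementary set

LIMIT = 500
hits = sorted([F(n) for n in range(1, LIMIT) if F(n) <= LIMIT] +
              [G(n) for n in range(1, LIMIT) if G(n) <= LIMIT])
assert hits == list(range(1, LIMIT + 1))                  # every positive integer appears exactly once
print(hits[:12])                                          # 1, 2, 3, ... with no gaps or repeats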
One of the simplest examples to which this could be applied is the partition of positive integers into even and odd numbers. The functions F and G should give the nth even or nth odd number, respectively, so F(n) = 2n and G(n) = 2n − 1. From these are derived the two functions f(n) = F(n) − n = n and f*(n) = G(n) − n = n − 1. They form an inverse pair, and the partition generated via the Lambek–Moser theorem from this pair is just the partition of the positive integers into even and odd numbers. Another integer partition, into evil numbers and odious numbers (by the parity of the binary representation) uses almost the same functions, adjusted by the values of the Thue–Morse sequence. Limit formula In the same work in which they proved the Lambek–Moser theorem, Lambek and Moser provided a method of going directly from F, the function giving the nth member of a set of positive integers, to G, the function giving the nth non-member, without going through f and f*. Let F*(n) denote the number of values of m for which F(m) ≤ n; this is an approximation to the inverse function of F, but (because it uses ≤ in place of <) offset by one from the type of inverse used to define f* from f. Then, for any n, G(n) is the limit of the sequence n, n + F*(n), n + F*(n + F*(n)), ..., meaning that this sequence eventually becomes constant and the value it takes when it does is G(n). Lambek and Moser used the prime numbers as an example, following earlier work by Viggo Brun and D. H. Lehmer. If π(n) is the prime-counting function (the number of primes less than or equal to n), then the nth non-prime (1 or a composite number) is given by the limit of the sequence n, n + π(n), n + π(n + π(n)), .... For some other sequences of integers, the corresponding limit converges in a fixed number of steps, and a direct formula for the complementary sequence is possible. In particular, the nth positive integer that is not a kth power can be obtained from the limiting formula as n + ⌊(n + ⌊n^(1/k)⌋)^(1/k)⌋. History and proofs The theorem was discovered by Leo Moser and Joachim Lambek, who published it in 1954. Moser and Lambek cite the previous work of Samuel Beatty on Beatty sequences as their inspiration, and also cite the work of Viggo Brun and D. H. Lehmer from the early 1930s on methods related to their limiting formula for G. Edsger W. Dijkstra has provided a visual proof of the result, and later another proof based on algorithmic reasoning. Yuval Ginosar has provided an intuitive proof based on an analogy of two athletes running in opposite directions around a circular racetrack. Related results For non-negative integers A variation of the theorem applies to partitions of the non-negative integers, rather than to partitions of the positive integers. For this variation, every partition corresponds to a Galois connection of the ordered non-negative integers to themselves. This is a pair of non-decreasing functions (f, f*) with the property that, for all x and y, f(x) ≤ y if and only if x ≤ f*(y). The corresponding functions F and G are defined slightly less symmetrically by F(n) = f(n) + n and G(n) = f*(n) + n + 1. For functions defined in this way, the values of F and G (for non-negative arguments, rather than positive arguments) form a partition of the non-negative integers, and every partition can be constructed in this way. Rayleigh's theorem Rayleigh's theorem states that for two positive irrational numbers r and s, both greater than one, with 1/r + 1/s = 1, the two sequences ⌊ri⌋ and ⌊si⌋ for i = 1, 2, 3, ..., obtained by rounding down to an integer the multiples of r and s, are complementary. It can be seen as an instance of the Lambek–Moser theorem with f(n) = ⌊rn⌋ − n and f*(n) = ⌊sn⌋ − n.
The condition that r and s be greater than one implies that these two functions are non-decreasing; the derived functions are F(n) = ⌊rn⌋ and G(n) = ⌊sn⌋. The sequences of values of F and G forming the derived partition are known as Beatty sequences, after Samuel Beatty's 1926 rediscovery of Rayleigh's theorem. See also Hofstadter Figure-Figure sequences, another pair of complementary sequences to which the Lambek–Moser theorem can be applied Notes References Solutions by Beatty, A. Ostrowski, J. Hyslop, and A. C. Aitken, vol. 34 (1927), pp. 159–160 Integer sequences Theorems in number theory
Lambek–Moser theorem
[ "Mathematics" ]
1,561
[ "Sequences and series", "Mathematical theorems", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Theorems in number theory", "Mathematical problems", "Numbers", "Number theory" ]
10,622,169
https://en.wikipedia.org/wiki/King%20post
A king post (or king-post or kingpost) is a central vertical post used in architectural or bridge designs, working in tension to support a beam below from a truss apex above (whereas a crown post, though visually similar, supports items above from the beam below). In aircraft design a strut called a king post acts in compression, similarly to an architectural crown post. Usage in mechanical plant and marine engineering differs again, as noted below. Architecture A king post extends vertically from a crossbeam (the tie beam) to the apex of a triangular truss. The king post, itself in tension, connects the apex of the truss with its base, holding up the tie beam (also in tension) at the base of the truss. The post can be replaced with an iron rod called a king rod (or king bolt) and thus a king rod truss. The king post truss is also called a "Latin truss". In traditional timber framing, a crown post looks similar to a king post, but it is very different structurally: whereas the king post is in tension, usually supporting the tie beam as a truss, the crown post is supported by the tie beam and is in compression. The crown post rises to a crown plate immediately below collar beams which it supports; it does not rise to the apex like a king post. Historically a crown post was called a king post in England but this usage is obsolete. An alternative truss construction uses two queen posts (or queen-posts). These vertical posts, positioned along the base of the truss, are supported by the sloping sides of the truss, rather than reaching its apex. A development adds a collar beam above the queen posts, which are then termed queen struts. A section of the tie beam between the queen posts may be removed to create a hammerbeam roof. King post truss The king post truss is used for simple roof trusses and short-span bridges. It is the simplest form of truss in that it is constructed of the fewest truss members (individual lengths of wood or metal). The truss consists of two diagonal members that meet at the apex of the truss, one horizontal beam that serves to tie the bottom end of the diagonals together, and the king post which connects the apex to the horizontal beam below. For a roof truss, the diagonal members are called rafters, and the horizontal member may serve as a ceiling joist. A bridge would require two king post trusses with the driving surface between them. A roof usually uses many side-by-side trusses depending on the size of the structure. Pont-y-Cafnau, the world's first iron railway bridge, is of the king post type. History King posts were used in timber-framed roof construction in Roman buildings, and in medieval architecture in buildings such as parish churches and tithe barns. The oldest surviving roof truss in the world is a king post truss in Saint Catherine's Monastery, Egypt, built between 548 and 565. King posts also appear in Gothic Revival architecture, Queen Anne style architecture and occasionally in modern construction. King post trusses are also used as a structural element in wood and metal bridges. A painting by Karl Blechen circa 1833 illustrating construction of the second Devil's Bridge (Teufelsbrücke) in the Schöllenen Gorge shows multiple king posts suspended from the apex of the falsework upon which the masonry arch has been laid. In this example, beams in compression are supported by each king post several feet below the apex, and the bottom of the king posts can clearly be seen to be unsupported. 
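As a rough illustration of how the members of a king post truss share load, the following Python sketch uses an idealized pin-jointed model with assumed span, rise, and load (none of these figures come from the article) for the case of a single point load hung from the foot of the king post, such as a ceiling load carried at midspan of the tie beam:

import math

# Assumed, illustrative dimensions and load.
span = 8.0      # m, length of the tie beam between supports
rise = 2.0      # m, height of the apex (and king post) above the tie beam
W = 20e3        # N, load hung from the bottom of the king post

theta = math.atan2(rise, span / 2)                        # rafter angle above horizontal

# Idealized pin-jointed analysis for a load applied at the king post foot:
king_post_tension = W                                     # the post carries the whole load up to the apex
rafter_compression = W / (2 * math.sin(theta))            # vertical equilibrium at the apex
tie_beam_tension = rafter_compression * math.cos(theta)   # horizontal thrust resisted by the tie beam

print(f"rafter angle: {math.degrees(theta):.1f} deg")
print(f"king post tension: {king_post_tension/1e3:.1f} kN")
print(f"rafter compression: {rafter_compression/1e3:.1f} kN")
print(f"tie beam tension: {tie_beam_tension/1e3:.1f} kN")

The sketch reflects the structural point made above: the king post and tie beam work in tension while the rafters work in compression; it is not a substitute for a real structural analysis.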
Norman truss Architectural historians in the French colonial cities of St Louis, Missouri, and New Orleans, Louisiana, use the term "Norman roof" to refer to a steeply pitched roof; it is supported by what they call a "Norman truss", which is similar to a king post truss. This is a through-purlin truss consisting of a tie beam and paired truss blades, with a central king post to support the roof ridge. The name derives from a belief that this system of construction was introduced to North America by settlers from Normandy in northern France, but it is really a misnomer as the system was more widely used than that. The difference between a Norman truss and a king post truss is that the tie beam in a Norman truss is technically a collar beam (a beam between the rafters above the rafter feet), whereas in a king post truss the rafters land on top of a tie beam. Aviation King posts are also used in the construction of some wire-braced aircraft, where a king post supports the top cables or "ground wires" supporting the wing. Only on the ground are these wires from the kingpost in tension, while in the air under positive g flight they are unloaded. Mechanical plant The very robust hinge connecting the boom to the chassis in a backhoe, similar in function and appearance to a large automotive kingpin, is called a king post. Marine engineering On a cargo ship or oiler a king post is an upright with cargo-handling or fueling rig devices attached to it. On a cargo vessel king posts are designed for handling cargo, and so are located at the forward or after end of a hatch. For an oiler they are located over the fuel transfer lines. See also Strut Cabane strut Queen post Timber roof truss References Notes Bibliography External links Bridge Basics Timber roofs Crown post roofs King and Queen post roofs on the former mansion at Parlington, near Aberford in Yorkshire, England An Illustrated Roof Glossary (archived) Architectural elements Structural engineering Trusses Timber framing Truss bridges by type
King post
[ "Technology", "Engineering" ]
1,141
[ "Structural engineering", "Timber framing", "Building engineering", "Trusses", "Structural system", "Construction", "Architectural elements", "Civil engineering", "Components", "Architecture" ]
10,624,594
https://en.wikipedia.org/wiki/Krypton
Krypton (from 'the hidden one') is a chemical element; it has symbol Kr and atomic number 36. It is a colorless, odorless noble gas that occurs in trace amounts in the atmosphere and is often used with other rare gases in fluorescent lamps. Krypton is chemically inert. Krypton, like the other noble gases, is used in lighting and photography. Krypton light has many spectral lines, and krypton plasma is useful in bright, high-powered gas lasers (krypton ion and excimer lasers), each of which resonates and amplifies a single spectral line. Krypton fluoride also makes a useful laser medium. From 1960 to 1983, the official definition of the metre was based on the wavelength of one spectral line of krypton-86, because of the high power and relative ease of operation of krypton discharge tubes. History Krypton was discovered in Britain in 1898 by William Ramsay, a Scottish chemist, and Morris Travers, an English chemist, in residue left from evaporating nearly all components of liquid air. Neon was discovered by a similar procedure by the same workers just a few weeks later. William Ramsay was awarded the 1904 Nobel Prize in Chemistry for discovery of a series of noble gases, including krypton. In 1960, the International Bureau of Weights and Measures defined the meter as 1,650,763.73 wavelengths of light emitted in the vacuum corresponding to the transition between the 2p10 and 5d5 levels in the isotope krypton-86. This agreement replaced the 1889 international prototype meter, which was a metal bar located in Sèvres. This also made obsolete the 1927 definition of the ångström based on the red cadmium spectral line, replacing it with 1 Å = 10−10 m. The krypton-86 definition lasted until the October 1983 conference, which redefined the meter as the distance that light travels in vacuum during 1/299,792,458 s. Characteristics Krypton is characterized by several sharp emission lines (spectral signatures) the strongest being green and yellow. Krypton is one of the products of uranium fission. Solid krypton is white and has a face-centered cubic crystal structure, which is a common property of all noble gases (except helium, which has a hexagonal close-packed crystal structure). Isotopes Naturally occurring krypton in Earth's atmosphere is composed of five stable isotopes, plus one isotope (78Kr) with such a long half-life (9.2×1021 years) that it can be considered stable. (This isotope has the third-longest known half-life among all isotopes for which decay has been observed; it undergoes double electron capture to 78Se). In addition, about thirty unstable isotopes and isomers are known. Traces of 81Kr, a cosmogenic nuclide produced by the cosmic ray irradiation of 80Kr, also occur in nature: this isotope is radioactive with a half-life of 230,000 years. Krypton is highly volatile and does not stay in solution in near-surface water, but 81Kr has been used for dating old (50,000–800,000 years) groundwater. 85Kr is an inert radioactive noble gas with a half-life of 10.76 years. It is produced by the fission of uranium and plutonium, such as in nuclear bomb testing and nuclear reactors. 85Kr is released during the reprocessing of fuel rods from nuclear reactors. Concentrations at the North Pole are 30% higher than at the South Pole due to convective mixing. Chemistry Like the other noble gases, krypton is chemically highly unreactive. 
The rather restricted chemistry of krypton in the +2 oxidation state parallels that of the neighboring element bromine in the +1 oxidation state; due to the scandide contraction it is difficult to oxidize the 4p elements to their group oxidation states. Until the 1960s no noble gas compounds had been synthesized. Following the first successful synthesis of xenon compounds in 1962, synthesis of krypton difluoride () was reported in 1963. In the same year, was reported by Grosse, et al., but was subsequently shown to be a mistaken identification. Under extreme conditions, krypton reacts with fluorine to form KrF2 according to the following equation: Kr + F2 -> KrF2 Krypton gas in a krypton fluoride laser absorbs energy from a source, causing the krypton to react with fluorine gas, producing the exciplex krypton fluoride, a temporary complex in an excited energy state: 2Kr + F2 -> 2KrF The complex can undergo spontaneous or stimulated emission, reducing its energy state to a metastable, but highly repulsive ground state. The ground state complex quickly dissociates into unbound atoms: 2KrF -> 2Kr + F2 The result is an exciplex laser which radiates energy at 248 nm, near the ultraviolet portion of the spectrum, corresponding with the energy difference between the ground state and the excited state of the complex. Compounds with krypton bonded to atoms other than fluorine have also been discovered. There are also unverified reports of a barium salt of a krypton oxoacid. ArKr+ and KrH+ polyatomic ions have been investigated and there is evidence for KrXe or KrXe+. The reaction of with produces an unstable compound, , that contains a krypton-oxygen bond. A krypton-nitrogen bond is found in the cation [HC≡N–Kr–F], produced by the reaction of with [HC≡NH][AsF] below −50 °C. HKrCN and HKrC≡CH (krypton hydride-cyanide and hydrokryptoacetylene) were reported to be stable up to 40 K. Krypton hydride (Kr(H2)4) crystals can be grown at pressures above 5 GPa. They have a face-centered cubic structure where krypton octahedra are surrounded by randomly oriented hydrogen molecules. Natural occurrence Earth has retained all of the noble gases that were present at its formation except helium. Krypton's concentration in the atmosphere is about 1 ppm. It can be extracted from liquid air by fractional distillation. The amount of krypton in space is uncertain, because measurement is derived from meteoric activity and solar winds. The first measurements suggest an abundance of krypton in space. Applications Krypton's multiple emission lines make ionized krypton gas discharges appear whitish, which in turn makes krypton-based bulbs useful in photography as a white light source. Krypton is used in some photographic flashes for high speed photography. Krypton gas is also combined with mercury to make luminous signs that glow with a bright greenish-blue light. Krypton is mixed with argon in energy efficient fluorescent lamps, reducing the power consumption, but also reducing the light output and raising the cost. Krypton costs about 100 times as much as argon. Krypton (along with xenon) is also used to fill incandescent lamps to reduce filament evaporation and allow higher operating temperatures. Krypton's white discharge is sometimes used as an artistic effect in gas discharge "neon" tubes. 
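As a quick numerical check of the krypton fluoride exciplex laser described above, the photon energy corresponding to its 248 nm emission can be computed directly; the short Python sketch below is illustrative and uses only standard physical constants:

# Photon energy of the KrF excimer laser line at 248 nm.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt
wavelength = 248e-9  # m

energy_joules = h * c / wavelength
print(f"{energy_joules / eV:.1f} eV per photon")   # roughly 5 eV, in the ultraviolet

This energy corresponds to the gap between the excited exciplex state and its repulsive ground state mentioned above.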
Krypton produces much higher light power than neon in the red spectral line region, and for this reason, red lasers for high-power laser light-shows are often krypton lasers with mirrors that select the red spectral line for laser amplification and emission, rather than the more familiar helium-neon variety, which could not achieve the same multi-watt outputs. The krypton fluoride laser is important in nuclear fusion energy research in confinement experiments. The laser has high beam uniformity, short wavelength, and the spot size can be varied to track an imploding pellet. In experimental particle physics, liquid krypton is used to construct quasi-homogeneous electromagnetic calorimeters. A notable example is the calorimeter of the NA48 experiment at CERN containing about 27 tonnes of liquid krypton. This usage is rare, since liquid argon is less expensive. The advantage of krypton is a smaller Molière radius of 4.7 cm, which provides excellent spatial resolution with little overlapping. The other parameters relevant for calorimetry are: radiation length of X0=4.7 cm, and density of 2.4 g/cm3. Krypton-83 has application in magnetic resonance imaging (MRI) for imaging airways. In particular, it enables the radiologist to distinguish between hydrophobic and hydrophilic surfaces containing an airway. Although xenon has potential for use in computed tomography (CT) to assess regional ventilation, its anesthetic properties limit its fraction in the breathing gas to 35%. A breathing mixture of 30% xenon and 30% krypton is comparable in effectiveness for CT to a 40% xenon fraction, while avoiding the unwanted effects of a high partial pressure of xenon gas. The metastable isotope krypton-81m is used in nuclear medicine for lung ventilation/perfusion scans, where it is inhaled and imaged with a gamma camera. Krypton-85 in the atmosphere has been used to detect clandestine nuclear fuel reprocessing facilities in North Korea and Pakistan. Those facilities were detected in the early 2000s and were believed to be producing weapons-grade plutonium. Krypton-85 is a medium lived fission product and thus escapes from spent fuel when the cladding is removed. Krypton is used occasionally as an insulating gas between window panes. SpaceX Starlink uses krypton as a propellant for their electric propulsion system. Precautions Krypton is considered to be a non-toxic asphyxiant. Being lipophilic, krypton has a significant anaesthetic effect (although the mechanism of this phenomenon is still not fully clear, there is good evidence that the two properties are mechanistically related), with narcotic potency seven times greater than air, and breathing an atmosphere of 50% krypton and 50% natural air (as might happen in the locality of a leak) causes narcosis in humans similar to breathing air at four times atmospheric pressure. This is comparable to scuba diving at a depth of and could affect anyone breathing it. References Further reading William P. Kirk "Krypton 85: a Review of the Literature and an Analysis of Radiation Hazards", Environmental Protection Agency, Office of Research and Monitoring, Washington (1972) External links Krypton at The Periodic Table of Videos (University of Nottingham) Krypton Fluoride Lasers, Plasma Physics Division Naval Research Laboratory Chemical elements Noble gases
Krypton
[ "Physics", "Materials_science" ]
2,269
[ "Noble gases", "Chemical elements", "Nonmetals", "Atoms", "Matter" ]
10,626,173
https://en.wikipedia.org/wiki/Sun%20valve
A sun valve (Swedish: solventil, "solar valve") is a flow control valve that automatically shuts off gas flow during daylight. It earned its inventor Gustaf Dalén the 1912 Nobel Prize in Physics. Subsequently other variants of sun valve were developed for different uses. Dalén's valve The valve was the key component of the Dalén light used in lighthouses from the 1900s through the 1960s, by which time electric lighting was dominant. Prominent engineers, such as Thomas Edison, doubted that the device could work. The German patent office required a demonstration before approving the patent application. Design The valve is controlled by four metal rods enclosed in a glass tube. The central rod that is blackened is surrounded by the three polished rods. As sunlight falls onto all of the rods, the absorbed heat of the sun expands the dark rod, switching a valve to cut the gas supply. After sunset, the central rod cools down, contracting to become the same length as the polished rods and opening the gas supply. The gas is lit by the small, always-burning pilot light. Reliability Dalen's system of acetylene lighting for marine navigation proved very reliable, as exemplified by the lighthouse at Chumbe Island off Zanzibar. This lighthouse was constructed in 1904 and converted to unstaffed automatic acetylene gas operation in 1926. The acetylene lighting installation, controlled by a sun valve, remained in use until the lighthouse was converted to a solar-powered (photovoltaic) system in 2013. Other variants In 1921 Francis Everard Lamplough (an engineer with AGA's rival firm in lighthouse provision: Chance Brothers) patented an alternative 'light valve' in the hope of breaking Dalén's effective monopoly. In subsequent years it was installed on several lighthouses and beacons, but because of its dependence on liquid it could only be used in static locations (unlike Dalén's valves, very many of which were installed on floating buoys). Lamplough's 'valve' was a form of rocker switch, on which were mounted two glass bulbs, one black, the other clear, part-filled with liquid ether and linked by a tube. During the day, heat from the sunlight would cause the air in the black bulb to expand, forcing the liquid into the clear bulb, the additional weight tipping the switch and cutting off the current to the lamp; at night, or at other times of insufficient daylight, the process was reversed, reconnecting the current. In 1935 one Edwin H. Pendleton was granted a US patent for a sun valve activated by a pair of bimetallic strips. References Valves
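To put a rough number on the mechanism described in the Design section above, the sketch below estimates how much farther a blackened metal rod elongates than its polished neighbours when sunlight warms it. The rod length, material, and temperature rise are assumed for illustration only and are not taken from the article:

# Differential thermal expansion of the blackened rod relative to the polished rods.
alpha = 19e-6      # 1/K, linear expansion coefficient (brass assumed)
length = 0.5       # m, assumed rod length
delta_T = 30.0     # K, assumed extra heating of the blackened rod over the polished rods

extra_expansion = alpha * length * delta_T
print(f"extra elongation: {extra_expansion * 1e3:.2f} mm")   # about 0.3 mm under these assumptions

A fraction of a millimetre of differential movement is small, but through a suitable lever it is enough to open or close a gas valve.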
Sun valve
[ "Physics", "Chemistry" ]
539
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
10,626,778
https://en.wikipedia.org/wiki/Transdermal
Transdermal is a route of administration wherein active ingredients are delivered across the skin for systemic distribution. Examples include transdermal patches used for medicine delivery. The drug is administered in the form of a patch or ointment that delivers the drug into the circulation for systemic effect. Techniques Obstacles Although the skin is a large and logical target for drug delivery, its basic functions limit its utility for this purpose. The skin functions mainly to protect the body from external penetration (by e.g. harmful substances and microorganisms) and to contain all body fluids. There are two important layers to the human skin: (1) the epidermis and (2) the dermis. For transdermal delivery, drugs must pass through the two sublayers of the epidermis to reach the microcirculation of the dermis. The stratum corneum is the top layer of the skin and varies in thickness from approximately ten to several hundred micrometres, depending on the region of the body. It is composed of layers of dead, flattened keratinocytes surrounded by a lipid matrix, which together act as a brick-and-mortar system that is difficult to penetrate. The stratum corneum provides the most significant barrier to diffusion. In fact, the stratum corneum is the barrier to approximately 90% of transdermal drug applications. However, nearly all molecules penetrate it to some minimal degree. Below the stratum corneum lies the viable epidermis. This layer is about ten times as thick as the stratum corneum; however, diffusion is much faster here due to the greater degree of hydration in the living cells of the viable epidermis. Below the epidermis lies the dermis, which is approximately one millimeter thick, 100 times the thickness of the stratum corneum. The dermis contains small vessels that distribute drugs into the systemic circulation and to regulate temperature, a system known as the skin's microcirculation. Transdermal pathways There are two main pathways by which drugs can cross the skin and reach the systemic circulation. The more direct route is known as the transcellular pathway. Transcellular pathway By this route, drugs cross the skin by directly passing through both the phospholipids membranes and the cytoplasm of the dead keratinocytes that constitute the stratum corneum. Although this is the path of shortest distance, the drugs encounter significant resistance to permeation. This resistance is caused because the drugs must cross the lipophilic membrane of each cell, then the hydrophilic cellular contents containing keratin, and then the phospholipid bilayer of the cell one more time. This series of steps is repeated numerous times to traverse the full thickness of the stratum corneum. Intercellular pathway The other more common pathway through the skin is via the intercellular route. Drugs crossing the skin by this route must pass through the small spaces between the cells of the skin, making the route more tortuous. Although the thickness of the stratum corneum is only about 20 μm, the actual diffusional path of most molecules crossing the skin is on the order of 400 μm. The 20-fold increase in the actual path of permeating molecules greatly reduces the rate of drug penetration. Recent research has established that the intercellular route can be dramatically enhanced by attending to the physical chemistry of the system solubilizing the active pharmaceutical ingredient, rendering a dramatically more efficient delivery of payload and enabling the delivery of most compounds via this route. 
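Steady-state transdermal delivery across the stratum corneum is commonly modelled with Fick's first law, in which the flux is the product of a permeability coefficient and the drug concentration in the vehicle. The Python sketch below is a generic illustration only; the permeability, concentration, and patch area are assumed values, not figures from the article:

# Steady-state transdermal flux estimate using Fick's first law: J = Kp * Cv
Kp = 1e-3        # cm/h, assumed permeability coefficient through the stratum corneum
Cv = 20.0        # mg/cm^3, assumed drug concentration in the patch or vehicle
area = 10.0      # cm^2, assumed patch contact area

flux = Kp * Cv                 # mg per cm^2 per hour
delivery_rate = flux * area    # mg per hour across the whole patch
print(f"flux: {flux:.3f} mg/cm^2/h, patch delivery: {delivery_rate:.2f} mg/h")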
Microneedles A third pathway to breach the stratum corneum layer is via tiny microchannels created by a medical micro-needling device, of which there are many brands and variants. Investigations at the University of Marburg, Germany, using a standard Franz diffusion cell showed that this approach is efficient in enhancing skin penetration for lipophilic as well as hydrophilic compounds. The micro-needling approach is also seen as 'the vaccine of the future'. The microneedles can be hollow, solid, coated, dissolving, or hydrogel-forming. Some have regulatory approval. Microneedle devices/patches can be used to deliver nanoparticle medicines. Devices and formulations Devices and formulations for transdermally administered substances include: Transdermal patch Transdermal gel, specially formulated See also Invasomes References Medical treatments Routes of administration
Transdermal
[ "Chemistry" ]
913
[ "Pharmacology", "Routes of administration" ]
10,626,981
https://en.wikipedia.org/wiki/Da%20Vinci%20Systems
da Vinci Systems was an American digital cinema company founded in 1984 in Coral Springs, Florida as a spinoff of Video Tape Associates. It was known for its hardware-based color correction products, GPU-based color grading, digital mastering systems, and film restoration and remastering systems. It was one of the earliest pioneers in post-production products. The company was owned by Dynatech Corporation (Acterna after 2000) for the majority of its lifespan until being bought by JDS Uniphase in 2005 and by Blackmagic Design in 2009. Company history In 1982, Video Tape Associates (VTA), a Hollywood, Florida-based production/post-production facility, began developing the Wiz for internal use and introduced it to the public the following year. The Wiz controlled early telecines such as the RCA FR-35 and the Bosch FDL 60 and offered basic primary and secondary color correction. The American post-production facilities company EDITEL Group asked VTA to build multiple Wiz systems for them. Fifteen units were made and subsequently purchased by other post-production facilities across the country. The Wiz served as a major inspiration/prototype for what would become the da Vinci Classic. In 1984, VTA Technologies, the research and development division of VTA Post, broke away from its parent company to become da Vinci Systems, Inc. One of its four founders was Bob Hemsky. The da Vinci was the only film-to-tape or tape-to-tape color correction system on the market that offered the capability to create a basic rectangular window shape isolating a secondary color correction. In 1986, da Vinci was acquired by Dynatech Corporation and managed within their Utah Scientific business. Two years later, da Vinci Systems, LLC became its own entity as one of roughly eight video manufacturing companies within the Dynatech Video Group. In 1998, da Vinci Academy was formed to provide training to the growing number of aspiring colorists. The following year, da Vinci acquired Nevada-based Sierra Design Labs, at that time a worldwide leader in HDTV storage and workstation interface solutions. In 2000, da Vinci's parent company, Dynatech, became Acterna after a merger with Wavetek, Wandel & Goltermann and TTC. Acterna then acquired Singaporean company Nirvana Digital to add the Revival film restoration system to its production line. In 2004, da Vinci had offices in Coral Springs, Los Angeles, New York, London, Paris, Germany, and Singapore. On August 3, 2005, JDS Uniphase acquired Acterna, including da Vinci systems, for $450 million and 200 million shares of JDSU common stock. In September 2009, Blackmagic Design's purchase of da Vinci Systems was announced. Product history da Vinci Classic (1984-1990) The da Vinci, now known as the da Vinci Classic, was launched in 1984 and manufactured until 1990. At the time of its introduction, it was the only film-to-tape or tape-to-tape color correction system available that offered the capability to correct secondary colors by isolating them. The analog grading system became the most popular color corrector for telecines like the Fernseh FL 60 and Rank Cintel Mark 3. The Classic had a customized external control panel with primary and secondary processing and an internal NTSC encoder. It operated on a Motorola 68000 Multibus 1 system computer. Early models had knob-only color correction controls; trackball control was introduced later. 
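The primary color correction performed by systems such as the Classic is conceptually similar to the lift-gamma-gain controls used in modern grading tools. The Python sketch below is a generic illustration of that idea, not a description of da Vinci's actual processing, and the parameter values are arbitrary:

def primary_correct(rgb, lift=(0.0, 0.0, 0.0), gamma=(1.0, 1.0, 1.0), gain=(1.0, 1.0, 1.0)):
    """Generic lift-gamma-gain primary correction on a normalized RGB pixel (0.0 to 1.0)."""
    out = []
    for value, lf, gm, gn in zip(rgb, lift, gamma, gain):
        v = value * gn + lf            # gain scales the highlights, lift offsets the shadows
        v = min(max(v, 0.0), 1.0)      # clamp to the legal range
        v = v ** (1.0 / gm)            # gamma bends the midtones
        out.append(v)
    return tuple(out)

# Warm up a mid-grey pixel slightly: raise red gain, lower blue gain.
print(primary_correct((0.5, 0.5, 0.5), gain=(1.1, 1.0, 0.9), gamma=(1.05, 1.0, 1.0)))

Secondary correction, by contrast, first isolates a region by color or by a window shape and then applies an adjustment of this kind only inside that region.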
da Vinci Renaissance (1990-1993) The da Vinci Renaissance, manufactured between 1990 and 1993, was similar to the Classic but ran on a Motorola 68020 system rather than a 68000. Kilvectors secondary color processing, which would become an industry standard function for secondary color isolation, later became available on the system. Options for 525 and 625 resolutions were available. The system was often used with FDL 60, FDL 90, MK3, or URSA telecines. da Vinci's Leonardo (1990) In 1990, da Vinci released a low-cost color corrector for smaller facilities. To reduce the cost, they used a flat plate control panel and limited its capabilities to scene-by-scene control of a telecine. The Leonardo did not offer da Vinci color processing and only one unit was sold in its short time on the market. da Vinci Renaissance 888 (1991-1998) In 1991, the da Vinci Renaissance 888 was introduced; it was manufactured until 1998. The 888 operated without a GUI and was the first product ever to include digital 888 signal processing throughout. Power Windows, which enabled area isolation using soft edges and shapes; Custom Curves, a color correction tool using curves; and YSFX, which allowed independently adjustable luminance and chrominance ratios, were all included features. The 888 was used with FDL 60, FDL 90, Quadra, MK3, and URSA telecines. Time Logic Controller (1994) In 1994, da Vinci Systems acquired the Time Logic Controller (TLC) product line from Time Logic. TLC was an edit controller for telecines, vision mixers, and video tape recorders. It provided accurate 2:3 editing when transferring a 24 frames per second film into a 30 frames per second video environment. TLC 1 was released by Time Logic in early 1994 and TLC 2 was released by da Vinci later that year. 888 da Vinci User Interface (1995) In 1995, the 888 da Vinci User Interface (DUI) was introduced. It had similar color processing to the 888 but had a new Windows-style user interface, an internal TLC controller, and EDWIN. The telecine interface card controlled the telecine's internal color corrector. The 888 DUI came in two configurations: the first used a SGI Indy workstation and the second used SGI O2. The da Vinci Lite, a scaled-down version of the 888 DUI, was released later that year. It was largely unsuccessful due to lack of marketing. da Vinci 2K (1998) The da Vinci 2K, which began production in 1998, was an enhanced version of previous color correction systems. With an improved color processing quality and performance, it could support high-definition, standard-definition, and 2K formats. It operated with a 4:2:2, 4:4:4, or 8:4:4 input. The system was initially controlled by SGI O2 before being upgraded to Linux. Many 2Ks were interfaced within the Spirit DataCine or other high-end telecines. da Vinci 2K also included features such as PowerTiers, Defocus (using defocus aberration); and Colorist Toolbox. In 2001, PowerGrades, color presets, and the Gallery, an integrated reference store, were available as additions. 2Ks were among the systems used in the development of digital intermediate. In addition to telecine control, 2Ks were often used for tape-to-tape, virtual telecine, and digital disk recording applications. It also allowed for real-time filesharing. Seabiscuit and Star Wars: Episode I – The Phantom Menace were both graded on the 2K. In 2001, the 2K won the Philo T. Farnsworth Award at the Primetime Engineering Emmy Awards. The 2K Plus was introduced in 2002. 
Upgrades included four PowerVectors, Defocus Plus, Colorist Plus, and redesigned primaries, secondaries, and keys. The TLC Assistant allowed for single and dual user modes for editor access. Following JDS Uniphase's 2005 acquisition of Acterna's assets, including da Vinci systems, the 2K Plus continued to evolve and the Emerald, Sapphire, and Ruby upgrade packages were released In 2006, ColorTrace was offered for 2K Plus to track color grades when the edit decision list (EDL) is revised. The 2K Plus was used to grade Scrubs, The War at Home, and 24. Nucleas (2003) Nucleas was launched in 2003, providing server-to-server software interface to existing 2k Plus systems to work from data disks and storage networks. HIPPI and HSDL (High Speed Data Link, which transferred 2K and higher resolution images over HD-SDI links) interfaces and data waveforms were also available. In 2004, the Nucleas Conform was released, which built a data timeline from an EDL, rendered dissolves, and allowed switching between source and record order. The Nucleas DI Suite was used to grade Thunderstruck. Resolve (2004) In 2004, da Vinci released Resolve, a software-based, resolution-independent color grading system that used multiple parallel processing engines within normal PC computer infrastructure for real-time 2K resolution color grading. It was developed for use specifically within digital intermediate. In addition to color correcting, the Resolve had an advanced toolset that included conforming, network file browsing, scaling, and formatting This system was the first to implement InfiniBand topology. The first season of the TV show Sex & Drugs & Rock & Roll and the film The Grand Budapest Hotel were graded on the Resolve. In 2007, da Vinci released the Resolve R-3D which was focused on nonlinear grading in 3D. Some of the early films graded on the R-3RD include Quantum of Solace, U2 3D, and Meet the Robinsons. In 2008, Impresario, a new control panel for Resolve, was launched at NAB 2008 and demonstrated at NAB 2009. Resolve v6.2, released in 2009, allowed syncing two Resolve systems for shared work; when any changes are made on one, they immediately appeared on the other. Splice (2004) Like the da Vinci Nucleas, Splice was a server-to-server system that enabled 2K systems to work nonlinearly. It was promoted for use with SANs and as a life-extender for the 2K and 2K Plus. It is also capable of handling 4K files. The Splice was built on the Resolve's Transformer II and mirrored its basic conform and I/O features. See also Parallax Graphics, sister company to da Vinci Systems that also manufactured digital graphics products References Television and film post-production companies Video equipment manufacturers Video hardware 1984 establishments in Florida 2009 disestablishments in Florida
Da Vinci Systems
[ "Engineering" ]
2,143
[ "Electronic engineering", "Video hardware" ]
10,627,634
https://en.wikipedia.org/wiki/2001%20Humber%20Refinery%20explosion
The 2001 Humber Refinery explosion was a major incident at the then Conoco-owned Humber Refinery at South Killingholme in North Lincolnshire, England. A large explosion occurred on the Saturate Gas Plant area of the site on Easter Monday, 16 April 2001 at approximately 2:20 p.m. There were no fatalities, but two people were injured. Background The Humber refinery occupies a 480 acre (194 ha) site on the south of the Humber. It is about 1.5 km from the town of Immingham and 0.5 km from the village South Kilingholme. The refinery was commissioned in 1969-70 and comprises a number of processing plants including crude distillation, catalytic reforming, a fluidized cracking unit, an alkylation unit, and a saturate gas plant. At the time of the incident the refinery was owned and operated by Conoco Limited. On a normal weekday there were about 800 people on the site, at the time of the incident, on Easter Monday a public holiday, there were only about 185 people on the site. The plant The incident on 16 April 2001 occurred in the Saturate Gas Plant (SGP) which separates hydrocarbons into various gas and liquid streams. The plant comprises a number of tall distillation columns, separators and condensers. The first column in the plant is the de-ethaniser (W-413) which removes methane, ethane, and propane vapour from the liquid product. The vapour at the top of the column is at a pressure of 400 psig and a temperature of 119°F (27.6 barg and 48.3°C). The vapour flows through a 6-inch diameter overheads line (Line P4363) to the condensers (X-452/3).   After the SGP was commissioned salts and hydrates (ice-like crystals) started to accumulate in the condensers. This began to cause fouling problems and blockages. This had been anticipated in the original design and a water injection point had been installed in a line upstream of the de-ethaniser. Water dissolved corrosive agents in the feed fluid. However, this arrangement was not sufficient to prevent fouling in the downstream condensers. In November 1981 a study recommended that an additional water injection point should be installed in the overheads line. This was done by using a 1” vent point in line P4363 as a water injection point. No Injection quill or other dispersion device was fitted. The injection point was 670 mm upstream of a 90° elbow. Causes of failure In operation the overheads line had built up an internal coating of iron sulphide. This so-called pacification layer protected the inside of the carbon steel pipe from corrosion. However, the water wash acted to wash away the protective layer and exposed the steel to attack from corrosive agents in the vapour stream. This is a process of erosion-corrosion and caused the pipe wall to be eroded away at the elbow. At the time of the incident the wall thickness had been reduced from 7-8mm to as little as 0.3mm. The pipe could no longer contain the pressure (27.6 barg) and burst catastrophically releasing the vapour in the line, and in the upstream and downstream plant such as the de-ethaniser column. It was estimated that 80 tonnes of flammable vapour were released from the SGP plant producing a vapour cloud 175m by 80m. The cloud exploded and damaged the SGP causing further release of material which ignited and led to a large fire. This sustained fire damaged plant and weakened pipework leading to further releases. It was estimated that a total of 180 tonnes of flammable liquids and gasses were released. 
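The significance of the wall thinning can be illustrated with the thin-walled hoop stress formula, stress = P x D / (2 x t). The Python sketch below uses the pressure and wall thicknesses quoted above together with an approximate outside diameter for a nominal 6-inch pipe and an assumed yield strength for carbon steel; the strength figure is illustrative and not taken from the investigation report:

# Hoop stress in the 6-inch overheads line at the quoted pressure and wall thicknesses.
pressure = 27.6e5          # Pa (27.6 barg, from the report)
diameter = 0.168           # m, approximate outside diameter of a nominal 6-inch pipe
yield_strength = 250e6     # Pa, assumed yield strength of carbon steel (illustrative)

for thickness_mm in (7.0, 0.3):
    t = thickness_mm / 1e3
    hoop = pressure * diameter / (2 * t)   # thin-wall approximation
    print(f"wall {thickness_mm} mm: hoop stress {hoop/1e6:.0f} MPa "
          f"({hoop/yield_strength:.1f}x assumed yield)")

Under these assumptions the as-designed wall carries only a small fraction of its yield strength, while a 0.3 mm wall would be stressed several times beyond it, consistent with the catastrophic burst described above.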
Effects The incident temporarily shut down the entire refinery and caused oil prices to increase. Damage was caused to the nearby villages of North and South Killingholme as well as the nearby town of Immingham - mainly doors being blown from their hinges and windows being blown in. HSE investigation ConocoPhillips (now Phillips 66) was investigated and subsequently fined £895,000 and ordered to pay £218,854 costs by the Health and Safety Executive for failing to effectively monitor the degradation of the refinery's pipework. The company pleaded guilty to these charges in court and has since implemented a Risk Based Inspection programme. See also Phillips 66 Humber Refinery South Killingholme Flixborough Disaster References External links Humber Refinery at the ConocoPhillips website HSE report into the incident Explosions in 2001 Engineering failures Disasters in Lincolnshire Humber 2001 disasters in the United Kingdom Humber Refinery explosion 2000s in Lincolnshire Humber Refinery explosion Industrial fires and explosions in the United Kingdom Phillips 66 April 2001 events in the United Kingdom
2001 Humber Refinery explosion
[ "Technology", "Engineering" ]
994
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
15,713,242
https://en.wikipedia.org/wiki/Superconducting%20radio%20frequency
Superconducting radio frequency (SRF) science and technology involves the application of electrical superconductors to radio frequency devices. The ultra-low electrical resistivity of a superconducting material allows an RF resonator to obtain an extremely high quality factor, Q. For example, it is commonplace for a 1.3 GHz niobium SRF resonant cavity at 1.8 kelvins to obtain a quality factor of Q=5×1010. Such a very high Q resonator stores energy with very low loss and narrow bandwidth. These properties can be exploited for a variety of applications, including the construction of high-performance particle accelerator structures. Introduction The amount of loss in an SRF resonant cavity is so minute that it is often explained with the following comparison: Galileo Galilei (1564–1642) was one of the first investigators of pendulous motion, a simple form of mechanical resonance. Had Galileo experimented with a 1 Hz resonator with a quality factor Q typical of today's SRF cavities and left it swinging in an entombed lab since the early 17th century, that pendulum would still be swinging today with about half of its original amplitude. The most common application of superconducting RF is in particle accelerators. Accelerators typically use resonant RF cavities formed from or coated with superconducting materials. Electromagnetic fields are excited in the cavity by coupling in an RF source with an antenna. When the RF fed by the antenna is the same as that of a cavity mode, the resonant fields build to high amplitudes. Charged particles passing through apertures in the cavity are then accelerated by the electric fields and deflected by the magnetic fields. The resonant frequency driven in SRF cavities typically ranges from 200 MHz to 3 GHz, depending on the particle species to be accelerated. The most common fabrication technology for such SRF cavities is to form thin walled (1–3 mm) shell components from high purity niobium sheets by stamping. These shell components are then welded together to form cavities. A simplified diagram of the key elements of an SRF cavity setup is shown below. The cavity is immersed in a saturated liquid helium bath. Pumping removes helium vapor boil-off and controls the bath temperature. The helium vessel is often pumped to a pressure below helium's superfluid lambda point to take advantage of the superfluid's thermal properties. Because superfluid has very high thermal conductivity, it makes an excellent coolant. In addition, superfluids boil only at free surfaces, preventing the formation of bubbles on the surface of the cavity, which would cause mechanical perturbations. An antenna is needed in the setup to couple RF power to the cavity fields and, in turn, any passing particle beam. The cold portions of the setup need to be extremely well insulated, which is best accomplished by a vacuum vessel surrounding the helium vessel and all ancillary cold components. The full SRF cavity containment system, including the vacuum vessel and many details not discussed here, is a cryomodule. Entry into superconducting RF technology can incur more complexity, expense, and time than normal-conducting RF cavity strategies. SRF requires chemical facilities for harsh cavity treatments, a low-particulate cleanroom for high-pressure water rinsing and assembly of components, and complex engineering for the cryomodule vessel and cryogenics. 
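The Galileo comparison in the introduction can be made quantitative. For a lightly damped resonator the amplitude decays as exp(-omega*t / (2*Q)), so the time to fall to half amplitude is t = 2*Q*ln(2)/omega. A minimal Python check for a hypothetical 1 Hz pendulum with the Q of 5x10^10 quoted above:

import math

Q = 5e10                 # quality factor quoted above for a 1.3 GHz niobium SRF cavity
f = 1.0                  # Hz, the hypothetical pendulum frequency
omega = 2 * math.pi * f

t_half = 2 * Q * math.log(2) / omega               # seconds to decay to half amplitude
years = t_half / (365.25 * 24 * 3600)
print(f"half-amplitude time: {years:.0f} years")   # roughly 350 years, i.e. since Galileo's era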
A vexing aspect of SRF is the as-yet elusive ability to consistently produce high Q cavities in high volume production, which would be required for a large linear collider. Nevertheless, for many applications the capabilities of SRF cavities provide the only solution for a host of demanding performance requirements. Several extensive treatments of SRF physics and technology are available, many of them free of charge and online. There are the proceedings of CERN accelerator schools, a scientific paper giving a thorough presentation of the many aspects of an SRF cavity to be used in the International Linear Collider, bi-annual International Conferences on RF Superconductivity held at varying global locations in odd numbered years, and tutorials presented at the conferences. SRF cavity application in particle accelerators A large variety of RF cavities are used in particle accelerators. Historically most have been made of copper – a good electrical conductor – and operated near room temperature with exterior water cooling to remove the heat generated by the electrical loss in the cavity. In the past two decades, however, accelerator facilities have increasingly found superconducting cavities to be more suitable (or necessary) for their accelerators than normal-conducting copper versions. The motivation for using superconductors in RF cavities is not to achieve a net power saving, but rather to increase the quality of the particle beam being accelerated. Though superconductors have little AC electrical resistance, the little power they do dissipate is radiated at very low temperatures, typically in a liquid helium bath at 1.6 K to 4.5 K, and maintaining such low temperatures takes a lot of energy. The refrigeration power required to maintain the cryogenic bath at low temperature in the presence of heat from small RF power dissipation is dictated by the Carnot efficiency, and can easily be comparable to the normal-conductor power dissipation of a room-temperature copper cavity. The principle motivations for using superconducting RF cavities, are: High duty cycle or cw operation. SRF cavities allow the excitation of high electromagnetic fields at high duty cycle, or even cw, in such regimes that a copper cavity's electrical loss could melt the copper, even with robust water cooling. Low beam impedance. The low electrical loss in an SRF cavity allows their geometry to have large beampipe apertures while still maintaining a high accelerating field along the beam axis. Normal-conducting cavities need small beam apertures to concentrate the electric field as compensation for power losses in wall currents. However, the small apertures can be deleterious to a particle beam due to their spawning of larger wakefields, which are quantified by the accelerator parameters termed "beam impedance" and "loss parameter". Nearly all RF power goes to the beam. The RF source driving the cavity need only provide the RF power that is absorbed by the particle beam being accelerated, since the RF power dissipated in the SRF cavity walls is negligible. This is in contrast to normal-conducting cavities where the wall power loss can easily equal or exceed the beam power consumption. The RF power budget is important since the RF source technologies, such as a Klystron, Inductive output tube (IOT), or solid state amplifier, have costs that increase dramatically with increasing power. 
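The point made above about refrigeration being dictated by Carnot efficiency can be made concrete: removing 1 W of heat from a 2 K helium bath against a 300 K environment requires at least (300 - 2)/2, roughly 150 W, of ideal work, and real cryogenic plants reach only a fraction of the Carnot limit. In the Python sketch below, the heat load and the 25% plant efficiency are assumed, illustrative figures rather than values from the article:

T_bath = 2.0             # K, cavity helium bath temperature (assumed)
T_ambient = 300.0        # K, room temperature
Q_dissipated = 10.0      # W, assumed RF heat load at the cavity wall
plant_efficiency = 0.25  # fraction of Carnot achieved by a real cryoplant (assumed)

carnot_ratio = (T_ambient - T_bath) / T_bath       # ideal watts of work per watt removed
wall_plug_power = Q_dissipated * carnot_ratio / plant_efficiency
print(f"{carnot_ratio:.0f} W (ideal) per W removed; "
      f"about {wall_plug_power/1e3:.1f} kW wall-plug power for {Q_dissipated:.0f} W at {T_bath} K")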
When future advances in superconducting material science allow higher superconducting critical temperatures Tc and consequently higher SRF bath temperatures, then the reduced thermocline between the cavity and the surrounding environment could yield a significant net power savings by SRF over the normal conducting approach to RF cavities. Other issues will need to be considered with a higher bath temperature, though, such as the fact that superfluidity (which is presently exploited with liquid helium) would not be present with (for example) liquid nitrogen. At present, none of the "high Tc" superconducting materials are suitable for RF applications. Shortcomings of these materials arise due to their underlying physics as well as their bulk mechanical properties not being amenable to fabricating accelerator cavities. However, depositing films of promising materials onto other mechanically amenable cavity materials may provide a viable option for exotic materials serving SRF applications. At present, the de facto choice for SRF material is still pure niobium, which has a critical temperature of 9.3 K and functions as a superconductor nicely in a liquid helium bath of 4.2 K or lower, and has excellent mechanical properties. Physics of SRF cavities The physics of Superconducting RF can be complex and lengthy. A few simple approximations derived from the complex theories, though, can serve to provide some of the important parameters of SRF cavities. By way of background, some of the pertinent parameters of RF cavities are itemized as follows. A resonator's quality factor is defined by , where: ω is the resonant frequency in [rad/s], U is the energy stored in [J], and Pd is the power dissipated in [W] in the cavity to maintain the energy U. The energy stored in the cavity is given by the integral of field energy density over its volume, , where: H is the magnetic field in the cavity and μ0 is the permeability of free space. The power dissipated is given by the integral of resistive wall losses over its surface, , where: Rs is the surface resistance which will be discussed below. The integrals of the electromagnetic field in the above expressions are generally not solved analytically, since the cavity boundaries rarely lie along axes of common coordinate systems. Instead, the calculations are performed by any of a variety of computer programs that solve for the fields for non-simple cavity shapes, and then numerically integrate the above expressions. An RF cavity parameter known as the Geometry Factor ranks the cavity's effectiveness of providing accelerating electric field due to the influence of its shape alone, which excludes specific material wall loss. The Geometry Factor is given by , and then The geometry factor is quoted for cavity designs to allow comparison to other designs independent of wall loss, since wall loss for SRF cavities can vary substantially depending on material preparation, cryogenic bath temperature, electromagnetic field level, and other highly variable parameters. The Geometry Factor is also independent of cavity size, it is constant as a cavity shape is scaled to change its frequency. As an example of the above parameters, a typical 9-cell SRF cavity for the International Linear Collider (a.k.a. a TESLA cavity) would have G=270 Ω and Rs= 10 nΩ, giving Qo=2.7×1010. The critical parameter for SRF cavities in the above equations is the surface resistance Rs, and is where the complex physics comes into play. 
For normal-conducting copper cavities operating near room temperature, Rs is simply determined by the empirically measured bulk electrical conductivity σ by . For copper at 300 K, σ=5.8×107 (Ω·m)−1 and at 1.3 GHz, Rs copper= 9.4 mΩ. For Type II superconductors in RF fields, Rs can be viewed as the sum of the superconducting BCS resistance and temperature-independent "residual resistances", . The BCS resistance derives from BCS theory. One way to view the nature of the BCS RF resistance is that the superconducting Cooper pairs, which have zero resistance for DC current, have finite mass and momentum which has to alternate sinusoidally for the AC currents of RF fields, thus giving rise to a small energy loss. The BCS resistance for niobium can be approximated when the temperature is less than half of niobium's superconducting critical temperature, T<Tc/2, by [Ω], where: f is the frequency in [Hz], T is the temperature in [K], and Tc=9.3 K for niobium, so this approximation is valid for T<4.65 K. Note that for superconductors, the BCS resistance increases quadratically with frequency, ~f 2, whereas for normal conductors the surface resistance increases as the root of frequency, ~√f. For this reason, the majority of superconducting cavity applications favor lower frequencies, <3 GHz, and normal-conducting cavity applications favor higher frequencies, >0.5 GHz, there being some overlap depending on the application. The superconductor's residual resistance arises from several sources, such as random material defects, hydrides that can form on the surface due to hot chemistry and slow cool-down, and others that are yet to be identified. One of the quantifiable residual resistance contributions is due to an external magnetic field pinning magnetic fluxons in a Type II superconductor. The pinned fluxon cores create small normal-conducting regions in the niobium that can be summed to estimate their net resistance. For niobium, the magnetic field contribution to Rs can be approximated by [Ω], where: Hext is any external magnetic field in [Oe], Hc2 is the Type II superconductor magnetic quench field, which is 2400 Oe (190 kA/m) for niobium, and Rn is the normal-conducting resistance of niobium in ohms. The Earth's nominal magnetic flux of 0.5 gauss (50 μT) translates to a magnetic field of 0.5 Oe (40 A/m) and would produce a residual surface resistance in a superconductor that is orders of magnitude greater than the BCS resistance, rendering the superconductor too lossy for practical use. For this reason, superconducting cavities are surrounded by magnetic shielding to reduce the field permeating the cavity to typically <10 mOe (0.8 A/m). Using the above approximations for a niobium a SRF cavity at 1.8 K, 1.3 GHz, and assuming a magnetic field of 10 mOe (0.8 A/m), the surface resistance components would be RBCS = 4.55 nΩ and Rres = RH = 3.42 nΩ, giving a net surface resistance Rs = 7.97 nΩ. If for this cavity G = 270 Ω then the ideal quality factor would be Qo = 3.4×1010. The Qo just described can be further improved by up to a factor of 2 by performing a mild vacuum bake of the cavity. Empirically, the bake seems to reduce the BCS resistance by 50%, but increases the residual resistance by 30%. The plot below shows the ideal Qo values for a range of residual magnetic field for a baked and unbaked cavity. 
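The worked numbers above can be reproduced directly. The copper surface resistance follows from the skin-effect expression Rs = sqrt(pi*f*mu0/sigma), and for niobium a commonly used textbook approximation of the BCS term is RBCS of order 2x10^-4 * (1/T) * (f/1.5 GHz)^2 * exp(-17.67/T) ohms, with a magnetic residual term of roughly 0.3 nOhm per mOe of trapped field times sqrt(f in GHz). These prefactors are standard approximations from the SRF literature rather than quotations from this article, so treat the Python sketch below as illustrative; it nevertheless reproduces the figures quoted in the text:

import math

f = 1.3e9            # Hz
mu0 = 4e-7 * math.pi

# Copper at room temperature (skin effect).
sigma_cu = 5.8e7     # S/m
Rs_cu = math.sqrt(math.pi * f * mu0 / sigma_cu)
print(f"copper Rs at 1.3 GHz: {Rs_cu*1e3:.1f} mohm")       # about 9.4 mohm

# Niobium at 1.8 K: BCS term plus magnetic residual term (common approximations).
T = 1.8              # K
H_ext_mOe = 10.0     # residual magnetic field in milli-oersted
R_bcs = 2e-4 * (1.0 / T) * (f / 1.5e9) ** 2 * math.exp(-17.67 / T)
R_mag = 0.3e-9 * H_ext_mOe * math.sqrt(f / 1e9)
Rs_nb = R_bcs + R_mag
print(f"niobium RBCS: {R_bcs*1e9:.2f} nohm, residual (magnetic): {R_mag*1e9:.2f} nohm")

# Ideal quality factor from the geometry factor quoted for a TESLA-shape cavity.
G = 270.0            # ohm
print(f"Q0 = G/Rs = {G / Rs_nb:.2e}")                      # about 3.4e10, matching the text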
In general, much care and attention to detail must be exercised in the experimental setup of SRF cavities so that there is no Qo degradation due to RF losses in ancillary components, such as stainless steel vacuum flanges that are too close to the cavity's evanescent fields. However, careful SRF cavity preparation and experimental configuration have achieved the ideal Qo not only for low field amplitudes, but up to cavity fields that are typically 75% of the magnetic field quench limit. Few cavities make it to the magnetic field quench limit since residual losses and vanishingly small defects heat up localized spots, which eventually exceed the superconducting critical temperature and lead to a thermal quench. Q vs E When using superconducting RF cavities in particle accelerators, the field level in the cavity should generally be as high as possible to most efficiently accelerate the beam passing through it. The Qo values described by the above calculations tend to degrade as the fields increase, which is plotted for a given cavity as a "Q vs E" curve, where "E" refers to the accelerating electric field of the TM01 mode. Ideally, the cavity Qo would remain constant as the accelerating field is increased all the way up to the point of a magnetic quench field, as indicated by the "ideal" dashed line in the plot below. In reality, though, even a well prepared niobium cavity will have a Q vs E curve that lies beneath the ideal, as shown by the "good" curve in the plot. There are many phenomena that can occur in an SRF cavity to degrade its Q vs E performance, such as impurities in the niobium, hydrogen contamination due to excessive heat during chemistry, and a rough surface finish. After a couple of decades of development, a necessary prescription for successful SRF cavity production is emerging. This includes: Eddy-current scanning of the raw niobium sheet for impurities, Good quality control of electron beam welding parameters, Maintaining low cavity temperatures during acid chemistry to avoid hydrogen contamination, Electropolishing of the cavity interior to achieve a very smooth surface, High pressure rinsing (HPR) of the cavity interior in a clean room with filtered water to remove particulate contamination, Careful assembly of the cavity and other vacuum apparatus in a clean room with clean practices, A vacuum bake of the cavity at 120 °C for 48 hours; this typically improves Qo by a factor of 2. There remains some uncertainty as to the root cause of why some of these steps lead to success, such as the electropolish and vacuum bake. However, if this prescription is not followed, the Q vs E curve often shows an excessive degradation of Qo with increasing field, as shown by the "Q slope" curve in the plot below. Finding the root causes of Q slope phenomena is the subject of ongoing fundamental SRF research. The insight gained could lead to simpler cavity fabrication processes as well as benefit future material development efforts to find higher Tc alternatives to niobium. In 2012, a Q(E) dependence of SRF cavities was discovered for the first time in which a Q-rise phenomenon was observed, in a Ti-doped SRF cavity: the quality factor increases with increasing accelerating field. This was explained by the presence of sharper peaks in the electronic density of states at the gap edges in doped cavities, with these peaks being broadened by the RF current.
A similar phenomenon was later observed with nitrogen doping, which has since become the state-of-the-art cavity preparation for high performance. Wakefields and higher order modes (HOMs) One of the main reasons for using SRF cavities in particle accelerators is that their large apertures result in low beam impedance and higher thresholds of deleterious beam instabilities. As a charged particle beam passes through a cavity, its electromagnetic radiation field is perturbed by the sudden increase of the conducting wall diameter in the transition from the small-diameter beampipe to the large hollow RF cavity. A portion of the particle's radiation field is then "clipped off" upon re-entrance into the beampipe and left behind as wakefields in the cavity. The wakefields are simply superimposed upon the externally driven accelerating fields in the cavity. The spawning of electromagnetic cavity modes as wakefields from the passing beam is analogous to a drumstick striking a drumhead and exciting many resonant mechanical modes. The beam wakefields in an RF cavity excite a subset of the spectrum of the many electromagnetic modes, including the externally driven TM01 mode. There are then a host of beam instabilities that can occur as the repetitive particle beam passes through the RF cavity, each time adding to the wakefield energy in a collection of modes. For a particle bunch with charge q, a length much shorter than the wavelength of a given cavity mode, and traversing the cavity at time t=0, the amplitude of the wakefield voltage left behind in the cavity in a given mode is given by Vwake = (q ωo/2)(R/Qo) e^(iωo t) e^(−ωo t/(2QL)), where: R is the shunt impedance of the cavity mode defined by R = E²/Pd, E is the electric field of the RF mode, Pd is the power dissipated in the cavity to produce the electric field E, QL is the "loaded Q" of the cavity, which takes into account energy leakage out of the coupling antenna, ωo is the angular frequency of the mode, the imaginary exponential is the mode's sinusoidal time variation, the real exponential term quantifies the decay of the wakefield with time, and k = (ωo/2)(R/Qo) is termed the loss parameter of the RF mode. The shunt impedance R can be calculated from the solution of the electromagnetic fields of a mode, typically by a computer program that solves for the fields. In the equation for Vwake, the ratio R/Qo serves as a good comparative measure of wakefield amplitude for various cavity shapes, since the other terms are typically dictated by the application and are fixed. Mathematically, R/Qo = E²/(ωo U), where relations defined above have been used. R/Qo is then a parameter that factors out cavity dissipation and is viewed as a measure of the cavity geometry's effectiveness of producing accelerating voltage per stored energy in its volume. The wakefield being proportional to R/Qo can be seen intuitively since a cavity with small beam apertures concentrates the electric field on axis and has high R/Qo, but also clips off more of the particle bunch's radiation field as deleterious wakefields. The calculation of electromagnetic field buildup in a cavity due to wakefields can be complex and depends strongly on the specific accelerator mode of operation.
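To put rough numbers on the single-bunch wake expression above, the following sketch evaluates the wake amplitude and its decay time. The bunch charge, R/Qo, and loaded Q used here are assumed illustrative values, not parameters taken from the text.

```python
import math

# Illustrative evaluation of Vwake = (q*w0/2)(R/Qo) exp(-w0*t/(2*QL)).
q = 1e-9                  # bunch charge [C], assumed
R_over_Q = 100.0          # R/Qo of the mode [ohm], assumed
QL = 1e6                  # loaded quality factor, assumed
w0 = 2*math.pi*500e6      # mode angular frequency [rad/s]

V0 = 0.5*w0*q*R_over_Q    # wake amplitude left behind by one bunch [V]
tau = 2*QL/w0             # 1/e decay time of the wake [s]
print(f"V_wake = {V0:.0f} V, tau = {tau*1e6:.0f} us")  # ~157 V, ~637 us
```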
For the straightforward case of a storage ring with repetitive particle bunches spaced by time interval Tb and a bunch length much shorter than the wavelength of a given mode, the long term steady state wakefield voltage presented to the beam by the mode is given by Vss wake = Vwake / (1 − e^(−Tb/τ) e^(iδ)), where: e^(−Tb/τ), with τ = 2QL/ωo, is the decay of the wakefield between bunches, and δ is the phase shift of the wakefield mode between bunch passages through the cavity. As an example calculation, let the phase shift δ=0, which would be close to the case for the TM01 mode by design and unfortunately likely to occur for a few HOM's. Having δ=0 (or an integer multiple of an RF mode's period, δ=n·2π) gives the worst-case wakefield build-up, where successive bunches are maximally decelerated by previous bunches' wakefields and give up even more energy than with only their "self wake". Then, taking ωo = 2π×500 MHz, Tb = 1 μs, and QL = 10^6, the buildup of wakefields would be Vss wake = 637×Vwake. A pitfall for any accelerator cavity would be the presence of what is termed a "trapped mode". This is an HOM that does not leak out of the cavity and consequently has a QL that can be orders of magnitude larger than used in this example. In this case, the buildup of wakefields of the trapped mode would likely cause a beam instability. The beam instability implications due to the Vss wake wakefields are thus addressed differently for the fundamental accelerating mode TM010 and all other RF modes, as described next. Fundamental accelerating mode TM010 The complex calculations treating wakefield-related beam stability for the TM010 mode in accelerators show that there are specific regions of phase between the beam bunches and the driven RF mode that allow stable operation at the highest possible beam currents. At some point of increasing beam current, though, just about any accelerator configuration will become unstable. As pointed out above, the beam wakefield amplitude is proportional to the cavity parameter R/Qo, so this is typically used as a comparative measure of the likelihood of TM010-related beam instabilities. A comparison of R/Qo and R for a 500 MHz superconducting cavity and a 500 MHz normal-conducting cavity is shown below. The accelerating voltage provided by both cavities is comparable for a given net power consumption when including refrigeration power for SRF. The R/Qo for the SRF cavity is 15 times less than the normal-conducting version, and thus less beam-instability susceptible. This is one of the main reasons such SRF cavities are chosen for use in high-current storage rings. Higher order modes (HOMs) In addition to the fundamental accelerating TM010 mode of an RF cavity, numerous higher frequency modes and a few lower-frequency dipole modes are excited by charged particle beam wakefields, all generally denoted higher order modes (HOMs). These modes serve no useful purpose for accelerator particle beam dynamics, only giving rise to beam instabilities, and are best heavily damped to have as low a QL as possible. The damping is accomplished by preferentially allowing dipole and all HOMs to leak out of the SRF cavity, and then coupling them to resistive RF loads. The leaking out of undesired RF modes occurs along the beampipe, and results from a careful design of the cavity aperture shapes. The aperture shapes are tailored to keep the TM01 mode "trapped" with high Qo inside of the cavity and allow HOMs to propagate away.
The propagation of HOMs is sometimes facilitated by having a larger diameter beampipe on one side of the cavity, beyond the smaller diameter cavity iris. The larger beampipe diameter allows the HOMs to easily propagate away from the cavity to an HOM antenna or beamline absorber. The resistive load for HOMs can be implemented by having loop antennas located at apertures on the side of the beampipe, with coaxial lines routing the RF to outside of the cryostat to standard RF loads. Another approach is to place the HOM loads directly on the beampipe as hollow cylinders with RF lossy material attached to the interior surface. This "beamline load" approach can be more technically challenging, since the load must absorb high RF power while preserving a high-vacuum beamline environment in close proximity to a contamination-sensitive SRF cavity. Further, such loads must sometimes operate at cryogenic temperatures to avoid large thermal gradients along the beampipe from the cold SRF cavity. The benefit of the beamline HOM load configuration, however, is a greater absorptive bandwidth and HOM attenuation as compared to antenna coupling. This benefit can be the difference between a stable and an unstable particle beam for high current accelerators. Cryogenics A significant part of SRF technology is cryogenic engineering. The SRF cavities tend to be thin-walled structures immersed in a bath of liquid helium having temperature 1.6 K to 4.5 K. Careful engineering is then required to insulate the helium bath from the room-temperature external environment. This is accomplished by: A vacuum chamber surrounding the cold components to eliminate convective heat transfer by gases. Multi-layer insulation wrapped around cold components. This insulation is composed of dozens of alternating layers of aluminized mylar and thin fiberglass sheet, which reflects infrared radiation that shines through the vacuum insulation from the 300 K exterior walls. Low thermal conductivity mechanical connections between the cold mass and the room temperature vacuum vessel. These connections are required, for example, to support the mass of the helium vessel inside the vacuum vessel and to connect the apertures in the SRF cavity to the accelerator beamline. Both types of connections transition from internal cryogenic temperatures to room temperature at the vacuum vessel boundary. The thermal conductivity of these parts is minimized by having small cross sectional area and being composed of low thermal conductivity material, such as stainless steel for the vacuum beampipe and fiber reinforced epoxies (G10) for mechanical support. The vacuum beampipe also requires good electrical conductivity on its interior surface to propagate the image currents of the beam, which is accomplished by about 100 μm of copper plating on the interior surface. The major cryogenic engineering challenge is the refrigeration plant for the liquid helium. The small power that is dissipated in an SRF cavity and the heat leak to the vacuum vessel are both heat loads at very low temperature. The refrigerator must replenish this loss with an inherently poor efficiency, given by the product of the Carnot efficiency ηC and a "practical" efficiency ηp. The Carnot efficiency derives from the second law of thermodynamics and can be quite low.
It is given by ηC = Tcold/(Twarm − Tcold) for Tcold < Twarm/2 (and ηC = 1 otherwise), where Tcold is the temperature of the cold load, which is the helium vessel in this case, and Twarm is the temperature of the refrigeration heat sink, usually room temperature. In most cases Twarm = 300 K, so for Tcold ≥ 150 K the Carnot efficiency is unity. The practical efficiency is a catch-all term that accounts for the many mechanical non-idealities that come into play in a refrigeration system aside from the fundamental physics of the Carnot efficiency. For a large refrigeration installation there is some economy of scale, and it is possible to achieve ηp in the range of 0.2–0.3. The wall-plug power consumed by the refrigerator is then Pwarm = Pcold/(ηC ηp), where Pcold is the power dissipated at temperature Tcold. As an example, if the refrigerator delivers 1.8 K helium to the cryomodule where the cavity and heat leak dissipate Pcold = 10 W, then the refrigerator having Twarm = 300 K and ηp = 0.3 would have ηC = 0.006 and a wall-plug power of Pwarm = 5.5 kW. Of course, most accelerator facilities have numerous SRF cavities, so the refrigeration plants can get to be very large installations. The temperature of operation of an SRF cavity is typically selected as a minimization of wall-plug power for the entire SRF system. The plot to the right then shows the pressure to which the helium vessel must be pumped to obtain the desired liquid helium temperature. Atmospheric pressure is 760 Torr (101.325 kPa), corresponding to 4.2 K helium. The superfluid λ point occurs at about 38 Torr (5.1 kPa), corresponding to 2.18 K helium. Most SRF systems either operate at atmospheric pressure, 4.2 K, or below the λ point at a system efficiency optimum usually around 1.8 K, corresponding to about 12 Torr (1.6 kPa). See also Cavity quantum electrodynamics Circuit quantum electrodynamics References Accelerator physics Superconductivity
Superconducting radio frequency
[ "Physics", "Materials_science", "Engineering" ]
6,202
[ "Applied and interdisciplinary physics", "Physical quantities", "Superconductivity", "Materials science", "Experimental physics", "Condensed matter physics", "Accelerator physics", "Electrical resistance and conductance" ]
15,714,959
https://en.wikipedia.org/wiki/Expanded%20metal
Expanded metal is a type of sheet metal which has been cut and stretched to form a regular pattern (often diamond-shaped) of mesh-like material. It is commonly used for fences and grates, and as metallic lath to support plaster or stucco. Description Expanded metal is stronger than an equivalent weight of wire mesh such as chicken wire, because the mesh is formed from a single slit and stretched sheet, allowing the metal to stay in one piece. A related benefit is that the metal is never completely cut apart and then reconnected, so the material retains its strength. History The inventor and patentee of expanded metal is John French Golding, whose first British patent was issued in 1884. He forged partnerships with Hartlepool industrialists Mathew Gray, Christopher Furness and Robert Irving Jr., who, together with W.B Close, brought the manufacture of expanded metal to Hartlepool. The Expanded Metal Company Limited of Hartlepool, United Kingdom, remains a recognised market leader globally in the manufacture of expanded metal to this day and is one of the largest employers locally. Design Some commonly used shapes are circles, squares, and diamonds; diamonds are the most popular shape because of how well it absorbs energy and resists mechanical deformation after installation. Other design considerations are the size and angles of the shapes, which also affect how well the metal absorbs energy and where the energy is spread throughout the expanded metal. For the diamond shape, there are at least four different angles to take into account: the two acute and the two obtuse angles. If the angles are too large, the shape loses strength because too much open space is left inside it; if the angles are too small, strength is also lost because the strands crowd together, leaving too little structure to carry load. The angle at which the shapes are laid also plays a significant role. If the angle is zero, the ends of the shape point to the start and the end of the sheet, making straight lines across the sheet of diamonds. This option provides the most strength when the sheet is compressed on its side; such a sheet can even take more pressure than a solid piece of metal, because it compresses and spreads the load throughout the sheet. The other four commonly used angles are 60°, 90°, 90° plus 60°, and 60° plus 90°. A 60° angle puts the diamond diagonal at the start and end of the sheet. A 90° angle makes the diamond vertical to the start and the end of a sheet. The 90° plus 60° and 60° plus 90° angles combine both a 60° angle and a 90° angle; the order of the angles is respective to the order in naming. The expanded metal can be manufactured and supplied as standard mesh, or it can be flattened by further levelling processes to have a smooth surface, which allows usage of the mesh in more applications, such as prisons, as it no longer has sharp edges that could cause injury. Applications Expanded metal is frequently used to make fences, walkways, and grates, as the material is very durable and strong, unlike lighter and less expensive wire mesh. The many small openings in the material allow flow through of air, water, and light, while still providing a mechanical barrier to larger objects. Another advantage to using expanded metal as opposed to plain sheet metal is that the exposed edges of the expanded metal provide more traction, which has led to its use in catwalks or drainage covers.
Large quantities of expanded metal are used by the construction industry as metal lath to support materials such as plaster, stucco, or adobe in walls and other structures. Expanded metal is also used by artists, especially sculptors, who use the material to form complex 3-dimensional surfaces and compound curves which can then be covered with plaster, clay, or other materials. For example, Niki de Saint Phalle made extensive use of expanded metal to support the curved surfaces of large-scale architectural sculptures in her Tarot Garden sculpture garden, in Tuscany, Italy. A similar material made of stiff sheets of paper or cardboard is used as a low-cost cushioning and packaging material. In contemporary architecture, expanded metal has been used as an exposed facade or screen material which can be formed into simple or complex decorative shapes. Photographic images may be printed on the surface, producing textures or large graphic images, which still allow light to filter through the exterior surface of a building. Safety Freshly-cut expanded metal has a large number of exposed sharp edges, requiring caution and protective clothing, such as leather gloves and aprons to prevent skin abrasions and cuts. Notes References Building materials
Expanded metal
[ "Physics", "Engineering" ]
940
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
19,668,009
https://en.wikipedia.org/wiki/Polymer%20field%20theory
A polymer field theory is a statistical field theory describing the statistical behavior of a neutral or charged polymer system. It can be derived by transforming the partition function from its standard many-dimensional integral representation over the particle degrees of freedom into a functional integral representation over an auxiliary field function, using either the Hubbard–Stratonovich transformation or the delta-functional transformation. Computer simulations based on polymer field theories have been shown to deliver useful results, for example to calculate the structures and properties of polymer solutions (Baeurle 2007, Schmid 1998), polymer melts (Schmid 1998, Matsen 2002, Fredrickson 2002) and thermoplastics (Baeurle 2006). Canonical ensemble Particle representation of the canonical partition function The standard continuum model of flexible polymers, introduced by Edwards (Edwards 1965), treats a solution composed of linear monodisperse homopolymers as a system of coarse-grained polymers, in which the statistical mechanics of the chains is described by the continuous Gaussian thread model (Baeurle 2007) and the solvent is taken into account implicitly. The Gaussian thread model can be viewed as the continuum limit of the discrete Gaussian chain model, in which the polymers are described as continuous, linearly elastic filaments. The canonical partition function of such a system of n chains, kept at an inverse temperature β and confined in a volume V, can be expressed as

Z(n, V, β) = (1/n!) ∏_{j=1}^{n} ∫ Dr_j exp{−β[Φ̄[r] + Φ₀[r]]},   (1)

where Φ̄[r] is the potential of mean force given by

Φ̄[r] = (1/2) Σ_{j,k=1}^{n} ∫₀^N ds ∫₀^N ds′ Φ(|r_j(s) − r_k(s′)|),   (2)

representing the solvent-mediated non-bonded interactions among the segments, while Φ₀[r] represents the harmonic binding energy of the chains. The latter energy contribution can be formulated as

βΦ₀[r] = (3/(2b²)) Σ_{j=1}^{n} ∫₀^N ds |dr_j(s)/ds|²,

where b is the statistical segment length and N the polymerization index. Field-theoretic transformation To derive the basic field-theoretic representation of the canonical partition function, one introduces in the following the segment density operator of the polymer system

ρ̂(r) = Σ_{j=1}^{n} ∫₀^N ds δ(r − r_j(s)).

Using this definition, one can rewrite Eq. (2) as

Φ̄[ρ̂] = (1/2) ∫dr ∫dr′ ρ̂(r) Φ(|r − r′|) ρ̂(r′).   (3)

Next, one converts the model into a field theory by making use of the Hubbard–Stratonovich transformation or delta-functional transformation

F[ρ̂] = ∫ Dρ F[ρ] δ[ρ − ρ̂],   (4)

where F[ρ] is a functional and δ[ρ − ρ̂] is the delta functional given by

δ[ρ − ρ̂] = ∫ Dw exp[i ∫dr w(r)(ρ(r) − ρ̂(r))],   (5)

with w(r) representing the auxiliary field function. Here we note that, expanding the field function in a Fourier series, w(r) = Σ_G w_G exp(iG·r), implies that periodic boundary conditions are applied in all directions and that the G-vectors designate the reciprocal lattice vectors of the supercell. Basic field-theoretic representation of canonical partition function Using the Eqs. (3), (4) and (5), we can recast the canonical partition function in Eq. (1) in field-theoretic representation, which leads to

Z(n, V, β) = Z₀ ∫ Dρ ∫ Dw exp[−βΦ̄[ρ] + i ∫dr w(r)ρ(r)] (Q[iw])^n,   (6)

where Z₀ = Q₀^n/n! can be interpreted as the partition function for an ideal gas of non-interacting polymers and

Q₀ = ∫ Dr exp[−βE₀(r)]   (7)

is the path integral of a free polymer in a zero field with elastic energy

βE₀(r) = (3/(2b²)) ∫₀^N ds |dr(s)/ds|².

In the latter equation the unperturbed radius of gyration of a chain, R_g0 = b(N/6)^(1/2), sets the size of the free coil. Moreover, in Eq. (6) the partition function of a single polymer, subjected to the field iw(r), is given by

Q[iw] = (1/Q₀) ∫ Dr exp[−βE₀(r) − i ∫₀^N ds w(r(s))].   (8)

Grand canonical ensemble Basic field-theoretic representation of grand canonical partition function To derive the grand canonical partition function, we use its standard thermodynamic relation to the canonical partition function, given by

Ξ(μ, V, β) = Σ_{n=0}^{∞} exp(βμn) Z(n, V, β),

where μ is the chemical potential and Z(n, V, β) is given by Eq. (6). Performing the sum, this provides the field-theoretic representation of the grand canonical partition function,

Ξ(μ, V, β) = ∫ Dρ ∫ Dw exp[−S[ρ, w]],

where

S[ρ, w] = βΦ̄[ρ] − i ∫dr w(r)ρ(r) − λ Q[iw]

is the grand canonical action with Q[iw] defined by Eq.
(8) and the constant prefactor absorbed into the functional integration measures. Moreover, the parameter λ related to the chemical potential is given by λ = exp(βμ) Q₀, where Q₀ is provided by Eq. (7). Mean field approximation A standard approximation strategy for polymer field theories is the mean field (MF) approximation, which consists in replacing the many-body interaction term in the action by a term where all bodies of the system interact with an average effective field. This approach reduces any multi-body problem into an effective one-body problem by assuming that the partition function integral of the model is dominated by a single field configuration. A major benefit of solving problems with the MF approximation, or its numerical implementation commonly referred to as the self-consistent field theory (SCFT), is that it often provides some useful insights into the properties and behavior of complex many-body systems at relatively low computational cost. Successful applications of this approximation strategy can be found for various systems of polymers and complex fluids, like e.g. strongly segregated block copolymers of high molecular weight, highly concentrated neutral polymer solutions or highly concentrated block polyelectrolyte (PE) solutions (Schmid 1998, Matsen 2002, Fredrickson 2002). There are, however, a multitude of cases for which SCFT provides inaccurate or even qualitatively incorrect results (Baeurle 2006a). These comprise neutral polymer or polyelectrolyte solutions in dilute and semidilute concentration regimes, block copolymers near their order-disorder transition, polymer blends near their phase transitions, etc. In such situations the partition function integral defining the field-theoretic model is not entirely dominated by a single MF configuration and field configurations far from it can make important contributions, which require the use of more sophisticated calculation techniques beyond the MF level of approximation. Higher-order corrections One possibility to face the problem is to calculate higher-order corrections to the MF approximation. Tsonchev et al. developed such a strategy including leading (one-loop) order fluctuation corrections, which made it possible to gain new insights into the physics of confined PE solutions (Tsonchev 1999). However, in situations where the MF approximation is bad, many computationally demanding higher-order corrections to the integral are necessary to get the desired accuracy. Renormalization techniques An alternative theoretical tool to cope with the strong-fluctuation problems occurring in field theories was provided in the late 1940s by the concept of renormalization, which was originally devised to calculate functional integrals arising in quantum field theories (QFT's). In QFT's a standard approximation strategy is to expand the functional integrals in a power series in the coupling constant using perturbation theory. Unfortunately, generally most of the expansion terms turn out to be infinite, rendering such calculations impracticable (Shirkov 2001). A way to remove the infinities from QFT's is to make use of the concept of renormalization (Baeurle 2007). It mainly consists in replacing the bare values of the coupling parameters, like e.g. electric charges or masses, by renormalized coupling parameters and requiring that the physical quantities do not change under this transformation, thereby leading to finite terms in the perturbation expansion.
A simple physical picture of the procedure of renormalization can be drawn from the example of a classical electrical charge Q inserted into a polarizable medium, such as an electrolyte solution. At a distance r from the charge, due to polarization of the medium, its Coulomb field will effectively depend on a function Q(r), i.e. the effective (renormalized) charge, instead of the bare electrical charge Q. At the beginning of the 1970s, K.G. Wilson further pioneered the power of renormalization concepts by developing the formalism of renormalization group (RG) theory, to investigate critical phenomena of statistical systems (Wilson 1971). Renormalization group theory The RG theory makes use of a series of RG transformations, each of which consists of a coarse-graining step followed by a change of scale (Wilson 1974). In case of statistical-mechanical problems the steps are implemented by successively eliminating and rescaling the degrees of freedom in the partition sum or integral that defines the model under consideration. De Gennes used this strategy to establish an analogy between the behavior of the zero-component classical vector model of ferromagnetism near the phase transition and a self-avoiding random walk of a polymer chain of infinite length on a lattice, to calculate the polymer excluded volume exponents (de Gennes 1972). Adapting this concept to field-theoretic functional integrals implies studying in a systematic way how a field theory model changes while eliminating and rescaling a certain number of degrees of freedom from the partition function integral (Wilson 1974). Hartree renormalization An alternative approach is known as the Hartree approximation or self-consistent one-loop approximation (Amit 1984). It takes advantage of Gaussian fluctuation corrections to the zeroth-order MF contribution, to renormalize the model parameters and extract in a self-consistent way the dominant length scale of the concentration fluctuations in critical concentration regimes. Tadpole renormalization In a more recent work Efimov and Nogovitsin showed that an alternative renormalization technique originating from QFT, based on the concept of tadpole renormalization, can be a very effective approach for computing functional integrals arising in statistical mechanics of classical many-particle systems (Efimov 1996). They demonstrated that the main contributions to classical partition function integrals are provided by low-order tadpole-type Feynman diagrams, which account for divergent contributions due to particle self-interaction. The renormalization procedure performed in this approach acts on the self-interaction contribution of a charge (like e.g. an electron or an ion), resulting from the static polarization induced in the vacuum due to the presence of that charge (Baeurle 2007). As evidenced by Efimov and Ganbold in an earlier work (Efimov 1991), the procedure of tadpole renormalization can be employed very effectively to remove the divergences from the action of the basic field-theoretic representation of the partition function and leads to an alternative functional integral representation, called the Gaussian equivalent representation (GER). They showed that the procedure provides functional integrals with significantly ameliorated convergence properties for analytical perturbation calculations. In subsequent works Baeurle et al.
developed effective low-cost approximation methods based on the tadpole renormalization procedure, which have been shown to deliver useful results for prototypical polymer and PE solutions (Baeurle 2006a, Baeurle 2006b, Baeurle 2007a). Numerical simulation Another possibility is to use Monte Carlo (MC) algorithms and to sample the full partition function integral in field-theoretic formulation. The resulting procedure is then called a polymer field-theoretic simulation. In a recent work, however, Baeurle demonstrated that MC sampling in conjunction with the basic field-theoretic representation is impracticable due to the so-called numerical sign problem (Baeurle 2002). The difficulty is related to the complex and oscillatory nature of the resulting distribution function, which causes poor statistical convergence of the ensemble averages of the desired thermodynamic and structural quantities. In such cases special analytical and numerical techniques are necessary to accelerate the statistical convergence (Baeurle 2003, Baeurle 2003a, Baeurle 2004). Mean field representation To make the methodology amenable for computation, Baeurle proposed to shift the contour of integration of the partition function integral through the homogeneous MF solution using Cauchy's integral theorem, providing its so-called mean-field representation. This strategy was previously successfully employed by Baer et al. in field-theoretic electronic structure calculations (Baer 1998). Baeurle could demonstrate that this technique provides a significant acceleration of the statistical convergence of the ensemble averages in the MC sampling procedure (Baeurle 2002, Baeurle 2002a). Gaussian equivalent representation In subsequent works Baeurle et al. (Baeurle 2002, Baeurle 2002a, Baeurle 2003, Baeurle 2003a, Baeurle 2004) applied the concept of tadpole renormalization, leading to the Gaussian equivalent representation of the partition function integral, in conjunction with advanced MC techniques in the grand canonical ensemble. They could convincingly demonstrate that this strategy provides a further boost in the statistical convergence of the desired ensemble averages (Baeurle 2002). References External links University of Regensburg Research Group on Theory and Computation of Advanced Materials Statistical field theories
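As a concrete, heavily simplified illustration of the mean-field (SCFT) calculations discussed in this article, the sketch below iterates a one-dimensional self-consistent field computation for a homopolymer with a contact repulsion. The interaction choice, parameter values, pseudo-spectral integrator, and mixing scheme are all assumptions made for illustration and are not taken from the article or the cited works.

```python
import numpy as np

# Minimal 1D SCFT iteration for a homopolymer with a contact repulsion.
# Lengths are in units of the unperturbed radius of gyration (Rg = 1).
M, Lbox = 64, 8.0                  # grid points, periodic box size
u, phi0 = 0.5, 1.0                 # repulsion strength, mean segment density
Ns = 100                           # contour steps for s in [0, 1]
ds = 1.0/Ns
k2 = (2*np.pi*np.fft.fftfreq(M, d=Lbox/M))**2

def propagator(W):
    """Integrate dq/ds = q'' - W*q with q(x,0) = 1 by Strang splitting."""
    q = np.ones((Ns + 1, M))
    expK = np.exp(-k2*ds)          # diffusion step, applied in Fourier space
    expW = np.exp(-0.5*W*ds)       # half potential step, real space
    for s in range(Ns):
        tmp = expW*q[s]
        tmp = np.fft.ifft(expK*np.fft.fft(tmp)).real
        q[s + 1] = expW*tmp
    return q

rng = np.random.default_rng(0)
W = 0.1*rng.standard_normal(M)     # random initial field
for it in range(500):              # Picard iteration with simple mixing
    q = propagator(W)
    Q = q[-1].mean()               # single-chain partition function
    # Homopolymer density from forward and backward propagators (equal here):
    phi = phi0*np.trapz(q*q[::-1], dx=ds, axis=0)/Q
    W_new = u*phi                  # mean-field closure: W = u*phi
    if np.abs(W_new - W).max() < 1e-10:
        break
    W = 0.7*W + 0.3*W_new
print(f"iterations: {it}, residual density nonuniformity: {phi.std():.2e}")
```

For this toy model the self-consistent solution is the uniform melt, so the iteration should drive the density profile flat; a full SCFT code would instead derive the closure from the saddle point of an action like the one given above and work in higher dimensions with more robust mixing.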
Polymer field theory
[ "Physics" ]
2,498
[ "Physical phenomena", "Statistical mechanics", "Critical phenomena", "Statistical field theories" ]
19,669,402
https://en.wikipedia.org/wiki/Mark%20Benson%20%28engineer%29
Mark Benson (born Mark Müller; June 19, 1888 – May 1965) was a Bohemian German engineer, best known as the inventor of a supercritical boiler. Benson was born in the Sudetenland, and his original name was Müller (he changed his name during World War I to hide his German origin). He emigrated to the United States, then returned to Europe to work for the English Electric Company in Rugby. For English Electric he designed a relatively small steam generator (3 tons/hr), but with, for the time, very high (supercritical) pressure and without any drum. In 1922 Benson was granted a patent for this type of boiler. In 1924, Siemens acquired the rights to use Benson's patent, and in 1926–27 built the first large Benson boiler in Berlin-Gartenfeld. Siemens improved the technology and developed it into an internationally acknowledged standard for large steam generators. Since 1933, Siemens has not manufactured its own Benson boilers, instead licensing the technology to others. After his patent, Mark Benson did not make any further public appearances, but Siemens continued to use Benson as a registered trademark for this successful type of boiler, so the name is renowned worldwide in boiler engineering, although relatively little is known about the inventor behind it. References Date of death missing British mechanical engineers German Bohemian people People associated with electricity Boilers Emigrants from Austria-Hungary to the United States 1888 births
Mark Benson (engineer)
[ "Chemistry" ]
293
[ "Boilers", "Pressure vessels" ]
19,675,057
https://en.wikipedia.org/wiki/Upper%20tropospheric%20cyclonic%20vortex
An upper tropospheric cyclonic vortex is a vortex, or a circulation with a definable center, that usually moves slowly from east-northeast to west-southwest and is prevalent during the Northern Hemisphere's warm season. Its circulations generally do not extend down to low altitudes, as it is an example of a cold-core low. A weak inverted wave in the easterlies is generally found beneath it, and it may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus cloudiness and the appearance of a circulation at ground level. In rare cases, a warm-core cyclone can develop in its associated convective activity, resulting in a tropical cyclone and a weakening and southwest movement of the nearby upper tropospheric cyclonic vortex. Symbiotic relationships can exist between tropical cyclones and the upper level lows in their wake, with the two systems occasionally leading to their mutual strengthening. When they move over land during the warm season, an increase in monsoon rains occurs. History of research Using charts of mean 200-hectopascal circulation for July through August to locate the circumpolar troughs and ridges, trough lines extend over the eastern and central North Pacific and over the North Atlantic. Case studies of upper tropospheric cyclones in the Atlantic and Pacific have been performed by using airplane reports (winds, temperatures and heights), radiosonde data, geostationary satellite cloud imagery, and cloud-tracked winds throughout the troposphere. It was determined they were the origin of upper tropospheric cold-core lows, or cut-off lows. Characteristics The tropical upper tropospheric cyclone has a cold core, meaning it is stronger aloft than at the Earth's surface, or stronger in areas of the troposphere with lower pressures. This is explained by the thermal wind relationship. It also means that a pool of cold air aloft is associated with the feature. If both an upper tropospheric cold-core low and a lower tropospheric easterly wave trough are in phase, with the easterly wave near or to the east of the upper level cyclone, thunderstorm development (also known as moist convection) is enhanced. If they are out of phase, with the tropical wave west of the upper level circulation, convection is suppressed due to convergence aloft leading to downward motion over the tropical wave or surface trough in the easterlies. Upper level cyclones also interact with troughs in the subtropical westerlies, such as cold fronts and stationary fronts. When the subtropical disturbances in the Northern Hemisphere actively move southward, or dig, the area between the upper tropospheric anticyclone to its west and the cold-core low to its east generally has strong northeasterly winds in addition to rapid development of active thunderstorm activity. Cloud bands associated with upper tropospheric cyclonic vortices are aligned with the vertical wind shear. Animated satellite cloud imagery is a better tool for their early detection and tracking. The low-level convergence caused by the cut-off low can trigger squall lines and rough seas, and the low-level spiral cloud bands caused by the upper level circulation are parallel to the low-level wind direction. This has also been witnessed with upper level lows which occur at higher latitudes, for example in areas where small-scale snow bands develop within the cold sector of extratropical cyclones.
Climatology In the Northern Hemisphere, the tropical upper tropospheric trough (TUTT) normally occurs between May and November, with peak activity between July and September. James Sadler suggested a revised model for the TUTT during the early part of the typhoon season in the western Pacific. Both Sadler and Lance Bosart have shown that the tropical upper tropospheric trough cyclonic cells are caused by the mid-latitude disturbance riding around the western side of the tropical upper tropospheric trough when the subtropical ridge to its south is quite weak. In the north Atlantic, the TUTT is characterized by the semi-permanent circulation pattern that forms in the North Atlantic between August and November. Toby Carlson evaluated data over the eastern Caribbean sea for October 1965 and pinpointed the presence of an upper tropospheric cold-core cyclone. These cold-core cyclones generally form close to the Azores and move south and westward towards a latitude of 20°N. These circulations extend over an area of about 20° of latitude and 40° of longitude. The lowest level of closed circulation underneath the upper level cold-core cyclone is often between the 700 and the 500-hectopascal levels. Their life cycles span 5 to 14 days. The upper tropospheric cyclonic centers in the North Atlantic differ from those in the North Pacific. Most of them are detectable in the low tropospheric temperature field as cold troughs in the easterlies. They tend to tilt vertically toward the northeast. Cumulonimbus clouds and rainfall occur in the southeast quadrant, approximately 5° of latitude from the upper cyclone center. Large variations of cloud cover can exist in different systems. The summer tropical upper tropospheric trough is a dominant feature over the trade wind regions of the North Atlantic Ocean, Gulf of Mexico, and Caribbean Sea, and the lower tropospheric responses to the tropical upper tropospheric trough in the North Atlantic differ from those in the North Pacific. Interaction with tropical cyclones The summer TUTT in the Southern Hemisphere lies over the trade wind region of the east central Pacific and can cause tropical cyclogenesis offshore Central America. University of Hawaii Professor James C. Sadler has documented tropical cyclones over the eastern North Pacific that were revealed by weather satellite observations, and suggested that the upper-tropospheric circulation is a factor in the development, as well as the life history, of the tropical cyclones. Ralph Huschke and Gary Atkinson proposed that a moist southwest wind that results from the southeast trades of the eastern South Pacific deflecting towards the Pacific coasts of Central America between June and November is known as the "temporale". Temporales are most frequent in July and August, when they can reach gale force and cause rough seas and swell. The area of heavy rain is generally located in the northeast quadrant approximately 5° of latitude from the eye. In the western Pacific, tropical upper tropospheric lows are the main cause of the few tropical cyclones which develop north of the 20th parallel north and east of the 160th meridian east during La Niña events. Trailing upper cyclones and upper troughs can cause additional outflow channels and aid in the intensification process of tropical cyclones.
Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake due to the outflow jet stream emanating from the developing tropical disturbance/cyclone. In the western North Pacific, there are strong reciprocal relationships between the areas of formative tropical cyclones and those of the lower tropospheric monsoon troughs and the tropical upper tropospheric trough. Tropical cyclone movement can also be influenced by TUTT cells near their position, which can lead to non-climatological tropical cyclone tracks. Interaction with monsoon regimes As upper level lows retrograde over land masses, they can enhance thunderstorm activity during the afternoon. This magnifies regional monsoon regimes, such as that over western North America near the United States–Mexico border, and the relationship can be used to forecast surges in monsoon precipitation. Across the north Indian Ocean, the formation of this type of vortex leads to the onset of monsoon rains during the wet season. References Satellite interpretation Storm Types of cyclone Vortices
Upper tropospheric cyclonic vortex
[ "Chemistry", "Mathematics" ]
1,608
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
19,679,428
https://en.wikipedia.org/wiki/Inertance
In fluid mechanics, inertance is a measure of the pressure difference in a fluid required to cause a unit change in the rate of change of volumetric flow-rate with time. The base SI units of inertance are kg m⁻⁴ or Pa s² m⁻³, and the usual symbol is I. The inertance of a tube is given by: I = ρL/A, where: ρ is the density (with dimensionality of mass per volume) of the fluid, L is the length of the tube, A is the cross-sectional area of the tube. The pressure difference is related to the change in flow-rate by the equation: ΔP = I dQ/dt, where: P is the pressure of the fluid, Q is the volumetric flow-rate (with dimensionality of volume per time). This equation assumes constant density, that the acceleration is uniform, and that the flow is fully developed "plug flow". This precludes sharp bends, water hammer, and so on. To some, it may appear counterintuitive that an increase in cross-sectional area of a tube reduces the inertance of the tube. However, for the same mass flow-rate, a lower cross-sectional area implies a higher fluid velocity and therefore a higher pressure difference to accelerate the fluid. In respiratory physiology, inertance (of air) is measured in cmH2O s² L⁻¹. 1 cmH2O s² L⁻¹ ≈ 98100 Pa s² m⁻³. Using small-signal analysis, an inertance can be represented as a fluid reactance (cf. electrical reactance) through the relation: X = 2πf I, where f is the frequency in Hz. References Fluid mechanics
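A small worked example of these relations, with the fluid and tube dimensions assumed purely for illustration:

```python
import math

# Inertance of a water-filled tube and the pressure needed to ramp its flow.
rho = 1000.0                 # fluid density [kg/m^3] (water), assumed
L, d = 0.5, 0.02             # tube length and diameter [m], assumed
A = math.pi*d**2/4           # cross-sectional area [m^2]

I = rho*L/A                  # inertance, I = rho*L/A [Pa s^2 m^-3]
dQdt = 1e-4                  # rate of change of flow-rate [(m^3/s)/s]
dP = I*dQdt                  # required pressure difference [Pa]
f = 1.0                      # frequency [Hz] for the small-signal reactance
X = 2*math.pi*f*I            # fluid reactance [Pa s m^-3]
print(f"I = {I:.3e}, dP = {dP:.0f} Pa, X = {X:.3e}")

# Check of the respiratory-physiology conversion quoted above,
# using 1 cmH2O = 98.0665 Pa and 1 L = 1e-3 m^3:
print(f"1 cmH2O s^2/L = {98.0665/1e-3:.0f} Pa s^2/m^3")  # ~98100, as stated
```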
Inertance
[ "Engineering" ]
311
[ "Civil engineering", "Fluid mechanics" ]
22,322,716
https://en.wikipedia.org/wiki/Dipropylene%20glycol
Dipropylene glycol is a mixture of three isomeric chemical compounds, 4-oxa-2,6-heptandiol, 2-(2-hydroxy-propoxy)-propan-1-ol, and 2-(2-hydroxy-1-methyl-ethoxy)-propan-1-ol. It is a colorless, nearly odorless liquid with a high boiling point and low toxicity. Uses Dipropylene glycol finds many uses as a plasticizer, an intermediate in industrial chemical reactions, as a polymerization initiator or monomer, and as a solvent. Its low toxicity and solvent properties make it an ideal additive for perfumes and skin and hair care products. It is also a common ingredient in commercial fog fluid, used in entertainment industry fog machines. References Cosmetics chemicals Monomers Plasticizers Diols Glycol ethers
Dipropylene glycol
[ "Chemistry", "Materials_science" ]
190
[ "Monomers", "Polymer chemistry" ]
22,323,371
https://en.wikipedia.org/wiki/Hydrophobicity%20scales
Hydrophobicity scales are values that define the relative hydrophobicity or hydrophilicity of amino acid residues. The more positive the value, the more hydrophobic are the amino acids located in that region of the protein. These scales are commonly used to predict the transmembrane alpha-helices of membrane proteins. When consecutively measuring amino acids of a protein, changes in value indicate attraction of specific protein regions towards the hydrophobic region inside the lipid bilayer. The hydrophobic or hydrophilic character of a compound or amino acid is its hydropathic character, hydropathicity, or hydropathy. Hydrophobicity and the hydrophobic effect The hydrophobic effect represents the tendency of water to exclude non-polar molecules. The effect originates from the disruption of highly dynamic hydrogen bonds between molecules of liquid water. Polar chemical groups, such as the OH group in methanol, do not cause the hydrophobic effect. However, a pure hydrocarbon molecule, for example hexane, cannot accept or donate hydrogen bonds to water. Introduction of hexane into water causes disruption of the hydrogen bonding network between water molecules. The hydrogen bonds are partially reconstructed by building a water "cage" around the hexane molecule, similar to that in clathrate hydrates formed at lower temperatures. The mobility of water molecules in the "cage" (or solvation shell) is strongly restricted. This leads to significant losses in translational and rotational entropy of water molecules and makes the process unfavorable in terms of the free energy of the system. In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity. In this way, the hydrophobic effect not only can be localized but also decomposed into enthalpic and entropic contributions. Types of amino acid hydrophobicity scales A number of different hydrophobicity scales have been developed. The Expasy Protscale website lists a total of 22 hydrophobicity scales. There are clear differences between the four scales shown in the table. Both the second and fourth scales place cysteine as the most hydrophobic residue, unlike the other two scales. This difference is due to the different methods used to measure hydrophobicity. The method used to obtain the Janin and Rose et al. scales was to examine proteins with known 3-D structures and define the hydrophobic character as the tendency for a residue to be found inside of a protein rather than on its surface. Since cysteine forms disulfide bonds that must occur inside a globular structure, cysteine is ranked as the most hydrophobic. The first and third scales are derived from the physicochemical properties of the amino acid side chains. These scales result mainly from inspection of the amino acid structures. Biswas et al. divided the scales, based on the method used to obtain them, into five different categories. Partitioning methods The most common method of measuring amino acid hydrophobicity is partitioning between two immiscible liquid phases. Different organic solvents are most widely used to mimic the protein interior. However, organic solvents are slightly miscible with water and the characteristics of both phases change, making it difficult to obtain a pure hydrophobicity scale. Nozaki and Tanford proposed the first major hydrophobicity scale for nine amino acids.
Ethanol and dioxane were used as the organic solvents and the free energy of transfer of each amino acid was calculated. Non-liquid phases can also be used with partitioning methods, such as micellar phases and vapor phases. Two scales have been developed using micellar phases. Fendler et al. measured the partitioning of 14 radiolabeled amino acids using sodium dodecyl sulfate (SDS) micelles. Also, amino acid side chain affinity for water was measured using vapor phases. Vapor phases represent the simplest nonpolar phases, because they have no interaction with the solute. The hydration potential and its correlation to the appearance of amino acids on the surface of proteins was studied by Wolfenden. Aqueous and polymer phases were used in the development of a novel partitioning scale. Partitioning methods have many drawbacks. First, it is difficult to mimic the protein interior. In addition, the role of self-solvation makes using free amino acids very difficult. Moreover, hydrogen bonds that are lost in the transfer to organic solvents are not re-formed in the solvent, but often are in the interior of a protein. Accessible surface area methods Hydrophobicity scales can also be obtained by calculating the solvent accessible surface areas for amino acid residues in the extended polypeptide chain or in an alpha-helix and multiplying the surface areas by the empirical solvation parameters for the corresponding types of atoms. A differential solvent accessible surface area hydrophobicity scale based on proteins as compacted networks near a critical point, due to self-organization by evolution, was constructed based on asymptotic power-law (self-similar) behavior. This scale is based on a bioinformatic survey of 5526 high-resolution structures from the Protein Data Bank. This differential scale has two comparative advantages: (1) it is especially useful for treating changes in water-protein interactions that are too small to be accessible to conventional force-field calculations, and (2) for homologous structures, it can yield correlations with changes in properties from mutations in the amino acid sequences alone, without determining corresponding structural changes, either in vitro or in vivo. Chromatographic methods Reversed phase liquid chromatography (RPLC) is the most important chromatographic method for measuring solute hydrophobicity. The non-polar stationary phase mimics biological membranes. Peptide usage has many advantages, because partitioning is not distorted by the terminal charges in RPLC. Also, secondary structure formation is avoided by using short-sequence peptides. Derivatization of amino acids is necessary to ease their partition into a C18 bonded phase. Another scale was developed in 1971 and used peptide retention on a hydrophilic gel. 1-Butanol and pyridine were used as the mobile phase in this particular scale and glycine was used as the reference value. Pliska and his coworkers used thin layer chromatography to relate the mobility values of free amino acids to their hydrophobicities. About a decade ago, another hydrophilicity scale was published; this scale used normal phase liquid chromatography and showed the retention of 121 peptides on an amide-80 column. The absolute values and relative rankings of hydrophobicity determined by chromatographic methods can be affected by a number of parameters. These parameters include the silica surface area and pore diameter, the choice and pH of the aqueous buffer, temperature, and the bonding density of stationary phase chains.
Site-directed mutagenesis This method uses recombinant DNA technology and gives an actual measurement of protein stability. In detailed site-directed mutagenesis studies, Yutani and coworkers substituted 19 amino acids at position 49 of tryptophan synthase and measured the free energy of unfolding. They found that the increased stability is directly proportional to the increase in hydrophobicity up to a certain size limit. The main disadvantage of the site-directed mutagenesis method is that not all of the 20 naturally occurring amino acids can substitute for a single residue in a protein. Moreover, these methods have cost problems and are useful only for measuring protein stability. Physical property methods The hydrophobicity scales developed by physical property methods are based on the measurement of different physical properties. Examples include partial molar heat capacity, transition temperature, and surface tension. Physical methods are easy to use and flexible in terms of solute. The most popular hydrophobicity scale of this kind was developed by measuring surface tension values for the 20 naturally occurring amino acids in NaCl solution. The main drawbacks of surface tension measurements are that the broken hydrogen bonds and the neutralized charged groups remain at the solution-air interface. Another physical property method involves measuring the solvation free energy. The solvation free energy is estimated as a product of the accessibility of an atom to the solvent and an atomic solvation parameter. Results indicate that the solvation free energy lowers by an average of 1 kcal/residue upon folding. Recent applications Palliser and Parry have examined about 100 scales and found that they can use them for locating β-strands on the surface of proteins. Hydrophobicity scales were also used to predict the preservation of the genetic code. Trinquier observed a new ordering of the bases, uracil-guanine-cytosine-adenine (UGCA), which they believed better reflected the conserved character of the genetic code than the commonly seen ordering UCAG. Wimley–White whole residue hydrophobicity scales The Wimley–White whole residue hydrophobicity scales are significant for two reasons. First, they include the contributions of the peptide bonds as well as the side chains, providing absolute values. Second, they are based on direct, experimentally determined values for the transfer free energies of polypeptides. Two whole-residue hydrophobicity scales have been measured: One for the transfer of unfolded chains from water to the bilayer interface (referred to as the Wimley–White interfacial hydrophobicity scale). One for the transfer of unfolded chains into octanol, which is relevant to the hydrocarbon core of a bilayer. The Stephen H. White website provides an example of whole residue hydrophobicity scales showing the free energy of transfer ΔG (kcal/mol) from water to the POPC interface and to n-octanol. These two scales are then used together to make whole residue hydropathy plots. The hydropathy plot constructed using ΔGwoct − ΔGwif shows favorable peaks on the absolute scale that correspond to the known TM helices. Thus, the whole residue hydropathy plots illustrate why transmembrane segments prefer a transmembrane location rather than a surface one.
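To show how such scales are used in practice, here is a minimal sliding-window hydropathy calculation of the kind behind the plots described above. The residue values are from the widely used Kyte–Doolittle scale; the toy sequence and the 19-residue window (a typical choice for transmembrane-helix searches) are illustrative assumptions.

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
      'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
      'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9,
      'R': -4.5}

def hydropathy(seq, window=19):
    """Return the mean hydropathy of each full window along the sequence."""
    vals = [KD[aa] for aa in seq]
    return [sum(vals[i:i + window])/window
            for i in range(len(vals) - window + 1)]

seq = "MKTAYIAKQRQISFVKSHFSRQ" + "ALILLLVVVAIFLLLAGWQ"  # toy sequence
scores = hydropathy(seq)
print(f"max window hydropathy = {max(scores):.2f}")
# Windows averaging above about 1.6 on this scale are commonly flagged as
# candidate transmembrane helices.
```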
Bandyopadhyay-Mehler protein structure based scales Most of the existing hydrophobicity scales are derived from the properties of amino acids in their free forms or as part of a short peptide. The Bandyopadhyay-Mehler hydrophobicity scale was based on the partitioning of amino acids in the context of protein structure. Protein structure is a complex mosaic of various dielectric media generated by the arrangement of different amino acids. Hence, different parts of the protein structure most likely behave as solvents with different dielectric values. For simplicity, each protein structure was considered as an immiscible mixture of two solvents, the protein interior and the protein exterior. The local environment around each individual amino acid (termed the "micro-environment") was computed for both the protein interior and the protein exterior. The ratio gives the relative hydrophobicity scale for individual amino acids. The computation was trained on high-resolution protein crystal structures. This quantitative descriptor for the microenvironment was derived from the octanol-water partition coefficients (known as Rekker's fragmental constants) widely used for pharmacophores. This scale correlates well with existing methods based on partitioning and free energy computations. An advantage of this scale is that it is more realistic, as it is derived in the context of real protein structures. Scale based on contact angle of water nanodroplet In the field of engineering, the hydrophobicity (or dewetting ability) of a flat surface (e.g., a counter top in a kitchen or a cooking pan) can be measured by the contact angle of a water droplet. A University of Nebraska-Lincoln team recently devised a computational approach that can relate the molecular hydrophobicity scale of amino-acid chains to the contact angle of a water nanodroplet. The team constructed planar networks composed of unified amino-acid side chains with the native structure of a beta-sheet protein. Using molecular dynamics simulation, the team was able to measure the contact angle of a water nanodroplet on the planar networks (caHydrophobicity). On the other hand, previous studies show that the minimum of the excess chemical potential of a hard-sphere solute with respect to that in the bulk exhibits a linear dependence on the cosine of the contact angle. Based on the computed excess chemical potentials of the purely repulsive methane-sized Weeks–Chandler–Andersen solute with respect to that in the bulk, the extrapolated values of the cosine of the contact angle are calculated (ccHydrophobicity), which can be used to quantify the hydrophobicity of amino acid side chains with complete wetting behaviors. See also Hydrophobic mismatch References External links ProtScale (web-based tool for calculating hydropathy plots) NetSurfP - Secondary Structure and Surface accessibility predictor Whole residue hydrophobicity scale Membrane protein explorer Biophysics Intermolecular forces
Hydrophobicity scales
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,682
[ "Molecular physics", "Applied and interdisciplinary physics", "Materials science", "Intermolecular forces", "Biophysics" ]
22,324,819
https://en.wikipedia.org/wiki/Blandford%E2%80%93Znajek%20process
The Blandford–Znajek process is a mechanism for the extraction of energy from a rotating black hole, introduced by Roger Blandford and Roman Znajek in 1977. This mechanism is the preferred description of how astrophysical jets are formed around spinning supermassive black holes. It is one of the mechanisms that power quasars, or rapidly accreting supermassive black holes. Generally speaking, it has been demonstrated that the power output of the accretion disk is significantly larger than the power output extracted directly from the hole, through its ergosphere. Hence, the presence (or not) of a poloidal magnetic field around the black hole is not the determining factor in its overall power output. It has also been suggested that the mechanism plays a crucial role as a central engine for gamma-ray bursts. Physics of the mechanism As in the Penrose process, the ergosphere plays an important role in the Blandford–Znajek process. In order to extract energy and angular momentum from the black hole, the electromagnetic field around the hole must be modified by magnetospheric currents. In order to drive such currents, the electric field must not be screened, and consequently the vacuum field created within the ergosphere by distant sources must have an unscreened component. The most favored way to provide this is an e± pair cascade in a strong electric and radiation field. As the ergosphere causes the magnetosphere inside it to rotate, the outgoing flux of angular momentum results in extraction of energy from the black hole. The Blandford–Znajek process requires an accretion disc with a strong poloidal magnetic field around a spinning black hole. The magnetic field extracts spin energy, and the power can be estimated as the energy density at the speed-of-light cylinder times the area over which it acts, P ≈ B²R_s⁴ω²/(4c), where B is the magnetic field strength, R_s is the Schwarzschild radius, and ω is the angular velocity. See also Penrose process, another mechanism to extract energy from a black hole Hawking radiation, another mechanism to extract mass, hence energy from a black hole Astrophysical jets, large structures seen around some quasars and created by the Blandford–Znajek process and/or the Penrose process References External links Physicists Identify the Engine Powering Black Hole Energy Beams a Quanta Magazine article on the SANE vs MAD scenarios of the Blandford–Znajek process Black holes
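As a rough illustration of how this estimate scales, the sketch below (Python) evaluates P ≈ B²R_s⁴ω²/(4c) in CGS units for an assumed supermassive black hole. The mass, field strength, and the spin fraction used to set ω are illustrative assumptions, not measured or fitted values, and the result should be read as an order of magnitude only.

```python
# Order-of-magnitude Blandford-Znajek power estimate, P ~ B^2 * R_s^4 * omega^2 / (4c),
# evaluated in CGS (Gaussian) units. All input numbers below are illustrative assumptions.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light, cm/s
M_sun = 1.989e33    # solar mass, g

def schwarzschild_radius(mass_g):
    return 2.0 * G * mass_g / c**2            # cm

def bz_power(mass_g, b_gauss, spin_fraction):
    """Rough BZ power in erg/s.

    spin_fraction scales the angular velocity relative to ~c / (2 R_s); it is a
    crude stand-in for the dimensionless spin parameter, used only for illustration.
    """
    r_s = schwarzschild_radius(mass_g)
    omega = spin_fraction * c / (2.0 * r_s)   # rad/s
    return b_gauss**2 * r_s**4 * omega**2 / (4.0 * c)

if __name__ == "__main__":
    mass = 1e9 * M_sun   # assumed 10^9 solar-mass quasar engine
    field = 1e4          # assumed 10 kG poloidal field near the horizon
    print(f"P_BZ ~ {bz_power(mass, field, spin_fraction=0.5):.2e} erg/s")
```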
Blandford–Znajek process
[ "Physics", "Astronomy" ]
498
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
22,328,498
https://en.wikipedia.org/wiki/Efferocytosis
In cell biology, efferocytosis (from efferre, Latin for 'to carry out' (to the grave), extended meaning 'to bury') is the process by which apoptotic cells are removed by phagocytic cells. It can be regarded as the 'burying of dead cells'. During efferocytosis, the cell membrane of phagocytic cells engulfs the apoptotic cell, forming a large fluid-filled vesicle containing the dead cell. This ingested vesicle is called an efferosome (in analogy to the term phagosome). This process is similar to macropinocytosis. Mechanism In apoptosis, the effect of efferocytosis is that dead cells are removed before their membrane integrity is breached and their contents leak into the surrounding tissue. This prevents exposure of tissue to toxic enzymes, oxidants and other intracellular components such as proteases and caspases. Efferocytosis can be performed not only by 'professional' phagocytic cells such as macrophages or dendritic cells, but also by many other cell types including epithelial cells and fibroblasts. To distinguish them from living cells, apoptotic cells carry specific 'eat me' signals, such as the presence of phosphatidylserine (resulting from phospholipid flip-flop) or calreticulin on the outer leaflet of the cell membrane. Downstream consequences Efferocytosis triggers specific downstream intracellular signal transduction pathways, for example resulting in anti-inflammatory, anti-protease and growth-promoting effects. Conversely, impaired efferocytosis has been linked to autoimmune disease and tissue damage. Efferocytosis results in production by the ingesting cell of mediators such as hepatocyte- and vascular endothelial growth factor, which are thought to promote replacement of the dead cells. Specialized pro-resolving mediators are cell-derived metabolites of certain polyunsaturated fatty acids viz.: arachidonic acid which is metabolized to the lipoxins; eicosapentaenoic acid which is metabolized to the E-series resolvins; docosahexaenoic acid which is metabolized to the D-series resolvins, maresins, and neuroprotectins; and n-3 docosapentaenoic acid which is metabolized to the n-3 docosapentaenoic acid-derived resolvins and n-3 docosapentaenoic acid-derived neuroprotectins (see Specialized pro-resolving mediators). These mediators possess a broad range of overlapping activities which act to resolve inflammation; one of the important activities which many of these mediators possess is the stimulation of efferocytosis in inflamed tissues. Failure to form sufficient amounts of these mediators is proposed to be one cause of chronic and pathological inflammatory responses (see Specialized pro-resolving mediators#SPM and inflammation). Clinical significance Defective efferocytosis has been demonstrated in such diseases as cystic fibrosis and bronchiectasis, chronic obstructive pulmonary disease, asthma and idiopathic pulmonary fibrosis, rheumatoid arthritis, systemic lupus erythematosus, glomerulonephritis and atherosclerosis. Footnotes Cellular processes
Efferocytosis
[ "Biology" ]
738
[ "Cellular processes" ]
22,330,333
https://en.wikipedia.org/wiki/Waste%20converter
A waste converter is a machine used for the treatment and recycling of solid and liquid refuse material. A converter is a self-contained system capable of performing the following functions: pasteurization of organic waste; sterilization of pathogenic or biohazard waste; grinding and pulverization of refuse into unrecognizable output; trash compaction; dehydration. Because of the wide variety of functions available on converters, this technology has found application in diverse waste-producing industrial segments. Hospitals, clinics, municipal waste facilities, farms, slaughterhouses, supermarkets, ports, sea vessels, and airports are the primary beneficiaries of on-site waste conversion. The converter is an evolution of the autoclave, invented by Charles Chamberland in 1879, but differs from a waste autoclave in several key characteristics. While the autoclave relies on high temperature and pressure to achieve moist heat sterilization of waste, a converter operates in the atmospheric pressure range. Superheating conditions and steam generation are achieved by variable pressure control, which cycles between ambient and negative pressure within the sterilization cell. The advantage of this updated approach is a safer and less complicated operation that does not require a pressure vessel. Additionally, while autoclaves require external water input, modern converters utilize the moisture content already present in the conversion cell to generate steam sterilization conditions. Any water that is introduced into the process can be recycled in a closed-loop system as opposed to being dumped as run-off sewage. In general, the converter is a simplified, cleaner, and more efficient update to Chamberland's invention. Converter technology is an environmentally friendly alternative to traditional means of waste disposal such as incineration, plasma-arc treatment, and landfill dumping, in that waste conversion results in a small carbon footprint, avoids polluting emissions into the atmosphere, and yields a usable end product such as biofuel, soil compost, or building material (see also Refuse-derived fuel). Applications Application of the converter is common in centralized waste conversion centers, where large machines process waste on an industrial scale. MSW (municipal solid waste) or infectious waste, depending on the type of plant, is sterilized and converted into an innocuous, sterilized organic and inorganic end product. Machines used in such large-scale applications process between 1,000 and 4,000 kg of waste per hour. At the end of each cycle, lasting as little as half an hour in converters that are capable of grinding, the pulverized, sanitized, and dehydrated product is off-loaded and segregated for other uses. Some of the product is routed for use in pulp production, composting, or refuse-derived fuel. Applications outside of waste treatment centers are increasingly common due to the portability and simplicity of modern converters. Hospitals are a large beneficiary of converter technology, which allows for the immediate treatment of potentially infected hazardous waste at its source. Hospitals and clinics equipped to have a zero hazardous waste footprint operate by having a converter placed on every floor, where single-use sanitary items such as needles, scalpels, bandages, and blood bags are immediately converted into innocuous product. In addition to the marked improvement in sanitation, on-site treatment of hazardous waste allows operational cost savings for these facilities.
The government of Tuscany, Italy, for example, calculated an annual saving of 8 million euros from turning to on-site treatment of medical hospital waste. Supermarkets and food producers (who dump unused food waste in municipal landfills at a rate that alarms many conservationists) have found a use for converter technology. By processing unused and decomposing food matter together with packaging and other refuse on site, supermarkets have achieved improvements in terms of waste disposal costs. This is in addition to improvements in public perception, which had been seriously critical of the amount of waste sent to landfills by food stores. In the UK alone, 6.7 million metric tons of food waste goes into landfills each year, resulting in 8 million metric tons of CO2 being emitted. Farms, slaughterhouses, and other food producers are likewise becoming more involved in on-site waste conversion. Larger installations especially, where garbage hauling is a major and expensive operation, currently have economic and legislative incentives to move towards operating their own converters. Recent drives toward environmentally conscious or "green" technologies have even provided government budgets for such installations. Naval vessels, cruise liners, and off-shore installations such as gas-drilling rigs and oil platforms are another logical application of converter technology. Due to the extended isolation periods of sea-going vessels and off-shore platforms, there is an issue of how to store and dispose of refuse in an efficient and sanitary way. Worldwide legislation on sea dumping is strict and does not allow, under stringent penalties, any ships or sea vessels to dump waste, gray water, or even ballast water that has been collected in a remote geographic location, due to the danger of biological contamination. Ship-generated waste is either held and disposed of in port waste disposal facilities or can be converted directly on the vessel for easier storage and, at times (depending on waste composition), for additional fuel. Environmental Impact The converter is one of the "green technologies" available today for waste treatment. There is a clear and definite positive environmental impact stemming from the conversion of waste into biofuel, building material, and soil compost. In 2009, ever-increasing numbers of waste exporters around the world were finding it difficult to find buyers for their cargo. Increasing numbers of local and national governments are also turning to recycling and conversion technology to relieve the pressure on already-full or overfilled landfills. Waste conversion, augmented by traditional recycling methods, now allows nearly 99% of all MSW to be reused in some way, thus sharply reducing the demand on landfills. In 1980 only about 10% of municipal solid waste was recycled, and the product consisted largely of recycled paper and glass. With the widespread use of autoclaves, that percentage climbed to a significant 45% by the year 2000, when composting and energy recovery became more common. With the evolution of the autoclave and the arrival of converters, new uses of converted product are emerging on an ongoing basis. Some applications only recently implemented are: composting and combining with farm fertilizer, building material such as concrete additive, gasification fuel, and furnace/boiler pellet additive.
The latter two are energy recovery options that only became acceptable on a large scale once the product was demonstrated to burn cleanly and within EPA emission regulations. A notable fact about energy recovery is that even though some waste was already reused in this way as early as 1980, the current generation of converters produces biomass fuel that burns far more cleanly than the incinerated waste of those times. There are environmentally conscious improvements that have been built into the design of converters based on the lessons learned from older technologies. The new machines run on much less power than the large pressure-vessel type autoclaves, and can be plugged into 400 V power supplies or run off a small motor as stand-alone units. As a result, a new degree of portability became possible, and now facilities such as hospitals are placing dishwasher-sized units in every department. One important advance that allowed for a leaner, greener technology is the ability of modern converters to convert mechanical energy and friction forces acting on the waste mass into heat that is used in the pasteurization and sterilization processes. Operation A typical treatment cycle begins with the loading of unsorted waste material and ends in the offload of a dry powder (product), which possesses characteristics that the input material did not. The garbage is loaded into a chamber, also called the conversion cell, by hand or through the use of a loading elevator or conveyor belt, depending on the application and the toxicity or danger level associated with handling the waste. The previous batch of post-treatment product is removed and a new cycle is started through an electronic control panel. Modern converters are fully automatic and will finish the computer-controlled cycle autonomously unless a failure occurs. The precise conditions needed to achieve pasteurization and sterilization are controlled by a programmable logic controller (PLC) in millisecond intervals. This level of control in modern units has allowed for the simplification of autoclave technology to the point where heavy and potentially dangerous pressure vessels are not needed, and sterilization conditions are reached by depressurizing the conversion cell and continuing to evaporate moisture from the product under negative pressure. The result is a statistically safer and more reliable machine, which is also smaller and lighter than older autoclaves. The conversion cycle consists of several sequential steps or phases. The waste is first ground and pulverized to an unrecognizable mixture by a combination of fixed and actuated hardened steel blades. The mixture is then heated through the injection of steam and also by the heat generated by the frictional forces of the grinding phase. The exact temperature required to pasteurize, and in the subsequent phase to sterilize, the waste is maintained for a time that allows for an 18 log10 reduction in microorganisms. To achieve the reduction in microorganisms required by government regulations, complete saturation of the waste matter with superheated steam is required for a minimum amount of time, also regulated by environmental agencies. The modern converter achieves saturation within 10–15 minutes due to the high degree of pulverization preceding the sterilization phase, whereas older models required up to several hours to saturate and sterilize the same load. The cycle ends in a cooling phase, during which the product continues to be dehydrated.
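The phased cycle just described (grind, heat, hold for sterilization, cool) lends itself to a simple sequential controller. The sketch below (Python) is a minimal illustration of such a phase sequence; the phase names, temperatures, and hold times are invented placeholders, not the settings of any real converter or PLC.

```python
# Minimal sketch of a phased treatment-cycle controller.
# Phase names, temperatures and hold times are illustrative placeholders only;
# a real converter's PLC enforces values set by the manufacturer and regulators.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    target_temp_c: float   # temperature to reach/hold during the phase
    hold_minutes: float    # how long the condition must be maintained

CYCLE = [
    Phase("grind",      40.0,  5.0),   # mechanical pulverization, friction heating
    Phase("pasteurize", 80.0, 10.0),
    Phase("sterilize", 121.0, 15.0),   # saturated-steam hold under negative pressure
    Phase("cool",       35.0,  5.0),   # dehydration continues while cooling
]

def run_cycle(phases, read_temp, set_heater, wait_minutes):
    """Step through each phase, logging when its hold condition has been applied."""
    log = []
    for phase in phases:
        set_heater(phase.target_temp_c)    # command the (simulated) heater
        wait_minutes(phase.hold_minutes)   # hold the condition for the required time
        log.append((phase.name, read_temp(), phase.hold_minutes))
    return log

if __name__ == "__main__":
    # Trivial simulated plant: the heater instantly reaches its setpoint.
    state = {"temp": 20.0}
    log = run_cycle(
        CYCLE,
        read_temp=lambda: state["temp"],
        set_heater=lambda t: state.update(temp=t),
        wait_minutes=lambda m: None,       # no real waiting in this sketch
    )
    for name, temp, hold in log:
        print(f"{name}: held ~{temp} degC for {hold} min")
```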
Upon reaching a temperature at which the product is safe to handle, near ambient temperature, the cycle automatically shuts down. The end product is expelled into a tray that can then be hauled off for storage. The entire process and its statistics are recorded and stored in computer memory for record keeping. Competing Technologies Waste converters do have indirect competition, since there are many modes of waste disposal available in the market. However, most of the 'competing' technologies can be added into a larger waste conversion process that will usually place the waste converter at the forefront of a supply chain for on-site waste conversion into a sanitized and dehydrated bulk material. Following processing, the bulk material will follow one of several process paths; see the Applications section. Plasma-arc gasification plants are able to process all types of waste under extreme heat that emanates from plasma torches in contact with the refuse. Plasma-arc plants produce two types of output: syn-gas, which is collected for use as fuel, and a composite solid that has some properties of plastic and can also be recycled for use in consumer goods. Plasma-arc and gasification plants are highly complex installations and only compete with converter technology at the level of industrial-size converters. The two technologies can still be used in series by sterilizing and lightening waste on-site before sending it to a gasification plant for syn-gas extraction. This way transportation and storage costs are kept down while maintaining a more sanitary operation. Adding a waste converter at the front end of a gasification process will theoretically produce fewer emissions and a cleaner, more usable end product. Microwave and other irradiation sterilization has been used to sterilize biohazard material in large quantities, but all such approaches suffer from major drawbacks as compared to converters. Irradiation plants are large installations that are expensive to build and maintain, and must necessarily be hub locations as part of a larger supply chain. There is also a health risk and a danger of exposure to radioactive material in installations that use energy sources for the generation of gamma rays and other types of radiation. In general, microwave and irradiation technology was used to treat hazardous pathogenic waste before the widespread adoption of the moist-heat sterilization that both autoclaves and converters use. These solutions have been economically forced out of mainstream use because of their high complexity and operational costs. The introduction of autoclaves was a big step in the direction of widespread treatment of refuse, and so many functional forms of autoclaves are currently in operation at landfills and treatment centers around the world. Some common features of autoclaves that differentiate them from competing technologies are as follows. Autoclaves employ the moist-heat method to sterilize and deactivate refuse in a large pressure vessel where saturated steam is injected. The end product is completely sanitized, and even previously biohazardous waste can be discarded in a normal municipal landfill. Autoclaves employ the same method of sterilization as converters, but do so using larger and more complex equipment that also has higher safety and energy consumption requirements. In the majority of cases, autoclaves are hub-and-spoke operations requiring regional treatment centers and a supply chain.
Incineration and compaction have traditionally been treatment options and still compete with newer technologies by virtue of having a proven track record. These solutions are widely recognized as obsolete due to their impact on the biosphere; while compaction has nearly no effect on the long-term goal of reducing waste dumping, incineration has been notorious for polluting the atmosphere. While incineration is popular for its ability to recover energy from waste, it has been widely debated and regulated in order to preserve clean air. One solution that has been instituted to update incineration technology is to add converter stations into the supply chain of clean-burning RDF, or refuse-derived fuel. This, along with advanced filtering and condensation scrubbers, was able to render energy recovery from waste a feasible and eco-friendly solution. Sources https://www.conversionwastecenter.com/ http://ompeco.com http://www.cali.gov.co/ https://web.archive.org/web/20090504075858/http://www.ecogeek.org:80/content/view/2510/81/ http://www.ecologicasud.it http://www.environmentalleverage.com http://www.epa.gov http://www.gov.bw/ http://www.hitechambiente.com/index.asp http://www.ms.ro/ http://www.organicfarming.com.au/ http://www.smrc.com.au/ http://www.svswa.org/ Sterilizzatore "Converter", University of Napoli study by Prof. Paolo Marinelli. Thermal treatment Waste treatment technology
Waste converter
[ "Chemistry", "Engineering" ]
3,049
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
22,332,426
https://en.wikipedia.org/wiki/Perfluorotripentylamine
Perfluorotripentylamine is an organic compound with the chemical formula N(C5F11)3. A molecule of this compound consists of three pentyl groups connected to one nitrogen atom, in which all of the hydrogen atoms are replaced with fluorine atoms. It is a perfluorocarbon. It is used as an electronics coolant and has a high boiling point. It is colorless, odorless, and insoluble in water. Unlike ordinary amines, perfluoroamines are of low basicity. Perfluorinated amines are components of fluorofluids, used as immersive coolants for supercomputers. It is prepared by electrofluorination of tripentylamine using hydrogen fluoride as both solvent and source of fluorine: N(C5H11)3 + 33 HF → N(C5F11)3 + 33 H2 Safety Fluoroamines are generally of very low toxicity, so much so that they have been evaluated as components of synthetic blood. See also Perfluorotributylamine References Fluorinert FC-70 (3M) Coolants Halogenated solvents Perfluorinated compounds Amines
Perfluorotripentylamine
[ "Chemistry" ]
227
[ "Amines", "Bases (chemistry)", "Functional groups" ]
23,668,054
https://en.wikipedia.org/wiki/Leslie%20cube
Leslie's cube is a device used in the measurement or demonstration of the variations in thermal radiation emitted from different surfaces at the same temperature. Device It was devised in 1804 by John Leslie (1766–1832), a Scottish mathematician and physicist. In the version of the experiment described by John Tyndall in the late 1800s, one of the cube's vertical sides is coated with a layer of gold, another with a layer of silver, a third with a layer of copper, while the fourth side is coated with a varnish of isinglass. The cube is made from a solid block of metal with a central cavity. In use, the cavity was filled with hot water; the entire cube has essentially the same temperature as the water. The thermal detector (on the far right in the figure) showed much greater emission from the side with varnish than from any of the other three sides. In contemporary terms, the emissivities of shiny metals are low. Isinglass is an organic glue, and has a much larger emissivity than the metals. Leslie's cube is still in use to demonstrate and measure the variations in emissivities for different materials. In the figure, the false color images ("thermographs") of a cube at about 55 °C were taken with an infrared camera; the black and white photographs are taken with an ordinary camera. The black face of the cube is highly emissive, as indicated by the reddish color of the thermograph. The mirror-like, polished face of the aluminum cube emits thermal radiation weakly, as indicated by the blue color. The reflection of the experimenter's hand is green, which corresponds to a high emissivity surface near body temperature (37 °C). The photographs also show that the white painted surface is nearly as emissive as a black surface. A modern version of Leslie's Cube is part of the structure of a small earth-orbiting satellite known as FUNcube-1 and registered as a Dutch spacecraft. Launched in November 2013, it demonstrates the absorption and emission of solar radiation in space as the satellite orbits in full sunlight, eclipse and rotates around its three axes. See also Blackbody radiation Emissivity References Further reading In 1856, Draper described the device as a cubical brass vessel set upon a vertical rotatable stem. At a little distance is the blackened bulb of a differential thermometer. A mirror reflects the infrared rays of the cube onto the bulb. One of the sides of the cube is left with a clear surface, another with a coat of varnish, the third with two, and the fourth with three coats. It was found that more heat escaped as the number of coats increased. In the experiments of Macedonio Melloni, it was found that the maximum rate of radiation was at 16 coats. Radiometry 1804 introductions 1804 in science 1804 in Scotland Cubes
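The contrast between the faces can be made quantitative with the Stefan–Boltzmann law, j = εσT⁴, where ε is the emissivity. The sketch below (Python) compares the radiant exitance of a high-emissivity painted face and a polished metal face at the cube temperature quoted above; the emissivity values are typical textbook figures used purely for illustration, not measurements of any particular cube.

```python
# Radiant exitance j = emissivity * sigma * T^4 for two faces of a Leslie cube.
# The emissivity values below are typical illustrative figures only.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(emissivity, temp_kelvin):
    return emissivity * SIGMA * temp_kelvin**4

if __name__ == "__main__":
    T = 55.0 + 273.15  # cube at about 55 degC
    faces = {
        "black/painted face (e ~ 0.95)": 0.95,
        "polished aluminium face (e ~ 0.05)": 0.05,
    }
    for label, eps in faces.items():
        print(f"{label}: {radiant_exitance(eps, T):.1f} W/m^2")
```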
Leslie cube
[ "Engineering" ]
593
[ "Telecommunications engineering", "Radiometry" ]
23,669,193
https://en.wikipedia.org/wiki/Undercut%20%28manufacturing%29
In manufacturing, an undercut is a special type of recessed surface that is inaccessible using a straight tool. In turning, it refers to a recess in a diameter, generally on the inside diameter of the part. In milling, it refers to a feature which is not visible when the part is viewed from the spindle. In molding, it refers to a feature that cannot be molded using only a single pull mold. In printed circuit board construction, it refers to the portion of the copper that is etched away under the photoresist. Turning On turned parts an undercut is also known as a neck or "relief groove". They are often used at the end of the threaded portion of a shaft or screw to provide clearance for the cutting tool. Molding In molding, an undercut is any indentation or protrusion in a shape that will prevent its withdrawal from a one-piece mold. Milling In milling, the spindle is where the cutting tool is mounted. In some situations material must be cut from a direction where the feature cannot be seen from the perspective of the spindle, which requires special tooling to reach behind the visible material. The corners may be undercut to remove the radius that is usually left by the milling cutter; this is commonly referred to as a relief. Etching Undercuts from etching (microfabrication) are a side effect, not an intentional feature. Gears References Bibliography Mechanical engineering Metalworking terminology Plastics industry
Undercut (manufacturing)
[ "Physics", "Engineering" ]
292
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
23,672,379
https://en.wikipedia.org/wiki/Agricultural%20machinery
Agricultural machinery relates to the mechanical structures and devices used in farming or other agriculture. There are many types of such equipment, from hand tools and power tools to tractors and the farm implements that they tow or operate. Machinery is used in both organic and nonorganic farming. Especially since the advent of mechanised agriculture, agricultural machinery is an indispensable part of how the world is fed. Agricultural machinery can be regarded as part of wider agricultural automation technologies, which include more advanced digital equipment and agricultural robotics. While robots have the potential to automate the three key steps involved in any agricultural operation (diagnosis, decision-making and performing), conventional motorized machinery is used principally to automate only the performing step, where diagnosis and decision-making are conducted by humans based on observations and experience. History The Industrial Revolution With the coming of the Industrial Revolution and the development of more complicated machines, farming methods took a great leap forward. Instead of harvesting grain by hand with a sharp blade, wheeled machines cut a continuous swath. Instead of threshing the grain by beating it with sticks, threshing machines separated the seeds from the heads and stalks. The first tractors appeared in the late 19th century. Steam power Power for agricultural machinery was originally supplied by oxen or other domesticated animals. With the invention of steam power came the portable engine, and later the traction engine, a multipurpose, mobile energy source that was the ground-crawling cousin to the steam locomotive. Agricultural steam engines took over the heavy pulling work of oxen, and were also equipped with a pulley that could power stationary machines via the use of a long belt. The steam-powered machines were low-powered by today's standards, but because of their size and their low gear ratios, they could provide a large drawbar pull. The slow speed of steam-powered machines led farmers to comment that tractors had two speeds: "slow, and damn slow". Internal combustion engines The internal combustion engine, first the petrol engine and later diesel engines, became the main source of power for the next generation of tractors. These engines also contributed to the development of the self-propelled combine harvester and thresher, or combine harvester (also shortened to 'combine'). Instead of cutting the grain stalks and transporting them to a stationary threshing machine, these combines cut, threshed, and separated the grain while moving continuously throughout the field. Agricultural machinery types Tractors Tractors do the majority of work on a modern farm. They are used to push/pull implements—machines that till the ground, plant seeds, and perform other tasks. Tillage implements prepare the soil for planting by loosening the soil and killing weeds or competing plants. The best-known is the plow, the ancient implement that was upgraded in 1838 by John Deere. Plows are now used less frequently in the U.S. than formerly, with offset disks used instead to turn over the soil, and chisels used to gain the depth needed to retain moisture. Combines The combine is a machine designed to efficiently harvest a variety of grain crops. The name derives from its combining four separate harvesting operations—reaping, threshing, gathering, and winnowing—into a single process.
Among the crops harvested with a combine are wheat, rice, oats, rye, barley, corn (maize), sorghum, soybeans, flax (linseed), sunflowers and rapeseed. Planters The most common type of seeder is called a planter, and spaces seeds out equally in long rows, which are usually two to three feet apart. Some crops are planted by drills, which put out much more seed in rows less than a foot apart, blanketing the field with crops. Transplanters automate the task of transplanting seedlings to the field. With the widespread use of plastic mulch, plastic mulch layers, transplanters, and seeders lay down long rows of plastic, and plant through them automatically. Sprayers After planting, other agricultural machinery such as self-propelled sprayers can be used to apply fertilizer and pesticides. Sprayer application is a method of protecting crops from weeds, fungal diseases, and insect pests by using herbicides, fungicides, and insecticides. Spraying or planting a cover crop are ways to manage weed growth. Balers and other agricultural implements Hay balers can be used to tightly package grass or alfalfa into a storable form for the winter months. Modern irrigation relies on machinery. Engines, pumps and other specialized gear provide water quickly and in high volumes to large areas of land. Similar types of equipment such as agricultural sprayers can be used to deliver fertilizers and pesticides. Besides the tractor, other vehicles have been adapted for use in farming, including trucks, airplanes, and helicopters, for purposes ranging from transporting crops and making equipment mobile to aerial spraying and livestock herd management. New technology and the future The basic technology of agricultural machines has changed little in the last century. Though modern harvesters and planters may do a better job or be slightly tweaked from their predecessors, the combine of today still cuts, threshes, and separates grain in the same way it has always been done. However, technology is changing the way that humans operate the machines, as computer monitoring systems, GPS locators and self-steer programs allow the most advanced tractors and implements to be more precise and less wasteful in the use of fuel, seed, or fertilizer. In the foreseeable future, there may be mass production of driverless tractors, which use GPS maps and electronic sensors. Agricultural automation The Food and Agriculture Organization of the United Nations (FAO) defines agricultural automation as the use of machinery and equipment in agricultural operations to improve their diagnosis, decision-making, or performance, reducing the drudgery of agricultural work and improving the timeliness, and potentially the precision, of agricultural operations. The technological evolution in agriculture has been a journey from manual tools to animal traction, then to motorized mechanization, and further to digital equipment. This progression has culminated in the use of robotics with artificial intelligence (AI). Motorized mechanization, for instance, automates operations like ploughing, seeding, fertilizing, milking, feeding, and irrigating, thereby significantly reducing manual labor. With the advent of digital automation technologies, it has become possible to automate diagnosis and decision-making. For instance, autonomous crop robots can harvest and seed crops, and drones can collect information to help automate input applications. Tractors, on the other hand, can be transformed into automated vehicles that can sow fields independently.
A 2023 report by the United States Department of Agriculture (USDA) revealed that over 50% of corn, cotton, rice, sorghum, soybeans, and winter wheat in the United States is planted using automated guidance systems. These systems, which utilize technology to autonomously steer farm equipment, only require supervision from a farmer. This is a clear example of how agricultural automation is being implemented in real-world farming scenarios. Open source agricultural equipment Many farmers are upset by their inability to fix the new types of high-tech farm equipment. This is due mostly to companies using intellectual property law to prevent farmers from having the legal right to fix their equipment (or gain access to the information to allow them to do it). In October 2015, an exemption was added to the DMCA to allow inspection and modification of the software in cars and other vehicles, including agricultural machinery. The Open Source Agriculture movement counts different initiatives and organizations, such as Farm Labs, a network in Europe; l'Atelier Paysan, a cooperative that teaches farmers in France how to build and repair their tools; and Ekylibre, an open-source company that provides farmers in France with open-source software (SaaS) to manage farming operations. In the United States, the MIT Media Lab's Open Agriculture Initiative seeks to foster "the creation of an open-source ecosystem of technologies that enable and promote transparency, networked experimentation, education, and hyper-local production". It develops the Personal Food Computer, an educational project to create a "controlled environment agriculture technology platform that uses robotic systems to control and monitor climate, energy, and plant growth inside of a specialized growing chamber". It includes the development of Open Phenom, an open-source library with open data sets for climate recipes, which link the phenotype response of plants (taste, nutrition) to the environmental, biological, genetic, and resource-related inputs necessary for cultivation. Plants with the same genetics can naturally vary in color, size, texture, growth rate, yield, flavor, and nutrient density according to the environmental conditions in which they are produced. Manufacturers Active AGCO Agrale Al-Ghazi Tractors Algerian Tractors Company Arbos ARGO SpA Carraro Agritalia Case IH Challenger Tractors Claas CNH Industrial Daedong Deutz-Fahr Escorts Limited Fendt Goldoni Iseki Jacto JCB John Deere Kharkiv Tractor Plant Kirov Plant Kubota Lamborghini Trattori Landini Lindner LS Mtron Mahindra Tractors Massey Ferguson McCormick Tractors Millat Tractors Minsk Tractor Works Mitsubishi Agricultural Machinery New Holland Agriculture Pronar Shibaura Sonalika Tractors SAME SAS Motors SDF Group Stara Steyr TAFE TYM Ursus SA Valpadana Valtra Versatile Yanmar YTO Group Zetor Zoomlion Balwaan Agri Former Allis-Chalmers Case Corporation Ferguson-Brown Company Fiat Trattori Ford International Harvester Leyland Tractors Massey-Harris Renault Agriculture See also List of agricultural machinery Mechanised agriculture Agricultural machinery industry Agricultural robot Sources References External links Hay Harvesting in the 1940s instructional films, Center for Digital Initiatives, University of Vermont Library Worldwide Agricultural Machinery and Farm Equipment Directory Economic Situation of the agricultural machinery sector—VDMA Report machinery Machinery
Agricultural machinery
[ "Physics", "Technology", "Engineering" ]
2,038
[ "Physical systems", "Machines", "Machinery", "Mechanical engineering" ]
23,672,633
https://en.wikipedia.org/wiki/Wattle%20and%20daub
Wattle and daub is a composite building method used for making walls and buildings, in which a woven lattice of wooden strips called "wattle" is "daubed" with a sticky material usually made of some combination of wet soil, clay, sand, and straw. Wattle and daub has been used for at least 6,000 years and is still an important construction method in many parts of the world. Many historic buildings include wattle and daub construction. History The wattle and daub technique has been used since the Neolithic period. It was common for houses of the Linear Pottery and Rössen cultures of central Europe, but is also found in Western Asia (Çatalhöyük, Shillourokambos) as well as in North America (Mississippian culture) and South America (Brazil). In Africa it is common in the architecture of traditional houses such as those of the Ashanti people. Its usage dates back at least 6,000 years. There are suggestions that construction techniques such as lath and plaster and even cob may have evolved from wattle and daub. Fragments from prehistoric wattle and daub buildings have been found in Africa, Europe, Mesoamerica and North America. Evidence for wattle and daub (or "wattle and reed") fire pits, storage bins, and buildings shows up in Egyptian archaeological sites such as Merimda and El Omari, dating back to the 5th millennium BCE, predating the use of mud brick and continuing to be the preferred building material until about the start of the First Dynasty. It continued to flourish well into the New Kingdom and beyond. Vitruvius refers to it as being employed in Rome. A review of English architecture especially reveals that the sophistication of this craft is dependent on the various styles of timber frame housing. The wattle and plaster process has been replaced in modern architecture by brick and mortar or by lath and plaster, a common building material for wall and ceiling surfaces, in which a series of nailed wooden strips are covered with plaster smoothed into a flat surface. In many regions this building method has itself been overtaken by drywall construction using plasterboard sheets. Wattle The wattle is made by weaving thin branches (either whole, or more usually split) or slats between upright stakes. The wattle may be made as loose panels, slotted between timber framing to make infill panels, or made in place to form the whole of a wall. In different regions, the material of the wattle can differ. For example, at the Mitchell Site on the northern outskirts of the city of Mitchell, South Dakota, willow has been found as the wattle material of the walls of the house. Reeds and vines can also be used as wattle material. The use of the term wattle to describe a group of acacias in Australia is derived from the common use of acacias as wattle in early Australian European settlements. Daub Daub is usually created from a mixture of ingredients from three categories: binders, aggregates and reinforcement. Binders hold the mix together and can include clay, lime, chalk dust and limestone dust. Aggregates give the mix its bulk and dimensional stability through materials such as mud, sand, crushed chalk and crushed stone. Reinforcement is provided by straw, hair, hay or other fibrous materials, and helps to hold the mix together as well as to control shrinkage and provide flexibility. The daub may be mixed by hand, or by treading – either by humans or livestock. It is then applied to the wattle and allowed to dry, and often then whitewashed to increase its resistance to rain.
Sometimes there can be more than one layer of daub. At the Mitchell Site, the anterior of the house had double layers of burned daub. Styles of infill panels There were two popular choices for wattle and daub infill paneling: close-studded paneling and square paneling. Close-studding Close-studding panels create a much narrower space between the timbers: anywhere from 7 to 16 inches (18 to 40 cm). For this style of panel, weaving is too difficult, so the wattles run horizontally and are known as ledgers. The ledgers are sprung into each upright timber (stud) through a system of augered holes on one side and short chiseled grooves along the other. The holes (along with holes of square paneling) are drilled at a slight angle towards the outer face of each stud. This allows room for upright hazels to be tied to ledgers from the inside of the building. The horizontal ledgers are placed every two to three feet (0.6 to 0.9 metres) with whole hazel rods positioned upright top to bottom and lashed to the ledgers. These hazel rods are generally tied a finger-width apart with 6–8 rods each with a 16-inch (40 cm) width. Gaps allow key formation for drying. Square panels Square panels are large, wide panels typical of some later timber-frame houses. These panels may be square in shape, or sometimes triangular to accommodate arched or decorative bracing. This style requires the wattles to be woven for better support of the daub. To insert wattles in a square panel several steps are required. First, a series of evenly spaced holes are drilled along the middle of the inner face of each upper timber. Next, a continuous groove is cut along the middle of each inner face of the lower timber in each panel. Vertical slender timbers, known as staves, are then inserted and these hold the whole panel within the timber frame. The staves are positioned into the holes and then sprung into the grooves. They must be placed with sufficient gaps to weave the flexible horizontal wattles. Applications In some places or cultures, the technique of wattle and daub was used with different materials and thus has different names. Pug and pine In the early days of the colonisation of South Australia, in areas where substantial timber was unavailable, pioneers' cottages and other small buildings were frequently constructed with light vertical timbers, which may have been "native pine" (Callitris or Casuarina spp.), driven into the ground, the gaps being stopped with pug (kneaded clay and grass mixture). Another term for this construction is palisade and pug. Mud and stud "Mud and stud" is a similar process to wattle and daub, with a simple frame consisting only of upright studs joined by cross rails at the tops and bottoms. Thin staves of ash were attached, then daubed with a mixture of mud, straw, hair and dung. The style of building was once common in Lincolnshire. Pierrotage, columbage Pierrotage is the infilling material used in French Vernacular architecture of the Southern United States to infill between half-timbering with diagonal braces, which is similar to daub. It is usually made of lime mortar clay mixed with small stones. It is also called bousillage or bouzillage, especially in French Vernacular architecture of Louisiana of the early 1700s. The materials of bousillage are Spanish moss or clay and grass. Bousillage also refers to the type of brick molded with the same materials and used as infilling between posts. Columbage refers to the timber-framed construction with diagonal bracing of the framework. 
Pierrotage or bousillage is the material filled into the structural timbers. Bajarreque Bajarreque is a wall constructed with the technique of wattle and daub. The wattle here is made of bagasse, and the daub is a mix of clay and straw. Jacal Jacal can refer to a type of crude house whose walls are built with wattle and daub in the southwestern US. Closely spaced upright sticks or poles driven into the ground, with small branches (wattle) interwoven between them, make the structural frame of the wall. Mud or adobe clay (daub) covers the outside. To provide additional weather protection, the wall is usually plastered. See also Adobe Ceramic houses Clay panel Cob (building) Earthen plaster Lath and plaster Mudbrick Quincha Rammed earth Timber frame Citations General and cited references External links Building engineering Çatalhöyük Earth structures Neolithic Plastering Soil-based building materials Timber framing Types of wall
Wattle and daub
[ "Chemistry", "Technology", "Engineering" ]
1,719
[ "Structural engineering", "Earth structures", "Timber framing", "Building engineering", "Coatings", "Structural system", "Construction", "Types of wall", "Civil engineering", "Plastering", "Architecture" ]
18,600,440
https://en.wikipedia.org/wiki/Osmosis
Osmosis is the spontaneous net movement or diffusion of solvent molecules through a selectively-permeable membrane from a region of high water potential (region of lower solute concentration) to a region of low water potential (region of higher solute concentration), in the direction that tends to equalize the solute concentrations on the two sides. It may also be used to describe a physical process in which any solvent moves across a selectively permeable membrane (permeable to the solvent, but not the solute) separating two solutions of different concentrations. Osmosis can be made to do work. Osmotic pressure is defined as the external pressure required to prevent net movement of solvent across the membrane. Osmotic pressure is a colligative property, meaning that the osmotic pressure depends on the molar concentration of the solute but not on its identity. Osmosis is a vital process in biological systems, as biological membranes are semipermeable. In general, these membranes are impermeable to large and polar molecules, such as ions, proteins, and polysaccharides, while being permeable to non-polar or hydrophobic molecules like lipids as well as to small molecules like oxygen, carbon dioxide, nitrogen, and nitric oxide. Permeability depends on solubility, charge, or chemistry, as well as solute size. Water molecules travel through the plasma membrane, tonoplast membrane (vacuole) or organelle membranes by diffusing across the phospholipid bilayer via aquaporins (small transmembrane proteins similar to those responsible for facilitated diffusion and ion channels). Osmosis provides the primary means by which water is transported into and out of cells. The turgor pressure of a cell is largely maintained by osmosis across the cell membrane between the cell interior and its relatively hypotonic environment. History Some kinds of osmotic flow have been observed since ancient times, e.g., in the construction of Egyptian pyramids. Jean-Antoine Nollet first documented the observation of osmosis in 1748. The word "osmosis" descends from the words "endosmose" and "exosmose", which were coined by French physician René Joachim Henri Dutrochet (1776–1847) from the Greek words ἔνδον (éndon "within"), ἔξω (éxō "outer, external"), and ὠσμός (ōsmós "push, impulsion"). In 1867, Moritz Traube invented highly selective precipitation membranes, advancing the art and technique of measurement of osmotic flow. Description Osmosis is the movement of a solvent across a semipermeable membrane toward a higher concentration of solute. In biological systems, the solvent is typically water, but osmosis can occur in other liquids, supercritical liquids, and even gases. When a cell is submerged in water, the water molecules pass through the cell membrane from an area of low solute concentration to high solute concentration. For example, if the cell is submerged in saltwater, water molecules move out of the cell. If a cell is submerged in freshwater, water molecules move into the cell. When the membrane has a volume of pure water on both sides, water molecules pass in and out in each direction at exactly the same rate. There is no net flow of water through the membrane. Osmosis can be demonstrated when potato slices are added to a high salt solution. The water from inside the potato moves out to the solution, causing the potato to shrink and to lose its 'turgor pressure'. The more concentrated the salt solution, the bigger the loss in size and weight of the potato slice.
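Because osmotic pressure is colligative, it can be estimated from the molar solute concentration alone using the textbook van 't Hoff relation Π = iMRT, where i is the van 't Hoff factor, M the molarity, R the gas constant, and T the absolute temperature. The sketch below (Python) applies this relation to an illustrative saline solution; the concentration and temperature are example values, and the ideal-solution formula is only an approximation for real solutions.

```python
# Van 't Hoff estimate of osmotic pressure, Pi = i * M * R * T (ideal, dilute solutions).
# The solution parameters below are illustrative example values only.
R = 8.314  # gas constant, J mol^-1 K^-1

def osmotic_pressure_pa(van_t_hoff_i, molarity_mol_per_l, temp_kelvin):
    molar_concentration = molarity_mol_per_l * 1000.0  # convert mol/L to mol/m^3
    return van_t_hoff_i * molar_concentration * R * temp_kelvin

if __name__ == "__main__":
    # Roughly physiological saline: ~0.15 mol/L NaCl, dissociating into two ions (i ~ 2).
    pi_pa = osmotic_pressure_pa(van_t_hoff_i=2, molarity_mol_per_l=0.15, temp_kelvin=310.0)
    print(f"Estimated osmotic pressure: {pi_pa / 1e5:.1f} bar")
```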
Chemical gardens demonstrate the effect of osmosis in inorganic chemistry. Mechanism The mechanism responsible for driving osmosis has commonly been represented in biology and chemistry texts as either the dilution of water by solute (resulting in lower concentration of water on the higher solute concentration side of the membrane and therefore a diffusion of water along a concentration gradient) or by a solute's attraction to water (resulting in less free water on the higher solute concentration side of the membrane and therefore net movement of water toward the solute). Both of these notions have been conclusively refuted. The diffusion model of osmosis is rendered untenable by the fact that osmosis can drive water across a membrane toward a higher concentration of water. The "bound water" model is refuted by the fact that osmosis is independent of the size of the solute molecules—a colligative property—or how hydrophilic they are. It is difficult to describe osmosis without a mechanical or thermodynamic explanation, but essentially there is an interaction between the solute and water that counteracts the pressure that otherwise free solute molecules would exert. One fact to take note of is that heat from the surroundings is able to be converted into mechanical energy (water rising). Many thermodynamic explanations go into the concept of chemical potential and how the function of the water on the solution side differs from that of pure water due to the higher pressure and the presence of the solute counteracting such that the chemical potential remains unchanged. The virial theorem demonstrates that attraction between the molecules (water and solute) reduces the pressure, and thus the pressure exerted by water molecules on each other in solution is less than in pure water, allowing pure water to "force" the solution until the pressure reaches equilibrium. Role in living things Osmotic pressure is the main agent of support in many plants. The osmotic entry of water raises the turgor pressure exerted against the cell wall, until it equals the osmotic pressure, creating a steady state. When a plant cell is placed in a solution that is hypertonic relative to the cytoplasm, water moves out of the cell and the cell shrinks. In doing so, the cell becomes flaccid. In extreme cases, the cell becomes plasmolyzed – the cell membrane disengages with the cell wall due to lack of water pressure on it. When a plant cell is placed in a solution that is hypotonic relative to the cytoplasm, water moves into the cell and the cell swells to become turgid. Osmosis also plays a vital role in human cells by facilitating the movement of water across cell membranes. This process is crucial for maintaining proper cell hydration, as cells can be sensitive to dehydration or overhydration. In human cells, osmosis is essential for maintaining the balance of water and solutes, ensuring optimal cellular function. Imbalances in osmotic pressure can lead to cellular dysfunction, highlighting the importance of osmosis in sustaining the health and integrity of human cells. In certain environments, osmosis can be harmful to organisms. Freshwater and saltwater aquarium fish, for example, will quickly die should they be placed in water of a maladaptive salinity. The osmotic effect of table salt to kill leeches and slugs is another example of a way osmosis can cause harm to organisms. Suppose an animal or plant cell is placed in a solution of sugar or salt in water. 
If the medium is hypotonic relative to the cell cytoplasm, the cell will gain water through osmosis. If the medium is isotonic, there will be no net movement of water across the cell membrane. If the medium is hypertonic relative to the cell cytoplasm, the cell will lose water by osmosis. This means that if a cell is put in a solution which has a solute concentration higher than its own, it will shrivel, and if it is put in a solution with a lower solute concentration than its own, the cell will swell and may even burst. Factors Osmotic pressure Osmosis may be opposed by increasing the pressure in the region of high solute concentration with respect to that in the low solute concentration region. The force per unit area, or pressure, required to prevent the passage of water (or any other high-liquidity solution) through a selectively permeable membrane and into a solution of greater concentration is equivalent to the osmotic pressure of the solution, or turgor. Osmotic pressure is a colligative property, meaning that the property depends on the concentration of the solute, but not on its content or chemical identity. Osmotic gradient The osmotic gradient is the difference in concentration between two solutions on either side of a semipermeable membrane, and is used to tell the difference in percentages of the concentration of a specific particle dissolved in a solution. Usually the osmotic gradient is used while comparing solutions that have a semipermeable membrane between them allowing water to diffuse between the two solutions, toward the hypertonic solution (the solution with the higher concentration). Eventually, the force of the column of water on the hypertonic side of the semipermeable membrane will equal the force of diffusion on the hypotonic (the side with a lesser concentration) side, creating equilibrium. When equilibrium is reached, water continues to flow, but it flows both ways in equal amounts as well as force, therefore stabilizing the solution. Variation Reverse osmosis Reverse osmosis is a separation process that uses pressure to force a solvent through a semi-permeable membrane that retains the solute on one side and allows the pure solvent to pass to the other side, forcing it from a region of high solute concentration through a membrane to a region of low solute concentration by applying a pressure in excess of the osmotic pressure. This process is known primarily for its role in turning seawater into drinking water, when salt and other unwanted substances are ridded from the water molecules. Forward osmosis Osmosis may be used directly to achieve separation of water from a solution containing unwanted solutes. A "draw" solution of higher osmotic pressure than the feed solution is used to induce a net flow of water through a semi-permeable membrane, such that the feed solution becomes concentrated as the draw solution becomes dilute. The diluted draw solution may then be used directly (as with an ingestible solute like glucose), or sent to a secondary separation process for the removal of the draw solute. This secondary separation can be more efficient than a reverse osmosis process would be alone, depending on the draw solute used and the feedwater treated. Forward osmosis is an area of ongoing research, focusing on applications in desalination, water purification, water treatment, food processing, and other areas of study. Future developments in osmosis Future developments in osmosis and osmosis research hold promise for a range of applications. 
Researchers are exploring advanced materials for more efficient osmotic processes, leading to improved water desalination and purification technologies. Additionally, the integration of osmotic power generation, where the osmotic pressure difference between saltwater and freshwater is harnessed for energy, presents a sustainable and renewable energy source with significant potential. Furthermore, the field of medical research is looking at innovative drug delivery systems that utilize osmotic principles, offering precise and controlled administration of medications within the body. As technology and understanding in this field continue to evolve, the applications of osmosis are expected to expand, addressing various global challenges in water sustainability, energy generation, and healthcare. See also Brining Homeostasis Osmoregulation Osmotic shock Osmotic power Plasmolysis Reverse osmosis plant Salinity gradient power Water potential Notes References External links Osmosis simulation in Java NetLogo Osmosis simulation for educational use An Osmosis Experiment Diffusion Water technology Membrane technology
Osmosis
[ "Physics", "Chemistry" ]
2,446
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Separation processes", "Membrane technology", "Water technology" ]
18,605,319
https://en.wikipedia.org/wiki/Quark%E2%80%93gluon%20plasma
Quark–gluon plasma (QGP or quark soup) is an interacting localized assembly of quarks and gluons at thermal (local kinetic) and (close to) chemical (abundance) equilibrium. The word plasma signals that free color charges are allowed. In a 1987 summary, Léon Van Hove pointed out the equivalence of the three terms: quark gluon plasma, quark matter and a new state of matter. Since the temperature is above the Hagedorn temperature—and thus above the scale of light u,d-quark mass—the pressure exhibits the relativistic Stefan-Boltzmann format governed by temperature to the fourth power () and many practically massless quark and gluon constituents. It can be said that QGP emerges to be the new phase of strongly interacting matter which manifests its physical properties in terms of nearly free dynamics of practically massless gluons and quarks. Both quarks and gluons must be present in conditions near chemical (yield) equilibrium with their colour charge open for a new state of matter to be referred to as QGP. In the Big Bang theory, quark–gluon plasma filled the entire Universe before matter as we know it was created. Theories predicting the existence of quark–gluon plasma were developed in the late 1970s and early 1980s. Discussions around heavy ion experimentation followed suit, and the first experiment proposals were put forward at CERN and BNL in the following years. Quark–gluon plasma was detected for the first time in the laboratory at CERN in the year 2000. General introduction Quark–gluon plasma is a state of matter in which the elementary particles that make up the hadrons of baryonic matter are freed of their strong attraction for one another under extremely high energy densities. These particles are the quarks and gluons that compose baryonic matter. In normal matter quarks are confined; in the QGP quarks are deconfined. In classical quantum chromodynamics (QCD), quarks are the fermionic components of hadrons (mesons and baryons) while the gluons are considered the bosonic components of such particles. The gluons are the force carriers, or bosons, of the QCD color force, while the quarks by themselves are their fermionic matter counterparts. Quark–gluon plasma is studied to recreate and understand the high energy density conditions prevailing in the Universe when matter formed from elementary degrees of freedom (quarks, gluons) at about 20 μs after the Big Bang. Experimental groups are probing over a 'large' distance the (de)confining quantum vacuum structure, which determines prevailing form of matter and laws of nature. The experiments give insight to the origin of matter and mass: the matter and antimatter is created when the quark–gluon plasma 'hadronizes' and the mass of matter originates in the confining vacuum structure. How the quark–gluon plasma fits into the general scheme of physics QCD is one part of the modern theory of particle physics called the Standard Model. Other parts of this theory deal with electroweak interactions and neutrinos. The theory of electrodynamics has been tested and found correct to a few parts in a billion. The theory of weak interactions has been tested and found correct to a few parts in a thousand. Perturbative forms of QCD have been tested to a few percent. Perturbative models assume relatively small changes from the ground state, i.e. relatively low temperatures and densities, which simplifies calculations at the cost of generality. In contrast, non-perturbative forms of QCD have barely been tested. 
The study of the QGP, which has both a high temperature and density, is part of this effort to consolidate the grand theory of particle physics. The study of the QGP is also a testing ground for finite temperature field theory, a branch of theoretical physics which seeks to understand particle physics under conditions of high temperature. Such studies are important to understand the early evolution of our universe: the first hundred microseconds or so. It is crucial to the physics goals of a new generation of observations of the universe (WMAP and its successors). It is also of relevance to Grand Unification Theories which seek to unify the three fundamental forces of nature (excluding gravity). Reasons for studying the formation of quark–gluon plasma The generally accepted model of the formation of the Universe states that it happened as the result of the Big Bang. In this model, in the time interval of 10⁻¹⁰–10⁻⁶ s after the Big Bang, matter existed in the form of a quark–gluon plasma. It is possible to reproduce the density and temperature of matter existing at that time in laboratory conditions to study the characteristics of the very early Universe. So far, the only possibility is the collision of two heavy atomic nuclei accelerated to energies of more than a hundred GeV. Using the result of a head-on collision in the volume approximately equal to the volume of the atomic nucleus, it is possible to model the density and temperature that existed in the first instants of the life of the Universe. Relation to normal plasma A plasma is matter in which charges are screened due to the presence of other mobile charges. For example: Coulomb's Law is suppressed by the screening to yield a distance-dependent charge, Q(r) = Q exp(−r/α), i.e., the charge Q is reduced exponentially with the distance divided by a screening length α. In a QGP, the color charge of the quarks and gluons is screened. The QGP has other analogies with a normal plasma. There are also dissimilarities because the color charge is non-abelian, whereas the electric charge is abelian. Outside a finite volume of QGP the color-electric field is not screened, so that a volume of QGP must still be color-neutral. It will therefore, like a nucleus, have integer electric charge. Because of the extremely high energies involved, quark-antiquark pairs are produced by pair production and thus QGP is a roughly equal mixture of quarks and antiquarks of various flavors, with only a slight excess of quarks. This property is not a general feature of conventional plasmas, which may be too cool for pair production (see however pair instability supernova). Theory One consequence of this difference is that the color charge is too large for perturbative computations which are the mainstay of QED. As a result, the main theoretical tool to explore the theory of the QGP is lattice gauge theory. The transition temperature (approximately ) was first predicted by lattice gauge theory. Since then lattice gauge theory has been used to predict many other properties of this kind of matter. The AdS/CFT correspondence conjecture may provide insights into the QGP, moreover the ultimate goal of the fluid/gravity correspondence is to understand the QGP. The QGP is believed to be a phase of QCD which is completely locally thermalized and thus suitable for an effective fluid dynamic description.
Production Production of QGP in the laboratory is achieved by colliding heavy atomic nuclei (called heavy ions, as in an accelerator atoms are ionized) at relativistic energy in which matter is heated well above the Hagedorn temperature TH = 150 MeV per particle, which amounts to a temperature exceeding 1.66×10¹² K. This can be accomplished by colliding two large nuclei at high energy (note that is not the energy of the colliding beam). Lead and gold nuclei have been used for such collisions at CERN SPS and BNL RHIC, respectively. The nuclei are accelerated to ultrarelativistic speeds (contracting their length) and directed towards each other, creating a "fireball", in the rare event of a collision. Hydrodynamic simulation predicts this fireball will expand under its own pressure, and cool while expanding. By carefully studying the spherical and elliptic flow, experimentalists put the theory to the test. Diagnostic tools There is overwhelming evidence for production of quark–gluon plasma in relativistic heavy ion collisions. The important classes of experimental observations are Strangeness production Elliptic flow Jet quenching J/ψ melting Hanbury Brown and Twiss effect and Bose–Einstein correlations Single particle spectra (thermal photons and thermal dileptons) Expected properties Thermodynamics The cross-over temperature from the normal hadronic to the QGP phase is about . This "crossover" may not be only a qualitative feature; instead it may be connected with a true (second-order) phase transition, e.g. of the universality class of the three-dimensional Ising model. The phenomena involved correspond to an energy density of a little less than . For relativistic matter, pressure and temperature are not independent variables, so the equation of state is a relation between the energy density and the pressure. This has been found through lattice computations, and compared to both perturbation theory and string theory. This is still a matter of active research. Response functions such as the specific heat and various quark number susceptibilities are currently being computed. Flow The discovery of the perfect liquid was a turning point in physics. Experiments at RHIC have revealed a wealth of information about this remarkable substance, which we now know to be a QGP. Nuclear matter at "room temperature" is known to behave like a superfluid. When heated the nuclear fluid evaporates and turns into a dilute gas of nucleons and, upon further heating, a gas of baryons and mesons (hadrons). At the critical temperature, TH, the hadrons melt and the gas turns back into a liquid. RHIC experiments have shown that this is the most perfect liquid ever observed in any laboratory experiment at any scale. The new phase of matter, consisting of dissolved hadrons, exhibits less resistance to flow than any other known substance. The experiments at RHIC have, already in 2005, shown that the Universe at its beginning was uniformly filled with this type of material—a super-liquid—which once the Universe cooled below TH evaporated into a gas of hadrons. Detailed measurements show that this liquid is a quark–gluon plasma where quarks, antiquarks and gluons flow independently. In short, a quark–gluon plasma flows like a splat of liquid, and because it is not "transparent" with respect to quarks, it can attenuate jets emitted by collisions. Furthermore, once formed, a ball of quark–gluon plasma, like any hot object, transfers heat internally by radiation.
However, unlike in everyday objects, there is enough energy available so that gluons (particles mediating the strong force) collide and produce an excess of the heavy (i.e., high-energy) strange quarks. If, in contrast, the QGP did not exist and a pure collision occurred, the same energy would be converted into a non-equilibrium mixture containing even heavier quarks such as charm quarks or bottom quarks. The equation of state is an important input into the flow equations. The speed of sound (speed of QGP-density oscillations) is currently under investigation in lattice computations. The mean free path of quarks and gluons has been computed using perturbation theory as well as string theory. Lattice computations have been slower here, although the first computations of transport coefficients have been concluded. These indicate that the mean free time of quarks and gluons in the QGP may be comparable to the average interparticle spacing: hence the QGP is a liquid as far as its flow properties go. This is very much an active field of research, and these conclusions may evolve rapidly. The incorporation of dissipative phenomena into hydrodynamics is another active research area. Jet quenching effect Detailed predictions were made in the late 1970s for the production of jets at the CERN Super Proton–Antiproton Synchrotron. UA2 observed the first evidence for jet production in hadron collisions in 1981, which shortly after was confirmed by UA1. The subject was later revived at RHIC. One of the most striking physical effects obtained at RHIC energies is the effect of quenching jets. At the first stage of interaction of colliding relativistic nuclei, partons of the colliding nuclei give rise to secondary partons with a large transverse momentum ≥ 3–6 GeV/c. Passing through a highly heated compressed plasma, partons lose energy. The magnitude of the energy loss by the parton depends on the properties of the quark–gluon plasma (temperature, density). In addition, it is also necessary to take into account the fact that colored quarks and gluons are the elementary objects of the plasma, which differs from the energy loss by a parton in a medium consisting of colorless hadrons. Under the conditions of a quark–gluon plasma, the energy losses resulting from the RHIC energies by partons are estimated as . This conclusion is confirmed by comparing the relative yield of hadrons with a large transverse momentum in nucleon-nucleon and nucleus-nucleus collisions at the same collision energy. The energy loss by partons with a large transverse momentum in nucleon-nucleon collisions is much smaller than in nucleus-nucleus collisions, which leads to a decrease in the yield of high-energy hadrons in nucleus-nucleus collisions. This result suggests that nuclear collisions cannot be regarded as a simple superposition of nucleon-nucleon collisions. For a short time, ~1 μs, and in a finite volume, quarks and gluons form some ideal liquid. The collective properties of this fluid are manifested during its movement as a whole. Therefore, when partons move in this medium, it is necessary to take into account some collective properties of this quark–gluon liquid. Energy losses depend on the properties of the quark–gluon medium, on the parton density in the resulting fireball, and on the dynamics of its expansion. Losses of energy by light and heavy quarks during the passage of a fireball turn out to be approximately the same.
In November 2010, CERN announced the first direct observation of jet quenching, based on experiments with heavy-ion collisions. Direct photons and dileptons are arguably the most penetrating tools to study relativistic heavy ion collisions. They are produced by various mechanisms spanning the space-time evolution of the strongly interacting fireball. They in principle provide a snapshot of the initial stage as well. They are hard to decipher and interpret as most of the signal originates from hadron decays long after the QGP fireball has disintegrated. Glasma hypothesis Since 2008, there is a discussion about a hypothetical precursor state of the quark–gluon plasma, the so-called "Glasma", where the dressed particles are condensed into some kind of glassy (or amorphous) state, below the genuine transition between the confined state and the plasma liquid. This would be analogous to the formation of metallic glasses, or amorphous alloys of them, below the genuine onset of the liquid metallic state. Although the experimental high temperatures and densities predicted as producing a quark–gluon plasma have been realized in the laboratory, the resulting matter does not behave as a quasi-ideal state of free quarks and gluons, but, rather, as an almost perfect dense fluid. Actually, the fact that the quark–gluon plasma will not yet be "free" at temperatures realized at present accelerators was predicted in 1984, as a consequence of the remnant effects of confinement. Neutron stars It has been hypothesized that the core of some massive neutron stars may be a quark–gluon plasma. In-laboratory formation of deconfined matter A quark–gluon plasma (QGP) or quark soup is a state of matter in quantum chromodynamics (QCD) which exists at extremely high temperature and/or density. This state is thought to consist of asymptotically free strong-interacting quarks and gluons, which are ordinarily confined by color confinement inside atomic nuclei or other hadrons. This is in analogy with the conventional plasma where nuclei and electrons, confined inside atoms by electrostatic forces at ambient conditions, can move freely. Experiments to create artificial quark matter started at CERN in 1986/87, resulting in the first claims, which were published in 1991. It took several years before the idea became accepted in the community of particle and nuclear physicists. Formation of a new state of matter in Pb–Pb collisions was officially announced at CERN in view of the convincing experimental results presented by the CERN SPS WA97 experiment in 1999, and later elaborated by Brookhaven National Laboratory's Relativistic Heavy Ion Collider. Quark matter can only be produced in minute quantities and is unstable and impossible to contain, and will radioactively decay within a fraction of a second into stable particles through hadronization; the produced hadrons or their decay products and gamma rays can then be detected. In the quark matter phase diagram, QGP is placed in the high-temperature, high-density regime, whereas ordinary matter is a cold and rarefied mixture of nuclei and vacuum, and the hypothetical quark stars would consist of relatively cold, but dense quark matter. It is believed that up to a few microseconds (10⁻¹² to 10⁻⁶ seconds) after the Big Bang, known as the quark epoch, the Universe was in a quark–gluon plasma state.
The strength of the color force means that unlike the gas-like plasma, quark–gluon plasma behaves as a near-ideal Fermi liquid, although research on flow characteristics is ongoing. Liquid or even near-perfect liquid flow with almost no frictional resistance or viscosity was claimed by research teams at RHIC and LHC's Compact Muon Solenoid detector. QGP differs from a "free" collision event by several features; for example, its particle content is indicative of a temporary chemical equilibrium producing an excess of middle-energy strange quarks vs. a nonequilibrium distribution mixing light and heavy quarks ("strangeness production"), and it does not allow particle jets to pass through ("jet quenching"). Experiments at CERN's Super Proton Synchrotron (SPS) to create QGP began in the 1980s and 1990s: the results led CERN to announce evidence for a "new state of matter" in 2000. Scientists at Brookhaven National Laboratory's Relativistic Heavy Ion Collider announced they had created quark–gluon plasma by colliding gold ions at nearly the speed of light, reaching temperatures of 4 trillion degrees Celsius. Current experiments (2017) at the Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) on Long Island (New York, USA) and at CERN's recent Large Hadron Collider near Geneva (Switzerland) are continuing this effort, by colliding relativistically accelerated gold and other ion species (at RHIC) or lead (at LHC) with each other or with protons. Three experiments running on CERN's Large Hadron Collider (LHC), on the spectrometers ALICE, ATLAS and CMS, have continued studying the properties of QGP. CERN temporarily ceased colliding protons, and began colliding lead ions for the ALICE experiment in 2011, in order to create a QGP. A new record-breaking temperature was set by ALICE: A Large Ion Collider Experiment at CERN in August 2012, in the range of 5.5 trillion (5.5×10¹²) kelvin, as claimed in their Nature press release. The formation of a quark–gluon plasma occurs as a result of a strong interaction between the partons (quarks, gluons) that make up the nucleons of the colliding heavy nuclei called heavy ions. Therefore, experiments are referred to as relativistic heavy ion collision experiments. Theoretical and experimental work shows that the formation of a quark–gluon plasma occurs at the temperature of T ≈ 150–160 MeV, the Hagedorn temperature, and an energy density of ≈ 0.4–1 GeV/fm³. While at first a phase transition was expected, present day theoretical interpretations propose a phase transformation similar to the process of ionisation of normal matter into ionic and electron plasma. Quark–gluon plasma and the onset of deconfinement The central issue of the formation of a quark–gluon plasma is the search for the onset of deconfinement. From the beginning of the research on formation of QGP, the issue was whether a sufficient energy density can be achieved in nucleus-nucleus collisions. This depends on how much energy each nucleon loses. An influential reaction picture was the scaling solution presented by Bjorken. This model applies to ultra-high energy collisions. In experiments carried out at CERN SPS and BNL RHIC a more complex situation arose, usually divided into three stages: Primary parton collisions and baryon stopping at the time of complete overlapping of the colliding nuclei. Redistribution of particle energy and new particles born in the QGP fireball. The fireball of QGP matter equilibrates and expands before hadronizing.
More and more experimental evidence points to the strength of QGP formation mechanisms—operating even in LHC-energy scale proton-proton collisions. Further reading Books Review articles with a historical perspective of the field See also Color confinement Color-glass condensate Hadrons (that is mesons and baryons) Hadronization Hagedorn temperature Neutron star Plasma physics QCD matter Quantum electrodynamics Quantum chromodynamics Quantum hydrodynamics Relativistic plasma Relativistic nuclear collision Strangeness production Strange matter List of unsolved problems in physics References External links The Relativistic Heavy Ion Collider at Brookhaven National Laboratory The Alice Experiment at CERN The Indian Lattice Gauge Theory Initiative Quark matter reviews: 2004 theory, 2004 experiment Quark–Gluon Plasma reviews: 2011 theory Lattice reviews: 2003, 2005 BBC article mentioning Brookhaven results (2005) Physics News Update article on the quark–gluon liquid, with links to preprints "Hadrons and Quark–Gluon Plasma" by Jean Letessier and Johann Rafelski Cambridge University Press (2002) , Cambridge, UK; Quark matter Phases of matter Exotic matter Gluons
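As a small numerical cross-check of the temperature scales quoted above (a Hagedorn temperature of order 150 MeV against temperatures of order 10¹² K), the following Python sketch converts MeV to kelvin using the Boltzmann constant; it is only an order-of-magnitude illustration, not a statement of the precise crossover value.

# Convert a temperature quoted in MeV (usual in heavy-ion physics) to kelvin:
# T[K] = E[MeV] / k_B, with the Boltzmann constant expressed in MeV per kelvin.
K_B_MEV_PER_K = 8.617333e-11   # Boltzmann constant in MeV/K

def mev_to_kelvin(t_mev: float) -> float:
    return t_mev / K_B_MEV_PER_K

for t in (150.0, 160.0):       # the crossover range quoted in the article
    print(f"{t:.0f} MeV ≈ {mev_to_kelvin(t):.2e} K")
# 150 MeV ≈ 1.74e+12 K, i.e. of the order of a trillion kelvin, consistent with
# the "exceeding 1.66×10¹² K" figure given above.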
Quark–gluon plasma
[ "Physics", "Chemistry" ]
4,787
[ "Quark matter", "Phases of matter", "Astrophysics", "Exotic matter", "Nuclear physics", "Matter" ]
20,842,009
https://en.wikipedia.org/wiki/King%20Abdullah%20Canal
The King Abdullah Canal is the largest irrigation canal system in Jordan and runs parallel to the east bank of the Jordan River. It was previously known as the East Ghor Main Canal and renamed in 1987 after Abdullah I of Jordan. Water sources and technical features The main water source for the King Abdullah Canal (KAC) is the Yarmouk River and the Al-Mukhaibeh wells within the Yarmouk valley: farther south, additional water flows from Wadi el-Arab and from the Zarqa River, and its reservoir behind King Talal Dam. As a result of the 1994 Israel–Jordan peace treaty, some Yarmouk River water is also stored seasonally in Lake Tiberias, being conveyed through a pipe. The canal's design capacity is 20 m3/second at the northern entrance of the Canal and 2.3 m3/second at its southern end. Water flows by gravity along its 110 km length, ranging in elevation from about 230 meters below sea level to almost 400 meters below. The Canal supplies water for irrigation and 90 million cubic meters/year of drinking water for Greater Amman through the Deir Allah-Amman carrier, which has been constructed in two phases in the mid-80s and in the early 2000s. The Zarqa River contains a mixture of treated wastewater and natural water flow, which influences the water quality downstream of the Zarqa River intake into the KAC. History The canal was designed in 1957 and was built in phases. Construction began in 1959, and the first section was completed in 1961. By 1966, the upstream portion to Wadi Zarqa was completed. The canal was then 70 km in length, and was subsequently extended three times between 1969 and 1987. The United States, through United States Agency for International Development (USAID) provided financing for the initial phase of project, after obtaining explicit assurances from the Jordanian government that Jordan would not withdraw more water from the Yarmouk than the amount allocated to it according to the Johnston Plan. It was also involved in later phases. The original canal was part of a larger project - the Greater Yarmouk project - which envisioned two storage dams on the Yarmouk, and a future West Ghor Canal, on the West Bank of the Jordan. This other canal was never built, because Israel captured the West Bank from Jordan during the 1967 Six-Day War. After the Six-Day War, the Palestine Liberation Organization (PLO) operated from bases within Jordan, and launched several attacks on Israeli settlements in the Jordan Valley, including attacks on water facilities. Israel responded with raids in Jordan, in an attempt to force king Hussein to rein in the PLO. The canal was the target of at least four of these raids, and was virtually knocked out of commission. The United States intervened to resolve the conflict, and the canal was repaired after Hussein undertook to stop PLO activity in the area. References Irrigation projects Water politics in the Middle East Jordan River Canals Water supply and sanitation in Jordan
King Abdullah Canal
[ "Engineering" ]
604
[ "Irrigation projects" ]
20,842,572
https://en.wikipedia.org/wiki/Acrylic%20rubber
Acrylic rubber, known by the chemical name alkyl acrylate copolymer (ACM) or the tradename HyTemp, is a type of rubber that has outstanding resistance to hot oil and oxidation. It is one of the specialty rubbers. It has a continuous working temperature of and an intermittent limit of . ACM is polar and lacks unsaturation. It is resistant to ozone and has low permeability to gases. Its disadvantage is its low resistance to moisture, acids, and bases. It should not be used in temperatures below . It is commonly used in automotive transmissions and hoses. It is also used in shaft seals, adhesives, beltings, gaskets and O-rings. It is used in vibration damping mounts due to its damping properties. See also Copolymer References Rubber Elastomers
Acrylic rubber
[ "Physics", "Chemistry" ]
177
[ "Materials stubs", "Synthetic materials", "Elastomers", "Materials", "Matter" ]
20,845,567
https://en.wikipedia.org/wiki/Gauge%20gravitation%20theory
In quantum field theory, gauge gravitation theory is the effort to extend Yang–Mills theory, which provides a universal description of the fundamental interactions, to describe gravity. Gauge gravitation theory should not be confused with the similarly named gauge theory gravity, which is a formulation of (classical) gravitation in the language of geometric algebra. Nor should it be confused with Kaluza–Klein theory, where the gauge fields are used to describe particle fields, but not gravity itself. Overview The first gauge model of gravity was suggested by Ryoyu Utiyama (1916–1990) in 1956 just two years after birth of the gauge theory itself. However, the initial attempts to construct the gauge theory of gravity by analogy with the gauge models of internal symmetries encountered a problem of treating general covariant transformations and establishing the gauge status of a pseudo-Riemannian metric (a tetrad field). In order to overcome this drawback, representing tetrad fields as gauge fields of the translation group was attempted. Infinitesimal generators of general covariant transformations were considered as those of the translation gauge group, and a tetrad (coframe) field was identified with the translation part of an affine connection on a world manifold . Any such connection is a sum of a linear world connection and a soldering form where is a non-holonomic frame. For instance, if is the Cartan connection, then is the canonical soldering form on . There are different physical interpretations of the translation part of affine connections. In gauge theory of dislocations, a field describes a distortion. At the same time, given a linear frame , the decomposition motivates many authors to treat a coframe as a translation gauge field. Difficulties of constructing gauge gravitation theory by analogy with the Yang–Mills one result from the gauge transformations in these theories belonging to different classes. In the case of internal symmetries, the gauge transformations are just vertical automorphisms of a principal bundle leaving its base fixed. On the other hand, gravitation theory is built on the principal bundle of the tangent frames to . It belongs to the category of natural bundles for which diffeomorphisms of the base canonically give rise to automorphisms of . These automorphisms are called general covariant transformations. General covariant transformations are sufficient in order to restate Einstein's general relativity and metric-affine gravitation theory as the gauge ones. In terms of gauge theory on natural bundles, gauge fields are linear connections on a world manifold , defined as principal connections on the linear frame bundle , and a metric (tetrad) gravitational field plays the role of a Higgs field responsible for spontaneous symmetry breaking of general covariant transformations. Spontaneous symmetry breaking is a quantum effect when the vacuum is not invariant under the transformation group. In classical gauge theory, spontaneous symmetry breaking occurs if the structure group of a principal bundle is reducible to a closed subgroup , i.e., there exists a principal subbundle of with the structure group . By virtue of the well-known theorem, there exists one-to-one correspondence between the reduced principal subbundles of with the structure group and the global sections of the quotient bundle . These sections are treated as classical Higgs fields. 
The idea of the pseudo-Riemannian metric as a Higgs field appeared while constructing non-linear (induced) representations of the general linear group , of which the Lorentz group is a Cartan subgroup. The geometric equivalence principle postulating the existence of a reference frame in which Lorentz invariants are defined on the whole world manifold is the theoretical justification for the reduction of the structure group of the linear frame bundle to the Lorentz group. Then the very definition of a pseudo-Riemannian metric on a manifold as a global section of the quotient bundle leads to its physical interpretation as a Higgs field. The physical reason for world symmetry breaking is the existence of Dirac fermion matter, whose symmetry group is the universal two-sheeted covering of the restricted Lorentz group, . See also References Bibliography Gauge theories Theories of gravity
Gauge gravitation theory
[ "Physics" ]
855
[ "Theoretical physics", "Theories of gravity" ]
20,847,164
https://en.wikipedia.org/wiki/Stone%20functor
In mathematics, the Stone functor is a functor S: Top^op → Bool, where Top is the category of topological spaces and Bool is the category of Boolean algebras and Boolean homomorphisms. It assigns to each topological space X the Boolean algebra S(X) of its clopen subsets, and to each morphism f^op: X → Y in Top^op (i.e., a continuous map f: Y → X) the homomorphism S(f): S(X) → S(Y) given by S(f)(Z) = f⁻¹[Z]. See also Stone's representation theorem for Boolean algebras Pointless topology References Abstract and Concrete Categories. The Joy of Cats. Jiri Adámek, Horst Herrlich, George E. Strecker. Peter T. Johnstone, Stone Spaces. (1982) Cambridge University Press Functors Boolean algebra General topology
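To make the definition above concrete, the following Python sketch checks, for small finite discrete spaces (where every subset is clopen), that taking preimages under a map is a Boolean-algebra homomorphism, i.e. it preserves unions, intersections and complements; the particular spaces and map are invented for illustration.

from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Finite discrete spaces: every subset is clopen, so the Boolean algebra S(X)
# of clopen subsets is just the power set of X.
X = frozenset({0, 1, 2})
Y = frozenset({'a', 'b'})

# A map f: Y -> X (automatically continuous for discrete spaces).
# The Stone functor sends it to S(f): S(X) -> S(Y), S(f)(Z) = preimage of Z.
f = {'a': 0, 'b': 2}

def S_f(Z):
    """S(f)(Z) = f^-1[Z], the preimage of a clopen subset Z of X."""
    return frozenset(y for y in Y if f[y] in Z)

# Check that S(f) preserves meets, joins and complements.
for A in powerset(X):
    for B in powerset(X):
        assert S_f(A & B) == S_f(A) & S_f(B)
        assert S_f(A | B) == S_f(A) | S_f(B)
    assert S_f(X - A) == Y - S_f(A)

print("f^-1[-] preserves the Boolean operations on clopen sets")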
Stone functor
[ "Mathematics" ]
194
[ "Boolean algebra", "General topology", "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical logic", "Mathematical objects", "Topology stubs", "Fields of abstract algebra", "Topology", "Mathematical relations", "Category theory", "Functors" ]
20,849,954
https://en.wikipedia.org/wiki/Amino%20acid%20neurotransmitter
An amino acid neurotransmitter is an amino acid which is able to transmit a nerve message across a synapse. Neurotransmitters (chemicals) are packaged into vesicles that cluster beneath the axon terminal membrane on the presynaptic side of a synapse in a process called endocytosis. Amino acid neurotransmitter release (exocytosis) is dependent upon calcium (Ca²⁺) and is a presynaptic response. Types Excitatory amino acids (EAA) will activate post-synaptic cells. Inhibitory amino acids (IAA) depress the activity of post-synaptic cells. See also Amino acid non-protein functions Monoamine neurotransmitter References Neurochemistry Molecular neuroscience Amino acids Acidic amino acids Neurotransmitters
Amino acid neurotransmitter
[ "Chemistry", "Biology" ]
177
[ "Biomolecules by chemical classification", "Neurotransmitters", "Amino acids", "Molecular neuroscience", "Molecular biology", "Biochemistry", "Neurochemistry" ]
790,283
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28temperature%29
List of orders of magnitude for temperature Detailed list for 100 K to 1000 K Most ordinary human activity takes place at temperatures of this order of magnitude. Circumstances where water naturally occurs in liquid form are shown in light grey. SI multiples References External links Online Temperature Conversion Temperature Threshold temperatures
Orders of magnitude (temperature)
[ "Physics", "Chemistry", "Mathematics" ]
57
[ "Physical phenomena", "Phase transitions", "Quantity", "Threshold temperatures", "Orders of magnitude", "Units of measurement" ]
790,468
https://en.wikipedia.org/wiki/Pulfrich%20effect
The Pulfrich effect is a psychophysical percept wherein lateral motion of an object in the field of view is interpreted by the visual cortex as having a depth component, due to a relative difference in signal timings between the two eyes. Overview The effect is generally induced by placing a dark filter over one eye but can also occur spontaneously in several eye diseases such as cataract, optic neuritis, or multiple sclerosis. In such cases, symptoms such as difficulties judging the paths of oncoming cars have been reported. The phenomenon is named for German physicist Carl Pulfrich, who first described it in 1922. Carl Pulfrich was the brother-in-law of Heinrich Hertz. The effect has been exploited as the basis for some television, film, and game 3D presentations. Demonstration In the classic Pulfrich effect experiment, a subject views a pendulum swinging in a plane perpendicular to the observer's line of sight. When a neutral density filter (a darkened lens—typically gray) is placed in front of, say, the right eye, the pendulum seems to take on an elliptical orbit, appearing closer as it swings toward the right and farther as it swings toward the left, so that if it were to theoretically be viewed from above, it would appear to be revolving counterclockwise. Conversely, if the left eye is covered, the pendulum would appear to be revolving clockwise-from-top, appearing closer as it swings toward the left and farther as it swings toward the right. A similar effect can be achieved by using a stationary camera and continuously rotating an otherwise stationary object. If the movement stops, the eye looking through the dark lens (which could be either eye depending on the direction the camera is moving) will "catch up" and the effect will disappear. One advantage of this system is that people not wearing the glasses will see a perfectly normal picture. Explanation The widely accepted explanation of the apparent depth is that a reduction in retinal illumination (relative to the fellow eye) yields a corresponding delay in signal transmission, imparting instantaneous spatial disparity in moving objects. This seems to occur because visual system latencies are generally shorter (i.e., the visual system responds more quickly) for bright targets as compared to dim targets. This motion with depth is the visual system's solution to a moving target when a difference in retinal illuminance, and hence a difference in signal latencies, exists between the two eyes. The Pulfrich effect has typically been measured under full field conditions with dark targets on a bright background, and yields about a 15 ms delay for a factor of ten difference in average retinal illuminance. These delays increase monotonically with decreased luminance over a wide (> 6 log-units) range of luminance. The effect is also seen with bright targets on a black background and exhibits the same luminance-to-latency relationship. Use in stereoscopy The Pulfrich effect has been utilized to enable a type of stereoscopy, or 3-D visual effect, in visual media such as film and TV. As in other kinds of stereoscopy, glasses are used to create the illusion of a three-dimensional image. By placing a neutral filter (e.g., the darkened lens from a pair of sunglasses) over one eye, an image, as it moves right to left (or left to right, but not up and down) will appear to move in depth, either toward or away from the viewer. 
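The latency explanation above suggests a simple back-of-the-envelope model: a target moving laterally at speed v and viewed with an interocular delay Δt acquires an instantaneous offset of roughly v·Δt between the two eyes' views, which ordinary stereo geometry converts into apparent depth. The following Python sketch works through that arithmetic; only the ~15 ms delay figure comes from the text above, while the target speed, viewing distance and interocular separation are illustrative assumptions.

import math

# Back-of-the-envelope Pulfrich model: a laterally moving target viewed with an
# interocular latency difference dt acquires an offset of about v * dt between
# the two eyes' views, which simple stereo geometry turns into apparent depth.
dt = 0.015     # s, ~15 ms delay for a tenfold difference in retinal illuminance
v = 0.5        # m/s, lateral speed of the target (illustrative)
D = 1.0        # m, viewing distance (illustrative)
ipd = 0.063    # m, interocular distance (typical adult value, illustrative)

offset = v * dt                        # lateral offset between the two eyes' views, in metres at the target distance
angular_disparity = offset / D         # small-angle approximation, in radians
delta_depth = offset * D / (ipd + offset)   # apparent depth shift from intersecting the two lines of sight

print(f"offset between the eyes' views : {offset * 1000:.1f} mm")
print(f"angular disparity              : {math.degrees(angular_disparity) * 60:.1f} arcmin")
print(f"apparent depth shift           : {delta_depth * 100:.1f} cm toward/away from the viewer")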
Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique. For example, it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. Incidental movement of objects will create spurious artifacts, and these incidental effects will be seen as artificial depth not related to actual depth in the scene. Many of the applications of Pulfrich involve deliberately causing just this sort of effect, which has given the technique a bad reputation. When the only movement is lateral movement of the camera then the effect is as real as any other form of stereoscopy, but this seldom happens except in highly contrived situations. It can, however, be effective as a novelty effect in contrived visual scenarios. One advantage of material produced to take advantage of the Pulfrich effect is that it is fully backward-compatible with "regular" viewing; unlike stereoscopic (two-image) video, a 3D Pulfrich effect only has one image and as a result does not produce the ghosting effect for those not wearing glasses or the color distortion of technologies such as anaglyph. The Pulfrich effect can also be achieved by wearing a sunglass lens over one eye, and since sunglasses are very common, the need to distribute "special" 3D glasses is reduced. The effect achieved a small degree of popularity in television in the late 1980s and 1990s. On Sunday, January 22, 1989 the Super Bowl XXIII halftime show and a specially produced commercial for Diet Coke were telecast using this effect. In the commercial, objects moving in one direction appeared to be nearer to the viewer (actually in front of the television screen) and when moving in the other direction, appeared to be farther from the viewer (behind the television screen). Forty million pairs of paper-framed 3D viewing "glasses" were distributed by Coca-Cola USA for the event (though they were originally produced and intended for a May 1988 3D episode of Moonlighting that never finished production due to a writer's strike). The right eye's filter was grayed purple (resembling red wine color), while the left was very light amber (resembling white wine color). These colors complemented each other to produce the Pulfrich effect while avoiding distortion in the broadcast's natural colors. The commercial was in this case restricted to objects (such as refrigerators and skateboarders) moving down a steep hill from left to right across the screen, a directional dependency determined by which eye was covered by the darker filter. The commercial was said to be created using Nuoptix 3D technology to create the Pulfrich effect. Examples The effect was also used well throughout the whole 1993 Doctor Who charity special Dimensions in Time and in dream sequences of the 1997 3rd Rock from the Sun two-part season 2 finale Nightmare on Dick Street. In many countries in Europe, a series of short 3D films, produced in the Netherlands, were shown on television. Glasses were sold at a chain of petrol stations. These short films were mainly travelogues of Dutch localities. A Power Rangers Lightspeed Rescue movie called Power Rangers in 3D: Triple Force (later broadcast as two-part Trakeena's Revenge) sold on VHS through McDonald's purportedly used "Circlescan 4D" technology, which is based on the Pulfrich effect, but there was very little 3D present. 
In the United States and Canada, six million 3D Pulfrich glasses were distributed to viewers for an episode of Discovery Channel's Shark Week in 2000. Animated programs that employed the Pulfrich effect in specific segments of its programs include Yo Yogi!, The Bots Master, and Space Strikers; they typically achieved the effect through the use of constantly moving background and foreground layers. In France, "Le Magazine de la Santé", a long-lasting popular medicine TV-show, has extensively presented the effect in October 2016, inviting its viewers "to see the program in 3D for the first time". Some episodes of the Italian/German TV game show "Tutti Frutti" utilised the effect. One of the showgirls stripped topless while others danced around her in an anticlockwise pattern, while two additional rear layers were created by graphics moving at different speeds. It is not known how viewing glasses were distributed. Episodes are widely available on the internet, but only a few use the Pulfrich effect. The video game Orb-3D for the Nintendo Entertainment System used the effect (by having the player's ship always moving) and came packed with a pair of glasses. So did Jim Power: The Lost Dimension in 3-D for the Super NES, using constantly scrolling backgrounds to cause the effect. References External links EP0325019 - patent using the Pulfrich effect Optical illusions 3D imaging
Pulfrich effect
[ "Physics" ]
1,727
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
790,823
https://en.wikipedia.org/wiki/Factorial%20moment
In probability theory, the factorial moment is a mathematical quantity defined as the expectation or average of the falling factorial of a random variable. Factorial moments are useful for studying non-negative integer-valued random variables, and arise in the use of probability-generating functions to derive the moments of discrete random variables. Factorial moments serve as analytic tools in the mathematical field of combinatorics, which is the study of discrete mathematical structures. Definition For a natural number r, the r-th factorial moment of a probability distribution on the real or complex numbers, or, in other words, a random variable X with that probability distribution, is E[(X)_r] = E[X(X − 1)(X − 2)⋯(X − r + 1)], where E is the expectation (operator) and (X)_r = X(X − 1)(X − 2)⋯(X − r + 1) is the falling factorial, which gives rise to the name, although the notation varies depending on the mathematical field. Of course, the definition requires that the expectation is meaningful, which is the case, for example, if X ≥ 0 or if E[|(X)_r|] is finite. If X is the number of successes in n trials, and p_r is the probability that any given r of the trials are all successes, then E[(X)_r] = n(n − 1)⋯(n − r + 1) p_r. Examples Poisson distribution If a random variable X has a Poisson distribution with parameter λ, then the factorial moments of X are E[(X)_r] = λ^r, which are simple in form compared to its moments, which involve Stirling numbers of the second kind. Binomial distribution If a random variable X has a binomial distribution with success probability p and number of trials n, then the factorial moments of X are E[(X)_r] = (n)_r p^r = n(n − 1)⋯(n − r + 1) p^r, where by convention, (n)_r is understood to be zero if r > n. Hypergeometric distribution If a random variable X has a hypergeometric distribution with population size N, number of success states K in the population, and n draws, then the factorial moments of X are E[(X)_r] = (K)_r (n)_r / (N)_r. Beta-binomial distribution If a random variable X has a beta-binomial distribution with parameters α, β, and number of trials n, then the factorial moments of X are E[(X)_r] = (n)_r B(α + r, β) / B(α, β), where B denotes the beta function. Calculation of moments The rth raw moment of a random variable X can be expressed in terms of its factorial moments by the formula E[X^r] = Σ_{j=0}^{r} S(r, j) E[(X)_j], where the S(r, j), conventionally written with curly braces, denote Stirling numbers of the second kind. See also Factorial moment measure Moment (mathematics) Cumulant Factorial moment generating function Notes References Moment (mathematics) Factorial and binomial topics
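The Poisson and binomial formulas above are easy to verify numerically. The following Python sketch computes factorial moments directly from the probability mass functions and compares them with λ^r and (n)_r p^r; the parameter values and the truncation point of the Poisson tail are arbitrary choices for illustration.

from math import comb, exp, factorial, prod

def falling_factorial(x, r):
    # x(x - 1)...(x - r + 1); equals 1 when r == 0
    return prod(x - k for k in range(r))

def factorial_moment(pmf, r):
    # E[(X)_r] computed directly from a probability mass function given as {value: prob}
    return sum(p * falling_factorial(x, r) for x, p in pmf.items())

# Poisson(lam), truncated far enough into the tail that the neglected mass is negligible.
lam = 3.0
poisson_pmf = {k: exp(-lam) * lam**k / factorial(k) for k in range(60)}

# Binomial(n, p)
n, p = 10, 0.3
binom_pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

for r in range(1, 5):
    pois = factorial_moment(poisson_pmf, r)
    binom = factorial_moment(binom_pmf, r)
    print(f"r={r}: Poisson {pois:.4f} vs lam^r = {lam**r:.4f}; "
          f"Binomial {binom:.4f} vs (n)_r p^r = {falling_factorial(n, r) * p**r:.4f}")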
Factorial moment
[ "Physics", "Mathematics" ]
431
[ "Mathematical analysis", "Moments (mathematics)", "Factorial and binomial topics", "Physical quantities", "Combinatorics", "Moment (physics)" ]
791,710
https://en.wikipedia.org/wiki/Partial%20agonist
In pharmacology, partial agonists are drugs that bind to and activate a given receptor, but have only partial efficacy at the receptor relative to a full agonist. They may also be considered ligands which display both agonistic and antagonistic effects—when both a full agonist and partial agonist are present, the partial agonist actually acts as a competitive antagonist, competing with the full agonist for receptor occupancy and producing a net decrease in the receptor activation observed with the full agonist alone. Clinically, partial agonists can be used to activate receptors to give a desired submaximal response when inadequate amounts of the endogenous ligand are present, or they can reduce the overstimulation of receptors when excess amounts of the endogenous ligand are present. Some currently common drugs that have been classed as partial agonists at particular receptors include buspirone, aripiprazole, buprenorphine, nalmefene and norclozapine. Examples of ligands activating peroxisome proliferator-activated receptor gamma as partial agonists are honokiol and falcarindiol. Delta 9-tetrahydrocannabivarin (THCV) is a partial agonist at CB2 receptors and this activity might be implicated in ∆9-THCV-mediated anti-inflammatory effects. Additionally, Delta-9-Tetrahydrocannabinol (THC) is a partial agonist at both the CB1 and CB2 receptors, with the former being responsible for its psychoactive effects. See also Competitive antagonist Intrinsic sympathomimetic activity of beta blockers Inverse agonist Mixed agonist/antagonist References Pharmacodynamics
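The claim above that a partial agonist acts as a competitive antagonist in the presence of a full agonist can be illustrated with a toy receptor model. The following Python sketch assumes simple competitive binding and a response equal to occupancy weighted by intrinsic efficacy; the affinity and efficacy values are invented for illustration and do not describe any particular drug.

# Toy two-ligand competition model: fractional occupancies from competitive binding,
# net response = occupancy weighted by intrinsic efficacy (1.0 for the full agonist,
# <1.0 for the partial agonist). All parameter values are illustrative.

def response(a, ka, p, kp, efficacy_partial=0.4):
    """Net response (0..1) with full agonist at concentration a (affinity ka)
    and partial agonist at concentration p (affinity kp)."""
    denom = 1 + a / ka + p / kp
    occ_full = (a / ka) / denom
    occ_partial = (p / kp) / denom
    return 1.0 * occ_full + efficacy_partial * occ_partial

ka = kp = 1.0   # equal affinities, arbitrary units

# Partial agonist alone: activates the receptor, but only up to its partial efficacy.
print("partial agonist alone :", round(response(0.0, ka, 100.0, kp), 3))   # ~0.4

# Saturating full agonist alone: near-maximal response.
print("full agonist alone    :", round(response(100.0, ka, 0.0, kp), 3))   # ~0.99

# Adding the partial agonist on top of the full agonist lowers the net response:
# it competes for receptors that it then only partially activates.
print("full + partial        :", round(response(100.0, ka, 100.0, kp), 3)) # ~0.7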
Partial agonist
[ "Chemistry" ]
362
[ "Pharmacology", "Pharmacology stubs", "Pharmacodynamics", "Medicinal chemistry stubs" ]
792,684
https://en.wikipedia.org/wiki/Granularity
Granularity (also called graininess) is the degree to which a material or system is composed of distinguishable pieces, "granules" or "grains" (metaphorically). It can either refer to the extent to which a larger entity is subdivided, or the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities. Precision and ambiguity Coarse-grained materials or systems have fewer, larger discrete components than fine-grained materials or systems. A coarse-grained description of a system regards large subcomponents. A fine-grained description regards smaller components of which the larger ones are composed. The concepts granularity, coarseness, and fineness are relative, and are used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations, a list of all states/provinces in those nations, a list of all cities in those states, etc. Physics A fine-grained description of a system is a detailed, exhaustive, low-level model of it. A coarse-grained description is a model where some of this fine detail has been smoothed over or averaged out. The replacement of a fine-grained description with a lower-resolution coarse-grained model is called coarse-graining. (See for example the second law of thermodynamics) Molecular dynamics In molecular dynamics, coarse graining consists of replacing an atomistic description of a biological molecule with a lower-resolution coarse-grained model that averages or smooths away fine details. Coarse-grained models have been developed for investigating the longer time- and length-scale dynamics that are critical to many biological processes, such as lipid membranes and proteins. These concepts apply not only to biological molecules but also to inorganic molecules. Coarse graining may remove certain degrees of freedom, such as the vibrational modes between two atoms, or represent the two atoms as a single particle. The extent to which a system may be coarse-grained is bounded simply by the accuracy required in the dynamics and structural properties one wishes to replicate. This modern area of research is in its infancy, and although it is commonly used in biological modeling, the analytic theory behind it is poorly understood. Computing In parallel computing, granularity means the amount of computation in relation to communication, i.e., the ratio of computation to the amount of communication. Fine-grained parallelism means individual tasks are relatively small in terms of code size and execution time. The data is transferred among processors frequently in amounts of one or a few memory words. Coarse-grained is the opposite: data is communicated infrequently, after larger amounts of computation. The finer the granularity, the greater the potential for parallelism and hence speed-up, but the greater the overheads of synchronization and communication. Granularity disintegrators exist as well and are important to understand in order to determine the appropriate level of granularity. In order to attain the best parallel performance, the best balance between load and communication overhead needs to be found. If the granularity is too fine, the performance can suffer from the increased communication overhead. On the other hand, if the granularity is too coarse, the performance can suffer from load imbalance. Reconfigurable computing and supercomputing In reconfigurable computing and in supercomputing these terms refer to the data path width.
The use of about one-bit wide processing elements like the configurable logic blocks (CLBs) in an FPGA is called fine-grained computing or fine-grained reconfigurability, whereas using wide data paths, such as, for instance, 32 bits wide resources, like microprocessor CPUs or data-stream-driven data path units (DPUs) like in a reconfigurable datapath array (rDPA) is called coarse-grained computing or coarse-grained reconfigurability. Data and information The granularity of data refers to the size in which data fields are sub-divided. For example, a postal address can be recorded, with coarse granularity, as a single field: address = 200 2nd Ave. South #358, St. Petersburg, FL 33701-4313 USA or with fine granularity, as multiple fields: street address = 200 2nd Ave. South #358 city = St. Petersburg state = FL postal code = 33701-4313 country = USA or even finer granularity: street = 2nd Ave. South address number = 200 suite/apartment = #358 city = St. Petersburg state = FL postal-code = 33701 postal-code-add-on = 4313 country = USA Finer granularity has overheads for data input and storage. This manifests itself in a higher number of objects and methods in the object-oriented programming paradigm or more subroutine calls for procedural programming and parallel computing environments. It does however offer benefits in flexibility of data processing in treating each data field in isolation if required. A performance problem caused by excessive granularity may not reveal itself until scalability becomes an issue. Within database design and data warehouse design, data grain can also refer to the smallest combination of columns in a table which makes the rows (also called records) unique. See also Complex systems Complexity Cybernetics Granular computing Granularity (parallel computing) Dennett's three stances High- and low-level Levels of analysis Meta-systems Multiple granularity locking Precision (computer science) Self-organization Specificity (linguistics) Systems thinking Notes References Statistical mechanics Business terms
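The computation-to-communication ratio described in the computing section above can be made concrete with a toy cost model. The following Python sketch splits a fixed amount of work into tasks of different sizes and reports the resulting granularity and a naive speedup estimate; the timing constants and the linear cost model are illustrative assumptions, not measurements.

# Toy model of granularity in parallel computing: granularity = computation time
# per task divided by communication overhead per task. All constants are invented.
TOTAL_WORK = 1_000_000      # abstract work units
WORK_UNIT_TIME = 1e-6       # seconds of computation per work unit
COMM_PER_TASK = 1e-3        # seconds of communication overhead per task
WORKERS = 16

def stats(num_tasks):
    work_per_task = TOTAL_WORK / num_tasks
    t_comp = work_per_task * WORK_UNIT_TIME
    granularity = t_comp / COMM_PER_TASK
    # naive estimate: tasks spread evenly over workers, each task pays its overhead
    parallel_time = (num_tasks / WORKERS) * (t_comp + COMM_PER_TASK)
    speedup = (TOTAL_WORK * WORK_UNIT_TIME) / parallel_time
    return granularity, speedup

for num_tasks in (16, 1_000, 100_000):      # coarse ... fine decomposition
    g, s = stats(num_tasks)
    print(f"{num_tasks:>7} tasks: granularity {g:10.2f}, estimated speedup {s:5.2f}x")
# Very fine decompositions drown in communication overhead; very coarse ones risk
# load imbalance (not modelled here), illustrating the trade-off described above.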
Granularity
[ "Physics" ]
1,172
[ "Statistical mechanics" ]
793,890
https://en.wikipedia.org/wiki/Feeder%20line%20%28network%29
A feeder line is a peripheral route or branch in a network, which connects smaller or more remote nodes with a route or branch carrying heavier traffic. The term is applicable to any system based on a hierarchical network. In telecommunications, a feeder line branches from a main line or trunk line. In electrical engineering, a feeder line is a type of transmission line. In addition, feeders are the power lines through which electricity is transmitted in power systems. A feeder transmits power from a generating station or substation to the distribution points. Feeders are similar to distributors except that there is no intermediate tapping, so the current flow remains the same at the sending and the receiving end. In radio engineering, a feeder connects radio equipment to an antenna, usually open wire (air-insulated wire line) or twin-lead from a shortwave transmitter. In power engineering, a feeder line is part of an electric distribution network, usually a radial circuit of intermediate voltage. In public transport The concept of feeder lines is important in public transportation. The term is particularly used in US air travel and rail transport. Feeder lines play a crucial role in public transportation systems by ensuring connectivity between high-capacity routes and more localized departure and destination points. In this hierarchical network, efficient, high-capacity routes serve as the main arteries, linking significant nodes such as major transit stations or central business districts. Feeder lines, on the other hand, branch off from these main routes, connecting smaller or more remote areas to these hubs. This structure helps facilitate smooth and efficient travel across a region, allowing passengers to transition seamlessly from local to long-distance travel segments. For instance, in urban transit planning, bus routes often act as feeders to high-capacity systems like subways or light rail, collecting passengers from various neighborhoods and transporting them to major transit hubs. This setup is essential for optimizing the overall efficiency and accessibility of public transportation networks, ensuring that even areas not directly served by high-capacity routes can still benefit from the broader transit system. See also Feeder link Power engineering Public transport Network topology
Feeder line (network)
[ "Mathematics", "Engineering" ]
420
[ "Network topology", "Energy engineering", "Topology", "Power engineering", "Electrical engineering" ]
794,163
https://en.wikipedia.org/wiki/Annualized%20failure%20rate
Annualized failure rate (AFR) gives the estimated probability that a device or component will fail during a full year of use. It is a relation between the mean time between failures (MTBF) and the hours that a number of devices are run per year. AFR is estimated from a sample of like components—AFR and MTBF as given by vendors are population statistics that cannot predict the behaviour of an individual unit. Hard disk drives For example, AFR is used to characterize the reliability of hard disk drives. The relationship between AFR and MTBF (in hours) is: AFR = 1 − exp(−8766 / MTBF). This equation assumes that the device or component is powered on for the full 8766 hours of a year, and gives the estimated fraction of an original sample of devices or components that will fail in one year, or, equivalently, 1 − AFR is the fraction of devices or components that will show no failures over a year. It is based on an exponential failure distribution (see failure rate for a full derivation). Note: Some manufacturers count a year as 8760 hours. This ratio can be approximated, assuming a small AFR, by AFR ≈ 8766 / MTBF. For example, a common specification for PATA and SATA drives may be an MTBF of 300,000 hours, giving an approximate theoretical 2.92% annualized failure rate, i.e. a 2.92% chance that a given drive will fail during a year of use. The AFR for a drive is derived from time-to-fail data from a reliability-demonstration test (RDT). AFR will increase towards and beyond the end of the service life of a device or component. Google's 2007 study found, based on a large field sample of drives, that actual AFRs for individual drives ranged from 1.7% for first-year drives to over 8.6% for three-year-old drives. A CMU 2007 study showed an estimated 3% mean AFR over 1–5 years based on replacement logs for a large sample of drives. See also Failure rate Frequency of exceedance References Engineering failures Rates
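The relation between MTBF and AFR given above is straightforward to evaluate. The following Python sketch reproduces the 300,000-hour example and shows how close the small-AFR approximation is to the exact exponential formula; it uses the 8766-hour year convention from the text.

from math import exp

HOURS_PER_YEAR = 8766   # the convention used above; some vendors use 8760

def afr(mtbf_hours):
    """Annualized failure rate from MTBF, assuming an exponential failure distribution."""
    return 1 - exp(-HOURS_PER_YEAR / mtbf_hours)

def afr_approx(mtbf_hours):
    """Small-AFR approximation: AFR ~= hours per year / MTBF."""
    return HOURS_PER_YEAR / mtbf_hours

for mtbf in (300_000, 1_000_000, 2_500_000):
    print(f"MTBF {mtbf:>9,} h -> AFR {afr(mtbf):.2%} (approx. {afr_approx(mtbf):.2%})")
# The 300,000-hour drive specification gives ~2.88% exactly and ~2.92% with the
# approximation, matching the figure quoted above.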
Annualized failure rate
[ "Technology", "Engineering" ]
424
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
794,330
https://en.wikipedia.org/wiki/Data%20profiling
Data profiling is the process of examining the data available from an existing information source (e.g. a database or a file) and collecting statistics or informative summaries about that data. The purpose of these statistics may be to: Find out whether existing data can be easily used for other purposes Improve the ability to search data by tagging it with keywords, descriptions, or assigning it to a category Assess data quality, including whether the data conforms to particular standards or patterns Assess the risk involved in integrating data in new applications, including the challenges of joins Discover metadata of the source database, including value patterns and distributions, key candidates, foreign-key candidates, and functional dependencies Assess whether known metadata accurately describes the actual values in the source database Understanding data challenges early in any data intensive project, so that late project surprises are avoided. Finding data problems late in the project can lead to delays and cost overruns. Have an enterprise view of all data, for uses such as master data management, where key data is needed, or data governance for improving data quality. Introduction Data profiling refers to the analysis of information for use in a data warehouse in order to clarify the structure, content, relationships, and derivation rules of the data. Profiling helps to not only understand anomalies and assess data quality, but also to discover, register, and assess enterprise metadata. The result of the analysis is used to determine the suitability of the candidate source systems, usually giving the basis for an early go/no-go decision, and also to identify problems for later solution design. How data profiling is conducted Data profiling utilizes methods of descriptive statistics such as minimum, maximum, mean, mode, percentile, standard deviation, frequency, variation, aggregates such as count and sum, and additional metadata information obtained during data profiling such as data type, length, discrete values, uniqueness, occurrence of null values, typical string patterns, and abstract type recognition. The metadata can then be used to discover problems such as illegal values, misspellings, missing values, varying value representation, and duplicates. Different analyses are performed for different structural levels. E.g. single columns could be profiled individually to get an understanding of frequency distribution of different values, type, and use of each column. Embedded value dependencies can be exposed in a cross-columns analysis. Finally, overlapping value sets possibly representing foreign key relationships between entities can be explored in an inter-table analysis. Normally, purpose-built tools are used for data profiling to ease the process. The computation complexity increases when going from single column, to single table, to cross-table structural profiling. Therefore, performance is an evaluation criterion for profiling tools. When is data profiling conducted? According to Kimball, data profiling is performed several times and with varying intensity throughout the data warehouse developing process. A light profiling assessment should be undertaken immediately after candidate source systems have been identified and DW/BI business requirements have been satisfied. The purpose of this initial analysis is to clarify at an early stage if the correct data is available at the appropriate detail level and that anomalies can be handled subsequently. 
If this is not the case, the project may be terminated. Additionally, more in-depth profiling is done prior to the dimensional modeling process in order to assess what is required to convert data into a dimensional model. Detailed profiling extends into the ETL system design process in order to determine the appropriate data to extract and which filters to apply to the data set. Additionally, data profiling may be conducted in the data warehouse development process after data has been loaded into staging, the data marts, etc. Conducting data profiling at these stages helps ensure that data cleaning and transformations have been done correctly and in compliance with requirements. Benefits and examples The benefits of data profiling are to improve data quality, shorten the implementation cycle of major projects, and improve users' understanding of data. Discovering business knowledge embedded in data itself is one of the significant benefits derived from data profiling. Data profiling is one of the most effective technologies for improving data accuracy in corporate databases. See also Data quality Data governance Master data management Database normalization Data visualization Analysis paralysis Data analysis References Data analysis Data management Data quality
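To make the single-column profiling statistics described in the article above concrete, the following is a minimal sketch in Python using pandas; the input file and column handling are hypothetical, and real profiling tools layer cross-column and inter-table analyses (foreign-key and functional-dependency discovery) on top of per-column summaries like these.

```python
import pandas as pd

# Hypothetical input; a real profiler would read from a source database or file catalog.
df = pd.read_csv("source_extract.csv")

profile = {}
for col in df.columns:
    series = df[col]
    profile[col] = {
        "dtype": str(series.dtype),                              # inferred data type
        "null_count": int(series.isna().sum()),                  # occurrence of null values
        "distinct_count": int(series.nunique()),                 # uniqueness / key-candidate hint
        "top_values": series.value_counts().head(5).to_dict(),   # frequency distribution
    }
    if pd.api.types.is_numeric_dtype(series):
        profile[col].update({
            "min": series.min(),
            "max": series.max(),
            "mean": series.mean(),
            "std": series.std(),
        })

for col, stats in profile.items():
    print(col, stats)
```

A column whose distinct count equals the row count is a candidate key; overlapping value sets between columns of different tables would then be checked in a separate inter-table pass to suggest foreign-key relationships.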
Data profiling
[ "Technology" ]
863
[ "Data management", "Data" ]
794,346
https://en.wikipedia.org/wiki/Iconv
In Unix and Unix-like operating systems, iconv (an abbreviation of internationalization conversion) is a command-line program and a standardized application programming interface (API) used to convert between different character encodings. "It can convert from any of these encodings to any other, through Unicode conversion." History Initially appearing on the HP-UX operating system, iconv() as well as the utility was standardized within XPG4 and is part of the Single UNIX Specification (SUS). Implementations Most Linux distributions provide an implementation, either from the GNU Standard C Library (included since version 2.1, February 1999), or the more traditional GNU libiconv, for systems based on other Standard C Libraries. The iconv function on both is licensed as LGPL, so it is linkable with closed-source applications. Unlike the libraries, the iconv utility is licensed under GPL in both implementations. The GNU libiconv implementation is portable, and can be used on various UNIX-like and non-UNIX systems. Version 0.3 dates from December 1999. The uconv utility from International Components for Unicode provides an iconv-compatible command-line syntax for transcoding. Most BSD systems use NetBSD's implementation, which first appeared in December 2004. Support Currently, over a hundred different character encodings are supported in the GNU variant. Ports Under Microsoft Windows, the iconv library and the utility are provided by GNU's libiconv found in Cygwin and GnuWin32 environments; there is also a "purely Win32" implementation called "win-iconv" that uses Windows' built-in routines for conversion. The iconv function is also available for many programming languages. The command has also been ported to the IBM i operating system. Usage stdin can be converted from ISO-8859-1 to the current locale encoding and output to stdout using: iconv -f iso-8859-1 An input file infile can be converted from ISO-8859-1 to UTF-8 and written to the output file outfile using: iconv -f iso-8859-1 -t utf-8 infile > outfile See also uconv luit List of Unix commands International Components for Unicode References External links iconv() OpenGroup Standards page GNU libiconv, code win_iconv HP software Unix text processing utilities Unix SUS2008 utilities IBM i Qshell commands C POSIX library
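As a rough illustration of what the conversions in the Usage section above do (this is not the iconv API itself, which is a C interface; it is an equivalent re-encoding using Python's built-in codecs, with placeholder file names):

```python
# Re-encode a file from ISO-8859-1 (Latin-1) to UTF-8, analogous to:
#   iconv -f iso-8859-1 -t utf-8 infile > outfile
with open("infile", "rb") as src:
    text = src.read().decode("iso-8859-1")   # bytes -> Unicode text

with open("outfile", "wb") as dst:
    dst.write(text.encode("utf-8"))          # Unicode text -> UTF-8 bytes
```

As with iconv, the conversion passes through Unicode as the intermediate representation, which is why any supported encoding can in principle be converted to any other.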
Iconv
[ "Technology" ]
525
[ "IBM i Qshell commands", "Computing commands" ]
794,439
https://en.wikipedia.org/wiki/Sodium%20sulfate
Sodium sulfate (also known as sodium sulphate or sulfate of soda) is the inorganic compound with formula Na2SO4 as well as several related hydrates. All forms are white solids that are highly soluble in water. With an annual production of 6 million tonnes, the decahydrate is a major commodity chemical product. It is mainly used as a filler in the manufacture of powdered home laundry detergents and in the Kraft process of paper pulping for making highly alkaline sulfides. Forms Anhydrous sodium sulfate, known as the rare mineral thenardite, used as a drying agent in organic synthesis. Heptahydrate sodium sulfate, a very rare form. Decahydrate sodium sulfate, known as the mineral mirabilite, widely used by the chemical industry. It is also known as Glauber's salt. History The decahydrate of sodium sulfate is known as Glauber's salt after the Dutch–German chemist and apothecary Johann Rudolf Glauber (1604–1670), who discovered it in Austrian spring water in 1625. He named it sal mirabilis (miraculous salt), because of its medicinal properties: the crystals were used as a general-purpose laxative, until more sophisticated alternatives came about in the 1900s. However, J. Kunckel later alleged that it was known as a secret medicine in Saxony already in the mid-16th century. In the 18th century, Glauber's salt began to be used as a raw material for the industrial production of soda ash (sodium carbonate), by reaction with potash (potassium carbonate). Demand for soda ash increased, and the supply of sodium sulfate had to increase in line. Therefore, in the 19th century, the large-scale Leblanc process, producing synthetic sodium sulfate as a key intermediate, became the principal method of soda-ash production. Chemical properties Sodium sulfate is a typical electrostatically bonded ionic sulfate. The existence of free sulfate ions in solution is indicated by the easy formation of insoluble sulfates when these solutions are treated with Ba2+ or Pb2+ salts: Na2SO4 + BaCl2 → 2 NaCl + BaSO4 Sodium sulfate is unreactive toward most oxidizing or reducing agents. At high temperatures, it can be converted to sodium sulfide by carbothermal reduction (aka thermo-chemical sulfate reduction (TSR), high temperature heating with charcoal, etc.): Na2SO4 + 2 C → Na2S + 2 CO2 This reaction was employed in the Leblanc process, a defunct industrial route to sodium carbonate. Sodium sulfate reacts with sulfuric acid to give the acid salt sodium bisulfate: Na2SO4 + H2SO4 → 2 NaHSO4 Sodium sulfate displays a moderate tendency to form double salts. The only alums formed with common trivalent metals are NaAl(SO4)2 (unstable above 39 °C) and NaCr(SO4)2, in contrast to potassium sulfate and ammonium sulfate which form many stable alums. Double salts with some other alkali metal sulfates are known, including Na2SO4·3K2SO4 which occurs naturally as the mineral aphthitalite. Formation of glaserite by reaction of sodium sulfate with potassium chloride has been used as the basis of a method for producing potassium sulfate, a fertiliser. Other double salts include 3Na2SO4·CaSO4, 3Na2SO4·MgSO4 (vanthoffite) and NaF·Na2SO4. Physical properties Sodium sulfate has unusual solubility characteristics in water. Its solubility in water rises more than tenfold between 0 °C and 32.384 °C, where it reaches a maximum of 49.7 g/100 mL. At this point the solubility curve changes slope, and the solubility becomes almost independent of temperature. This temperature of 32.384 °C, corresponding to the release of crystal water and melting of the hydrated salt, serves as an accurate temperature reference for thermometer calibration.
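As a small worked illustration of the solubility maximum quoted above (using the standard molar mass of Na2SO4, about 142 g/mol, which is not stated in the article), the peak solubility of 49.7 g/100 mL corresponds to roughly a 3.5 molar solution:

```latex
\frac{49.7\ \text{g}}{100\ \text{mL}} = 497\ \text{g/L}, \qquad
\frac{497\ \text{g/L}}{142.04\ \text{g/mol}} \approx 3.5\ \text{mol/L}
```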
Structure Crystals of the decahydrate consist of [Na(OH2)6]+ ions with octahedral molecular geometry. These octahedra share edges such that 8 of the 10 water molecules are bound to sodium and 2 others are interstitial, being hydrogen-bonded to sulfate. These cations are linked to the sulfate anions by hydrogen bonds. The Na–O distances are about 240 pm. Crystalline sodium sulfate decahydrate is also unusual among hydrated salts in having a measurable residual entropy (entropy at absolute zero) of 6.32 J/(K·mol). This is ascribed to its ability to distribute water much more rapidly compared to most hydrates. Production The world production of sodium sulfate, almost exclusively in the form of the decahydrate, amounts to approximately 5.5 to 6 million tonnes annually (Mt/a). In 1985, production was 4.5 Mt/a, half from natural sources, and half from chemical production. After 2000, at a stable level until 2006, natural production had increased to 4 Mt/a, and chemical production decreased to 1.5 to 2 Mt/a, with a total of 5.5 to 6 Mt/a. For all applications, naturally produced and chemically produced sodium sulfate are practically interchangeable. Natural sources Two thirds of the world's production of the decahydrate (Glauber's salt) is from the natural mineral form mirabilite, for example as found in lake beds in southern Saskatchewan. In 1990, Mexico and Spain were the world's main producers of natural sodium sulfate (each around 500,000 tonnes), with Russia, United States, and Canada around 350,000 tonnes each. Natural resources are estimated at over 1 billion tonnes. Major producers of 200,000 to 1,500,000 tonnes/year in 2006 included Searles Valley Minerals (California, US), Airborne Industrial Minerals (Saskatchewan, Canada), Química del Rey (Coahuila, Mexico), Minera de Santa Marta and Criaderos Minerales Y Derivados, also known as Grupo Crimidesa (Burgos, Spain), Minera de Santa Marta (Toledo, Spain), Sulquisa (Madrid, Spain), Chengdu Sanlian Tianquan Chemical (Tianquan County, Sichuan, China), Hongze Yinzhu Chemical Group (Hongze District, Jiangsu, China), (Shanxi, China), Sichuan Province Chuanmei Mirabilite (, Dongpo District, Meishan, Sichuan, China), and Kuchuksulphat JSC (Altai Krai, Siberia, Russia). Anhydrous sodium sulfate occurs in arid environments as the mineral thenardite. It slowly turns to mirabilite in damp air. Sodium sulfate is also found as glauberite, a calcium sodium sulfate mineral. Both minerals are less common than mirabilite. Chemical industry About one third of the world's sodium sulfate is produced as by-product of other processes in chemical industry. Most of this production is chemically inherent to the primary process, and only marginally economical. By effort of the industry, therefore, sodium sulfate production as by-product is declining. The most important chemical sodium sulfate production is during hydrochloric acid production, either from sodium chloride (salt) and sulfuric acid, in the Mannheim process, or from sulfur dioxide in the Hargreaves process. The resulting sodium sulfate from these processes is known as salt cake. Mannheim: Hargreaves: The second major production of sodium sulfate are the processes where surplus sodium hydroxide is neutralised by sulfuric acid to obtain sulfate () by using copper sulfate (CuSO4) (as historically applied on a large scale in the production of rayon by using copper(II) hydroxide). This method is also a regularly applied and convenient laboratory preparation.     
2 NaOH + H2SO4 → Na2SO4 + 2 H2O, ΔH = −112.5 kJ (highly exothermic) In the laboratory it can also be synthesized from the reaction between sodium bicarbonate and magnesium sulfate, by precipitating magnesium carbonate. However, as commercial sources are readily available, laboratory synthesis is not practised often. Formerly, sodium sulfate was also a by-product of the manufacture of sodium dichromate, where sulfuric acid is added to sodium chromate solution forming sodium dichromate, or subsequently chromic acid. Alternatively, sodium sulfate is or was formed in the production of lithium carbonate, chelating agents, resorcinol, ascorbic acid, silica pigments, nitric acid, and phenol. Bulk sodium sulfate is usually purified via the decahydrate form, since the anhydrous form tends to attract iron compounds and organic compounds. The anhydrous form is easily produced from the hydrated form by gentle warming. Major sodium sulfate by-product producers of 50–80 kt/a in 2006 include Elementis Chromium (chromium industry, Castle Hayne, NC, US), Lenzing AG (200 kt/a, rayon industry, Lenzing, Austria), Addiseo (formerly Rhodia, methionine industry, Les Roches-Roussillon, France), Elementis (chromium industry, Stockton-on-Tees, UK), Shikoku Chemicals (Tokushima, Japan) and Visko-R (rayon industry, Russia). Applications Commodity industries With US pricing at $30 per tonne in 1970, up to $90 per tonne for salt cake quality, and $130 for better grades, sodium sulfate is a very cheap material. The largest use is as filler in powdered home laundry detergents, consuming approximately 50% of world production. This use is waning as domestic consumers are increasingly switching to compact or liquid detergents that do not include sodium sulfate. Papermaking Another formerly major use for sodium sulfate, notably in the US and Canada, is in the Kraft process for the manufacture of wood pulp. Organics present in the "black liquor" from this process are burnt to produce heat, needed to drive the reduction of sodium sulfate to sodium sulfide. However, due to advances in the thermal efficiency of the Kraft recovery process in the early 1960s, more efficient sulfur recovery was achieved and the need for sodium sulfate makeup was drastically reduced. Hence, the use of sodium sulfate in the US and Canadian pulp industry declined from 1,400,000 tonnes per year in 1970 to only approx. 150,000 tonnes in 2006. Glassmaking The glass industry provides another significant application for sodium sulfate, as the second largest application in Europe. Sodium sulfate is used as a fining agent, to help remove small air bubbles from molten glass. It fluxes the glass, and prevents scum formation of the glass melt during refining. From 1970 to 2006, the glass industry in Europe consumed a stable 110,000 tonnes annually. Textiles Sodium sulfate is important in the manufacture of textiles, particularly in Japan, where this is the largest application. Sodium sulfate is added to increase the ionic strength of the solution and so helps in "levelling", i.e. reducing negative electrical charges on textile fibres, so that dyes can penetrate evenly (see the theory of the diffuse double layer (DDL) elaborated by Gouy and Chapman). Unlike the alternative sodium chloride, it does not corrode the stainless steel vessels used in dyeing. In 2006, this application in Japan and the US consumed approximately 100,000 tonnes. Food industry Sodium sulfate is used as a diluent for food colours. It is known as E number additive E514.
Heat storage The high heat-storage capacity in the phase change from solid to liquid, and the advantageous phase change temperature of makes this material especially appropriate for storing low-grade solar heat for later release in space heating applications. In some applications the material is incorporated into thermal tiles that are placed in an attic space, while in other applications, the salt is incorporated into cells surrounded by solar–heated water. The phase change allows a substantial reduction in the mass of the material required for effective heat storage (the heat of fusion of sodium sulfate decahydrate is 82 kJ/mol or 252 kJ/kg), with the further advantage of a consistency of temperature as long as sufficient material in the appropriate phase is available. For cooling applications, a mixture with common sodium chloride salt (NaCl) lowers the melting point to . The heat of fusion of NaCl·Na2SO4·10H2O, is actually increased slightly to 286 kJ/kg. Small-scale applications In the laboratory, anhydrous sodium sulfate is widely used as an inert drying agent, for removing traces of water from organic solutions. It is more efficient, but slower-acting, than the similar agent magnesium sulfate. It is only effective below about , but it can be used with a variety of materials since it is chemically fairly inert. Sodium sulfate is added to the solution until the crystals no longer clump together; the two video clips (see above) demonstrate how the crystals clump when still wet, but some crystals flow freely once a sample is dry. Glauber's salt, the decahydrate, is used as a laxative. It is effective for the removal of certain drugs, such as paracetamol (acetaminophen) from the body; thus it can be used after an overdose. In 1953, sodium sulfate was proposed for heat storage in passive solar heating systems. This takes advantage of its unusual solubility properties, and the high heat of crystallisation (78.2 kJ/mol). Other uses for sodium sulfate include de-frosting windows, starch manufacture, as an additive in carpet fresheners, and as an additive to cattle feed. At least one company, Thermaltake, makes a laptop computer chill mat (iXoft Notebook Cooler) using sodium sulfate decahydrate inside a quilted plastic pad. The material slowly turns to liquid and recirculates, equalizing laptop temperature and acting as an insulation. Safety Although sodium sulfate is generally regarded as non-toxic, it should be handled with care. The dust can cause temporary asthma or eye irritation; this risk can be prevented by using eye protection and a paper mask. Transport is not limited, and no Risk Phrase or Safety Phrase applies. References External links Calculators: surface tensions, and densities, molarities, and molalities of aqueous sodium sulfate Sodium compounds Sulfates Alchemical substances Articles containing video clips Desiccants E-number additives Photographic chemicals
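As a back-of-envelope illustration of the heat-storage figures quoted in the article above (using the 252 kJ/kg heat of fusion of the decahydrate from the text, a typical specific heat of liquid water of about 4.2 kJ/(kg·K), and an arbitrary 10 MJ load with a 20 °C temperature swing chosen only for illustration), storing the heat in the phase change of Glauber's salt takes roughly a third of the mass needed to store it as sensible heat in water:

```latex
m_{\text{Glauber's salt}} = \frac{10\,000\ \text{kJ}}{252\ \text{kJ/kg}} \approx 40\ \text{kg},
\qquad
m_{\text{water}} = \frac{10\,000\ \text{kJ}}{4.2\ \text{kJ/(kg\,K)} \times 20\ \text{K}} \approx 119\ \text{kg}
```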
Sodium sulfate
[ "Physics", "Chemistry" ]
2,989
[ "Sulfates", "Alchemical substances", "Salts", "Desiccants", "Materials", "Matter" ]
795,199
https://en.wikipedia.org/wiki/Breast%20milk
Breast milk (sometimes spelled as breastmilk) or mother's milk is milk produced by the mammary glands in the breasts of women. Breast milk is the primary source of nutrition for newborn infants, comprising fats, proteins, carbohydrates, and a varying composition of minerals and vitamins. Breast milk also contains substances that help protect an infant against infection and inflammation, such as symbiotic bacteria and other microorganisms and immunoglobulin A, whilst also contributing to the healthy development of the infant's immune system and gut microbiome. Use and methods of consumption The World Health Organization (WHO) and UNICEF recommend exclusive breastfeeding with breast milk for the first six months of an infant’s life. This period is followed by the incorporation of nutritionally adequate and safe complementary solid foods at six months, a stage when an infant’s nutrient and energy requirements start to surpass what breast milk alone can provide. Continuation of breastfeeding is recommended up to two years of age. This guidance is due to the protective benefits of breast milk, which include less infections such as diarrhea—a protection not afforded by formula milk. Breast milk constitutes the sole source of nutrition for exclusively breastfed newborns, supplying all necessary nutrients for infants up to six months. Beyond this age, breast milk continues to be a source of energy for children up to two years old, providing over half of a child's energy needs up to the age of one and a third of the needs between one and two years of age. Despite the capability of most newborns to latch onto the mother's breast within an hour of birth, globally, sixty percent of infants are not breastfed within this crucial first hour. Breastfeeding within the first hour of life protects the newborn from acquiring infections and reduces risk of death during the neonatal period. Alternatively, breast milk can be expressed using a breast pump and administered via baby bottle, cup, spoon, supplementation drip system, or nasogastric tube. This method is especially beneficial for preterm babies who may initially lack the ability to suck effectively. Using cups to feed expressed breast milk and other supplements results in improved breastfeeding outcomes in terms of both duration and extent, compared with traditional bottle and tube feeding. For mothers unable to produce an adequate supply of breast milk, the use of pasteurized donor human breast milk is a viable option. In the absence of pasteurized donor milk, commercial formula milk is recommended as a secondary alternative. However, unpasteurized breast milk from a source other than the infant's mother, particularly when shared informally, carries the risk of vertically transmitting bacteria, viruses (such as HIV), and other microorganisms from the donor to the infant, rendering it an unsafe alternative. Benefits Breastfeeding offers health benefits to mother and child even after infancy. These benefits include proper heat production and adipose tissue development, a 73% decreased risk of sudden infant death syndrome, increased intelligence, decreased likelihood of contracting middle ear infections, cold and flu resistance, a tiny decrease in the risk of childhood leukemia, lower risk of childhood onset diabetes, decreased risk of asthma and eczema, decreased dental problems, decreased risk of obesity later in life, and a decreased risk of developing psychological disorders, including in adopted children. 
In addition, feeding an infant breast milk is associated with lower insulin levels and higher leptin levels compared with feeding an infant powdered formula. Many of the infection-fighting and immune-system-related benefits are associated with human milk oligosaccharides. Breastfeeding also provides health benefits for the mother. It assists the uterus in returning to its pre-pregnancy size and reduces post-partum bleeding, through the production of oxytocin (see Production). Breastfeeding can also reduce the risk of breast cancer later in life. Lactation may also reduce the risk for both mother and infant from both types of diabetes. Lactation may specifically protect the infant from developing type 2 diabetes, as studies have shown that bioactive ingredients in human breast milk could prevent excess weight gain during childhood by contributing to a feeling of energy and satiety. The lower risk of child-onset diabetes may be more applicable to infants who were born to diabetic mothers. The reason is that while breastfeeding for at least the first six months of life minimizes the risk of type 1 diabetes occurring in the infant, inadequate breastfeeding in an infant prenatally exposed to diabetes was associated with a higher risk of the child developing diabetes later. There are arguments that breastfeeding may contribute to protective effects against the development of type 1 diabetes because the alternative of bottle-feeding may expose infants to unhygienic feeding conditions. Though it is almost universally prescribed, in some countries during the 1950s the practice of breastfeeding went through a period where it was out of vogue and the use of infant formula was considered superior to breast milk. However, it has since been universally recognized that there is no commercial formula that can adequately substitute for breast milk. In addition to the appropriate amounts of carbohydrate, protein, and fat, breast milk provides vitamins, minerals, digestive enzymes, and hormones. Breast milk also contains antibodies and lymphocytes from the mother that may help the baby resist infections. The immune function of breast milk is individualized, as the mother, through her touching and taking care of the baby, comes into contact with pathogens that colonize the baby, and, as a consequence, her body makes the appropriate antibodies and immune cells. At around four months of age, the internal iron supplies of the infant, held in the hepatic cells of the liver, are exhausted. The American Academy of Pediatrics recommends that an iron supplement be introduced at this time. Other health organisations such as the NHS in the UK have no such recommendation. Breast milk contains less iron than formula, but the iron is more bioavailable as lactoferrin, which is safer for mothers and children than ferrous sulphate. Both the AAP and the NHS recommend vitamin D supplementation for breastfed infants. Vitamin D can be synthesised by the infant via exposure to sunlight; however, many infants are deficient due to being kept indoors or living in areas with insufficient sunlight. Formula is supplemented with vitamin D for this reason. Production Under the influence of the hormones prolactin and oxytocin, women produce milk after childbirth to feed the baby. The initial milk produced is referred to as colostrum, which is high in the immunoglobulin IgA, which coats the gastrointestinal tract. This helps to protect the newborn until its own immune system is functioning properly.
It also creates a mild laxative effect, expelling meconium and helping to prevent the build-up of bilirubin (a contributory factor in jaundice). Male lactation can occur; the production or administration of the hormone prolactin is necessary to induce lactation (see male lactation). Actual inability to produce enough milk is rare, with studies showing that mothers from malnourished regions still produce amounts of milk of similar quality to that of mothers in developed countries. There are many reasons a mother may not produce enough breast milk. Some of the most common reasons are an improper latch (i.e., the baby does not connect efficiently with the nipple), not nursing or pumping enough to meet supply, certain medications (including estrogen-containing hormonal contraceptives), illness, and dehydration. A rarer reason is Sheehan's syndrome, also known as postpartum hypopituitarism, which is associated with prolactin deficiency and may require hormone replacement. The amount of milk produced depends on how often the mother is nursing and/or pumping: the more the mother nurses her baby or pumps, the more milk is produced. It is beneficial to nurse when the baby wants to nurse rather than on a schedule. A Cochrane review came to the conclusion that a greater volume of milk is expressed whilst listening to relaxing audio during breastfeeding, along with warming and massaging of the breast prior to and during feeding. A greater volume of milk expressed can also be attributed to instances where the mother starts pumping milk sooner, even if the infant is unable to breastfeed. Sodium concentration is higher in hand-expressed milk, when compared with the use of manual and electric pumps, and fat content is higher when the breast has been massaged, in conjunction with listening to relaxing audio. This may be important for low birthweight infants. If pumping, it is helpful to have an electric, high-grade pump so that all of the milk ducts are stimulated. Galactagogues increase milk supply, although even herbal variants carry risks. Non-pharmaceutical methods should be tried first, such as pumping out the mother's breast milk supply often, warming or massaging the breast, as well as starting milk pumping earlier after the child is born if they cannot drink milk at the breast. Composition Breast milk contains fats, proteins, carbohydrates (including lactose and human milk oligosaccharides), and a varying composition of minerals and vitamins. The composition changes over a single feed as well as over the period of lactation. Changes are particularly pronounced in marsupials. During the first few days after delivery, the mother produces colostrum. This is a thin yellowish fluid that is the same fluid that sometimes leaks from the breasts during pregnancy. It is rich in protein and antibodies that provide passive immunity to the baby (the baby's immune system is not fully developed at birth). Colostrum also helps the newborn's digestive system to grow and function properly. Colostrum will gradually change to become mature milk. In the first 3–4 days it will appear thin and watery and will taste very sweet; later, the milk will be thicker and creamier. Human milk quenches the baby's thirst and hunger and provides the proteins, sugar, minerals, and antibodies that the baby needs. In the 1980s and 1990s, lactation professionals (De Cleats) used to make a differentiation between foremilk and hindmilk. But this differentiation causes confusion as there are not two types of milk. 
Instead, as a baby breastfeeds, the fat content very gradually increases, with the milk becoming fattier and fattier over time. The level of Immunoglobulin A (IgA) in breast milk remains high from day 10 until at least 7.5 months post-partum. Human milk contains 0.8–0.9% protein, 4.5% fat, 7.1% carbohydrates, and 0.2% ash (minerals). Carbohydrates are mainly lactose; several lactose-based oligosaccharides (also called human milk oligosaccharides) have been identified as minor components. The fat fraction contains specific triglycerides of palmitic and oleic acid (O-P-O triglycerides), and also lipids with trans bonds (see: trans fat). The lipids are vaccenic acid, and conjugated linoleic acid (CLA) accounting for up to 6% of the human milk fat. The principal proteins are alpha-lactalbumin, lactoferrin (apo-lactoferrin), IgA, lysozyme, and serum albumin. In an acidic environment such as the stomach, alpha-lactalbumin unfolds into a different form and binds oleic acid to form a complex called HAMLET that kills tumor cells. This is thought to contribute to the protection of breastfed babies against cancer. Non-protein nitrogen-containing compounds, making up 25% of the milk's nitrogen, include urea, uric acid, creatine, creatinine, amino acids, and nucleotides. Breast milk has circadian variations; some of the nucleotides are more commonly produced during the night, others during the day. Mother's milk has been shown to supply endocannabinoids (the natural neurotransmitters that cannabis simulates) 2-arachidonoylglycerol, anandamide, oleoylethanolamide, palmitoylethanolamide, N-arachidonoyl glycine, eicosapentaenoyl ethanolamide, docosahexaenoyl ethanolamide, N-palmitoleoyl-ethanolamine, dihomo-γ-linolenoylethanolamine, N-stearoylethanolamine, prostaglandin F2alpha ethanolamides and prostaglandin F2 ethanolamides, Palmitic acid esters of hydroxy-stearic acids (PAHSAs). They may act as an appetite stimulant, but they also regulate appetite so infants do not eat too much. That may be why formula-fed babies have a higher caloric intake than breastfed babies. Breast milk is not sterile and has its own microbiome, but contains as many as 600 different species of various bacteria, including beneficial Bifidobacterium breve, B. adolescentis, B. longum, B. bifidum, and B. dentium, which contribute to colonization of the infant gut. As a result, it can be defined as a probiotic food, depending on how one defines "probiotic". Breast milk also contains a variety of somatic cells and stem cells and the proportion of each cell type differs from individual to individual. The somatic cells are mainly lactocytes and myoepithelial cells derived from the mother's mammary glands. The stem cells found in human breast milk have been shown to be able to differentiate into a variety of other cells involved in the production of bodily tissues and a small proportion of these cross over the nursing infant's intestinal tract into the bloodstream to reach certain organs and transform into fully functional cells. Because of its diverse population of cells and multifarious functions, researchers have argued that breast milk should be considered a living tissue. Breast milk contains a unique type of sugars, human milk oligosaccharides (HMOs), which were not present in traditional infant formula, however they are increasing added by many manufacturers. HMOs are not digested by the infant but help to make up the intestinal flora. 
They act as decoy receptors that block the attachment of disease causing pathogens, which may help to prevent infectious diseases. They also alter immune cell responses, which may benefit the infant. As of 2015 more than a hundred different HMOs have been identified; both the number and composition vary between women and each HMO may have a distinct functionality. The breast milk of diabetic mothers has been shown to have a different composition from that of non-diabetic mothers. It may contain elevated levels of glucose and insulin and decreased polyunsaturated fatty acids. A dose-dependent effect of diabetic breast milk on increasing language delays in infants has also been noted, although doctors recommend that diabetic mothers breastfeed despite this potential risk. Women breastfeeding should consult with their physician regarding substances that can be unwittingly passed to the infant via breast milk, such as alcohol, viruses (HIV or HTLV-1), or medications. Even though most infants infected with HIV contract the disease from breastfeeding, most infants that are breastfed by their HIV positive mothers never contract the disease. While this paradoxical phenomenon suggests that the risk of HIV transmission between an HIV positive mother and her child via breastfeeding is small, studies have also shown that feeding infants with breast milk of HIV-positive mothers can actually have a preventative effect against HIV transmission between the mother and child. This inhibitory effect against the infant contracting HIV is likely due to unspecified factors exclusively present in breast milk of HIV-positive mothers. Most women that do not breastfeed use infant formula, but breast milk donated by volunteers to human milk banks can be obtained by prescription in some countries. In addition, research has shown that women who rely on infant formula could minimize the gap between the level of immunity protection and cognitive abilities a breastfed child benefits from versus the degree to which a bottle-fed child benefits from them. This can be done by supplementing formula-fed infants with bovine milk fat globule membranes (MFGM) meant to mimic the positive effects of the MFGMs which are present in human breast milk. Storage of expressed breast milk Expressed breast milk can be stored. Lipase may cause thawed milk to taste soapy or rancid due to milk fat breakdown. It is still safe to use, and most babies will drink it. Scalding it will prevent rancid taste at the expense of antibodies. It should be stored with airtight seals. Some plastic bags are designed for storage periods of less than 72 hours. Others can be used for up to 12 months if frozen. This table describes safe storage time limits. Comparison to other milks All mammalian species produce milk, but the composition of milk for each species varies widely and other kinds of milk are often very different from human breast milk. As a rule, the milk of mammals that nurse frequently (including human babies) is less rich, or more watery, than the milk of mammals whose young nurse less often. Human milk is noticeably thinner and sweeter than cow's milk. Whole cow's milk contains too little iron, retinol, vitamin E, vitamin C, vitamin D, unsaturated fats or essential fatty acids for human babies. Whole cow's milk also contains too much protein, sodium, potassium, phosphorus and chloride which may put a strain on an infant's immature kidneys. 
In addition, the proteins, fats and calcium in whole cow's milk are more difficult for an infant to digest and absorb than the ones in breast milk. The composition of marsupial and monotreme milk contains essential nutrients, growth factors and immunological properties to support the development of joeys and puggles. Note: Milk is generally fortified with vitamin D in the U.S. and Canada. Non-fortified milk contains only 2 IU per 3.5 oz. Effects of medications and other substances on milk content Almost all medicines, or drugs, pass into breastmilk in small amounts by a concentration gradient. The amount of the drug bound by maternal plasma proteins, the size of the drug molecule, the pH and/or pKa of the drug, and the lipophilicity of the drug all determine whether and how much of the drug will pass into breastmilk. Medications that are mostly non-protein bound, low in molecular weight, and highly lipid-soluble are more likely to enter the breast milk in larger quantities. Some drugs have no effect on the baby and can be used whilst breastfeeding, while other medications may be dangerous and harmful to the infant. Some medications considered generally safe for use by a breastfeeding mother, with a doctor’s or pharmacist’s advice, include simple analgesics or pain killers such as paracetamol/acetaminophen, anti-hypertensives such as the ACE-inhibitors enalapril and captopril, anti-depressants of the SSRI and SNRI classes, and medications for gastroesophageal reflux such as omeprazole and ranitidine. Conversely, there are medications that are known to be toxic to the baby and thus should not be used in breastfeeding mothers, such as chemotherapeutic agents which are cytotoxic like cyclosporine, immunosuppressants like methotrexate, amiodarone, or lithium. Furthermore, drugs of abuse, such as cocaine, amphetamines, heroin, and marijuana cause adverse effects on the infant during breastfeeding. Adverse effects include seizures, tremors, restlessness, and diarrhea. To reduce infant exposure to medications used by the mother, use topical therapy or avoid taking the medication during breastfeeding times when possible. Hormonal products and combined oral contraceptives should be avoided during the early postpartum period as they can interfere with lactation. There are some medications that may stimulate the production of breast milk. These medications may be beneficial in cases where women with hypothyroidism may be unable to produce milk. A Cochrane review looked at the drug domperidone (10 mg three times per day) with results showing a significant increase in volume of milk produced over a period of one to two weeks. However, another review concluded little evidence that use of domperidone and metoclopramide to enhance milk supply works. Instead, non-pharmacological approaches such as support and more frequent breastfeeding may be more efficacious. Finally, there are other substances besides medications that may appear in breast milk. Alcohol use during pregnancy carries a significant risk of serious birth defects, but consuming alcohol after the birth of the infant is considered safe. High caffeine intake by breastfeeding mothers may cause their infants to become irritable or have trouble sleeping. A meta-analysis has shown that breastfeeding mothers who smoke expose their infants to nicotine, which may cause respiratory illnesses, including otitis media in the nursing infant. 
Market There is a commercial market for human breast milk, both in the form of a wet nurse service and as a milk product. As a product, breast milk is exchanged by human milk banks, as well as directly between milk donors and customers as mediated by websites on the internet. Human milk banks generally have standardized measures for screening donors and storing the milk, sometimes even offering pasteurization, while milk donors on websites vary in regard to these measures. A study in 2013 came to the conclusion that 74% of breast milk samples from providers found from websites were colonized with gram-negative bacteria or had more than 10,000 colony-forming units/mL of aerobic bacteria. Bacterial growth happens during transit. According to the FDA, bad bacteria in food at room temperature can double every 20 minutes. Human milk is considered to be healthier than cow's milk and infant formula when it comes to feeding an infant in the first six months of life, but only under extreme situations do international health organizations support feeding an infant breast milk from a healthy wet nurse rather than that of its biological mother. One reason is that the unregulated breast milk market is fraught with risks, such as drugs of abuse and prescription medications being present in donated breast milk. The transmission of these substances through breast milk can do more harm than good when it comes to the health outcomes of the infant recipient. Fraud In the United States, the online marketplace for breast milk is largely unregulated and the high premium has encouraged food fraud. Human breast milk may be diluted with other liquids to increase volume including cow’s milk, soy milk, and water, thus undermining its health benefits. A 2015 CBS article cites an editorial led by Dr. Sarah Steele in the Journal of the Royal Society of Medicine, in which they say that "health claims do not stand up clinically and that raw human milk purchased online poses many health risks." CBS found a study from the Center for Biobehavioral Health at Nationwide Children's Hospital in Columbus that "found that 11 out of 102 breast milk samples purchased online were actually blended with cow's milk." The article also explains that milk purchased online may be improperly sanitized or stored, so it may contain food-borne illness and infectious diseases such as hepatitis and HIV. Consumption by adults Restaurants and recipes A minority of people, including restaurateurs Hans Lochen of Switzerland and Daniel Angerer of Austria, who operates a restaurant in New York City, have used human breast milk, or at least advocated its use, as a substitute for cow's milk in dairy products and food recipes. An Icecreamist in London's Covent Garden started selling an ice cream named Baby Gaga in February 2011. Each serving cost £14. All the milk was donated by a Mrs Hiley who earned £15 for every 10 ounces and called it a "great recession beater". The ice cream sold out on its first day. Despite the success of the new flavour, the Westminster Council officers removed the product from the menu to make sure that it was, as they said, "fit for human consumption." Tammy Frissell-Deppe, a family counsellor specialized in attachment parenting, published a book, titled A Breastfeeding Mother's Secret Recipes, providing a lengthy compilation of detailed food and beverage recipes containing human breast milk. 
Human breast milk is not produced or distributed industrially or commercially, because the use of human breast milk as an adult food is considered unusual to the majority of cultures around the world, and most disapprove of such a practice. In Costa Rica, there have been trials to produce human cheese, and custard from human milk, as an alternative to weaning. Bodybuilders While there is no scientific evidence that shows that breast milk is advantageous for adults, according to several 2015 news sources, breast milk is being used by bodybuilders for its nutritional value. In a February 2015 ABC News article, one former competitive body builder said, "It isn't common, but I've known people who have done this. It's certainly talked about quite a bit on the bodybuilding forums on the Internet." Calling bodybuilders "a strange breed of individuals", he said, "Even if this type of thing is completely unsupported by research, they're prone to gym lore and willing to give it a shot if there is any potential effect." At the time the article was written, in the U.S., the price of breast milk procured from milk banks that pasteurize the milk, and have expensive quality and safety controls, was about , and the price in the alternative market online, bought directly from mothers, ranges from , compared to cow's milk at about . Erotic lactation For sexual purposes, some couples have decided to induce lactation outside a pregnancy through a practice called "Erotic lactation". Breast milk contamination Breast milk is oftentimes used as an environmental bioindicator given its ability to accumulate certain chemicals, including organochlorine pesticides. Research has found that certain organic contaminants such as PCBs, organochlorine pesticides, PCDDs, PBDEs, and DDT can contaminate breastmilk. According to research done in 2002, the levels of the organochlorine pesticides, PCBs, and dioxins have declined in breast milk in countries where these chemicals have been banned or otherwise regulated, while levels of PBDEs are rising. Pesticide contamination in breastmilk Pesticides and other toxic substances bioaccumulate; i.e., creatures higher up the food chain will store more of them in their body fat. This is an issue in particular for the Inuit, whose traditional diet is predominantly meat. Studies are looking at the effects of polychlorinated biphenyls and persistent organic pollutants in the body; the breast milk of Inuit mothers is extraordinarily high in toxic compounds. The CDC has provided some resources for breastfeeding mothers to reference for safe medication use, including LactMed, Mother to Baby, and The InfantRisk Center. Contamination effects of organochlorine pesticides on infants When a mother is exposed to organochlorine pesticides (OCP's), her infant can be exposed to these OCP's through breast milk intake. This result is supported by a study done in India, which revealed that in each lactation period there is a loss of OCPs from the mother's body involved in the nursing of their children. A longitudinal study was conducted to assess pesticide residues in human breast milk samples and evaluate the risk-exposure of infants to these pesticides from consumption of mother’s milk in Ethiopia. The estimated daily intake (EDI) of infants in the present study was above provisional tolerable daily intake (PTDI) during the first month of breastfeeding which indicates that there is a health risk for infants consuming breast milk at an early stage of breastfeeding in the study areas. 
Based on these studies, the exposure of women during pregnancy to these OCPs may lead to various health problems for fetus such as low birth weight, disturbance of thyroid hormone, and neurodevelopmental delay. See also Breastmilk storage and handling Blocked milk duct Breastfeeding in public Breast milk jewelry Human milk banking in North America La Leche League International Lactation room Lactivism Mary Rose Tully References External links Drug Interactions with Human Milk Human milk and lactation by Carol L. Wagner (Overview article, eMedicine, December 14, 2010) United Nations University Centre – Constituents of human milk – including comparison of human and cow's milk ones Children's Health Topics: Breastfeeding A comparison between human milk and cow's milk and The composition of cow's milk Meigs, EB (August 30, 1913) The comparative composition of human milk and of cow's milk, J.Biol.Chem 147–168 Breast Breastfeeding Body fluids Neonatology Midwifery Milk by animal Immunology Babycare
Breast milk
[ "Biology" ]
6,109
[ "Immunology" ]
795,334
https://en.wikipedia.org/wiki/Dislocation
In materials science, a dislocation or Taylor's dislocation is a linear crystallographic defect or irregularity within a crystal structure that contains an abrupt change in the arrangement of atoms. The movement of dislocations allows atoms to slide over each other at low stress levels and is known as glide or slip. The crystalline order is restored on either side of a glide dislocation but the atoms on one side have moved by one position. The crystalline order is not fully restored with a partial dislocation. A dislocation defines the boundary between slipped and unslipped regions of material and as a result, must either form a complete loop, intersect other dislocations or defects, or extend to the edges of the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms, which is defined by the Burgers vector. Plastic deformation of a material occurs by the creation and movement of many dislocations. The number and arrangement of dislocations influence many of the properties of materials. The two primary types of dislocations are sessile dislocations, which are immobile, and glissile dislocations, which are mobile. Examples of sessile dislocations are the stair-rod dislocation and the Lomer–Cottrell junction. The two main types of mobile dislocations are edge and screw dislocations. Edge dislocations can be visualized as being caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the surrounding planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. This phenomenon is analogous to half of a piece of paper inserted into a stack of paper, where the defect in the stack is noticeable only at the edge of the half sheet. The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor proposed that the low stresses observed to produce plastic deformation compared to theoretical predictions at the time could be explained in terms of the theory of dislocations. History The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. The term 'dislocation' referring to a defect on the atomic scale was coined by G. I. Taylor in 1934. Prior to the 1930s, one of the enduring challenges of materials science was to explain plasticity in microscopic terms. A simplistic attempt to calculate the shear stress at which neighbouring atomic planes slip over each other in a perfect crystal suggests that, for a material with shear modulus G, the shear strength is given approximately by: τmax = G/2π The shear modulus in metals is typically within the range 20 000 to 150 000 MPa, indicating a predicted shear stress of 3 000 to 24 000 MPa. This was difficult to reconcile with measured shear stresses in the range of 0.5 to 10 MPa. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor independently proposed that plastic deformation could be explained in terms of the theory of dislocations. Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge. In effect, a half plane of atoms is moved in response to shear stress by breaking and reforming a line of bonds, one (or a few) at a time. The energy required to break a row of bonds is far less than that required to break all the bonds on an entire plane of atoms at once.
Even this simple model of the force required to move a dislocation shows that plasticity is possible at much lower stresses than in a perfect crystal. In many materials, particularly ductile materials, dislocations are the "carrier" of plastic deformation, and the energy required to move them is less than the energy required to fracture the material. Mechanisms A dislocation is a linear crystallographic defect or irregularity within a crystal structure which contains an abrupt change in the arrangement of atoms. The crystalline order is restored on either side of a dislocation but the atoms on one side have moved or slipped. Dislocations define the boundary between slipped and unslipped regions of material and cannot end within a lattice and must either extend to a free edge or form a loop within the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms in the lattice which is called the Burgers vector. The Burgers vector of a dislocation remains constant even though the shape of the dislocation may change. A variety of dislocation types exist, with mobile dislocations known as glissile and immobile dislocations called sessile. The movement of mobile dislocations allow atoms to slide over each other at low stress levels and is known as glide or slip. The movement of dislocations may be enhanced or hindered by the presence of other elements within the crystal and over time, these elements may diffuse to the dislocation forming a Cottrell atmosphere. The pinning and breakaway from these elements explains some of the unusual yielding behavior seen with steels. The interaction of hydrogen with dislocations is one of the mechanisms proposed to explain hydrogen embrittlement. Dislocations behave as though they are a distinct entity within a crystalline material where some types of dislocation can move through the material bending, flexing and changing shape and interacting with other dislocations and features within the crystal. Dislocations are generated by deforming a crystalline material such as metals, which can cause them to initiate from surfaces, particularly at stress concentrations or within the material at defects and grain boundaries. The number and arrangement of dislocations give rise to many of the properties of metals such as ductility, hardness and yield strength. Heat treatment, alloy content and cold working can change the number and arrangement of the dislocation population and how they move and interact in order to create useful properties. Generating dislocations When metals are subjected to cold working (deformation at temperatures which are relatively low as compared to the material's absolute melting temperature, i.e., typically less than ) the dislocation density increases due to the formation of new dislocations. The consequent increasing overlap between the strain fields of adjacent dislocations gradually increases the resistance to further dislocation motion. This causes a hardening of the metal as deformation progresses. This effect is known as strain hardening or work hardening. Dislocation density in a material can be increased by plastic deformation by the following relationship: . Since the dislocation density increases with plastic deformation, a mechanism for the creation of dislocations must be activated in the material. 
Three mechanisms for dislocation formation are homogeneous nucleation, grain boundary initiation, and interfaces between the lattice and the surface, precipitates, dispersed phases, or reinforcing fibers. Homogeneous nucleation The creation of a dislocation by homogeneous nucleation is a result of the rupture of the atomic bonds along a line in the lattice. A plane in the lattice is sheared, resulting in two oppositely faced half planes, or dislocations. These dislocations move away from each other through the lattice. Since homogeneous nucleation forms dislocations from perfect crystals and requires the simultaneous breaking of many bonds, the energy required for homogeneous nucleation is high. For instance, the stress required for homogeneous nucleation in copper has been shown to be τhom/G = 7.4 × 10−2, where G is the shear modulus of copper (46 GPa). Solving for τhom, we see that the required stress is 3.4 GPa, which is very close to the theoretical strength of the crystal. Therefore, in conventional deformation, homogeneous nucleation requires a concentrated stress and is very unlikely. Grain boundary initiation and interface interaction are more common sources of dislocations. Irregularities at the grain boundaries in materials can produce dislocations which propagate into the grain. The steps and ledges at the grain boundary are an important source of dislocations in the early stages of plastic deformation. Frank–Read source The Frank–Read source is a mechanism that is able to produce a stream of dislocations from a pinned segment of a dislocation. Stress bows the dislocation segment, expanding until it creates a dislocation loop that breaks free from the source. Surfaces The surface of a crystal can produce dislocations in the crystal. Due to the small steps on the surface of most crystals, stress in some regions on the surface is much larger than the average stress in the lattice. This stress leads to dislocations. The dislocations are then propagated into the lattice in the same manner as in grain boundary initiation. In single crystals, the majority of dislocations are formed at the surface. The dislocation density 200 micrometres into the surface of a material has been shown to be six times higher than the density in the bulk. However, in polycrystalline materials the surface sources do not have a major effect because most grains are not in contact with the surface. Interfaces The interface between a metal and an oxide can greatly increase the number of dislocations created. The oxide layer puts the surface of the metal in tension because the oxygen atoms squeeze into the lattice, and the oxygen atoms are under compression. This greatly increases the stress on the surface of the metal and consequently the amount of dislocations formed at the surface. The increased amount of stress on the surface steps results in an increase in dislocations formed and emitted from the interface. Dislocations may also form and remain in the interface plane between two crystals. This occurs when the lattice spacings of the two crystals do not match, resulting in a misfit of the lattices at the interface. The stress caused by the lattice misfit is released by forming regularly spaced misfit dislocations. Misfit dislocations are edge dislocations with the dislocation line in the interface plane and the Burgers vector in the direction of the interface normal. Interfaces with misfit dislocations may form e.g. as a result of epitaxial crystal growth on a substrate.
Irradiation Dislocation loops may form in the damage created by energetic irradiation. A prismatic dislocation loop can be understood as an extra (or missing) collapsed disk of atoms, and can form when interstitial atoms or vacancies cluster together. This may happen directly as a result of single or multiple collision cascades, which results in locally high densities of interstitial atoms and vacancies. In most metals, prismatic dislocation loops are the energetically most preferred clusters of self-interstitial atoms. Interaction and arrangement Geometrically necessary dislocations Geometrically necessary dislocations are arrangements of dislocations that can accommodate a limited degree of plastic bending in a crystalline material. Tangles of dislocations are found at the early stage of deformation and appear as non well-defined boundaries; the process of dynamic recovery leads eventually to the formation of a cellular structure containing boundaries with misorientation lower than 15° (low angle grain boundaries). Pinning Adding pinning points that inhibit the motion of dislocations, such as alloying elements, can introduce stress fields that ultimately strengthen the material by requiring a higher applied stress to overcome the pinning stress and continue dislocation motion. The effects of strain hardening by accumulation of dislocations and the grain structure formed at high strain can be removed by appropriate heat treatment (annealing) which promotes the recovery and subsequent recrystallization of the material. The combined processing techniques of work hardening and annealing allow for control over dislocation density, the degree of dislocation entanglement, and ultimately the yield strength of the material. Persistent slip bands Repeated cycling of a material can lead to the generation and bunching of dislocations surrounded by regions that are relatively dislocation free. This pattern forms a ladder like structure known as a persistent slip bands (PSB). PSB's are so-called, because they leave marks on the surface of metals that even when removed by polishing, return at the same place with continued cycling. PSB walls are predominately made up of edge dislocations. In between the walls, plasticity is transmitted by screw dislocations. Where PSB's meet the surface, extrusions and intrusions form, which under repeated cyclic loading, can lead to the initiation of a fatigue crack. Movement Glide Dislocations can slip in planes containing both the dislocation line and the Burgers vector, the so called glide plane. For a screw dislocation, the dislocation line and the Burgers vector are parallel, so the dislocation may slip in any plane containing the dislocation. For an edge dislocation, the dislocation and the Burgers vector are perpendicular, so there is one plane in which the dislocation can slip. Climb Dislocation climb is an alternative mechanism of dislocation motion that allows an edge dislocation to move out of its slip plane. The driving force for dislocation climb is the movement of vacancies through a crystal lattice. If a vacancy moves next to the boundary of the extra half plane of atoms that forms an edge dislocation, the atom in the half plane closest to the vacancy can jump and fill the vacancy. This atom shift moves the vacancy in line with the half plane of atoms, causing a shift, or positive climb, of the dislocation. The process of a vacancy being absorbed at the boundary of a half plane of atoms, rather than created, is known as negative climb. 
Since dislocation climb results from individual atoms jumping into vacancies, climb occurs in single atom diameter increments. During positive climb, the crystal shrinks in the direction perpendicular to the extra half plane of atoms because atoms are being removed from the half plane. Since negative climb involves an addition of atoms to the half plane, the crystal grows in the direction perpendicular to the half plane. Therefore, compressive stress in the direction perpendicular to the half plane promotes positive climb, while tensile stress promotes negative climb. This is one main difference between slip and climb, since slip is caused by only shear stress. One additional difference between dislocation slip and climb is the temperature dependence. Climb occurs much more rapidly at high temperatures than low temperatures due to an increase in vacancy motion. Slip, on the other hand, has only a small dependence on temperature. Dislocation avalanches Dislocation avalanches occur when many dislocations move simultaneously. Dislocation Velocity Dislocation velocity is largely dependent upon shear stress and temperature, and can often be fit using a power law function: $v = A\tau^{m}$, where $A$ is a material constant, $\tau$ is the applied shear stress, and $m$ is a constant that decreases with increasing temperature. Increased shear stress will increase the dislocation velocity, while increased temperature will typically decrease the dislocation velocity. Greater phonon scattering at higher temperatures is hypothesized to be responsible for increased damping forces which slow the dislocation movement. Geometry Two main types of mobile dislocations exist: edge and screw. Dislocations found in real materials are typically mixed, meaning that they have characteristics of both. Edge A crystalline material consists of a regular array of atoms, arranged into lattice planes. An edge dislocation is a defect where an extra half-plane of atoms is introduced midway through the crystal, distorting nearby planes of atoms. When enough force is applied from one side of the crystal structure, this extra plane passes through planes of atoms breaking and joining bonds with them until it reaches the grain boundary. The dislocation has two properties, a line direction, which is the direction running along the bottom of the extra half plane, and the Burgers vector which describes the magnitude and direction of distortion to the lattice. In an edge dislocation, the Burgers vector is perpendicular to the line direction. The stresses caused by an edge dislocation are complex due to its inherent asymmetry. For a dislocation along the $z$ axis with its Burgers vector along $x$, these stresses are described by three equations: $\sigma_{xx} = \dfrac{-\mu b}{2\pi(1-\nu)}\,\dfrac{y(3x^{2}+y^{2})}{(x^{2}+y^{2})^{2}}$, $\sigma_{yy} = \dfrac{\mu b}{2\pi(1-\nu)}\,\dfrac{y(x^{2}-y^{2})}{(x^{2}+y^{2})^{2}}$, and $\tau_{xy} = \dfrac{\mu b}{2\pi(1-\nu)}\,\dfrac{x(x^{2}-y^{2})}{(x^{2}+y^{2})^{2}}$, where $\mu$ is the shear modulus of the material, $b$ is the Burgers vector, $\nu$ is Poisson's ratio and $x$ and $y$ are coordinates. These equations suggest a vertically oriented dumbbell of stresses surrounding the dislocation, with compression experienced by the atoms near the "extra" plane, and tension experienced by those atoms near the "missing" plane. Screw A screw dislocation can be visualized by cutting a crystal along a plane and slipping one half across the other by a lattice vector, the halves fitting back together without leaving a defect. If the cut only goes part way through the crystal, and then slipped, the boundary of the cut is a screw dislocation. It comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes in the crystal lattice. In pure screw dislocations, the Burgers vector is parallel to the line direction.
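A short numerical sketch of the edge-dislocation stress field written out above. The function name and the material constants (rough values for copper) are illustrative assumptions, not data from the article; the expressions themselves are the isotropic-elasticity formulas given in the text.

```python
import numpy as np

def edge_dislocation_stress(x, y, mu, b, nu):
    """Isotropic-elasticity stress field (Pa) around an edge dislocation at the
    origin, line along z and Burgers vector along x.  Valid only outside the
    dislocation core (x and y not both ~0)."""
    r2 = x**2 + y**2
    pref = mu * b / (2.0 * np.pi * (1.0 - nu))
    sigma_xx = -pref * y * (3 * x**2 + y**2) / r2**2
    sigma_yy = pref * y * (x**2 - y**2) / r2**2
    tau_xy = pref * x * (x**2 - y**2) / r2**2
    return sigma_xx, sigma_yy, tau_xy

# Example: stress 5 nm above the core, using rough values for copper (assumed).
sxx, syy, sxy = edge_dislocation_stress(x=0.0, y=5e-9, mu=46e9, b=0.256e-9, nu=0.34)
print(f"sigma_xx = {sxx/1e6:.0f} MPa, sigma_yy = {syy/1e6:.0f} MPa, tau_xy = {sxy/1e6:.0f} MPa")
```

Evaluating the field above the extra half plane gives compressive values, matching the "dumbbell" of compression and tension described in the text.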
An array of screw dislocations can cause what is known as a twist boundary. In a twist boundary, the misalignment between adjacent crystal grains occurs due to the cumulative effect of screw dislocations within the material. These dislocations cause a rotational misorientation between the adjacent grains, leading to a twist-like deformation along the boundary. Twist boundaries can significantly influence the mechanical and electrical properties of materials, affecting phenomena such as grain boundary sliding, creep, and fracture behavior. The stresses caused by a screw dislocation are less complex than those of an edge dislocation and need only one equation, as symmetry allows one radial coordinate to be used: $\tau_{\theta z} = \dfrac{\mu b}{2\pi r}$, where $\mu$ is the shear modulus of the material, $b$ is the Burgers vector, and $r$ is a radial coordinate. This equation suggests a long cylinder of stress radiating outward from the cylinder and decreasing with distance. This simple model results in an infinite value for the core of the dislocation at $r = 0$ and so it is only valid for stresses outside of the core of the dislocation. If the Burgers vector is very large, the core may actually be empty resulting in a micropipe, as commonly observed in silicon carbide. Mixed In many materials, dislocations are found where the line direction and Burgers vector are neither perpendicular nor parallel and these dislocations are called mixed dislocations, consisting of both screw and edge character. They are characterized by $\chi$, the angle between the line direction and Burgers vector, where $\chi = 90^{\circ}$ for pure edge dislocations and $\chi = 0^{\circ}$ for screw dislocations. Partial Partial dislocations leave behind a stacking fault. Two types of partial dislocation are the Frank partial dislocation which is sessile and the Shockley partial dislocation which is glissile. A Frank partial dislocation is formed by inserting or removing a layer of atoms on the {111} plane which is then bounded by the Frank partial. Removal of a close packed layer is known as an intrinsic stacking fault and inserting a layer is known as an extrinsic stacking fault. The Burgers vector is normal to the {111} glide plane so the dislocation cannot glide and can only move through climb. In order to lower the overall energy of the lattice, edge and screw dislocations typically dissociate into a stacking fault bounded by two Shockley partial dislocations. The width of this stacking-fault region is inversely proportional to the stacking-fault energy of the material. The combined effect is known as an extended dislocation and is able to glide as a unit. However, dissociated screw dislocations must recombine before they can cross slip, making it difficult for these dislocations to move around barriers. Materials with low stacking-fault energies have the greatest dislocation dissociation and are therefore more readily cold worked. Stair-rod and the Lomer–Cottrell junction If two glide dislocations that lie on different {111} planes split into Shockley partials and intersect, they will produce a stair-rod dislocation with a Lomer-Cottrell dislocation at its apex. It is called a stair-rod because it is analogous to the rod that keeps carpet in-place on a stair. Jog A jog describes the steps of a dislocation line that are not in the glide plane of a crystal structure. A dislocation line is rarely uniformly straight, often containing many curves and steps that can impede or facilitate dislocation movement by acting as pinning points or nucleation points, respectively.
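The screw-dislocation stress formula and the edge/screw character angle defined above can both be demonstrated in a few lines. The sketch below is illustrative only: the material constants, the example Burgers vector, and the line direction are assumptions, not values from the article.

```python
import numpy as np

def screw_stress(r, mu, b):
    """Shear stress (Pa) around a screw dislocation; valid outside the core (r > 0)."""
    return mu * b / (2.0 * np.pi * r)

def edge_screw_components(burgers, line_dir):
    """Split a mixed dislocation's Burgers vector into a screw part (parallel to the
    line) and an edge part (perpendicular), and return the character angle in degrees."""
    t = np.asarray(line_dir, dtype=float)
    t /= np.linalg.norm(t)
    b = np.asarray(burgers, dtype=float)
    b_screw = np.dot(b, t) * t
    b_edge = b - b_screw
    angle = np.degrees(np.arccos(abs(np.dot(b, t)) / np.linalg.norm(b)))
    return b_edge, b_screw, angle

# Example: a 60-degree mixed dislocation (values are illustrative).
b_edge, b_screw, chi = edge_screw_components(burgers=[1.0, 0.0, 0.0],
                                             line_dir=[1.0, np.sqrt(3.0), 0.0])
print(f"character angle = {chi:.0f} deg")   # 60 deg: neither pure edge nor pure screw
print(f"screw stress at r = 10 nm: {screw_stress(1e-8, 46e9, 0.256e-9)/1e6:.0f} MPa")
```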
Because jogs are out of the glide plane, under shear they cannot move by glide (movement along the glide plane). They instead must rely on vacancy diffusion facilitated climb to move through the lattice. Away from the melting point of a material, vacancy diffusion is a slow process, so jogs act as immobile barriers at room temperature for most metals. Jogs typically form when two non-parallel dislocations cross during slip. The presence of jogs in a material increases its yield strength by preventing easy glide of dislocations. A pair of immobile jogs in a dislocation will act as a Frank–Read source under shear, increasing the overall dislocation density of a material. When a material's yield strength is increased via dislocation density increase, particularly when done by mechanical work, it is called work hardening. At high temperatures, vacancy facilitated movement of jogs becomes a much faster process, diminishing their overall effectiveness in impeding dislocation movement. Kink Kinks are steps in a dislocation line parallel to glide planes. Unlike jogs, they facilitate glide by acting as a nucleation point for dislocation movement. The lateral spreading of a kink from the nucleation point allows for forward propagation of the dislocation while only moving a few atoms at a time, reducing the overall energy barrier to slip. Example in two dimensions (2D) In two dimensions (2D) only the edge dislocations exist, which play a central role in melting of 2D crystals, but not the screw dislocation. Those dislocations are topological point defects which implies that they cannot be created isolated by an affine transformation without cutting the hexagonal crystal up to infinity (or at least up to its border). They can only be created in pairs with antiparallel Burgers vector. If a lot of dislocations are e. g. thermally excited, the discrete translational order of the crystal is destroyed. Simultaneously, the shear modulus and the Young's modulus disappear, which implies that the crystal is molten to a fluid phase. The orientational order is not yet destroyed (as indicated by lattice lines in one direction) and one finds - very similar to liquid crystals - a fluid phase with typically a six-folded director field. This so-called hexatic phase still has an orientational stiffness. The isotropic fluid phase appears, if the dislocations dissociate into isolated five-folded and seven-folded disclinations. This two step melting is described within the so-called Kosterlitz-Thouless-Halperin-Nelson-Young-theory (KTHNY theory), based on two transitions of Kosterlitz-Thouless-type. Observation Transmission electron microscopy (TEM) Transmission electron microscopy can be used to observe dislocations within the microstructure of the material. Thin foils of material are prepared to render them transparent to the electron beam of the microscope. The electron beam undergoes diffraction by the regular crystal lattice planes into a diffraction pattern and contrast is generated in the image by this diffraction (as well as by thickness variations, varying strain, and other mechanisms). Dislocations have different local atomic structure and produce a strain field, and therefore will cause the electrons in the microscope to scatter in different ways. Note the characteristic 'wiggly' contrast of the dislocation lines as they pass through the thickness of the material in the figure (dislocations cannot end in a crystal, and these dislocations are terminating at the surfaces since the image is a 2D projection). 
Dislocations do not have random structures; the local atomic structure of a dislocation is determined by the Burgers vector. One very useful application of the TEM in dislocation imaging is the ability to experimentally determine the Burgers vector. Determination of the Burgers vector is achieved by what is known as $\mathbf{g}\cdot\mathbf{b}$ ("g dot b") analysis. When performing dark field microscopy with the TEM, a diffracted spot is selected to form the image (as mentioned before, lattice planes diffract the beam into spots), and the image is formed using only electrons that were diffracted by the plane responsible for that diffraction spot. The vector in the diffraction pattern from the transmitted spot to the diffracted spot is the $\mathbf{g}$ vector. The contrast of a dislocation is scaled by a factor of the dot product of this vector and the Burgers vector ($\mathbf{g}\cdot\mathbf{b}$). As a result, if the Burgers vector and $\mathbf{g}$ vector are perpendicular, there will be no signal from the dislocation and the dislocation will not appear at all in the image. Therefore, by examining different dark field images formed from spots with different g vectors, the Burgers vector can be determined. Other methods Field ion microscopy and atom probe techniques offer methods of producing much higher magnifications (typically 3 million times and above) and permit the observation of dislocations at an atomic level. Where surface relief can be resolved to the level of an atomic step, screw dislocations appear as distinctive spiral features – thus revealing an important mechanism of crystal growth: where there is a surface step, atoms can more easily add to the crystal, and the surface step associated with a screw dislocation is never destroyed no matter how many atoms are added to it. Chemical etching When a dislocation line intersects the surface of a metallic material, the associated strain field locally increases the relative susceptibility of the material to acid etching and an etch pit of regular geometrical format results. In this way, dislocations in silicon, for example, can be observed indirectly using an interference microscope. Crystal orientation can be determined by the shape of the etch pits associated with the dislocations. If the material is deformed and repeatedly re-etched, a series of etch pits can be produced which effectively trace the movement of the dislocation in question. Dislocation forces Forces on dislocations Dislocation motion as a result of external stress on a crystal lattice can be described using virtual internal forces which act perpendicular to the dislocation line. The Peach-Koehler equation can be used to calculate the force per unit length on a dislocation as a function of the Burgers vector, $\mathbf{b}$, the stress, $\boldsymbol{\sigma}$, and the sense vector, $\mathbf{s}$: $\mathbf{F}/L = (\boldsymbol{\sigma}\cdot\mathbf{b})\times\mathbf{s}$. The force per unit length of dislocation is thus a function of the general state of stress, $\boldsymbol{\sigma}$, and the sense vector, $\mathbf{s}$. The components of the stress field can be obtained from the Burgers vector, the normal stresses, $\sigma$, and the shear stresses, $\tau$. Forces between dislocations The force between dislocations can be derived from the energy of interaction of the dislocations, $W_{\text{int}}$: the work done by displacing the cut faces parallel to a chosen axis that creates one dislocation in the stress field of another. The forces in the $x$ and $y$ directions are then found by taking the derivatives of this energy. Free surface forces Dislocations will also tend to move towards free surfaces due to the lower strain energy.
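A minimal sketch of the g·b invisibility test described above. Given a set of candidate Burgers vectors and the reflections chosen by the microscopist, the dislocation should show no contrast when g·b = 0; the candidate vectors and reflections below are illustrative assumptions for an FCC-like case, not data from the article.

```python
import numpy as np

def visible(g, b, tol=1e-9):
    """A dislocation is (to first order) invisible in a two-beam dark-field image
    when g . b = 0; returns True when contrast is expected."""
    return abs(np.dot(g, b)) > tol

# Candidate Burgers vectors (directions only, typical a/2<110> type; assumed).
candidates = {
    "[110]": np.array([1, 1, 0]),
    "[101]": np.array([1, 0, 1]),
    "[011]": np.array([0, 1, 1]),
}

# Two imaging reflections chosen for the experiment (illustrative values).
for g in (np.array([2, 0, 0]), np.array([0, 2, 0])):
    for name, b in candidates.items():
        print(f"g={g}, b={name}: {'visible' if visible(g, b) else 'invisible'}")
```

Comparing which candidates go invisible under which reflections narrows the Burgers vector down, which is exactly the procedure the paragraph describes.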
This fictitious force can be expressed for a screw dislocation lying parallel to the free surface as $F_x = -\dfrac{\mu b^{2}}{4\pi d}$, where $d$ is the distance from the free surface in the $x$ direction. The force for an edge dislocation parallel to the surface can be expressed as $F_x = -\dfrac{\mu b^{2}}{4\pi(1-\nu)d}$. References External links Defects in Crystals/ Prof. Dr. Helmut Föll website Chapter 5 contains a wealth of information on dislocations; DoITPoMS Online tutorial on dislocations, including movies of dislocations in bubble rafts; Difference between Edge dislocation and Screw dislocation Difference between Edge dislocation and Screw dislocation in detail; Scanning Tunneling Microscope – Gallery Image gallery, including a dislocations page, seen at the atomic level of metal surfaces, by the surface physics group at the Faculty of Physics, Vienna University of Technology, Austria. Volterra, V., "On the equilibrium of multiply-connected bodies," trans. by D. H. Delphenich Somigliana, C., "On the theory of elastic distortions," transl. by D. H. Delphenich Crystallographic defects Mineralogy concepts
Dislocation
[ "Chemistry", "Materials_science", "Engineering" ]
6,030
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
796,240
https://en.wikipedia.org/wiki/Failover
Failover is switching to a redundant or standby computer server, system, hardware component or network upon the failure or abnormal termination of the previously active application, server, system, hardware component, or network in a computer network. Failover and switchover are essentially the same operation, except that failover is automatic and usually operates without warning, while switchover requires human intervention. Systems designers usually provide failover capability in servers, systems or networks requiring near-continuous availability and a high degree of reliability. At the server level, failover automation usually uses a "heartbeat" system that connects two servers, either through using a separate cable (for example, RS-232 serial ports/cable) or a network connection. In the most common design, as long as a regular "pulse" or "heartbeat" continues between the main server and the second server, the second server will not bring its systems online; however, a few systems actively use all servers and can fail over their work to the remaining servers after a failure. There may also be a third "spare parts" server that has running spare components for "hot" switching to prevent downtime. The second server takes over the work of the first as soon as it detects an alteration in the "heartbeat" of the first machine. Some systems have the ability to send a notification of failover. Certain systems, intentionally, do not fail over entirely automatically, but require human intervention. This "automated with manual approval" configuration runs automatically once a human has approved the failover. Failback is the process of restoring a system, component, or service previously in a state of failure back to its original, working state, and having the standby system go from functioning back to standby. The use of virtualization software has allowed failover practices to become less reliant on physical hardware through the process referred to as migration, in which a running virtual machine is moved from one physical host to another with little or no disruption in service. Failover and failback technology is also regularly used in the Microsoft SQL Server database, in which a SQL Server Failover Cluster Instance (FCI) is installed/configured on top of a Windows Server Failover Cluster (WSFC). The SQL Server groups and resources running on the WSFC can be manually failed over to the second node for any planned maintenance on the first node, or automatically failed over to the second node in case of any issues on the first node. In the same way, a failback operation can be performed to the first node once the issue is resolved or the maintenance is done. History The term "failover", although probably in use by engineers much earlier, can be found in a 1962 declassified NASA report. The term "switchover" can be found in the 1950s when describing '"Hot" and "Cold" Standby Systems', with the current meaning of immediate switchover to a running system (hot) and delayed switchover to a system that needs starting (cold). Conference proceedings from 1957 describe computer systems with both Emergency Switchover (i.e. failover) and Scheduled Failover (for maintenance). See also Computer cluster Data integrity Fault-tolerance Fencing (computing) High-availability cluster IT disaster recovery Load balancing Log shipping Safety engineering Teleportation (virtualization) References Computer networking Fault-tolerant computer systems
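A minimal sketch of the heartbeat-driven standby described above: the standby listens for pulses from the primary and promotes itself when several pulses in a row are missed. The port, interval, missed-pulse threshold, and the promote_to_primary() hook are illustrative assumptions; real cluster managers add fencing, quorum, failback, and notification on top of this basic loop.

```python
import socket

HEARTBEAT_PORT = 9999        # illustrative UDP port for the heartbeat channel
HEARTBEAT_INTERVAL = 1.0     # seconds between expected pulses (assumed)
MISSED_PULSES_LIMIT = 3      # pulses to miss before declaring the primary dead

def promote_to_primary():
    """Placeholder hook: bring this standby's services online (site-specific)."""
    print("Primary unresponsive -- standby taking over.")

def standby_monitor():
    """Listen for heartbeat pulses from the primary; fail over when they stop."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", HEARTBEAT_PORT))
    sock.settimeout(HEARTBEAT_INTERVAL)
    missed = 0
    while True:
        try:
            sock.recvfrom(64)      # any datagram from the primary counts as a pulse
            missed = 0
        except socket.timeout:
            missed += 1
            if missed >= MISSED_PULSES_LIMIT:
                promote_to_primary()
                return

if __name__ == "__main__":
    standby_monitor()
```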
Failover
[ "Technology", "Engineering" ]
677
[ "Computer networking", "Computer engineering", "Reliability engineering", "Computer network stubs", "Computer systems", "Computer science", "Fault-tolerant computer systems", "Computing stubs" ]
6,242,952
https://en.wikipedia.org/wiki/Single-base%20extension
Single-base extension (SBE) is a method for determining the identity of a nucleotide base at a specific position along a nucleic acid. The method is used to identify a single-nucleotide polymorphism (SNP). In the method, an oligonucleotide primer hybridizes to a complementary region along the nucleic acid to form a duplex, with the primer’s terminal 3’-end directly adjacent to the nucleotide base to be identified. Using a DNA polymerase, the oligonucleotide primer is enzymatically extended by a single base in the presence of all four nucleotide terminators; the nucleotide terminator complementary to the base in the template being interrogated is incorporated and identified. The presence of all four terminators suppresses misincorporation of non-complementary nucleotides. Many approaches can be taken for determining the identity of an incorporated terminator, including fluorescence labeling, mass labeling for mass spectrometry, isotope labeling, and tagging the base with a hapten and detecting chromogenically with an anti-hapten antibody-enzyme conjugate (e.g., via an ELISA format). The method was invented by Philip Goelet, Michael Knapp, Richard Douglas and Stephen Anderson while working at the company Molecular Tool. This approach was designed for high-throughput SNP genotyping and was originally called "Genetic Bit Analysis" (GBA). Illumina, Inc. utilizes this method in their Infinium technology (http://www.illumina.com/technology/beadarray-technology/infinium-hd-assay.html) to measure DNA methylation levels in the human genome. References Philip Goelet, Michael R. Knapp, Stephen Anderson, (1999), U.S. Patent No 5,888,819. Washington, DC: U.S. Patent and Trademark Office. Biochemistry detection methods Genetics techniques
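A toy sketch of the primer-extension logic described above: the primer anneals so that its 3' end sits directly adjacent to the interrogated base, and the terminator incorporated by the polymerase is the complement of that base. The sequences, primer length, and function names are invented for illustration; a real assay would of course also handle signal detection and heterozygous calls.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def single_base_extension(template, snp_index, primer_length=10):
    """Return (primer, incorporated terminator) for an SBE assay interrogating
    template[snp_index].  The primer anneals to the template immediately 3' of
    the SNP, so the primer's own 3' end stops one base short of it."""
    anneal_region = template[snp_index + 1 : snp_index + 1 + primer_length]
    primer = reverse_complement(anneal_region)          # written 5' -> 3'
    incorporated = COMPLEMENT[template[snp_index]]      # dideoxy terminator added
    return primer, incorporated

# Invented example sequence; the base at index 10 is the "SNP" being typed.
template = "ACGTACGTACCTAGCTAGGATCC"   # 5' -> 3'
primer, ddntp = single_base_extension(template, snp_index=10)
print(f"primer 5'->3': {primer}")
print(f"terminator incorporated: dd{ddntp}TP")
```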
Single-base extension
[ "Chemistry", "Engineering", "Biology" ]
423
[ "Biochemistry methods", "Genetics techniques", "Genetic engineering", "Chemical tests", "Biochemistry detection methods" ]
6,243,282
https://en.wikipedia.org/wiki/Meshfree%20methods
In the field of numerical analysis, meshfree methods are those that do not require connection between nodes of the simulation domain, i.e. a mesh, but are rather based on interaction of each node with all its neighbors. As a consequence, original extensive properties such as mass or kinetic energy are no longer assigned to mesh elements but rather to the single nodes. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort. The absence of a mesh allows Lagrangian simulations, in which the nodes can move according to the velocity field. Motivation Numerical methods such as the finite difference method, finite-volume method, and finite element method were originally defined on meshes of data points. In such a mesh, each point has a fixed number of predefined neighbors, and this connectivity between neighbors can be used to define mathematical operators like the derivative. These operators are then used to construct the equations to simulate—such as the Euler equations or the Navier–Stokes equations. But in simulations where the material being simulated can move around (as in computational fluid dynamics) or where large deformations of the material can occur (as in simulations of plastic materials), the connectivity of the mesh can be difficult to maintain without introducing error into the simulation. If the mesh becomes tangled or degenerate during simulation, the operators defined on it may no longer give correct values. The mesh may be recreated during simulation (a process called remeshing), but this can also introduce error, since all the existing data points must be mapped onto a new and different set of data points. Meshfree methods are intended to remedy these problems. Meshfree methods are also useful for: Simulations where creating a useful mesh from the geometry of a complex 3D object may be especially difficult or require human assistance Simulations where nodes may be created or destroyed, such as in cracking simulations Simulations where the problem geometry may move out of alignment with a fixed mesh, such as in bending simulations Simulations containing nonlinear material behavior, discontinuities or singularities Example In a traditional finite difference simulation, the domain of a one-dimensional simulation would be some function $u(x,t)$, represented as a mesh of data values $u_i^n$ at points $x_i$ and times $t_n$, where the spatial step is $h = x_{i+1} - x_i$ and the timestep is $k = t_{n+1} - t_n$. We can define the derivatives that occur in the equation being simulated using some finite difference formulae on this domain, for example $\dfrac{\partial u}{\partial x} \approx \dfrac{u_{i+1}^{n} - u_{i-1}^{n}}{2h}$ and $\dfrac{\partial u}{\partial t} \approx \dfrac{u_i^{n+1} - u_i^{n}}{k}$. Then we can use these definitions of $u(x,t)$ and its spatial and temporal derivatives to write the equation being simulated in finite difference form, then simulate the equation with one of many finite difference methods. In this simple example, the steps (here the spatial step $h$ and timestep $k$) are constant along all the mesh, and the left and right mesh neighbors of the data value at $x_i$ are the values at $x_{i-1}$ and $x_{i+1}$, respectively. Generally in finite differences one can allow very simply for steps variable along the mesh, but all the original nodes should be preserved and they can move independently only by deforming the original elements. If even only two of all the nodes change their order, or even only one node is added to or removed from the simulation, that creates a defect in the original mesh and the simple finite difference approximation can no longer hold.
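To make the meshed starting point concrete, here is a minimal finite-difference sketch on a fixed 1D mesh, intended as a counterpart to the SPH sketch further below. It solves the heat equation with the explicit forward-time, central-space scheme; the choice of equation, the parameters, and the initial condition are assumptions for illustration and are not taken from the article.

```python
import numpy as np

# Explicit FTCS scheme for the 1D heat equation u_t = alpha * u_xx on a fixed mesh.
N = 101                      # number of mesh points
h = 1.0 / (N - 1)            # spatial step
alpha = 1.0                  # diffusivity (assumed)
k = 0.4 * h**2 / alpha       # timestep chosen inside the stability limit k <= h^2/(2*alpha)

x = np.linspace(0.0, 1.0, N)
u = np.exp(-100.0 * (x - 0.5) ** 2)    # initial condition: a narrow Gaussian pulse

for _ in range(500):
    u_new = u.copy()
    # the second central difference uses each node's fixed left/right mesh neighbours
    u_new[1:-1] = u[1:-1] + alpha * k / h**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u = u_new                 # boundary values stay fixed (Dirichlet)

print(f"peak value after diffusion: {u.max():.3f}")
```

Note how every update relies on the fixed neighbour relationship of the mesh; this is exactly the connectivity that becomes problematic when nodes move, merge, or are removed.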
Smoothed-particle hydrodynamics (SPH), one of the oldest meshfree methods, solves this problem by treating data points as physical particles with mass and density that can move around over time, and carry some value $u$ with them. SPH then defines the value of $u(x)$ between the particles by $u(x) = \sum_j \dfrac{m_j}{\rho_j}\, u_j\, W(|x - x_j|, h)$, where $m_j$ is the mass of particle $j$, $\rho_j$ is the density of particle $j$, and $W$ is a kernel function that operates on nearby data points and is chosen for smoothness and other useful qualities. By linearity, we can write the spatial derivative as $\dfrac{\partial u}{\partial x}(x) = \sum_j \dfrac{m_j}{\rho_j}\, u_j\, \dfrac{\partial W}{\partial x}(|x - x_j|, h)$. Then we can use these definitions of $u(x)$ and its spatial derivatives to write the equation being simulated as an ordinary differential equation, and simulate the equation with one of many numerical methods. In physical terms, this means calculating the forces between the particles, then integrating these forces over time to determine their motion. The advantage of SPH in this situation is that the formulae for $u(x)$ and its derivatives do not depend on any adjacency information about the particles; they can use the particles in any order, so it doesn't matter if the particles move around or even exchange places. One disadvantage of SPH is that it requires extra programming to determine the nearest neighbors of a particle. Since the kernel function $W$ only returns nonzero results for nearby particles within twice the "smoothing length" (because we typically choose kernel functions with compact support), it would be a waste of effort to calculate the summations above over every particle in a large simulation. So typically SPH simulators require some extra code to speed up this nearest neighbor calculation. History One of the earliest meshfree methods is smoothed particle hydrodynamics, presented in 1977. Libersky et al. were the first to apply SPH in solid mechanics. The main drawbacks of SPH are inaccurate results near boundaries and tension instability that was first investigated by Swegle. In the 1990s a new class of meshfree methods emerged based on the Galerkin method. This first method called the diffuse element method (DEM), pioneered by Nayroles et al., utilized the MLS approximation in the Galerkin solution of partial differential equations, with approximate derivatives of the MLS function. Thereafter Belytschko pioneered the Element Free Galerkin (EFG) method, which employed MLS with Lagrange multipliers to enforce boundary conditions, higher order numerical quadrature in the weak form, and full derivatives of the MLS approximation which gave better accuracy. Around the same time, the reproducing kernel particle method (RKPM) emerged, the approximation motivated in part to correct the kernel estimate in SPH: to give accuracy near boundaries, in non-uniform discretizations, and higher-order accuracy in general. Notably, in a parallel development, the Material point methods were developed around the same time which offer similar capabilities. Material point methods are widely used in the movie industry to simulate large deformation solid mechanics, such as snow in the movie Frozen. RKPM and other meshfree methods were extensively developed by Chen, Liu, and Li in the late 1990s for a variety of applications and various classes of problems. During the 1990s and thereafter several other varieties were developed including those listed below.
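A minimal sketch of the SPH summation written out above, in 1D with a Gaussian kernel. The particle layout, the carried field, the kernel choice, and the smoothing length are illustrative assumptions; a production code would also use a neighbor search and a compact-support kernel, as the text notes.

```python
import numpy as np

def kernel(r, smoothing_length):
    """1D Gaussian smoothing kernel W(r, h), normalized so it integrates to one."""
    q = r / smoothing_length
    return np.exp(-q**2) / (np.sqrt(np.pi) * smoothing_length)

def sph_value(x, positions, values, masses, densities, smoothing_length):
    """Evaluate u(x) = sum_j (m_j / rho_j) * u_j * W(|x - x_j|, h); the sum needs
    no mesh connectivity -- only the particles' current positions."""
    weights = masses / densities * kernel(np.abs(x - positions), smoothing_length)
    return np.sum(values * weights)

# A few particles carrying the field u = sin(x) (illustrative setup).
positions = np.linspace(0.0, np.pi, 25)
values = np.sin(positions)
masses = np.full_like(positions, np.pi / 25)
densities = np.ones_like(positions)       # uniform unit density for simplicity

print(f"u(pi/2) ~ {sph_value(np.pi/2, positions, values, masses, densities, 0.3):.3f}")
```

Because the sum only needs particle positions, the particles can be shuffled, moved, or added without invalidating the approximation, which is the point of the paragraph above.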
Smoothed particle hydrodynamics (SPH) (1977) Diffuse element method (DEM) (1992) Dissipative particle dynamics (DPD) (1992) Element-free Galerkin method (EFG / EFGM) (1994) Reproducing kernel particle method (RKPM) (1995) Finite point method (FPM) (1996) Finite pointset method (FPM) (1998) hp-clouds Natural element method (NEM) Material point method (MPM) Meshless local Petrov Galerkin (MLPG) (1998) Generalized-strain mesh-free (GSMF) formulation (2016) Moving particle semi-implicit (MPS) Generalized finite difference method (GFDM) Particle-in-cell (PIC) Moving particle finite element method (MPFEM) Finite cloud method (FCM) Boundary node method (BNM) Meshfree moving Kriging interpolation method (MK) Boundary cloud method (BCM) Method of fundamental solutions (MFS) Method of particular solution (MPS) Method of finite spheres (MFS) Discrete vortex method (DVM) Reproducing Kernel Particle Method (RKPM) (1995) Generalized/Gradient Reproducing Kernel Particle Method (2011) Finite mass method (FMM) (2000) Smoothed point interpolation method (S-PIM) (2005). Meshfree local radial point interpolation method (RPIM). Local radial basis function collocation Method (LRBFCM) Viscous vortex domains method (VVD) Cracking Particles Method (CPM) (2004) Discrete least squares meshless method (DLSM) (2006) Immersed Particle Method (IPM) (2006) Optimal Transportation Meshfree method (OTM) (2010) Repeated replacement method (RRM) (2012) Radial basis integral equation method Least-square collocation meshless method (2001) Exponential Basis Functions method (EBFs) (2010) Related methods: Moving least squares (MLS) – provide general approximation method for arbitrary set of nodes Partition of unity methods (PoUM) – provide general approximation formulation used in some meshfree methods Continuous blending method (enrichment and coupling of finite elements and meshless methods) – see eXtended FEM, Generalized FEM (XFEM, GFEM) – variants of FEM (finite element method) combining some meshless aspects Smoothed finite element method (S-FEM) (2007) Gradient smoothing method (GSM) (2008) Advancing front node generation (AFN) Local maximum-entropy (LME) – see Space-Time Meshfree Collocation Method (STMCM) – see , Meshfree Interface-Finite Element Method (MIFEM) (2015) - a hybrid finite element-meshfree method for numerical simulation of phase transformation and multiphase flow problems Recent development The primary areas of advancement in meshfree methods are to address issues with essential boundary enforcement, numerical quadrature, and contact and large deformations. The common weak form requires strong enforcement of the essential boundary conditions, yet meshfree methods in general lack the Kronecker delta property. This make essential boundary condition enforcement non-trivial, at least more difficult than the Finite element method, where they can be imposed directly. Techniques have been developed to overcome this difficulty and impose conditions strongly. Several methods have been developed to impose the essential boundary conditions weakly, including Lagrange multipliers, Nitche's method, and the penalty method. As for quadrature, nodal integration is generally preferred which offers simplicity, efficiency, and keeps the meshfree method free of any mesh (as opposed to using Gauss quadrature, which necessitates a mesh to generate quadrature points and weights). 
Nodal integration however, suffers from numerical instability due to underestimation of strain energy associated with short-wavelength modes, and also yields inaccurate and non-convergent results due to under-integration of the weak form. One major advance in numerical integration has been the development of a stabilized conforming nodal integration (SCNI) which provides a nodal integration method which does not suffer from either of these problems. The method is based on strain-smoothing which satisfies the first order patch test. However, it was later realized that low-energy modes were still present in SCNI, and additional stabilization methods have been developed. This method has been applied to a variety of problems including thin and thick plates, poromechanics, convection-dominated problems, among others. More recently, a framework has been developed to pass arbitrary-order patch tests, based on a Petrov–Galerkin method. One recent advance in meshfree methods aims at the development of computational tools for automation in modeling and simulations. This is enabled by the so-called weakened weak (W2) formulation based on the G space theory. The W2 formulation offers possibilities to formulate various (uniformly) "soft" models that work well with triangular meshes. Because a triangular mesh can be generated automatically, it becomes much easier in re-meshing and hence enables automation in modeling and simulation. In addition, W2 models can be made soft enough (in uniform fashion) to produce upper bound solutions (for force-driving problems). Together with stiff models (such as the fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated. Typical W2 models are the Smoothed Point Interpolation Methods (or S-PIM). The S-PIM can be node-based (known as NS-PIM or LC-PIM), edge-based (ES-PIM), and cell-based (CS-PIM). The NS-PIM was developed using the so-called SCNI technique. It was then discovered that NS-PIM is capable of producing upper bound solution and volumetric locking free. The ES-PIM is found superior in accuracy, and CS-PIM behaves in between the NS-PIM and ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (it accommodates the discontinuous displacement functions, as long as it is in G1 space), which opens further rooms for future developments. The W2 formulation has also led to the development of combination of meshfree techniques with the well-developed FEM techniques, and one can now use triangular mesh with excellent accuracy and desired softness. A typical such a formulation is the so-called smoothed finite element method (or S-FEM). The S-FEM is the linear version of S-PIM, but with most of the properties of the S-PIM and much simpler. It is a general perception that meshfree methods are much more expensive than the FEM counterparts. The recent study has found however, some meshfree methods such as the S-PIM and S-FEM can be much faster than the FEM counterparts. The S-PIM and S-FEM works well for solid mechanics problems. For CFD problems, the formulation can be simpler, via strong formulation. A Gradient Smoothing Methods (GSM) has also been developed recently for CFD problems, implementing the gradient smoothing idea in strong form. 
The GSM is similar to [FVM], but uses gradient smoothing operations exclusively in nested fashions, and is a general numerical method for PDEs. Nodal integration has been proposed as a technique to use finite elements to emulate a meshfree behaviour. However, the obstacle that must be overcome in using nodally integrated elements is that the quantities at nodal points are not continuous, and the nodes are shared among multiple elements. See also Continuum mechanics Smoothed finite element method G space Weakened weak form Boundary element method Immersed boundary method Stencil code Particle method References Further reading Belytschko, T., Chen, J.S. (2007). Meshfree and Particle Methods, John Wiley and Sons Ltd. . Liu, G.R. 1st edn, 2002. Mesh Free Methods, CRC Press. . Li, S., Liu, W.K. (2004). Meshfree Particle Methods, Berlin: Springer Verlag. , also as electronic ed.. External links The USACM blog on Meshfree Methods Numerical analysis Numerical differential equations Computational fluid dynamics
Meshfree methods
[ "Physics", "Chemistry", "Mathematics" ]
3,054
[ "Computational fluid dynamics", "Computational mathematics", "Computational physics", "Mathematical relations", "Numerical analysis", "Approximations", "Fluid dynamics" ]
6,247,207
https://en.wikipedia.org/wiki/Heteroclinic%20network
In mathematics, a heteroclinic network is an invariant set in the phase space of a dynamical system. It can be thought of loosely as the union of more than one heteroclinic cycle. Heteroclinic networks arise naturally in a number of different types of applications, including fluid dynamics and population dynamics. The dynamics of trajectories near to heteroclinic networks is intermittent: trajectories spend a long time performing one type of behaviour (often, close to equilibrium), before switching rapidly to another type of behaviour. This type of intermittent switching behaviour has led to several different groups of researchers using them as a way to model and understand various types of neural dynamics. References Dynamical systems
Heteroclinic network
[ "Physics", "Mathematics" ]
151
[ "Mathematical analysis", "Mechanics", "Mathematical analysis stubs", "Dynamical systems" ]
6,248,163
https://en.wikipedia.org/wiki/%CE%91-Bungarotoxin
α-Bungarotoxin is one of the bungarotoxins, components of the venom of the elapid Taiwanese banded krait snake (Bungarus multicinctus). It is a type of α-neurotoxin, a neurotoxic protein that is known to bind competitively and in a relatively irreversible manner to the nicotinic acetylcholine receptor found at the neuromuscular junction, causing paralysis, respiratory failure, and death in the victim. It has also been shown to play an antagonistic role in the binding of the α7 nicotinic acetylcholine receptor in the brain, and as such has numerous applications in neuroscience research. History Bungarotoxins are a group of toxins that are closely related with the neurotoxic proteins predominantly present in the venom of kraits. These toxins are directly linked to the three-finger toxin superfamily. Among them, α-bungarotoxin (α-BTX)  stands out, being a peptide toxin produced by the Elapid Taiwanese banded krait snake, also known as the many-banded krait or the Taiwanese or Chinese krait. Elapid Taiwanese banded family krait snake (Bungarus multicinctus) is part of the Elapide snake family. The krait venom, like the majority of the snake venoms, involves a combination of proteins that together lead to a remarkable range of neurologic consequences. The Elapid snake family is known for their potent α-neurotoxic venom, which has a postsynaptic mechanism of action. These neurotoxins primarily affect the nervous system, blocking the nerve impulse transmission, leading to paralysis and potentially death if untreated. The first time that many-banded krait was described was in 1861 by the scientist Edward Blyth. It was characterized by its distinctive black-and-white banded pattern along its body, with a maximum length of 1.85 m. This very venomous species is found in central and southern China and Southeast Asia. Their venom contains various neurotoxins, being α-BTX one of them. According to later research on its mechanism of action, α-bungarotoxin binds irreversibly to the postsynaptic nicotinic acetylcholine receptor (nAChR) at the neuromuscular junction. By this way, it inhibits the action of acetylcholine competitively, leading to respiratory failure, paralysis and even death. In South and Southeast Asia, envenomation from the many-banded krait bite is a common and life-threatening medical condition when not promptly treated. Upon the snakebite, the venom is injected into the victim's tissues. It starts diffusing and spreading throughout the surrounding tissues via the bloodstream. Once the venom is in the circulatory system, it can reach the target organs and tissues. In this case,  α-bungarotoxin specifically targets the nervous system, interfering with the nerve impulse transmission. Nevertheless, krait bites usually take place at night and do not show any local symptoms, so victims are not aware of the bite. This delays receiving medical care, which makes it the major cause of mortality associated with krait envenomation. The primary target of neurotoxins is the neuromuscular junction of skeletal muscles, where the motor nerve terminal and the nicotinic acetylcholine receptor are the major target sites. Their neurotoxic effect is often referred to as resistant neurotoxicity. This is because of the damage caused to nerve terminals that leads to acetylcholine depletion at the neuromuscular junction. The regeneration of the synapses can take days, which prolongs the paralysis and recovery process for the victim. 
In addition, the severity of the paralysis ranges from mild to life-threatening depending on the degree of envenomination, its composition and the early therapeutic intervention. Antivenom therapy is the current standard treatment for snake envenoming. In China, the Bungarus multicinctus monovalent antivenom (BMMAV) is produced and, in Taiwan the Neuro bivalent antivenom (NBAV). Both antivenoms are immunoreactive to the neurotoxins found in the venom, including the α-BTX, which neutralize the venom lethality. BMMAV is specifically designed to neutralize the venom of the Bungarus multicinctus, therefore being more efficacious compared to NBAV. On the other hand, NBAV targets the venom from multiple species of snakes that produce neurotoxic effects, including the Bungarus multicinctus. The use of BMMAV or NBAV might differ based on availability, regional protocols and the specific venomous snake that is present in the area. Structure and available forms α-Bungarotoxin consists of an 8 kDa,  single polypeptide chain that contains 74 amino acid residues. This polypeptide chain is cross-linked by five disulfide bridges, categorizing the α-bungarotoxin as a type II α-neurotoxin within the three-finger toxin family. These disulfide bridges are formed between the specific cysteine residues and are important for the stability and function of the toxin. Furthermore, α-bungarotoxin contains ten residues of half-cysteine per molecule. The specific arrangements of disulfide bridges formed by these cysteine residues result in the 11-ring structure within the toxin molecule. This 11-ring structure is particularly essential for the toxin interactions with the target receptors and modulation of the neurotransmission at the neuromuscular junction. The amino acid sequence of the α-bungarotoxin contains a high frequency of homodipeptides, with ten pairs present where serine and proline dipeptides occur twice in the sequence. The active site of the toxin is located in the region from position 24 to position 45 within the sequence. There are some key amino acids commonly found in this region that include cysteine, arginine, glycine, lysine and valine. As previously mentioned, cysteine is crucial for the disulfide bridges formation in proteins. Arginine and lysine can participate in interactions with negatively charged molecules or residues, so they may play a role in the binding to specific receptors or substrates. Glycine may contribute to the flexibility and conformational dynamics of the α-bungarotoxin. Lastly, the valine residue may help maintain the hydrophobic core of the toxin. Similar to other α-neurotoxins within the three-finger toxin family, α-bungarotoxin exhibits a tertiary structure that is characterized by three projecting "finger" loops, a C-terminal tail, and a small globular core stabilized by four disulfide bonds. Notably, an additional disulfide bond is present in the second loop, facilitating a proper binding through the mobility of the tips of fingers I and II. Furthermore, hydrogen bonds contribute to the formation of an antiparallel  β-sheet, maintaining the parallel orientation of the second and third loops. The structural integrity of the three-finger toxin is preserved by four of the disulfide bridges, while the fifth bridge, located on the tip of the second loop, can be reduced without compromising toxicity. The α-bungarotoxin polypeptide chain shows significant sequence homology with other neurotoxins from cobra and sea snake venoms, particularly with the α-toxin from Naja nivea. 
Comparing α-bungarotoxin with these homologous toxins from cobra and sea snake venoms, it was revealed that there is a high degree of conservation in certain residues. For instance, there are 18 constant residues, which include the eight half-cysteines, that are observed in all toxin sequences. Therefore, α-bungarotoxin shares common structural motifs with other toxins of the three-fingered family. For example, α-cobra toxin, erabutoxin A, and candoxin contain three adjacent loops coming up from a globular, small and hydrophobic core that is cross-linked by four conserved disulfide bridges. This conservation suggests the presence of essential functional elements that are shared among these neurotoxins. Lastly, the abundance of the disulfide bonds and the limited secondary structure that is observed in the α-bungarotoxin explains its exceptional stability, which makes it resistant to denaturation even under extreme conditions such as boiling and exposure to strong acids. Synthesis Chemical synthesis Due to its very large and complex structure, synthesizing α-bungarotoxin has represented a great challenge for synthetic chemists. [16] A study conducted by O. Brun et al. proposed a mechanism for the chemical synthesis of this neurotoxin. It involves a strategy utilizing peptide fragments and native chemical ligation (NCL). Due to its length, synthesizing a full linear peptide using solid-phase peptide synthesis (SPPS) is not achievable, thus, the synthesis was done by choosing three peptide fragments that can further undergo the native chemical ligation. This method produces a native peptide bond between two fragments by reacting thioester (C-terminal) with cysteine (N-terminal). The synthesis strategy employed was from the C-terminus towards the N-terminus. Firstly, the shorter peptide fragments are synthesized via automated SPPS. The first two peptides have a Trp-Cys ligation point, while the ligation with the last fragment occurs in a Gly-Cys ligation point. Additionally, in this study, an alkyne functionality was introduced at the N-terminus of the peptide chain. This allows the conjugation of different molecules such as fluorophores via bioorthogonal reactions. By fluorescently labelling the chemically synthesised peptide it was shown it has the same effect and functionality on the nicotinic receptors as the naturally occurring α-bungarotoxin. Purification Due to the challenging chemical synthesis of the neurotoxin, most studies were conducted using a purified form. To investigate the effects of the α-bungarotoxin, the toxin has to be isolated from the venom of the elapid snake. The purification of the polypeptide is done via column chromatography. Firstly, the venom is dissolved in ammonium acetate buffer and then loaded on the CM-Sephadex column. The elution of the compound is done in two different steps by using an ammonium acetate buffer at a flow rate of 35 nl/h. The steps involve using two linear gradients of buffers while increasing the pH. Biosynthesis α-Bungarotoxin is a peptide, therefore it undergoes the protein synthesis pathway, involving transcription and translation. The specific genes encoding for the protein are transcribed into mRNA, which is then translated via the ribosomes, leading to the synthesis of the prepropeptide. Lastly, post-translational modification and folding occur. The mature peptide is stored in the venom gland until envenomation when it gets released. 
Mechanism of action The venom of snakes contains numerous proteins and peptide toxins that exhibit high affinity and specificity for a large range of receptors. α-Bungarotoxin is a nicotinic receptor antagonist that binds irreversibly to the receptor, inhibiting the action of acetylcholine at the neuromuscular junctions. Nicotinic receptors are one of the two subtypes of cholinergic receptors that respond to the neurotransmitter acetylcholine. Nicotinic acetylcholine receptors (nAChRs) are ligand-gated ion channels and belong to the ionotropic receptors. When a ligand is bound, the receptor regulates excitability by controlling ion flow during the action potential in neurotransmission, primarily through the activation of voltage-gated ion channels upon depolarization of the plasma membrane. The depolarization is induced by an influx of cations, mainly that of sodium ions. For the overall modulation of cellular excitability, an influx of sodium ions and an efflux of potassium ions into the intracellular space is necessary. In the central and peripheral nervous system, α-bungarotoxin induces paralysis in skeletal muscles by binding to the α7 subtype of nicotinic receptors. α-Neurotoxins are known as "curare-mimetic toxins" due to their similar effects to the arrow poison tubocurarine. A difference between α-neurotoxins and curare alkaloids is that the former bind irreversibly, whereas the latter bind reversibly. α-Neurotoxins block the action of acetylcholine (ACh) at the postsynaptic membrane by irreversibly inhibiting the ion flow. From the same toxin family of Bungarotoxins (BTX), κ-BTX was shown to act postsynaptically on α3 and α4 neuronal nicotinic receptors with little effect on the muscular nAChRs, targeted by α-BTX. In contrast, β- and γ-BTX act presynaptically by reducing ACh release. It is important to note that neurotoxins are named based on the receptor type they target. The nicotinic receptors are made up of five subunits each and contain two binding sites for snake venom neurotoxins.[20] The α7-nAChR is a homopentamer consisting of five identical α7 subunits. The α7 receptor is known to have a higher Ca2+ permeability compared to other nicotinic receptors. Intracellular changes in Ca2+ can activate important cellular pathways such as the STAT pathway or NF-κB signalling. Consistent with experimental data on the amount of toxin per receptor, a single molecule of the toxin is sufficient to inhibit channel opening. Some computational studies of the mechanism of inhibition using normal mode dynamics suggest that a twist-like motion caused by ACh binding may be responsible for pore opening and that this motion is inhibited by toxin binding. Metabolism The following section describes the ADME (absorption, distribution, metabolism and excretion) of α-bungarotoxin. It is important to note that there is limited information available on the pharmacokinetics of this neurotoxin. More research is needed to be able to fully understand the metabolism of this neurotoxin inside the body. Absorption: α-bungarotoxin enters the body at the bite site during envenomation and passes into the bloodstream. Through the venom, a mixture of proteins and different molecules enters the body. Distribution: Once in the bloodstream, α-bungarotoxin circulates throughout the body. Its distribution may be influenced by factors such as blood flow, tissue permeability, and the presence of binding proteins.
Additionally, knowing it binds to nAChRs, it can be predicted where the neurotoxin would be present: neuromuscular junctions, autonomic ganglia, peripheral nerves, and the adrenal medulla. Another main location is the central nervous system (CNS), including the brain; specific regions such as the hippocampus, cortex, and basal ganglia contain these receptors. Metabolism: The metabolic pathways of this neurotoxin are not yet fully understood; however, it is thought to be metabolised in the liver. Researching venom metabolism is challenging due to the multiple components present in it. Toxins that are not bound may undergo elimination through opsonization by the reticuloendothelial system, mainly involving the liver and kidneys, or they may undergo degradation through cellular internalization facilitated by lysosomes. Excretion: It is common for proteins and peptides to be excreted via the hepatic and renal pathways. In the liver, the amino acids present undergo transamination. This way the amino acids are converted into ammonia and keto acids. Lastly, these substances are excreted via the kidney. However, it is important to take into account that α-bungarotoxin binds irreversibly to the receptors, which would result in a very low metabolic and excretion rate, as most of the neurotoxin would be present at the receptor sites. Indications, availability, efficacy, adverse effects Indications α-Bungarotoxin is among the most well-characterized snake toxins, with high affinity and specificity for nicotinic acetylcholine receptors. It is a competitive antagonist at nAChRs, where it irreversibly and competitively blocks the receptor at the acetylcholine binding sites. It binds to the α1 subunit contained in muscle nAChRs, as well as subsets of neuronal nAChRs like α7-α10. In addition, it was shown that α-bungarotoxin binds to, and blocks, a subset of GABAA receptors where the β3 subunits connect with each other. With this knowledge in mind, researchers can use α-bungarotoxin as an experimental tool for studying the properties of cholinergic receptors. In addition, by knowing the different and specific binding sites, researchers are able to visualize and track receptor localization and dynamics within cells. This technique has been shown to be easy with the use of a 13-amino acid (WRYYESSLEPYPD) mimotope, which forms a high affinity α-bungarotoxin binding site with the receptors. It has been extensively used in research to study the localization and distribution of these receptors. Techniques like fluorophore or enzyme conjugation, followed by microscopy or immunohistochemical staining respectively, can give insights into the complex organization and function of the nervous system. With the mentioned techniques, researchers can work towards drug development and an understanding of disease mechanisms. They can identify potential drug targets by selectively regulating the activity of certain receptors. By observing how receptors behave in contact with α-bungarotoxin compared to when no toxin is present, researchers can study the mechanism of the toxin. Availability α-Bungarotoxin is available for purchase from multiple biotechnological companies, such as Sigma-Aldrich or Biotium. Researchers may purchase it from there to perform a variety of studies on the toxin. Regarding bioavailability, researchers performed a study in the spinal cord during embryonic development in the embryos of chicks.
They found that the binding of α-bungarotoxin was specific and saturable within the concentration range of 1-34 mM. That is, as the concentration of α-bungarotoxin increased, the available binding sites became more and more limited, reaching the maximum number of occupied sites at 34 mM. Once no binding sites were available anymore, nicotine behaved in a competitive manner and pushed out the already-bound α-bungarotoxin. Another thing they found was that the dissociation constant (Kd) was 8.0 nM - the concentration of α-bungarotoxin at which half of the binding sites were occupied. Moreover, the maximum binding capacity (Bmax) was found to be 106 +/- 12 fmol/mg - the maximum number of binding sites available per unit of protein. Finally, exogenously administered α-bungarotoxin was shown to penetrate the spinal cord tissue and bind to its specific sites after 7 days. Efficacy The efficacy of α-bungarotoxin can be assessed by analyzing its binding affinity. It affects how the signal transmits at the skeletal neuromuscular junction by binding to the postsynaptic nAChRs with high affinity. The affinity of the toxin for this receptor is measured with a dissociation constant (Kd), ranging from 10⁻¹¹ to 10⁻⁹ M. In addition to binding to skeletal neuromuscular junctions, it can specifically bind to different neuronal subsets, such as α7. This binding affinity is only slightly lower, with Kd measured in the range of 10⁻⁹ to 10⁻⁸ M. It can also be analyzed through receptor inhibition, specifically inhibiting the action of acetylcholine on nAChRs. One study found that 5 micrograms/ml of the toxin completely blocks the endplate potential and extrajunctional acetylcholine sensitivity of surface fibers, within approximately 35 minutes in normal and chronically denervated muscles. They performed a washout period of 6.5 hours, which resulted in a partial recovery of the endplate potential, with an amplitude of 0.72 +/- 0.033 mV in normal muscles. In denervated muscles, a partial recovery of acetylcholine sensitivity was observed, with an amplitude of 41.02 +/- 3.95 mV/nC compared to a control amplitude of 1215 +/- 197 mV/nC. This same study also found a small population of acetylcholine receptors (1% of the total population) to react with α-bungarotoxin reversibly. With the toxin, either 20 µM carbamylcholine or decamethonium was used simultaneously in normal muscles. Once the toxin and the drug were washed out, the muscle restored a twitch to control levels within 2 hours. The susceptibility of different species to the venom of a krait snake, which contains alpha-bungarotoxin, varies based on their genetic makeup. α-Bungarotoxin binds best to acetylcholine receptor alpha-subunits containing aromatic amino acid residues at positions 187 and 189 - e.g. in shrews, cats and mice. Species like humans and hedgehogs, which have nonaromatic amino acid residues at the same positions, have a decreased binding affinity for α-bungarotoxin. Finally, snakes and mongooses have specific amino acid substitutions at positions 187, 189, and 194 of the alpha-subunit, which makes binding of the toxin non-existent.
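The Kd and Bmax figures quoted above can be turned into predicted occupancies with the standard single-site (Langmuir) binding isotherm B = Bmax·[L]/(Kd + [L]). The isotherm itself is a textbook assumption introduced here, not a relation given in the article; only the Kd and Bmax values are taken from the text.

```python
# Predicted binding from the standard single-site isotherm B = Bmax * [L] / (Kd + [L]).
# The isotherm is a textbook assumption; Kd and Bmax are the values quoted above
# for alpha-bungarotoxin binding in chick spinal cord.

KD_NM = 8.0          # dissociation constant, nM
BMAX = 106.0         # maximum binding capacity, fmol/mg

for ligand_nm in (1.0, 8.0, 34.0, 100.0):
    bound = BMAX * ligand_nm / (KD_NM + ligand_nm)
    print(f"[toxin] = {ligand_nm:5.1f} nM -> bound ~ {bound:5.1f} fmol/mg "
          f"({100 * bound / BMAX:.0f}% of sites)")
```

At the quoted Kd, exactly half of the sites are occupied at 8 nM, which is the defining property of the dissociation constant mentioned in the text.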
Additionally, it can also lead to mild symptoms like dermatitis and allergic reactions, or stronger symptoms like blood coagulation disorders, disseminated intravascular coagulation, tissue injury, and hemorrhage. Studies have also been done to analyze the effects of α-bungarotoxin in animals. One study showed the toxin causing paralysis in chickens by blocking neuromuscular transmission at the motor end-plate, leading to muscle weakness and, ultimately, paralysis. Snake venoms were already widespread across the world in ancient times, and folk medicine utilized plant-based and bioactive inhibitor compounds to treat bites from venomous animals like snakes and scorpions. This approach proved successful in countering envenomation, effectively mitigating the harmful effects of venom on the victims. Today, treatment for krait bites involves antivenom, which can lead to various undesirable and potentially life-threatening side effects, such as nausea, urticaria, hypotension, cyanosis, and severe allergic reactions. Toxicity α-Bungarotoxin belongs to a group of bungarotoxins, which are a type of poisonous protein found in the venom of kraits - among the six most deadly snakes in Asia. Their bite can lead to respiratory paralysis and death. α-Bungarotoxin irreversibly and competitively binds to muscular and neuronal acetylcholine receptors; the paralysis happens because neuromuscular transmission is blocked at the postsynaptic site. LD50 values, representing the lethal dose required to cause death in 50% of a test population, were studied in mice using different routes of administration. Subcutaneous administration showed that 0.108 mg/kg was needed to kill 50% of mice. Intravenous administration resulted in a slightly higher LD50 value of 0.113 mg/kg. However, when it was administered intraperitoneally, the LD50 value was 0.08 mg/kg. These values can aid in risk assessment of the toxin. See also β-Bungarotoxin κ-Bungarotoxin References External links Structure at GenBank Ion channel toxins Nicotinic antagonists Snake toxins Bungarus Neurotoxins
Α-Bungarotoxin
[ "Chemistry" ]
5,150
[ "Neurochemistry", "Neurotoxins" ]
6,248,586
https://en.wikipedia.org/wiki/Cistrome
In simple words, the cistrome refers to the collection of regulatory elements of a set of genes, including transcription factor binding sites and histone modifications. More specifically, it is "the set of cis-acting targets of a trans-acting factor on a genome-wide scale, also known as the in vivo genome-wide location of transcription factor binding sites or histone modifications". The term cistrome is a portmanteau of cistr- (from cistron) + -ome (from genome). The term cistrome was coined by investigators at the Dana–Farber Cancer Institute and Harvard Medical School. Technologies such as chromatin immunoprecipitation combined with microarray analysis ("ChIP-on-chip") or with massively parallel DNA sequencing ("ChIP-Seq") have greatly facilitated the definition of the cistromes of transcription factors and other chromatin-associated proteins. References Further reading Molecular genetics
Cistrome
[ "Chemistry", "Biology" ]
189
[ "Molecular genetics", "Molecular biology" ]
6,248,691
https://en.wikipedia.org/wiki/Bake-out
Bake-out, in several areas of technology and fabrication, and in building construction, refers to the process of using high temperature (heat), and possibly vacuum, to remove volatile compounds from materials and objects before placing them into situations where the slow release of those same volatile compounds would contaminate the contents of a container or vessel, spoil a vacuum, or cause discomfort (odor or irritation) or illness. Bake-out is an artificial acceleration of the process of outgassing. In manufacturing In physics and vacuum device engineering, such as particle accelerators, semiconductor fabrication, and vacuum tubes, bake-out is a manufacturing process: the period of time when a part or device is placed in a vacuum chamber (or its operating vacuum state, for devices which operate in vacuum) and heated, usually by built-in heaters. This drives off gases, which are removed by continued operation of the vacuum pump. Low hydrogen annealing, or hydrogen bake-out, is used to help reduce or remove hydrogen in bulk stainless steel. In construction In building construction, bake-out is the use of heat to remove volatile organic compounds such as solvents remaining in paint, carpets, and other building materials from a building after its construction, to reduce annoying odors or improve indoor air quality. The building interior is heated to a much higher temperature than normal and kept at that temperature for an extended period of time, to encourage such compounds to vaporize into the air, which is vented (released to the atmosphere). See also Vacuum Indoor air quality Volatile organic compounds References Building biology Vacuum systems
Bake-out
[ "Physics", "Engineering" ]
334
[ "Building engineering", "Vacuum", "Vacuum systems", "Building biology", "Matter" ]
6,249,223
https://en.wikipedia.org/wiki/Bicinchoninic%20acid
Bicinchoninic acid (BCA) is a weak acid composed of two carboxylated quinoline rings. It is an organic compound with the formula (C9H5NCO2H)2: the molecule consists of a pair of quinoline rings, each bearing a carboxylic acid group. Its sodium salt forms a purple complex with cuprous ions. Bicinchoninic acid is most commonly employed in the bicinchoninic acid (BCA) assay, which is used to determine the total concentration of protein in a solution. Bicinchoninic acid is used to detect the presence of cuprous ions through the purple coloration formed via a biuret-type reaction. In this assay, two molecules of bicinchoninic acid chelate a single Cu+ ion, forming a purple water-soluble complex that strongly absorbs light at 562 nm. References Quinolines Dicarboxylic acids Aromatic acids Dimers (chemistry)
Bicinchoninic acid
[ "Chemistry", "Materials_science" ]
206
[ "Dimers (chemistry)", "Polymer chemistry" ]
6,249,365
https://en.wikipedia.org/wiki/Immunoproteomics
Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response. Examples of common applications of immunoproteomics include: The isolation and mass spectrometric identification of MHC (major histocompatibility complex) binding peptides Purification and identification of protein antigens binding specific antibodies (or other affinity reagents) Comparative immunoproteomics to identify proteins and pathways modulated by a specific infectious organism, disease or toxin. The identification of proteins in immunoproteomics is carried out by techniques including gel-based, microarray-based, and DNA-based techniques, with mass spectrometry typically being the ultimate identification method. Applications Immunology Immunoproteomics has been used to increase scientific understanding of both autoimmune disease pathology and progression. Using biochemical techniques, gene and ultimately protein expression can be measured with high fidelity. With this information, the biochemical pathways causing pathology in conditions such as multiple sclerosis and Crohn's disease can potentially be elucidated. Serum antibody identification in particular has proven to be very useful as a diagnostic tool for a number of diseases in modern medicine, in large part due to the relatively high stability of serum antibodies. Immunoproteomic techniques are additionally used for the isolation of antibodies. By identifying and proceeding to sequence antibodies, scientists are able to identify potential protein targets of those antibodies. In doing so, it is possible to determine the antigen(s) responsible for a particular immune response. Identification and engineering of antibodies involved in autoimmune disease pathology may offer novel techniques in disease therapy. Drug engineering By identifying the antigens responsible for a particular immune response, it is possible to identify viable targets for novel drugs. In addition, specific antigens can further be classified based on immunoreactivity for the identification of potential future vaccine preparations. Beyond the identification of vaccine candidates, immunoproteomic techniques such as western blotting can additionally be used for measuring the efficacy of a given vaccine. Technology and instrumentation Mass spectrometry Mass spectrometry can be used in the sequencing of MHC binding motifs, which can subsequently be used to predict T cell epitopes. The technique of peptide mass fingerprinting (PMF) checks a peptide's mass spectrum against a database of protein digests which have already been documented. If the mass spectrum of the protein of interest and that of a database protein share a high degree of homology, it is likely that the protein of interest is contained within the sample. Affinity proteomics Affinity proteomics is a high-throughput method of studying the proteome with antibodies or other affinity reagents (e.g. aptamers). Large numbers (dozens to hundreds) of immune-related cytokines and related markers can be simultaneously assayed in solution, in contrast to a solid substrate such as a microarray. 2-D gel electrophoresis and western blotting Two-dimensional gel electrophoresis (2-D gel) techniques in combination with western blotting have been used for many years in the identification of immune response magnitude. 
This can be accomplished by comparing various samples against molecular-weight size markers for qualitative analysis and against known amounts of protein standards for quantitative analysis. 2-D liquid chromatography By coupling liquid chromatography with a variety of other immunodetection techniques such as serological proteome analysis (SERPA), it is possible to analyze the hydrophobicity, isoelectric point (pI), relative mass, and antibody reactivity of proteins within a given serum. Microarray Microarray analysis of various sera can be used as a means to identify changes in gene expression before, during, and after a given immune response. See also Immunomics References External links An Introduction to Protein Identification Branches of immunology Proteomics
Immunoproteomics
[ "Biology" ]
816
[ "Branches of immunology" ]
6,249,798
https://en.wikipedia.org/wiki/Gusperimus
Gusperimus is an immunosuppressive drug. It is a derivative of the naturally occurring HSP70 inhibitor spergualin, and inhibits the interleukin-2-stimulated maturation of T cells to the S and G2/M phases and the polarization of the T cells into IFN-gamma-secreting Th1 effector T cells, resulting in the inhibition of growth of activated naive CD4 T cells. Gusperimus was developed by Bristol-Myers Squibb. Currently, it is manufactured and sponsored for use as an orphan drug and for clinical studies by the Japanese company Euro Nippon Kayaku. The patent claims that gusperimus may be useful for a variety of hyperreactive inflammatory diseases such as autoimmune diseases. The drug is available in vials containing 100 mg each. There is little information about the pharmacokinetic properties of gusperimus. Overview The European Commission assigned orphan drug status to gusperimus in 2001 for the treatment of granulomatosis with polyangiitis, a serious form of vasculitis frequently associated with permanent disability and/or fatal outcome. There have been many cases of patients resistant to all forms of usual treatment responding very well to gusperimus. It has been proposed that gusperimus may benefit patients with the neurological disease amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease). ALS causes permanent motor deficits and disabilities, up to the point that almost all motor functions, including breathing and bladder control, are lost. Patients usually have no intellectual impairments. Currently, there are no results from controlled studies in ALS patients. There have also been positive and negative anecdotal reports in patients with multiple sclerosis. As with ALS, there are no sufficient studies in MS patients. Gusperimus may possibly be of use in more common diseases and conditions such as rheumatoid arthritis, Crohn's disease, lupus erythematosus, and the prevention and therapy of transplant rejection or graft-versus-host disease. Adverse effects Currently, only provisional and preliminary data about side-effects are available. The following side-effects have been noticed so far: Dysgeusia (abnormal or bad taste) Drug-induced leukopenia (very common) Significant infections related to therapy. It is not known if therapy with gusperimus may increase the risk of malignant diseases (lymphoma, leukemia, solid tumors), as is the case with other highly potent immunosuppressant agents such as ciclosporin or tacrolimus. Interactions There has been little experience with clinically relevant interactions. These might be: Other immunosuppressant drugs: risk of infections increased. Myelotoxic drugs like 6-mercaptopurine: risk of serious bone marrow damage increased. Certain NSAIDs: increased risk of hepatotoxic reactions. Dosage Gusperimus is used in therapeutic cycles. The daily dose and the length of each cycle, as well as the length of the treatment-free interval, depend on the degree of leukopenia/neutropenia caused by gusperimus. It is recommended to obtain complete WBC (white blood cell) counts frequently during and after each cycle. 
Synonyms Common references are: (+−)-15-Deoxyspergualin, 1-Amino-19-guanidino-11-hydroxy-4,9,12-triazanonadecane-10,13-dione, 15-Deoxyspergualin, 15-Deoxyspergualin Hydrochloride, 7-[(Aminoiminomethyl)amino]-N-[2-[[4-[(3-aminopropyl)amino]butyl]amino]]-1-hydroxy-2-oxoethyl]heptanamide, Gusperimus (Trihydrochloride), N-[[[4-[(3-Aminopropyl)amino]-butyl]carbamoyl]hydroxymethyl]]-7-guanidinoheptanamide, Spanidin Synthesis The BOC derivative of 4-aminobutanol is oxidized with Collins reagent to afford the aldehyde. Condensation with the ylide obtained from reaction of 3-triphenylphosphonium propionic acid with lithium hexamethyldisilazane leads to the chain-extended acid. The carboxylic acid is then activated by conversion to its N-hydroxysuccinimide ester; that group is displaced by ammonia to give the corresponding amide, and the BOC group is removed by acid to give the intermediate. Treatment of the aminoamide with 1-amidino-3,5-dimethylpyrazole leads to an exchange of the amidine function and formation of the corresponding guanidine. The saturated guanidino-amide is obtained by catalytic hydrogenation. Amides are well known to participate in the formation of carbinolamines and aminals. Reaction with the glyoxylamide from spermidine (shown as its hydrate) leads to displacement of one of the hydroxyl groups and formation of the corresponding carbinolamine, gusperimus. References https://web.archive.org/web/20120204123305/http://www.bizbozos.com/nci_Gusperimus http://ec.europa.eu/enterprise/pharmaceuticals/register/o034.htm https://web.archive.org/web/20060109201017/http://www.als.net/research/treatments/treatmentDetail.asp?treatmentID=858 Orphan drugs Guanidines Immunosuppressants
Gusperimus
[ "Chemistry" ]
1,240
[ "Guanidines", "Functional groups" ]
23,844,274
https://en.wikipedia.org/wiki/Wynnea%20americana
Wynnea americana, commonly known as moose antlers or rabbit ears, is a species of fungus in the family Sarcoscyphaceae. The uncommon species is recognizable by its spoon-shaped or rabbit ear–shaped fruit bodies that may reach up to  tall. It has dark brown and warty outer surfaces, while the fertile spore-bearing inner surface is orange to pinkish to reddish brown. It is distinguished from other species in its genus by the pustules (small bumps) on the outer surface, and microscopically by the large asymmetrical longitudinally ribbed spores with a sharply pointed tip. The spores are made in structures called asci, which have thickened rings at one end that are capped by a hinged structure known as the operculum—a lid that opens to release spores from the ascus. In eastern North America, where it is typically found growing in the soil underneath hardwood trees, the inedible species is found from New York to Michigan south to Mexico. It has also been collected from Costa Rica, India, and Japan. Taxonomy Wynnea americana was first described in 1905 by American mycologist Roland Thaxter. Thaxter found several clusters of fruit bodies in Burbank, Tennessee in 1888, and believed the fungus to be Wynnea macrotis, one of the first identified species of the genus Wynnea. An 1896 visit to the same location, as well as to Cranberry, North Carolina, yielded further specimens. This time, however, Thaxter noticed that the fruit bodies were not attached to humus, as expected, but rather to "a large, irregularly lobed, brown, firm, tuber-like body buried a few inches deep in the humus." Microscopic examination of this structure and other tissue of the fruit body convinced Thaxter the material was sufficiently different from known Wynnea species to justify the creation of a new species. Both the Tennessee and the North Carolina specimens were used as syntypes to describe the taxon; the Tennessee specimen has since been designated the lectotype (the name-bearing type specimen). In 1946, French mycologist Marcelle Louise Fernande Le Gal determined that the ascus in W. americana was similar in structure to those of the species she placed in the suboperculate series. The common names for W. americana are "moose antlers" or "rabbit ears". Description The fruit bodies (technically called apothecia) of W. americana are erect and spoon- or ear-shaped, and may reach up to  tall by  wide with the edges usually rolled inward. The outside surface is dark brown–purplish, while the inner surface—the spore-bearing hymenium—is pinkish orange to dull purplish red or brown at maturity. The outer surface may develop wrinkles in maturity. The apothecia, which occur singly or in groups of up to about 25, arise from a short stalk. The stalk is variable in length and solid, dark outside, white within. The stalks originate from a sclerotium, a compact mass of hardened mycelium. The sclerotium has an almost gelatinous consistency with irregularly shaped lobes and internal chambers, and may reach a diameter of . The sclerotium's function is thought to be to supply moisture and nutrients, or to serve as a resistant structure capable of sustaining the fungus through times of stress. W. macrotis is the only other species in the genus to bear a sclerotium. Wynnea americana has no discernible odor, and its taste is unknown. It has been described as inedible due to its toughness. Clusters of fruit bodies are connected by large underground masses of compacted mycelia known as sclerotia. 
Armillaria mycelium is commonly found in the underground tissue, though it is unclear if there is a parasitic relationship. Microscopic characteristics As with many cup fungi, microscopic analysis of the anatomy and structure of the apothecium is necessary for accurate identification of species, or to help distinguish between related species that have a similar external appearance. In W. americana, the ectal excipulum (the outer layer of tissue comprising the apothecia) is 125 μm thick, and composed of dark angular to roughly spherical cells that are 40–70 μm in diameter. The angular cells form pyramidal warts on the outer surface. The medullary excipulum (the inner fleshy layer of tissue underneath the ectal excipulum) is almost gelatinous, composed of interwoven hyphae 10 μm in diameter. Several structural components are involved in spore discharge in W. americana, such as the ascus, the operculum, and the suboperculum. The spore-bearing cells, the asci, are 330–400 μm long by 16–20 μm wide. The ascus has a thickened apical ring that is capped by a hinged operculum, a lid that is opened when spores are to be released from the ascus. The presence of the apical ring beneath the operculum and the slanted opening that results is a condition known as "suboperculate", and is shared with Cookeina tricholoma and Phillipsia domingensis, also in the family Sarcoscyphaceae. The spores are scaphoid (boat-shaped), and have dimensions of 35–38 by 12–14 μm. They are marked with prominent longitudinal grooves, and when mature, are apiculate (ending abruptly in a short point). The spores typically contain several oil droplets. The paraphyses (sterile cells interspersed among the asci) are 8–9 μm long and have internal partitions called septa. The structure of the septa has been investigated using transmission electron microscopy, which has revealed that W. americana has a single pore plugged by a "fan-shaped matrix"—an electron-dense region with a torus-shaped ring of translucent tissue wrapped around it. The pore plug resembles those found in the Sarcoscyphaceae species Sarcoscypha occidentalis and Phillipsia domingensis. Similar species The closely related Wynnea sparassoides, known in the vernacular as the "stalked cauliflower fungus", has a fruit body resembling a yellow-brown cauliflower atop a long brown stem. In comparison to W. americana, W. gigantea has apothecia that are smaller, more rounded at the tips, more numerous in a single specimen, and paler in color. Donald H. Pfister, in his 1979 monograph on the genus Wynnea, suggests that the pustulate appearance of the outer surface clearly distinguishes W. americana from the other species in the genus. It also may resemble the purplish-brown and relatively stout Otidea smithii, as well as Midotis lingua, the inside of which is darker than the outside; neither of these is connected to dense below-ground tissue. Distribution and habitat In North America, W. americana has been collected from several locations, including Tennessee, New York, West Virginia, North Carolina, Ohio, and Pennsylvania. It has also been collected in Mexico, Costa Rica, India, and Japan. The fruit bodies grow solitarily or in clusters on the ground in deciduous forests, and prefer moist, organic soils. In both Asia and North America, fruit bodies are most often produced during August and September. The single Central American collection, from Costa Rica, was made in early November. 
References Sarcoscyphaceae Fungi described in 1905 Fungi of Asia Fungi of Central America Fungi of North America Inedible fungi Fungus species
Wynnea americana
[ "Biology" ]
1,585
[ "Fungi", "Fungus species" ]
23,847,917
https://en.wikipedia.org/wiki/4D-RCS%20Reference%20Model%20Architecture
The 4D/RCS Reference Model Architecture is a reference model for military unmanned vehicles specifying how their software components should be identified and organized. 4D/RCS has been developed by the Intelligent Systems Division (ISD) of the National Institute of Standards and Technology (NIST) since the 1980s. This reference model is based on the general Real-time Control System (RCS) Reference Model Architecture, and has been applied to many kinds of robot control, including autonomous vehicle control. Overview 4D/RCS is a reference model architecture that provides a theoretical foundation for designing, engineering, and integrating intelligent systems software for unmanned ground vehicles. According to Balakirsky (2003), 4D/RCS is an example of a deliberative agent architecture. These architectures "include all systems that plan to meet future goal or deadline. In general, these systems plan on a model of the world rather than planning directly on processed sensor output. This may be accomplished by real-time sensors, a priori information, or a combination of the two in order to create a picture or snapshot of the world that is used to update a world model". The course of action of a deliberative agent architecture is based on the world model and the commanded mission goal (see image). This goal "may be a given system state or physical location. To meet the goal, systems of this kind attempt to compute a path through a multi-dimensional space contained in the real world". The 4D/RCS is a hierarchical deliberative architecture that "plans up to the subsystem level to compute plans for an autonomous vehicle driving over rough terrain. In this system, the world model contains a pre-computed dictionary of possible vehicle trajectories known as an ego-graph as well as information from the real-time sensor processing. The trajectories are computed based on a discrete set of possible vehicle velocities and starting steering angles. All of the trajectories are guaranteed to be dynamically correct for the given velocity and steering angle. The system runs under a fixed planning cycle, with the sensed information being updated into the world model at the beginning of the cycle. These updates include information on what area is currently under observation by the sensors, where detected obstacles exist, and vehicle status". History The National Institute of Standards and Technology's (NIST) Intelligent Systems Division (ISD) has been developing the RCS reference model architecture for over 30 years. 4D/RCS is the most recent version of RCS, developed for the Army Research Lab Experimental Unmanned Ground Vehicle program. The 4D in 4D/RCS signifies adding time as another dimension to each level of the three-dimensional (sensor processing, world modeling, behavior generation) hierarchical control structure. ISD has studied the use of 4D/RCS in defense mobility, transportation, robot cranes, manufacturing, and several other applications. 4D/RCS integrates the NIST Real-time Control System (RCS) architecture with the German (Bundeswehr University of Munich) VaMoRs 4-D approach to dynamic machine vision. It incorporates many concepts developed under the U.S. Department of Defense Demo I, Demo II, and Demo III programs, which demonstrated increasing levels of robotic vehicle autonomy. The theory embodied in 4D/RCS borrows heavily from cognitive psychology, semiotics, neuroscience, and artificial intelligence. 
Three US Government-funded military efforts, known as Demo I (US Army), Demo II (DARPA), and Demo III (US Army), have been carried out. Demo III (2001) demonstrated the ability of unmanned ground vehicles to navigate miles of difficult off-road terrain, avoiding obstacles such as rocks and trees. James Albus at NIST provided the Real-time Control System, which is a hierarchical control system. Not only were individual vehicles controlled (e.g. throttle, steering, and brake), but groups of vehicles had their movements automatically coordinated in response to high-level goals. In 2002, the DARPA Grand Challenge competitions were announced. The 2004 and 2005 DARPA competitions allowed international teams to compete in fully autonomous vehicle races over rough unpaved terrain and in a non-populated suburban setting. The 2007 DARPA challenge, the DARPA Urban Challenge, involved autonomous cars driving in an urban setting. 4D/RCS Building blocks The 4D/RCS architecture is characterized by a generic control node at all of the hierarchical control levels. The 4D/RCS hierarchical levels are scalable to facilitate systems of any degree of complexity. Each node within the hierarchy functions as a goal-driven, model-based, closed-loop controller. Each node is capable of accepting and decomposing task commands with goals into actions that accomplish task goals despite unexpected conditions and dynamic perturbations in the world. 4D/RCS Hierarchy 4D/RCS prescribes a hierarchical control principle that decomposes high-level commands into actions that employ physical actuators and sensors. The figure, for example, shows a high-level block diagram of a 4D/RCS reference model architecture for a notional Future Combat System (FCS) battalion. Commands flow down the hierarchy, and status feedback and sensory information flow up. Large amounts of communication may occur between nodes at the same level, particularly within the same subtree of the command tree: At the Servo level: Commands to actuator groups are decomposed into control signals to individual actuators. At the Primitive level: Multiple actuator groups are coordinated and dynamical interactions between actuator groups are taken into account. At the Subsystem level: All the components within an entire subsystem are coordinated, and planning takes into consideration issues such as obstacle avoidance and gaze control. At the Vehicle level: All the subsystems within an entire vehicle are coordinated to generate tactical behaviors. At the Section level: Multiple vehicles are coordinated to generate joint tactical behaviors. At the Platoon level: Multiple sections containing a total of 10 or more vehicles of different types are coordinated to generate platoon tactics. At the Company level: Multiple platoons containing a total of 40 or more vehicles of different types are coordinated to generate company tactics. At the Battalion level: Multiple companies containing a total of 160 or more vehicles of different types are coordinated to generate battalion tactics. At all levels, task commands are decomposed into jobs for lower-level units, and coordinated schedules for subordinates are generated. At all levels, communication between peers enables coordinated actions. At all levels, feedback from lower levels is used to cycle subtasks and to compensate for deviations from the planned situations. 4D/RCS control loop At the heart of the control loop through each node is the world model, which provides the node with an internal model of the external world; a minimal illustrative sketch of such a per-node loop is given below. 
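The per-node loop just described lends itself to a compact illustration. The following Python sketch is not NIST reference code: all class names, method names, and the toy planning logic are assumptions made here for exposition, showing only how the four functional elements (sensory processing, world modeling, value judgment, behavior generation) close a loop around a shared knowledge database.

```python
# Illustrative sketch of a single 4D/RCS-style control node (not NIST code).
# Names and structure are assumptions for exposition only.

from dataclasses import dataclass, field


@dataclass
class KnowledgeDatabase:
    """The node's best estimate of world state at its level's resolution."""
    state: dict = field(default_factory=dict)


class ControlNode:
    def __init__(self, name, planning_horizon_s):
        self.name = name
        # 4D/RCS suggests replanning within ~1/10 of this horizon.
        self.horizon = planning_horizon_s
        self.kd = KnowledgeDatabase()

    def sensory_processing(self, observations):
        # Window/group/estimate incoming data (trivial pass-through here).
        return observations

    def world_modeling(self, processed):
        # Fuse processed observations into the knowledge database.
        self.kd.state.update(processed)

    def value_judgment(self, plans):
        # Score candidate plans; here, simply prefer the shortest one.
        return min(plans, key=len)

    def behavior_generation(self, goal):
        # Decompose the commanded goal into candidate subtask plans.
        candidates = [[("subtask", goal, step) for step in range(n)]
                      for n in (2, 3)]
        return self.value_judgment(candidates)

    def cycle(self, goal, observations):
        # One pass of the closed loop: sense -> model -> plan -> command.
        self.world_modeling(self.sensory_processing(observations))
        plan = self.behavior_generation(goal)
        return plan[0]  # first subtask is the command to subordinates


node = ControlNode("Vehicle", planning_horizon_s=50.0)
print(node.cycle(goal="reach_waypoint", observations={"obstacle": None}))
```

In a full 4D/RCS system, many such nodes would be stacked into the command tree described above, each treating the first subtask of its superior's plan as its own goal and replanning on its own level's cycle time.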
The world model provides a site for data fusion, acts as a buffer between perception and behavior, and supports both sensory processing and behavior generation. A high-level diagram of the internal structure of the world model and value judgment system is shown in the figure. Within the knowledge database, iconic information (images and maps) is linked internally and to symbolic information (entities and events). Situations and relationships between entities, events, images, and maps are represented by pointers. Pointers that link symbolic data structures to each other form syntactic, semantic, causal, and situational networks. Pointers that link symbolic data structures to regions in images and maps provide symbol grounding and enable the world model to project its understanding of reality onto the physical world. Sensory processing performs the functions of windowing, grouping, computation, estimation, and classification on input from sensors. World modeling maintains knowledge in the form of images, maps, entities, and events with states, attributes, and values. Relationships between images, maps, entities, and events are defined by pointers. These relationships include class membership, ontologies, situations, and inheritance. Value judgment provides criteria for decision making. Behavior generation is responsible for planning and execution of behaviors. Computational nodes The 4D/RCS nodes have an internal structure such as shown in the figure. Within each node there are typically four functional elements or processes: behavior generation, world modeling, sensory processing, and value judgment. There is also a knowledge database that represents the node's best estimate of the state of the world at the range and resolution that are appropriate for the behavioral decisions that are the responsibility of that node. The functional elements are supported by the knowledge database and a communication system that interconnects the functional processes and the knowledge database. Each functional element in the node may have an operator interface. The connections to the Operator Interface enable a human operator to input commands, to override or modify system behavior, to perform various types of teleoperation, to switch control modes (e.g., automatic, teleoperation, single step, pause), and to observe the values of state variables, images, maps, and entity attributes. The Operator Interface can also be used for programming, debugging, and maintenance. Five levels of the architecture The figure is a computational hierarchy view of the first five levels in the chain of command containing the Autonomous Mobility Subsystem in the 4D/RCS architecture developed for Demo III. On the right of the figure, Behavior Generation (consisting of Planner and Executor) decomposes high-level mission commands into low-level actions. The text inside the Planner at each level indicates the planning horizon at that level. In the center of the figure, each map has a range and resolution that is appropriate for path planning at its level. At each level, there are symbolic data structures and segmented images with labeled regions that describe entities, events, and situations that are relevant to decisions that must be made at that level. On the left is a sensory processing hierarchy that extracts information from the sensory data stream that is needed to keep the world model knowledge database current and accurate. The bottom (Servo) level has no map representation. 
The Servo level deals with actuator dynamics and reacts to sensory feedback from actuator sensors. The Primitive level map has a range of 5 m with a resolution of 4 cm. This enables the vehicle to make small path corrections to avoid bumps and ruts during the 500 ms planning horizon of the Primitive level. The Primitive level also uses accelerometer data to control vehicle dynamics and prevent rollover during high-speed driving. At all levels, 4D/RCS planners are designed to generate new plans well before current plans become obsolete. Thus, action always takes place in the context of a recent plan, and feedback through the executors closes reactive control loops using recently selected control parameters. To meet the demands of dynamic battlefield environments, the 4D/RCS architecture specifies that replanning should occur within about one-tenth of the planning horizon at each level. Inter-Node Interactions within a Hierarchy Sensory processing and behavior generation are both hierarchical processes, and both are embedded in the nodes that form the 4D/RCS organizational hierarchy. However, the SP and BG hierarchies are quite different in nature and are not directly coupled. Behavior generation is a hierarchy based on the decomposition of tasks and the assignment of tasks to operational units. Sensory processing is a hierarchy based on the grouping of signals and pixels into entities and events. In 4D/RCS, the hierarchies of sensory processing and behavior generation are separated by a hierarchy of world modeling processes. The WM hierarchy provides a buffer between the SP and BG hierarchies with interfaces to both. Criticisms There have been major criticisms of this architectural form. According to Balakirsky (2003), they arise because "the planning is performed on a model of the world rather than on the actual world" and because of the complexity of computing large plans: "Since the world is not static, and may change during this time delay that occurs between sensing, plan conception, and final execution, the validation of the computed plans ha[s] been called into question". References Further reading Albus, J.S. (1988). System Description and Design Architecture for Multiple Autonomous Undersea Vehicles. NIST TN 1251, National Institute of Standards and Technology, Gaithersburg, MD, September 1988. James S. Albus (2002). "4D/RCS: A Reference Model Architecture for Intelligent Unmanned Ground Vehicles". In: Proceedings of the SPIE 16th Annual International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, April 1–5, 2002. James Albus et al. (2002). 4D/RCS: A Reference Model Architecture For Unmanned Vehicle Systems, Version 2.2. NIST, August 2002. External links RCS The Real-time Control Systems Architecture NIST Homepage Control theory Industrial computing Uncrewed vehicles
4D-RCS Reference Model Architecture
[ "Mathematics", "Technology", "Engineering" ]
2,611
[ "Applied mathematics", "Control theory", "Automation", "Industrial engineering", "Industrial computing", "Dynamical systems" ]
23,850,329
https://en.wikipedia.org/wiki/International%20conference%20on%20Physics%20of%20Light%E2%80%93Matter%20Coupling%20in%20Nanostructures
The International Conference on Physics of Light–Matter Coupling in Nanostructures (PLMCN) is a yearly academic conference on various topics of semiconductor science and nanophotonics. Topic The conferences are devoted to the fundamental and technological issues relevant to the realization of a new generation of optoelectronic devices based on advanced low-dimensional and photonic structures, such as low-threshold polariton lasers, new optical switches, single-photon emitters, photonic band-gap structures, etc. They review the most recent achievements in the fundamental understanding of strong light–matter coupling, and follow the progress in the development of epitaxial and processing technologies of wide-bandgap semiconductors and organic nanostructures and microcavities providing the basis for advanced optical studies. The conferences are open to new emerging fields such as carbon nanotubes and quantum information. The scope of these conferences covers both the physics and application of a variety of phenomena related to light–matter coupling in solids, such as: Light–matter coupling in microcavities and photonic crystals Basic exciton–polariton physics Bose–Einstein condensates and polariton superfluids Spin-related phenomena Physics and application of quantum dots Plasmons and near-field optics in light–matter coupling Growth and characterization of advanced wide-bandgap semiconductors (GaN, ZnSe, ZnO, organic materials) Novel optical devices (polariton lasers, single-photon emitters, entangled-photon pair generators, optical switches...) Quantum information science Editions The International Conference on Physics of Light–Matter Coupling in Nanostructures started in 2000 in Saint-Nectaire, France. The 14th edition was held as PLMCN14 instead of PLMCN13. The next issue, in 2014, was held as PLMCN2014 instead of PLMCN15. The issue after that was, confusingly, labeled as both PLMCN2015 and PLMCN16. The next conference, PLMCN17, reverted to the traditional labeling, but now in sync with the edition number. The pattern broke again with PLMCN2020 for the 21st edition, which was held online due to the COVID-19 pandemic instead of in Clermont-Ferrand as initially planned. No conference was scheduled for 2021, to avoid holding another PLMCN online, and thus the PLMCN22 in Cuba got back in sync, this time both with the year (2022) and the edition number (22nd). List of previous editions: PLMCN0: Saint-Nectaire, France (2000) PLMCN1: Rome, Italy (2001) PLMCN2: Rethymno, Greece (2002) PLMCN3: Acireale, Italy (2003) PLMCN4: Saint Petersburg, Russia (2004) PLMCN5: Glasgow, Scotland (2005) PLMCN6: Magdeburg, Germany (2006) PLMCN7: Havana, Cuba (2007) PLMCN8: Tokyo, Japan (2008) PLMCN9: Lecce, Italy (2009) PLMCN10: Cuernavaca, Mexico (2010) PLMCN11: Berlin, Germany (2011) PLMCN12: Hangzhou, China (2012) PLMCN14: Hersonissos, Crete (2013) PLMCN2014: Montpellier, France (2014) PLMCN16: Medellin, Colombia (2015) PLMCN17: Nara, Japan (2016) PLMCN18: Würzburg, Germany (2017) PLMCN19: Chengdu, China (2018) PLMCN20: Moscow and Suzdal, Russia (2019) PLMCN2020: online (Clermont-Ferrand host), France (2020) PLMCN22: Varadero, Cuba (2022) PLMCN23: Medellin, Colombia (2023) PLMCN24: Tbilisi, Georgia (9-13 April 2024) Next scheduled edition: PLMCN25: Xiamen, China (8-13 April 2025) Logo The logo is a cat that travels around the world, featuring each particular venue's folklore. It is designed every year by Alexey Kavokin (University of Southampton), one of the creators and chairmen of the conference. 
See also International Conference on the Physics of Semiconductors References External links Twitter account Physics conferences Technology conferences Nanotechnology institutions
International conference on Physics of Light–Matter Coupling in Nanostructures
[ "Materials_science" ]
934
[ "Nanotechnology", "Nanotechnology institutions" ]
23,850,944
https://en.wikipedia.org/wiki/Alpha%20glucan
α-Glucans (alpha-glucans) are polysaccharides of D-glucose monomers linked with glycosidic bonds of the alpha form. α-Glucans use cofactors in a cofactor site in order to activate a glucan phosphorylase enzyme. This enzyme catalyzes a reaction that transfers a glucosyl portion between orthophosphate and α-1,4-glucan. The position of the cofactors relative to the active sites on the enzyme is critical to the overall reaction rate; thus, any alteration to the cofactor site leads to disruption of the glucan binding site. Alpha-glucan is also commonly found in bacteria, yeasts, plants, and insects. Whereas the main pathway of α-glucan synthesis is via glycosidic bonding of glucose monomers, α-glucan can alternatively be synthesized via the maltosyl transferase GlgE and branching enzyme GlgB. This alternative pathway is common in many bacteria, which use GlgB and GlgE or the GlgE pathway exclusively for the biosynthesis of α-glucan. The GlgE pathway is especially prominent in actinomycetes, such as mycobacteria and streptomycetes. However, α-glucans in mycobacteria have a slight variation in the length of linear chains, which points to the fact that the branching enzyme in mycobacteria makes shorter branches compared to glycogen synthesis. For organisms that can utilize both classic glycogen synthesis and the GlgE pathway, only one GlgB enzyme is present, which indicates that the GlgB enzyme is shared between both pathways. Other uses for α-glucan have been developed based on its availability in bacteria. The accumulation of glycogen-like α-glucan is exploited in Neisseria polysaccharea and other bacteria, which are able to catalyze the transfer of glucose units to form α-1,4-glucan, liberating fructose in the process. To regulate carbohydrate metabolism, more resistant starch was necessary. An α-glucan-coated starch produced using Neisseria polysaccharea improved some physicochemical properties in comparison to raw normal starch, especially the loading efficiency of bioactive molecules. Alpha-glucan was used in conjunction with modified starch molecules that contained porous starch granules produced via hydrolysis with amylolytic enzymes such as α-amylase, β-amylase, and glucoamylase. An α-glucan coating offers protection from digestive environments such as the small intestine, as well as efficient encapsulation and preservation rates. This design promotes the development of α-glucan-based biomaterials and has many implications for their use in the food and pharmaceutical industries. Examples of alpha glucans dextran, α-1,6-glucan glycogen, α-1,4- and α-1,6-glucan pullulan, α-1,4- and α-1,6-glucan starch, α-1,4- (such as amylose) and α-1,6-glucan (including amylopectin) References Page that explains alpha-glucan linkages in starch. Polysaccharides
Alpha glucan
[ "Chemistry", "Biology" ]
733
[ "Carbohydrates", "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Polysaccharides" ]
23,851,199
https://en.wikipedia.org/wiki/Observatoire%20Oceanologique%20de%20Villefranche
The Observatoire Oceanologique de Villefranche (Villefranche-sur-Mer Marine Station) is a satellite campus of Sorbonne University (Sorbonne Faculty of Science and Engineering) in Villefranche-sur-Mer on the Côte d'Azur, France. It houses two research/teaching laboratories co-administered by Sorbonne University and the CNRS. The two laboratories are focused on developmental biology and oceanography. The facility traces its roots back to a laboratory established in 1882 by Hermann Fol with the encouragement of Charles Darwin, and continues to work to this day with organisms from the Bay of Villefranche, including protists, ascidians, sea urchins and jellyfish. History In 1809, Charles Alexandre Lesueur and François Péron are credited with discovering the exceptional diversity of zooplankton in the bays of Villefranche and Cap de Nice, and they were the first to describe new species from the bay (Péron & Lesueur 1809). In the 1850s, the zoologist Carl Vogt visited Villefranche and studied the planktonic fauna found in the bay, notably the gelatinous zooplankton (Vogt 1852). He was followed by Johannes Peter Müller and Ernst Haeckel, who both described planktonic protists, radiolaria, from the Bay of Villefranche (Müller 1858; Haeckel 1860). In 1882, encouraged by Darwin, the zoologist and discoverer of fertilization Hermann Fol, along with Jules Barrois of the Université de Lille, established a laboratory in Villefranche in a former lazaret building. They acquired use of buildings previously leased to the Russian Navy as a coal depot in 1884, the Galériens and the Vieille Forge. Barrois and Fol were forced to give up the facility in 1888 at the demand of Alexis Korotneff of the University of Kiev, who had frequented the laboratory in previous years and now wanted to establish a Russian research facility: the "Russian Zoological Station" (Mosse 1952). Russian, French, and American biologists including Hippolyte Pergallo, Aleksei Alekseevich Korotnev, Carl Vogt, Hermann Fol, Jules Henri Barrois, Élie Metchnikoff and Louis Agassiz, among others, worked on the planktonic fauna and embryos collected in the bay. To this day the Bay of Villefranche remains an exceptional natural resource for the study of plankton. Since the 1930s the facility has been administered by the University of Paris. Building The marine station is situated in historical buildings constructed in 1769 as part of the military harbour of the Kingdom of Sardinia, which had Turin as its capital. The main building (bâtiment des galériens), where the laboratories are now located, was first used as a hospital and prison for galley slaves (mainly Turkish prisoners) who manned the war boats built in the adjacent drydock. In 1858 it was leased to the Russian Navy by the then governing authority of the Kingdom of Sardinia for use as a coal depot. Mission Education: a teaching team composed of faculty members of Université Pierre et Marie Curie oversees many courses in oceanography for French and foreign students, enrolled primarily at the master's degree level. Research: of the two laboratories, one is focused on developmental biology and cell biology and the other on oceanography (biological, biochemical, physical and chemical oceanography). Observation: comprehensive monitoring programs sample both the coastal environment of the Bay of Villefranche and an offshore site, 28 miles from Cap Ferrat. Activities in the Observatory also include research and development of new observation techniques such as optical devices, gliders and floats. References Haeckel E. 1860. 
Abbildungen und Diagnosen neuer Gattungen und Arten von lebenden Radiolarien des Mittelmeeres. Monatsberichte der königlichen Akademie der Wissenschaften zu Berlin, pp 835–845. Mosse W.E. 1952. The Russians at Villafranca. Slavonic & East European Review 30:425-443. Müller J. 1858. Über die Thalassicollen, Polycystinen und Acanthometren des Mittelmeeres. Abh Königl Akad Wiss Berlin, 1858:1-63. Péron F., Lesueur C.A. 1809. Tableau des caractères génériques et spécifiques de toutes les espèces de méduses connues jusqu'à ce jour. Ann Mus Nat Hist Natur (Paris) 14:325-366. Vogt C. 1852. Ueber die Siphonophoren. Zeit Wissensch Zool 3:522-525. References Laboratories in France Oceanography French Riviera Education in Villefranche-sur-Mer Buildings and structures in Villefranche-sur-Mer
Observatoire Oceanologique de Villefranche
[ "Physics", "Environmental_science" ]
1,008
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
23,852,047
https://en.wikipedia.org/wiki/Asymptotic%20homogenization
In mathematics and physics, homogenization is a method of studying partial differential equations with rapidly oscillating coefficients, such as $\nabla\cdot\left(A(x/\epsilon)\nabla u_{\epsilon}\right)=f$, where $\epsilon$ is a very small parameter and $A(y)$ is a 1-periodic coefficient: $A(y+e_i)=A(y)$, $i=1,\dots,n$. It turns out that the study of these equations is also of great importance in physics and engineering, since equations of this type govern the physics of inhomogeneous or heterogeneous materials. Of course, all matter is inhomogeneous at some scale, but frequently it is convenient to treat it as homogeneous. A good example is the continuum concept which is used in continuum mechanics. Under this assumption, materials such as fluids, solids, etc. can be treated as homogeneous materials, and associated with these materials are material properties such as shear modulus, elastic moduli, etc. Frequently, inhomogeneous materials (such as composite materials) possess microstructure and are therefore subjected to loads or forcings which vary on a length scale which is far bigger than the characteristic length scale of the microstructure. In this situation, one can often replace the equation above with an equation of the form $\nabla\cdot\left(A^{*}\nabla u\right)=f$, where $A^{*}$ is a constant tensor coefficient known as the effective property associated with the material in question. It can be explicitly computed as $A^{*}_{ij}=\int_{(0,1)^{n}}\big(A(y)\,(\nabla_{y}w_{j}(y)+e_{j})\big)\cdot e_{i}\,dy$, $i,j=1,\dots,n$, from 1-periodic functions $w_{j}$ satisfying $\nabla_{y}\cdot\big(A(y)\,(\nabla_{y}w_{j}(y)+e_{j})\big)=0$. This process of replacing an equation with a highly oscillatory coefficient with one with a homogeneous (uniform) coefficient is known as homogenization. This subject is inextricably linked with the subject of micromechanics for this very reason. In homogenization, one equation is replaced by another if $u_{\epsilon}\approx u$ for small enough $\epsilon$, provided $u_{\epsilon}\to u$ in some appropriate norm as $\epsilon\to 0$. As a result of the above, homogenization can therefore be viewed as an extension of the continuum concept to materials which possess microstructure. The analogue of the differential element in the continuum concept (which contains enough atomic or molecular structure to be representative of that material) is known as the "Representative Volume Element" in homogenization and micromechanics. This element contains enough statistical information about the inhomogeneous medium to be representative of the material. Therefore averaging over this element gives an effective property such as $A^{*}$ above. Classical results of homogenization theory were obtained for media with periodic microstructure modeled by partial differential equations with periodic coefficients. These results were later generalized to spatially homogeneous random media, modeled by differential equations with random coefficients whose statistical properties are the same at every point in space. In practice, many applications require a more general way of modeling that is neither periodic nor statistically homogeneous. To this end the methods of homogenization theory have been extended to partial differential equations whose coefficients are neither periodic nor statistically homogeneous (so-called arbitrarily rough coefficients). The method of asymptotic homogenization Mathematical homogenization theory dates back to the French, Russian and Italian schools. The method of asymptotic homogenization proceeds by introducing the fast variable $y=x/\epsilon$ and posing a formal expansion in $\epsilon$: $u_{\epsilon}(x)=u_{0}(x,y)+\epsilon\,u_{1}(x,y)+\epsilon^{2}u_{2}(x,y)+\cdots$, which generates a hierarchy of problems. The homogenized equation is obtained and the effective coefficients are determined by solving the so-called "cell problems" for the functions $w_{j}(y)$. 
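To make the cell-problem machinery concrete, consider the one-dimensional case, where the effective coefficient reduces to the harmonic mean of the coefficient over one period, $A^{*}=\left(\int_{0}^{1}A(y)^{-1}\,dy\right)^{-1}$. The short Python sketch below (illustrative only; the function names are our own) evaluates this for a two-phase laminate and contrasts it with the naive arithmetic average.

```python
# Minimal 1D homogenization example (illustrative; not from the article).
# For -(A(x/eps) u')' = f in 1D, the effective coefficient A* is the
# harmonic mean of the 1-periodic coefficient A(y) over one period.

import numpy as np

def effective_coefficient_1d(A, n=100_000):
    """Harmonic mean of A over the unit period: A* = (integral of 1/A)^(-1)."""
    y = (np.arange(n) + 0.5) / n          # midpoint quadrature on (0, 1)
    return 1.0 / np.mean(1.0 / A(y))

# Two-phase laminate: A = 1 on half the period, A = 100 on the other half.
A = lambda y: np.where(y < 0.5, 1.0, 100.0)

a_star = effective_coefficient_1d(A)                          # harmonic mean
y_grid = (np.arange(100_000) + 0.5) / 100_000
a_naive = np.mean(A(y_grid))                                  # arithmetic mean

print(f"effective (harmonic) A* = {a_star:.3f}")   # ~1.980
print(f"naive (arithmetic) mean = {a_naive:.3f}")  # ~50.5
```

The large gap between the two averages (about 1.98 versus 50.5 here) illustrates why simply averaging a rapidly oscillating coefficient does not, in general, yield the correct effective property.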
See also Asymptotic analysis Γ-convergence Mosco convergence Effective medium approximations Notes References Asymptotic analysis Partial differential equations
Asymptotic homogenization
[ "Mathematics" ]
672
[ "Mathematical analysis", "Asymptotic analysis" ]
12,877,572
https://en.wikipedia.org/wiki/Fan%20%28machine%29
A fan is a powered machine that creates airflow. A fan consists of rotating vanes or blades, generally made of wood, plastic, or metal, which act on the air. The rotating assembly of blades and hub is known as an impeller, rotor, or runner. Usually, it is contained within some form of housing, or case. This may direct the airflow, or increase safety by preventing objects from contacting the fan blades. Most fans are powered by electric motors, but other sources of power may be used, including hydraulic motors, handcranks, and internal combustion engines. Mechanically, a fan can be any revolving vane, or vanes, used for producing currents of air. Fans produce air flows with high volume and low pressure (although higher than ambient pressure), as opposed to compressors, which produce high pressures at a comparatively low volume. A fan blade will often rotate when exposed to an air-fluid stream, and devices that take advantage of this, such as anemometers and wind turbines, often have designs similar to that of a fan. Typical applications include climate control and personal thermal comfort (e.g., an electric table or floor fan), vehicle engine cooling systems (e.g., in front of a radiator), machinery cooling systems (e.g., inside computers and audio power amplifiers), ventilation, fume extraction, winnowing (e.g., separating chaff from cereal grains), removing dust (e.g. sucking, as in a vacuum cleaner), drying (usually in combination with a heat source) and providing draft for a fire. Some fans may be indirectly used for cooling in the case of industrial heat exchangers. While fans are effective at cooling people, they do not cool air. Instead, they work by evaporative cooling of sweat and increased heat convection into the surrounding air due to the airflow from the fans. Thus, fans may become less effective at cooling the body if the surrounding air is near body temperature and contains high humidity. History Fans made with leaves were prevalent in ancient Egypt and India. In ancient India, they were handheld fans made from bamboo strips or other plant fiber that could be rotated or fanned to move air. During British rule, the word punkah came to be used by Anglo-Indians to mean a large swinging flat fan, fixed to the ceiling and pulled by a servant called the punkawallah. For purposes of air conditioning, the Han dynasty craftsman and engineer Ding Huan (fl. 180 CE) invented a manually operated rotary fan with seven wheels that measured 3 m (10 ft) in diameter; in the 8th century, during the Tang dynasty (618–907), the Chinese applied hydraulic power to rotate the fan wheels for air conditioning, while the rotary fan became even more common during the Song dynasty (960–1279). During the Heian period (794–1185) in Japan, fans adapted the role of symbolizing social class as well as a mechanical role. The tessen, a Japanese fan used in feudal times, was a dangerous weapon hidden in plain sight in the shape of a regular fan, used by samurai when katanas were not ideal. In the 17th century, the experiments of scientists including Otto von Guericke, Robert Hooke, and Robert Boyle established the basic principles of vacuum and airflow. The English architect Sir Christopher Wren applied an early ventilation system in the Houses of Parliament that used bellows to circulate air. Wren's design was the catalyst for much later improvement and innovation. The first rotary fan used in Europe was for mine ventilation during the 16th century, as illustrated by Georg Agricola (1494–1555). 
John Theophilus Desaguliers, a British engineer, demonstrated the successful use of a fan system to draw out stagnant air from coal mines in 1727—ventilation was essential in coal mines to prevent asphyxiation—and soon afterward he installed a similar apparatus in Parliament. The civil engineer John Smeaton, and later John Buddle, installed reciprocating air pumps in the mines in the North of England, though the machinery was liable to break down. Steam In 1849 a 6 m radius steam-driven fan, designed by William Brunton, was made operational in the Gelly Gaer Colliery of South Wales. The model was exhibited at the Great Exhibition of 1851. Also in 1851, David Boswell Reid, a Scottish doctor, installed four steam-powered fans in the ceiling of St George's Hospital in Liverpool, so that the pressure produced by the fans would force the incoming air upward and through vents in the ceiling. Improvements in the technology were made by James Nasmyth, the Frenchman Théophile Guibal, and J. R. Waddle. Electrical Between 1882 and 1886, Schuyler Wheeler invented a fan powered by electricity. It was commercially marketed by the American firm Crocker & Curtis electric motor company. In 1885, a desktop direct-drive electric fan was made commercially available by Stout, Meadowcraft & Co. in New York. In 1882, Philip Diehl developed the world's first electric ceiling-mounted fan. During this intense period of innovation, fans powered by alcohol, oil, or kerosene were common around the turn of the 20th century. In 1909, KDK of Japan pioneered the invention of mass-produced electric fans for home use. In the 1920s, industrial advances allowed steel fans to be mass-produced in different shapes, bringing fan prices down and allowing more homeowners to afford them. In the 1930s, the first art deco fan (the "Silver Swan") was designed by Emerson. By the 1940s, Crompton Greaves of India had become the world's largest manufacturer of electric ceiling fans, mainly for sale in India, Asia, and the Middle East. By the 1950s, table and stand fans were manufactured in bright, eye-catching colors. Window and central air conditioning in the 1960s caused many companies to discontinue production of fans, but in the mid-1970s, with increasing awareness of the cost of electricity and the amount of energy used to heat and cool homes, turn-of-the-century-styled ceiling fans became popular again as both decorative and energy-efficient. In 1998, William Fairbank and Walter K. Boyd invented the high-volume low-speed (HVLS) ceiling fan, designed to reduce energy consumption by using long fan blades rotating at low speed to move a relatively large volume of air. Social implications Before powered fans were widely accessible, their use reflected the social divide between social classes. In Britain and China, they were initially only installed in the buildings of Parliament and in noble homes. In Ancient Egypt (3150 BC), servants were required to fan Pharaohs and important figures. In parts of the world such as India, where the temperature reaches above , standing and electric box fans are essential in the business world for customer comfort and an efficient work environment. In places with unreliable energy sources, fans have become solar-powered, energy-efficient, and battery-powered. In South Korea, fans play a part in an old wives' tale. 
Many older South Korean citizens believe in the unscientific and unsupported myth of fan death from excessive use of an electric fan; Korean electric fans often have timers that switch them off after a few hours as a precaution against it. Typical room electrical fans consume 50 to 100 watts of power, while air-conditioning units use 500 to 4,000 watts; fans use less electricity but do not cool the air, simply providing evaporative cooling of sweat. Commercial fans are louder than AC units and can be disruptively loud. According to the U.S. Consumer Product Safety Commission, reported incidents related to box fans include fire (266 incidents), potential fire (29 incidents), electrocution (15 incidents), electric shock (4 incidents), and electrical hazard (2 incidents). Injuries involving AC units mostly result from units falling from buildings. Types Mechanical revolving-blade fans are made in a wide range of designs. They are used on the floor, on a table or desk, or hung from the ceiling (ceiling fan), and can be built into a window, wall, roof, etc. Tower fans tend to have smaller blades inside. Electronic systems generating significant heat, such as computers, incorporate fans. Appliances such as hair dryers and space heaters also use fans. They move air in air-conditioning systems and in automotive engines. Fans used for comfort inside a room create a wind chill by increasing the heat transfer coefficient, but do not lower temperatures directly. Fans used to cool electrical equipment, or in engines or other machines, cool the equipment directly by exhausting hot air into the cooler environment outside of the machine so that cooler air flows in. Three main types of fans are used for moving air: axial, centrifugal (also called radial), and cross-flow (also called tangential). The American Society of Mechanical Engineers Performance Testing Code 11 (PTC 11) provides standard procedures for conducting and reporting tests on fans, including those of the centrifugal, axial, and mixed-flow types. Axial-flow Axial-flow fans have blades that force air to move parallel to the shaft about which the blades rotate. This type of fan is used in a wide variety of applications, ranging from small cooling fans for electronics to the giant fans used in cooling towers. Axial-flow fans are applied in air conditioning and industrial process applications. Standard axial-flow fans have diameters of 300–400 mm or 1,800–2,000 mm and work under pressures up to 800 Pa. Special types of fans are used as low-pressure compressor stages in aircraft engines. Examples of axial fans are: Table fan: Basic elements of a typical table fan include the fan blade, base, armature and lead wires, motor, blade guard, motor housing, oscillator gearbox, and oscillator shaft. The oscillator is a mechanism that moves the fan from side to side. The armature axle shaft comes out on both ends of the motor; one end of the shaft is attached to the blade, and the other is attached to the oscillator gearbox. The motor case joins the gearbox to enclose the rotor and stator. The oscillator shaft connects the weighted base to the gearbox, and a motor housing covers the oscillator mechanism. The blade guard attaches to the motor case for safety. Domestic extractor fan: Wall- or ceiling-mounted, the domestic extractor fan is employed to remove moisture and stale air from domestic dwellings. Bathroom extractor fans typically utilize a four-inch (100 mm) impeller, while kitchen extractor fans typically use a six-inch (150 mm) impeller as the room is often bigger.
Axial fans with five-inch (125 mm) impellers are also used in larger bathrooms, though they are much less common. Domestic axial extractor fans are unsuitable for duct runs over 3 m or 4 m, depending on the number of bends in the run, as the increased back pressure in longer pipework inhibits the fan's performance. Continuous-running extractor fans run continuously at a very slow rate, speeding up when necessary, for example when a bathroom light is switched on. At working speed, they are just normal extractor fans. They typically extract 5 to 10 L/s at continuous speed and use little electricity, 1 or 2 watts, for a low annual running cost. Some have humidity sensors to control trickle operation. They have the advantage of ensuring ventilation and preventing the build-up of humidity. Alternatively, a normal extractor fan may be fitted to operate intermittently at full power for the same purpose. In cold weather they may noticeably cool the room they are in or, if the door is open, the house. Electro-mechanical fans: Among collectors, these are rated according to their condition, size, age, and number of blades. Four-blade designs are the most common; five-blade or six-blade designs are rare. The materials from which the components are made, such as brass, are important factors in fan desirability. A ceiling fan is a fan suspended from the ceiling of a room. Most ceiling fans rotate at relatively low speeds and do not have blade guards, because the blades are out of reach and guards would be unwieldy. Ceiling fans are used in both residential and industrial/commercial settings. In automobiles, a mechanical or electrically driven fan provides engine cooling and prevents the engine from overheating by blowing or drawing air through a coolant-filled radiator. The fan may be driven with a belt and pulley off the engine's crankshaft, or by an electric motor switched on and off by a thermostatic switch. Computer fans cool electrical components and are also used in laptop coolers. Fans inside audio power amplifiers help to draw heat away from the electrical components. Variable-pitch fan: A variable-pitch fan is used to precisely control static pressure within supply ducts. The blades are arranged to rotate upon a control-pitch hub while the fan wheel spins at a constant speed. As the hub moves toward the rotor, the blades increase their angle of attack, and an increase in flow results. Centrifugal Often called a "squirrel cage" (because of its general similarity in appearance to exercise wheels for pet rodents) or "scroll fan", the centrifugal fan has a moving component (called an impeller) that consists of a central shaft about which a set of blades, forming a spiral, or ribs are positioned. Centrifugal fans blow air at right angles to the intake of the fan and spin the air outwards to the outlet (by deflection and centrifugal force). The impeller rotates, causing air to enter the fan near the shaft and move perpendicularly from the shaft to the opening in the scroll-shaped fan casing. A centrifugal fan produces more pressure for a given air volume, and is used where this is desirable, such as in leaf blowers, blowdryers, air mattress inflators, inflatable structures, climate control in air handling units, and various industrial purposes. They are typically noisier than comparable axial fans (although some types of centrifugal fans are quieter, such as those in air handling units).
Cross-flow The cross-flow or tangential fan, sometimes known as a tubular fan, was patented in 1893 by Paul Mortier, and is used extensively in heating, ventilation, and air conditioning (HVAC), especially in ductless split air conditioners. The fan is usually long relative to its diameter, so the flow remains approximately two-dimensional away from the ends. The cross-flow fan uses an impeller with forward-curved blades, placed in a housing consisting of a rear wall and a vortex wall. Unlike in radial machines, the main flow moves transversely across the impeller, passing the blading twice. The flow within a cross-flow fan may be broken up into three distinct regions: a vortex region near the fan discharge, called an eccentric vortex, the through-flow region, and a paddling region directly opposite. Both the vortex and paddling regions are dissipative, and as a result, only a portion of the impeller imparts usable work to the flow. The cross-flow fan, or transverse fan, is thus a two-stage partial-admission machine. The popularity of the cross-flow fan in HVAC comes from its compactness, shape, quiet operation, and ability to provide a high pressure coefficient. Effectively a rectangular fan in terms of inlet and outlet geometry, its diameter readily scales to fit the available space, and its length is adjustable to meet flow-rate requirements for the particular application. Common household tower fans are also cross-flow fans. Much of the early work focused on developing the cross-flow fan for both high- and low-flow-rate conditions and resulted in numerous patents. Key contributions were made by Coester, Ilberg and Sadeh, Porter and Markland, and Eck. One interesting phenomenon particular to the cross-flow fan is that, as the blades rotate, the local air incidence angle changes. The result is that in certain positions the blades act as compressors (pressure increase), while at other azimuthal locations the blades act as turbines (pressure decrease). Since the flow enters and exits the impeller radially, the cross-flow fan has been studied and prototyped for potential aircraft applications. Due to the two-dimensional nature of the flow, the fan can be integrated into a wing for use in both thrust production and boundary-layer control. A configuration that utilizes a cross-flow fan located at the wing leading edge is the FanWing design concept, initially developed around 1997 and under development by a company of the same name. This design creates lift by deflecting the wake downward due to the rotational direction of the fan, causing a large Magnus force, similar to a spinning leading-edge cylinder. Another configuration utilizing a cross-flow fan for thrust and flow control is the propulsive wing, another experimental concept prototype initially developed in the 1990s and 2000s. In this design, the cross-flow fan is placed near the trailing edge of a thick wing and draws the air from the wing's suction (top) surface. By doing this, the propulsive wing is nearly stall-free, even at extremely high angles of attack, producing very high lift. However, the FanWing and propulsive-wing concepts remain experimental and have only been used for unmanned prototypes. A cross-flow fan is a centrifugal fan in which the air flows straight through the fan instead of at a right angle. The rotor of a cross-flow fan is covered to create a pressure differential, with two walls outside the impeller and a thick vortex wall inside. The radial gap decreases in the direction of the impeller rotation.
The rear wall has a log-spiral profile, while the vortex stabilizer is a thin horizontal wall with a rounded edge. The resultant pressure difference allows air to flow straight through the fan, even though the fan blades counter the flow of air on one side of the rotation. Cross-flow fans give airflow along the entire width of the fan; however, they are noisier than ordinary centrifugal fans. Cross-flow fans are often used in ductless air conditioners, air doors, some types of laptop coolers, automobile ventilation systems, and for cooling in medium-sized equipment such as photocopiers. Bladeless fans Dyson Air Multiplier fans, introduced to the consumer market in 2009, popularized a 1981 design by Toshiba for a fan that has no exposed fan blades or other visibly moving parts (unless augmented by other features such as oscillation and directional adjustment). A relatively small quantity of air from a high-pressure bladed impeller fan, which is contained inside the base rather than exposed, induces the slower flow of a larger air mass through a circular or oval-shaped opening via a low-pressure area created by an airfoil surface shape (the Coandă effect). Air curtains and air doors also utilize this effect to help retain warm or cool air within an otherwise exposed area that lacks a cover or door. Air curtains are commonly used on open-face dairy, freezer, and vegetable displays to help retain chilled air within the cabinet using a laminar airflow circulated across the display opening. The airflow is typically generated by a mechanical fan of any type, as described in this article, hidden in the base of the display cabinet. HVAC linear slot diffusers also utilize this effect to distribute airflow in rooms more evenly than registers do, while reducing the energy used by the air-handling-unit blower. Installation Fans may be installed in various ways, depending on the application. They are often used in free installations without any housing, but there are also some specialised installations. Ducted fan In vehicles, a ducted fan is a method of propulsion in which a fan, propeller, or rotor is surrounded by an aerodynamic duct or shroud which enhances its performance to create aerodynamic thrust or lift to transport the vehicle. Jet fan In ventilation systems, a jet fan, also known as an impulse or induction fan, ejects a stream of air that entrains and circulates the ambient air. The system takes up less space than conventional ventilation ducting and can significantly increase the rates of inflow of fresh air and expulsion of stale air. Noise Fans generate noise from the rapid flow of air around blades and obstacles, which causes vortexes, and from the motor. Fan noise is roughly proportional to the fifth power of fan speed; halving speed therefore reduces noise by about 15 dB (a worked sketch of this relationship follows the Solar power section below). The perceived loudness of fan noise also depends on the frequency distribution of the noise. This depends on the shape and distribution of moving parts, especially of the blades, and of stationary parts, struts in particular. As with tire treads, and similar to the principle of acoustic diffusors, an irregular shape and distribution can flatten the noise spectrum, making the noise sound less disturbing. The inlet shape of the fan can also influence the noise levels generated by the fan. Optimal temperature for use The optimal temperature for using a fan to cool down remains uncertain.
While fans are commonly used to lower body temperature through evaporative cooling, there is a point at which the convection effect of moving air can counteract this benefit. This temperature, at which fan use may become detrimental, is currently unknown. Health organizations offer varying guidance on fan usage in high temperatures. The Centers for Disease Control and Prevention (CDC) advises against fan use when temperatures exceed 32.2 °C (90 °F), while the World Health Organization (WHO) suggests avoiding fan use above 40 °C (104 °F). Recent studies have shed further light on this issue, though their findings are somewhat contradictory: one study found limited additional benefit from fan use above 35 °C (95 °F), while another reported a 31% reduction in cardiac stress among elderly individuals using fans at 38 °C (100 °F). Fan motor drive methods Standalone fans are usually powered by an electric motor, with the impeller often mounted directly on the motor's output shaft, with no gears or belts. The motor is either hidden in the fan's center hub or extends behind it. Big industrial fans commonly use three-phase asynchronous motors, which may be placed near the fan and drive it through a belt and pulleys. Smaller fans are often powered by shaded-pole AC motors, or by brushed or brushless DC motors. AC-powered fans usually use mains voltage, while DC-powered fans use low voltage, typically 24 V, 12 V, or 5 V. A fan is often driven by a machine's own rotating parts rather than powered separately. This is commonly seen in motor vehicles with internal combustion engines, large cooling systems, locomotives, and winnowing machines, where the fan is connected to the drive shaft directly or through a belt and pulleys. Another common configuration is a dual-shaft motor, where one end of the shaft drives a mechanism while the other has a fan mounted on it to cool the motor itself. Window air conditioners commonly use a dual-shaft fan to operate separate fans for the interior and exterior parts of the device. Where electrical power or rotating parts are not readily available, other methods may drive fans. High-pressure gases such as steam can drive a small turbine, and high-pressure liquids can drive a Pelton wheel, either of which can provide the rotational drive for a fan. Large, slow-moving energy sources, such as a flowing river, can also power a fan using a water wheel and a series of step-up gears or pulleys to increase the rotational speed to that required for efficient fan operation. Solar power Electric fans used for ventilation may be powered by solar panels instead of mains current. This is an attractive option because, once the capital costs of the solar panel have been covered, the resulting electricity is free. If ventilation needs are greatest during sunny weather, a solar-powered fan can be suitable. A typical example uses a detached 10-watt solar panel and is supplied with appropriate brackets, cables, and connectors; the rated coverage area and airflow vary by model. Because of the wide availability of 12 V brushless DC electric motors and the convenience of wiring such a low voltage, such fans usually operate on 12 volts. The detached solar panel is typically installed in the spot that gets the most sunlight and then connected by cable to the fan, which can be mounted some distance away. Other permanently mounted and small portable fans include an integrated (non-detachable) solar panel.
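The fifth-power noise law quoted in the Noise section above lends itself to a quick worked check. The following is a minimal Python sketch (the function name and sample speed ratios are illustrative choices, not from any standard): since noise power scales roughly as speed to the fifth power, the level change in decibels for a speed ratio r is 10*log10(r**5) = 50*log10(r).

import math

def fan_noise_change_db(speed_ratio):
    # Noise power ~ speed^5, so the change in sound level is
    # 10 * log10(speed_ratio ** 5) = 50 * log10(speed_ratio) decibels.
    return 50 * math.log10(speed_ratio)

print(round(fan_noise_change_db(0.5), 2))   # halving speed: about -15.05 dB
print(round(fan_noise_change_db(2.0), 2))   # doubling speed: about +15.05 dB

This steep scaling is why, under these assumptions, two fans run at reduced speed can move the same air as one fast fan while remaining substantially quieter overall.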
See also References External links Turbomachinery Cooling technology Ventilation Heating, ventilation, and air conditioning Mechanical engineering Chemical engineering Gas compressors Turbines Thermodynamics Fluid dynamics Aerodynamics Articles containing video clips
Fan (machine)
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
5,093
[ "Applied and interdisciplinary physics", "Turbomachinery", "Gas compressors", "Dynamical systems", "Chemical equipment", "Chemical engineering", "Aerospace engineering", "Turbines", "Aerodynamics", "Thermodynamics", "Mechanical engineering", "nan", "Piping", "Fluid dynamics" ]
12,880,414
https://en.wikipedia.org/wiki/Appleton%E2%80%93Hartree%20equation
The Appleton–Hartree equation, sometimes also referred to as the Appleton–Lassen equation, is a mathematical expression that describes the refractive index for electromagnetic wave propagation in a cold magnetized plasma. The Appleton–Hartree equation was developed independently by several different scientists, including Edward Victor Appleton, Douglas Hartree and German radio physicist H. K. Lassen. Lassen's work, completed two years prior to Appleton's and five years prior to Hartree's, included a more thorough treatment of collisional plasma; but, published only in German, it has not been widely read in the English-speaking world of radio physics. Further, regarding the derivation by Appleton, it was noted in the historical study by Gillmor that Wilhelm Altar (while working with Appleton) first calculated the dispersion relation in 1926. Equation The dispersion relation can be written as an expression for the frequency (squared), but it is also common to write it as an expression for the index of refraction, n^2 = (ck/\omega)^2. The full equation is typically given as follows: n^2 = 1 - \frac{X}{1 - iZ - \frac{\frac{1}{2}Y^2\sin^2\theta}{1 - X - iZ} \pm \frac{1}{1 - X - iZ}\sqrt{\left(\frac{1}{2}Y^2\sin^2\theta\right)^2 + (1 - X - iZ)^2 Y^2\cos^2\theta}} or, alternatively, with damping term and rearranging terms: n^2 = 1 - \frac{X(1 - X - iZ)}{(1 - iZ)(1 - X - iZ) - \frac{1}{2}Y^2\sin^2\theta \pm \sqrt{\left(\frac{1}{2}Y^2\sin^2\theta\right)^2 + (1 - X - iZ)^2 Y^2\cos^2\theta}}. Definition of terms: n: complex refractive index; i = \sqrt{-1}: imaginary unit; X = \omega_0^2/\omega^2; Y = \omega_H/\omega; Z = \nu/\omega; \nu: electron collision frequency; \omega = 2\pi f: angular frequency; f: ordinary frequency (cycles per second, or hertz); \omega_0 = 2\pi f_0 = \sqrt{N e^2/(\epsilon_0 m)}: electron plasma frequency (N being the electron number density); \omega_H = 2\pi f_H = B_0 |e|/m: electron gyro frequency; \epsilon_0: permittivity of free space; B_0: ambient magnetic field strength; e: electron charge; m: electron mass; \theta: angle between the ambient magnetic field vector and the wave vector. Modes of propagation The presence of the \pm sign in the Appleton–Hartree equation gives two separate solutions for the refractive index. For propagation perpendicular to the magnetic field, i.e., \theta = \pi/2, the '+' sign represents the "ordinary mode," and the '−' sign represents the "extraordinary mode." For propagation parallel to the magnetic field, i.e., \theta = 0, the '+' sign represents a left-hand circularly polarized mode, and the '−' sign represents a right-hand circularly polarized mode. See the article on electromagnetic electron waves for more detail. Here the wave vector lies in the propagation plane. Reduced forms Propagation in a collisionless plasma If the electron collision frequency \nu is negligible compared to the wave frequency of interest \omega, the plasma can be said to be "collisionless." That is, given the condition \nu \ll \omega, we have Z = \nu/\omega \ll 1, so we can neglect the Z terms in the equation. The Appleton–Hartree equation for a cold, collisionless plasma is therefore n^2 = 1 - \frac{X}{1 - \frac{\frac{1}{2}Y^2\sin^2\theta}{1 - X} \pm \frac{1}{1 - X}\sqrt{\left(\frac{1}{2}Y^2\sin^2\theta\right)^2 + (1 - X)^2 Y^2\cos^2\theta}}. Quasi-longitudinal propagation in a collisionless plasma If we further assume that the wave propagation is primarily in the direction of the magnetic field, i.e., \theta \approx 0, we can neglect the \sin^4\theta term under the square root above. Thus, for quasi-longitudinal propagation in a cold, collisionless plasma, the Appleton–Hartree equation becomes n^2 = 1 - \frac{X}{1 \pm Y\cos\theta}. See also Mary Taylor Slow Plasma (physics) Waves in plasmas References Citations and notes Electromagnetic radiation Waves in plasmas Plasma physics equations
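A numerical evaluation may help make the equation concrete. The Python sketch below implements the full collisional form using the X, Y, Z definitions above; the sample frequencies are illustrative ionospheric-scale values assumed purely for demonstration, not taken from the article.

import numpy as np

def appleton_hartree_n2(omega, omega_0, omega_H, nu, theta):
    # X, Y, Z as defined in the article text.
    X = (omega_0 / omega) ** 2
    Y = omega_H / omega
    Z = nu / omega
    D = 1 - X - 1j * Z  # recurring denominator term (1 - X - iZ)
    root = np.sqrt((0.5 * Y**2 * np.sin(theta)**2) ** 2
                   + D**2 * Y**2 * np.cos(theta)**2)
    def n2(sign):
        return 1 - X / (1 - 1j * Z
                        - 0.5 * Y**2 * np.sin(theta)**2 / D
                        + sign * root / D)
    return n2(+1), n2(-1)  # the two solutions from the +/- sign

# Illustrative values: 5 MHz wave, 3 MHz plasma frequency,
# 1.4 MHz gyrofrequency, weak collisions, propagation at 45 degrees.
w = 2 * np.pi * 5.0e6
n2_plus, n2_minus = appleton_hartree_n2(
    w, 2 * np.pi * 3.0e6, 2 * np.pi * 1.4e6, 1.0e3, np.deg2rad(45))
print(n2_plus, n2_minus)

Setting nu = 0 in this sketch reproduces the collisionless reduced form given above.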
Appleton–Hartree equation
[ "Physics" ]
588
[ "Waves in plasmas", "Physical phenomena", "Equations of physics", "Electromagnetic radiation", "Plasma phenomena", "Waves", "Radiation", "Plasma physics equations" ]
19,683,347
https://en.wikipedia.org/wiki/Fayet%E2%80%93Iliopoulos%20D-term
In theoretical physics, the Fayet–Iliopoulos D-term (introduced by Pierre Fayet and John Iliopoulos) is a D-term in a supersymmetric theory obtained from a vector superfield V simply by an integral over all of superspace: S_{\mathrm{FI}} = \xi \int d^4x \, d^4\theta \; V. Because a natural trace must be a part of the expression, the action only exists for U(1) vector superfields. In terms of the components, it is proportional simply to the last auxiliary D-term of the superfield V, S_{\mathrm{FI}} \propto \xi \int d^4x \, D(x). It means that the corresponding D that appears in D-flatness conditions (and whose square enters the ordinary potential) is additively shifted by \xi, the coefficient. References Supersymmetric quantum field theory
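A component-level sketch of that shift, under standard but convention-dependent normalizations that the text above does not fix: for a U(1) gauge theory with charged scalars \phi_i of charge q_i and gauge coupling g, the terms of the Lagrangian involving the auxiliary field D read

\mathcal{L}_D = \tfrac{1}{2} D^2 + \xi D + g D \sum_i q_i |\phi_i|^2 ,

and eliminating D through its algebraic equation of motion gives

D = -\xi - g \sum_i q_i |\phi_i|^2 , \qquad V_D = \tfrac{1}{2} D^2 = \tfrac{1}{2} \Big( \xi + g \sum_i q_i |\phi_i|^2 \Big)^2 ,

which exhibits explicitly the additive shift of D by \xi described above (signs and factors depend on the chosen conventions).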
Fayet–Iliopoulos D-term
[ "Physics" ]
145
[ "Supersymmetric quantum field theory", "Quantum physics stubs", "Quantum mechanics", "Supersymmetry", "Symmetry" ]
19,683,961
https://en.wikipedia.org/wiki/Solvent%20casting%20and%20particulate%20leaching
In solvent casting and particulate leaching (SCPL), a polymer is dissolved in an organic solvent. Particles with specific dimensions, mainly salts, are then added to the solution. The mixture is shaped into its final geometry. For example, it can be cast onto a glass plate to produce a membrane or in a three-dimensional mold to produce a scaffold. When the solvent evaporates, it creates a structure of composite material consisting of the particles together with the polymer. The composite material is then placed in a bath that dissolves the particles, leaving behind a porous structure. References Polymers
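Because the pores left behind correspond to the volume of the leached particles, the expected porosity can be estimated from the salt loading. A minimal Python sketch, assuming complete leaching, no shrinkage, and illustrative densities (the NaCl and polymer values are assumptions, not given in the text):

def scpl_porosity(m_salt, m_polymer, rho_salt=2.16, rho_polymer=1.25):
    """Estimated porosity: leached-particle volume over total composite volume.

    Masses in grams; densities in g/cm^3 (illustrative assumed values,
    e.g. NaCl ~2.16 and a generic polymer ~1.25).
    """
    v_salt = m_salt / rho_salt
    v_polymer = m_polymer / rho_polymer
    return v_salt / (v_salt + v_polymer)

# A 9:1 salt-to-polymer mass ratio gives roughly 84% porosity
# under these assumptions.
print(scpl_porosity(9.0, 1.0))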
Solvent casting and particulate leaching
[ "Chemistry", "Materials_science" ]
124
[ "Polymers", "Polymer chemistry" ]
19,688,599
https://en.wikipedia.org/wiki/Offset%20agreement
Offsets are compensatory, reciprocal trade agreements between an exporting foreign company, or possibly a government acting as intermediary, and an importing entity. Offset agreements often involve trade in military goods and services and are alternatively called industrial compensations, industrial cooperation, offsets, industrial and regional benefits, balances, juste retour or equilibrium, to define mechanisms more complex than counter-trade. Counter-trade can also be considered one of the many forms of defense offset used to compensate a purchasing country. The main difference between a generic offset and counter-trade, both common practices in the international defense trade, is the involvement of money. In counter-trade, goods are paid for through barter or other mechanisms without the exchange of money, while in other generic offsets money is the main medium of exchange. Definition Offsets can be defined as provisions to an import agreement, between an exporting foreign company, or possibly a government acting as intermediary, and an importing public entity, that oblige the exporter to undertake activities in order to satisfy a second objective of the importing entity, distinct from the acquisition of the goods and/or services that form the core transaction. The incentive for the exporter results from the conditioning of the core transaction on the acceptance of the offset obligation. Often, the proclaimed aim of this process is to even up a country's balance of trade. However, some forms of offset transactions do not represent trade flows going from the initial importer towards the initial exporter. Offsets are frequently an integral part of international defense contracts. Offset as unfair burden The U.S. government's definition of offset agreement is the most crucial, since the U.S. aerospace and defense industry is the biggest exporter of aerospace and defence products and is therefore engaged in the majority of the world's offsets. The U.S. has a Commerce Department division, the Bureau of Industry and Security (BIS), that deals specifically with U.S. defense offset agreements with foreign nations as a main subset of U.S. industrial security. BIS, whose main task is protecting U.S. security with regard to the export of high technology, fostering commercially acceptable U.S. foreign policy, and protecting U.S. economic interests, deals with U.S. aerospace and defense companies that export defense products, systems or services involving "offset agreements," that is, the collateral or additional agreements requested by purchasers alongside those sales. BIS defines offsets as "mandatory compensations required by foreign governments when purchasing weapon systems and services." The U.S. government underlines the compulsory aspect of this trade practice, since the United States, along with other weapons-exporting countries such as Germany and France, opposes offsets as forms of protectionism and harmful transgressions of free-market rules. These governments frown on offset agreements, considering them to be both market-distorting and inefficient. In 2002, U.S. companies had 49% of global defense industry exports and EU companies 35%. Data from AECMA 2002 Facts and Figures.
Offset as partnership In 2008 the Brazilian Minister of Strategic Affairs, speaking of a major defense purchase by his country, highlighted this key point: "We will not simply be buyers or clients, but partners." The competition among different companies "in offering comparable weapons to a country" is also a competition in "sharing" or "partnership" with the purchaser, Roberto Mangabeira Unger added. Offset as marketing tool In the weapons trade, defense contractors are fully aware that offsets are powerful marketing tools to motivate a purchase, by offering the purchasing country additional advantages besides the military equipment itself. The U.S. defense industry position seems to be more practical, and somewhat quietly nonaligned with the U.S. government's economic or political assessment of defense offsets. Generally speaking, one can understand offsets as a widespread sales technique. As such, they are not restricted to weapons sales; they belong to commerce itself, in the same way that rebates, price-pack deals, or loyalty rewards programs do. Understanding "defense offsets" to be part of a sales technique helps to curb the justified yet excessive emphasis on their mandatory nature. Often, defense offsets are more motivating than the primary defense acquisition, for personal or political reasons. This may seem irrational, but it is part of commerce. If one adds the prevalent political aspect of spending huge public funds on modern weapons, then the motivating significance of defense offsets should not be underestimated in the contemporary decision processes of democracies. Prime defense contractors are well aware of offsets' power in the psychologies of democracies. As anyone can understand, the seller will include the cost of the "Envelope B," that is, of the offset and its added value for the purchaser, in its total cost. In other words, the client will pay for the offset; it is not a free lunch. But the key question is: to what extent is the offset proposal a factor in the consideration of a defense contractor's tender during the evaluation and decision procedures? Transparency International summarizes the matter clearly: using offsets as marketing tools makes offsets "the ideal playground for corruption." The universe of this military niche of offsets trade is sophisticated and less innocuous than commonly believed. In 2000 Daniel Pearl wrote an article about the universe of offsets: "could the sale of U.S. weapons in the Persian Gulf help an oil concern unload gasoline stations in Europe? Yes, under the new logic of international arms deals." Pearl was describing the new-found world of indirect offsets. The market size of the international offset business is related to the volume of the international weapons trade. According to SIPRI, in 2007 there were US$51 billion of weapons exports; the figure is approximate, because not all weapons deals appear in open sources. Defense offset example As an example of a defense offset proposal, consider a hypothetical case of Nation P (Purchaser) buying 300 tanks from defense company S (Seller, of Nation S); see also the example given by the U.S. Bureau of Industry and Security, BIS Annual Report 2007, p. 136. The total sale contract is $400 million, and Nation P (Purchaser) requests an offset of 120%. Defense Company S (Seller) is obliged to fulfill an offset equal to 120% of the sales contract, that is, $480M.
Nation P agrees with Company S on a list of specific offset deals and programs to fulfill the agreed total obligation. The offset agreement includes both direct and indirect offsets. Nation P also assigns a credit value to each type of offset offered by Company S. The credit value for an offset obligation is not its "actual value" but the "actual value" multiplied by a multiplier that expresses the degree of interest of Nation P (Purchaser) in the proposed offset. In other words, something deemed very valuable by Nation P will have a high multiplier that expresses the importance and the value to Nation P of that type of offset. The multiplier (for instance 2, or 5, or 7) translates Nation P's attached value into the credit value that eventually counts toward the fulfillment of the agreed sum of $480M (120% offset); it is evident that without multipliers, a 120% offset would make little sense. Most offset packages are divided into direct and indirect offsets. Here is a hypothetical, complex offset offer, divided into direct and indirect offsets in Nation P. Direct Offsets (military, and related to the product sold by Company S, i.e., the tanks in this example) Co-production: Nation P chooses one or more local companies to manufacture some components of the tanks, such as turrets and some of the internal components. The actual value of the components is $70 million. Nation P assigns a multiplier of 3, since this develops capabilities of its military industrial base and creates jobs in Nation P. The total credit value toward the fulfillment of the overall offset obligation is $70M x 3 = $210M. Indirect Offsets (civilian agreements and other agreements not related to the production of the purchased weapons item; they can also be military or security related, as long as they are not directly connected with the main acquisition, i.e., the tanks) Foreign Direct Investments: Company S makes investments in 5 (defense or non-defense) companies in Nation P. The total value of the investments is $14.5M, and the multiplier is 4, a high multiplier, since Nation P suffers from a chronic lack of foreign direct investment. This makes an additional credit value for Company S of $58M. Technology Transfer: Company S provides water desalination technologies to one Nation P company. This technology is particularly appreciated by Nation P. Its actual value is $20M, but the credit value is 7 times the actual value, that is, $140M. Export Assistance and Marketing: Company S provides commercial assistance to market products and services of a Nation P company in a difficult market, such as, for instance, the Middle East. The assistance is offered for 8 years, at a value of $3M per year. Nation P considers this export assistance important for creating new revenue streams and jobs for its company, and sets a multiplier of 3. Credit value: $72M. (Since Company S is not an expert in marketing and export assistance, it may hire a specialist company to subcontract the job. Such a subcontractor is also known as an "offset fulfiller".) Nation P controls not only the supply of the military systems or services, but also the implementation of the offsets according to the offset agreement included in or related to the main supply contract. This control rests with the Ministry of Defense and/or the Ministry of Economy or Finance, or the Ministry of Industry and Trade. Arms-importing nations often establish special agencies for the supervision of defense offsets.
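The arithmetic of the hypothetical package above is easy to check mechanically. Below is a minimal Python sketch; the figures simply restate the example, and the tally function is illustrative, not any official offset-accounting methodology.

offsets_musd = [
    # (description, actual value in millions of USD, agreed multiplier)
    ("Co-production of tank components", 70.0, 3),
    ("Foreign direct investments", 14.5, 4),
    ("Water desalination technology transfer", 20.0, 7),
    ("Export assistance, 8 years x $3M/year", 24.0, 3),
]

# Credit value = actual value x multiplier, summed over the package.
credit_total = sum(actual * mult for _, actual, mult in offsets_musd)
obligation = 400.0 * 1.20  # 120% of the $400M sales contract

print(credit_total)                # 480.0, i.e. 210 + 58 + 140 + 72
print(credit_total >= obligation)  # True: the package meets the $480M obligation

Note how the multipliers do the work: the package's actual value is only $128.5M, yet it fully discharges a nominal $480M obligation.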
Types of offset Offset proposals often make a distinction between direct and indirect offsets. A direct offset is a side agreement directly related to the main product or service being bought or sold, that is, the military equipment, systems, or services; direct offsets may also be called military offsets. For example, a buyer of military equipment may be given the right to produce a component of a related technology in the buyer's country. Indirect offsets are side agreements that are not directly related to the product or service being bought or sold. This category is often referred to as civilian offsets, though many indirect offsets are not civilian. Indirect offsets may take the form of services, investments, counter-trade and/or co-production. For instance, Greek companies produce part of the Lockheed C-130s that Greece bought from the U.S.; the Greek co-production is a U.S. direct offset. Or, in a more sophisticated form of offset involving three countries, Portugal is in charge of the maintenance of Kuwaiti Lockheed Martin aircraft: this is a Portuguese "direct offset," since Portugal bought the same aircraft and is the partner in charge of their maintenance. An investment in a Romanian security software company, or assistance with the export and marketing of a Belgian environmental company's products in difficult markets, are actual examples of indirect offsets. The most common types of direct/indirect offsets include co-production, licensed production, subcontracting, technology transfer, training, purchases, credit assistance, and investment. The most complete and accurate list of actual offsets can be found in the BIS Annual Reports to Congress, where all registered forms of offset are codified according to the old Standard Industrial Classification. Some offset mechanisms Since offsets typically involve military departments of sovereign nations comparable to the US Defense Department, many countries have offset laws, public regulations or, alternatively, formal internal offset policies. Defense prime contractor, offset service providers In medium and large tenders for weapons systems the bid can be very complex, involving one or more companies as bidders. The main offer is guided by a prime contractor that may have other companies associated with the bid as partners or subcontractors. However, with regard to the agreed offset proposal, only the prime contractor is liable toward the final client for its fulfillment. Since offsets are increasingly complex, the prime contractor may hire subcontractors to fulfill its contractual obligations. While liability for the offsets stays with the prime contractor, the job can be executed by a subcontractor, or offset fulfiller. One of the collateral benefits of offsets in the U.S. (probably not offsetting the adverse effects of defense offsets on the economy and jobs) is the proliferation of legal and economic jobs in the "offsets sector," which ranges from international law firms to the fully staffed international offices of U.S. defense companies. In addition, new companies have been established in "offset" venture capital and "offset" marketing assistance: offset fulfillers that provide support services to the defense and aerospace industry. While many offset programs originally evolved as a result of defense and aerospace sales, today many new offset / Industrial Partnership programs have developed as a result of nations wanting to improve their industrial base and level of technology, and to become more self-sufficient across a broad range of industry sectors.
As a result of this new interest in and demand for Industrial Partnership, there has been both growth in and establishment of new offset associations. There are both global and domestic "offset" / Industrial Partnership organizations. The largest and most widely attended global association is GOCA, the Global Offset and Countertrade Association, whose purpose is to promote trade and commerce between companies around the world and their foreign customers through a greater understanding of countertrade and offset. Each year GOCA hosts several industry meetings in partnership with other European and American offset organizations such as ADS (Aerospace Defence Security) and DIOA (the U.S. Defense Industry Offset Association). DIOA was established in 1985 by members of the U.S. defense industry to foster education, networking and guidelines for professionalism in offset implementation. The industry meetings referenced above are attended by leading global industrial companies, by companies that specialize in providing support services for Industrial Partnership fulfillment, and by various government and military authorities, mostly from national ministries of defense and economy, who oversee and monitor offset and Industrial Partnership programs. The purpose of these industry meetings is to educate, and to foster relationships between the various offset obligors and beneficiaries, so that real and sustainable economic benefits can be delivered to nations seeking to improve their economies. Offsets and Industrial Partnership programs have been evolving for over three decades, and there are dozens of articles that describe various successes and challenges that have resulted from implementing offset programs. The most widely read publication that deals with developments in the offset and Industrial Participation arena is Countertrade and Offset ("CTO"). Offset fulfillers / service providers may perform various support services, including marketing assistance and the sourcing and structuring of venture capital or other forms of corporate credit for Foreign Direct Investment (FDI), or more complex undertakings, such as a joint venture to produce shrimp (Devcorp, Bahrain), a sugar factory, or, recently, environmental and renewable-energy projects. Offset certificates, penalties, confidentiality clauses, pre-offset activities Offsets are of various durations. They may be planned to last for 1 or 2 years, but 8-10 year plans are very common; an exceptionally long example is the Al Yamamah program, a BAE-UK offset in Saudi Arabia in place since 1987. Clients (sovereign countries) have mechanisms in place to control offset implementation and to certify milestone accomplishments in their offset programs. An offset supervision authority certifies the percentage of progress toward offset completion by issuing offset certificates. These certificates may be issued to prime contractors fulfilling their offset agreement, but also to offset fulfillers that have subcontracted the job from prime contractors and are registered as such in the foreign countries. Where there are multipliers, such certificates express the percentage of completion in "credit value" ("actual value" x multiplier). Offset fulfillers redeem the offset certificates through contracts or subcontracts with the prime contractor.
More recently, given the importance and growth of offset practices around the world, offset fulfillers can "sell" their certificates to prime contractors other than their initial one, as long as they have the authorization of the national offset commissions. In this profitable niche of the defense industry, populated by offset specialists, lawyers and companies, there is also a "currency" and a "trade" in offset certificates. As in any contract, there are forms of penalty for failing to complete offset obligations. Many nations have rigid systems of penalties, including the use of bank guarantees, while other nations believe in continued negotiations based on "best effort" clauses. The list of incentives and penalties for offsets is no different from many other systems of procurement, with two remarkable exceptions: 1. In the offset business there are two contracts proceeding in parallel, i.e. a primary (A) and a side (B) contract: A) the supply of defense equipment/services from the defense company to the client (foreign state), according to contractual specifications (quality, quantity, time, etc.); B) the progress of the offsets, as monitored by the same client (foreign state), though most of the time by a different state entity, according to the contractual offset agreement (quality, quantity, time, etc.). These two contracts affect each other, and problems with one can affect the other. However, since most offsets today are not direct, this may create confusion and distortions, especially with indirect offsets. 2. In direct offset contracts there are legitimate confidentiality clauses, which in several countries may even carry official classification, up to state secret. In European Union states, however, extending state-secret classification to indirect offsets, which have nothing to do with military or state security, is considered an abuse. For instance, classifying an offset not related to state security or military preparedness, such as indirect and civilian offsets in pharmaceutical research, environmental technologies, or export assistance for non-military/security products, leads not only to major market distortions but also to possibly unpunished corruption protected by baseless secrecy. Pre-offset activities are allowed and welcomed by several countries; they are like offsets without any certainty of obtaining "credit value". These activities are straight marketing activities, similar to lobbying, to promote specific defense purchases. Pre-offset activities must be registered as such with the national authorities. Often pre-offset activities will receive certificates after a sale. Defense companies include them in the marketing budget, but after the sale these offsets go into their offset budgets and count toward offset fulfillment. These kinds of pre-offset activities are sales arguments to convince the buying state of the strength and reliability of the aerospace and defense company as a potential supplier and partner. The pre-offset arena is also the delicate and problematic field of sale facilitators; precisely because the client is a state, it must be monitored with additional care, since this field is prone to abuse and outright corruption. Foreign military and direct commercial sales – "no known offsets" and FMS For the U.S., by far the major world exporter of weapons, there are two main ways to sell weapons to a foreign country. The first is referred to as a "Direct Commercial Sale" and is a company-to-government sale.
The second is referred to as a "Foreign Military Sale", which is a government-to-government sale. A Direct Commercial Sale is closely supervised by the U.S. government, and even by the U.S. Congress, in spite of its free-market appearance. The arms trade, because of its connection with national security, is never free from strict government supervision. For a sale to a foreign country's defense department, a U.S. defense firm must be licensed. It is checked by the Defense Department and by the State Department, and, in the case of major sales, even authorized or vetoed by the U.S. Congress. Direct Commercial Sales are highly regulated for security, political, and commercial reasons. Even for indirect and non-military offset agreements, U.S. defense companies and their subcontractors (offset fulfillers) must present detailed reports of their offset activities to the Commerce Department's Bureau of Industry and Security (BIS). Foreign Military Sales are indirect sales of weapons produced by one or more U.S. contractors through an agency of the Department of Defense, the Defense Security Cooperation Agency (DSCA). In a way, DSCA acts as the prime contractor's agent in promoting and selling U.S.-made weapons to foreign countries. The known FMS disadvantage is that DSCA adds to the final sale price a small percentage for its own administrative costs; the advantage is some free training with the U.S. Armed Forces for joint international operations. In this type of sale, however, there are two important aspects regarding the offset business. 1) Since 1990, under a specific directive by President George Bush, no U.S. federal agency or U.S. government employee can be involved in the offset business. Every press release about an FMS (and any document regarding an FMS) carries a standard disclaimer: "There are no known offset agreements proposed in connection with this (potential) sale." The Defense Security Cooperation Agency acts on behalf of the prime contractor with a foreign government and disclaims even mere knowledge of offsets, but under the FMS program prime contractors are allowed to include all their offset costs in the final price. The cost of the offset is not even itemized in the FMS offer, and if the client wants to discuss or simply to know the cost of the offset, the client must speak directly with the contractor and not with DSCA. In other words, the U.S. government cannot deal with offsets; U.S. defense prime contractors can and do. DSCA has made available in its manual details and analytical explanations for U.S. defense companies on how to include the various offset costs in their contracts and invoices. So prime contractors, even under FMS, are allowed to recover all costs "of any offsets which are associated with those contracts." To be sure, "U.S. Government agencies may not enter into or commit U.S. companies to any offset agreement." De facto, during the Cold War offsets had different functions, and U.S. government agencies were often directly involved. President Bush, at the victorious end of the Cold War, likewise ended U.S. agencies' involvement in delicate practices like offsets (1990), since these had lost the primary political value they held during the Cold War. 2) U.S. funds assigned under United States Foreign Military Financing (FMF) that may be connected with Foreign Military Sales (FMS) cannot be used for any type of offset if they are non-repayable funds. On the other hand, FMF funds may be used for offset costs if they are loans.
In any case, "U.S. Government agencies may not enter into or commit U.S. firms to any offset agreement." List of offset policies of different countries The following is a cursory survey on some countries' offset policies. It does not enter into details, and basically it gives: 1) the legal base for the offset; 2) the purchase threshold above which there is a requests for offset; 3) the requested "quantity" of offset by the country in terms of percentages of the contract value; 4) the applied multipliers, that qualify ("quality") through a number the appreciations of a certain type of offsets (the "Credit Value" of an offset is the "Actual Value" by the multiplier); 5) and some remarks or specific information, including the websites of the National Offset activities. A detailed list of the National Laws and Policies of the Countries of the European Union can be found in the website of European Defense Agency in a new "EU Offset Portal". Another very useful analysis of country policies can be found in Belgian Ministry of Economy (in charge of Belgian offsets). This publicly available document gives one of the most intelligent global analyses of countries offsets policies, with a purchaser's perspective, that is the point of view of weapons importer countries. From a similar point of view, one can see the purchaser's point of view on offsets in UAE offset web-page, in the new Kuwaiti articulated offset policy. On the seller's point of view, in Bureau of Industry and Security (BIS) Annual Reports to U.S. Congress one can find the position on offsets of weapons exporters countries like U.S. Magazines and specialized publications like Jane's Defence Industry, EPICOS, Countertrade & Offset or CTO give detail accounts and updates on national policies, requests, changes, etc. Australia The Department of Defense (Defense Material Organization) is in charge of the offset. The threshold is 5 million Australian dollars. Multipliers go from 1 to 6. By rule, Australia does not accept indirect (civilian) offsets, unless such offsets brings benefits to the Australian Defense Industry. Austria Offset Agreements are negotiated by the Federal Ministry of Economic Affairs and Labor on a case-by-case basis; the percentage of offset is above 100%, up to 200% (and sometimes even more) of the contract values. Austria has one of the highest requests in the world for nominal quantity offsets. However, multipliers can go up to 10. The minimum value of the sale for mandatory offsets is 726,000 Euro. This high offset policy, applied to EADS Eurofighter Typhoons purchase in 2002 and still ongoing, was changed in 2008-9 when Austria signed the voluntary EU Code of Conduct on Offset. The relevant changes are the reduction of the offset quantity to 100% (that is the value of the purchase contract), the value of multipliers, and the monitoring function of the Federal Ministry of Defence and Sports. Bahrain Bahrain has no offset policy. The Kingdom imports about 80% of its weapons from U.S. via FMS, and the rest of weapons procurement comes from U.K. However, a specific offset policy is expected soon. Belgium A Royal Decree (6/2/1997, and modified-6/12/2001) is the legal foundation of the Belgian Industrial Benefits Program. The program is directed by Ministry of Economic Affairs (Industrial Benefits in the Field of Defense Procuremen).t The threshold value is usually 11 million euro, but it is lower if it is not a public and open tender. The minimum required offset is 100%. Multipliers are not specified. 
The focus is on high technology and new or additional business flow. The Belgian offset guidelines are very sophisticated. One of the most important and explicit points is the so-called "newness aspect": offsets, such as export assistance, "must create unambiguously a new or additional business flow in export" for Belgian companies. Belgium distinguishes three forms of offsets: direct, semi-direct, and indirect. Belgium has a specific problem with the Industrial Benefit Programme, related to the de facto split of the country into Wallonia and Flanders. The Belgian offset program is more meticulous than those of other European countries, but the Belgian citizenry is aware that paying for offsets, instead of getting weapons at off-the-shelf prices, implies not-so-transparent transfers of taxpayers' money to designated Belgian companies through political decisions. One can rightly consider such offset transfers a form of confidential state subsidy. Flemish parties claim that subsidies to Wallonia cost proportionally more than the price West Germany paid for East Germany. The subsidies/offset issue was particularly acute in the division of direct and indirect industrial benefits for one of the recent acquisitions with offsets: 242 Mowag Piranhas for the Belgian Army from General Dynamics, worth over 700 million euro. One of the controversial offsets connected with the Piranha III is a direct offset for Piranha III co-production, and the disputed choice of a 90 mm gun (produced by a Walloon company) instead of the 105 mm NATO-standard cannon. Brazil Brazilian offset policy was formally implemented by the Brazilian Air Force in 1990; however, the practice had already been in constant use since at least 1975. The Brazilian MoD updated its offset policy in 2018 (Resolution 61/GM-MD), when a new threshold of US$50 million was established; minor updates followed in 2021. The last update was made in 2023, by Resolution MD n°3.990. Deriving from the MD policy, the Air Force, Navy, and Army have separate specific offset guidelines. The total offset request is 100% whenever possible. Multipliers are usually between 1 and 4, exceptionally 5. Brazilian offset policy emphasizes the technological development of its defense industry, especially aerospace, mainly through technology transfers and training. Bulgaria Public Procurement Law 2004, revised in 2009. Direct offsets are supervised by the Defense Ministry, and indirect offsets by the Ministry of Economy and Energy. There is, however, a Permanent Inter-Ministerial Council of Special Purpose Public Procurement to approve offset agreements. The Agency for Information, Technology and Communications (SAITC) assists and coordinates offset projects. The threshold for offset is 5 million euro. The minimum value is 110% of the contract value. Multipliers are between 1 and 3. Offsets are generally 30% direct and 70% indirect. Canada Canada's offset agreements (known as Industrial and Technological Benefits (ITB) and Value Proposition (VP)) are managed by the ITB/VP directorate within the Canadian government's Innovation, Science and Economic Development (ISED) department. The IRB Policy, the predecessor to the ITB/VP Policy, was created in 1986 to assist Canadian companies in leveraging government procurement. The Canadian government revamped the policy into the ITB/VP program in 2014. The new policy requires prime contractors to place sub-contracts and investments in sectors of strategic value to the Canadian economy. Contractors must usually achieve 100% in offsets.
Offset achievements can be direct (transactions achieved in the performance of the contract) or indirect (transactions which are not directly related to the procured items and are, instead, related to investments, technology cooperation, and product mandates). The ITB/VP program applies only to defence and Coast Guard procurements (and resulting contracts) for which the Canadian government has invoked the National Security Exemption (NSE). The NSE excludes a procurement from all trade agreements to which Canada is a signatory. Not all NSE-invoked procurements are subject to the ITB/VP policy, but all ITB/VP procurements are NSE-invoked. The Canadian government typically invokes the NSE for ITB/VP purposes on all defence and Coast Guard procurements over $100M (CAD), and on those between $20M and $100M on a case-by-case basis. An industry bidder submits two components as part of the offset proposal: an ITB portion and a VP portion. The ITB portion identifies the total offsets a bidder commits to achieving (measured in Canadian Content Value, or CCV) once under contract. The VP portion identifies the CCV a bidder commits to achieving toward five pillars: work in the Canadian defence industry; Canadian supplier development; research and development; exports from Canada; and skills development and training. The ITB/VP policy requires contractors to allocate a certain percentage (typically 15%) of the total contract value to small and medium-sized businesses. As of 2022, Canada also requires defence contractors to allocate 5% of the contract value to indigenously owned businesses. Defence contractors must also comply with Canada's gender and diversity policies. Czech Republic Offset regulations are set by Government Resolution 9 - 2005. The Ministry of Trade and Industry is in charge of industrial cooperation (also through an Offset Commission). The minimum value of the contract is CZK 500 million. The minimum offset percentage is 100 per cent. No multipliers are used. The offset focus is on new technologies, co-operation and technology transfer, with a minimum of 20% direct offsets. The Offset Commission issues annual reports on the status of the offsets. One of the major offset controversies in the Czech Republic concerns the purchase of General Dynamics Pandur II vehicles between 2003 and 2009. In February 2010, anti-corruption police opened an investigation. According to The Prague Post, "the investigation will center on two main issues: the alleged bribery of politicians and the military's reasons for paying three times more for the Pandur than Portugal, which also purchased" the same armoured vehicle. Denmark The Ministry of Defense makes the acquisition, but the Ministry of Business and Growth monitors the offset implementations stipulated in the individual ICCs (Industrial Cooperation Contracts). The Guidelines for Industrial Cooperation were issued in 2014 and replaced the former offset law, which had been rejected by the European Commission. The threshold is DKK 50 million, and the offset requirement varies by project, up to 100%. Multipliers up to 8 can be considered for R&D and technology or financial transfers. Denmark signed a trilateral agreement with the UK and the Netherlands on "best practice for the application of abatements in offset" regarding swaps of offset obligations. Estonia No offset law. Estonia is particularly interested in counter-trade. Finland No law, only public guidelines on Industrial Participation.
The Ministry of Defense (Defense Materiel, Industry and Industrial Participation) is in charge, together with the Ministry of Trade and Industry. The minimum contract value for offsets is EUR 10 million. The minimum offset requirement is 100%. Multipliers are between 0.3 and 3.0 (for Finnish exports). Technology transfer multipliers are negotiated. Finland's focus is on its domestic defense industry. France No formal offset policy, but there are counter-trade and offset departments in the Ministry of Economic Affairs and in the Ministry of Defense. However, France, like the U.S., is almost completely self-sufficient in its own military needs and makes minimal weapons procurements from foreign countries. Germany The German official position is that offset arrangements are economically counterproductive in defense trade. However, Germany applies a policy of "industrial balances," based on 100% of the contract value. The German Federal Ministry of Defense (BMVg) and the Federal Office of Defense Technology and Procurement (Das Bundesamt für Wehrtechnik und Beschaffung, BWB, an acquisition agency) are in charge of procurement and cooperation. The Agency has a branch office for the U.S. and Canada in Reston, Virginia. It is worth noting that Germany, while being the world's third-largest weapons exporter, does not have huge "defense corporations", that is, large companies whose core business is weapons production, but rather civilian companies that produce weapons in addition to their main business. WT - Wehr Technik is a source of information on BWB activities. Germany has an 11% share of global defense exports and, according to SIPRI data, German weapons exports doubled between 2004 and 2009. Greece Offset regulation is in the official Procurement Law, 3433/2006. The Hellenic Ministry of National Defense is in charge through the General Armaments Directorate (GAD) and the Division of Offsets (DO). The threshold for offset requests is EUR 10 million. The minimum offset requirement is between 80 and 120%. Multipliers are from 1 to 10. Greece does not accept indirect offsets, since it is focused on the strengthening of its military capabilities. According to SIPRI data for 2006–2010, Greece is the world's 5th-largest weapons importer, with a 4% global share, about half of India's (9%) and two-thirds of China's (6%). It is worth noting that Chinese GDP is about 20 times bigger than Greece's nominal GDP. Greece is the major EU importer of weapons and the first U.S. defense client in the European Union. According to K. Vasileios, editor of EPICOS, "currently, there are 122 open offset contracts that were signed between 1997 and 2010 but have not been executed due to various issues." Hungary The legal basis for offsets is Government Decree 228/2004, and the offset authority is the Ministry for National Development and Economy (Directive No. 23/2008). The threshold is HUF 1 billion (about 3.5 million euro), with a minimum offset requirement of 100%. Multipliers can go up to 15. The confidentiality clause on offsets is essentially commercial, as in a normal non-disclosure agreement. Japan No formal offset policy. The Japan Defense Agency (JDA) depended directly on the Prime Minister and was in charge of defense procurement, through the Bureau of Equipment and the Bureau of Finance. However, in 2007 the JDA was transformed into a full Ministry of Defense, with a minister taking part in cabinet-level decisions. Most Japanese defense imports are from the U.S. and are regulated by bilateral agreements. 
The majority of defense bidding goes through the representation of Japanese trading companies, though direct bidding is theoretically possible. There is an unspoken policy among the largest Japanese companies that is understood as a "Buy Japanese" products policy. Two remarks: Japan, Germany and Italy, in spite of evident cultural differences, have similar political attitudes toward "balancing" foreign weapons procurements. But Japan, first in the name of political principles and then under official laws, restrained its weapons exports, and since the 1970s the Japanese defense industry has confined itself to the domestic market. India The government introduced a Defense Procurement Procedure in 2005, revised in 2006, 2008, 2011, 2013 and, most recently, 2016. A new round of amendments is underway (as of October 2018). The offset policy requires foreign suppliers to spend at least 30% of the contract value in India. The offset limit has now been increased from Rs 300 crore to Rs 2,000 crore. Israel The Ministry of Industry, Trade and Labor is in charge of offset policy and implementation. The threshold is US$100,000. The minimum offset request is 35%, and multipliers are either 1 or 2. The main point about Israel and its offset policy is the fact that Israel has long been the largest beneficiary of United States Foreign Military Financing (FMF), getting more than 50% of the entire available U.S. FMF budget. This condition limits Israeli offset requests to the U.S. Italy There is no public law on offsets. There is not even an official name for the offset policy. The public position is that Italy has no general offset policy, just ad hoc (offset) policies. The National Armament Directorate of the Ministry of Defense is in charge of offsets, while specific internal commissions of each branch of the Italian Armed Forces (Aviation, Army, and Navy) monitor offset fulfillment. The threshold for offset is 5 million euro. The minimum offset request is 70%, but generally goes up to 100%. The highest multiplier is 3. The focus is on export opportunities for Italian defense companies. There is no website or web page for this nameless offset program; the closest is the Ministry of Defense website. Kuwait New guidelines for the Kuwait Offset Program were published in 2007, following a Ministry of Finance directive that covers all foreign procurements, both military and non-military. The National Offset Company (NOC) is a state-owned company, and its activities in the Kuwait Offset Program are on behalf of the Ministry of Finance. The offset commitment is still 35% of the monetary value of military contracts. The threshold is KD 3 million (KD 10 million for civilian contracts). Since 2007 there have been fundamental changes making offset requirements more effective and complex, including the multiplier system, with more attention to tangible benefits for Kuwait. Lithuania Resolution No. 918/03 of the Government of the Republic of Lithuania (15-7-03). The Ministry of Economy is in charge of offsets. The threshold is LTL 5 million, about 1.5 million euro. The minimum offset requirement is 100%. Multipliers are between 1 and 5. Morocco Morocco has no offset policy yet. The only existing offset agreement was signed with Alstom, which won the contract to build Morocco's high-speed train system and is also responsible for the construction of tram networks in Rabat and Casablanca. 
In January the company signed an agreement with the government that will see it establish a local production base for cabling and electronic components, creating 5000 jobs over 10 years, as well as establishing a rail sector training institute. Alstom also signalled its intention to step up purchases from Moroccan suppliers and service providers (such as local back-office offshoring services), for use in projects in other countries as well. To stimulate its industry, the government is working on a new procurement offset (compensation industrielle) policy. Under the policy reportedly being developed by the minister for industry and trade, foreign companies that win government tenders worth more than Dh200m (€17.69m) will be obliged to carry out local investments and purchases worth at least 50% of the value of the contracts. The measure is intended to boost the local industrial sector by ensuring that foreign companies invest locally, use local subcontractors and locally made products, and transfer technology to the country. The policy will not affect US companies, as the 2004 US-Morocco free trade agreement bars the imposition of such requirements on US firms. Since her election in 2012 as President of the CGEM, Miriem Bensaleh Chaqroun has been lobbying for the creation of a legal framework making it mandatory for state-controlled companies, offices and local authorities to sign offset agreements. This led to the creation of the commission compensation industrielle et accès aux marchés publics in 2012, which was headed by the former CEO of the CGEM, Mehdi El Idrissi. In 2014 the commission published a Guide de la compensation industrielle to explain the concept of offset agreements. In 2015 the CGEM made a series of propositions to the government, including the creation of an offset agency. The role of this agency would be: to be the sole interlocutor in the development of offset contracts; to guide foreign businesses to feasible projects in collaboration with Moroccan companies and sectors, in order to promote and encourage industrial activities based on the industrial policy and the competitive advantages of the country; to measure the chances of success of projects and help develop contracts that will bind the foreign company to the Moroccan suppliers; and to monitor and evaluate the implementation of offset projects. Netherlands The Ministry of Economic Affairs - Commissariat for Military Production (CMP) is in charge of offset policy and implementation (following a protocol of agreement with the Ministry of Defense). The threshold for offset is EUR 5 million. The minimum offset requirement is 100%. Multipliers are between 1 and 5. The focus is on innovation and marketing support, and it is directed by the Ministry of Economic Affairs. Guidelines to an Industrial Benefits and Offsets Program in the Netherlands are available. Norway The Norwegian Ministry of Defense has responsibility for Industrial Cooperation Agreements (ICA) and for the supervision of the agreements during their implementation. The offset threshold is usually a contract value of NOK 50 million (about 5.5 million euro). The required offset quantity is 100% of the contract value and multipliers are from 1 to 5. Note: Norway is not part of the European Union, but has joined the European Defense Agency with no voting rights. The guidelines on procurement and offsets can be found online. Poland The Ministry of Economy is in charge of offsets. 
The Polish Offset Law was issued in 1999; offset regulations were approved in 1996 and revised in 2007. The threshold value for offsets is 5 million euro and the request is for 100% offset. Multipliers are between 2.0 and 5.0. Direct offsets for the country's defense industry and the opening of new export markets are Polish priorities. In 2003 Lockheed Martin sold 48 F-16 fighters to Poland for 3.5 billion USD via FMS. The program is called Peace Sky, and it is financed with U.S. FMF of 3.8 billion USD over 15 years. Lockheed Martin offered an offset package to Poland of US$6 billion in U.S. business investments. Polish officials called the agreement "the deal of the century." The value of the offset is 170% of the contract price. The Baltimore Sun reported that "such side deals have long been criticized in Washington as a form of kickback that defies the natural forces of free trade, including when Duncan Hunter called offsets "economic bribes" during a hearing on Capitol Hill this summer (2004)." Rick Kirkland, a Lockheed Martin V.P., replied: "It's part of the price of international business, if we couldn't offer them an acceptable package of offsets, they wouldn't be buying an American airplane. It's that simple." Portugal The Defense Minister's directive on Contrapartidas (offsets) was issued in 2002. Decree-Laws 153/2006 and 154/2006 regulate Portuguese Contrapartidas. The Permanent Commission on Offsets (CPC) is a government agency, which depends on the Ministries of Defense and of Economy, and is in charge of negotiating and supervising offsets. The threshold is 10 million euro and the minimum offset request is 100%. Multipliers were set between 1 and 5 in 2006. There is no preference with regard to direct or indirect offsets. In 2005 the Portuguese government signed a deal worth 364 million euro to acquire 260 Pandur II armored vehicles from General Dynamics. The Portuguese Pandur II includes an associated offset agreement with a value of 516 million euro. Patria, the only competitor of General Dynamics, was excluded on technical grounds in 2004. Patria, unusually, appealed the Portuguese Government's decision in court, complaining that General Dynamics's offset package, decisive for the award of the contract, was a fake. The Portuguese tender was the first of many in a European commercial war between Patria and General Dynamics, fought almost exclusively over offsets, which ended in 2008 with the arrest of Jorma Witakorpi, Patria's CEO, over the Patria case in Slovenia. Qatar Qatar has no official offset policy, but foreign defense companies involved with the Qatar Ministry of Defense are encouraged to invest and to build partnerships in R&D and education in Qatar. Romania The Ministry of National Defense and an Agency for Special Offset Techniques are in charge, and Law 336/2007 regulates offsets. Romania requests offsets for defense purchases above 3 million euro, and the minimum amount of offset proposals is 80% of the contract value. Multipliers can go up to 5. Indirect offsets are accepted, especially in ecology and shipbuilding. Romania is the only EU country that did not sign the EU Code of Conduct on Offsets (July 2009). Saudi Arabia The Saudi Economic Offset Program is under the Deputy Minister of Defense. The Saudi offset request is that 35% of the contract value be invested in Saudi job creation and training, economic diversification, technology transfer and foreign direct investment in general. The threshold is 400 million Saudi riyals (US$107 million). 
The UK and France have established bilateral offset programs with Saudi Arabia. The UK Al Yamamah Economic Offset Program (I, II and III) is the most complex and longest-running program; it began in 1987 and remains in effect. The French offset program is directed by the Societe Francaise d'Exportation de Systemes Avances (SOFRESA), a private company operating on behalf of the French government. The U.S., in spite of the fact that most of its defense sales to the Kingdom are U.S. Defense Department Foreign Military Sales, leaves offsets to private contractors such as Lockheed Martin, SAIC, Boeing, and General Dynamics. Foreign direct investments are authorized and supervised by SAGIA, and they receive high multipliers according to the most strategic sectors and the Kingdom's priorities (such as water, electricity, communications, etc.). The Saudi offset market has enormous significance for the Saudi non-oil economy, since Saudi Arabia spends about 10% of its GDP on defense procurement. Slovakia The Ministry of Economy is the governing body for offsets. The threshold (not clearly established) can go down to 130,000 euro. The amount of the offset proposal is negotiable, but is usually equivalent to 100% of the contract value. Higher multipliers apply to direct offsets. Slovenia The Ministry of Defense is in charge, and offset guidelines were issued in 2000. The threshold is around 500,000 euro, and the offset request is 100% of the contract value. Multipliers go from 1 to 7. Foreign direct investments and technology transfers have the highest multipliers. The 2006 purchase of 135 Patria AMV infantry fighting vehicles is the largest in the history of Slovenian military procurement (278 million euro, deliveries 2007–2013), and the Patria case is the political controversy over claims of bribery of Slovenian officials by the Finnish company Patria. According to Jorma Witakorpi, Patria's CEO at the time of the sale, a kickback of about 21 million euro (7.5% of the total contract price) was paid to decision makers, Slovenian politicians and military officials, to further the sale. The Patria AMV offset agreement has direct offsets for 30% of the value of the contract (co-production of the AMV in Slovenia) and 70% indirect offsets, mostly export assistance for Slovenian companies. South Korea The Defense Acquisition Program Administration (DAPA) is in charge of the country's offset policy, which was published in 2008. The threshold is US$10 million. The minimum required offset is 10% for non-commercial trade and 50% for commercial trade. Multipliers are between 1 and 6. The contractual Memorandum of Understanding on offsets is a substantial part of the main contract. Spain The Ministry of Defense - General Direction of Armaments and Material (DGAM) - Industrial Cooperation Agency of Spain (ICA) is responsible for the negotiation and supervision of offsets. The guidelines for offsets are not public, but are issued by the Minister of Defense through internal and confidential procedures. The general request is 100% of the contract value. Multipliers are between 2 and 5. Sweden The offset policy was issued by the government in 1999. The Industrial Participation program is directed by the Ministry of Defense's Defence Materiel Administration (FMV), and offset guidelines were issued in 2002. The contract value offset threshold is about 10 million euro. The request for offset is 100%. Multipliers can be applied only to 10% of the total offset value. Only defense-related offsets (direct offsets) are accepted, since Sweden applies Art. 
346 of the Treaty of Lisbon. Switzerland The Federal Department of Defence, Civil Protection and Sport, through its division "armasuisse", is in charge of offsets. The threshold for offset requests is 15 million Swiss francs. The minimum offset is 100%, and multipliers are between 2 and 3. Turkey The Ministry of Defense, through an undersecretary for the Defense Industries, is in charge of the Industrial Participation / Offset Directive (2007). The threshold is about US$5 million. The minimum required offset is 50%. Multipliers are between 1 and 6. The offset fulfillment time is 2 years, which is unusually short. Turkey is mostly interested in direct offsets such as technology transfer and license production for the development of the Turkish defense industry. United Arab Emirates The United Arab Emirates Offset Program Bureau (OFFSET) is in charge of offsets; the chairman of the bureau is the Crown Prince of Abu Dhabi. The criteria are more sophisticated than those of most offset policies. The request for offset is 60% of the contract value. The offset credit is not evaluated on the investment itself, but on the profit over time of an offset venture (a kind of ex post multiplier). Joint ventures in the UAE and partnerships with local companies are the most common offset proposals, as both direct and indirect offsets. United Kingdom No official policy. The Ministry of Defence is in charge, but offsets go through UKTI, UK Trade & Investment, under the Minister of State for Trade, Investment and Business. In 2007 the Prime Minister announced a change, transferring responsibility for defense trade from the Defence Export Services Organization (DESO) to UK Trade and Investment (UKTI). Since April 2008 UKTI DSO (Defence & Security Organization) has had responsibility for supporting both defense and security exports. The general threshold is 10 million GBP, but through bilateral agreements with Germany and France it has been reciprocally set at 50 million GBP. The offset is generally around 100%, with no multipliers. United States The U.S. is formally against offsets. To date, the U.S. is the only country that prohibits government officials and employees, as well as government agencies, from getting involved in any offset business. The U.S. depends on foreign defense "prime" contractors for less than 2% of its defense procurement. However, many countries consider the Buy American Act practically equivalent to the offset policies of other countries. Criticism of offset agreements While offsets are widely practiced, some, such as the US government, consider such agreements to be "market distorting and inefficient". On April 16, 1990, a US Presidential Policy statement was released, stating that "the decision whether to engage in offsets [...] resides with the companies involved" and that "no agency of the U.S. Government shall encourage, enter directly into, or commit U.S. firms to any offset arrangement in connection with the sale of defense goods or services for foreign governments." EU position on defense offsets The most recent common European Union quasi-agreement on defense offsets is The Code of Conduct on Offsets, signed by all EU countries (with the exception of Romania and Denmark) in October 2008. The primary purpose of the voluntary and non-binding Code is to promote a "European Defense Technological and Industrial Base" and to outline a road map for arriving at a complete elimination of offset practices within the domestic EU market. 
In other words, the goal is to open the EU defense and security market to competitive bids and to overcome the competition restrictions of the EU Treaties of Rome and Lisbon (art. 346). The ideal goal is "competition in the EU Defense Market" and "Government-to-Government off-the-shelf sales." The realistic target is humbler, though: to self-restrain and limit the offset quantity to 100% of the contract value. The actual situation in the EU is described in detail in a study on defense offsets in the Union's countries, commissioned by the European Defense Agency and published in 2007. According to this study, the volume of EU offset agreements in 2006 was above 4–5 billion euro. These offsets fall into three categories: direct offsets, military indirect offsets, and civilian indirect offsets. European policy on offsets is still regulated by the Treaty establishing the European Community: Art. 223 of the Treaty of Rome (1958), then Article 296 of the EU Treaty of Amsterdam (1999), and, since December 2009, Article 346 of the Treaty of Lisbon protect member states' weapons production and trade from the competition rules of the common European market. In spite of 50 years of European history, Article 223 (Rome) and Article 346 (Lisbon) are practically identical. Today the hinge of EU policy on offsets is still the same article, that is, Art. 346 of the Lisbon Treaty. This article preserves each nation's right to state secrecy concerning its own security and its military production and procurement. This is the relevant part of Article 346: "any Member State may take such measures as it considers necessary for the protection of the essential interests of its security which are connected with the production of or trade in arms, munitions and war material; such measures shall not adversely affect the conditions of competition in the internal market regarding products which are not intended for specifically military purposes." The first part of the article states that the European Union has no authority over national policies and decisions on defense/security choices. In other words, the EU has no say about a domestic preference for homemade planes or tanks, or about preferred military offset choices. The second part, however, asserts a principle shared by all EU states regarding non-military/indirect offsets: the EU reserves its right to supervise and regulate the effects of indirect, non-military offsets, so that they do not "adversely affect the conditions of competition" in the internal common EU market. Any civilian indirect offset has distorting effects in the common market, and this distortion is amplified by ignorance of specific offset agreements outside the circle of defense contractors and national authorities. The U.S. started monitoring the adverse effects of offsets in the United States when a small paper-making equipment company in Wisconsin (Beloit Corporation) got into trouble without understanding the hidden cause: an indirect offset by Northrop (now Northrop Grumman) with the Finnish Ministry of Defense. Only a concerned Wisconsin politician, Sen. Russell D. Feingold, discovered the real reasons in 1992, after being informed that a tender for the supply of machinery worth about 50 million USD was awarded not to the Wisconsin company but to a Finnish company (Valmet Corporation) as part of an offset deal with the Finnish Government. This U.S. offset story brought to light the issue of the impact of confidential agreements by defense companies on U.S. non-military business, in some instances with devastating effects. Feingold's discovery is enlightening for the EU common market as well, where interference and adverse impacts on EU companies are allowed by an unjustified national attitude of confidentiality or secrecy on indirect, non-military offset deals. Art. 
346 of the Lisbon Treaty, written more than 50 years ago, is there, wisely, to avoid disruptive effects caused by unjustified military secrecy in civilian offsets in the common European market. In the EU defense market, approximately $250B in size, with 27 sovereign state authorities that can claim state secrecy (from Germany to Cyprus and Luxembourg), there is a potential for indirect non-military offsets of $60B, that is, more than 1,000 times the distortion problem caused by Northrop (and the Finnish Ministry of Defense) to Beloit. Offset associations and publications The two main global organizations that deal with offsets are: G.O.C.A. - Global Offset and Countertrade Association, a main source of information on the uses of counter-trade and offset; and DMA - Defence Manufacturers Association, based in the UK but open to many other countries. One domestic U.S.-based organization that deals with offsets is DIOA - Defense Industrial Offset Association, whose membership is primarily open to U.S.-based defense contractors. In Europe, ECCO (European Club for Countertrade and Offset) hosts two symposiums a year and has started publishing various volumes that explain offsets in various domains, such as finance, ethics, the economy, and international law. There are many regional or national offset conferences and symposiums, and recently GOCA and DMA have jointly organized global offset meetings every two years. The first global meeting on offsets took place in 2004 in Sintra, Portugal; the second in Athens, Greece (2006); and the third in Seville, Spain (2008). GOCA and DIOA hold both individual and joint conferences several times per year. Countertrade & Offset is a fortnightly magazine on the offset industry; the same publisher also has a quarterly for the industry: The Offset Guidelines Quarterly Bulletin. A thesis that focuses on offsets in the European Union and Directive 2009/81/EC can be downloaded from www.furterdefence.com Bibliography Arrowsmith, S. (2003) Government Procurement in the WTO (Kluwer Law International: London) Axelson, M. with James, A. (2000) The Defense Industry and Globalization, FOA-R-00-01698-179-SE (Stockholm: Division of Defence Analysis) Brauer, J and Dunne, P, (2004). Arms Trade and Economic Development, Theory, policy and cases in arms offsets, Routledge, London. Correa, Gilberto Mohr. Resultados da Política de Offset da Aeronáutica: Incremento nas Capacidades Tecnológicas das Organizações do Setor Aeroespacial Brasileiro. 2017. 152f. Dissertação de mestrado em Ciências e Tecnologias Espaciais, Área Gestão Tecnológica – Instituto Tecnológico de Aeronáutica, São José dos Campos Gallart, JM (1996). From offsets to industrial co-operation: Spain's changing strategies as an arms importer, in Martin, S (ed), The Economics of Offsets. Hall, P. and Markowski, S (1994). On the normality and abnormality of offset obligations, Defence Economics, 5, 3, 173–188. Hartley, K, (1983), NATO Arms Co-operation, Allen and Unwin, London Ianakiev, Gueorgui & Nickolay E. Mladenov (2008). "Offset Policies in Defence Procurement: Lessons for the European Defence Equipment Market", Défense nationale et sécurité collective, "Hors Série: Les marchés publics de défense". Ianakiev, Gueorgui (2014), "Defence Offsets: Regulation and Impact on the Integration of the European Defence Equipment Market", in Bellais, Renaud (ed) (2014), "The Evolving Boundaries of Defence: An Assessment of Recent Shifts in Defence Activities", Emerald Group Publishing Limited. Kaushal, V and Behera, L. 
(2013) Defence Acquisition: International Best Practices, Pentagon Press . Magahy, B, Vilhena da Cunha, F., Pyman, M., Defence Offsets: Addressing The Risks Of Corruption & Raising Transparency, Transparency International, (J. Muravska and A. Wegener eds.), 2010 Matthews, R. 2002. Saudi Arabia: Defense Offsets and Development, chapter 8 in J. Brauer and J.P. Dunne, The Arms Industry in Developing Nations: History and Post-Cold War Assessment. London: Palgrave. Sandler, T, Hartley, K eds, Handbook of Defense Economics, Elsevier, Amsterdam. Vol. 1, 1995, ; Vol. 2, Defense in a Globalized World 2007, . Hartley, K (2004). Offsets and the Joint Strike Fighter in the UK and The Netherlands, in Brauer and Dunne, eds, Arms Trade and Economic Development, Martin, S, ed.,(1996). The Economics of Offsets, Harwood, London. Nackman, M. A Critical Examination Of Offsets In International Defense Procurements: Policy Options For The United States, Public Contract Law Journal Vol. 40, No. 2 Winter 2011 Reich, A. (1999) International Public Procurement Law: The Evolution of International Regimes on Public Purchasing (Kluwer Law International: London) Russin, Richard J., Offsets in International Military Procurement, Public Contract Law Journal. Fall 1994 Taylor, T (2004). Using procurement as a development strategy, in Brauer, J and Dunne, P (eds) Udis, B and Maskus, KE (1991). Offsets as industrial policy: Lessons from aerospace, Defence Economics, 2,2, 151–164. U.S. DoC (2007). Offsets in Defense Trade, Eleventh Report to Congress, U.S. Department of Commerce, Washington, DC. U.S. DoC (2011). Offsets in Defense Trade, Fifteenth Report to Congress, U.S. Department of Commerce, Washington, DC Welt, L., Wilson D.(1998) Offsets in the Middle East, Middle East Policy, Vol. 6, No. 2, pp. 36–53 See also Agreement on Government Procurement Al Yamamah Arms industry Canadian Arms trade Counter trade Defense contractor Defense Security Cooperation Agency European Defence Agency List of countries by military expenditures List of United States defense contractors Offset loan Patria case Portuguese Pandur Quid pro quo Trade pact References External links The European Club for Countertrade and Offsets (ECCO) Deutsches Kompensations Forum EPICOS Industrial Cooperation and Offsets Global Offset and Countertrade Association (G.O.C.A.) Countertrade & Offset - CTO Offset Market Exchange (OMX) Bureau of Industry and Security Commercial treaties Military acquisition Aerospace Protectionism Arms industry fr:Compensation industrielle
Offset agreement
[ "Physics" ]
13,365
[ "Spacetime", "Space", "Aerospace" ]
19,690,563
https://en.wikipedia.org/wiki/Stichodactyla%20toxin
Stichodactyla toxin (ShK, ShkT) is a 35-residue basic peptide from the sea anemone Stichodactyla helianthus that blocks a number of potassium channels. Related peptides form a conserved family of protein domains known as the ShkT domain. Another well-studied toxin of the family is BgK from Bunodosoma granulifera. An analogue of ShK called ShK-186 or Dalazatide is in human trials as a therapeutic for autoimmune diseases. History Stichodactyla helianthus is a species of sea anemone (Phylum: Cnidaria) belonging to the family Stichodactylidae. Helianthus comes from the Greek words helios meaning sun, and anthos meaning flower, which corresponds to the species' common name "sun anemone". It is sessile and uses potent neurotoxins for defense against its primary predator, the spiny lobster. The venom contains, among other components, numerous ion channel-blocking peptides. In 1995, a group led by Olga Castaneda and Evert Karlsson isolated ShK, a potassium channel-blocking 35-residue peptide from S. helianthus. The same year, William Kem and his collaborator Michael Pennington synthesized and folded ShK, and showed it blocked neuronal and lymphocyte voltage-dependent potassium channels. In 1996, Ray Norton determined the three-dimensional structure of ShK. In 2005–2006, George Chandy, Christine Beeton and Michael Pennington developed ShK-170 and ShK-186 (ShK-L5), selective blockers of Kv1.3. ShK-186, now called Dalazatide, was advanced to human trials in 2015-2017 by Shawn Iadonato and Eric Tarcha, as the first-in-man Kv1.3 blocker for autoimmune disease. Structure ShK is cross-linked by three disulfide bridges: Cys3-Cys35, Cys12-Cys28, and Cys17-Cys32. The solution structure of ShK reveals two short α-helices comprising residues 14-19 and 21–24; the N-terminal eight residues adopt an extended conformation, followed by a pair of interlocking turns that resemble a 310 helix; the C-terminal Cys35 residue forms a nearly head-to-tail cyclic structure through a disulfide bond with Cys3. Phylogenetic relationships of ShK and ShK domains The SMART database at the EMBL, as of May 2018, lists 3345 protein domains with structural resemblance to ShK in 1797 proteins (1 to 8 domains/protein), many in the worm Caenorhabditis elegans and venomous snakes. The majority of these domains are in metallopeptidases, whereas others are in prolyl 4-hydroxylases, tyrosinases, peroxidases, oxidoreductases, or proteins containing epidermal growth factor-like domains, thrombospondin-type repeats, or trypsin-like serine protease domains. The only human proteins containing ShK-like domains are MMP-23 (matrix metalloprotease 23) and MFAP-2 (microfibril-associated glycoprotein 2). Channel targets The ShK peptide blocks potassium (K+) ion channels Kv1.1, Kv1.3, Kv1.6, Kv3.2 and KCa3.1 with nanomolar to picomolar potency, and has no effect on the HERG (Kv11.1) cardiac potassium channel. The neuronal Kv1.1 channel and the T lymphocyte Kv1.3 channel are most potently inhibited by ShK. Binding configuration in K+ channels ShK and its analogues are blockers of the channel pore. They bind to all four subunits in the K+ channel tetramer by interacting with the shallow 'vestibule' at the outer entrance to the channel pore. These peptides are anchored in the external vestibule by two key interactions. The first is Lys22, which protrudes into and occludes the channel's pore like a "cork in a bottle" and blocks the passage of potassium ions through the channel pore. 
The second is the neighboring Tyr23, which together with Lys22 forms a “functional dyad” required for channel block. Many K+ channel-blocking peptides contain such a dyad of a lysine and a neighboring aromatic or aliphatic residue. Some K+ channel-blocking peptides lack the functional dyad, but even in these peptides a lysine physically blocks the channel, regardless of the position of the lysine in the peptide sequence. Additional interactions anchor ShK and its analogues in the external vestibule and contribute to potency and selectivity. For example, Arg11 and Arg29 in ShK interact with two Asp386 residues in adjacent subunits in the mouse Kv1.3 external vestibule (corresponds to Asp433 in human Kv1.3). Analogues that block the Kv1.3 channel Several ShK analogues have been generated to enhance specificity for the Kv1.3 channel over the neuronal Kv1.1 channel and other closely related channels. ShK-Dap22: This was the first analogue that showed some degree of specificity for Kv1.3. The pore-occluding lysine22 of ShK is replaced by diaminopropionic acid (Dap) in ShK-Dap22. Dap is a non-natural lysine analogue with a shorter side chain length (2.5 Å from Cα) than lysine (6.3 Å). Dap22 interacts with residues further out in the external vestibule in contrast to lysine22, which interacts with the channel's selectivity filter. As a consequence, the orientations of ShK and ShK-Dap22 in the external vestibule are significantly different. ShK-Dap22 exhibits >20-fold selectivity for Kv1.3 over closely related channels in whole-cell patch clamp experiments, but in equilibrium binding assays it binds Kv1.1-Kv1.2 heterotetramers with almost the same potency as ShK, which is not predicted from the study of homotetrameric Kv1.1 or Kv1.2 channels.ShK-F6CA: Attaching a fluorescein to the N-terminus of the peptide via a hydrophilic AEEA linker (2-aminoethoxy-2-ethoxy acetic acid; mini-PEG) resulted in a peptide, ShK-F6CA (fluorescein-6-carboxyl), with 100-fold specificity for Kv1.3 over Kv1.1 and related channels. Attachment of a tetramethylrhodamine or a biotin via the AEEA linker to ShK's N-terminus did not increase specificity for Kv1.3 over Kv1.1. The enhanced specificity of ShK-F6CA might be explained by differences in charge: F6CA is negatively charged; tetramethylrhodamine is positively charged; and biotin is neutral. Subsequent studies with other analogues suggest that the negatively charged F6CA likely interacts with residues on the turret of the Kv1.3 channel as shown for ShK-192 and ShK-EWSS. ShK-170, ShK-186, ShK-192 and ShK-EWSS: Based on ShK-F6CA, additional analogues were made. Attaching a L-phosphotyrosine to the N-terminus of ShK via an AEEA linker resulted in a peptide, ShK-170, with 100-1000-fold specificity for Kv1.3 over related channels. ShK-186 (a.k.a. SL5; a.k.a. Dalazatide) is identical to ShK-170 except the C-terminal carboxyl is replaced by an amide. ShK-186 blocks Kv1.3 with an IC50 of 69 pM and exhibits the same specificity for Kv1.3 over closely related channels as ShK-170. The L-phosphotyrosine of ShK-170 and ShK-186 rapidly gets dephosphorylated in vivo generating an analogue, ShK-198, with reduced specificity for Kv1.3. To overcome this problem, ShK-192 and ShK-EWSS were developed. In ShK-192, the N-terminal L-phosphotyrosine is replaced by a non-hydrolyzable para-phosphonophenylalanine (Ppa), and Met21 is replaced by the non-natural amino acid norleucine to avoid methionine oxidation. 
In ShK-EWSS, the AEEA linker and L-phosphotyrosine are replaced by the residues glutamic acid (E), tryptophan (W) and two serines (S). Both ShK-192 and ShK-EWSS are highly specific for Kv1.3 over related channels. ShK-K18A: Docking and molecular dynamics simulations on Kv1.3 and Kv1.1 followed by umbrella sampling simulations, paved the way to the selective Kv1.3 inhibitor ShK-K18A. ShK-related peptides in parasitic worms: AcK1, a 51-residue peptide from hookworms Ancylostoma caninum and Ancylostoma ceylanicum, and BmK1, the C-terminal domain of a metalloprotease from filarial worm Brugia malayi, adopt helical structures closely resembling ShK. AcK1 and BmK1 block Kv1.3 channels at nanomolar-micromolar concentrations, and they suppress rat effector memory T cells without affecting naïve and central memory T cell subsets. Further, they suppress IFN-g production by human T cells and they inhibit the Delayed-type hypersensitivity response caused by skin-homing effector memory T cells. Teladorsagia circumcincta is an economically important parasite that infects sheep and goats. TcK6, a 90-residue protein with a C-terminal ShK-related domain, is upregulated during the mucosal dwelling larval stage of this parasite. TcK6 causes modest suppression of thapsigargin-triggered IFN-g production by sheep T cells, suggesting that the parasite use this protein for immune evasion by modulating mucosal T cells. Extending circulating half-life Due to their low molecular mass, ShK and its analogues are prone to rapid renal elimination. In rats, the half-life is ~6 min for ShK-186 and ~11 min for ShK-198, with a clearance rate of ~950 ml/kg·min. In monkeys, the half-life is ~12 min for ShK-186 and ~46 min for ShK-198, with a clearance rate of ~80 ml/kg·min. PEGylation of ShK: Conjugation of polyethylene glycol (PEG) to ShK[Q16K], an ShK analogue, increased its molecular mass and thereby reduced renal clearance and extended plasma half-life to 15 h in mice and 64 h in cynomolgus monkeys. PEGylation can also decrease immunogenicity and protect a peptide from proteolysis and non-specific adsorption to inert surfaces. PEGylated ShK[Q16K] prevented adoptive-transfer experimental autoimmune encephalomyelitis in rats, a model for multiple sclerosis. Conjugation of ShK to larger proteins: The circulating half-life of peptides can be prolonged by coupling them to larger proteins or protein domains. By screening a combinatorial ShK peptide library, novel analogues were identified, which when fused to the C-termini of IgG1-Fc retained picomolar potency, effectively suppressed in vivo delayed type hypersensitivity and exhibited a prolonged circulating half-life. Prolonged effects despite rapid plasma clearance: SPECT/CT imaging studies with a 111In-DOTA-conjugate of ShK-186 in rats and squirrel monkeys revealed a slow release from the injection site and blood levels above the channel blocking dose for 2 and 7 days, respectively. Studies on human peripheral blood T cells showed that a brief exposure to ShK-186 was sufficient to suppress cytokine responses. These findings suggest that ShK-186, despite its short circulating half-life, may have a prolonged therapeutic effect. In rats, the peptide is effective in treating disease in animal models of autoimmune diseases when administered once a day to once in 3 days. In humans, subcutaneous injections twice a week are sufficient to ameliorate disease in patients with plaque psoriasis. 
Peptide delivery The low molecular mass of ShK and its analogues, combined with their high isoelectric points, makes it unlikely that these peptides will be absorbed from the stomach or intestine following oral administration. Sub-lingual delivery is a possibility. A fluorescent ShK analogue was absorbed into the blood stream at pharmacological concentrations following sublingual administration with a mucoadhesive chitosan-based gel, with or without the penetration enhancer cetrimide. Delivery of the peptide as an aerosol through the lung, or across the skin, or as eye drops are also possibilities. Modulation of T cell function During T cell-activation, calcium enters lymphocytes through store-operated CRAC channels (calcium release activated channel) formed as a complex of Orai and Stim proteins. The rise in intracellular calcium initiates a signaling cascade culminating in cytokine production and proliferation. The Kv1.3 K+ channel and the calcium-activated KCa3.1 K+ channel in T cells promote calcium entry into the cytoplasm through CRAC by providing a counterbalancing cation efflux. Blockade of Kv1.3 depolarizes the membrane potential of T cells, suppresses calcium signaling and IL-2 production, but not IL2-receptor expression. Kv1.3 blockers have no effect on activation pathways that are independent of a rise in intracellular calcium (e.g. anti-CD28, IL-2). Expression of the Kv1.3 and KCa3.1 channels varies during T cell activation and differentiation into memory T cells. When naïve T cells and central memory T cells (TCM) are activated they upregulate KCa3.1 expression to ~500 per cell without significant change in Kv1.3 numbers. In contrast, when terminally differentiated effector memory subsets (TEM, TEMRA [T effector memory re-expressing CD45RA]) are activated, they upregulate Kv1.3 to 1500 per cell without changes in KCa3.1. The Kv1.3 channel number increases and the KCa3.1 channel number decreases as T cells are chronically activated. As a result of this differential expression, blockers of KCa3.1 channels preferentially suppress the function of naïve and TCM cells, while ShK and its analogues that selectively inhibit Kv1.3 channels preferentially suppress the function of chronically activated effector memory T cells (TEM, TEMRA). Of special interest are the large number of ShK analogues developed at Amgen that suppressed interleukin-2 and interferon gamma production by T cells. This inhibitory effect of Kv1.3 blockers is partial and stimulation strength dependent, with reduced inhibitory efficacy on T cells under strengthened anti-CD3/CD28 stimulation. Chronically activated CD28null effector memory T cells are implicated in autoimmune diseases (e.g. lupus, Crohn's disease, rheumatoid arthritis, multiple sclerosis). Blockade of Kv1.3 channels in these chronically activated T cells suppresses calcium signaling, cytokine production (interferon gamma, interleukin-2, interleukin 17), and cell proliferation. Effector memory T cells that are CD28+ are refractory to suppression by Kv1.3 blockers when they are co-stimulated by anti-CD3 and anti-CD28 antibodies, but are sensitive to suppression when stimulated by anti-CD3 antibodies alone. In vivo, ShK-186 paralyzes effector-memory T cells at the site of an inflammatory delayed type hypersensitivity response and prevents these T cells from activating in the inflamed tissue. 
In contrast, ShK-186 does not affect the homing and motility of naive and TCM cells to and within lymph nodes, most likely because these cells express the KCa3.1 channel and are therefore protected from the effect of Kv1.3 blockade. Effects on microglia Kv1.3 plays an important role in microglial activation. ShK-223, an analogue of ShK-186, decreased lipopolysaccharide (LPS) induced focal adhesion formation by microglia, reversed LPS-induced inhibition of microglial migration, and inhibited LPS-induced upregulation of EH domain containing protein 1 (EHD1), a protein involved in microglia trafficking. Increased Kv1.3 expression was reported in microglia in Alzheimer plaques. Kv1.3 inhibitors may have use in the management of Alzheimer's disease, as reported in a proof-of-concept study in which a small molecule Kv1.3 blocker (PAP-1) alleviated Alzheimer's disease-like characteristics in a mouse model of AD. Efficacy of analogues in animal models of human diseases Experimental autoimmune encephalomyelitis (EAE), a model for multiple sclerosis ShK, ShK-Dap22, ShK-170 and PEGylated ShK-Q16K prevent adoptive-transfer EAE in Lewis rats, a model of multiple sclerosis. Since multiple sclerosis is a relapsing-remitting disease, ShK-186 and ShK-192 were evaluated in a relapsing-remitting EAE model in DA (Dark Agouti) rats. Both prevented and treated disease when administered once a day to once in three days. Thus, Kv1.3 inhibitors are effective in treating disease in rat models of multiple sclerosis when administered alone, and therapeutic effectiveness does not appear to be compromised by compensatory over-expression of KCa3.1 channels. Pristane-induced arthritis (PIA), a model for rheumatoid arthritis ShK-186 was effective in treating PIA when administered every day or on alternate days. A scorpion toxin inhibitor of KV1.3 was also effective in this model. In both these studies, blockade of Kv1.3 alone was sufficient to ameliorate disease and simultaneous blockade of KCa3.1 was not necessary as has been suggested. Rat models of atopic dermatitis Most infiltrating T-cells in skin lesions from patients with moderate-to-severe atopic dermatitis (AD) express high levels of Kv1.3, suggesting that inhibitors of Kv1.3 may be effective in treating AD. Ovalbumin-induced delayed type hypersensitivity and oxazolone-induced dermatitis are considered to be models of atopic dermatitis. ShK, ShK-170, ShK-186, ShK-192 and ShK-IgG-Fc were all effective in the ovalbumin-induced delayed type hypersensitivity model, while a topical formulation of ShK-198 was effective in treating oxazolone-induced dermatitis. Even where compensation by KCa3.1 channels was reported to over-ride KV1.3 block, ShK administered alone suppressed delayed type hypersensitivity significantly in 2 of 3 studies, albeit modestly. Psoriasis Psoriasis is a severe autoimmune disease of the skin that afflicts many people worldwide. Despite the success of recent biologics in ameliorating disease, there is still a search for safe and effective drugs for psoriasis. KV1.3 inhibitors (ShK, PAP-1) have been reported to treat disease in psoriasiform (psoriasis-like) SCID (severe combined immunodeficiency) mouse model. In a Phase 1b placebo-controlled clinical study in patients with plaque psoriasis, ShK-186 administered twice a week (30 or 60 mg/dose/patient) by subcutaneous injection caused improvements with a statistically significant reduction in their PASI (Psoriasis Area and Severity Index) score between baseline and day 32. 
These patients also exhibited reduced plasma levels of multiple inflammation markers and decreased expression of T cell activation markers on peripheral blood memory T cells. Diet-induced obesity and fatty liver disease Obesity and diabetes are major healthcare problems globally. There is a need for safe drugs for these metabolic diseases. In a mouse model of diet-induced obesity, ShK-186 counteracted the negative effects of increased caloric intake. It reduced weight gain, adiposity, and fatty liver; decreased blood levels of cholesterol, sugar, HbA1c, insulin, and leptin; and enhanced peripheral insulin sensitivity. Genetic deletion of the Kv1.3 gene has the same effect, indicating that ShK-186's effect is due to Kv1.3 blockade. At least two mechanisms contribute to ShK-186's therapeutic benefits. The high-calorie diet induced Kv1.3 expression in brown fat tissues. By blocking Kv1.3, ShK-186 doubled glucose uptake and increased β-oxidation of fatty acids, glycolysis, fatty acid synthesis and uncoupling protein 1 expression by brown fat. As a consequence of brown fat activation, oxygen consumption and energy expenditure were augmented. The obesity diet also induced Kv1.3 expression in the liver, and ShK-186 caused profound alterations in energy and lipid metabolism in the liver. ShK, its analogues or other Kv1.3 blockers may have use in controlling the negative consequences of high-calorie diets. Arousal and anesthesia The mechanisms of general anesthesia involve multiple molecular targets and pathways that are not completely understood. Sevoflurane is a common anesthetic used to induce general anesthesia during surgery. Rats continually exposed to sevoflurane lose their righting reflex as an index of loss of consciousness. In these rats, microinfusion of ShK into the central medial thalamic nucleus (CMT) reversed sevoflurane-induced anesthesia. ShK-treated rats righted themselves fully (restored consciousness) despite being continually exposed to sevoflurane. ShK microinfusion into neighboring regions of the brain did not have this effect. Sevoflurane enhanced potassium currents in the CMT, while ShK and ShK-186 countered this effect. These studies suggest that ShK-sensitive K+ channels in the CMT are important for suppressing arousal during anesthesia. Preventing brain damage following therapeutic brain radiation Brain radiation is used to treat tumors of the head, neck, and brain, but this treatment carries a significant risk of neurologic injury. Injury is, in part, due to the activation of microglia and microglia-mediated damage of neurons. Neuroprotective therapies for radiation-induced brain injury are still limited. In a mouse model of brain radiation, ShK-170 reversed neurological deficits and protected neurons from radiation-induced brain injury by suppressing microglia. Toxicity of ShK and its analogues ShK and ShK-Dap22 The ShK peptide has a low toxicity profile in mice. ShK is effective in treating autoimmune diseases at 10 to 100 μg/kg bodyweight. It has a median paralytic dose of approximately 25 mg/kg bodyweight (250–2,500 times higher than the pharmacological dose). In rats the therapeutic safety index is greater than 75-fold. ShK-Dap22 displayed a lower toxicity profile. A 1.0 mg dose did not induce any hyperactivity, seizures or mortality in rats. The median paralytic dose for ShK-Dap22 is about 200 mg/kg bodyweight (2,000–20,000 times higher than the pharmacological dose). PEGylated ShK[Q16K] showed no adverse toxicity in monkeys over a period of several months. 
ShK-186/Dalazatide ShK-186 also displays a low toxicity profile in rats. Daily administration of ShK-170 or ShK-186 (100 μg/kg/day) by subcutaneous injection over 4 weeks in rats does not induce any changes in blood counts, blood chemistry or histopathology. By virtue of suppressing only TEM and TEMRA cells, ShK-186 did not compromise protective immune responses to influenza virus and chlamydial infection in rats, most likely because naïve and TCM cells unaffected by Kv1.3 blockade mounted effective immune responses. ShK-186 is poorly immunogenic and did not elicit anti-ShK antibodies in rats repeatedly administered the peptide. This is possibly because the peptide's disulfide-bonded structure hinders processing and antigen presentation by antigen-presenting cells. ShK-186 also shares sequence and structural similarity to a ShK-like domain in matrix metalloprotease 23, which may cause the immune system to assume it is a normal protein in the body. ShK-186 was safe in non-human primates. In Phase 1a and 1b trials in healthy human volunteers, ShK-186 was well tolerated, no grade 3 or 4 adverse effects or laboratory abnormalities were noted, and the predicted range of drug exposures were achieved. The most common adverse events were temporary mild (Grade 1) hypoesthesia and paresthesia involving the hands, feet, or perioral area. Mild muscle spasms, sensitivity of teeth, and injection site pain were also observed. Functions of ShK-like proteins MMP-23 MMP-23 belongs to the family of zinc- and calcium-dependent matrix metalloproteases. It is anchored in the cell membrane by an N-terminal prodomain, and it contains three extracellular domains: catalytic metalloprotease domain, ShK domain and immunoglobulin-like cell adhesion molecule (Ig-CaM) domain. The prodomain traps the voltage-gated potassium channel KV1.3, but not the closely related KV1.2 channel, in the endoplasmic reticulum. Studies with chimeras suggest that the prodomain interacts with the KV1.3 region from the S5 transmembrane segment to the C terminus. NMR studies of the prodomain reveal a single trans-membrane alpha-helix, joined by a short linker to a juxta-membrane alpha-helix, which is associated with the surface of the membrane. The prodomain shares topological similarity with proteins (KCNE1, KCNE2, KCNE4) known to trap potassium channels in the secretory pathway, suggesting a shared mechanism of channel regulation. MMP-23's catalytic domain displays structural homology with catalytic domains in other metalloproteases, and likely functions as an endopeptidase. MMP-23's ShK domain lies immediately after the catalytic domain and is connected to the IgCAM domain by a short proline-rich linker. It shares phylogenetic relatedness to sea anemone toxins and ICR-CRISP domains, being most similar to the BgK toxin from sea anemone Bunodosoma granulifera. This ShK domain blocks voltage-gated potassium channels (KV1.6 > KV1.3 > KV1.1 = KV3.2 > Kv1.4, in decreasing potency) in the nanomolar to low micromolar range. KV1.3 is required for sustaining calcium signaling during activation of human T cells. By trapping KV1.3 in the endoplasmic reticulum via the prodomain, and by blocking the KV1.3 channel with the ShK domain, MMP-23 may serve as an immune checkpoint to reduce excessive T cell activation during an immune response. In support, increased expression of MMP-23 in melanoma cancer cells decreases tumor-infiltrating lymphocytes, and is associated with cancer recurrence and shorter periods of progression-free survival. 
However, in melanomas, expression of MMP-23 does not correlate with Kv1.3 expression, suggesting that MMP-23's deleterious effect in melanomas may not be connected with its Kv1.3 channel-modulating function. MMP-23's C-terminal IgCAM domain shares sequence similarity with IgCAM domains in proteins known to mediate protein-protein and protein-lipid interactions (e.g. CDON, human Brother of CDO, ROBO1-4, hemicentin, NCAM1 and NCAM2). In summary, the four domains of MMP-23 may work synergistically to modulate immune responses in vivo. Mab7 In male Caenorhabditis elegans worms, the absence of a protein called Mab7 results in malformed sensory rays that are required for mating. Introduction of Mab7 into these male worms restores normal development of the sensory rays. Introduction of Mab7 proteins lacking the ShK domain does not correct the sensory ray defect, suggesting a role for the ShK domain of Mab7 in sensory ray development. HMP2 and PMP1 HMP2 and PMP-1 are astacin metalloproteinases from the cnidarian Hydra vulgaris and the jellyfish Podocoryne carnea that contain ShK-like domains at their C-termini. Both these ShK domains contain the critical pore-occluding lysine required for K+ channel block. HMP2 plays a critical role in foot regeneration of Hydra, while PMP-1 is found in the feeding organ of the jellyfish, and its ShK domain may paralyze prey after they are ingested. CRISPs More distantly related are Cysteine-rich secretory proteins (CRISPs), which contain a ShK-like 'Cystine-rich domain' as well as a larger CAP-like 'Pathogenesis related 1' domain. These proteins are involved in mammalian reproduction as well as in the venoms of some snakes. In both cases, the mechanism is believed to involve inhibition of ion channel activity. References External links Ion channel toxins Neurotoxins Cysteine-rich proteins
Stichodactyla toxin
[ "Chemistry", "Biology" ]
6,600
[ "Neurochemistry", "Neurotoxins", "Cysteine-rich proteins", "Protein classification" ]
8,232,682
https://en.wikipedia.org/wiki/Robust%20optimization
Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution. It is related to, but often distinguished from, probabilistic optimization methods such as chance-constrained optimization.

History The origins of robust optimization date back to the establishment of modern decision theory in the 1950s and the use of worst case analysis and Wald's maximin model as a tool for the treatment of severe uncertainty. It became a discipline of its own in the 1970s with parallel developments in several scientific and technological fields. Over the years, it has been applied in statistics, but also in operations research, electrical engineering, control theory, finance, portfolio management, logistics, manufacturing engineering, chemical engineering, medicine, and computer science. In engineering problems, these formulations often take the name of "Robust Design Optimization" (RDO) or "Reliability Based Design Optimization" (RBDO).

Example 1 Consider the following linear programming problem

$$\max_{x,y}\ \{3x + 2y\}\quad \text{subject to}\quad x, y \ge 0;\ \ cx + dy \le 10,\ \forall (c,d) \in P$$

where $P$ is a given subset of $\mathbb{R}^2$. What makes this a 'robust optimization' problem is the $\forall (c,d) \in P$ clause in the constraints. Its implication is that for a pair $(x,y)$ to be admissible, the constraint $cx + dy \le 10$ must be satisfied by the worst $(c,d) \in P$ pertaining to $(x,y)$, namely the pair $(c,d) \in P$ that maximizes the value of $cx + dy$ for the given value of $(x,y)$. If the parameter space $P$ is finite (consisting of finitely many elements), then this robust optimization problem itself is a linear programming problem: for each $(c,d) \in P$ there is a linear constraint $cx + dy \le 10$. If $P$ is not a finite set, then this problem is a linear semi-infinite programming problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints.
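When $P$ is finite, the robust problem above is therefore just an ordinary linear program with one constraint per scenario. The following minimal sketch (an illustration added here, not part of the original text; the scenario values in P are made up) solves such an instance with SciPy's linprog:

```python
# Robust LP of Example 1 with a finite uncertainty set P:
#   maximize 3x + 2y   subject to   x, y >= 0  and  cx + dy <= 10 for all (c, d) in P.
from scipy.optimize import linprog

P = [(1.0, 1.0), (1.5, 0.5), (0.5, 2.0)]   # illustrative scenarios only

c_obj = [-3.0, -2.0]                        # linprog minimizes, so negate 3x + 2y
A_ub = [[c, d] for (c, d) in P]             # one constraint row cx + dy <= 10 per scenario
b_ub = [10.0] * len(P)

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                      # robust optimal (x, y) and its objective value
```

The worst-case clause is enforced simply by including every scenario's constraint; with infinitely many scenarios this reduction is no longer available, which is why the semi-infinite case is harder.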
Classification There are a number of classification criteria for robust optimization problems/models. In particular, one can distinguish between problems dealing with local and global models of robustness; and between probabilistic and non-probabilistic models of robustness. Modern robust optimization deals primarily with non-probabilistic models of robustness that are worst case oriented and as such usually deploy Wald's maximin models.

Local robustness There are cases where robustness is sought against small perturbations in a nominal value of a parameter. A very popular model of local robustness is the radius of stability model:

$$\hat\rho(x,\hat u) := \max\,\{\rho \ge 0 : u \in S(x),\ \forall u \in B(\rho,\hat u)\}$$

where $\hat u$ denotes the nominal value of the parameter, $B(\rho,\hat u)$ denotes a ball of radius $\rho$ centered at $\hat u$ and $S(x)$ denotes the set of values of $u$ that satisfy given stability/performance conditions associated with decision $x$. In words, the robustness (radius of stability) of decision $x$ is the radius of the largest ball centered at $\hat u$ all of whose elements satisfy the stability requirements imposed on $x$. Pictorially, if a rectangle represents the set $S(x)$ of all the values associated with decision $x$, the radius of stability is the radius of the largest ball centered at $\hat u$ that fits entirely inside that rectangle.

Global robustness Consider the simple abstract robust optimization problem

$$\max_{x \in X}\ \{f(x) : g(x,u) \le b,\ \forall u \in U\}$$

where $U$ denotes the set of all possible values of $u$ under consideration. This is a global robust optimization problem in the sense that the robustness constraint $g(x,u) \le b,\ \forall u \in U$ represents all the possible values of $u$. The difficulty is that such a "global" constraint can be too demanding in that there is no $x \in X$ that satisfies this constraint. But even if such an $x$ exists, the constraint can be too "conservative" in that it yields a solution that generates a very small payoff $f(x)$ that is not representative of the performance of other decisions in $X$. For instance, there could be an $x' \in X$ that only slightly violates the robustness constraint but yields a very large payoff $f(x')$. In such cases it might be necessary to relax a bit the robustness constraint and/or modify the statement of the problem.

Example 2 Consider the case where the objective is to satisfy a constraint $g(x,u) \le b$, where $x \in X$ denotes the decision variable and $u$ is a parameter whose set of possible values is $U$. If there is no $x \in X$ such that $g(x,u) \le b,\ \forall u \in U$, then the following intuitive measure of robustness suggests itself:

$$\rho(x) := \max_{Y \subseteq U}\ \{\operatorname{size}(Y) : g(x,u) \le b,\ \forall u \in Y\},\quad x \in X$$

where $\operatorname{size}(Y)$ denotes an appropriate measure of the "size" of set $Y$. For example, if $U$ is a finite set, then $\operatorname{size}(Y)$ could be defined as the cardinality of set $Y$. In words, the robustness of decision $x$ is the size of the largest subset of $U$ for which the constraint $g(x,u) \le b$ is satisfied for each $u$ in this set. An optimal decision is then a decision whose robustness is the largest. This yields the following robust optimization problem:

$$\max_{x \in X,\ Y \subseteq U}\ \{\operatorname{size}(Y) : g(x,u) \le b,\ \forall u \in Y\}$$

This intuitive notion of global robustness is not used often in practice because the robust optimization problems that it induces are usually (not always) very difficult to solve.

Example 3 Consider the robust optimization problem

$$z(U) := \max_{x \in X}\ \{f(x) : g(x,u) \le b,\ \forall u \in U\}$$

where $g$ is a real-valued function on $X \times U$, and assume that there is no feasible solution to this problem because the robustness constraint is too demanding. To overcome this difficulty, let $\mathcal{N}$ be a relatively small subset of $U$ representing "normal" values of $u$ and consider the following robust optimization problem:

$$z(\mathcal{N}) := \max_{x \in X}\ \{f(x) : g(x,u) \le b,\ \forall u \in \mathcal{N}\}$$

Since $\mathcal{N}$ is much smaller than $U$, its optimal solution may not perform well on a large portion of $U$ and therefore may not be robust against the variability of $u$ over $U$. One way to fix this difficulty is to relax the constraint $g(x,u) \le b$ for values of $u$ outside the set $\mathcal{N}$ in a controlled manner so that larger violations are allowed as the distance of $u$ from $\mathcal{N}$ increases. For instance, consider the relaxed robustness constraint

$$g(x,u) \le b + \beta \cdot \operatorname{dist}(u,\mathcal{N})$$

where $\beta \ge 0$ is a control parameter and $\operatorname{dist}(u,\mathcal{N})$ denotes the distance of $u$ from $\mathcal{N}$. Thus, for $\beta = 0$ the relaxed robustness constraint reduces back to the original robustness constraint. This yields the following (relaxed) robust optimization problem:

$$z(\mathcal{N},U) := \max_{x \in X}\ \{f(x) : g(x,u) \le b + \beta \cdot \operatorname{dist}(u,\mathcal{N}),\ \forall u \in U\}$$

The function $\operatorname{dist}$ is defined in such a manner that $\operatorname{dist}(u,\mathcal{N}) \ge 0$ for all $u \in U$ and $\operatorname{dist}(u,\mathcal{N}) = 0$ for all $u \in \mathcal{N}$, and therefore the optimal solution to the relaxed problem satisfies the original constraint $g(x,u) \le b$ for all values of $u$ in $\mathcal{N}$. It also satisfies the relaxed constraint $g(x,u) \le b + \beta \cdot \operatorname{dist}(u,\mathcal{N})$ outside $\mathcal{N}$.

Non-probabilistic robust optimization models The dominating paradigm in this area of robust optimization is Wald's maximin model, namely

$$\max_{x \in X} \min_{s \in S(x)} f(x,s)$$

where the $\max$ represents the decision maker, the $\min$ represents Nature, namely uncertainty, $X$ represents the decision space and $S(x)$ denotes the set of possible values of $s$ associated with decision $x$. This is the classic format of the generic model, and is often referred to as minimax or maximin optimization problem. The non-probabilistic (deterministic) model has been and is being extensively used for robust optimization especially in the field of signal processing. The equivalent mathematical programming (MP) formulation of the classic format above is

$$\max_{x \in X,\ v \in \mathbb{R}}\ \{v : v \le f(x,s),\ \forall s \in S(x)\}$$

Constraints can be incorporated explicitly in these models. The generic constrained classic format is

$$\max_{x \in X} \min_{s \in S(x)}\ \{f(x,s) : g(x,s) \le b,\ \forall s \in S(x)\}$$

The equivalent constrained MP format is defined as:

$$\max_{x \in X,\ v \in \mathbb{R}}\ \{v : v \le f(x,s),\ g(x,s) \le b,\ \forall s \in S(x)\}$$
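For clarity (a short derivation added here, not from the original text), the equivalence of the classic maximin format and its MP counterpart follows from the epigraph identity

$$\min_{s \in S(x)} f(x,s) \;=\; \max\,\{v \in \mathbb{R} : v \le f(x,s),\ \forall s \in S(x)\},$$

so maximizing over $x \in X$ on both sides replaces the inner worst case by the auxiliary variable $v$ and yields the MP format above.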
Probabilistically robust optimization models These models quantify the uncertainty in the "true" value of the parameter of interest by probability distribution functions. They have been traditionally classified as stochastic programming and stochastic optimization models. Recently, probabilistically robust optimization has gained popularity with the introduction of rigorous theories such as scenario optimization that are able to quantify the robustness level of solutions obtained by randomization. These methods are also relevant to data-driven optimization methods. Robust counterpart The solution method for many robust programs involves creating a deterministic equivalent, called the robust counterpart. The practical difficulty of a robust program depends on whether its robust counterpart is computationally tractable. See also Stability radius Minimax Minimax estimator Minimax regret Robust statistics Robust decision making Robust fuzzy programming Stochastic programming Stochastic optimization Info-gap decision theory Taguchi methods References Further reading H.J. Greenberg. Mathematical Programming Glossary. World Wide Web, http://glossary.computing.society.informs.org/, 1996-2006. Edited by the INFORMS Computing Society. Ben-Tal A., El Ghaoui, L. and Nemirovski, A. (2006). Mathematical Programming, Special issue on Robust Optimization, Volume 107(1-2). Ben-Tal A., El Ghaoui, L. and Nemirovski, A. (2009). Robust Optimization. Princeton Series in Applied Mathematics, Princeton University Press. Dodson, B., Hammett, P., and Klerx, R. (2014). Probabilistic Design for Optimization and Robustness for Engineers. John Wiley & Sons, Inc. Kouvelis P. and Yu G. (1997). Robust Discrete Optimization and Its Applications, Kluwer. Nejadseyfi, O., Geijselaers H.J.M, van den Boogaard A.H. (2018). "Robust optimization based on analytical evaluation of uncertainty propagation". Engineering Optimization 51 (9): 1581-1603. doi:10.1080/0305215X.2018.1536752. Rustem B. and Howe M. (2002). Algorithms for Worst-case Design and Applications to Risk Management, Princeton University Press. Wald, A. (1950). Statistical Decision Functions, John Wiley, NY. External links ROME: Robust Optimization Made Easy Robust Decision-Making Under Severe Uncertainty Robustimizer: Robust optimization software Mathematical optimization
Robust optimization
[ "Mathematics" ]
1,765
[ "Mathematical optimization", "Mathematical analysis" ]
8,233,045
https://en.wikipedia.org/wiki/Loomis%E2%80%93Whitney%20inequality
In mathematics, the Loomis–Whitney inequality is a result in geometry, which in its simplest form, allows one to estimate the "size" of a $d$-dimensional set by the sizes of its $(d-1)$-dimensional projections. The inequality has applications in incidence geometry, the study of so-called "lattice animals", and other areas. The result is named after the American mathematicians Lynn Harold Loomis and Hassler Whitney, and was published in 1949.

Statement of the inequality Fix a dimension $d \ge 2$ and consider the projections

$$\pi_j : \mathbb{R}^d \to \mathbb{R}^{d-1}, \quad x = (x_1, \dots, x_d) \mapsto (x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_d).$$

For each 1 ≤ j ≤ d, let $g_j : \mathbb{R}^{d-1} \to [0, +\infty)$ be a nonnegative measurable function. Then the Loomis–Whitney inequality holds:

$$\int_{\mathbb{R}^d} \prod_{j=1}^d g_j(\pi_j(x)) \, \mathrm{d}x \le \prod_{j=1}^d \|g_j\|_{L^{d-1}(\mathbb{R}^{d-1})}.$$

Equivalently, taking $f_j = g_j^{d-1}$ we have $g_j = f_j^{1/(d-1)}$, implying

$$\int_{\mathbb{R}^d} \prod_{j=1}^d f_j(\pi_j(x))^{1/(d-1)} \, \mathrm{d}x \le \prod_{j=1}^d \left( \int_{\mathbb{R}^{d-1}} f_j \right)^{1/(d-1)}.$$

A special case The Loomis–Whitney inequality can be used to relate the Lebesgue measure of a subset of Euclidean space $\mathbb{R}^d$ to its "average widths" in the coordinate directions. This is in fact the original version published by Loomis and Whitney in 1949 (the above is a generalization). Let E be some measurable subset of $\mathbb{R}^d$ and let

$$f_j = \mathbf{1}_{\pi_j(E)}$$

be the indicator function of the projection of E onto the jth coordinate hyperplane. It follows that for any point x in E,

$$\prod_{j=1}^d f_j(\pi_j(x))^{1/(d-1)} = 1.$$

Hence, by the Loomis–Whitney inequality,

$$|E| \le \prod_{j=1}^d |\pi_j(E)|^{1/(d-1)},$$

and hence

$$|E| \ge \prod_{j=1}^d \frac{|E|}{|\pi_j(E)|}.$$

The quantity $\frac{|E|}{|\pi_j(E)|}$ can be thought of as the average width of $E$ in the $j$th coordinate direction. This interpretation of the Loomis–Whitney inequality also holds if we consider a finite subset of Euclidean space and replace Lebesgue measure by counting measure.

Corollary. Since $2|\pi_j(E)| \le |\partial E|$ (each line through a point of the projection meets the boundary at least twice), we get a loose isoperimetric inequality:

$$|E|^{d-1} \le \prod_{j=1}^d |\pi_j(E)| \le \left( \frac{|\partial E|}{2} \right)^d, \quad \text{that is,} \quad |E|^{\frac{d-1}{d}} \le \frac{1}{2}\,|\partial E|.$$

Iterating the theorem yields

$$|E| \le \prod_{j=1}^d |\pi_{\{j\}}(E)|$$

and more generally

$$|E| \le \prod_{S} |\pi_S(E)|^{1/\binom{d-1}{k-1}},$$

where $S$ enumerates over all projections of $\mathbb{R}^d$ to its $k$-dimensional coordinate subspaces.

Generalizations The Loomis–Whitney inequality is a special case of the Brascamp–Lieb inequality, in which the projections $\pi_j$ above are replaced by more general linear maps, not necessarily all mapping onto spaces of the same dimension. References Sources Incidence geometry Geometric inequalities
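As a quick sanity check of the special case (an added illustration, not part of the original article), an axis-aligned box attains equality. In $d = 3$, take $E = [0,a] \times [0,b] \times [0,c]$; the coordinate projections have measures

$$|\pi_1(E)| = bc, \quad |\pi_2(E)| = ac, \quad |\pi_3(E)| = ab,$$

so

$$\prod_{j=1}^3 |\pi_j(E)|^{1/2} = (bc \cdot ac \cdot ab)^{1/2} = abc = |E|,$$

and boxes achieve equality in $|E| \le \prod_j |\pi_j(E)|^{1/(d-1)}$.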
Loomis–Whitney inequality
[ "Mathematics" ]
398
[ "Combinatorics", "Geometric inequalities", "Inequalities (mathematics)", "Theorems in geometry", "Incidence geometry" ]
8,236,444
https://en.wikipedia.org/wiki/Sommerfeld%E2%80%93Kossel%20displacement%20law
The Sommerfeld–Kossel displacement law states that the first spark (singly ionized) spectrum of an element is similar in all details to the arc (neutral) spectrum of the element preceding it in the periodic table. Likewise, the second (doubly ionized) spark spectrum of an element is similar in all details to the first (singly ionized) spark spectrum of the element preceding it, or to the arc (neutral) spectrum of the element with atomic number two less, and so forth. Hence, the spectra of C I (neutral carbon), N II (singly ionized nitrogen), and O III (doubly ionized oxygen) atoms are similar, apart from shifts of the spectra to shorter wavelengths. C I, N II, and O III all have the same number of electrons, six, and the same ground-state electron configuration: $1s^2\,2s^2\,2p^2$. The law was discovered by and named after Arnold Sommerfeld and Walther Kossel, who set it forth in a paper submitted to Verhandlungen der Deutschen Physikalischen Gesellschaft in early 1919. References Gerhard Herzberg, translated from German with the help of the author by J. W. T. Spinks, Atomic Spectra and Atomic Structure (Dover, 1945) Mehra, Jagdish, and Helmut Rechenberg, The Historical Development of Quantum Theory. Volume 1 Part 1: The Quantum Theory of Planck, Einstein, Bohr and Sommerfeld 1900–1925: Its Foundation and the Rise of Its Difficulties (Springer, 1982) A. R. Striganov and N. S. Sventitskii, Tables of Spectral Lines of Neutral and Ionized Atoms (Plenum, 1968) Notes Atomic physics Quantum mechanics Spectroscopy
Sommerfeld–Kossel displacement law
[ "Physics", "Chemistry" ]
356
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Theoretical physics", "Quantum mechanics", " molecular", "Atomic physics", "Atomic", "Spectroscopy", " and optical physics" ]
8,236,934
https://en.wikipedia.org/wiki/Direct%20analysis%20in%20real%20time
In mass spectrometry, direct analysis in real time (DART) is an ion source that produces electronically or vibronically excited-state species from gases such as helium, argon, or nitrogen that ionize atmospheric molecules or dopant molecules. The ions generated from atmospheric or dopant molecules undergo ion-molecule reactions with the sample molecules to produce analyte ions. Analytes with low ionization energy may be ionized directly. The DART ionization process can produce positive or negative ions depending on the potential applied to the exit electrode. This ionization can occur for species desorbed directly from surfaces such as bank notes, tablets, bodily fluids (blood, saliva and urine), polymers, glass, plant leaves, fruits & vegetables, clothing, and living organisms. DART is applied for rapid analysis of a wide variety of samples at atmospheric pressure and in the open laboratory environment. It does not need a specific sample preparation, so it can be used for the analysis of solid, liquid and gaseous samples in their native state. With the aid of DART, exact mass measurements can be done rapidly with high-resolution mass spectrometers. DART mass spectrometry has been used in pharmaceutical applications, forensic studies, quality control, and environmental studies. History DART resulted from conversations between Laramee and Cody about the development of an atmospheric pressure ion source to replace the radioactive sources in handheld chemical weapons detectors. DART was developed in late 2002 to early 2003 by Cody and Laramee as a new atmospheric pressure ionization process, and a US patent application was filed in April 2003. Although the development of DART actually predated the desorption electrospray ionization (DESI) ion source, the initial DART publication did not appear until shortly after the DESI publication, and both ion sources were publicly introduced in back-to-back presentations by R. G. Cooks and R. B. Cody at the January 2005 ASMS Sanibel Conference. DESI and DART are considered pioneer techniques in the field of ambient ionization, since they operate in the open laboratory environment and do not require sample pretreatment. In contrast to the liquid spray used by DESI, the ionizing gas from the DART ion source is a dry stream containing excited-state species. Principle of operation Ionization process Formation of metastable species As the gas (M) enters the ion source, an electric potential in the range of +1 to +5 kV is applied to generate a glow discharge. The glow discharge plasma contains short-lived energetic species including electrons, ions, and excimers. Ion/electron recombination leads to the formation of long-lived excited-state neutral atoms or molecules (metastable species, M*) in the flowing afterglow region:

{M} + energy -> {M^{\ast}}

The DART gas can be heated from room temperature (RT) to 550 °C to facilitate desorption of analyte molecules. Heating is optional but may be necessary depending on the surface or chemical being analyzed. The heated stream of gaseous metastable species passes through a porous exit electrode that is biased to a positive or negative potential in the range 0 to 530 V. When biased to a positive potential, the exit electrode acts to remove electrons and negative ions formed by Penning ionization from the gas stream to prevent ion/electron recombination and ion loss. If the exit electrode is biased to a negative potential, electrons can be generated directly from the electrode material by surface Penning ionization. 
An insulator cap at the terminal end of the ion source protects the operator from harm. DART can be used for the analysis of solid, liquid or gaseous samples. Liquids are typically analyzed by dipping an object (such as a glass rod) into the liquid sample and then presenting it to the DART ion source. Vapors are introduced directly into the DART gas stream. Positive ion formation Once the metastable carrier gas atoms (M*) are released from the source, they initiate Penning ionization of nitrogen, atmospheric water and other gaseous species. Although some compounds can be ionized directly by Penning ionization, the most common positive-ion formation mechanism for DART involves ionization of atmospheric water.

{M^{\ast}} + {N2} -> {M} + {N2}^{+\bullet} + e^-
{M^{\ast}} + {H2O} -> {M} + {H2O}^{+\bullet} + e^-

Although the exact ion formation mechanism is not fully established, water can be ionized directly by Penning ionization. Another proposal is that water is ionized by the same mechanism that has been proposed for atmospheric pressure chemical ionization:

{N2^{+\bullet}} + {2N2} -> {N4^{+\bullet}} + {N2}
{N4^{+\bullet}} + {H2O} -> {2N2} + {H2O}^{+\bullet}

Ionized water can undergo further ion-molecule reactions to form protonated water clusters, {[{\mathit{n}H2O} + H]}^+:

{H2O^{+\bullet}} + {H2O} -> {H3O+} + {OH^{\bullet}}
{H3O^+} + \mathit{n}H2O -> {[{\mathit{n}H2O} + H]}^+

The stream of protonated water clusters acts as a secondary ionizing species and generates analyte ions by chemical ionization mechanisms at atmospheric pressure. Here protonation, deprotonation, direct charge transfer and adduct ion formation may occur.

{S} + {[{\mathit{n}H2O} + H]}^{+} -> {[{S} + H]}^+ + \mathit{n}H2O
{N4}^{+\bullet} + S -> {2N2} + S^{+\bullet}
{O2}^{+\bullet} + S -> {O2} + S^{+\bullet}
{NO^+} + S -> {NO} + S^{+\bullet}
{[NH4]^+} + S -> {[{S} + NH4]}^+

Metastable argon atoms do not have enough internal energy to ionize water, so DART ionization with argon gas requires the use of a dopant. Negative ion formation In negative-ion mode, the exit grid electrode is set to a negative potential. Penning electrons undergo electron capture by atmospheric oxygen to produce O2−, which in turn ionizes the analyte. Several reactions are possible, depending on the analyte.

{O2} + {e}^{-} -> {O2}^{-\bullet}
{O2}^{-\bullet} + {S} -> {S}^{-\bullet} + O2
{S} + {e}^{-} -> {S}^{-\bullet}
{SX} + {e}^{-} -> {S}^{-} + {X}^{\bullet}
{SH} -> {[S-H]}^{-} + {H}^{+}

The negative-ion sensitivity of DART gases varies with the efficiency of forming electrons by Penning ionization, which means that the negative-ion sensitivity increases with the internal energy of the metastable species, for example nitrogen < neon < helium. Instrumentation Source to analyzer interface Analyte ions are formed at ambient pressure during Penning and chemical ionization. The mass spectrometry analysis, however, takes place under high-vacuum conditions. Therefore, ions entering the mass spectrometer first go through a source-to-analyzer interface (vacuum interface), which was designed to bridge the atmospheric pressure region to the mass spectrometer vacuum. It also minimizes spectrometer contamination. In the original JEOL atmospheric pressure interface used for DART, ions are directed to the ion guide through the outer (i) and inner (ii) skimmer orifices by applying a slight potential difference between them: orifice i at 20 V and orifice ii at 5 V. 
The alignment of the two orifices is staggered to trap neutral contamination and protect the high-vacuum region. Charged species (ions) are guided to the second orifice through an intermediate cylindrical electrode ("ring lens"), but neutral molecules travel in a straight pathway and are thus blocked from entering the ion guide. The neutral contamination is then removed by the pump. The DART source can be operated in surface desorption mode or transmission mode. In the ordinary surface desorption mode, the sample is positioned in a way which enables the reactive DART reagent ion stream to flow onto the surface while allowing the flow of desorbed analyte ions into the interface. Therefore, this mode requires that the gas stream grazes the sample surface and does not block gas flow to the mass spectrometer sampling orifice. In contrast, transmission mode DART (tm-DART) uses a custom-made sample holder and introduces the sample at a fixed geometry. Coupling with separation techniques DART can be combined with many separation techniques. Thin-layer chromatography (TLC) plates have been analyzed by positioning them directly in the DART gas stream. Gas chromatography has been carried out by coupling gas chromatography columns directly into the DART gas stream through a heated interface. Eluate from a high-pressure liquid chromatograph (HPLC) can also be introduced to the reaction zone of the DART source and analyzed. DART can be coupled with capillary electrophoresis (CE), with the eluate of the CE guided to the mass spectrometer through the DART ion source. Mass spectra In positive ion mode, DART produces predominantly protonated molecules [M+H]+ and in negative-ion mode deprotonated molecules [M-H]−. Both negative and positive modes of DART provide relatively simple mass spectra. Depending on the type of analyte, other species may be formed, such as multiply charged adducts. DART is categorized as a soft ionization technique; fragmentation is only rarely observed for some molecules. Use of DART compared to traditional methods minimizes the required sample amount and sample preparation, eliminates extraction steps, and decreases the limit of detection and the analysis time. It also provides a broad sensitivity range, simultaneous determination of multiple drug analytes and sufficient mass accuracy for formula determination. The DART ion source is a kind of gas-phase ionization, and it requires some degree of volatility of the analyte to support thermally assisted desorption of analyte ions. This limits the size range of the molecules that can be analyzed by DART, i.e. m/z 50 to 1200. DART-MS is capable of semi-quantitative and quantitative analysis. To accelerate sample release from the surface, the DART gas stream is usually heated to temperatures in the range 100-500 °C, and this operation can be employed for temperature-dependent analysis. Applications DART is being applied in many fields, including the fragrance industry, pharmaceutical industry, foods and spices, forensic science and health, materials analysis, etc. In forensic science, DART is used for analysis of explosives, warfare agents, drugs, inks and sexual assault evidence. In the clinical and pharmaceutical sector, DART is utilized for body fluid analysis (blood, plasma, urine, etc.) and to study traditional medicines. DART can also detect the composition of a medicine in tablet form, since there is no need for sample preparation such as crushing or extraction. In the food industry, DART supports the quality and authenticity assessment of food. 
It is also used in the analysis of mycotoxins in beverages, semi-quantitative analysis of caffeine, monitoring the heat-accelerated decomposition of vegetable oils, and many other food safety analyses. In the manufacturing industry, DART is often utilized to determine the deposition and release of fragrances on surfaces such as fabric and hair, and of dyes in textiles. DART is used in environmental analysis, for example the analysis of organic UV filters in water, contaminants in soil, petroleum products, and aerosols. DART also plays an important role in biological studies. It enables studying the chemical profiles of plants and organisms. See also Ambient ionization Atmospheric pressure chemical ionization Atmospheric pressure photoionization Desorption atmospheric pressure photoionization Desorption electrospray ionization Electric glow discharge References Patents Robert B. Cody and James A. Laramee, "Method for atmospheric pressure ionization", issued September 27, 2005. (Priority date: April 2003). James A. Laramee and Robert B. Cody, "Method for Atmospheric Pressure Analyte Ionization", issued September 26, 2006. Ion source Measuring instruments
Direct analysis in real time
[ "Physics", "Technology", "Engineering" ]
2,688
[ "Ion source", "Mass spectrometry", "Spectrum (physical sciences)", "Measuring instruments" ]
8,238,015
https://en.wikipedia.org/wiki/Fruit%20rot
Fruit rot disease may refer to: Phomopsis cane and leaf spot, caused in grapes by Phomopsis viticola; Kole-roga caused in coconut and betel nut by Phytophthora palmivora; Botrytis bunch rot caused by Botrytis cinerea primarily in grapes; Black mold caused by Aspergillus niger; Leaf spot, and others, caused by Alternaria alternata; Bitter rot caused by Glomerella cingulata; Cladosporium rot or Soft rot caused by Cladosporium cladosporioides; Kernel rot or Fusariosis on maize (corn) caused by Fusarium sporotrichioides; Sour rot caused by Geotrichum candidum; Penicillium rot or Blue-eye caused by Penicillium chrysogenum; Soft rot or Blue mold caused by Penicillium expansum; Brown rot caused by Monilinia fructicola; Strawberry fruit rot caused by Pestalotia longisetula
Fruit rot
[ "Biology" ]
211
[ "Set index articles on fungus common names", "Set index articles on organisms" ]
8,238,464
https://en.wikipedia.org/wiki/Black%20Data%20Processing%20Associates
Black Data Processing Associates (BDPA) is an American non-profit organization that serves the professional well-being of African Americans and other minorities working within technology. BDPA provides resources that support the professional growth and technical development of minority individuals in the information technology industry. Through education and leadership, BDPA promotes innovation, business skills, and professional development. The organization has over 50 chapters throughout the United States. BDPA National headquarters is located in Largo, Maryland. History BDPA was founded in 1975 by Earl A. Pace Jr. and David Wimberly after the two met in Philadelphia to discuss their concerns about ethnic minorities in the data processing field. The founders cited a lack of minorities in middle and upper management, low recruitment and poor preparation of minorities for these positions, and an overall lack of career mobility. The founders built an organization of 35 members, hosted presentations to improve data processing skills and launched a job opportunities announcement service. This nucleus has grown to over 50 chapters throughout the United States and thousands of members. The organization is a catalyst for professional growth and technical development for those in the IT industry. BDPA has been active in community involvement, mentorship, and classes, especially during COVID-19. In summer 2020, BDPA offered STEM-related mentorship and classes for high school students in Indiana. In 2021, BDPA collected laptops and other electronics for children's e-learning efforts for Afghan refugees at Camp Atterbury. BDPA High School Computer Competition The National High School Computer Competition (HSCC) was founded in 1986. The competition started as a two-team event between Washington, DC, and Atlanta, Georgia, and now has over 20 teams from chapters throughout the nation. See also Black in AI References External links BDPA BDPA Education and Technology Foundations (BETF) African-American professional organizations Information technology organizations based in North America Professional associations based in the United States 1975 establishments in the United States Diversity in computing Organizations established in 1975 Data activism Data and information organizations
Black Data Processing Associates
[ "Technology" ]
416
[ "Diversity in computing", "Data", "Computing and society", "Data activism", "Data and information organizations" ]
8,240,558
https://en.wikipedia.org/wiki/Line%E2%80%93line%20intersection
In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line.

Formulas A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see Skew lines § Testing for skewness.

Given two points on each line First we consider the intersection of two lines $L_1$ and $L_2$ in two-dimensional space, with line $L_1$ being defined by two distinct points $(x_1, y_1)$ and $(x_2, y_2)$, and line $L_2$ being defined by two distinct points $(x_3, y_3)$ and $(x_4, y_4)$. The intersection $P = (P_x, P_y)$ of lines $L_1$ and $L_2$ can be defined using determinants. The determinants can be written out as:

$$P_x = \frac{(x_1 y_2 - y_1 x_2)(x_3 - x_4) - (x_1 - x_2)(x_3 y_4 - y_3 x_4)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}$$

$$P_y = \frac{(x_1 y_2 - y_1 x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 y_4 - y_3 x_4)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}$$

When the two lines are parallel or coincident, the denominator is zero.

Given two points on each line segment The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines $L_1$ and $L_2$ in terms of first degree Bézier parameters:

$$L_1 = \begin{pmatrix} x_1 + t(x_2 - x_1) \\ y_1 + t(y_2 - y_1) \end{pmatrix}, \qquad L_2 = \begin{pmatrix} x_3 + u(x_4 - x_3) \\ y_3 + u(y_4 - y_3) \end{pmatrix}$$

(where $t$ and $u$ are real numbers). The intersection point of the lines is found with one of the following values of $t$ or $u$, where

$$t = \frac{(x_1 - x_3)(y_3 - y_4) - (y_1 - y_3)(x_3 - x_4)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}$$

and

$$u = \frac{(x_1 - x_3)(y_1 - y_2) - (y_1 - y_3)(x_1 - x_2)}{(x_1 - x_2)(y_3 - y_4) - (y_1 - y_2)(x_3 - x_4)}$$

with

$$(P_x, P_y) = \bigl(x_1 + t(x_2 - x_1),\; y_1 + t(y_2 - y_1)\bigr) = \bigl(x_3 + u(x_4 - x_3),\; y_3 + u(y_4 - y_3)\bigr).$$

There will be an intersection if $0 \le t \le 1$ and $0 \le u \le 1$. The intersection point falls within the first line segment if $0 \le t \le 1$, and it falls within the second line segment if $0 \le u \le 1$. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point.

Given two line equations The $x$ and $y$ coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Suppose that two lines have the equations $y = ax + c$ and $y = bx + d$ where $a$ and $b$ are the slopes (gradients) of the lines and where $c$ and $d$ are the $y$-intercepts of the lines. At the point where the two lines intersect (if they do), both $y$ coordinates will be the same, hence the following equality:

$$ax + c = bx + d.$$

We can rearrange this expression in order to extract the value of $x$,

$$ax - bx = d - c,$$

and so,

$$x = \frac{d - c}{a - b}.$$

To find the $y$ coordinate, all we need to do is substitute the value of $x$ into either one of the two line equations, for example, into the first:

$$y = a\,\frac{d - c}{a - b} + c.$$

Hence, the point of intersection is

$$\left( \frac{d - c}{a - b},\; \frac{ad - bc}{a - b} \right).$$

Note that if $a = b$ then the two lines are parallel and they do not intersect, unless $c = d$ as well, in which case the lines are coincident and they intersect at every point. 
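The segment test above translates directly into code. The following minimal sketch (an illustration added here, not part of the original article; the function name is ours) computes the Bézier parameters and returns the intersection point only when it lies on both segments:

```python
# Segment-segment intersection via the first-degree Bezier parameters t and u
# from the formulas above; returns the intersection point or None.
def segment_intersection(p1, p2, p3, p4):
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:                      # parallel or coincident lines
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:     # intersection lies on both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

print(segment_intersection((0, 0), (1, 1), (0, 1), (1, 0)))  # -> (0.5, 0.5)
```

As the article notes, the sign tests on $t$ and $u$ can also be performed on the numerators and denominator separately, deferring the divisions until an intersection is known to exist.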
Using homogeneous coordinates By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple $(x, y, w)$. The mapping from 3D to 2D coordinates is $(x', y') = (x/w, y/w)$. We can convert 2D points to homogeneous coordinates by defining them as $(x, y, 1)$. Assume that we want to find the intersection of two infinite lines in 2-dimensional space, defined as $a_1 x + b_1 y + c_1 = 0$ and $a_2 x + b_2 y + c_2 = 0$. We can represent these two lines in line coordinates as $U_1 = (a_1, b_1, c_1)$ and $U_2 = (a_2, b_2, c_2)$. The intersection $P'$ of two lines is then simply given by the cross product

$$P' = U_1 \times U_2 = (b_1 c_2 - b_2 c_1,\; a_2 c_1 - a_1 c_2,\; a_1 b_2 - a_2 b_1).$$

If $a_1 b_2 - a_2 b_1 = 0$, the lines do not intersect.

More than two lines The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the $n$-line intersection problem are as follows.

In two dimensions In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the $i$th equation ($i = 1, \dots, n$) as

$$\begin{pmatrix} a_{i1} & a_{i2} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = b_i,$$

and stack these equations into matrix form as

$$A\mathbf{x} = \mathbf{b},$$

where the $i$th row of the $n \times 2$ matrix $A$ is $(a_{i1}, a_{i2})$, $\mathbf{x}$ is the 2 × 1 vector $(x, y)^\mathsf{T}$, and the $i$th element of the column vector $\mathbf{b}$ is $b_i$. If $A$ has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix $[A \mid \mathbf{b}]$ is also 2, there exists a solution of the matrix equation and thus an intersection point of the $n$ lines. The intersection point, if it exists, is given by

$$\mathbf{x} = A^g \mathbf{b} = (A^\mathsf{T} A)^{-1} A^\mathsf{T} \mathbf{b},$$

where $A^g$ is the Moore–Penrose generalized inverse of $A$ (which has the form shown because $A$ has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of $A$ is only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other.

In three dimensions The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form

$$a_{i1} x + a_{i2} y + a_{i3} z = b_i.$$

Thus a set of $n$ lines can be represented by $2n$ equations in the 3-dimensional coordinate vector $\mathbf{x} = (x, y, z)^\mathsf{T}$:

$$A\mathbf{x} = \mathbf{b},$$

where now $A$ is $2n \times 3$ and $\mathbf{b}$ is $2n \times 1$. As before there is a unique intersection point if and only if $A$ has full column rank and the augmented matrix $[A \mid \mathbf{b}]$ does not, and the unique intersection if it exists is given by

$$\mathbf{x} = (A^\mathsf{T} A)^{-1} A^\mathsf{T} \mathbf{b}.$$
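A small illustration of the $n$-line formulation (added here, not part of the original article; the three example lines are made up):

```python
# Least-squares "intersection" of n lines a_i1*x + a_i2*y = b_i:
# solve the stacked system A x = b, as in the section above.
import numpy as np

# three lines through the point (1, 2):  2x - y = 0,  x = 1,  x + y = 3
A = np.array([[2.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0]])
b = np.array([0.0, 1.0, 3.0])

x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x, rank)   # -> [1. 2.] 2
```

For a full-column-rank $A$, the least-squares solution returned by lstsq coincides with the Moore–Penrose expression $(A^\mathsf{T} A)^{-1} A^\mathsf{T} \mathbf{b}$ given above, and a zero residual at rank 2 certifies that the lines truly share a point.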
Nearest points to skew lines In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense.

In two dimensions In the two-dimensional case, first, represent line $i$ as a point $p_i$ on the line and a unit normal vector $\hat{n}_i$, perpendicular to that line. That is, if $x_1$ and $x_2$ are points on line 1, then let $p_1 = x_1$ and let

$$\hat{n}_1 := \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \frac{x_2 - x_1}{\|x_2 - x_1\|},$$

which is the unit vector along the line, rotated by a right angle. The distance from a point $x$ to the line $(p, \hat{n})$ is given by

$$d\bigl(x, (p, \hat{n})\bigr) = |(x - p) \cdot \hat{n}|.$$

And so the squared distance from a point $x$ to a line is

$$d\bigl(x, (p, \hat{n})\bigr)^2 = \bigl((x - p)^\mathsf{T} \hat{n}\bigr)^2 = (x - p)^\mathsf{T} \hat{n} \hat{n}^\mathsf{T} (x - p).$$

The sum of squared distances to many lines is the cost function:

$$E(x) = \sum_i (x - p_i)^\mathsf{T} \hat{n}_i \hat{n}_i^\mathsf{T} (x - p_i).$$

This can be rearranged:

$$E(x) = x^\mathsf{T} \Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T}\Bigr) x - 2\, x^\mathsf{T} \Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T} p_i\Bigr) + \sum_i p_i^\mathsf{T} \hat{n}_i \hat{n}_i^\mathsf{T} p_i.$$

To find the minimum, we differentiate with respect to $x$ and set the result equal to the zero vector:

$$\frac{\partial E(x)}{\partial x} = 0 = 2 \Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T}\Bigr) x - 2 \Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T} p_i\Bigr),$$

so

$$\Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T}\Bigr) x = \sum_i \hat{n}_i \hat{n}_i^\mathsf{T} p_i,$$

and so

$$x = \Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T}\Bigr)^{-1} \Bigl(\sum_i \hat{n}_i \hat{n}_i^\mathsf{T} p_i\Bigr).$$

In more than two dimensions While $\hat{n}_i$ is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that $\hat{n}_i \hat{n}_i^\mathsf{T}$ is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line, providing a seminorm on the distance between $p_i$ and another point that gives the distance to the line. In any number of dimensions, if $\hat{v}_i$ is a unit vector along the $i$th line, then $\hat{n}_i \hat{n}_i^\mathsf{T}$ becomes

$$I - \hat{v}_i \hat{v}_i^\mathsf{T},$$

where $I$ is the identity matrix, and so

$$x = \Bigl(\sum_i (I - \hat{v}_i \hat{v}_i^\mathsf{T})\Bigr)^{-1} \Bigl(\sum_i (I - \hat{v}_i \hat{v}_i^\mathsf{T})\, p_i\Bigr).$$

General derivation In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin $a_i$ and a unit direction vector $\hat{n}_i$. The square of the distance from a point $p$ to one of the lines is given from Pythagoras:

$$d_i^2 = \|p - a_i\|^2 - \bigl((p - a_i)^\mathsf{T} \hat{n}_i\bigr)^2,$$

where $(p - a_i)^\mathsf{T} \hat{n}_i$ is the projection of $p - a_i$ on line $i$. The sum of squared distances to all lines is

$$\sum_i d_i^2 = \sum_i \Bigl(\|p - a_i\|^2 - \bigl((p - a_i)^\mathsf{T} \hat{n}_i\bigr)^2\Bigr).$$

To minimize this expression, we differentiate it with respect to $p$:

$$\sum_i \Bigl(2(p - a_i) - 2\hat{n}_i \bigl(\hat{n}_i^\mathsf{T} (p - a_i)\bigr)\Bigr) = 0,$$

which results in

$$\Bigl(\sum_i (I - \hat{n}_i \hat{n}_i^\mathsf{T})\Bigr) p = \sum_i (I - \hat{n}_i \hat{n}_i^\mathsf{T})\, a_i,$$

where $I$ is the identity matrix. This is a matrix equation $S p = C$, with solution $p = S^+ C$, where $S^+$ is the pseudo-inverse of $S$.

Non-Euclidean geometry In spherical geometry, any two great circles intersect. In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line. See also Line segment intersection Line intersection in projective space Distance between two parallel lines Distance from a point to a line Line–plane intersection Parallel postulate Triangulation (computer vision) References External links Distance between Lines and Segments with their Closest Point of Approach, applicable to two, three, or more dimensions. Euclidean geometry Linear algebra Geometric algorithms Geometric intersection
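A compact implementation of the general derivation (an added sketch, not part of the original article; the function name and test lines are ours), valid in any dimension:

```python
# Mutually closest point to several lines: each line is given by an
# origin a_i and a direction; solves S p = C with S = sum(I - n n^T).
import numpy as np

def nearest_point_to_lines(origins, directions):
    origins = np.asarray(origins, dtype=float)
    dirs = np.asarray(directions, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # ensure unit vectors
    dim = origins.shape[1]
    S = np.zeros((dim, dim))
    C = np.zeros(dim)
    for a, n in zip(origins, dirs):
        M = np.eye(dim) - np.outer(n, n)   # projector perpendicular to the line
        S += M
        C += M @ a
    return np.linalg.pinv(S) @ C           # p = S^+ C

# two skew lines in 3D; the result is the midpoint of their common perpendicular
print(nearest_point_to_lines([[0, 0, 0], [0, 1, 1]],
                             [[1, 0, 0], [0, 1, 0]]))     # -> [0.  0.  0.5]
```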
Line–line intersection
[ "Mathematics" ]
1,689
[ "Linear algebra", "Algebra" ]
4,780,628
https://en.wikipedia.org/wiki/Change%20management%20%28engineering%29
The change request management process in systems engineering is the process of requesting, determining attainability, planning, implementing, and evaluating changes to a system. Its main goals are to support the processing and traceability of changes to an interconnected set of factors. Introduction There is considerable overlap and confusion between change request management, change control and configuration management. The definition below does not yet integrate these areas. Change request management has been embraced for its ability to deliver benefits by improving the affected system and thereby satisfying "customer needs," but has also been criticized for its potential to confuse and needlessly complicate change administration. In some cases, notably in the Information Technology domain, more funds and work are put into system maintenance (and change request management) than into the initial creation of a system. Typical investment by organizations during initial implementation of large ERP systems is 15 to 20 percent of overall budget. In the same vein, Hinley describes two of Lehman's laws of software evolution: The law of continuing change: Systems that are used must change, or else automatically become less useful. The law of increasing complexity: Through changes, the structure of a system becomes ever more complex, and more resources are required to simplify it. Change request management is also of great importance in the field of manufacturing, which is confronted with many changes due to increasing and worldwide competition, technological advances and demanding customers. Because many systems tend to change and evolve as they are used, the problems of these industries are experienced to some degree in many others. Notes: In the process below, it is arguable that the change committee should be responsible not only for accept/reject decisions, but also prioritization, which influences how change requests are batched for processing. The process and its deliverables For the description of the change request management process, the meta-modeling technique is used. Figure 1 depicts the process-data diagram, which is explained in this section. Activities There are six main activities, which jointly form the change request management process. They are: Identify potential change, Analyze change request, Evaluate change, Plan change, Implement change and Review and close change. These activities are executed by four different roles, which are discussed in Table 1. The activities (or their sub-activities, if applicable) themselves are described in Table 2. Deliverables Besides activities, the process-data diagram (Figure 1) also shows the deliverables of each activity, i.e. the data. These deliverables or concepts are described in Table 3; in this context, the most important concepts are: CHANGE REQUEST and CHANGE LOG ENTRY. A few concepts are defined by the author (i.e. lack a reference), because either no (good) definitions could be found, or they are the obvious result of an activity. These concepts are marked with an asterisk ('*'). Properties of concepts have been left out of the model, because most of them are trivial and the diagram could otherwise quickly become too complex. Furthermore, some concepts (e.g. CHANGE REQUEST, SYSTEM RELEASE) lend themselves to the versioning approach as proposed by Weerd, but this has also been left out due to diagram complexity constraints. Besides just 'changes', one can also distinguish deviations and waivers. 
A deviation is an authorization (or a request for it) to depart from a requirement of an item, prior to the creation of it. A waiver is essentially the same, but then during or after creation of the item. These two approaches can be viewed as minimalistic change request management (i.e. no real solution to the problem at hand). Examples A good example of the change request management process in action can be found in software development. Often users report bugs or desire new functionality from their software programs, which leads to a change request. The product software company then looks into the technical and economical feasibility of implementing this change and consequently it decides whether the change will actually be realized. If that indeed is the case, the change has to be planned, for example through the usage of function points. The actual execution of the change leads to the creation and/or alteration of software code and when this change is propagated it probably causes other code fragments to change as well. After the initial test results seem satisfactory, the documentation can be brought up to date and be released, together with the software. Finally, the project manager verifies the change and closes this entry in the change log. Another typical area for change request management, in the way it is treated here, is the manufacturing domain. Take for instance the design and production of a car. If for example the vehicle's air bags are found to automatically fill with air after driving long distances, this will without a doubt lead to customer complaints (or hopefully problem reports during the testing phase). In turn, these produce a change request (see Figure 2), which will probably justify a change. Nevertheless, a – most likely simplistic – cost and benefit analysis has to be done, after which the change request can be approved. Following an analysis of the impact on the car design and production schedules, the planning for the implementation of the change can be created. According to this planning, the change can actually be realized, after which the new version of the car is hopefully thoroughly tested before it is released to the public. In process plants Since complex processes can be very sensitive to even small changes, proper management of change to industrial process plants is recognized as critical to safety. Undocumented, not properly risk-assessed changes are a recipe for disaster. A prominent example of this is the Flixborough explosion, where improvised changes involving the bypassing of a stage in a reactor train were at the origin of the accident. The change had not been properly thought out, documented and risk-assessed, so the possibility of a breach of containment had not been identified. In the US, OSHA has regulations that govern how changes are to be made and documented. The main requirement is that a thorough review of a proposed change be performed by a multi-disciplinary team to ensure that as many viewpoints as possible are used to minimize the chances of missing a hazard. In this context, change request management is known as Management of Change, or MOC. It is just one of many components of Process Safety Management, section 1910.119(l).1. 
See also Change control Change request management Engineering Change Order, Change request PRINCE2 ITIL Versioning Release management Software release life cycle Application lifecycle management Systems engineering Issue tracking system Notes and references Referenced literature and further reading Crnković I., Asklund, U. & Persson-Dahlqvist, A. (2003). Implementing and Integrating Product Data Management and Software Configuration Management. London: Artech House. Dennis, A., Wixom, B.H. & Tegarden, D. (2002). System Analysis & Design: An Object-Oriented Approach with UML. Hoboken, New York: John Wiley & Sons, Inc. Georgetown University (n.d.). Data Warehouse: Glossary. Retrieved April 13, 2006 from: https://web.archive.org/web/20060423164505/http://uis.georgetown.edu/departments/eets/dw/GLOSSARY0816.html. Hinley, D.S. (1996). Software evolution management: a process-oriented perspective. Information and Software Technology, 38, 723–730. Huang, G.H. & Mak, K.L. (1999). Current practices of engineering change management in UK manufacturing industries. International Journal of Operations & Production Management, 19(1), 21–37. IEEE (1991). Standard Glossary of Software Engineering Terminology (ANSI). The Institute of Electrical and Electronics Engineers Inc. Retrieved April 13, 2006 from: http://www.ee.oulu.fi/research/ouspg/sage/glossary/#reference_6. Mäkäräinen, M. (2000). Software change management processes in the development of embedded software. PhD dissertation. Espoo: VTT Publications. Available online: http://www.vtt.fi/inf/pdf/publications/2000/P416.pdf. Mannan, Sam (2012). Lees' Loss Prevention in the Process Industries (4th ed.). Oxford: Butterworth-Heinemann. NASA (2005). NASA IV&V Facility Metrics Data Program - Glossary and Definitions. Retrieved March 4, 2006 from: https://web.archive.org/web/20060307232014/http://mdp.ivv.nasa.gov/mdp_glossary.html. Pennsylvania State University Libraries (2004). CCL Manual: Glossary of Terms and Acronyms. Retrieved April 13, 2006 from: https://web.archive.org/web/20060615021317/http://www.libraries.psu.edu/tas/cataloging/ccl/glossary.htm. Princeton University (2003). WordNet 2.0. Retrieved April 13, 2006 from: http://dictionary.reference.com/search?q=release. Rajlich, V. (1999). Software Change and Evolution. In Pavelka, J., Tel, G. & Bartošek, M. (Eds.), SOFSEM'99, Lecture Notes in Computer Science 1725, 189–202. Rigby, K. (2003). Managing Standards: Glossary of Terms. Retrieved April 1, 2006 from: https://web.archive.org/web/20060412081603/http://sparc.airtime.co.uk/users/wysywig/gloss.htm. Scott, J.A. & Nisse, D. (2001). Software Configuration Management, Guide to Software Engineering Body of Knowledge, Chapter 7, IEEE Computer Society Press. Vogl, G. (2004). Management Information Systems: Glossary of Terms. Retrieved April 13, 2006 from Uganda Martyrs University website: https://web.archive.org/web/20060411160145/http://www.321site.com/greg/courses/mis1/glossary.htm. Weerd, I. van de (2006). Meta-modeling Technique: Draft for the course Method Engineering 05/06. Retrieved March 1, 2006 from: https://bscw.cs.uu.nl/bscw/bscw.cgi/d1009019/Instructions%20for%20the%20process-data%20diagram.pdf [restricted access]. Change management Systems engineering Process safety
Change management (engineering)
[ "Chemistry", "Engineering" ]
2,236
[ "Chemical process engineering", "Systems engineering", "Safety engineering", "Process safety" ]
4,781,152
https://en.wikipedia.org/wiki/Stair%20lift
A stair lift is a mechanical device for lifting people, typically those with disabilities, up and down stairs. For sufficiently wide stairs, a rail is mounted to the treads of the stairs. A chair or lifting platform is attached to the rail. A person gets onto the chair or platform and is lifted up or down the stairs by the chair, which moves along the rail. Stair lifts are known variously as stairlifts, stair-lifts, chair lifts, stair gliders and by other names. This type of chair lift should not be confused with the chairlift used by skiers. The term stair climber can refer either to stair lifts, or more commonly to the exercise equipment by the same name. Some of the first stair lifts to be produced commercially were advertised and sold in the U.S. in the 1930s by the Inclinator Company of America. Many users at the time were victims of polio. Today they are typically used by elderly, fall-prone individuals and disabled people who are unable to navigate stairs safely. History In the 1920s, C.C. Crispen, a Pennsylvania entrepreneur, created a way to enable an ailing friend to travel from floor to floor. Crispen's idea was to design a seat that could climb stairs. A self-taught engineer, he built the first prototype of the inclining chair. He called it the Inclin-ator. Before this, Frederick Muffett of Royal Tunbridge Wells invented and patented "An Invalid Chair with Tramway for use on Staircases". In 2009, the historian David Starkey found evidence in a list of the possessions of King Henry VIII that the king used a stairlift. The 30 stone (190 kg) king, injured through jousting, used a chair that was hauled up and down stairs on a block and tackle system by servants at the Palace of Whitehall in London. Features Modern stair lifts can be found with a wide variety of features such as adjustable seat height, battery isolation switches, call stations, 'flip-up' rail, key switch, folding step, speed governor, seat belt, soft start and soft stop. Rails Straight rails for use on domestic staircases are usually made from extruded aluminum or steel and come in various cross-sectional shapes. These rails may, typically, weigh over , depending on the length. In most applications they are attached to the steps with metal brackets (sometimes called "cleats"). If a rail crosses a doorway at the bottom of the stairs or causes an obstruction, a hinge can be fitted so the end of the rail can be folded back out of the way when not in use. Curved rails are made from materials such as steel or aluminum and come in various cross-sectional shapes according to the designer. Individual designs vary a lot and probably the key criterion is to make the curves with the smallest radius possible so they will wrap tightly around objects such as newel posts. The sections of curved rails are usually packaged well to prevent damage in transit and are unwrapped and assembled on site. Rails for wheelchair platform stair lifts may be secured to walls in addition to the step fixings. Carriages The carriage is the component which moves along the rail and normally runs on small diameter rollers. In most designs the carriage is pulled by a cable or chain, or driven along the inclined rail by a rack and pinion system or other drive arrangement. Most domestic carriages have a seat with arms and a footrest. Some special models have a stand-on platform, also known as a "perch" seat. For users with shorter legs a short seat can be fitted to make the lift more comfortable to sit on. Seats can be tailored to suit individual needs. 
The conventional layout for a typical domestic stair lift is to have the seat at right angles to the rail so the user travels "sidesaddle". At the top of the staircase the seat can be swiveled, commonly through around 45 degrees or 90 degrees, then locked in place to allow the user to alight from it onto a landing. Stair lifts are available with either a manual swivel or a powered swivel, depending on the user's ability. Most swivel seats have a safety switch so the stair lift will not move unless the seat is locked into its travel position. Special models with seats facing the bottom of the staircase have been produced for users with spinal or other conditions which prevent use of the conventional seat layout. More room is needed on the landing with these special seats. Popular types Straight-rail stair lifts These are the most common type of stair lifts used in private dwellings with straight stairs and have a straight rail (track) which is attached to the steps of the staircase. Straight-rail stair lifts can usually be installed within days of being ordered and, having a rail which is simply cut to length from a stock part, they are the least expensive stair lifts. Curved-rail stair lifts Curved stair lifts are made to follow the shape of an individual staircase (curved stairs). On staircases with intermediate flat landings they eliminate the need for multiple straight stair lifts by providing a continuous ride up the entire length of the staircase. Because the rail is custom-made to follow the staircase, and because the chair is more complex than on a straight-rail stair lift (it has to be able to remain level while traveling along a track which changes direction and angle), curved-rail stair lifts are more costly than stair lifts for straight stairs. Specifying a curved-rail stair lift usually involves careful measurement, design and manufacturing, and the installation process takes longer than for a straight domestic stair lift, usually between 5 and 10 weeks. One manufacturer, Acorn, can provide a curved-rail stair lift made from modular parts. This has the advantage of quick delivery time, even next day. The installer brings many parts and picks from them. They are usually similar in price to a custom-made curved-rail stair lift. Wheelchair vertical platform stair lifts Vertical platform lifts come under the general definition of a stair lift and are usually of a much heavier construction than a domestic stair lift due to the fact they are going to transport a wheelchair or scooter and the person. Most platform stair lifts are used in public access buildings and/or inside and outside private homes. The platform is large enough to accommodate a wheelchair or scooter and its user, and may have folding edge flaps which drop down and act as ramps to allow for variations in floor levels. These flaps also prevent the wheelchair from going over the edge of the platform. The rails are, necessarily, of heavy construction to support the load and the drive system is usually accommodated within a tubular section rail or aluminum extrusion. Some models have steel cables inside the tube, others have chains; yet others may use a rack and pinion system. Many wheelchair platform stairlifts are designed and built to order. Others may comprise a standard platform and carriage, with the only special requirement being the length of rails or tracks. Some stair lift chairs can also be moved and used as indoor wheelchairs. 
Outdoor stair lifts Outdoor stair lifts are available for straight and curved staircases. They operate similarly to indoor stair lifts but include weather-resistant features to help the unit withstand extreme temperatures and harsh weather conditions. Most often, outdoor stair lifts are used on staircases for decks, home entryways or lake access. Previously owned stair lifts There is a second-user market for some types of stair lift. This is most common with straight rail domestic types. The rails can be cut to length if too long, or extended with a "joining kit". During the early days of curved rail stair lifts there was no second user market because of the difficulty of matching rails to a different layout. Even staircases built to the same design specification in neighboring houses have variations, but in most attempted "transplants" there are too many differences to make it practicable. Many owners have had to pay to have unwanted curved stair lifts removed. More recently, some curved rails have been produced to a modular design so components can be unbolted and used elsewhere, subject to design and safety considerations. In some cases, tubular section rails which are welded during manufacture are produced by specialist rail companies so they can be used with a previously owned carriage, controls, and other components. This is, perhaps, like putting an old locomotive on new railway lines. It provides a lower cost solution than buying a totally new system. Some insurance companies have offered breakdown policies for stair lifts. Manufacturers and installers have offered an extended warranty, rather like those available for domestic white goods and brown goods. Goods stair lifts Some manufacturers produce stair lifts with trays instead of seats for moving goods between different levels, usually in commercial or industrial buildings. Some businesses have purchased normal domestic stair lifts purely as goods transporters and put items such as boxes of stationery on the seat. AC and DC power Early stair lifts mostly had alternating current (AC) drive motors which ran at full mains voltage (around 115 volts in North America, 230 volts in Europe). An "energy chain" ran alongside or through the rail to carry the power cable from the supply point to the carriage. More recently, domestic stair lifts have been powered from rechargeable batteries and use direct current (DC). One of the selling points is that a DC stair lift will continue to function during a power outage, provided the batteries are sufficiently charged. Most stair lifts have a 'charge point' where the unit will 'park' to charge its batteries. Some straight stair lifts have the ability to charge continuously no matter where they are left along the track. With most DC models the batteries are accommodated within the carriage and travel with it. Some models, however, were designed with three-phase motors and the batteries (three in total) were housed in a cabinet mounted near the top or bottom of the rail. An inverter system was used to convert the DC energy to three-phase AC. The power rating of drive motors for domestic straight rail stair lifts may be around 250 watts. The power requirement will be greater for heavy loads, very steep inclines, and wheelchair platform stair lifts. Controls Stair lifts are largely operated using a control on the arm of the lift. This is either a switch or a toggle-type lever. A larger toggle switch enables users with limited mobility or painful conditions to use stair lifts easily and safely. 
Controls

Stair lifts are largely operated using a control on the arm of the lift, either a switch or a toggle-type lever. A large toggle switch enables users, even those with limited mobility or painful conditions, to use the stair lift easily and safely.

Electronic controls are used extensively. Many stair lifts have radio frequency or infrared remote controllers. Radiation from devices such as fluorescent lights can interfere with infrared stair lift controls, and heat sources and incandescent lights can, in some circumstances, have a similar adverse effect. Control circuit design varies greatly among the different manufacturers and models. Curved rail stair lifts have more complex controls than those with straight rails: the seat may have to be tilted by an additional motor and link system so that it remains horizontal while going around curves and negotiating different angles of incline, and the carriage is slowed on bends but travels faster on straight runs.

Modern controls have small microprocessors which "learn" the characteristics of the journeys and keep the data in memory. They also record the number of journeys and their direction, which assists service engineers on maintenance calls. Some development of self-diagnostic controls began at the onset of the 21st century; the idea was that stair lifts would predict when components were starting to deteriorate and automatically pass the information to the service provider so a visit could be arranged.

Safety

To satisfy safety codes, stair lifts usually have cut-out switches connected to "safety edges" and other protective devices so that drive power is disconnected if something goes wrong. Modern lifts offer a high degree of comfort, but safety is always paramount. "Safety edges" are a common feature on the power pack and footplate: if there is an obstruction on the stairs, the stair lift will automatically stop and only travel away from the obstruction. Stair lifts are used by people of all ages, and child car seats can usually be fixed to a standard stair lift seat using the seat belt provided with the stair lift system. Many stair lifts are also fitted with a key, allowing the user to prevent others from using the lift.

Codes of practice and technical specifications apply to stair lift manufacture. In North America these codes may be relevant:
ASME A17.1 - 1990, Safety Code for Elevators and Escalators
ASME A18.1 - 2005, Safety Standard for Platform Lifts and Stairway Chairlifts
Both are produced by the American Society of Mechanical Engineers. An important specification used by stair lift manufacturers in Europe was British Standard BS 5776: 1996, Specification for Powered Stair Lifts, produced by the British Standards Institution. It has since been replaced by BS EN 81-22:2021, Safety rules for the construction and installation of lifts. Note: codes of practice and technical specifications are updated occasionally; these references may be out of date by the time they are read and are shown as examples.
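The sketch below combines two of the control ideas described above: a safety-edge interlock that stops drive power when an obstruction is detected, and the kind of journey counting and fault logging a service engineer might read back on a maintenance call. It is an illustrative toy model, not any manufacturer's actual control design; all names and values are invented for the example.

```python
# Minimal sketch of stair lift control logic: a safety-edge cut-out plus
# a journey/fault log of the kind described above. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class StairLiftController:
    journeys_up: int = 0
    journeys_down: int = 0
    fault_log: list = field(default_factory=list)

    def request_travel(self, direction: str, safety_edge_clear: bool) -> bool:
        """Start a journey only if every safety edge reports clear."""
        if not safety_edge_clear:
            # Safety edge tripped: cut drive power and record the event
            # (a real lift would then only allow travel away from the
            # obstruction, as the text above describes).
            self.fault_log.append(f"obstruction while commanded {direction}")
            return False
        if direction == "up":
            self.journeys_up += 1
        else:
            self.journeys_down += 1
        return True

ctrl = StairLiftController()
ctrl.request_travel("up", safety_edge_clear=True)     # normal journey
ctrl.request_travel("down", safety_edge_clear=False)  # obstruction: lift stops
print(ctrl.journeys_up, ctrl.journeys_down, ctrl.fault_log)
```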
Self-installation

Self-installation of stair lifts has become a common trend among people interested in DIY projects, and stair lifts designed to be self-installed are available for purchase. Professionals within the home medical equipment industry, however, do not recommend that people attempt to install these products themselves. They argue that in terms of warranty, long-term care, and service, it is much more economical to have a trained professional install the product; such professionals are also aware of the safety measures and concerns associated with proper installation of stair lifts, as well as the applicable local elevator codes.

Travel speed

Stair lifts normally have "soft" starts so the user is not jerked as the carriage starts to move. Typical travel speeds for domestic straight rail stair lift carriages are up to about 0.15 metres per second (0.34 miles per hour). The speed of curved rail stair lift carriages may vary during the journey if the controls cause them to slow on inclines and bends.
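A "soft" start is simply a speed command that ramps up gradually instead of stepping straight to cruise speed. The sketch below shows a minimal linear ramp; the 0.15 m/s cruise speed matches the figure above, while the two-second ramp time is a hypothetical value chosen for illustration.

```python
# Illustrative soft-start speed profile: the carriage ramps up to its
# cruise speed rather than starting abruptly.

CRUISE_SPEED = 0.15   # m/s, typical domestic straight-rail carriage (see text)
RAMP_TIME = 2.0       # s, assumed acceleration period

def target_speed(t: float) -> float:
    """Commanded speed t seconds after the start of a journey."""
    if t < RAMP_TIME:
        return CRUISE_SPEED * (t / RAMP_TIME)  # linear ramp = gentle start
    return CRUISE_SPEED

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"t={t:.1f}s  speed={target_speed(t):.3f} m/s")
```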
Costs

Stair lifts are highly individualized units that vary significantly in price; however, many base units begin between $3,000 and $5,000. Many options affect this base pricing, including the length of railing needed, any curves involved, seat upgrades, motor upgrades, seat swivel, seat and footplate folding, and power type.

See also
Central–Mid-Levels escalator (Hong Kong)
Elevator
Escalator
Funicular
Home lift
Lift chair
Moving walkway
People mover
Shopping cart conveyor
Wheelchair lift

References

External links
Elevator World, Volume One, No. 1, January 1953
Hansard, UK Parliament House of Commons Daily Debates record; references to stair lifts: 16 Mar 1990, Column 395; 7 May 2002, Column 3WH (Westminster Hall, Sylvia Heal in the chair); 14 Jun 2004, Column 744W
Original page, including the definition of stair lift, created for Wikipedia in April 2006 by Philip W Baker, founder member of The Stair Lift Institute, a charity which at the time was a registered member of the National Council for Voluntary Organisations

Assistive technology
Chairs
Elevators
Stairways
Vertical transport devices

Stair lift
[ "Technology", "Engineering" ]
3,061
[ "Building engineering", "Vertical transport devices", "Transport systems", "Elevators" ]