id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
2,001,621 | https://en.wikipedia.org/wiki/Solutions%20of%20the%20Einstein%20field%20equations | Solutions of the Einstein field equations are metrics of spacetimes that result from solving the Einstein field equations (EFE) of general relativity. Solving the field equations gives a Lorentz manifold. Solutions are broadly classed as exact or non-exact.
The Einstein field equations are

G_μν + Λ g_μν = κ T_μν,

where G_μν is the Einstein tensor, Λ is the cosmological constant (sometimes taken to be zero for simplicity), g_μν is the metric tensor, κ is a constant, and T_μν is the stress–energy tensor.
The Einstein field equations relate the Einstein tensor to the stress–energy tensor, which represents the distribution of energy, momentum and stress in the spacetime manifold. The Einstein tensor is built up from the metric tensor and its partial derivatives; thus, given the stress–energy tensor, the Einstein field equations are a system of ten partial differential equations in which the metric tensor can be solved for.
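The following is a minimal symbolic sketch of this statement (not part of the original article; the coordinates, symbol names, and the choice of the Schwarzschild metric are illustrative assumptions). It builds the Einstein tensor from a metric and its partial derivatives with sympy and verifies that the Schwarzschild metric solves the vacuum field equations, i.e. that its Einstein tensor vanishes:

```python
import sympy as sp

# Schwarzschild metric in (t, r, theta, phi) coordinates with G = c = 1 (illustrative choice).
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # metric g_mu_nu
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = 1/2 g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d])) for d in range(4))/2)
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ab}
def ricci(b, c):
    expr = 0
    for a in range(4):
        expr += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][a][b], x[c])
        for d in range(4):
            expr += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][a][b]
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, ricci)
Rscalar = sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(4) for b in range(4)))
Einstein = sp.simplify(Ric - sp.Rational(1, 2)*Rscalar*g)
print(Einstein)   # the zero matrix: Schwarzschild is a vacuum solution
```

With matter present, the right-hand side κ T_μν is nonzero and the same ten independent components become coupled partial differential equations for the metric components.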
Solving the equations
It is important to realize that the Einstein field equations alone are not enough to determine the evolution of a gravitational system in many cases. They depend on the stress–energy tensor, which depends on the dynamics of matter and energy (such as trajectories of moving particles), which in turn depends on the gravitational field. If one is only interested in the weak field limit of the theory, the dynamics of matter can be computed using special relativity methods and/or Newtonian laws of gravity and the resulting stress–energy tensor can then be plugged into the Einstein field equations. But if one requires an exact solution or a solution describing strong fields, the evolution of both the metric and the stress–energy tensor must be solved for at once.
To obtain solutions, the relevant equations are the above quoted EFE (in either form) plus the continuity equation (to determine the evolution of the stress–energy tensor):

∇_ν T^μν = 0.
These amount to only 14 equations (10 from the field equations and 4 from the continuity equation) and are by themselves insufficient for determining the 20 unknowns (10 metric components and 10 stress–energy tensor components). The equations of state are missing. In the most general case, it's easy to see that at least 6 more equations are required, possibly more if there are internal degrees of freedom (such as temperature) which may vary throughout spacetime.
In practice, it is usually possible to simplify the problem by replacing the full set of equations of state with a simple approximation. Some common approximations are:
Vacuum:

T_μν = 0

Perfect fluid:

T_μν = (ρ + p) u_μ u_ν + p g_μν

where the 4-velocity satisfies u_μ u^μ = -1 (in units with c = 1).

Here ρ is the mass–energy density measured in a momentary co-moving frame, u_μ is the fluid's 4-velocity vector field, and p is the pressure.

Non-interacting dust (a special case of perfect fluid, with p = 0):

T_μν = ρ u_μ u_ν
For a perfect fluid, another equation of state relating density and pressure must be added. This equation will often depend on temperature, so a heat transfer equation is required or the postulate that heat transfer can be neglected.
Next, notice that only 10 of the original 14 equations are independent, because the continuity equation is a consequence of Einstein's equations. This reflects the fact that the system is gauge invariant (in general, absent some symmetry, any choice of a curvilinear coordinate net on the same system would correspond to a numerically different solution.) A "gauge fixing" is needed, i.e. we need to impose 4 (arbitrary) constraints on the coordinate system in order to obtain unequivocal results. These constraints are known as coordinate conditions.
A popular choice of gauge is the so-called "De Donder gauge", also known as the harmonic condition or harmonic gauge:

g^μν Γ^σ_μν = 0
In numerical relativity, the preferred gauge is the so-called "3+1 decomposition", based on the ADM formalism. In this decomposition, the metric is written in the form

ds² = (-α² + β_i β^i) dt² + 2 β_i dt dx^i + γ_ij dx^i dx^j,

where the lapse α and the shift β^i are functions of the spacetime coordinates and can be chosen arbitrarily at each point. The remaining physical degrees of freedom are contained in γ_ij, which represents the Riemannian metric on the 3-hypersurfaces of constant t. For example, a naive choice of α = 1, β^i = 0 would correspond to a so-called synchronous coordinate system: one where the t-coordinate coincides with proper time for any comoving observer (a particle that moves along a fixed trajectory).
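As a hedged illustration (not from the article; the function name and the flat example values are assumptions), the following numpy sketch assembles the 4-metric of the 3+1 decomposition from a lapse α, a shift β^i and a spatial metric γ_ij, and shows that the naive synchronous choice α = 1, β^i = 0 gives g_00 = -1 and vanishing g_0i:

```python
import numpy as np

def adm_metric(alpha, beta_up, gamma):
    """Assemble the 4x4 metric g_{mu nu} from the ADM lapse, shift and spatial metric."""
    beta_down = gamma @ beta_up                  # lower the index of the shift
    g = np.zeros((4, 4))
    g[0, 0] = -alpha**2 + beta_up @ beta_down    # g_tt
    g[0, 1:] = beta_down                         # g_ti
    g[1:, 0] = beta_down
    g[1:, 1:] = gamma                            # g_ij
    return g

# Naive synchronous choice: alpha = 1, beta^i = 0, flat spatial metric.
print(adm_metric(1.0, np.zeros(3), np.eye(3)))   # diag(-1, 1, 1, 1)
```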
Once equations of state are chosen and the gauge is fixed, the complete set of equations can be solved. Unfortunately, even in the simplest case of gravitational field in the vacuum (vanishing stress–energy tensor), the problem is too complex to be exactly solvable. To get physical results, we can either turn to numerical methods, try to find exact solutions by imposing symmetries, or try middle-ground approaches such as perturbation methods or linear approximations of the Einstein tensor.
Exact solutions
Exact solutions are Lorentz metrics that are conformable to a physically realistic stress–energy tensor and which are obtained by solving the EFE exactly in closed form.
External reference
Scholarpedia article on the subject written by Malcolm MacCallum
Non-exact solutions
The solutions that are not exact are called non-exact solutions. Such solutions mainly arise due to the difficulty of solving the EFE in closed form and often take the form of approximations to ideal systems. Many non-exact solutions may be devoid of physical content, but serve as useful counterexamples to theoretical conjectures.
Applications
There are practical as well as theoretical reasons for studying solutions of the Einstein field equations.
From a purely mathematical viewpoint, it is interesting to know the set of solutions of the Einstein field equations. Some of these solutions are parametrised by one or more parameters. From a physical standpoint, knowing the solutions of the Einstein field equations allows highly precise modelling of astrophysical phenomena, including black holes, neutron stars, and stellar systems. Predictions can be made analytically about the system analyzed; such predictions include the perihelion precession of Mercury, the existence of a co-rotating region inside spinning black holes, and the orbits of objects around massive bodies.
See also
Ricci calculus
Albert Einstein
References
General relativity
Albert Einstein | Solutions of the Einstein field equations | [
"Physics"
] | 1,216 | [
"General relativity",
"Theory of relativity"
] |
2,001,897 | https://en.wikipedia.org/wiki/Electric-field%20integral%20equation | The electric-field integral equation is a relationship that allows the calculation of an electric field (E) generated by an electric current distribution (J).
Derivation
When all quantities are considered in the frequency domain, a harmonic time-dependency (conventionally e^jωt) that is suppressed throughout is assumed.
Beginning with the Maxwell equations relating the electric and magnetic field, and assuming a linear, homogeneous medium with permeability μ and permittivity ε:

∇ × E = -jωB
∇ × B = μJ + jωμεE
∇ · B = 0
∇ · E = ρ/ε

Following the third equation involving the divergence of B, by vector calculus we can write any divergenceless vector as the curl of another vector, hence

B = ∇ × A,

where A is called the magnetic vector potential. Substituting this into the above we get

∇ × (E + jωA) = 0,

and any curl-free vector can be written as the gradient of a scalar, hence

E + jωA = -∇Φ,

where Φ is the electric scalar potential. These relationships now allow us to write

∇ × ∇ × A - k²A = μJ - jωμε∇Φ,

where k = ω√(με), which can be rewritten by vector identity as

∇(∇ · A) - ∇²A - k²A = μJ - jωμε∇Φ.

As we have only specified the curl of A, we are free to define the divergence, and choose the following:

∇ · A = -jωμεΦ,

which is called the Lorenz gauge condition. The previous expression for A now reduces to

∇²A + k²A = -μJ,

which is the vector Helmholtz equation. The solution of this equation for A is

A(r) = μ ∫ G(r, r′) J(r′) dr′,

where G(r, r′) is the three-dimensional homogeneous Green's function given by

G(r, r′) = e^(-jk|r - r′|) / (4π|r - r′|).

We can now write what is called the electric field integral equation (EFIE), relating the electric field E to the vector potential A:

E = -jωA - (j/(ωμε)) ∇(∇ · A).

We can further represent the EFIE in the dyadic form as

E(r) = -jωμ ∫ G(r, r′) · J(r′) dr′,

where G(r, r′) here is the dyadic homogeneous Green's function given by

G(r, r′) = (1/(4π)) [I + (∇∇)/k²] e^(-jk|r - r′|) / |r - r′|.
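As a rough numerical illustration (not part of the article; the frequency, geometry, and current values are arbitrary assumptions), the following Python sketch discretizes a short z-directed current filament and evaluates the vector potential A(r) = μ ∫ G(r, r′) J(r′) dr′ at an observation point using the scalar homogeneous Green's function; the electric field then follows from the EFIE above:

```python
import numpy as np

mu0 = 4e-7*np.pi
eps0 = 8.8541878128e-12
f = 300e6                                   # 300 MHz (illustrative)
omega = 2*np.pi*f
k = omega*np.sqrt(mu0*eps0)                 # free-space wavenumber

def green(r, rp):
    """Scalar homogeneous Green's function exp(-jkR) / (4 pi R)."""
    R = np.linalg.norm(r - rp)
    return np.exp(-1j*k*R)/(4*np.pi*R)

# Discretize a 0.1 m filament carrying I0 = 1 A along z into small segments.
N, length, I0 = 101, 0.1, 1.0
zs = np.linspace(-length/2, length/2, N)
dl = length/(N - 1)
sources = np.column_stack([np.zeros(N), np.zeros(N), zs])

r_obs = np.array([5.0, 0.0, 0.0])           # observation point 5 m away
Az = mu0*sum(green(r_obs, rp)*I0*dl for rp in sources)
print("A_z at r_obs:", Az)                  # complex value; E follows from the EFIE
```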
Interpretation
The EFIE describes a radiated field given a set of sources J, and as such it is the fundamental equation used in antenna analysis and design. It is a very general relationship that can be used to compute the radiated field of any sort of antenna once the current distribution on it is known. The most important aspect of the EFIE is that it allows us to solve the radiation/scattering problem in an unbounded region, or one whose boundary is located at infinity. For closed surfaces, it is possible to use the magnetic field integral equation (MFIE) or the combined field integral equation (CFIE), both of which result in a set of equations with improved condition number compared to the EFIE. However, the MFIE and CFIE can still contain resonances.
In scattering problems, it is desirable to determine an unknown scattered field Es that is due to a known incident field Ei. Unfortunately, the EFIE relates the scattered field to the current J, not the incident field, so we do not know what J is. This sort of problem can be solved by imposing the boundary conditions on the incident and scattered field, allowing one to write the EFIE in terms of Ei and J alone. Once this has been done, the integral equation can then be solved by a numerical technique appropriate to integral equations such as the method of moments.
Notes
By the Helmholtz theorem a vector field is described completely by its divergence and curl. As the divergence was not defined, we are justified in choosing the Lorenz gauge condition above, provided that we consistently use this definition of the divergence of A in all subsequent analysis. However, other choices for ∇ · A are just as valid and lead to other equations, which all describe the same phenomena; the solutions of the equations for any choice of ∇ · A lead to the same electromagnetic fields, and to the same physical predictions about the fields and the charges accelerated by them.
It is natural to think that if a quantity exhibits this degree of freedom in its choice, then it should not be interpreted as a real physical quantity. After all, if we can freely choose A to be anything, then A is not unique. One may ask: what is the "true" value of A measured in an experiment? If A is not unique, then the only logical answer must be that we can never measure the value of A. On this basis, it is often stated that it is not a real physical quantity, and it is believed that the fields E and B are the true physical quantities.
However, there is at least one experiment in which the values of E and B are both zero at the location of a charged particle, yet the particle is nevertheless affected by the presence of a local magnetic vector potential; see the Aharonov–Bohm effect for details. Nevertheless, even in the Aharonov–Bohm experiment, the divergence of A never enters the calculations; only A along the path of the particle determines the measurable effect.
References
Gibson, Walton C. The Method of Moments in Electromagnetics. Chapman & Hall/CRC, 2008.
Harrington, Roger F. Time-Harmonic Electromagnetic Fields. McGraw-Hill, Inc., 1961.
Balanis, Constantine A. Advanced Engineering Electromagnetics. Wiley, 1989.
Chew, Weng C. Waves and Fields in Inhomogeneous Media. IEEE Press, 1995.
Rao, Wilton, Glisson. Electromagnetic Scattering by Surfaces of Arbitrary Shape. IEEE Transactions on Antennas and Propagation, vol, AP-30, No. 3, May 1982. doi:10.1109/TAP.1982.1142818
Electromagnetism
Integral equations | Electric-field integral equation | [
"Physics",
"Mathematics"
] | 1,002 | [
"Electromagnetism",
"Physical phenomena",
"Integral equations",
"Mathematical objects",
"Equations",
"Fundamental interactions"
] |
2,002,473 | https://en.wikipedia.org/wiki/Solid-state%20nuclear%20magnetic%20resonance | Solid-state nuclear magnetic resonance (ssNMR) is a spectroscopy technique used to characterize atomic-level structure and dynamics in solid materials. ssNMR spectra are broader than those of solution-state NMR due to nuclear spin interactions, which can be categorized as dipolar coupling, chemical shielding, quadrupolar interactions, and J-coupling. These interactions directly affect the line shapes of experimental ssNMR spectra, which can be seen in powder and dipolar patterns. There are many essential solid-state techniques alongside advanced ssNMR techniques that may be applied to elucidate the fundamental aspects of solid materials. ssNMR is often combined with magic angle spinning (MAS) to remove anisotropic interactions and improve the sensitivity of the technique. The applications of ssNMR further extend to biology and medicine.
Nuclear spin interactions
The resonance frequency of a nuclear spin depends on the strength of the magnetic field at the nucleus, which can be modified by isotropic (e.g. chemical shift, isotropic J-coupling) and anisotropic interactions (e.g. chemical shift anisotropy, dipolar interactions). In a classical liquid-state NMR experiment, molecular tumbling coming from Brownian motion averages anisotropic interactions to zero and they are therefore not reflected in the NMR spectrum. However, in media with no or little mobility (e.g. crystalline powders, glasses, large membrane vesicles, molecular aggregates), anisotropic local fields or interactions have substantial influence on the behaviour of nuclear spins, which results in the line broadening of the NMR spectra.
Chemical shielding
Chemical shielding is a local property of each nuclear site in a molecule or compound, and is proportional to the applied external magnetic field. The external magnetic field induces currents of the electrons in molecular orbitals. These induced currents create local magnetic fields that lead to characteristic changes in resonance frequency. These changes can be predicted from molecular structure using empirical rules or quantum-chemical calculations.
In general, the chemical shielding is anisotropic because of the anisotropic distribution of molecular orbitals around the nuclear sites. Under sufficiently fast magic angle spinning, or under the effect of molecular tumbling in solution-state NMR, the anisotropic dependence of the chemical shielding is time-averaged to zero, leaving only the isotropic chemical shift.
Dipolar coupling
Nuclear spins exhibit a magnetic dipole moment, which generates a magnetic field that interacts with the dipole moments of other nuclei (dipolar coupling). The magnitude of the interaction is dependent on the gyromagnetic ratios of the spin species, the internuclear distance r, and the orientation, with respect to the external magnetic field B, of the vector connecting the two nuclear spins (see figure). The maximum dipolar coupling is given by the dipolar coupling constant d,

d = (μ0/4π) · γ1γ2ħ / r³,

where γ1 and γ2 are the gyromagnetic ratios of the nuclei, ħ is the reduced Planck constant, and μ0 is the vacuum permeability. In a strong magnetic field, the dipolar coupling D depends on the angle θ between the internuclear vector and the external magnetic field B (figure) according to

D ∝ 3cos²θ - 1.

D becomes zero for 3cos²θ - 1 = 0, i.e. for θ ≈ 54.7°. Consequently, two nuclei with a dipolar coupling vector at an angle of θm = 54.7° to a strong external magnetic field have zero dipolar coupling. θm is called the magic angle. Magic angle spinning is typically used to remove dipolar couplings weaker than the spinning rate.
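A small numerical sketch (not from the article; the 2.0 Å proton pair is an illustrative assumption) evaluates the dipolar coupling constant d and checks that the orientation factor 3cos²θ - 1 vanishes at the magic angle:

```python
import numpy as np

# Dipolar coupling constant d = (mu0 / 4 pi) * gamma1 * gamma2 * hbar / r^3
# for two protons 2.0 angstrom apart (illustrative geometry).
mu0 = 4e-7*np.pi
hbar = 1.054571817e-34
gammaH = 2.6752218744e8          # 1H gyromagnetic ratio, rad s^-1 T^-1
r = 2.0e-10                      # internuclear distance, m

d = (mu0/(4*np.pi)) * gammaH**2 * hbar / r**3
print(f"d = {d/(2*np.pi)/1e3:.1f} kHz")              # roughly 15 kHz

theta_m = np.arccos(1/np.sqrt(3))                    # magic angle
print(np.degrees(theta_m))                           # 54.7356... degrees
print(3*np.cos(theta_m)**2 - 1)                      # ~0: dipolar coupling vanishes
```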
Quadrupolar interaction
Nuclei with a spin quantum number >1/2 have a non-spherical charge distribution and a quadrupole moment. The quadrupole moment is a second-rank tensor that couples to the surrounding electric field gradient, another second-rank tensor. Nuclear quadrupole coupling is typically the second-largest interaction in NMR, sometimes comparable in size to the largest one, the Zeeman interaction. When the nuclear quadrupole coupling is not negligible relative to the Zeeman coupling, higher-order corrections are needed to describe the NMR spectrum correctly. In such cases, the first-order correction to the NMR transition frequency leads to a strong anisotropic line broadening of the NMR spectrum. However, all symmetric transitions, between the m and -m levels, are unaffected by the first-order frequency contribution. The second-order frequency contribution depends on the P4 Legendre polynomial, which has zero points at 30.6° and 70.1°. These anisotropic broadenings can be removed using DOR (DOuble Rotation), where the sample is spun about two angles at the same time, or DAS (Dynamic Angle Spinning), where the spinning axis is switched quickly between the two angles. Both techniques were developed in the late 1980s, and require specialized hardware (probe). Multiple quantum magic angle spinning (MQMAS) NMR was developed in 1995 and has become a routine method for obtaining high-resolution solid-state NMR spectra of quadrupolar nuclei. A similar method to MQMAS is satellite transition magic angle spinning (STMAS) NMR, developed in 2000.
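The two angles quoted for DOR and DAS can be checked numerically; the following sketch (an illustration, not part of the article) finds the zeros of the fourth Legendre polynomial P4(cosθ) = (35cos⁴θ - 30cos²θ + 3)/8:

```python
import numpy as np

# Zeros of P4(x) with x = cos(theta): solve 35 u^2 - 30 u + 3 = 0 for u = x^2.
u = np.roots([35, -30, 3])
angles = np.degrees(np.arccos(np.sqrt(u)))
print(np.sort(angles))          # approximately [30.56, 70.12] degrees
```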
J-coupling
The J-coupling or indirect nuclear spin-spin coupling (sometimes also called "scalar" coupling despite the fact that J is a tensor quantity) describes the interaction of nuclear spins through chemical bonds. J-couplings are not always resolved in solids owing to the typically large linewidths observed in solid-state NMR.
Other interactions
Paramagnetic substances are subject to the Knight shift.
Solid-state NMR line shapes
Powder pattern
A powder pattern arises in powdered samples where crystallites are randomly oriented relative to the magnetic field, so that all molecular orientations are present. In the presence of a chemical shift anisotropy interaction, each orientation with respect to the magnetic field gives a different resonance frequency. If enough crystallites are present, all the different contributions overlap continuously and lead to a smooth spectrum.
Fitting of the pattern in a static ssNMR experiment gives information about the shielding tensor, which is often described by the isotropic chemical shift, the chemical shift anisotropy parameter, and the asymmetry parameter.
Dipolar pattern
The dipolar powder pattern (also Pake pattern) has a very characteristic shape that arises when two nuclear spins are coupled together within a crystallite. The splitting between the maxima (the "horns") of the pattern is equal to the dipolar coupling constant d:

d = (μ0/4π) · γ1γ2ħ / r³,

where γ1 and γ2 are the gyromagnetic ratios of the dipolar-coupled nuclei, r is the internuclear distance, ħ is the reduced Planck constant, and μ0 is the vacuum permeability.
Essential solid-state techniques
Magic angle spinning
Magic angle spinning (MAS) is a technique routinely used in ssNMR to improve the resolution of ssNMR spectra. After applying the MAS technique, NMR lines become sharper and narrower. This improved resolution results from manipulating a sample's spin interactions with the applied magnetic field. It is achieved by rotating the sample at a certain angle to the magnetic field to fully or partially average out anisotropic nuclear interactions such as dipolar, chemical shift anisotropy, and quadrupolar interactions. This rotation angle is called the magic angle θm (ca. 54.74°, where cos²θm = 1/3). To achieve the complete averaging of these interactions, the sample needs to be spun at a rate that is at least higher than the largest anisotropic interaction.
Spinning a powder sample at a slower rate than the largest component of the chemical shift anisotropy results in an incomplete averaging of the interaction, and produces a set of spinning sidebands in addition to the isotropic line, centred at the isotropic chemical shift. Spinning sidebands are sharp lines separated from the isotropic frequency by a multiple of the spinning rate. Although spinning sidebands can be used to measure anisotropic interactions, they are often undesirable and removed by spinning the sample faster or by recording the data points synchronously with the rotor period.
Cross-polarization
Cross-polarization (CP) is a fundamental radiofrequency (RF) pulse sequence and a building block of many solid-state NMR experiments. It is typically used to enhance the signal of dilute nuclei with a low gyromagnetic ratio (e.g. 13C, 15N) by magnetization transfer from abundant nuclei with a high gyromagnetic ratio (e.g. 1H), or as a spectral editing method to obtain through-space information (e.g. directed 15N → 13C CP in protein spectroscopy).
To establish magnetization transfer, RF pulses ("contact pulses") are simultaneously applied on both frequency channels to produce B1 fields whose strengths fulfil the Hartmann–Hahn condition:

γ1B1(1) = γ2B1(2) ± n·νr,

where γ1 and γ2 are the gyromagnetic ratios of the two nuclei, νr is the spinning rate, and n is an integer. In practice, the pulse power, as well as the length of the contact pulse, are experimentally optimised. The power of one contact pulse is typically ramped to achieve a more broadband and efficient magnetisation transfer.
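As a hedged numerical example (not from the article; the spinning rate and nutation frequencies are arbitrary illustrative values), the following sketch picks a 1H contact-pulse field strength, expressed as a nutation frequency ν1 = γB1/2π, that satisfies the n = 1 sideband match of the Hartmann–Hahn condition under MAS:

```python
# Match the 1H nutation frequency to the 13C one plus one spinning sideband.
nu_r = 12.5e3          # spinning rate, Hz (illustrative)
nu1_C = 50.0e3         # 13C nutation frequency during the contact pulse, Hz (illustrative)
n = 1                  # sideband order

nu1_H = nu1_C + n*nu_r
print(f"matched 1H nutation frequency: {nu1_H/1e3:.1f} kHz")   # 62.5 kHz
```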
Decoupling
Spin interactions can be removed (decoupled) to increase the resolution of NMR spectra during the detection, or to extend the lifetime of the nuclear magnetization.
Heteronuclear decoupling is achieved by radio-frequency irradiation at the frequency of the nucleus to be decoupled, which is often 1H. The irradiation can be continuous (continuous wave decoupling), or a series of pulses that extend the performance and the bandwidth of the decoupling (TPPM, SPINAL-64, SWf-TPPM).
Homonuclear decoupling is achieved with multiple-pulse sequences (WAHUHA, MREV-8, BR-24, BLEW-12, FSLG), or continuous wave modulation (DUMBO, eDUMBO). Dipolar interactions can also be removed with magic angle spinning. Ultra fast MAS (from 60 kHz up to above 111 kHz) is an efficient way to average all dipolar interactions, including 1H–1H homonuclear dipolar interactions, which extends the resolution of 1H spectra and enables the usage of pulse sequences used in solution state NMR.
Advanced solid-state NMR spectroscopy
Rotational Echo DOuble Resonance (REDOR)
Rotational Echo DOuble Resonance (REDOR) experiments are a type of heteronuclear dipolar recoupling experiment which enables the re-introduction of heteronuclear dipolar couplings averaged out by MAS. The reintroduction of such dipolar couplings reduces the intensity of the NMR signal compared to a reference spectrum where no dephasing pulse is used. REDOR can be used to measure heteronuclear distances, and is a basis of NMR crystallographic studies.
Ultra fast MAS for 1H NMR
The strong 1H-1H homonuclear dipolar interactions, and the broad NMR lines and short T2 relaxation times associated with them, long limited the usefulness of protons in biomolecular solid-state NMR. Fast MAS and the reduction of dipolar interactions by deuteration have made proton ssNMR as versatile as solution-state NMR. This includes spectral dispersion in multi-dimensional experiments and structurally valuable restraints and parameters important for studying material dynamics.
Ultra-fast MAS and the resulting sharpening of the NMR lines enable NMR pulse sequences to capitalize on proton detection to improve the sensitivity of experiments compared to the direct detection of a spin-1/2 system (X). The enhancement factor depends on the gyromagnetic ratios, the NMR line widths, and the quality factors of the probe resonances for the two nuclei.
MAS-Dynamic Nuclear Polarisation (MAS-DNP)
Magic angle spinning dynamic nuclear polarization (MAS-DNP) is a technique that increases the sensitivity of NMR experiments by several orders of magnitude. It involves transferring the very high electron polarisation from unpaired electrons to nearby nuclei. This is achieved at cryogenic temperatures through continuous microwave irradiation from a klystron or a gyrotron, with a frequency close to the corresponding electron paramagnetic resonance (EPR) frequency.
Developments in MAS-DNP instrumentation, and the improvement of polarising agents (TOTAPOL, AMUPOL, TEKPOL, etc.) to achieve a more efficient transfer of polarisation, have dramatically reduced experiment times. This has enabled the observation of surfaces, insensitive isotopes, multidimensional experiments on low natural abundance nuclei, and diluted species.
Beta-Detected Nuclear Magnetic Resonance (β-NMR)
Beta-detected nuclear magnetic resonance (β-NMR) is a specialized technique whose working principles are similar to those of muon spin spectroscopy. It is used as a powerful probe in domains such as chemistry, materials science, condensed matter physics, and biology. β-NMR is practiced at facilities such as TRIUMF and ISOLDE, as well as by research groups in Osaka and Moscow.
What makes β-NMR different from conventional NMR is, firstly, where and when the spin polarization of the nuclei occurs and, secondly, how the signal is produced. To conduct a β-NMR experiment, optical pumping is performed on a radioactive beam of particles, such as 8Li and 31Mg, to polarize their nuclear spin to nearly one hundred percent. The isotopes are subsequently implanted into a sample in vacuum in the dilute limit to eliminate homonuclear probe interactions. The spin–lattice relaxation of the probe is monitored through the parity-violating beta decay of the radioactive isotope. This anisotropic decay is where the signal originates in a β-NMR experiment. The technique allows for investigation of the local magnetic and electronic environment within a material.
Applications
ssNMR spectroscopy serves as an effective analytical tool in biological, organic, and inorganic chemistry due to its close resemblance to liquid-state spectra while providing additional insights into anisotropic interactions.
It is used to characterize chemical composition, structure, local motions, kinetics, and thermodynamics, with the special ability to assign the observed behavior to specific sites in a molecule. It is also crucial in the area of surface and interfacial chemistry.
Biology and Medicine
Proteins and bioaggregates
ssNMR is used to study insoluble proteins and protein assemblies such as membrane proteins and amyloid fibrils. Using the principles of MAS, protein tertiary structure information can be determined. This includes the assessment of protein dynamics.
Biomaterials
ssNMR is used to study biomaterials such as bone, teeth, hair, silk, wood, as well as viruses, plants, cells, and collected biopsies.
Drugs and drug delivery systems
ssNMR is used in pharmaceutical research for the characterization of drug polymorphs and solid dispersions.
Materials science
ssNMR spectroscopy is used in materials science to analyze solid samples. Here, molecules have restricted motion, which leads to complex magnetic interactions such as dipole-dipole coupling, chemical shift anisotropy, and quadrupolar interactions. These interactions can provide more detailed information about the material's structure than X-ray diffraction or solution NMR spectroscopy, elucidating the local structure and dynamics of the solid (crystalline and non-crystalline).
ssNMR has been successfully used to study metal organic frameworks, solid-state batteries, surfaces of nanoporous materials, and polymers.
References
Suggested readings for beginners
General NMR
Solid-state NMR
Levitt, Malcolm H., Spin Dynamics: Basics of Nuclear Magnetic Resonance, Wiley, Chichester, United Kingdom, 2001. (NMR basics, including solids)
Duer, Melinda J., Introduction to Solid-State NMR Spectroscopy, Blackwell, Oxford, 2004. (Some detailed examples of ssNMR spectroscopy)
Schmidt-Rohr, K. and Spiess, H.-W., Multidimensional Solid-State NMR and Polymers, Academic Press, San Diego, 1994.
External links
mrsimulator Python package for simulating solid-state NMR spectra.
SSNMRBLOG Solid-State NMR Literature Blog by Prof. Rob Schurko's Solid-State NMR group at the University of Windsor
Nuclear magnetic resonance
Scientific techniques | Solid-state nuclear magnetic resonance | [
"Physics",
"Chemistry"
] | 3,358 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
2,002,950 | https://en.wikipedia.org/wiki/Boron%20trioxide | Boron trioxide or diboron trioxide is the oxide of boron with the formula B2O3. It is a colorless transparent solid, almost always glassy (amorphous), which can be crystallized only with great difficulty. It is also called boric oxide or boria. It has many important industrial applications, chiefly in ceramics as a flux for glazes and enamels and in the production of glasses.
Structure
Boron trioxide has three known forms, one amorphous and two crystalline.
Amorphous form
The amorphous form (g-B2O3) is by far the most common. It is thought to be composed of boroxol rings, which are six-membered rings of alternating 3-coordinate boron and 2-coordinate oxygen.
Because of the difficulty of building disordered models at the correct density with many boroxol rings, this view was initially controversial, but such models have recently been constructed and exhibit properties in excellent agreement with experiment. It is now recognized, from experimental and theoretical studies, that the fraction of boron atoms belonging to boroxol rings in glassy B2O3 is somewhere between 0.73 and 0.83, with 0.75 = 3/4 corresponding to a 1:1 ratio between ring and non-ring units. The number of boroxol rings decays in the liquid state with increasing temperature.
Crystalline α form
The crystalline form (α-B2O3) is exclusively composed of BO3 triangles. Its crystal structure was initially believed to be in the enantiomorphic space groups P31 (#144) and P32 (#145), like γ-glycine, but was later revised to the enantiomorphic space groups P3121 (#152) and P3221 (#154) in the trigonal crystal system, like α-quartz.
Crystallization of α-B2O3 from the molten state at ambient pressure is strongly kinetically disfavored (compare liquid and crystal densities). It can be obtained by prolonged annealing of the amorphous solid at ~200 °C under at least 10 kbar of pressure.
Crystalline β form
The trigonal network undergoes a coesite-like transformation to monoclinic β-B2O3 at several gigapascals (9.5 GPa).
Preparation
Boron trioxide is produced by treating borax with sulfuric acid in a fusion furnace. At temperatures above 750 °C, the molten boron oxide layer separates out from sodium sulfate. It is then decanted, cooled and obtained in 96–97% purity.
Another method is heating boric acid above ~300 °C. Boric acid will initially decompose into steam (H2O(g)) and metaboric acid (HBO2) at around 170 °C, and further heating above 300 °C will produce more steam and diboron trioxide. The reactions are:
H3BO3 → HBO2 + H2O
2 HBO2 → B2O3 + H2O
Boric acid goes to anhydrous microcrystalline B2O3 in a heated fluidized bed. A carefully controlled heating rate avoids gumming as water evolves.
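As a worked illustration of the overall dehydration stoichiometry, 2 H3BO3 → B2O3 + 3 H2O (a sketch, not part of the article; the 1 kg batch size is an arbitrary assumption):

```python
# Mass of B2O3 obtainable from 1 kg of boric acid via 2 H3BO3 -> B2O3 + 3 H2O.
M_H3BO3 = 61.83        # g/mol
M_B2O3 = 69.62         # g/mol

m_acid = 1000.0                        # g of boric acid (assumed batch size)
n_acid = m_acid / M_H3BO3              # mol of boric acid
m_oxide = (n_acid / 2) * M_B2O3        # 2 mol of acid give 1 mol of oxide
print(f"{m_oxide:.0f} g of B2O3")      # about 563 g
```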
Boron oxide will also form when diborane (B2H6) reacts with oxygen in the air or trace amounts of moisture:
2 B2H6(g) + 3 O2(g) → 2 B2O3(s) + 6 H2(g)
B2H6(g) + 3 H2O(g) → B2O3(s) + 6 H2(g)
Reactions
Molten boron oxide attacks silicates. Containers can be passivated internally with a graphitized carbon layer obtained by thermal decomposition of acetylene.
Applications
Major component of borosilicate glass
Fluxing agent for glass and enamels
An additive used in glass fibres (optical fibres)
The inert capping layer in the Liquid Encapsulation Czochralski process for the production of gallium arsenide single crystal
As an acid catalyst in organic synthesis
As a starting material for the production of other boron compounds, such as boron carbide
See also
Boron suboxide
Boric acid
Sassolite
Tris(2,2,2-trifluoroethyl) borate
References
External links
National Pollutant Inventory: Boron and compounds
Australian Government information
US NIH hazard information. See NIH.
Material Safety Data Sheet
CDC - NIOSH Pocket Guide to Chemical Hazards - Boron oxide
Boron compounds
Acidic oxides
Glass compositions
Sesquioxides | Boron trioxide | [
"Chemistry"
] | 926 | [
"Glass compositions",
"Glass chemistry"
] |
2,003,406 | https://en.wikipedia.org/wiki/Photovoltaic%20effect | The photovoltaic effect is the generation of voltage and electric current in a material upon exposure to light. It is a physical phenomenon.
The photovoltaic effect is closely related to the photoelectric effect. For both phenomena, light is absorbed, causing excitation of an electron or other charge carrier to a higher-energy state. The main distinction is that the term photoelectric effect is now usually used when the electron is ejected out of the material (usually into a vacuum) and photovoltaic effect used when the excited charge carrier is still contained within the material. In either case, an electric potential (or voltage) is produced by the separation of charges, and the light has to have sufficient energy to overcome the potential barrier for excitation. The physical essence of the difference is usually that photoelectric emission separates the charges by ballistic conduction and photovoltaic emission separates them by diffusion, but some "hot carrier" photovoltaic device concepts blur this distinction.
History
The first demonstration of the photovoltaic effect, by Edmond Becquerel in 1839, used an electrochemical cell. He explained his discovery in Comptes rendus de l'Académie des sciences, "the production of an electric current when two plates of platinum or gold immersed in an acid, neutral, or alkaline solution are exposed in an uneven way to solar radiation."
The first solar cell, consisting of a layer of selenium covered with a thin film of gold, was built by Charles Fritts in 1884, but it had very poor efficiency. However, the most familiar form of the photovoltaic effect uses solid-state devices, mainly in photodiodes. When sunlight or other sufficiently energetic light is incident upon the photodiode, the electrons present in the valence band absorb energy and, being excited, jump to the conduction band and become free. These excited electrons diffuse, and some reach the rectifying junction (usually a diode p–n junction) where they are accelerated into the n-type semiconductor material by the built-in potential (Galvani potential). This generates an electromotive force and an electric current, and thus some of the light energy is converted into electric energy. The photovoltaic effect can also occur when two photons are absorbed simultaneously in a process called the two-photon photovoltaic effect.
Physics
In addition to the direct photovoltaic excitation of free electrons, an electric current can also arise through the Seebeck effect. When a conductive or semiconductive material is heated by absorption of electromagnetic radiation, the heating can lead to increased temperature gradients in the semiconductor material or differentials between materials. These thermal differences in turn may generate a voltage because the electron energy levels are shifted differently in different areas, creating a potential difference between those areas which in turn drives an electric current. The relative contributions of the photovoltaic effect versus the Seebeck effect depend on many characteristics of the constituent materials.
All of the above effects generate direct current; the first demonstration of the alternating current photovoltaic effect (AC PV) was done by Dr. Haiyang Zou and Prof. Zhong Lin Wang at the Georgia Institute of Technology in 2017. The AC PV effect is the generation of alternating current (AC) in nonequilibrium states when light periodically shines on the junction or interface of a material. The AC PV effect is based on a capacitive model in which the current strongly depends on the frequency of the chopper. The AC PV effect is suggested to be a result of the relative shift and realignment between the quasi-Fermi levels of the semiconductors adjacent to the junction/interface under nonequilibrium conditions. The electrons flow in the external circuit back and forth to balance the potential difference between two electrodes. Organic solar cells, whose materials have no initial carrier concentration, do not exhibit the AC PV effect.
Effect of the temperature
The performance of a photovoltaic module depends on the environmental conditions, mainly on the global incident irradiance G on the module plane. However, the temperature T of the p–n junction also influences the main electrical parameters: the short-circuit current ISC, the open-circuit voltage VOC, and the maximum power Pmax. The first studies of the behavior of PV cells under varying conditions of G and T date back several decades. In general, it is known that VOC shows a significant inverse correlation with T, whereas for ISC that correlation is direct, but weaker, so that this increment does not compensate for the decrease of VOC. As a consequence, Pmax reduces when T increases. This correlation between the output power of a solar cell and its junction working temperature depends on the semiconductor material, and it is due to the influence of T on the concentration, lifetime, and mobility of the intrinsic carriers, that is, electrons and holes, inside the PV cell.
The temperature sensitivity is usually described by some temperature coefficients, each one expressing the derivative of the parameter it refers to with respect to the junction temperature. The values of these parameters can be found in any PV module data sheet; they are the following:
– β Coefficient of variation of VOC with respect to T, given by ∂VOC/∂T.
– α Coefficient of variation of ISC with respect to T, given by ∂ISC/∂T.
– δ Coefficient of variation of Pmax with respect to T, given by ∂Pmax/∂T.
Techniques for estimating these coefficients from experimental data can be found in the literature. Few studies analyse the variation of the series resistance with respect to the cell or module temperature. This dependency is studied by suitably processing the current–voltage curve. The temperature coefficient of the series resistance is estimated by using the single diode model or the double diode one.
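As a hedged illustration of how a data-sheet temperature coefficient is used in practice (not from the article; the module rating, coefficient value, and operating temperature are illustrative assumptions), the following sketch estimates the maximum power at an elevated junction temperature with the first-order relation Pmax(T) ≈ Pmax,STC · (1 + δrel·(T - 25 °C)), where δrel is the relative power coefficient from the data sheet:

```python
# First-order temperature correction of a module's maximum power.
# All numbers below are illustrative assumptions, not data from the article.
P_stc = 300.0        # W, Pmax at standard test conditions (25 degC)
delta_rel = -0.004   # 1/degC, relative coefficient (dPmax/dT)/Pmax, ~ -0.4 %/degC
T = 60.0             # degC, junction temperature

P_est = P_stc * (1 + delta_rel * (T - 25.0))
print(f"estimated Pmax at {T:.0f} degC: {P_est:.0f} W")   # about 258 W
```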
Solar cells
In most photovoltaic applications, the radiation source is sunlight, and the devices are called solar cells. In the case of a semiconductor p–n (diode) junction solar cell, illuminating the material creates an electric current because excited electrons and the remaining holes are swept in different directions by the built-in electric field of the depletion region.
The AC PV effect operates under non-equilibrium conditions. The first study was based on a p-Si/TiO2 nanofilm. It was found that, in addition to the DC output generated by the conventional PV effect based on a p–n junction, an AC current is also produced when flashing light illuminates the interface. The AC PV effect does not follow Ohm's law, being based on a capacitive model in which the current strongly depends on the frequency of the chopper while the voltage is independent of the frequency. The peak current of AC at high switching frequency can be much higher than that from DC. The magnitude of the output is also associated with the light absorption of the materials.
See also
Theory of solar cells
Electromotive force in solar cells
Photoelectric effect
References
Electrical phenomena
Energy conversion
Photovoltaics
Quantum chemistry
Electrochemistry | Photovoltaic effect | [
"Physics",
"Chemistry"
] | 1,450 | [
"Physical phenomena",
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Electrochemistry",
"Electrical phenomena",
" molecular",
"Atomic",
" and optical physics"
] |
2,003,810 | https://en.wikipedia.org/wiki/List%20of%20commercial%20failures%20in%20video%20games | The video game industry is a hit-driven business, and the great majority of its software releases have been commercial disappointments. In the early 21st century, industry commentators made these general estimates: that 10% of published games generated 90% of revenue; that around 3% of PC games and 15% of console games have global sales of more than 100,000 units per year, with even this level insufficient to make high-budget games profitable; and that about 20% of games make any profit. Within years after Steam relaxed limits on which games could be digitally distributed on its service, it was reported that around 80% of games failed to reach $5000 in revenue in their first two weeks of sales.
Some of these failure events have drastically changed the video game market since its origin in the late 1970s. For example, the failure of E.T. contributed to the video game crash of 1983. Some games, though commercial failures, are well received by certain groups of gamers and are considered cult games.
The following list includes any video game software on any platform, and any video game console hardware, where the commercial failure has been documented as such by the manufacturer or publisher, or affirmed through industry sales trackers.
Video game hardware failures
32X
Unveiled by Sega at June 1994's Consumer Electronics Show, the 32X was later described as the "poor man's entry into 'next generation' games". The product was originally conceived as an entirely new console by Sega Enterprises and positioned as an inexpensive alternative for gamers into the 32-bit era. However, at the suggestion of Sega of America research and development head Joe Miller, the console was converted into an add-on to the existing Mega Drive/Genesis and made more powerful, with two 32-bit central processing unit chips and a 3D graphics processor. Nevertheless, the console failed to attract either developers or consumers as the Sega Saturn had already been announced for release the next year. In part because of this, and also to rush the 32X to market before the holiday season in 1994, the 32X suffered from a poor library of titles, including Mega Drive/Genesis ports with improvements to the number of colors that appeared on screen. Originally released at US$159.99, the 32X saw its price drop to $99 in only a few months, and Sega ultimately cleared the remaining inventory at $19.95. About 665,000 units were sold.
3DO (Interactive Multiplayer)
Co-designed by RJ Mical and the team behind the Amiga, and marketed by Electronic Arts founder Trip Hawkins as a format, this "multimedia machine" released in 1993 was marketed as a family entertainment device and not just a video game console. Though it supported a vast library of games including many exceptional third party releases, a refusal to reduce its US$699 price until almost the end of the product's life hampered sales. The success of subsequent next generation systems led to the platform's demise and the company's exit from the hardware market. This exit also included The 3DO Company's sale of the platform's successor, the M2, to its investor Matsushita.
Amstrad GX4000 and Amstrad CPC+ range
In 1990, Amstrad attempted to enter the console video game market with hardware based on its successful Amstrad CPC range but also capable of playing cartridge-based games with improved graphics and sound. This comprised the Amstrad CPC+ computers, including the same features as the existing CPCs, and the dedicated GX4000 console. However, only a few months later, the Mega Drive, a much-anticipated 16-bit console, was released in Europe, and the GX4000's aging 8-bit technology proved uncompetitive. Many of the games are direct ports of existing CPC games (available more cheaply on tape or disc) with few if any graphical improvements. Fewer than thirty games were released on cartridge, and the GX4000's failure ended Amstrad's involvement in the video game industry. The CPC+ range fared little better, as 8-bit computers had been all but superseded by similarly priced 16-bit machines such as the Amiga, though software hacks now make the advanced console graphics and sound accessible to users.
Apple Bandai Pippin
The Pippin is a game console designed by Apple Computer and produced by Bandai (now Bandai Namco) in the mid-1990s based around a PowerPC 603e processor and Classic Mac OS. It featured a 4x CD-ROM drive and a video output that could connect to a standard television monitor. Apple intended to license the technology to third parties; however, only two companies signed on, Bandai and Katz Media, while the only Pippin license to release a product to market was Bandai's. By the time the Bandai Pippin was released (1995 in Japan, 1996 in the United States), the market was already dominated by the Nintendo 64 and PlayStation. The Bandai Pippin also cost US$599 at launch, more expensive than the competition. Total sales were only around 42,000 units. In 2019, Apple returned to the video game industry with its game subscription service, Apple Arcade, which has proven to be successful.
Atari 5200
The Atari 5200 was created as the successor to the highly successful Atari 2600. Reasons for the console's poor reception include that most of the games were simply enhanced versions of those played on its predecessor and the awkward design of the controllers, which themselves were also prone to breaking down. The console sold only a little over a million units. When it was discontinued, its predecessor was marketed for several more years, as was its successor, the Atari 7800, which was marketed more carefully to avoid a similar debacle. Nonetheless, the failure of the Atari 5200 marked the beginning of Atari's fall in the console business.
Atari Jaguar
Released by Atari Corporation in 1993, this 64-bit system was more powerful than its contemporaries, the Genesis and the Super NES, with support for 3D graphics. Its sales were hurt by a lack of quality games and a number of crippling business practices on the part of Atari senior management. The controller was widely criticized as unwieldy with a baffling number of buttons, and the pack-in game, Cybermorph, was considered disappointing. The system never attained critical mass in the market before the release of the Saturn and PlayStation, and its failure brought the company down with it. Rob Bricken of Topless Robot described the Jaguar as "an unfortunate system, beleaguered by software droughts, rushed releases, and a lot of terrible, terrible games."
Atari Lynx
Released in 1989 in North America and Europe, and in Japan in 1990, by Atari Corporation, the Atari Lynx is a handheld game console. It was the first handheld electronic game system with a color LCD display. The system was originally developed by Epyx as the Handy Game. Forward-looking features include 16-bit graphics hardware with a blitter that can scale and distort images, a backlit display, and an ambidextrous controller layout. In late 1991, it was reported that Atari sales estimates were about 800,000, which Atari claimed was within their expected projections. In comparison, the Game Boy had sold 16 million units by that time. Overall lifetime sales were confirmed as being in the region of 3 million, a commercial failure despite positive critical reviews.
Atari VCS (2021)
The Atari VCS was developed by Atari Inc. to be a microconsole that would support numerous Atari games from its console library as well as other Linux-compatible games. Though announced in 2017 and supported by crowdfunding, publicly available units did not ship until June 2021. The console received a lukewarm reception, being seen as too costly compared to other consoles on the market without providing similar value. Atari reported a drop of about 90% in hardware revenue between 2021 and 2022, leading it to discontinue production of the unit and to evaluate other options.
CD-i
In the 1980s, electronics company Philips together with Sony developed a new CD-based format called CD-i (Compact Disc Interactive) for various multimedia software. The first consumer-oriented player from Philips launched in 1991 with a launch price of $700. Although the CD-i was not technically a game console, Philips increasingly marketed the format as a video game platform from 1994 onwards. It was originally intended to be an add-on for the Super NES, but the deal fell through. Nintendo, however, did give Philips the rights and permission to use five Nintendo characters for the CD-i games. In 1993, Philips released two Zelda games, Link: The Faces of Evil and Zelda: The Wand of Gamelon. A year later, Philips released another Zelda game, Zelda's Adventure, and a few months later, a Mario game titled Hotel Mario. All four of these Nintendo-themed games are commonly cited by critics as being among the worst ever made. Much criticism was also aimed at the CD-i's controller. Although the Philips CD-i was extensively marketed by Philips, consumer interest remained low. Sales began to slow down by 1994, and in 1998, Philips announced that the product had been discontinued. In all, roughly 570,000 units were sold, with 133 games released.
Commodore 64 Games System
Released only in Europe in 1990, the C64GS was basically a Commodore 64 redesigned as a cartridge-based console. Aside from some hardware issues, the console did not get much attention from the public, who preferred to buy the cheaper original computer that had far more possibilities. Also, the console appeared just as the 16-bit era was starting, which left no chance for it to succeed as it was unable to compete with consoles like the Super Nintendo Entertainment System and Mega Drive.
Commodore CDTV
The CDTV was launched by Commodore in 1991. In common with the Philips CD-i and the 3DO, the CDTV was intended as an all-in-one home multimedia appliance that would play games, music, movies, and other interactive content. The name was short for "Commodore Dynamic Total Vision". The hardware was based on the Amiga 500 computer with a single-speed CD-ROM drive rather than a floppy disk drive, in a case that was designed to integrate unobtrusively with a home entertainment center. However, the expected market for home multimedia appliances did not materialize, and the CDTV was discontinued in 1993, having sold only 30,000 units. Commodore's next attempt at a CD-based console, the Amiga CD32, was considerably more successful.
Dreamcast
The Dreamcast, released globally in 1999, was Sega's final console before the company focused entirely on software. Although the console was initially successful and management in the company improved significantly after harsh lessons were learned from the Sega Saturn fiasco, the console also faced stiff competition, especially from the technically superior PlayStation 2, despite the Dreamcast having reached the market over a year earlier. The Dreamcast sold less than the Saturn, coming in at 9.13 million units compared to the Saturn's 9.5 million. The console's development was subject to further stress by an economic recession that struck Japan shortly after the console's release, forcing Sega, among other companies, to cut costs in order to survive; thus, Sega refocused itself solely around software.
Fairchild Channel F
The Fairchild Channel F was a second generation console released in 1976, and the first home console unit to use interchangeable video game cartridges. It had respectable sales within its first year on the market, but soon faced competition from the Atari 2600, another cartridge-based system that was released in September 1977. Whereas the Channel F's games were generally based on intellectual and educational concepts, Atari had crafted games that were conversions of their action-based arcade video game hits, and were more popular, making the Atari 2600 the more popular system. By the end of 1977, the Atari 2600 sold about 400,000 total units compared to the 250,000 units of the Channel F. Fairchild's attempts to make more action-oriented games in 1978 failed to draw consumers to the system, and the console was completely overshadowed. By the time Fairchild sold the console technology to Zircon International in 1979, only 350,000 Channel F units had been sold in its lifetime.
FM Towns Marty
The FM Towns Marty is a fourth generation console manufactured by Fujitsu. Throughout its few years in the market, the console sold an underwhelming 45,000 units.
Genesis Nomad
The Nomad, a handheld game console by Sega released in North America in October 1995, is a portable variation of Sega's home console, the Genesis (known as the Mega Drive outside of North America). Designed from the Mega Jet, a portable version of the home console designed for use on airline flights in Japan, Nomad served to succeed the Game Gear and was the last handheld console released by Sega. Released late in the Genesis era, the Nomad had a short lifespan. Sold exclusively in North America, the Nomad was never officially released worldwide, and employs regional lockout. The handheld itself was incompatible with several Genesis peripherals, including the Power Base Converter, the Sega CD, and the Sega 32X. The release was five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. With the Nomad's late release several months after the launch of the Saturn, this handheld is said to have suffered from its poorly timed launch. Sega decided to stop focusing on the Genesis in 1999, several months before the release of the Dreamcast, by which time the Nomad was being sold at less than a third of its original price. Reception for the Nomad is mixed between its uniqueness and its poor timing into the market. Blake Snow of GamePro listed the Nomad as fifth on his list of the "10 Worst-Selling Handhelds of All Time", criticizing its poor timing into the market, inadequate advertising, and poor battery life.
Gizmondo
The Gizmondo, a handheld video game device featuring GPS and a digital camera, was released by Tiger Telematics in the UK, Sweden and the U.S. starting in March 2005. With poor promotion, few games (only fourteen were ever released), short battery life, a small screen, competition from the cheaper and more reputable Nintendo DS and PSP, and controversy surrounding the company, the system was a commercial failure. Several high-ranking Tiger executives were subsequently arrested for fraud and other illegal activities related to the Gizmondo. It is so far the world's worst selling handheld console in history, and due to its failure in the European and American video game markets, it was released neither in Australia nor in Japan. Tiger Telematics went bankrupt when it was discontinued in February 2006, just 11 months after it was released.
HyperScan
Released in late 2006 by Mattel, the HyperScan was the company's first video game console since the Intellivision. It used radio frequency identification (RFID) along with traditional video game technology. The console used UDF format CD-ROMs. Games retailed for $19.99 and the console itself for $69.99 at launch, but at the end of its very short lifespan, prices of the system were down to $9.99, the games $1.99, and booster packs $0.99. The system was sold in two varieties, a cube, and a 2-player value pack. The cube box version was the version sold in stores. It included the system, controller, an X-Men game disc, and 6 X-Men cards. Two player value packs were sold online (but may have been liquidated in stores) and included an extra controller and 12 additional X-Men cards. The system was discontinued in 2007 due to poor console, game, and card pack sales. It is featured as one of the ten worst systems ever by PC World magazine.
LaserActive
Made by Pioneer Corporation in 1993 (a clone was produced by NEC as well), the LaserActive employed the trademark LaserDiscs as a medium for presenting games and also played the original LaserDisc movies. The LD-ROMs, as they were called, could hold 540 MB of data and up to 60 minutes of analog audio and video. In addition, expansion modules could be bought which allowed the LaserActive to play Genesis and/or TurboGrafx-16 games, among other things. Poor marketing combined with a high price tag for both the console itself (US$970) and the various modules (e.g., $599 for the Genesis module compared to $89 for the base console and $229 for the Sega CD add-on to play CD-ROM based games) caused it to be quickly ignored by both the gaming public and the video game press. Fewer than 40 games were produced in all (at about $120 each), almost all of which required the purchase of one of the modules, and games built for one module could not be used with another. The LaserActive was quietly discontinued one year later after total sales of roughly 10,000 units.
Neo Geo CD
Released in Japan and Europe in 1994 and a year later in North America, the Neo Geo CD was first unveiled at the 1994 Tokyo Toy Show. Three versions of the Neo Geo CD were released: a front-loading version only distributed in Japan, a top-loading version marketed worldwide, and the Neo Geo CDZ, an upgraded, faster-loading version released in Japan only. The front-loading version was the original console design, with the top-loading version developed shortly before the Neo Geo CD launch as a scaled-down, cheaper alternative model. The CDZ was released on December 29, 1995 as the Japanese market replacement for SNK's previous efforts (the "front loader" and the "top loader"). The Neo Geo CD had met with limited success due to it being plagued with slow loading times that could vary from 30 to 60 seconds between loads, depending on the game. Although SNK's American home entertainment division quickly acknowledged that the system simply was unable to compete with the 3D-able powerhouse systems of the day like Nintendo's 64, Sega's Saturn and Sony's PlayStation, SNK corporate of Japan felt they could continue to maintain profitable sales in the Japanese home market by shortening the previous system's load-times. Their Japanese division had produced an excess number of single speed units and found that modifying these units to double speed was more expensive than they had initially thought, so SNK opted to sell them as they were, postponing production of a double speed model until they had sold off the stock of single speed units. As of March 1997, the Neo Geo CD had sold 570,000 units worldwide. Although this was the last known home console released under SNK's Neo Geo line, the newly reincarnated SNK Playmore relaunched the Neo Geo line with the release of the Neo Geo X in 2012. In April 2019, SNK announced at a conference in Seoul that they plan to release a Neo Geo 2 console and later a Neo Geo 3. They plan for the Neo Geo 2 to be a semi open platform console, where they will be built in games, as well as additional games that can be purchased separately. These are planned to be spiritual successors to the original Neo Geo arcade and home systems.
Neo Geo Pocket and Pocket Color
The two handheld video game consoles, created by SNK, were released in 1998 and 1999 into markets dominated by Nintendo. The Neo Geo Pocket is considered an unsuccessful console, as it was quickly succeeded by the Neo Geo Pocket Color, a full-color device that allowed the system to compete more easily with the dominant Game Boy Color handheld and which also saw a Western release. Though the system enjoyed only a short life, some significant games were released on it. After a good sales start in both the U.S. and Japan with 14 launch titles (a record at the time), subsequent low retail support in the U.S., a lack of communication with third-party developers by SNK's American management, the craze surrounding Nintendo's Pokémon franchise, anticipation of the 32-bit Game Boy Advance, and strong competition from Bandai's WonderSwan in Japan led to a sales decline in both regions. Meanwhile, SNK had been in financial trouble for at least a year; the company soon collapsed and was purchased by Japanese pachinko manufacturer Aruze in January 2000. Eventually, on June 13, 2000, Aruze decided to quit the North American and European markets, marking the end of SNK's worldwide operations and the discontinuation of Neo Geo hardware and software there. The Neo Geo Pocket Color (and other SNK/Neo Geo products) did, however, last until 2001 in Japan. It was SNK's last video game console, as the company filed for bankruptcy on October 22, 2001. Though commercial failures, the Neo Geo Pocket and Pocket Color have been regarded as influential systems. They featured an arcade-style microswitched "clicky stick" joystick, which was praised for its accuracy and for being well suited to fighting games. The Pocket Color's display and 40-hour battery life were also well received. These were the last handheld systems released under SNK's Neo Geo line before the brand's relaunch with the Neo Geo X and the Neo Geo 2 and 3 plans described above.
N-Gage
Made by the Finnish mobile phone manufacturer Nokia and released in 2003, the N-Gage is a small handheld console designed to combine a feature-packed mobile phone with a handheld games console. The system was mocked for its taco-like design, and sales were so poor that the system's price dropped by $100 within a week of its release. Common complaints included the difficulty of swapping games (the cartridge slot was located beneath the battery, requiring its removal) and the fact that its phone feature required users to hold the long edge of the device against their cheek. A redesigned version, the N-Gage QD, was released to address these complaints. However, the N-Gage brand still suffered from a poor reputation, and the QD did not address the common complaint that the control layout was "too cluttered". The N-Gage failed to reach the popularity of the Game Boy Advance, Nintendo DS, or Sony PSP. In November 2005, Nokia announced the failure of the product in light of poor sales (fewer than three million units sold during the platform's three-year run, against projections of six million). Nokia ceased to consider gaming a corporate priority until 2007, when it expected improved screen sizes and quality to increase demand. However, Nokia's presence in the mobile phone market was soon eclipsed by the iPhone and later Android phones, causing development to gravitate to them and sealing the fate of the N-Gage brand. In 2012, Nokia abandoned development of the Symbian OS, on which the N-Gage was based, and transitioned to Windows Phone.
Nintendo 64DD
A disk drive add-on and Internet appliance for the Nintendo 64, the 64DD was first announced at the 1995 Nintendo Shoshinkai trade show (later renamed Nintendo Space World). It was repeatedly and notoriously delayed until its release in Japan on December 1, 1999. Nintendo, anticipating poor sales, sold the 64DD only through mail order, bundled with its Randnet dial-up subscription service, rather than directly through retailers. As a result, the 64DD was supported by Nintendo for only a short period of time and only nine games were released for it. It was never released outside Japan. Most 64DD games were either cancelled entirely, released as normal Nintendo 64 cartridges, or ported to other systems such as Nintendo's next-generation GameCube. Upon announcing the cancellation of Randnet in 2001, Nintendo reported a total of 15,000 current 64DD users on Randnet.
Nuon
The Nuon is a DVD decoding chip from VM Labs that is also a fully programmable CPU with graphics and sound capabilities. The idea was that a manufacturer could use the chip instead of an existing MPEG-2 decoder, thus turning a DVD player into a game console. A year after launch, only eight games were available. One game, Iron Soldier 3, was recalled for not being compatible with all systems.
Ouya
The Ouya is an Android-based microconsole released in 2013 by Ouya, Inc. Even though the Ouya was a success on Kickstarter, the product was plagued by problems from the beginning. The console was very slow to ship, suffered hardware issues, and had a very limited library of games. Critical reception ranged from lukewarm reviews to outright accusations that the console was a scam. Just two years after its release, Ouya was in a dire financial situation and negotiated a buyout with Razer. Razer continued to run software services for the Ouya until June 2019, after which the company deactivated all accounts and online services, rendering most apps unusable.
PC-FX
The PC-FX is the successor to the PC Engine (known as the TurboGrafx-16 in North America), released by NEC in late 1994. Originally intended to compete with the Super Famicom and the Mega Drive, it instead wound up competing with the PlayStation, Sega Saturn, and Nintendo 64. The console's 32-bit architecture was designed in 1992, and by 1994 it was outdated, largely because it was unable to render 3D graphics, instead relying on an architecture built around JPEG video. The PC-FX was severely underpowered compared to other fifth-generation consoles and had a very low-budget marketing campaign, and the system never managed to gain a foothold against its competition or capture a significant share of the market. The PC-FX was discontinued in early 1998 so that NEC could focus on providing graphics processing units for the upcoming Sega Dreamcast. Around this time, NEC announced that it had sold only 100,000 units, with a library of only 62 titles, most of which were dating sims. It was never released outside Japan.
PlayStation Classic
Following the release of Nintendo's NES Classic Edition and SNES Classic Edition, microconsoles that included over 20 preloaded classic games from those respective systems, Sony followed suit with the PlayStation Classic. Like the Nintendo systems, the PlayStation Classic was presented as a smaller form factor of the original PlayStation preloaded with 20 games. It was launched in early December 2018 with a suggested retail price of . The system was heavily criticized at launch. For nine of the games, it used PAL versions (favored primarily in the European market) rather than NTSC versions (favored primarily in North America and Japan), meaning they ran at 50 Hz rather than 60 Hz, which caused notable frame rate problems and impacted the gameplay of some of the more highly interactive titles. The emulation also lacked the feature set that Nintendo had established with its microconsoles. The included game list, while varied by region, was also noted to lack many of the titles that had made the original PlayStation successful, with a heavy focus on the console's early games. Some of these absences were attributed to intellectual property rights (for example, Activision holding the rights to Crash Bandicoot, Spyro the Dragon, and the Tony Hawk's series), but other omissions were considered odd and disappointing. The system sold poorly, and within the month its suggested retail price had dropped to . By April 2019, the price had dropped to , and CNET described the PlayStation Classic as "arguably one of the top flops of 2018".
PlayStation Vita
Sony's second major handheld game console, the PlayStation Vita, was released in Japan in 2011 and in the West the following year. The successor to the PlayStation Portable, Sony's intent with the system was to blend the experience of big budget, dedicated video game platforms with the trend of mobile gaming. With a relatively low price, a robust launch title assortment, and a strong debut, analysts predicted the Vita would be a success. However, sales tanked shortly after release; for instance, during Christmas 2011, sales saw a 78% drop in Japan. By 2018, when Sony announced it would end physical game production for the system, the Vita had sold fewer than 15 million units. Hardware production for the Vita ended entirely in March 2019, and Sony does not plan to release a successor. GamesIndustry.biz attributed the Vita's failure to a number of factors, including competition from smartphones and Nintendo's rival 3DS platform, its design being too conceptually similar to the PSP, and a general lack of support from Sony and other developers.
PSX (DVR)
Built upon the PlayStation 2, the PSX enhanced multimedia derivative was touted to bring convergence to the living room in 2003 by including non-gaming features such as a DVD recorder, TV tuner, and multi-use hard drive. The device was considered a failure upon its Japanese release due to its high price and lack of consumer interest, which resulted in the cancellation of plans to release it in the rest of the world. Not only was it an unsuccessful attempt by Sony Computer Entertainment head Ken Kutaragi to revive the ailing consumer electronics division, it also hurt Sony's media convergence plans.
Saturn
The Sega Saturn was the successor to the Genesis as a 32-bit fifth-generation console, released in Japan in November 1994 and in Western markets in mid-1995. The console was designed as a competitor to Sony's PlayStation, released at nearly the same time. With the system selling well in Japan and Sega wanting a head start over the PlayStation in North America, the company decided to release the system in May instead of September 1995, the month the PlayStation was to be released in North America. This left little time to promote the product and only limited quantities of the system available at retail. Sega's release strategy also backfired when, shortly after Sega's announcement, Sony announced the price of the PlayStation as $100 less than the list price of the Saturn. The console also suffered from behind-the-scenes management conflicts and a lack of coordination between the Japanese and North American branches of the company, leading the Saturn to be released shortly after the 32X, which created distribution and retail problems. By the end of 1996, the PlayStation had sold 2.9 million units in the U.S., compared with only 1.2 million units of the Saturn. With the added competition from the subsequent release of the Nintendo 64, the Saturn lost market share in North America and was discontinued there by 1999. With lifetime sales estimated at 9.5 million units worldwide, the Saturn is considered a commercial failure. The cancellation of a game in the Sonic the Hedgehog series, known in development as Sonic X-treme, has been considered a significant factor in the console's struggle to find an audience. The impact of the Saturn's failure carried over into Sega's next console, the Dreamcast. The console retained interest in Japan, however, and was not officially discontinued there until December 7, 2000.
Stadia
Google released Stadia, a cloud gaming platform using the power of its existing data centers, in November 2019. Players could access games through web browsers, Chromecast devices, or mobile platforms. In addition to partnering with several developers to release titles on Stadia, Google created its own Stadia Games and Entertainment division with Jade Raymond as its lead and acquired a handful of existing studios. Unlike prior streaming options where players had access to a full set of titles for a monthly subscription fee, Google opted to have players buy each game they wanted to play, alongside a subscription tier that offered free games. This approach did not gain significant traction with users, and by February 2021 the company closed down Stadia Games and Entertainment and shuttered the studios it had acquired, stating that developing games required too significant an investment and that it would instead focus on bringing other developers' titles to the service. After another troubled year, Google stated in February 2022 that it would work to offer Stadia's technology as a white-label product for corporate partners, such as delivering game demos over streaming technology. In September 2022, Google announced it was shuttering Stadia as a consumer product, with the service going offline in January 2023 and refunds supplied to those who had purchased equipment, subscriptions, and games. Google said "it hasn't gained the traction with users that we expected" as the reason for the shutdown, though it intended to use the technology in its other business sectors. Game journalists argued that Google had not made Stadia a unique offering, as it had nearly no exclusives, a problem worsened by the shutdown of its studios, and it required players to repurchase games at full price to play them on the service. Stadia also failed to deliver the latency advantage over other streaming services that had been promised at its announcement, and Google had been slow to roll out Stadia internationally, remaining behind GeForce Now and Xbox Cloud Gaming as of February 2022.
uDraw GameTablet
The uDraw GameTablet is a graphics tablet developed by THQ for use on seventh-generation video game consoles, initially released for the Wii in late 2010. Versions for the PlayStation 3 and Xbox 360 were released in late 2011. THQ also invested in several games that would uniquely use the tablet, such as uDraw Pictionary. The Wii version sold well, with more than 1.7 million units sold, prompting the introduction of the unit for the other consoles. These versions did not share the same popularity; 2011 holiday sales in North America fell $100 million below company targets, with more than 1.4 million units left unsold by February 2012. THQ commented that if it had not attempted to sell these versions of the uDraw, the company would have been profitable that quarter; instead it suffered an overall $56 million loss. Because of this failure, THQ was forced to shelve some of its less successful franchises, such as the Red Faction series. THQ would eventually file for bankruptcy and sell off its assets in early 2013.
Vectrex
Released in 1982 by General Consumer Electronics, the Vectrex used an independent monitor that could display only monochrome visuals, but its vector-based graphics and arcade-style controller with an analog joystick allowed developers to create a strong games library with faithful conversions of arcade hits and critically praised exclusives. However, its release shortly before the video game crash of 1983 doomed it to an early grave.
Virtual Boy
This red monochromatic 3D "virtual reality" system was widely panned by critics and failed due to players experiencing eye strain, stiff necks, nausea, and headaches when playing it, along with the console's price and lack of portability. It came out in 1995 and was Nintendo's first failed console release. Gunpei Yokoi, the designer of the platform and the person largely credited with the success of the original Game Boy handheld and the Metroid series of games, resigned from the company shortly after the Virtual Boy ceased sales in order to start his own company, although reportedly for reasons unrelated to the console's failure. The Virtual Boy was included in a Time "50 Worst Inventions" list published in May 2010.
Wii U
Nintendo's Wii U was released in November 2012. It was designed as a successor to the Wii that would provide a more sophisticated experience and draw back "core" gamers who had dismissed the Wii as being aimed at casual gameplay. The Wii U features the GamePad, the unit's primary controller, with a touchscreen allowing for dual-screen play similar to the Nintendo DS line or for off-TV play. Though the Wii U received positive coverage, it had low sales of fewer than 14 million units by the end of 2016, compared to the Wii's lifetime sales of 101 million units. Nintendo executives attributed the poor sales of the Wii U to a combination of factors. They admitted their messaging of the Wii U's capabilities had not been clear, leading to a general perception that the unit was primarily a tablet system or an add-on to the original Wii rather than a new home console. They also recognized a failure to manage their game release schedule and to garner significant support from third-party publishers and developers, leaving the Wii U library with gaps in software releases. Nintendo had stated an expectation to sell 100 million Wii U units, and this over-estimation of sales contributed to several financial quarters of losses through 2016. Nintendo's next console, the Nintendo Switch, became a "make or break" product for the company due to the Wii U's failure, according to Reggie Fils-Aimé, and its development and marketing avoided several of the pitfalls that had befallen the Wii U; the Switch quickly proved successful, outselling the lifetime sales of the Wii U within nine months of its release. The Wii U was discontinued worldwide on January 31, 2017, a month before the Nintendo Switch was released.
Video game software failures
The following is an incomplete list of software that have been deemed commercial failures by industry sources.
Anthem
Anthem is an action role-playing game developed by BioWare and published by Electronic Arts in 2019. Its seven-year development period started after the release of Mass Effect 3, and BioWare envisioned it as a move away from the typical role-playing formats of Mass Effect and Dragon Age toward a live-service game. The game had a difficult development cycle due to shifting staff, technological difficulties in switching to the Frostbite engine, and demands from EA's management on the direction of the game. Other BioWare projects at its other studios were put on hold to complete Anthem for release, and according to one BioWare developer, only the work of the last fifteen months of development made it into the game. Due to this rush near the end of the development cycle, the game was considered to be lacking content at release, contained numerous software bugs that plagued gameplay, and was found to be too similar to other live-service games like Destiny and World of Warcraft. BioWare and EA stated their intent to revamp the game to meet expectations, but the game struggled to maintain a player base. Though the game ultimately sold over 5 million copies in its lifetime, EA had anticipated sales to be much higher and comparable to the Battlefield series, and in 2021 EA opted to terminate further development work on Anthem, though its game servers remain active as of 2024.
APB: All Points Bulletin
APB: All Points Bulletin was a multiplayer online game developed by Realtime Worlds in 2010. The game, incorporating concepts from the studio's previous game Crackdown and past work by its lead developer David Jones, who had helped create the Grand Theft Auto series, was set around the idea of a large-scale urban battle between Enforcers and Criminals, with players able to partake in large-scale ongoing missions between the two sides. The game was originally announced as both a Microsoft Windows and Xbox 360 title and as Realtime Worlds' flagship title for release in 2008, but the company set about developing Crackdown first and later focused on APB as a Windows-only title, with the possibility of porting the game to the Xbox 360 later. Upon launch in June 2010, the game received lukewarm reviews, hampered by a week-long review embargo, and did not attract the number of subscribers needed to maintain its business model. Realtime Worlds, suffering from the commercial failure of the game, sold off a second project, Project MyWorld, and subsequently entered administration, keeping only a skeleton crew to manage the APB servers while it attempted to find a buyer; Epic Games had expressed interest in the title. However, without any acceptable offers, the company was forced to shut down APB in September 2010. Eventually, the game was sold to K2 Network, a company that has brought other Asian massively multiplayer online games to Western markets as free-to-play titles, and APB was similarly relaunched by K2 as a free-to-play game.
Artifact
Artifact is a 2018 digital collectible card game (CCG) designed by Richard Garfield, the creator of Magic: The Gathering, and developed by Valve Corporation. It was designed as a spinoff from Dota 2, and while playing out encounters with cards worked similarly to other CCGs like Hearthstone, the game used the concept of multiple lanes from Dota 2, with three different playfields in use at all times. Instead of a free-to-play model, Artifact was released at a premium cost, and encouraged players to buy new booster packs and to trade and sell individual cards on the Steam Marketplace. Valve envisioned Artifact drawing competitive players and leading to esports tournaments. At launch, the game was found to be overly complex and to rely too much on random number generation in gameplay, and the monetization approach was considered "pay to win", requiring players to invest in new cards in order to compete. Within half a year of release, its player base had dropped significantly; some users on Twitch began using channels labeled for Artifact to stream inappropriate content, on the basis that such streams had low viewership by that point, forcing Twitch to take moderation actions. Around this time, Valve stated that it planned to reevaluate and redesign the game to address players' complaints. Artifact 2.0 was put into beta testing in March 2020, with one of the largest changes being the removal of the monetization options for obtaining cards. By March 2021, Valve decided to end development of Artifact and released two free versions of the game in its current state: Artifact Classic, which incorporated the gameplay of the original release, and Artifact Foundry, which included the changes envisioned for the 2.0 release. Both versions completely removed monetization options. Valve's CEO Gabe Newell called Artifact a "giant disappointment", though he considered its failure a learning experience.
Babylon's Fall
Babylon's Fall was an online action role-playing game developed by PlatinumGames and published by Square Enix for the PlayStation 4, PlayStation 5 and PC. The game was PlatinumGames' first attempt at a live-service game, described as an attempt to combine the combat system featured in Nier: Automata with multiplayer, although the game could also be played solo. Originally teased at E3 2018, Babylon's Fall suffered multiple delays from its initially planned 2019 release date, with the game being further delayed by the COVID-19 pandemic.
Upon its eventual release on March 3, 2022, Babylon's Fall was met with little fanfare and received generally negative reviews from critics and players, many of whom criticised the game's lacklustre mechanics and combat, with several critics calling the game overpriced. The concurrent player count for Babylon's Fall peaked at only 1,179 on release day, and player numbers declined rapidly afterwards; by April 13, 2022, it was reported that the game's player count had fallen to below 10 concurrent players, and on May 4, 2022, the player count reportedly dropped to a single player. On September 13, 2022, Square Enix announced that it would end support for Babylon's Fall on February 27, 2023, and suspended digital sales of the game and in-game currency, despite initially promising to support the game in the long term. Babylon's Fall's commercial failure was described by Forbes as "one of the biggest mainstream misses we've seen in recent memories", with TechRadar naming it the "biggest disaster of the year". Despite this, PlatinumGames CEO Atsushi Inaba later said that the failure of Babylon's Fall did not affect any of the company's plans to continue expanding into live-service games, and attributed some of Babylon's Fall's faults to the separate development of the core game and the live-service elements by PlatinumGames and Square Enix, respectively.
Battlecruiser 3000AD
One of the most notorious PC video game failures, Battlecruiser 3000AD (shortened BC3K) was hyped for almost a decade before its disastrous release in the U.S. and Europe. The game was the brainchild of Derek Smart, an independent game developer renowned for lengthy and aggressive online responses to perceived criticism. The concept behind BC3K was ambitious, giving the player the command of a large starship with all the requisite duties, including navigation, combat, resource management, and commanding crew members. Advertisements appeared in the video game press in the mid-1990s hyping the game as, "The Last Thing You'll Ever Desire." Computer bulletin boards and Usenet groups were abuzz with discussion about the game. As time wore on and numerous delays were announced, excitement turned to frustration in the online community. Smart exacerbated the negative air by posting liberally on Usenet. The posts ignited one of the largest flame wars in Usenet history. During the development cycle, Smart refused to let other programmers have full access to his code and continued to change directions as new technology became available, causing the game to be in development for over seven years.
In November 1996, Take-Two Interactive finally released the game, reportedly over protests from Smart. The game was buggy and only partially finished, with outdated graphics, MIDI music, a cryptic interface, and almost no documentation. Critics and the video game community reacted poorly to the release. Eventually, a stable, playable version of the game was released as Battlecruiser 3000AD v2.0. Smart eventually released BC3K as freeware and went on to create several sequels under the Battlecruiser and Universal Combat titles.
Beyond Good & Evil
Although critically acclaimed and planned as the first part of a trilogy, Beyond Good & Evil (released in 2003) flopped commercially. Former Ubisoft employee Owen Hughes stated that the simultaneous releases of internally competing titles Tom Clancy's Splinter Cell and Prince of Persia: The Sands of Time and, in Europe, XIII (all three published by Ubisoft and all of which had strong brand identities in their markets) were felt to have impacted Beyond Good & Evil's ability to gain interest with the public. The game's commercial failure led Ubisoft to postpone plans for any subsequent titles in the series. A sequel was announced at the end of the Ubidays 2008 opening conference, and an HD version of the original was released for the Xbox 360 and PlayStation 3 via download in 2011. Alain Corre, Ubisoft's Executive Director of EMEA Territories, commented that the Xbox 360 release "did extremely well", but considered this success "too late" to make a difference in the game's poor sales. Beyond Good & Evil 2 was announced at Ubisoft's press conference at E3 2017, fourteen years after the release of the original game.
Brütal Legend
Brütal Legend is Double Fine Productions' second major game. The game is set in a world based on heavy metal music, includes a hundred-song soundtrack spanning numerous metal subgenres, and features a celebrity voice cast including Jack Black, Lemmy Kilmister, Rob Halford, Ozzy Osbourne, Lita Ford, and Tim Curry. The game was originally to be published by Vivendi Games via Sierra Entertainment prior to Vivendi's merger with Activision. Following the merger, Activision declined to publish Brütal Legend, and Double Fine turned to Electronic Arts as its publishing partner, delaying the game's release. Activision and Double Fine sued each other for breach of contract, ultimately settling out of court. The game was designed as an action-adventure/real-time strategy game similar to Herzog Zwei; as games in the real-time strategy genre generally do not perform well on consoles, Double Fine was told by both Vivendi and Electronic Arts to avoid stating this fact and to emphasize other elements of the game. Despite some positive reviews from critics, the game was criticized for its real-time strategy elements, which had not been mentioned in the pre-release marketing, making it a difficult game to sell to players. Furthermore, its late-year release in October 2009 buried the title among many top-tier games, including Uncharted 2: Among Thieves, Batman: Arkham Asylum and Call of Duty: Modern Warfare 2. It sold only about 215,000 units within its first month, making it a "retail failure", and though Double Fine had begun work on a sequel, Electronic Arts cancelled further development. According to Tim Schafer, president and lead developer of Double Fine, 1.4 million copies of the game had been sold by February 2011.
Concord
Concord was a hero shooter game developed by Firewalk Studios and released for the PlayStation 5 and Windows in August 2024. The game was reported to have had eight years of development work, estimated to have cost at least to develop, and was further funded when Firewalk was acquired by Sony Interactive Entertainment in 2023, where it was purportedly called "the future of PlayStation". The game was to be a live-service title supported by cosmetic microtransactions, but it was shipped as a premium title rather than free-to-play. At launch, the game saw fewer than 700 concurrent players through Steam and around 1,300 on PlayStation, and estimates of sales based on player counts were fewer than 25,000, according to Simon Carless. Carless cited the lack of marketing, the high price point for a live-service game, and the saturation of hero shooters already on the market as reasons for the poor sales. With poor sales during the first two weeks of release, Firewalk announced it would pull the game from sale, offer full refunds to all buyers, terminate the servers on September 6, 2024, and then determine what direction to take the game, if any, while offline. Sony closed Firewalk in October 2024 and permanently shut down Concord. Sony's COO/CFO Hiroki Totoki said in an investor call following the closure that "with regards to new IP, of course, you don't know the result until you actually try it", and that "we probably need to have a lot of gates, including user testing or internal evaluation, and the timing of such gates. And then we need to bring them forward, and we should have done those gates much earlier than we did."
Conker's Bad Fur Day
Conker's Bad Fur Day is a 3D platformer by Rare for the Nintendo 64 in which the player controls Conker, a greedy, hard-drinking squirrel, through a series of levels. While it is visually similar to Rare's previous games like Banjo-Kazooie and Donkey Kong 64, Conker's Bad Fur Day is aimed at mature audiences and features profanity, graphic violence, and off-color humor. The game was originally designed to be family-friendly, but was retooled after prerelease versions of the game were criticized for their similarities to Rare's previous games. Though it received critical acclaim, Conker's Bad Fur Day performed well below expectations, with only 55,000 copies sold by April 2001. Numerous reasons have been cited for the game's perceived failure to connect with audiences, such as its high cost, advertising aimed exclusively at an older audience, and its release towards the end of the Nintendo 64's life cycle. Nintendo, which held a minority stake in Rare at the time, also did not actively promote the game. After Microsoft bought out Rare, the game was remade for the Xbox as Conker: Live & Reloaded, which included an online multiplayer component based on part of the original game.
Daikatana
One of the more infamous failures in PC video games was Daikatana, which was heavily hyped due to creator John Romero's popular status as one of the key designers behind Doom. However, after being beset by massive overspending and serious delays, the game finally launched to an incredibly poor critical reaction because of bugs, lackluster enemies, poor gameplay, and weak production values, all of which were made worse by its heavy marketing campaign proclaiming it as the next "big thing" in first-person shooters.
Dominion: Storm Over Gift 3
The first title released by Ion Storm, Dominion is a real-time strategy title similar to Command & Conquer and Warcraft, developed as a spin-off of the G-Nome canon. The game was originally developed by 7th Level, but was purchased by Ion Storm for US$1.8 million. The project originally had a budget of US$50,000 and was scheduled to be finished in three months with two staff members. Due to mismanagement and Ion Storm's inexperience, the project took over a year, costing hundreds of thousands of dollars. Dominion was released in July 1998. It received bad reviews and sold poorly, falling far short of recouping its purchase price, let alone the cost of finishing it. The game divided employees working on Ion's marquee title, Daikatana, arguably leading to the walkout of several key development team members. It put a strain on Ion Storm's finances, leading the once well-funded startup to scramble for cash as Daikatana's development extended over several years.
Dragon Age: The Veilguard
Dragon Age: The Veilguard is a 2024 action role-playing game developed by BioWare and published by Electronic Arts. Released on October 31, 2024, it serves as the fourth major installment in the Dragon Age franchise, following Dragon Age: Inquisition.
The game had a protracted development period, restarting multiple times and at one point being planned as a live-service game, which led to what critics and players felt was an uneven and inconsistent game at release. By January 2025, EA said that The Veilguard had underperformed the expected player count by around 50%, contributing to lowered forecasts for its upcoming quarters.
Epic Mickey 2: The Power of Two
A sequel to the successful Wii-exclusive platformer Epic Mickey, Epic Mickey 2: The Power of Two was developed in 2012 by Junction Point Studios and published by Disney Interactive Studios for the Wii, Wii U, Xbox 360, PlayStation 3, PlayStation Vita, and Microsoft Windows. Though heavily advertised and released on multiple consoles, only 270,000 copies of Epic Mickey 2 were sold in North America, barely a quarter of the original's sales of 1.3 million. The game's failure led to the shutdown of Junction Point and the cancellation of future entries in the Epic Mickey series.
E.T. the Extra-Terrestrial
Based on Steven Spielberg's popular 1982 movie of the same name and reportedly coded in just five weeks, this Atari 2600 game was rushed to the market for the 1982 holiday season.
Even with 1.5 million copies sold, the sales figures came nowhere near Atari's expectations; the company had ordered production of five million copies, and many of the sold games were returned to Atari for refunds by dissatisfied consumers. It became an urban legend that Atari had buried the unsold cartridges of E.T. and other games in a landfill in Alamogordo, New Mexico, which was confirmed in 2014 when the site was allowed to be excavated, with former Atari personnel affirming they had dumped about 800,000 cartridges, including E.T. and other poorly selling games. The financial figures and business tactics surrounding this product are emblematic of the video game crash of 1983 and contributed to Atari's downfall. Atari paid $25 million for the license to produce the game, which further contributed to a debt of $536 million. The company was divided and sold in 1984.
Grim Fandango
Known for being the first adventure game by LucasArts to use three-dimensional graphics, Grim Fandango received positive reviews and won numerous awards. It was originally thought that the game sold well during the 1998 holiday season. However, the game's sales appeared to be crowded out by other titles released during the late 1998 season, including Half-Life, Metal Gear Solid and The Legend of Zelda: Ocarina of Time. Based on data provided by PC Data (now owned by NPD Group), the game sold about 95,000 copies up to 2003 in North America, excluding online sales. Worldwide sales were estimated at between 100,000 and 500,000 units by 2012. Developer Tim Schafer, along with others from the Grim Fandango development team, would leave LucasArts after this project to begin a new studio, Double Fine Productions. Grim Fandango's relatively modest sales are often cited as a contributing factor to the decline of the adventure game genre in the late 1990s, though the title's reputation as a "flop" is to an extent a case of perception over reality, as Schafer has pointed out that the game turned a profit, with the royalty check he eventually received being proof. His perspective is that the adventure genre did not so much lose its audience as fail to grow it while production costs continued to rise. This made adventure games a less attractive investment for publishers; in contrast, the success of first-person shooters caused the console market to boom. The emergence of new distribution channels which did not rely on the traditional retail model would ultimately re-empower the niche genre. Double Fine has since remastered the game with high-definition graphics and re-released it through these channels.
Jazz Jackrabbit 2
Although reviews for Jazz Jackrabbit 2 were positive, sales were insufficient and resulted in a loss for its publisher Gathering of Developers. This prevented the developers from finding a publisher for Jazz Jackrabbit 3, thus leading to its cancellation.
Kingdoms of Amalur: Reckoning
Kingdoms of Amalur: Reckoning is an action role-playing game released in 2012, developed by 38 Studios and Big Huge Games. 38 Studios had been formed by Curt Schilling, initially in Massachusetts. After acquiring Big Huge Games from the failing THQ, the studio secured a loan guarantee from the economic development board of Rhode Island in exchange for establishing 38 Studios within the state and promoting job growth. Kingdoms was generally well received by critics, and initial sales within the first three months were around 1.3 million units. Though impressive, Rhode Island recognized that the title had been expected to hit 3 million units by this point for 38 Studios to pay back the loan. 38 Studios defaulted on one of the loan repayments, leading a publisher to pull out of an investment in a sequel. The studio managed to make the next payment, but could not make payroll or other expenses, and shortly afterwards declared bankruptcy in May 2012. Resolving the unpaid loan became a civil lawsuit, with the state ultimately settling with Schilling and other investors for a payment, leaving the state short on its loan. The rights to Kingdoms eventually fell to THQ Nordic AB, the holding company that came to acquire many of the former THQ properties after their bankruptcy.
The Lamplighters League
The Lamplighters League is a turn-based tactics game developed by Harebrained Schemes and published by Paradox Interactive (the parent company of Harebrained Schemes) in 2023. The game was based on the adventures published in pulp magazines of the 1930s. While the game received positive reviews, it failed to achieve commercial success. A week after its release, Paradox stated that it intended to write off the game's development cost. This followed Paradox having laid off 80% of Harebrained Schemes' employees. Harebrained Schemes, which had been bought by Paradox in 2018, subsequently agreed to end its partnership with Paradox and reorganized as an independent company.
The Last Express
Released in 1997 after five years in development, this $6 million adventure game was the brainchild of Jordan Mechner, the creator of Prince of Persia. The game was noted for taking place in almost complete real-time, using Art Nouveau-style characters that were rotoscoped from a 22-day live-action video shoot, and featuring intelligent writing and levels of character depth that were not often seen in computer games. Even with rave reviews, Broderbund, the game's publisher, did little to promote the game, apart from a brief mention in a press release and enthusiastic statements by Broderbund executives, in part due to the entire Broderbund marketing team quitting in the weeks before its release. Released in April, the game was not a success, selling only about 100,000 copies, a million copies short of breaking even.
After the release of the game, Mechner's company Smoking Car Productions quietly folded, and Brøderbund was acquired by The Learning Company, which was only interested in Brøderbund's educational software, effectively putting the game out of print; this also caused the PlayStation port to be cancelled after it was nearly finished for a 1998 release. Mechner was later able to reacquire the rights to the game, and in 2012 worked with DotEmu to release an iOS port of the title, which later made it to Android as well.
MadWorld
MadWorld is a beat 'em up title for the Wii developed by PlatinumGames and distributed by Sega in March 2009. The game was deliberately designed to be extremely violent. It features a distinctive black-and-white graphic style that borrows from Frank Miller's Sin City and other Japanese and Western comics. This monotone coloring is only broken by the blood that comes from attacking and killing foes in numerous, gruesome, and over-the-top manners. Though there had been violent games available for the Wii from the day it launched (e.g. No More Heroes and Manhunt 2), many perceived MadWorld as one of the first mature titles for the system, causing some initial outrage from consumers concerned about the normally family-friendly system. MadWorld was well received by critics, but this did not translate into commercial sales; only 123,000 units of the game were sold in the United States during its first six months on the market. Sega considered these sales to be "disappointing". Regardless, the game's critical success allowed a successor, Anarchy Reigns (2012), to be produced.
Mario & Luigi: Bowser's Inside Story + Bowser Jr.'s Journey
After the success of Mario & Luigi: Superstar Saga + Bowser's Minions, a 2017 remake of the first game in the Mario & Luigi series, Nintendo announced a remake of Mario & Luigi: Bowser's Inside Story in March 2018 in a similar vein to the previous remake, with improved visuals, a remastered soundtrack, and an additional story in Bowser Jr.'s Journey. Upon release in January 2019, the game was a critical success, though it received slightly lower review scores than the original. However, the game was a commercial failure, selling under 9,500 copies in its first week in Japan. Famitsu reported that the game had a lifetime total of 34,523 copies, making it one of the worst-selling Mario games since the Virtual Boy. The game's failure was attributed to its being one of the last games on the Nintendo 3DS, with the Nintendo Switch having already been out for nearly two years. The game has also been credited with contributing to the bankruptcy of series developer AlphaDream.
Marvel vs. Capcom: Infinite
Marvel vs. Capcom: Infinite is the sixth main installment in Capcom's Marvel vs. Capcom series of fighting games, which pits Capcom's and Marvel's famous characters against each other. When the game was shown at E3 2017, some of the character designs were poorly received, particularly Chun-Li from Street Fighter and Dante from Devil May Cry. The game was also criticized for its lack of X-Men or Fantastic Four characters. Capcom projected that the game would sell two million units by December 31, 2017, but the game launched to a poor showing, hampered by its low budget, and generated only half of Capcom's projected figure. Marvel vs. Capcom: Infinite's failure led to the cancellation of DLC, its exclusion from tournaments such as Evo 2018, and Capcom remaining quiet about the title. The title's failure was also in part due to the competition it received from Bandai Namco's Dragon Ball FighterZ, which, along with its massive name recognition, took influence from Infinite's two predecessors, Marvel vs. Capcom 2: New Age of Heroes and Ultimate Marvel vs. Capcom 3.
Ōkami
Ōkami was a product of Clover Studio with direction by Hideki Kamiya, previously known for his work on the Resident Evil and Devil May Cry series. The game was favorably compared to Zelda-style adventures and follows the quest of the wolf goddess Amaterasu, who uses a "celestial brush" to draw magical effects on screen and restore the cursed land of ancient Nippon. Released first in 2006 on the PlayStation 2, it later received a port to the Wii, where the brush controls were reworked for the motion controls of the Wii Remote. The game was well received by critics, with Metacritic aggregate scores of 93% and 90% for the PlayStation 2 and Wii versions, respectively, and was considered one of the best titles of 2006; IGN named it their Game of the Year. Though strongly praised by critics, the game sold fewer than 600,000 units by March 2009. These factors led to Ōkami being called the "least commercially successful winner of a game of the year award" in the 2010 edition of the Guinness World Records Gamer's Edition. Shortly after its release, Capcom disbanded Clover Studio, though many of its employees, including Kamiya, went on to form PlatinumGames and produce MadWorld and the more successful Bayonetta. Strong fan support of the game led to a spiritual successor, Ōkamiden (2010), on the Nintendo DS, followed by a high-definition remaster of Ōkami for the PlayStation 3, PlayStation 4, Xbox One, Nintendo Switch and Microsoft Windows released during the mid-to-late 2010s.
Kamiya had expressed interest in a sequel to Ōkami since moving to PlatinumGames, stating he had been in talks with Capcom on the idea. He left PlatinumGames in mid-2023, at the time stating he was going independent. A sequel was announced at The Game Awards 2024 in December 2024, with development being led by Kamiya at a new studio called Clovers that included several former members of Clover Studio.
The Osu! Tatakae! Ouendan series
After developing the rhythm game Gitaroo Man, iNiS Corporation began work on a more innovative rhythm game for the Nintendo DS based on an idea from founder Keiichi Yano: players tap and drag on-screen targets in time with music to help an ouendan (cheer squad) encourage people in trouble to overcome their problems. Nintendo released the game as Osu! Tatakae! Ouendan. The game received positive reviews, leading to plans to develop a localized and upgraded reskin of the game titled Elite Beat Agents, as well as a sequel, Moero! Nekketsu Rhythm Damashii Osu! Tatakae! Ouendan 2. Despite all three games being praised by critics, none of them were commercially successful, with each selling fewer than a million copies. Poor sales, as well as the uniqueness of their target platform, prevented Nintendo from considering further sequels.
Overkill's The Walking Dead
Starbreeze Studios had acquired a license in 2014 to develop a game set in The Walking Dead franchise from Skybound Entertainment, using the cooperative gameplay mechanics from Payday: The Heist developed by Starbreeze's subsidiary Overkill Software. The game fell into development hell, largely due to demands from Starbreeze to switch game engines: first from the internally developed Diesel Engine, which had been used on the Payday games, to the newly developed Valhalla Engine, which Starbreeze had acquired at a large cost, and later to the Unreal Engine after Valhalla proved too difficult to work with. Having slipped its major release dates twice, the game was completed by November 2018, though the developers felt it had been rushed out without sufficient quality review and testing. Overkill's The Walking Dead had mediocre reviews on its release for Microsoft Windows, and only 100,000 units were sold within the month. Starbreeze had placed significant sales expectations behind the game, and with poor sales, the company put plans to release the game on consoles on hold, and in December 2018 announced that it was restructuring due to a lack of liquid assets resulting from the underperformance of Overkill's The Walking Dead. In Starbreeze's financial report for the quarter ending December 31, 2018, which included the release of Overkill's The Walking Dead, the game brought in only about , while Payday 2, a title released in 2013, made in the same quarter. By the end of February 2019, Skybound pulled its licensing agreement from Starbreeze, saying "ultimately 'Overkill's The Walking Dead' did not meet our standards nor is it the quality that we were promised", and later cancelled the game entirely; Starbreeze officially halted further development of the Windows version and cancelled the planned console ports. The poor returns on the game led Starbreeze to undergo a corporate restructuring from December 2018 to December 2019, laying off staff, selling off publishing and intellectual property rights, and reimplementing paid downloadable content for Payday 2, reneging on an earlier promise that all such future content would be free.
Psychonauts
Though achieving notable critical success, including GameSpot's 2005 Best Game No One Played award, Psychonauts sold fewer than 100,000 copies during its initial release. The game led to troubles at publisher Majesco Entertainment, including the resignation of its CEO and the plummeting of the company's stock, prompting a class-action lawsuit by the company's stockholders. At the time of its release in 2005, the game was considered the "poster child" for failures in innovative games. Its poor sales have also been blamed on a lack of marketing. However, today the game remains a popular title on various digital download services. The creator of Psychonauts, Tim Schafer, has a cult following due to his unique and eccentric style. Eventually, Double Fine would go on to acquire the full rights to publishing the game, and, with funding from Dracogen, created a Mac OS X and Linux port of the game, which was sold as part of a Humble Bundle in 2012 with nearly 600,000 bundles sold; according to Schafer, "We made more on Psychonauts [in 2012] than we ever have before." In August 2015, Steam Spy estimated approximately 1,157,000 owners of the game on the digital distributor Steam. Psychonauts was re-released in 2019 for the PlayStation 4 as a Standard Edition and a Collector's Edition, both region free, by publishing company Limited Run Games. A sequel, Psychonauts 2, was partially crowdfunded prior to Double Fine's acquisition by Microsoft who provided additional funding support, and was released in August 2021.
Puyo Puyo Chronicle
Puyo Puyo Chronicle is a 2016 puzzle video game developed and published by Sega for the Nintendo 3DS. Released only in Japan, it was intended to celebrate the 25th anniversary of Puyo Puyo, similar to titles like Puyo Puyo! 15th Anniversary and Puyo Puyo!! 20th Anniversary. The main game is an RPG that takes place in a storybook world, where the player progresses through the story by battling enemies and other characters in Puyo Puyo matches. While reviewers liked the change to a 3D art style and the game's multiplayer modes, the RPG mode received heavy criticism for being repetitive and tedious. Puyo Puyo Chronicle sold fewer than 22,000 copies in its lifetime, and series producer Mizuki Hosoyamada confirmed that the game had poor sales in a 2021 interview with Red Bull. This was the last dedicated anniversary game in the series; the series' 30th anniversary was instead celebrated via the Super Puyo Quest Project, a large update to Puyopuyo!! Quest that changed major parts of the game, and content updates to Puyo Puyo Tetris 2.
Shenmue and Shenmue II
Shenmue on the Dreamcast is more notorious for its overambitious budget than for its poor sales figures. At the time of its release in 1999, the game held the record for the most expensive production costs (over US$70 million), compounded by a five-year production time. In comparison, the game's total sales were 1.2 million copies. Shenmue, however, was a critical hit, earning an average review score of 89%. The game was intended to be the first installment of a trilogy. The second installment was eventually released in 2001, but by this time the Dreamcast was floundering, so the game only saw a release in Japan and Europe. Sega eventually released it for North American players on the Xbox, but the poor performance of both titles, combined with corporate restructuring, made Sega reluctant to complete the trilogy for fear of failing to see a return on the investment. However, a Kickstarter campaign for a third title received record support; originally set for a 2018 release, Shenmue III was eventually released on November 19, 2019. Ports of the first two titles were released in 2018 for PC, Xbox One and PlayStation 4 to re-acclimate players in preparation for the third title's release.
Sonic Boom: Rise of Lyric and Sonic Boom: Shattered Crystal
Sonic Boom: Rise of Lyric is a spin-off from the Sonic the Hedgehog series, released in 2014 and developed by Big Red Button Entertainment and IllFonic for Sega and Sonic Team, along with its handheld counterpart Sonic Boom: Shattered Crystal, which was developed by Sanzaru Games. Although both games received a negative reception, Rise of Lyric for the Wii U is particularly considered one of the worst video games of all time due to its many glitches, poor gameplay, and weak writing. Both games failed commercially, selling only 490,000 copies combined as of February 2015, making them the lowest-selling games in the Sonic franchise. Big Red Button had considered shutting down in the wake of Rise of Lyric's underperforming sales and poor reception.
Sonic Runners
Sonic Runners is a Sonic the Hedgehog game for Android and iOS. A side-scrolling endless runner, it was Sonic Team's first Sonic game developed exclusively for smartphones. It was soft-launched in select regions in February 2015 and officially released worldwide in June 2015 to mixed reviews. Although it was downloaded over five million times, the game's publisher, Sega, considered it a commercial failure because it made only between ¥30–50 million a month. As a result, it was discontinued in July 2016. Nintendo Life cited its failure as proof that the recognizability of a brand does not guarantee success.
Suicide Squad: Kill the Justice League
Suicide Squad: Kill the Justice League was developed by Rocksteady Studios, which had previously developed the highly successful Batman: Arkham series for Warner Bros. Games. A continuation of their work in the DC Comics universe, the game focused on DC's rogues' gallery of villains. The game received lukewarm reviews, with particular criticism aimed at its live-service elements, and fell below Warner Bros.' sales expectations. In May 2024, Warner Bros. took a $200 million impairment charge, in part due to the game's poor sales, and Rocksteady suffered layoffs in August 2024.
Sunset
Sunset, a first-person exploration adventure game about a housekeeper working for the dictator of a fictional country, was developed by the two-person Belgian studio Tale of Tales, which had previously created several acclaimed arthouse-style games and other audio-visual projects such as The Path. They wanted to make Sunset a "game for gamers" while still retaining their arthouse-style approach, and in addition to planning a commercial release, used Kickstarter to gain funding. Sunset sold only about 4,000 copies on its release, including those to Kickstarter backers. Tale of Tales opted to close their studio after sinking the company's finances into the game, and believed that if they did release any new games in the future, they would likely shy away from commercial release.
System Shock 2 System Shock 2 is the 1999 sequel to the 1994 immersive sim System Shock. The original game was made by Looking Glass Studios and published by Origin Systems, at the time a subsidiary of Electronic Arts. System Shock was critically praised and had modest sales. Irrational Games was formed in 1997 by three former Looking Glass employees; Looking Glass approached Irrational about co-developing a game like System Shock, and after several iterations they arrived at the idea of a direct sequel. System Shock 2 was similarly met with critical praise at release and was named Game of the Year by several publications, but it did not sell well, with only about 58,000 copies sold within eight months of release. For Looking Glass, System Shock 2, like Thief: The Dark Project, represented a game developed on a multi-million dollar budget that could not be recouped, and due to mounting debt, Looking Glass closed down in May 2000. Irrational wanted to continue developing the System Shock series, but Electronic Arts, which owned the intellectual property, felt sales of System Shock 2 were too low to justify a sequel. Instead, Irrational set out to develop a spiritual successor to the gameplay concepts of System Shock without using the property, resulting in their 2007 hit title BioShock.
Uru: Ages Beyond Myst
The fourth game in the popular Myst series, released in 2003, Uru was developed by Cyan Worlds, with development beginning shortly after Riven was completed. The game took Cyan Worlds more than five years and $12 million to complete and was codenamed DIRT ("D'ni in real time"), then MUDPIE (meaning "Multi-User DIRT, Persistent / Personal Interactive Entertainment / Experience / Exploration / Environment"). Though it received a generally positive reception, sales were disappointing. In comparison, the first three Myst games had sold more than 12 million units collectively before Uru's release. Uru's poor sales were also considered a factor in financially burdening Cyan, contributing to the company's near closure in 2005.
Warhammer Age of Sigmar: Realms of Ruin
The real-time strategy video game Realms of Ruin is based on Warhammer Age of Sigmar. It was developed by Frontier Developments and released on 17 November 2023. Despite "mixed or average" reviews on Metacritic, sales were much lower than expected. Concurrent players on Steam peaked at 1,572, with just 129 playing on 27 November 2023. Post-launch, Frontier adjusted its revenue expectations, forecasting a "loss of around £9 million". Gaming journalists generally consider Realms of Ruin a flop. Figures from Video Game Insights show that only 69,330 units had been sold via Steam by 18 July 2024, with 36 active players (24-hour peak) and 54.1% positive reviews. The game's poor performance caused Frontier Developments' shares to decline nearly 20%.
Arcade game failures
I, Robot
Released by Atari in 1983, I, Robot was the first video game to use 3-D polygon graphics, and the first that allowed the player to change camera angles. It also had gameplay that rewarded planning and stealth as much as reflexes and trigger speed, and included a non-game mode called "Doodle City", where players could make artwork using the polygons. Production estimates vary, but the consensus is that no more than 1,500 units were made.
Jack the Giantkiller
In 1982, the President of Cinematronics arranged a one-time purchase of 5000 printed circuit boards from Japan. The boards were used in the manufacture of several games, but the majority of them were reserved for the new arcade game Jack the Giantkiller, based on the classic fairy tale Jack and the Beanstalk. Between the purchase price of the boards and other expenses, Cinematronics invested almost two million dollars into Jack the Giantkiller. It completely flopped in the arcade and many of the boards went unsold, costing the company a huge amount of money at a time when it was already having financial difficulties.
Radar Scope Radar Scope was one of the first arcade games released by Nintendo. It was released in Japan first, and a brief run of success there led Nintendo to order 3,000 units for the American market in 1980. American operators were unimpressed, however, and Nintendo of America was stuck with about 2,000 unsold Radar Scope machines sitting in the warehouse.
Facing a potential financial disaster, Nintendo assigned the game's designer, Shigeru Miyamoto, to revamp the game. Instead he designed a brand new game that could be run in the same cabinets and on the same hardware as Radar Scope. That new game was the smash hit Donkey Kong, and Nintendo was able to recoup its investment in 1981 by converting the remaining unsold Radar Scope units to Donkey Kong and selling those.
Sundance Sundance is an arcade vector game released in 1979. Producer Cinematronics planned to manufacture about 1000 Sundance units, but sales suffered from a combination of poor gameplay and an abnormally high rate of manufacturing defects. The fallout rate in production was about 50%, the vector monitor (made by an outside vendor) had a defective picture tube that would arc and burn out if the game was left in certain positions during shipping, and according to programmer Tim Skelly, the circuit boards required a lot of cut-and-jumpering between mother and daughter boards that also made for a very fragile setup. The units that survived to reach arcade floors were not a hit with gamers—Skelly himself reportedly felt that the gameplay lacked the "anxiety element" necessary in a good game and asked Cinematronics not to release it, and in an April 1983 interview with Video Games Magazine he referred to Sundance as "a total dog".
See also
List of video games notable for negative reception
List of best-selling video games
List of films considered the worst
List of television shows notable for negative reception
List of video games considered the best
References
External links
The Dumbest 25 moments in gaming from GameSpy
The Silicon Valley 10 & 1 06.16.10: Top 10 Console Failures!
Computer and video gaming
Failure
Commercial failures in video gaming
Commercial failures in video gaming | List of commercial failures in video games | [
"Technology"
] | 17,037 | [
"History of video games",
"History of computing"
] |
2,004,299 | https://en.wikipedia.org/wiki/Patient%20gown | A hospital gown, sometimes called a johnny gown or johnny, especially in Canada and New England, is "a long loose piece of clothing worn in a hospital by someone doing or having an operation". It can be used as clothing for bedridden patients.
Utility
Hospital gowns worn by patients are designed so that hospital staff can easily access the part of the patient's body being treated.
The hospital gown is made of fabric that can withstand repeated laundering in hot water, usually cotton, and is fastened at the back with twill tape ties. Disposable hospital gowns may be made of paper or thin plastic, with paper or plastic ties.
Some gowns have snaps along the top of the shoulder and sleeves, so that the gown can be removed without disrupting intravenous lines in the patient's arms.
Hospital gowns used in psychiatric care will sometimes use snaps in the back instead of ties.
Used paper hospital gowns are associated with hospital infections, which could be avoided by proper disposal.
A Canadian study surveying patients at five hospitals determined that 57 percent could have worn more clothing below the waist, but only 11 percent wore more than a gown. The physicians conducting the survey said patients should not be required to wear gowns unless it is medically necessary. Although gowns are cheaper and easier to wash, Todd Lee, of Royal Victoria Hospital in Montreal, said they are not necessary unless the patient is incontinent or has an injury to the lower body. Otherwise, Lee said, pajamas or regular clothes may be acceptable.
Problems and redesign
Traditional hospital gowns were designed in an era when patients spent most of their hospital stays confined to bed. In this context, the open back of hospital gowns proved beneficial as it made dressing and undressing patients far easier for hospital staff and facilitated the use of a bedpan. Such advantages have become less salient over time as medical practices change to emphasize getting patients out of bed and mobile rather than encouraging prolonged bed rest. When upright and moving, the open back of a hospital gown exposes the buttocks and prevents the gown from retaining warmth, resulting in some patients experiencing embarrassment and discomfort while wearing them.
Some studies have been done on updating the garment for more modern needs. However, new-style gowns and other types of clothing can be more expensive. The Valley Hospital in Ridgewood, New Jersey reported a $70,000 a year increase.
Luke's Fast Breaks
When 9-year-old Luke Lange complained about wearing a hospital gown while being treated for Hodgkin lymphoma, his mother adapted some T-shirts for him to wear, using snap tape on the sides. Other children saw the t-shirt and wanted one too. Two years later, the organization Luke's FastBreaks had raised $1 million for children's cancer and given out over 5,000 of the t-shirts. They were long enough to wear like the gowns, but some preferred to wear them like t-shirts. Briton Lynn, executive director of Luke's FastBreaks, said the t-shirts helped children have a more positive attitude.
Traci Lamar design
In November 2006, Robert Wood Johnson Foundation gave a $236,000 grant to a team at North Carolina State University to design a new gown based on "style, cost, durability, comfort, function" and other qualities. NCSU professor Lamar's team worked to come up with a "more comfortable, less revealing" design. Surveys found that nurses did not like the ties in the back because knots could form, and some patients wore more than one gown at once, with one tied in front and the other in back. Many patients disliked how lightweight gowns were. In April 2009, the NCSU team showed potential new designs at a reception, and they were preparing to ask for more funding as they developed a prototype. Meanwhile, some hospitals were offering alternatives, including gowns that opened in the front or on the side, and drawstring pants, cotton tops and boxers. These cost more than traditional gowns. Lamar's additional funding came from RocketHub. At NCSU Fashion Week in 2013, Lamar's design was mentioned as "functional and dignified," but not shown "to prevent any patent infringements". A prototype, made of DermaFabric and made at Precision Fabrics in Greensboro, North Carolina, was to be tested at WakeMed.
DCS gown
In 2009, Fatima Ba-Alawi was honored for her DCS (dignity, comfort, safety) gown at an RCN conference in London. Four years after she started using her dressmaking skills to redesign hospital gowns, NHS trusts were using the design. The reversible gowns have plastic poppers which make it easier to change patients without moving them and save staff time, and side pockets for drips or catheters, along with a pouch for cardio equipment. One version, called the Faith Gown, has a detachable head scarf and long sleeves.
Ben de Lisi design
Another redesign in England came from Ben de Lisi, one of six receiving grants. The Design Council was scheduled to show his design, which did not open in the back but did allow access, in March 2010.
According to the BBC, de Lisi's hospital gowns are made much more modestly, taking patient dignity into consideration.
Cleveland Clinic design
The Cleveland Clinic changed its gowns in 2010 because the CEO had heard many complaints. It commissioned Diane von Furstenberg to design stylish hospital gowns based on her fashionable wrap dress. The new design was reversible with a V-neck in both the front and the back, and used softer fabric.
Dignity Giving Suit
Birmingham Children's Hospital in England introduced the polyester/cotton Dignity Giving Suit in March 2013, after 18 months of work. Patients and health care professionals liked the suits with Velcro fasteners on the seams. Other area hospitals were interested. Adults wanted the gowns to be made for them as well as children.
Janice Fredrickson
A design patented in 2014 by Janice Fredrickson had a side opening and sleeve openings, and could be put on without the patient sitting up. One version had pockets for telemetry wires and for drainage bags. It was suggested that different colors be used for different patients, such as those at risk of falling.
Model G design
In 2015, Henry Ford Health System of Detroit was working on its own design, similar to a bathrobe and made from a cotton blend. In tests, patients liked the new design. But any update was likely to cost more and be harder to care for. The Model G design, to be made by Carhartt of Michigan, used snaps on the front and shoulders.
See also
Adaptive clothing
Gown
Locking clothing
Scrubs (clothing)
References
Gowns
Medical equipment | Patient gown | [
"Biology"
] | 1,395 | [
"Medical equipment",
"Medical technology"
] |
2,004,663 | https://en.wikipedia.org/wiki/Angiogenesis%20inhibitor | An angiogenesis inhibitor is a substance that inhibits the growth of new blood vessels (angiogenesis). Some angiogenesis inhibitors are endogenous and a normal part of the body's control and others are obtained exogenously through pharmaceutical drugs or diet.
While angiogenesis is a critical part of wound healing and other favorable processes, certain types of angiogenesis are associated with the growth of malignant tumors. Thus angiogenesis inhibitors have been closely studied for possible cancer treatment. Angiogenesis inhibitors were once thought to have potential as a "silver bullet" treatment applicable to many types of cancer, but the limitations of anti-angiogenic therapy have been shown in practice. Currently, angiogenesis inhibitors are recognized for their improvement of cancer immunotherapy by overcoming endothelial cell anergy. Angiogenesis inhibitors are also used to effectively treat macular degeneration in the eye, and other diseases that involve a proliferation of blood vessels.
Mechanism of action
When a tumor stimulates the growth of new vessels, it is said to have undergone an 'angiogenic switch'. The principal stimulus for this angiogenic switch appears to be oxygen deprivation, although other stimuli such as inflammation, oncogenic mutations and mechanical stress may also play a role. The angiogenic switch leads to tumor expression of pro-angiogenic factors and increased tumor vascularization. Specifically, tumor cells release various pro-angiogenic paracrine factors (including angiogenin, vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF), and transforming growth factor-β (TGF-β)). These stimulate endothelial cell proliferation, migration and invasion, resulting in new vascular structures sprouting from nearby blood vessels. Cell adhesion molecules, such as integrins, are critical to the attachment and migration of endothelial cells to the extracellular matrix.
VEGF pathway inhibition
Inhibiting angiogenesis requires treatment with anti-angiogenic factors, or drugs which reduce the production of pro-angiogenic factors, prevent them binding to their receptors or block their actions. Inhibition of the VEGF pathway has become the focus of angiogenesis research, as approximately 60% of malignant tumors express high concentrations of VEGF. Strategies to inhibit the VEGF pathway include antibodies directed against VEGF or VEGFR, soluble VEGFR/VEGFR hybrids, and tyrosine kinase inhibitors. The most widely used VEGF pathway inhibitor on the market today is Bevacizumab, which binds to VEGF and prevents it from binding to VEGF receptors.
Endogenous regulation
Angiogenesis is regulated by the activity of endogenous stimulators and inhibitors. Endogenous inhibitors, found in the body naturally, are involved in the day-to-day process of regulating blood vessel formation. Endogenous inhibitors are often derived from the extracellular matrix or basement membrane proteins and function by interfering with endothelial cell formation and migration, endothelial tube morphogenesis, and down-regulation of genes expressed in endothelial cells.
During tumor growth, the action of angiogenesis stimulators surpasses the control of angiogenesis inhibitors, allowing for unregulated or less regulated blood vessel growth and formation. Endogenous inhibitors are attractive targets for cancer therapy because they are less toxic and less likely to lead to drug resistance than some exogenous inhibitors. However, the therapeutic use of endogenous inhibitors has disadvantages. In animal studies, high doses of inhibitors were required to prevent tumor growth and the use of endogenous inhibitors would likely be long-term.
A recent method for delivering anti-angiogenesis factors to tumor regions in cancer patients uses genetically modified bacteria that are able to colonize solid tumors in vivo, such as Clostridium, Bifidobacteria and Salmonella; genes for anti-angiogenic factors such as endostatin or the IP-10 chemokine are added, and any harmful virulence genes are removed. A targeting element can also be added to the outside of the bacteria so that they are sent to the correct organ in the body. The bacteria can then be injected into the patient, and they localize to the tumor site, where they release a continual supply of the desired drugs in the vicinity of the growing cancer mass, preventing it from gaining access to oxygen and ultimately starving the cancer cells. This method has been shown to work both in vitro and in vivo in mouse models, with very promising results. It is expected that this method will become commonplace for treatment of various cancer types in humans in the future.
Exogenous regulation
Diet
Some common components of human diets also act as mild angiogenesis inhibitors and have therefore been proposed for angioprevention, the prevention of metastasis through the inhibition of angiogenesis. In particular, the following foods contain significant inhibitors and have been suggested as part of a healthy diet for this and other benefits:
Soy products such as tofu and tempeh, (which contain the inhibitor "genistein")
Agaricus subrufescens mushrooms (contain the inhibitors sodium pyroglutamate and ergosterol)
Black raspberry (Rubus occidentalis) extract
Lingzhi mushrooms (via inhibition of VEGF and TGF-beta)
Trametes versicolor mushrooms (Polysaccharide-K)
Maitake mushrooms (via inhibition of VEGF)
Phellinus linteus mushrooms (via active substance Interfungins A inhibition of glycation)
Green tea (catechins)
Liquorice (glycyrrhizic acid)
Red wine (resveratrol)
Antiangiogenic phytochemicals and medicinal herbs
Royal Jelly (Queen bee acid)
Drugs
Research and development in this field has been driven largely by the desire to find better cancer treatments. Tumors cannot grow larger than 2mm without angiogenesis. By stopping the growth of blood vessels, scientists hope to cut the means by which tumors can nourish themselves and thus metastasize.
In addition to their use as anti-cancer drugs, angiogenesis inhibitors are being investigated for their use as anti-obesity agents, as blood vessels in adipose tissue never fully mature, and are thus destroyed by angiogenesis inhibitors. Angiogenesis inhibitors are also used as treatment for the wet form of macular degeneration. By blocking VEGF, inhibitors can cause regression of the abnormal blood vessels in the retina and improve vision when injected directly into the vitreous humor of the eye.
Overview
Bevacizumab
Through binding to VEGFR and other VEGF receptors in endothelial cells, VEGF can trigger multiple cellular responses like promoting cell survival, preventing apoptosis, and remodeling cytoskeleton, all of which promote angiogenesis. Bevacizumab (brand name Avastin) traps VEGF in the blood, lowering the binding of VEGF to its receptors. This results in reduced activation of the angiogenesis pathway, thus inhibiting new blood vessel formation in tumors.
After a series of clinical trials in 2004, Avastin was approved by the FDA, becoming the first commercially available anti-angiogenesis drug. FDA approval of Avastin for breast cancer treatment was later revoked on November 18, 2011.
Thalidomide
Despite the therapeutic potential of anti-angiogenesis drugs, they can also be harmful when used inappropriately. Thalidomide is one such antiangiogenic agent. Thalidomide was given to pregnant women to treat nausea. However, when pregnant women take an antiangiogenic agent, the developing fetus will not form blood vessels properly, thereby preventing the proper development of fetal limbs and circulatory systems. In the late 1950s and early 1960s, thousands of children were born with deformities, most notably phocomelia, as a consequence of thalidomide use.
Cannabinoids
According to a study published in the August 15, 2004 issue of the journal Cancer Research, cannabinoids, the active ingredients in marijuana, restrict the sprouting of blood vessels to gliomas (brain tumors) implanted under the skin of mice, by inhibiting the expression of genes needed for the production of vascular endothelial growth factor (VEGF).
General side effects of drugs
Bleeding
Bleeding is one of the most difficult side effects to manage; this complication is somewhat inherent to the effectiveness of the drug. Bevacizumab has been shown to be the drug most likely to cause bleeding complications. While the mechanisms of bleeding induced by anti-VEGF agents are complicated and not yet totally understood, the most accepted hypothesis is that VEGF could promote endothelial cell survival and integrity in the adult vasculature and its inhibition may decrease capacity for renewal of damaged endothelial cells.
Increased blood pressure
In a study done by ML Maitland, a mean blood pressure increase of 8.2 mm Hg systolic and 6.5 mm Hg diastolic was reported in the first 24 hours after the first treatment with sorafenib, a VEGF pathway inhibitor.
Less common side effects
Because these drugs act on parts of the blood and blood vessels, they tend to have side effects that affect these processes. Aside from problems with hemorrhage and hypertension, less common side effects of these drugs include dry, itchy skin, hand-foot syndrome (tender, thickened areas on the skin, sometimes with blisters on palms and soles), diarrhea, fatigue, and low blood counts. Angiogenesis inhibitors can also interfere with wound healing and cause cuts to re-open or bleed. Rarely, perforations (holes) in the intestines can occur.
See also
Brain-specific angiogenesis inhibitor 1 (and others)
References
Further reading
Milosevic, V., Edelmann, R.J., Fosse, J.H., Östman, A., Akslen, L.A. (2022). Molecular Phenotypes of Endothelial Cells in Malignant Tumors. In: Akslen, L.A., Watnick, R.S. (eds) Biomarkers of the Tumor Microenvironment. Springer, Cham. https://doi.org/10.1007/978-3-030-98950-7_3
External links
The idea of antiangiogenesis was pioneered by Dr. Judah Folkman. See and
Angiogenesis Inhibitors for Cancer - from The Angiogenesis Foundation, 23 June 2009
Angiogenesis Inhibitors for Eye Disease - from The Angiogenesis Foundation, 23 June 2009
Angiogenesis Inhibitors in the Treatment of Cancer - from the National Cancer Institute
Angiology | Angiogenesis inhibitor | [
"Biology"
] | 2,239 | [
"Angiogenesis",
"Angiogenesis inhibitors"
] |
2,004,719 | https://en.wikipedia.org/wiki/Nuclear%20matrix | In biology, the nuclear matrix is the network of fibres found throughout the inside of a cell nucleus after a specific method of chemical extraction. According to some it is somewhat analogous to the cell cytoskeleton. In contrast to the cytoskeleton, however, the nuclear matrix has been proposed to be a dynamic structure. Along with the nuclear lamina, it supposedly aids in organizing the genetic information within the cell.
The exact function of this structure is still disputed, and its very existence has been called into question. Evidence for such a structure was recognised as long ago as 1948, and consequently many proteins associated with the matrix have been discovered. The presence of intra-cellular proteins is common ground, and it is agreed that proteins such as the Scaffold, or Matrix Associated Proteins (SAR or MAR) have some role in the organisation of chromatin in the living cell. There is evidence that the nuclear matrix is involved in regulation of gene expression in Arabidopsis thaliana.
Whether a similar structure can actually be found in living cells remains a topic of discussion. According to some sources, most, if not all, proteins found in the nuclear matrix are aggregates of proteins from structures that can be found in the nucleus of living cells. One such structure is the nuclear lamina, which consists of proteins termed lamins that can also be found in the nuclear matrix.
Validity of nuclear matrix
For a long time the question whether a polymer meshwork, a “nuclear matrix” or “nuclear-scaffold” or "NuMat" is an essential component of the in vivo nuclear architecture has remained a matter of debate. While there are arguments that the relative position of chromosome territories (CTs), the equivalent of condensed metaphase chromosomes at interphase, may be maintained due to steric hindrance or electrostatic repulsion forces between the apparently highly structured CT surfaces, this concept has to be reconciled with observations according to which cells treated with the classical matrix-extraction procedures maintain defined territories up to the point where a minor subset of acidic nuclear matrix proteins is released – very likely those proteins that governed their association with the nuclear skeleton. The nuclear matrix proteome consists of structural proteins, chaperones, DNA/RNA-binding proteins, chromatin remodeling and transcription factors. The complexity of NuMat is an indicator of diverse structural and functional significance of its proteins.
Scaffold/Matrix attachment regions (S/MARs)
S/MARs (scaffold/matrix attachment regions), the DNA regions that are known to attach genomic DNA to variety of nuclear proteins, show an ever increasing spectrum of established biological activities. There is a known overlap of this large group of sequences with sequences termed LADs (lamina attachment domains).
S/MARs find increasing use for the rational design of vectors with widespread use in gene therapy and biotechnology. Nowadays S/MAR functions can be modulated, improved and custom-tailored to the specific needs of novel vector systems.
Nuclear matrix and cancer
The nuclear matrix composition of human cells has been shown to be cell-type- and tumor-specific. It has been clearly demonstrated that the nuclear matrix composition of a tumor differs from that of its normal counterpart. This could be useful for characterizing cancer markers and detecting the disease earlier. Such markers have been found in urine and blood and could potentially be used in the early detection and prognosis of human cancers.
See also
References
Further reading
Matrices (biology)
Nuclear substructures
Molecular genetics | Nuclear matrix | [
"Chemistry",
"Biology"
] | 706 | [
"Molecular genetics",
"Molecular biology"
] |
3,742,793 | https://en.wikipedia.org/wiki/Time%E2%80%93space%20compression | Time–space compression (also known as space–time compression and time–space distanciation) is an idea referring to the altering of the qualities of space–time and the relationship between space and time that is a consequence of the expansion of capital. It is rooted in Karl Marx's notion of the "annihilation of space by time" originally elaborated in the Grundrisse, and was later articulated by Marxist geographer David Harvey in his book The Condition of Postmodernity. A similar idea was proposed by Elmar Altvater in an article in PROKLA in 1987, translated into English as "Ecological and Economic Modalities of Time and Space" and published in Capitalism Nature Socialism in 1990.
Time–space compression occurs as a result of technological innovations driven by the global expansion of capital that condense or elide spatial and temporal distances, including technologies of communication (telegraph, telephones, fax machines, Internet) and travel (rail, cars, trains, jets), driven by the need to overcome spatial barriers, open up new markets, speed up production cycles, and reduce the turnover time of capital.
According to Paul Virilio, time-space compression is an essential facet of capitalist life, saying that "we are entering a space which is speed-space ... This new other time is that of electronic transmission, of high-tech machines, and therefore, man is present in this sort of time, not via his physical presence, but via programming" (qtd. in Decron 71). In Speed and Politics, Virilio coined the term dromology to describe the study of "speed-space". Virilio describes velocity as the hidden factor in wealth and power, where historical eras and political events are effectively speed-ratios. In his view, acceleration destroys space and compresses time in ways of perceiving reality.
Theorists generally identify two historical periods in which time–space compression occurred; the period from the mid-19th century to the beginnings of the First World War, and the end of the 20th century. In both of these time periods, according to Jon May and Nigel Thrift, "there occurred a radical restructuring in the nature and experience of both time and space ... both periods saw a significant acceleration in the pace of life concomitant with a dissolution or collapse of traditional spatial co-ordinates".
Criticism
Doreen Massey critiqued the idea of time-space compression in her discussion of globalization and its effect on our society. She insisted that any ideas that our world is "speeding up" and "spreading out" should be placed within local social contexts. "Time-space compression", she argues, "needs differentiating socially": "how people are placed within 'time-space compression' are complicated and extremely varied". In effect, Massey is critical of the notion of "time-space compression" as it represents capital's attempts to erase the sense of the local and masks the dynamic social ways through which places remain "meeting places".
For Moishe Postone, Harvey's treatment of space-time compression and postmodern diversity are merely reactions to capitalism. Hence Harvey's analysis remains "extrinsic to the social forms expressed" by the deep structure concepts of capital, value and the commodity. For Postone, the postmodern moment is not necessarily just a one-sided effect of the contemporary form of capitalism but can also be seen as having an emancipatory side if it happened to be part of post-capitalism. And because postmodernism usually neglects its own context of embeddedness it can legitimate capitalism as postmodern, whereas at the level of deep structure it may in fact be more concentrated, with large capitals that accumulate rather than diverge, and with an expansion of commodification niches with fewer buyers.
Postone asserts that one cannot step outside capitalism and declare it a pure evil or a one-dimensional badness, since the emancipatory content of such things as equal distribution or diversity is a potential of capitalism itself, with its abundant and diverse productive powers. This perspective misfires, however, when forms of society such as modernity and subsequently postmodernism take themselves to be the true whole of life by virtue of being oppositional to capitalism, when in fact they are grounded in the reproduction of the same capitalist relations that created them.
See also
Global village
Late capitalism
Late modernism
Social production of space
Space of flows
References
Further reading
Jeff Lewis (2008), Cultural Studies, Sage, London. .
Sophie Raine (2022), What is Time-Space Compression?
Perception
History of telecommunications
Spacetime
Postmodernism
Postmodern theory
Cultural geography
Globalization | Time–space compression | [
"Physics",
"Mathematics"
] | 968 | [
"Spacetime",
"Vector spaces",
"Space (mathematics)",
"Theory of relativity"
] |
3,747,894 | https://en.wikipedia.org/wiki/Atmospheric%20dispersion%20modeling | Atmospheric dispersion modeling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that include algorithms to solve the mathematical equations that govern the pollutant dispersion. The dispersion models are used to estimate the downwind ambient concentration of air pollutants or toxins emitted from sources such as industrial plants, vehicular traffic or accidental chemical releases. They can also be used to predict future concentrations under specific scenarios (i.e. changes in emission sources). Therefore, they are the dominant type of model used in air quality policy making. They are most useful for pollutants that are dispersed over large distances and that may react in the atmosphere. For pollutants that have a very high spatio-temporal variability (i.e. have very steep distance to source decay such as black carbon) and for epidemiological studies statistical land-use regression models are also used.
Dispersion models are important to governmental agencies tasked with protecting and managing the ambient air quality. The models are typically employed to determine whether existing or proposed new industrial facilities are or will be in compliance with the National Ambient Air Quality Standards (NAAQS) in the United States and other nations. The models also serve to assist in the design of effective control strategies to reduce emissions of harmful air pollutants. During the late 1960s, the Air Pollution Control Office of the U.S. EPA initiated research projects that would lead to the development of models for use by urban and transportation planners. A significant early application of a roadway dispersion model resulting from that research was to the Spadina Expressway in Canada in 1971.
Air dispersion models are also used by public safety responders and emergency management personnel for emergency planning of accidental chemical releases. Models are used to determine the consequences of accidental releases of hazardous or toxic materials. Accidental releases may result in fires, spills or explosions that involve hazardous materials, such as chemicals or radionuclides. The results of dispersion modeling, using worst-case accidental release source terms and meteorological conditions, can provide an estimate of the impacted areas and ambient concentrations, and can be used to determine the protective actions appropriate in the event a release occurs. Appropriate protective actions may include evacuation or shelter in place for persons in the downwind direction. At industrial facilities, this type of consequence assessment or emergency planning is required under the U.S. Clean Air Act (CAA) codified in Part 68 of Title 40 of the Code of Federal Regulations.
The dispersion models vary depending on the mathematics used to develop the model, but all require the input of data that may include:
Meteorological conditions such as wind speed and direction, the amount of atmospheric turbulence (as characterized by what is called the "stability class"), the ambient air temperature, the height to the bottom of any inversion aloft that may be present, cloud cover and solar radiation.
Source term (the concentration or quantity of toxins in emission or accidental release source terms) and temperature of the material
Emissions or release parameters such as source location and height, type of source (i.e., fire, pool or vent stack) and exit velocity, exit temperature and mass flow rate or release rate.
Terrain elevations at the source location and at the receptor location(s), such as nearby homes, schools, businesses and hospitals.
The location, height and width of any obstructions (such as buildings or other structures) in the path of the emitted gaseous plume, surface roughness or the use of a more generic parameter "rural" or "city" terrain.
Many of the modern, advanced dispersion modeling programs include a pre-processor module for the input of meteorological and other data, and many also include a post-processor module for graphing the output data and/or plotting the area impacted by the air pollutants on maps. The plots of areas impacted may also include isopleths showing areas of minimal to high concentrations that define areas of the highest health risk. The isopleths plots are useful in determining protective actions for the public and responders.
The atmospheric dispersion models are also known as atmospheric diffusion models, air dispersion models, air quality models, and air pollution dispersion models.
Atmospheric layers
Discussion of the layers in the Earth's atmosphere is needed to understand where airborne pollutants disperse in the atmosphere. The layer closest to the Earth's surface is known as the troposphere. It extends from sea level to a height of about 18 km and contains about 80 percent of the mass of the overall atmosphere. The stratosphere is the next layer and extends from about 18 km to about 50 km. The third layer is the mesosphere, which extends from about 50 km to about 80 km. There are other layers above 80 km, but they are insignificant with respect to atmospheric dispersion modeling.
The lowest part of the troposphere is called the planetary boundary layer (PBL), or sometimes the atmospheric boundary layer. The air temperature of the PBL decreases with increasing altitude until it reaches a capping inversion, which is a type of inversion layer where warmer air sits higher in the atmosphere than cooler air. We call the region of the PBL below its capping inversion the convective planetary boundary layer; it is typically 1 to 2 km in height. The upper part of the troposphere (i.e., above the inversion layer) is called the free troposphere and it extends up to the tropopause (the boundary in the Earth's atmosphere between the troposphere and the stratosphere). In tropical and mid-latitudes during daytime, the free convective layer can comprise the entire troposphere, which extends up to about 20 km in the Intertropical Convergence Zone.
The PBL is important with respect to the transport and dispersion of airborne pollutants because the turbulent dynamics of wind are strongest at Earth's surface. The part of the PBL between the Earth's surface and the bottom of the inversion layer is known as the mixing layer. Almost all of the airborne pollutants emitted into the ambient atmosphere are transported and dispersed within the mixing layer. Some of the emissions penetrate the inversion layer and enter the free troposphere above the PBL.
In summary, the layers of the Earth's atmosphere from the surface of the ground upwards are: the PBL, made up of the mixing layer capped by the inversion layer; the free troposphere; the stratosphere; the mesosphere and others. Many atmospheric dispersion models are referred to as boundary layer models because they mainly model air pollutant dispersion within the PBL. To avoid confusion, models referred to as mesoscale models have dispersion modeling capabilities that extend horizontally up to a few hundred kilometres; this does not mean that they model dispersion in the mesosphere.
Gaussian air pollutant dispersion equation
The technical literature on air pollution dispersion is quite extensive and dates back to the 1930s and earlier. One of the early air pollutant plume dispersion equations was derived by Bosanquet and Pearson. Their equation did not assume Gaussian distribution nor did it include the effect of ground reflection of the pollutant plume.
Sir Graham Sutton derived an air pollutant plume dispersion equation in 1947 which did include the assumption of Gaussian distribution for the vertical and crosswind dispersion of the plume and also included the effect of ground reflection of the plume.
Under the stimulus provided by the advent of stringent environmental control regulations, there was an immense growth in the use of air pollutant plume dispersion calculations between the late 1960s and today. A great many computer programs for calculating the dispersion of air pollutant emissions were developed during that period of time and they were called "air dispersion models". The basis for most of those models was the Complete Equation For Gaussian Dispersion Modeling Of Continuous, Buoyant Air Pollution Plumes shown below:
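In the notation commonly used for this equation (for example in Beychok's book, discussed later in this article), it can be written as:

C = (Q / u) · [f / (σ_y √(2π))] · [(g₁ + g₂ + g₃) / (σ_z √(2π))]

where:
C = concentration of emissions, in g/m³, at a receptor located x meters downwind, y meters crosswind and z meters above ground level
Q = source pollutant emission rate, in g/s
u = horizontal wind velocity along the plume centerline, in m/s
H = height of the emission plume centerline above ground level, in m
L = height from ground level to the bottom of the inversion lid, in m
σ_y, σ_z = horizontal and vertical standard deviations of the emission distribution, in m
f = crosswind dispersion parameter = exp[−y² / (2σ_y²)]
g₁ = vertical dispersion with no reflections = exp[−(z − H)² / (2σ_z²)]
g₂ = vertical dispersion with reflection from the ground = exp[−(z + H)² / (2σ_z²)]
g₃ = vertical dispersion with reflections from an inversion aloft = Σ (m = 1 to ∞) { exp[−(z − H − 2mL)² / (2σ_z²)] + exp[−(z + H + 2mL)² / (2σ_z²)] + exp[−(z + H − 2mL)² / (2σ_z²)] + exp[−(z − H + 2mL)² / (2σ_z²)] }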
The above equation not only includes upward reflection from the ground, it also includes downward reflection from the bottom of any inversion lid present in the atmosphere.
The sum of the four exponential terms in g₃ converges to a final value quite rapidly. For most cases, the summation of the series with m = 1, m = 2 and m = 3 will provide an adequate solution.
σ_y and σ_z are functions of the atmospheric stability class (i.e., a measure of the turbulence in the ambient atmosphere) and of the downwind distance to the receptor. The two most important variables affecting the degree of pollutant emission dispersion obtained are the height of the emission source point and the degree of atmospheric turbulence. The more turbulence, the better the degree of dispersion.
Equations for σ_y and σ_z are:
σ_y(x) = exp(I_y + J_y ln(x) + K_y [ln(x)]²)
σ_z(x) = exp(I_z + J_z ln(x) + K_z [ln(x)]²)
(units of σ_y, σ_z and x are in meters)
The classification of stability classes was proposed by F. Pasquill. The six stability classes are:
A-extremely unstable
B-moderately unstable
C-slightly unstable
D-neutral
E-slightly stable
F-moderately stable
The resulting calculations for air pollutant concentrations are often expressed as an air pollutant concentration contour map in order to show the spatial variation in contaminant levels over a wide area under study. In this way the contour lines can overlay sensitive receptor locations and reveal the spatial relationship of air pollutants to areas of interest.
Whereas older models rely on stability classes (see air pollution dispersion terminology) for the determination of σ_y and σ_z, more recent models increasingly rely on the Monin-Obukhov similarity theory to derive these parameters.
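As a rough illustration only, the following Python sketch evaluates the Gaussian plume equation given above, truncating the reflection series at m = 3 as suggested. The function and variable names are illustrative, and the curve-fit coefficients for σ_y and σ_z must be taken from published tables for the relevant stability class; none are supplied here.

```python
import math

def sigma_fit(x, I, J, K):
    # Curve fit of the form sigma(x) = exp(I + J*ln(x) + K*[ln(x)]^2),
    # with x and sigma in meters; I, J, K depend on the stability class
    # and must come from published tables (no real values are given here).
    lnx = math.log(x)
    return math.exp(I + J * lnx + K * lnx * lnx)

def plume_concentration(Q, u, y, z, H, L, sig_y, sig_z, m_terms=3):
    # Concentration (g/m^3) at a receptor y meters crosswind and z meters
    # above ground, for a plume of centerline height H (m), wind speed u (m/s),
    # emission rate Q (g/s) and inversion-lid height L (m). sig_y and sig_z
    # are the dispersion coefficients already evaluated at the receptor's
    # downwind distance.
    f = math.exp(-y**2 / (2.0 * sig_y**2))                    # crosswind term
    g = math.exp(-(z - H)**2 / (2.0 * sig_z**2))              # direct plume
    g += math.exp(-(z + H)**2 / (2.0 * sig_z**2))             # ground reflection
    for m in range(1, m_terms + 1):                           # inversion-lid reflections
        for arg in (z - H - 2*m*L, z + H + 2*m*L, z + H - 2*m*L, z - H + 2*m*L):
            g += math.exp(-arg**2 / (2.0 * sig_z**2))
    return (Q / u) * f * g / (2.0 * math.pi * sig_y * sig_z)
```

In use, σ_y and σ_z would first be computed at the receptor's downwind distance (with sigma_fit or a table lookup for the relevant Pasquill class) and then passed to plume_concentration.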
Briggs plume rise equations
The Gaussian air pollutant dispersion equation (discussed above) requires the input of H, the pollutant plume's centerline height above ground level. H is the sum of Hs (the actual physical height of the pollutant plume's emission source point) and ΔH (the plume rise due to the plume's buoyancy).
To determine ΔH, many if not most of the air dispersion models developed between the late 1960s and the early 2000s used what are known as the Briggs equations. G.A. Briggs first published his plume rise observations and comparisons in 1965. In 1968, at a symposium sponsored by CONCAWE (a Dutch organization), he compared many of the plume rise models then available in the literature. In that same year, Briggs also wrote the section of the publication edited by Slade dealing with the comparative analyses of plume rise models. That was followed in 1969 by his classical critical review of the entire plume rise literature, in which he proposed a set of plume rise equations which have become widely known as "the Briggs equations". Subsequently, Briggs modified his 1969 plume rise equations in 1971 and in 1972.
Briggs divided air pollution plumes into these four general categories:
Cold jet plumes in calm ambient air conditions
Cold jet plumes in windy ambient air conditions
Hot, buoyant plumes in calm ambient air conditions
Hot, buoyant plumes in windy ambient air conditions
Briggs considered the trajectory of cold jet plumes to be dominated by their initial velocity momentum, and the trajectory of hot, buoyant plumes to be dominated by their buoyant momentum to the extent that their initial velocity momentum was relatively unimportant. Although Briggs proposed plume rise equations for each of the above plume categories, it is important to emphasize that "the Briggs equations" which became widely used are those that he proposed for bent-over, hot buoyant plumes.
In general, Briggs's equations for bent-over, hot buoyant plumes are based on observations and data involving plumes from typical combustion sources such as the flue gas stacks from steam-generating boilers burning fossil fuels in large power plants. Therefore, the stack exit velocities were probably in the range of 20 to 100 ft/s (6 to 30 m/s) with exit temperatures ranging from 250 to 500 °F (120 to 260 °C).
A logic diagram for using the Briggs equations to obtain the plume rise trajectory of bent-over buoyant plumes is presented below:
where:
Δh = plume rise, in m
F = buoyancy factor, in m⁴ s⁻³
x = downwind distance from plume source, in m
x_f = downwind distance from plume source to point of maximum plume rise, in m
u = windspeed at actual stack height, in m/s
s = stability parameter, in s⁻²
The above parameters used in the Briggs' equations are discussed in Beychok's book.
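One commonly quoted version of the logic for bent-over buoyant plumes can be sketched in Python as below. The constants (49, 119, 1.6, 2.4) and the simple stable/unstable branching follow the form usually reproduced in secondary sources and should be treated as assumptions to be checked against Briggs' papers or Beychok's book; the function names and the buoyancy-factor approximation are illustrative.

```python
import math

def buoyancy_factor(v_s, d, T_s, T_a, g=9.81):
    # One common approximation for the buoyancy factor F (m^4/s^3) from
    # stack-exit velocity v_s (m/s), stack diameter d (m), stack gas
    # temperature T_s (K) and ambient temperature T_a (K).
    return g * v_s * d * d * (T_s - T_a) / (4.0 * T_s)

def briggs_rise(F, u, x, s=None, stable=False):
    # Plume rise (m) for a bent-over, hot buoyant plume.
    # F: buoyancy factor (m^4/s^3); u: wind speed at stack height (m/s);
    # x: downwind distance (m); s: stability parameter (s^-2), stable case only.
    if stable:
        if s is None:
            raise ValueError("stable case needs the stability parameter s")
        return 2.4 * (F / (u * s)) ** (1.0 / 3.0)
    # Downwind distance to final (maximum) plume rise
    xf = 49.0 * F ** 0.625 if F < 55.0 else 119.0 * F ** 0.4
    xr = min(x, xf)  # the rise stops growing beyond xf
    return 1.6 * F ** (1.0 / 3.0) * xr ** (2.0 / 3.0) / u
```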
See also
Atmospheric dispersion models
List of atmospheric dispersion models provides a more comprehensive list of models than listed below. It includes a very brief description of each model.
ADMS
AERMOD
ATSTEP
CALPUFF
CMAQ
DISPERSION21
FLACS
FLEXPART
HYSPLIT
ISC3
NAME
MERCURE
OSPM
Fluidyn-Panache
RIMPUFF
SAFE AIR
PUFF-PLUME
POLYPHEMUS
MUNICH
Organizations
Air Quality Modeling Group
Air Resources Laboratory
Finnish Meteorological Institute
KNMI, Royal Dutch Meteorological Institute
National Environmental Research Institute of Denmark
Swedish Meteorological and Hydrological Institute
TA Luft
UK Atmospheric Dispersion Modelling Liaison Committee
UK Dispersion Modelling Bureau
Desert Research Institute
VITO (institute) Belgium; https://vito.be/en
Swedish Defence Research Agency, FOI
Others
Air pollution dispersion terminology
List of atmospheric dispersion models
Portable Emissions Measurement System (PEMS)
Roadway air dispersion modeling
Useful conversions and formulas for air dispersion modeling
Air pollution forecasting
References
Further reading
Books
Introductory
Advanced
Proceedings
Guidance
External links
EPA's Support Center for Regulatory Atmospheric Modeling
EPA's Air Quality Modeling Group (AQMG)
NOAA's Air Resources Laboratory (ARL)
UK Atmospheric Dispersion Modelling Liaison Committee web site
UK Dispersion Modelling Bureau web site
Atmospheric Chemistry transport model LOTOS-EUROS
The Operational Priority Substances model OPS
HAMS-GPS Dispersion modelling
Wiki on Atmospheric Dispersion Modelling. Addresses the international community of atmospheric dispersion modellers - primarily researchers, but also users of models. Its purpose is to pool experiences gained by dispersion modellers during their work.
Air pollution
Environmental engineering
Industrial emissions control
Process safety | Atmospheric dispersion modeling | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,066 | [
"Industrial emissions control",
"Safety engineering",
"Chemical engineering",
"Atmospheric dispersion modeling",
"Civil engineering",
"Process safety",
"Environmental engineering",
"Chemical process engineering",
"Environmental modelling"
] |
25,654,089 | https://en.wikipedia.org/wiki/Optical%20force | The optical force is a phenomenon whereby beams of light can attract and repel each other. The force acts along an axis which is perpendicular to the light beams. Because of this, parallel beams can be induced to converge or diverge. The optical force works on a microscopic scale, and cannot currently be detected at larger scales. It was discovered by a team of Yale researchers led by electrical engineer Hong Tang.
See also
Optical lift
References
Force
Spacecraft propulsion
Aerospace engineering | Optical force | [
"Physics",
"Mathematics",
"Engineering"
] | 93 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
31,352,623 | https://en.wikipedia.org/wiki/PolyQ%20%28database%29 | PolyQ is a biological database of polyglutamine repeats in disease and non-disease associated proteins.
See also
Trinucleotide repeat disorder
References
External links
http://pxgrid.med.monash.edu.au/polyq
Biological databases
Post-translational modification | PolyQ (database) | [
"Chemistry",
"Biology"
] | 62 | [
"Gene expression",
"Biochemical reactions",
"Bioinformatics",
"Post-translational modification",
"Biological databases"
] |
31,357,305 | https://en.wikipedia.org/wiki/Relativistic%20runaway%20electron%20avalanche | A relativistic runaway electron avalanche (RREA) is an avalanche growth of a population of relativistic electrons driven through a material (typically air) by an electric field. RREA has been hypothesized to be related to lightning initiation, terrestrial gamma-ray flashes, sprite lightning, and spark development. RREA is unique as it can occur at electric fields an order of magnitude lower than the dielectric strength of the material.
Mechanism
When an electric field is applied to a material, free electrons will drift slowly through the material as described by the electron mobility. For low-energy electrons, faster drift velocities result in more interactions with surrounding particles. These interactions create a form of friction that slows the electrons down. Thus, for low-energy cases, the electron velocities tend to stabilize.
At higher energies, above about 100 keV, these collisional events become less common as the mean free path of the electron rises. These higher-energy electrons thus see less frictional force as their velocity increases. In the presence of the same electric field, these electrons will continue accelerating, "running away".
As runaway electrons gain energy from an electric field, they occasionally collide with atoms in the material, knocking off secondary electrons. If the secondary electrons also have high enough energy to run away, they too accelerate to high energies, produce further secondary electrons, etc. As such, the total number of energetic electrons grows exponentially in an avalanche.
The dynamic friction function takes into account only energy losses due to inelastic collisions and has a minimum of ~216 keV/cm at an electron energy of ~1.23 MeV. More useful thresholds, however, must also include the effects of electron momentum loss due to elastic collisions. In that case, an analytical estimate gives a runaway threshold of ~282 keV/cm, which occurs at an electron energy of ~7 MeV. This result approximately agrees with the numbers obtained from Monte Carlo simulations, of ~284 keV/cm and 10 MeV, respectively.
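The runaway condition and the avalanche multiplication described above can be illustrated with a deliberately simplified sketch. Any real calculation uses tabulated dynamic-friction curves and field-dependent avalanche e-folding lengths from the literature or Monte Carlo simulations; the numbers below are placeholders, not measured values.

```python
import math

def runs_away(energy_gain_per_length, friction_per_length):
    # An electron keeps gaining energy ("runs away") while the energy gained
    # from the field per unit path exceeds the dynamic friction loss,
    # both expressed in the same units (e.g. keV per cm of path).
    return energy_gain_per_length > friction_per_length

def avalanche_population(n_seed, path_length, efold_length):
    # Exponential growth of the runaway-electron population along the
    # high-field region (path_length and efold_length in the same units).
    return n_seed * math.exp(path_length / efold_length)

# Illustrative only: 100 seed electrons multiplying over 300 m with a
# hypothetical 50 m e-folding length gives roughly 4e4 runaway electrons.
print(avalanche_population(100, 300.0, 50.0))
```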
Seeding
The RREA mechanism above only describes the growth of the avalanche. An initial energetic electron is needed to start the process. In ambient air, such energetic electrons typically come from cosmic rays. In very strong electric fields, where the electric force exceeds the maximum frictional force experienced by electrons, even low-energy ("cold" or "thermal") electrons can accelerate to relativistic energies, a process dubbed "thermal runaway".
Feedback
RREA avalanches generally move opposite the direction of the electric field. As such, after the avalanches leave the electric field region, frictional forces dominate, the electrons lose energy, and the process stops. There is the possibility, however, that photons or positrons produced by the avalanche will wander back to where the avalanche began and can produce new seeds for a second generation of avalanches. If the electric field region is large enough, the number of second-generation avalanches will exceed the number of first-generation avalanches and the number of avalanches itself grows exponentially. This avalanche of avalanches can produce extremely large populations of energetic electrons. This process eventually leads to the decay of the electric field below the level at which feedback is possible and therefore acts as a limit to the large-scale electric field strength.
Effects of RREA
The large population of energetic electrons produced in RREA will produce a correspondingly large population of energetic photons by bremsstrahlung. These photons are proposed as the source of terrestrial gamma-ray flashes. Large RREA events in thunderstorms may also contribute rare but large radiation doses to commercial airline flights. The American physicist Joseph Dwyer coined the term "dark lightning" for this phenomenon, which is still the subject of research.
References
Lightning
Atmospheric electricity
Electrical phenomena
Electron | Relativistic runaway electron avalanche | [
"Physics",
"Chemistry"
] | 782 | [
"Electron",
"Physical phenomena",
"Molecular physics",
"Atmospheric electricity",
"Electrical phenomena",
"Lightning"
] |
31,360,061 | https://en.wikipedia.org/wiki/Alazopeptin | Alazopeptin is an antibiotic, with moderate anti-trypanosomal and antitumor activity. It was originally isolated from Streptacidiphilus griseoplanus, sourced from soil near Williamsburg, Iowa. It is also isolated from Kitasatospora azatica It is still largely produced via fermentation broths of that organism. Structurally, alazopeptin is a tripeptide and contains 2 molecules of 6-diazo-5-oxo-L-norleucine and one molecule of L-alanine. In 2021 the biosynthetic pathway of alazopeptin was elucidated.
References
Antibiotics
Allylamines
Diazo compounds
Tripeptides | Alazopeptin | [
"Biology"
] | 152 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
31,360,541 | https://en.wikipedia.org/wiki/Suction%20caisson | Suction caissons (also referred to as suction anchors, suction piles or suction buckets) are a form of fixed platform anchor in the form of an open bottomed tube embedded in the sediment and sealed at the top while in use so that lifting forces generate a pressure differential that holds the caisson down. They have a number of advantages over conventional offshore foundations, mainly being quicker to install than deep foundation piles and being easier to remove during decommissioning. Suction caissons are now used extensively worldwide for anchoring large offshore installations, like oil platforms, offshore drillings and accommodation platforms to the seafloor at great depths. In recent years, suction caissons have also seen usage for offshore wind turbines in shallower waters.
Oil and gas recovery at great depth could have been a very difficult task without the suction anchor technology, which was developed and used for the first time in the North Sea 30 years ago.
The use of suction caissons/anchors has now become common practice worldwide. Statistics from 2002 revealed that 485 suction caissons had been installed in more than 50 different localities around the world, in depths to about 2000 m. Suction caissons have been installed in most of the deep water oil producing areas around the world: The North Sea, Gulf of Mexico, offshore West Africa, offshore Brazil, West of Shetland, South China Sea, Adriatic Sea and Timor Sea. No reliable statistics have been produced after 2002, but the use of suction caissons is still rising.
Description
A suction caisson can effectively be described as an inverted bucket that is embedded in the marine sediment. Attachment to the sea bed is achieved either through pushing or by creating a negative pressure inside the caisson skirt by pumping water out of the caisson; both of these techniques have the effect of securing the caisson into the sea bed. The foundation can also be rapidly removed by reversing the installation process, pumping water into the caisson to create an overpressure.
The concept of suction technology was developed for projects where gravity loading is not sufficient for pressing foundation skirts into the ground. The technology was also developed for anchors subject to large tension forces due to waves and stormy weather. The suction caisson technology functions very well in a seabed with soft clays or other low strength sediments. The suction caissons are in many cases easier to install than piles, which must be driven (hammered) into the ground with a pile driver.
Mooring lines are usually attached to the side of the suction caisson at the optimal load attachment point, which must be calculated for each caisson. Once installed, the caisson acts much like a short rigid pile and is capable of resisting both lateral and axial loads. Limit equilibrium methods or 3D finite element analysis are used to calculate the holding capacity.
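As a crude, back-of-the-envelope illustration of the "short rigid pile" behaviour mentioned above, the vertical pullout capacity of a suction caisson in clay is sometimes first estimated from external skin friction plus reverse end bearing. The adhesion factor and bearing factor below are typical textbook-style values, not design values, and the function name is illustrative; real projects rely on limit equilibrium or 3D finite element analysis as noted above, and also account for caisson weight and the mooring load angle.

```python
import math

def vertical_holding_capacity_kN(D_m, L_m, s_u_kPa, alpha=0.65, N_c=9.0):
    # Simplified estimate: external skin friction along the caisson skirt
    # plus reverse end bearing over the caisson base area.
    # D_m: diameter (m), L_m: embedded length (m),
    # s_u_kPa: undrained shear strength of the clay (kPa).
    skin_friction = alpha * s_u_kPa * math.pi * D_m * L_m
    end_bearing = N_c * s_u_kPa * math.pi * D_m**2 / 4.0
    return skin_friction + end_bearing

# Example: a 5 m diameter, 15 m long caisson in clay with s_u = 20 kPa
# gives roughly 3.1 MN of skin friction plus 3.5 MN of end bearing,
# about 6.6 MN in this crude estimate.
print(vertical_holding_capacity_kN(5.0, 15.0, 20.0))
```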
History
Suction caissons were first used as anchors for floating structures in the offshore oil and gas industry, including offshore platforms such as the Draupner E oil rig.
There are great differences between the first small suction caissons that were installed for Shell at the Gorm field in the North Sea in 1981 and the large suction caissons that were installed for the Diana platform in the Gulf of Mexico in 1999. The twelve suction caissons on the Gorm field were intended to secure a simple loading buoy device at a depth of 40 metres, while the installation of suction anchors for the Diana platform was a world record in itself at that time, concerning water depth and size of anchors. The height of the Diana suction caissons is 30 metres, their diameter 6.5 metres, and they were installed at a depth of about 1500 m on soft clay deposits. Since then, suction caissons have been installed at even larger depths, but the Diana installation was a technology breakthrough for the 20th century.
An important development step for the suction caisson technology emerged from cooperation between the former operator in the North Sea, Saga Petroleum AS, and the Norwegian Geotechnical Institute (NGI). Saga Petroleum's oil-producing Snorre A platform was a tension-leg platform of a type that in other parts of the world would have been founded with piles up to 90 metres long. On the Snorre oil field, however, it was difficult to use long piles due to the presence of huge pebbles at 60 m depth under the seabed. Saga Petroleum therefore decided to use suction caissons, which were analysed by NGI. These analyses were verified by extensive model tests. The calculations showed that the platform could be safely secured by suction caissons of only 12 m in length. Snorre A started to produce oil in 1992 and is now operated by the Norwegian oil company Statoil.
Suction buckets were tested with offshore wind turbines at Frederikshavn in 2002, at Horns Rev in 2008 and Borkum Riffgrund in 2014, and are to be used in a third of the foundations at the initial development at Hornsea Wind Farm.
Statoil has gone on to use the technology for wind farms.
They are also planned to be used for some of the wind turbines in the Hornsea Project One wind farm scheduled to be completed in 2020. Similarly, a suction bucket contract has been awarded for the Aberdeen Bay Wind Farm.
Gravity oil platforms
Suction caissons have many similarities with the foundation design principles and solutions used for the big gravity oil platforms that were installed in the North Sea when offshore oil production started there at the beginning of the 1970s. The first gravity oil platform on the Ekofisk oil field had a foundation area as big as a football field, and it was placed on a seabed with very dense sand. The platform was designed to tolerate waves up to 24 m in height.
As the installation of oil platforms continued in the North Sea, in areas with poor ground conditions such as soft clays, they were designed to survive even higher storm waves. These platforms were founded on a system of cylindrical skirts that were penetrated into the ground under combined gravity load and underpressure. The oil platform at the Gullfaks C field was equipped with 22 m long skirts. The Troll A platform is founded in 330 m depth with 30 m long skirts and is the world's biggest gravity platform.
Research and development
The Norwegian Geotechnical Institute (NGI) has been heavily involved with the concept development, design and installation of suction anchors from the start. The project "Application of offshore bucket foundations and anchors in lieu of conventional designs" (1994-1998) was sponsored by 15 international petroleum and industry companies and was one of the most important studies. The project “Skirted foundations and anchors in clay” (1997-1999) was sponsored by 19 international companies organized through the Offshore Technology Research Center (OTRC) in the US, and the project “Skirted offshore foundations and anchors in sand” (1997-2000) was sponsored by 8 international companies. The main conclusions from the projects were presented in the 1999 OTC paper no 10824.
An industry sponsored study on the design and analysis of deepwater anchors in soft clay was completed in 2003, where NGI participated together with OTRC and Centre for Offshore Foundation Systems (COFS) in Australia. The overall objective was to provide the API Geotechnical Workgroup (RG7) and the Deepstar Joint Industry Project VI with background, data and other information needed to develop a widely applicable recommended practice for the design and installation of deepwater anchors.
The Norwegian classification society DNV (Det Norske Veritas), active worldwide in risk analysis and safety evaluation of special constructions, has produced a recommended practice report on the design procedures for suction anchors which is based on close cooperation with NGI. The main information from the project was presented in the 2006 OTC paper no 18038.
In 2002 NGI established the subsidiary NGI Inc in Houston. The subsidiary has since been awarded the detailed geotechnical design for more than 15 suction anchor projects in the Gulf of Mexico, and among these the challenging Mad Dog Spar project involving design of anchors located in old slide deposits below the Sigsbee Escarpment. For further information reference can be made to the 2006 OTC papers no 17949 and 17950.
See also
, a temporary water-excluding structure built in place, sometimes surrounding a working area as does an open caisson.
, for information on geotechnical considerations.
References
Offshore engineering | Suction caisson | [
"Engineering"
] | 1,735 | [
"Construction",
"Offshore engineering"
] |
31,361,681 | https://en.wikipedia.org/wiki/Protein-RNA%20interface%20database | The Protein–RNA Interface Database (PRIDB) is a database of protein–RNA interfaces extracted from the Protein Data Bank.
See also
RNA-binding protein
Protein Data Bank
References
External links
http://bindr.gdcb.iastate.edu/PRIDB.
Biological databases
RNA-binding proteins
RNA | Protein-RNA interface database | [
"Biology"
] | 67 | [
"Bioinformatics",
"Biological databases"
] |
890,697 | https://en.wikipedia.org/wiki/Institute%20for%20Solar%20Physics | The Institute for Solar Physics () is a Swedish research institute. It is managed as an independent institute associated with Stockholm University through its Department of Astronomy. It is also a national research infrastructure under the Swedish Research Council.
The institute was established in 1951 by the Royal Swedish Academy of Sciences as the Research Station for Astrophysics on the island of Capri, Italy. Around 1980 the station moved to La Palma in the Canary Islands. The new station is situated within the Spanish-International Observatory on the Roque de los Muchachos. It soon became obvious that the superior astronomical climate on La Palma called for a first-class solar telescope. The 47.5-cm Swedish Vacuum Solar Telescope (SVST) was erected in 1985.
During the 1990s, the daily operations of the institute gradually moved from La Palma to Stockholm.
The SVST was removed from the tower on 28 August 2000 after almost 15 years of successful operations. It was replaced with the Swedish 1-m Solar Telescope, which has twice as large an aperture diameter.
In 2013, the institute was transferred from the Royal Swedish Academy of Sciences to its current home with Stockholm University.
External links
Institute for Solar Physics
Stockholm University
Royal Swedish Academy of Sciences
Astrophysics research institutes | Institute for Solar Physics | [
"Physics"
] | 246 | [
"Astrophysics research institutes",
"Astrophysics"
] |
890,862 | https://en.wikipedia.org/wiki/L-attributed%20grammar | L-attributed grammars are a special type of attribute grammars. They allow the attributes to be evaluated in one depth-first left-to-right traversal of the abstract syntax tree. As a result, attribute evaluation in L-attributed grammars can be incorporated conveniently in top-down parsing.
A syntax-directed definition is L-attributed if each inherited attribute of on the right side of depends only on
the attributes (either inherited or synthesized) of the symbols occurring to its left in the production, and
the inherited attributes of (but not its synthesized attributes)
Every S-attributed syntax-directed definition is also L-attributed.
Implementing L-attributed definitions in Bottom-Up parsers requires rewriting L-attributed definitions into translation schemes.
Many programming languages are L-attributed. Special types of compilers, the narrow compilers, are based on some form of L-attributed grammar. L-attributed grammars are a strict superset of S-attributed grammars and are used for code synthesis.
Either "inherited attributes" or "synthesized attributes" associated with the occurrence of symbol .
References
Formal languages
Compiler construction | L-attributed grammar | [
"Mathematics"
] | 210 | [
"Formal languages",
"Mathematical logic"
] |
891,255 | https://en.wikipedia.org/wiki/Supermanifold | In physics and mathematics, supermanifolds are generalizations of the manifold concept based on ideas coming from supersymmetry. Several definitions are in use, some of which are described below.
Informal definition
An informal definition is commonly used in physics textbooks and introductory lectures. It defines a supermanifold as a manifold with both bosonic and fermionic coordinates. Locally, it is composed of coordinate charts that make it look like a "flat", "Euclidean" superspace. These local coordinates are often denoted by
where x is the (real-number-valued) spacetime coordinate, and and are Grassmann-valued spatial "directions".
The physical interpretation of the Grassmann-valued coordinates is the subject of debate; explicit experimental searches for supersymmetry have not yielded any positive results. However, the use of Grassmann variables allows for the tremendous simplification of a number of important mathematical results. This includes, among other things, a compact definition of functional integrals, the proper treatment of ghosts in BRST quantization, the cancellation of infinities in quantum field theory, Witten's work on the Atiyah-Singer index theorem, and more recent applications to mirror symmetry.
The use of Grassmann-valued coordinates has spawned the field of supermathematics, wherein large portions of geometry can be generalized to super-equivalents, including much of Riemannian geometry and most of the theory of Lie groups and Lie algebras (such as Lie superalgebras, etc.) However, issues remain, including the proper extension of de Rham cohomology to supermanifolds.
Definition
Three different definitions of supermanifolds are in use. One definition is as a sheaf over a ringed space; this is sometimes called the "algebro-geometric approach". This approach has a mathematical elegance, but can be problematic in various calculations and intuitive understanding. A second approach can be called a "concrete approach", as it is capable of simply and naturally generalizing a broad class of concepts from ordinary mathematics. It requires the use of an infinite number of supersymmetric generators in its definition; however, all but a finite number of these generators carry no content, as the concrete approach requires the use of a coarse topology that renders almost all of them equivalent. Surprisingly, these two definitions, one with a finite number of supersymmetric generators, and one with an infinite number of generators, are equivalent.
A third approach describes a supermanifold as a base topos of a superpoint. This approach remains the topic of active research.
Algebro-geometric: as a sheaf
Although supermanifolds are special cases of noncommutative manifolds, their local structure makes them better suited to study with the tools of standard differential geometry and locally ringed spaces.
A supermanifold M of dimension (p,q) is a topological space M with a sheaf of superalgebras, usually denoted OM or C∞(M), that is locally isomorphic to , where the latter is a Grassmann (Exterior) algebra on q generators.
A supermanifold M of dimension (1,1) is sometimes called a super-Riemann surface.
Historically, this approach is associated with Felix Berezin, Dimitry Leites, and Bertram Kostant.
Concrete: as a smooth manifold
A different definition describes a supermanifold in a fashion that is similar to that of a smooth manifold, except that the model space has been replaced by the model superspace .
To correctly define this, it is necessary to explain what and are. These are given as the even and odd real subspaces of the one-dimensional space of Grassmann numbers, which, by convention, are generated by a countably infinite number of anti-commuting variables: i.e. the one-dimensional space is given by where V is infinite-dimensional. An element z is termed real if ; real elements consisting of only an even number of Grassmann generators form the space of c-numbers, while real elements consisting of only an odd number of Grassmann generators form the space of a-numbers. Note that c-numbers commute, while a-numbers anti-commute. The spaces and are then defined as the p-fold and q-fold Cartesian products of and .
Just as in the case of an ordinary manifold, the supermanifold is then defined as a collection of charts glued together with differentiable transition functions. This definition in terms of charts requires that the transition functions have a smooth structure and a non-vanishing Jacobian. This can only be accomplished if the individual charts use a topology that is considerably coarser than the vector-space topology on the Grassmann algebra. This topology is obtained by projecting down to and then using the natural topology on that. The resulting topology is not Hausdorff, but may be termed "projectively Hausdorff".
That this definition is equivalent to the first one is not at all obvious; however, it is the use of the coarse topology that makes it so, by rendering most of the "points" identical. That is, with the coarse topology is essentially isomorphic to
Properties
Unlike a regular manifold, a supermanifold is not entirely composed of a set of points. Instead, one takes the dual point of view that the structure of a supermanifold M is contained in its sheaf OM of "smooth functions". In the dual point of view, an injective map corresponds to a surjection of sheaves, and a surjective map corresponds to an injection of sheaves.
An alternative approach to the dual point of view is to use the functor of points.
If M is a supermanifold of dimension (p,q), then the underlying space M inherits the structure of a differentiable manifold whose sheaf of smooth functions is , where is the ideal generated by all odd functions. Thus M is called the underlying space, or the body, of M. The quotient map corresponds to an injective map M → M; thus M is a submanifold of M.
Examples
Let M be a manifold. The odd tangent bundle ΠTM is a supermanifold given by the sheaf Ω(M) of differential forms on M.
More generally, let E → M be a vector bundle. Then ΠE is a supermanifold given by the sheaf Γ(ΛE*). In fact, Π is a functor from the category of vector bundles to the category of supermanifolds.
Lie supergroups are examples of supermanifolds.
Batchelor's theorem
Batchelor's theorem states that every supermanifold is noncanonically isomorphic to a supermanifold of the form ΠE. The word "noncanonically" prevents one from concluding that supermanifolds are simply glorified vector bundles; although the functor Π maps surjectively onto the isomorphism classes of supermanifolds, it is not an equivalence of categories. It was published by Marjorie Batchelor in 1979.
The proof of Batchelor's theorem relies in an essential way on the existence of a partition of unity, so it does not hold for complex or real-analytic supermanifolds.
Odd symplectic structures
Odd symplectic form
In many physical and geometric applications, a supermanifold comes equipped with a Grassmann-odd symplectic structure. All natural geometric objects on a supermanifold are graded. In particular, the bundle of two-forms is equipped with a grading. An odd symplectic form ω on a supermanifold is a closed, odd form, inducing a non-degenerate pairing on TM. Such a supermanifold is called a P-manifold. Its graded dimension is necessarily (n,n), because the odd symplectic form induces a pairing of odd and even variables. There is a version of the Darboux theorem for P-manifolds, which allows one
to equip a P-manifold locally with a set of coordinates where the odd symplectic form ω is written as
where are even coordinates, and odd coordinates. (An odd symplectic form should not be confused with a Grassmann-even symplectic form on a supermanifold. In contrast, the Darboux version of an even symplectic form is
where are even coordinates, odd coordinates and are either +1 or −1.)
Antibracket
Given an odd symplectic 2-form ω one may define a Poisson bracket known as the antibracket of any two functions F and G on a supermanifold by
Here and are the right and left derivatives respectively and z are the coordinates of the supermanifold. Equipped with this bracket, the algebra of functions on a supermanifold becomes an antibracket algebra.
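The coordinate expression of the antibracket did not survive in this copy of the text. In Darboux coordinates (x^a, θ_a) for the odd symplectic form, the antibracket is commonly written as follows; the precise sign and ordering conventions vary between authors, so this should be read as an illustrative standard form rather than the exact formula of the original article.

```latex
(F, G) \;=\; \frac{\partial_r F}{\partial x^{a}}\,\frac{\partial_l G}{\partial \theta_{a}}
\;-\; \frac{\partial_r F}{\partial \theta_{a}}\,\frac{\partial_l G}{\partial x^{a}} .
```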
A coordinate transformation that preserves the antibracket is called a P-transformation. If the Berezinian of a P-transformation is equal to one then it is called an SP-transformation.
P and SP-manifolds
Using the Darboux theorem for odd symplectic forms one can show that P-manifolds are constructed from open sets of superspaces glued together by P-transformations. A manifold is said to be an SP-manifold if these transition functions can be chosen to be SP-transformations. Equivalently one may define an SP-manifold as a supermanifold with a nondegenerate odd 2-form ω and a density function ρ such that on each coordinate patch there exist Darboux coordinates in which ρ is identically equal to one.
Laplacian
One may define a Laplacian operator Δ on an SP-manifold as the operator which takes a function H to one half of the divergence of the corresponding Hamiltonian vector field. Explicitly one defines
In Darboux coordinates this definition reduces to
where xa and θa are even and odd coordinates such that
The Laplacian is odd and nilpotent.
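The explicit formulas above are missing from this copy. In the same Darboux coordinates, the odd Laplacian is usually written, up to convention-dependent normalization, as shown below; this is the standard form found in the literature and is stated here as an assumption about what the stripped formulas contained.

```latex
\Delta H \;=\; \frac{\partial_r}{\partial x^{a}}\,\frac{\partial_l}{\partial \theta_{a}}\, H ,
\qquad \Delta^{2} = 0 .
```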
One may define the cohomology of functions H with respect to the Laplacian. In Geometry of Batalin-Vilkovisky quantization, Albert Schwarz has proven that the integral of a function H over a Lagrangian submanifold L depends only on the cohomology class of H and on the homology class of the body of L in the body of the ambient supermanifold.
SUSY
A pre-SUSY-structure on a supermanifold of dimension (n,m) is an odd m-dimensional distribution . With such a distribution one associates its Frobenius tensor (since P is odd, the skew-symmetric Frobenius tensor is a symmetric operation). If this tensor is non-degenerate, e.g. lies in an open orbit of , M is called a SUSY-manifold. A SUSY-structure in dimension (1, k) is the same as an odd contact structure.
See also
Superspace
Supersymmetry
Supergeometry
Graded manifold
Batalin–Vilkovisky formalism
References
Joseph Bernstein, "Lectures on Supersymmetry (notes by Dennis Gaitsgory)", Quantum Field Theory program at IAS: Fall Term
A. Schwarz, "Geometry of Batalin-Vilkovisky quantization", ArXiv hep-th/9205088
C. Bartocci, U. Bruzzo, D. Hernandez Ruiperez, The Geometry of Supermanifolds (Kluwer, 1991)
L. Mangiarotti, G. Sardanashvily, Connections in Classical and Quantum Field Theory (World Scientific, 2000) ()
External links
Super manifolds: an incomplete survey at the Manifold Atlas.
Supersymmetry
Generalized manifolds
Structures on manifolds
Mathematical physics | Supermanifold | [
"Physics",
"Mathematics"
] | 2,434 | [
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Mathematical physics",
"Supersymmetry",
"Symmetry"
] |
891,379 | https://en.wikipedia.org/wiki/Disk%20enclosure | A disk enclosure is a specialized casing designed to hold and power hard disk drives or solid state drives while providing a mechanism to allow them to communicate to one or more separate computers.
Drive enclosures provide power to the drives therein and convert the data sent across their native data bus into a format usable by an external connection on the computer to which it is connected. In some cases, the conversion is as trivial as carrying a signal between different connector types. In others, it is complicated enough to require a separate embedded system to retransmit data over connector and signal of a different standard.
Factory-assembled external hard disk drives, external DVD-ROM drives, and others consist of a storage device in a disk enclosure.
Benefits
Key benefits to using external disk enclosures include:
Adding additional storage space and media types to small form factor and laptop computers, as well as sealed embedded systems such as digital video recorders and video game consoles.
Adding RAID capabilities to computers that lack RAID controllers or adequate space for additional drives.
Adding more drives to any given server or workstation than their chassis can hold.
Transferring data between non-networked computers, humorously known as sneakernet.
Adding an easily removable backup source with a separate power supply from the connected computer.
Using a network-attached storage-capable enclosure over a network to share data or provide a cheap off-site backup solution.
Preventing the heat from a disk drive from increasing the heat inside an operating computer case.
Simple and cheap approach to hot swapping.
Recovering the data from a damaged computer's hard drive, particularly when it does not share the same interface with the computer used to perform the recovery.
Lower the cost of removable storage by reusing hardware designed for internal use.
In some instances, provides a hardened chassis to prevent wear and tear.
Consumer enclosures
In the consumer market, commonly used configurations of drive enclosures utilize magnetic hard drives or optical disc drives inside USB, FireWire, or Serial ATA enclosures. External 3.5-in floppy drives are also fairly common, following a trend to not integrate floppy drives into compact and laptop computers. Pre-built external drives are available through all major manufacturers of hard drives, as well as several third parties.
These may also be referred to as a caddy – a sheath, typically plastic or metallic, within which a hard disk drive can be placed and connected with the same type of adapters as a conventional motherboard and power supply would use. The exterior of the caddy typically has two female sockets, used for data transfer and power.
Variants of caddy:
some larger caddies can support several devices at once and can feature either separate outputs to connect each device to a different computer, or a single output to connect both over the same data cable
some caddies do not require an external power supply, and instead obtain power from the device to which they are connected
some caddies have integrated fans with which to keep the drives within at a cool temperature
caddies for all major standards exist, supporting for example ATA, SCSI and SATA drives and USB, SCSI and FireWire outputs
Advantages:
relatively high transfer speed; typically faster than other common portable media such as CDs, DVDs and USB flash drives, slower than drives connected using solely ATA, SCSI and SATA connectors
storage; typically larger than CDs, DVDs and USB flash drives
price-to-storage ratio; typically better than CDs, DVDs and USB flash drives
Disadvantages:
power; most variants require an external power supply, unlike CDs, DVDs and USB flash drives
size; typically larger than CDs, DVDs and USB flash drives
Form factors
Multiple drives: RAID-enabled enclosures and iSCSI enclosures commonly hold multiple drives. High-end and server-oriented chassis are often built around 3.5-in drives in hot-swappable drive caddies.
"5.25-inch" drive: (5.75 in × 8 in × 1.63 in = 146.1 mm × 203 mm × 41.4 mm)Most desktop models of drives for optical 120-mm discs (DVD-ROM or CD-ROM drives, CD or DVD burners), are designed to be mounted into a so-called "5.25-inch slot", which obtained its nickname because this slot size was initially used by drives for floppy disks in the IBM PC AT. (The original "5.25-inch slot" in the IBM PC was with 3.25 in (82.6 mm) twice as high as the one commonly used today; in fact, the PC's drive size was called "5.25-inch full-height", and the size used in the PC AT and commonly used today is "5.25-inch half-height".)
"3.5-inch" drive: (4 in × 5.75 in × 1 in = 101.6 mm × 146.05 mm × 25.4 mm) This smaller, disk-drive form factor was introduced with the Apple Macintosh series in 1984, and later adopted throughout the industry beginning widely with the IBM PS/2 series in 1987, which included drives of this size for 90-mm ("3.5-inch") floppy disks. This form factor is today used by most desktop hard drives. They usually have 10 mounting holes with American 6-32 UNC 2B threads: three on each side and four on the bottom.
"2.5-inch" drive: (2.75 in × 3.945 in × 0.374 in = 69.85 mm × 100.2 mm × 9.5 mm)This even smaller, form factor is widely used today in notebook computers and similar small-footprint devices. One commonplace feature for these drives is radically lower power consumption than is found in larger drives. This enables enclosure vendors to power the devices directly from the host device's USB or other external bus, in most cases.
"1.8-inch" drive: Found in extremely compact devices, such as certain portable media players and smaller notebooks, these devices are not standardized like their 2.5 inch cousins.
A range of other form factors has emerged for mobile devices. While laptop hard drives are today generally of the 9.5 mm high variant of the "2.5-inch" drive form factor, older laptops and notebooks had hard drives that varied in height, which can make it difficult to find a well-fitting chassis. Laptop optical drives require "slim" 5.25-in enclosures, since they have approximately half the thickness of their desktop counterparts, and most models use a special 50-pin connector that differs from the 40-pin connectors used on desktop ATA drives.
While they are less common now than they once were, it is also possible to purchase a drive chassis and mount that will convert a 3.5-inch hard drive into a removable hard disk that can be plugged into and removed from a mounting bracket permanently installed in a desktop PC case. The mounting bracket carries the data bus and power connections over a proprietary connector, and converts back into the drive's native data bus format and power connections inside the drive's chassis.
Enterprise enclosures
In enterprise storage the term refers to a larger physical chassis. The term can be used both in reference to network-attached storage (NAS) and components of a storage area network (SAN) or be used to describe a chassis directly attached to one or more servers over an external bus. Like their conventional server brethren, these devices may include a backplane, temperature sensors, cooling systems, enclosure management devices, and redundant power supplies.
Connections
Native drive interfaces
SCSI, SAS, Fibre Channel, eSATAp, and eSATA interfaces can be used to directly connect the external hard drive to an internal host adapter, without the need for any intervening controller. External variants of these native drive protocols are extremely similar to the internal protocols, but are often expanded to carry power (such as eSATAp and the SCSI Single Connector Attachment) and to use a more durable physical connector. A host adapter with external port may be necessary to connect a drive, if a computer lacks an available external port.
Direct attach serial interfaces
USB or FireWire connections are typically used to attach consumer class external hard drives to a computer. Unlike SCSI, eSATA, or SAS these require circuitry to convert the hard disk's native signal to the appropriate protocol. Parallel ATA and internal Serial ATA hard disks are frequently connected to such chassis because nearly all computers on the market today have USB or FireWire ports, and these chassis are inexpensive and easy to find.
Network protocols
iSCSI, NFS, or CIFS are all commonly used protocols that allow an external hard drive to use a network to send data to a computer system. This type of external hard drive is also known as network-attached storage or NAS. Often, such drives are embedded computers running operating systems such as Linux or VxWorks that use their NFS daemons and SAMBA to provide a networked file system. Some newer NAS-capable enclosures combine network connectivity with direct connection (e.g., USB) and even RAID features.
Hard drive shucking
"Shucking" refers to the process of purchasing an external hard disk drive and removing the drive from its enclosure, in order for it to be used as an internal disk drive. This is performed because external drives are often cheaper than internal drives of the same capacity and model, and that external drives designed for continuous usage often contain hard drives designed for increased reliability.
Following the hard disk drive shortages caused by the 2011 Thailand floods, data storage company Backblaze reduced its cost of acquiring hard drives by purchasing external hard drives and shucking them. According to Backblaze Chief Executive Gleb Budman, the company purchased 1,838 external drives during this period. Describing the process as "drive farming", the company noted that it was much cheaper for them to purchase 3 TB external drives and removing them from their cases manually, than it is to purchase internal drives.
See also
Computer bus
Computer case
External storage
Hard drive
Network-attached storage
Network Direct Attached Storage
SCSI Attached Fault-Tolerant Enclosure
SCSI Enclosure Services
SGPIO - Serial General Purpose Input/Output
Solid-state drive
USB Mass Storage Device
USB flash drive
References
Computer storage devices
Enclosure | Disk enclosure | [
"Technology"
] | 2,108 | [
"Computer storage devices",
"Recording devices"
] |
892,803 | https://en.wikipedia.org/wiki/Quantum%20error%20correction | Quantum error correction (QEC) is a set of techniques used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum state preparation, and faulty measurements. Effective quantum error correction would allow quantum computers with low qubit fidelity to execute algorithms of higher complexity or greater circuit depth.
Classical error correction often employs redundancy. The simplest albeit inefficient approach is the repetition code. A repetition code stores the desired (logical) information as multiple copies, and—if these copies are later found to disagree due to errors introduced to the system—determines the most likely value for the original data by majority vote. For instance, suppose we copy a bit in the one (on) state three times. Suppose further that noise in the system introduces an error that corrupts the three-bit state so that one of the copied bits becomes zero (off) but the other two remain equal to one. Assuming that errors are independent and occur with some sufficiently low probability p, it is most likely that the error is a single-bit error and the intended message is three bits in the one state. It is possible that a double-bit error occurs and the transmitted message is equal to three zeros, but this outcome is less likely than the above outcome. In this example, the logical information is a single bit in the one state and the physical information are the three duplicate bits. Creating a physical state that represents the logical state is called encoding and determining which logical state is encoded in the physical state is called decoding. Similar to classical error correction, QEC codes do not always correctly decode logical qubits, but instead reduce the effect of noise on the logical state.
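The following short Python sketch, written for this text rather than taken from any reference, simulates the three-bit repetition code just described: a logical bit is copied three times, each copy is flipped independently with probability p, and decoding is by majority vote. The measured logical error rate should be close to the analytic value 3p^2 - 2p^3.

```python
import random

def encode(bit):
    # store the logical bit as three identical physical copies
    return [bit, bit, bit]

def noisy_channel(bits, p):
    # flip each copy independently with probability p
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    # majority vote over the three copies
    return 1 if sum(bits) >= 2 else 0

p = 0.1
trials = 100_000
errors = sum(decode(noisy_channel(encode(1), p)) != 1 for _ in range(trials))
print(errors / trials)   # close to 3*p**2 - 2*p**3 = 0.028 for p = 0.1
```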
Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the (logical) information of one logical qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits.
In classical error correction, syndrome decoding is used to diagnose which error was the likely source of corruption on an encoded state. An error can then be reversed by applying a corrective operation based on the syndrome. Quantum error correction also employs syndrome measurements. It performs a multi-qubit measurement that does not disturb the quantum information in the encoded state but retrieves information about the error. Depending on the QEC code used, syndrome measurement can determine the occurrence, location and type of errors. In most QEC codes, the type of error is either a bit flip, or a sign (of the phase) flip, or both (corresponding to the Pauli matrices X, Z, and Y). The measurement of the syndrome has the projective effect of a quantum measurement, so even if the error due to the noise was arbitrary, it can be expressed as a combination of basis operations called the error basis (which is given by the Pauli matrices and the identity). To correct the error, the Pauli operator corresponding to the type of error is used on the corrupted qubit to revert the effect of the error.
The syndrome measurement provides information about the error that has happened, but not about the information that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer, which would prevent it from being used to convey quantum information.
Bit-flip code
The repetition code works in a classical channel, because classical bits are easy to measure and to repeat. This approach does not work for a quantum channel in which, due to the no-cloning theorem, it is not possible to repeat a single qubit three times. To overcome this, a different method has to be used, such as the three-qubit bit-flip code first proposed by Asher Peres in 1985. This technique uses entanglement and syndrome measurements and is comparable in performance with the repetition code.
Consider the situation in which we want to transmit the state of a single qubit through a noisy channel . Let us moreover assume that this channel either flips the state of the qubit, with probability , or leaves it unchanged. The action of on a general input can therefore be written as .
Let be the quantum state to be transmitted. With no error-correcting protocol in place, the transmitted state will be correctly transmitted with probability . We can however improve on this number by encoding the state into a greater number of qubits, in such a way that errors in the corresponding logical qubits can be detected and corrected. In the case of the simple three-qubit repetition code, the encoding consists in the mappings and . The input state is encoded into the state . This mapping can be realized for example using two CNOT gates, entangling the system with two ancillary qubits initialized in the state . The encoded state is what is now passed through the noisy channel.
The channel acts on by flipping some subset (possibly empty) of its qubits. No qubit is flipped with probability , a single qubit is flipped with probability , two qubits are flipped with probability , and all three qubits are flipped with probability . Note that a further assumption about the channel is made here: we assume that acts equally and independently on each of the three qubits in which the state is now encoded. The problem is now how to detect and correct such errors, while not corrupting the transmitted state.
Let us assume for simplicity that is small enough that the probability of more than a single qubit being flipped is negligible. One can then detect whether a qubit was flipped, without also querying for the values being transmitted, by asking whether one of the qubits differs from the others. This amounts to performing a measurement with four different outcomes, corresponding to the following four projective measurements: This reveals which qubits are different from the others, without at the same time giving information about the state of the qubits themselves. If the outcome corresponding to is obtained, no correction is applied, while if the outcome corresponding to is observed, then the Pauli X gate is applied to the -th qubit. Formally, this correcting procedure corresponds to the application of the following map to the output of the channel:
Note that, while this procedure perfectly corrects the output when zero or one flips are introduced by the channel, if more than one qubit is flipped then the output is not properly corrected. For example, if the first and second qubits are flipped, then the syndrome measurement gives the outcome , and the third qubit is flipped, instead of the first two. To assess the performance of this error-correcting scheme for a general input we can study the fidelity between the input and the output . Since the output state is correct when no more than one qubit is flipped, which happens with probability , we can write it as , where the dots denote components of resulting from errors not properly corrected by the protocol. It follows that This fidelity is to be compared with the corresponding fidelity obtained when no error-correcting protocol is used, which was shown before to equal . A little algebra then shows that the fidelity after error correction is greater than the one without for . Note that this is consistent with the working assumption that was made while deriving the protocol (of being small enough).
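Because the channel only introduces Pauli-X errors, the whole encode, syndrome-measure and correct cycle can be tracked with classical bits. The Python sketch below is an illustrative Monte Carlo written for this text (not from the article): it estimates the logical error rate of the three-qubit bit-flip code, which can be compared with the uncoded error rate p and with the analytic value 3p^2 - 2p^3 discussed above.

```python
import random

def logical_error_rate(p, trials=200_000):
    failures = 0
    for _ in range(trials):
        flips = [random.random() < p for _ in range(3)]   # X-error pattern
        s1 = flips[0] ^ flips[1]                          # parity of qubits 1,2
        s2 = flips[1] ^ flips[2]                          # parity of qubits 2,3
        correction = [False, False, False]
        if s1 and not s2:
            correction[0] = True      # syndrome points at qubit 1
        elif s1 and s2:
            correction[1] = True      # syndrome points at qubit 2
        elif s2 and not s1:
            correction[2] = True      # syndrome points at qubit 3
        residual = [f ^ c for f, c in zip(flips, correction)]
        if any(residual):             # encoded state not restored
            failures += 1
    return failures / trials

p = 0.1
print(logical_error_rate(p))   # roughly 3*p**2 - 2*p**3 = 0.028
print(p)                       # unencoded error rate, for comparison
```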
Sign-flip code
The bit flip is the only kind of error in classical computers. In quantum computers, however, another kind of error is possible: the sign flip. Through transmission in a channel, the relative sign between and can become inverted. For instance, a qubit in the state may have its sign flip to
The original state of the qubit
will be changed into the state
In the Hadamard basis, bit flips become sign flips and sign flips become bit flips. Let be a quantum channel that can cause at most one phase flip. Then the bit-flip code from above can recover by transforming into the Hadamard basis before and after transmission through .
Shor code
The error channel may induce either a bit flip, a sign flip (i.e., a phase flip), or both. It is possible to correct for both types of errors on a logical qubit using a well-designed QEC code. One example of a code that does this is the Shor code, published in 1995. Since these two types of errors are the only types of errors that can result after a projective measurement, a Shor code corrects arbitrary single-qubit errors.
Let be a quantum channel that can arbitrarily corrupt a single qubit. The 1st, 4th and 7th qubits are for the sign flip code, while the three groups of qubits (1,2,3), (4,5,6), and (7,8,9) are designed for the bit flip code. With the Shor code, a qubit state will be transformed into the product of 9 qubits , where
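The explicit codewords did not survive in this copy of the text; in the usual convention, the Shor encoding of the computational basis states reads as follows.

```latex
|0\rangle \mapsto |0_L\rangle =
\frac{(|000\rangle + |111\rangle)\,(|000\rangle + |111\rangle)\,(|000\rangle + |111\rangle)}{2\sqrt{2}},
\qquad
|1\rangle \mapsto |1_L\rangle =
\frac{(|000\rangle - |111\rangle)\,(|000\rangle - |111\rangle)\,(|000\rangle - |111\rangle)}{2\sqrt{2}} .
```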
If a bit flip error happens to a qubit, the syndrome analysis will be performed on each block of qubits (1,2,3), (4,5,6), and (7,8,9) to detect and correct at most one bit flip error in each block.
If the three bit flip group (1,2,3), (4,5,6), and (7,8,9) are considered as three inputs, then the Shor code circuit can be reduced as a sign flip code. This means that the Shor code can also repair a sign flip error for a single qubit.
The Shor code also can correct for any arbitrary errors (both bit flip and sign flip) to a single qubit. If an error is modeled by a unitary transform U, which will act on a qubit , then can be described in the form
where ,,, and are complex constants, I is the identity, and the Pauli matrices are given by
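The matrices themselves are missing from this copy; in the standard convention, the identity and the Pauli matrices are

```latex
I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} .
```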
If U is equal to I, then no error occurs. If , a bit flip error occurs. If , a sign flip error occurs. If then both a bit flip error and a sign flip error occur. In other words, the Shor code can correct any combination of bit or phase errors on a single qubit.
More generally, the error operator U does not need to be unitary, but can be a Kraus operator from a quantum operation representing a system interacting with its environment.
Bosonic codes
Several proposals have been made for storing error-correctable quantum information in bosonic modes. Unlike a two-level system, a quantum harmonic oscillator has infinitely many energy levels in a single physical system. Codes for these systems include cat, Gottesman-Kitaev-Preskill (GKP), and binomial codes. One insight offered by these codes is to take advantage of the redundancy within a single system, rather than to duplicate many two-level qubits.
Binomial code
Written in the Fock basis, the simplest binomial encoding is
where the subscript L indicates a "logically encoded" state. Then if the dominant error mechanism of the system is the stochastic application of the bosonic lowering operator the corresponding error states are and respectively. Since the codewords involve only even photon number, and the error states involve only odd photon number, errors can be detected by measuring the photon number parity of the system. Measuring the odd parity will allow correction by application of an appropriate unitary operation without knowledge of the specific logical state of the qubit. However, the particular binomial code above is not robust to two-photon loss.
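The codewords themselves are missing in this copy. The lowest-order binomial code is usually written in the Fock basis as shown below; this particular form is the standard example quoted in the literature, and whether it is exactly the one intended in the original text is an assumption.

```latex
|0_L\rangle = \frac{|0\rangle + |4\rangle}{\sqrt{2}}, \qquad |1_L\rangle = |2\rangle ,
```

so that a single photon loss maps these codewords to the odd-parity error states |3⟩ and |1⟩ respectively, consistent with the parity-based error detection described above.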
Cat code
Schrödinger cat states, superpositions of coherent states, can also be used as logical states for error correction codes. Cat code, realized by Ofek et al. in 2016, defined two sets of logical states: and , where each of the states is a superposition of coherent state as follows
Those two sets of states differ from the photon number parity, as states denoted with only occupy even photon number states and states with indicate they have odd parity. Similar to the binomial code, if the dominant error mechanism of the system is the stochastic application of the bosonic lowering operator , the error takes the logical states from the even parity subspace to the odd one, and vice versa. Single-photon-loss errors can therefore be detected by measuring the photon number parity operator using a dispersively coupled ancillary qubit.
Still, cat qubits are not protected against two-photon loss, dephasing noise, photon-gain error, etc.
General codes
In general, a quantum code for a quantum channel is a subspace , where is the state Hilbert space, such that there exists another quantum channel with
where is the orthogonal projection onto . Here is known as the correction operation.
A non-degenerate code is one for which different elements of the set of correctable errors produce linearly independent results when applied to elements of the code. If distinct of the set of correctable errors produce orthogonal results, the code is considered pure.
Models
Over time, researchers have come up with several codes:
Peter Shor's 9-qubit-code, a.k.a. the Shor code, encodes 1 logical qubit in 9 physical qubits and can correct for arbitrary errors in a single qubit.
Andrew Steane found a code that does the same with 7 instead of 9 qubits, see Steane code.
Raymond Laflamme and collaborators found a class of 5-qubit codes that do the same, which also have the property of being fault-tolerant. A 5-qubit code is the smallest possible code that protects a single logical qubit against single-qubit errors.
A generalisation of the technique used by Steane, to develop the 7-qubit code from the classical [7, 4] Hamming code, led to the construction of an important class of codes called the CSS codes, named for their inventors: Robert Calderbank, Peter Shor and Andrew Steane. According to the quantum Hamming bound, encoding a single logical qubit and providing for arbitrary error correction in a single qubit requires a minimum of 5 physical qubits.
A more general class of codes (encompassing the former) are the stabilizer codes discovered by Daniel Gottesman, and by Robert Calderbank, Eric Rains, Peter Shor, and N. J. A. Sloane; these are also called additive codes.
Two dimensional Bacon–Shor codes are a family of codes parameterized by integers m and n. There are nm qubits arranged in a square lattice.
Alexei Kitaev's topological quantum codes, introduced in 1997 as the toric code, and the more general idea of a topological quantum computer are the basis for various code types.
Todd Brun, Igor Devetak, and Min-Hsiu Hsieh also constructed the entanglement-assisted stabilizer formalism as an extension of the standard stabilizer formalism that incorporates quantum entanglement shared between a sender and a receiver.
The ideas of stabilizer codes, CSS codes, and topological codes can be expanded into the 2D planar surface code, of which various types exist. As of June 2024, the 2D planar surface code is generally considered the most well-studied type of quantum error correction, and one of the leading contenders for practical use in quantum computing.
That these codes indeed allow for quantum computations of arbitrary length is the content of the quantum threshold theorem, found by Michael Ben-Or and Dorit Aharonov, which asserts that one can correct for all errors by concatenating quantum codes such as the CSS codes—i.e. re-encoding each logical qubit by the same code again, and so on, on logarithmically many levels—provided that the error rate of individual quantum gates is below a certain threshold; otherwise, the attempts to measure the syndrome and correct the errors would introduce more new errors than they correct for.
As of late 2004, estimates for this threshold indicate that it could be as high as 1–3%, provided that there are sufficiently many qubits available.
Experimental realization
There have been several experimental realizations of CSS-based codes. The first demonstration was with nuclear magnetic resonance qubits. Subsequently, demonstrations have been made with linear optics, trapped ions, and superconducting (transmon) qubits.
In 2016, for the first time, the lifetime of a quantum bit was prolonged by employing a QEC code. The error-correction demonstration was performed on Schrödinger-cat states encoded in a superconducting resonator, and employed a quantum controller capable of performing real-time feedback operations, including read-out of the quantum information, its analysis, and the correction of its detected errors. The work demonstrated how the quantum-error-corrected system reaches the break-even point at which the lifetime of a logical qubit exceeds the lifetime of the underlying constituents of the system (the physical qubits).
Other error correcting codes have also been implemented, such as one aimed at correcting for photon loss, the dominant error source in photonic qubit schemes.
In 2021, an entangling gate between two logical qubits encoded in topological quantum error-correction codes was first realized using 10 ions in a trapped-ion quantum computer. 2021 also saw the first experimental demonstration of a fault-tolerant Bacon-Shor code on a single logical qubit of a trapped-ion system, i.e. a demonstration in which the error correction suppressed more errors than were introduced by the overhead required to implement it, as well as a fault-tolerant Steane code.
In 2022, researchers at the University of Innsbruck have demonstrated a fault-tolerant universal set of gates on two logical qubits in a trapped-ion quantum computer. They have performed a logical two-qubit controlled-NOT gate between two instances of the seven-qubit colour code, and fault-tolerantly prepared a logical magic state.
In February 2023, researchers at Google claimed to have decreased quantum errors by increasing the qubit number in experiments; they used a fault-tolerant surface code and measured error rates of 3.028% and 2.914% for a distance-3 qubit array and a distance-5 qubit array, respectively.
In April 2024, researchers at Microsoft claimed to have successfully tested a quantum error correction code that allowed them to achieve an error rate with logical qubits that is 800 times better than the underlying physical error rate.
This qubit virtualization system was used to create 4 logical qubits with 30 of the 32 qubits on Quantinuum's trapped-ion hardware. The system uses an active syndrome extraction technique to diagnose errors and correct them while calculations are underway without destroying the logical qubits.
In January 2025, researchers at UNSW Sydney developed an error correction method using antimony-based materials, including antimonides, leveraging high-dimensional quantum states (qudits) with up to eight states. By encoding quantum information in the nuclear spin of an antimony atom embedded in silicon and employing advanced pulse control techniques, they demonstrated enhanced error resilience.
Quantum error correction without encoding and parity checks
In 2022, research at University of Engineering and Technology Lahore demonstrated error cancellation by inserting single-qubit Z-axis rotation gates into strategically chosen locations of the superconductor quantum circuits. The scheme has been shown to effectively correct errors that would otherwise rapidly add up under constructive interference of coherent noise. This is a circuit-level calibration scheme that traces deviations (e.g. sharp dips or notches) in the decoherence curve to detect and localize the coherent error, but does not require encoding or parity measurements. However, further investigation is needed to establish the effectiveness of this method for the incoherent noise.
See also
Error detection and correction
Soft error
References
Further reading
External links
Quantum computing
Fault-tolerant computer systems | Quantum error correction | [
"Technology",
"Engineering"
] | 4,214 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
893,337 | https://en.wikipedia.org/wiki/Local%20hidden-variable%20theory | In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying, but inaccessible variables, with the additional requirement that distant events be statistically independent.
The mathematical implications of a local hidden-variable theory with regards to quantum entanglement were explored by physicist John Stewart Bell, who in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts, a result since confirmed by a range of detailed Bell test experiments.
Models
Single qubit
A collection of related theorems, beginning with Bell's proof in 1964, show that quantum mechanics is incompatible with local hidden variables. However, as Bell pointed out, restricted sets of quantum phenomena can be imitated using local hidden-variable models. Bell provided a local hidden-variable model for quantum measurements upon a spin-1/2 particle, or in the terminology of quantum information theory, a single qubit. Bell's model was later simplified by N. David Mermin, and a closely related model was presented by Simon B. Kochen and Ernst Specker. The existence of these models is related to the fact that Gleason's theorem does not apply to the case of a single qubit.
Bipartite quantum states
Bell also pointed out that up until then, discussions of quantum entanglement focused on cases where the results of measurements upon two particles were either perfectly correlated or perfectly anti-correlated. These special cases can also be explained using local hidden variables.
For separable states of two particles, there is a simple hidden-variable model for any measurements on the two parties. Surprisingly, there are also entangled states for which all von Neumann measurements can be described by a hidden-variable model. Such states are entangled, but do not violate any Bell inequality. The so-called Werner states are a single-parameter family of states that are invariant under any transformation of the type where is a unitary matrix. For two qubits, they are noisy singlets given as
where the singlet is defined as .
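The formulas did not survive extraction; in the usual parametrization, the two-qubit Werner state (a noisy singlet) and the singlet state are written as follows.

```latex
\rho_W = p\,|\psi^{-}\rangle\langle\psi^{-}| + (1 - p)\,\frac{\mathbb{1}}{4}, \qquad
|\psi^{-}\rangle = \frac{|01\rangle - |10\rangle}{\sqrt{2}} .
```

In this parametrization, the thresholds usually quoted are a local hidden-variable model for all projective measurements when p ≤ 1/2 and entanglement when p > 1/3; these values are standard in the literature but are stated here as an assumption about what the stripped formulas contained.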
Reinhard F. Werner showed that such states allow for a hidden-variable model for , while they are entangled if . The bound for hidden-variable models could be improved until . Hidden-variable models have been constructed for Werner states even if positive operator-valued measurements (POVM) are allowed, not only von Neumann measurements. Hidden variable models were also constructed to noisy maximally entangled states, and even extended to arbitrary pure states mixed with white noise. Beside bipartite systems, there are also results for the multipartite case. A hidden-variable model for any von Neumann measurements at the parties has been presented for a three-qubit quantum state.
Time-dependent variables
Previously, some new hypotheses were conjectured concerning the role of time in constructing hidden-variables theory. One approach was suggested by K. Hess and W. Philipp and relies upon possible consequences of time dependencies of hidden variables; this hypothesis has been criticized by Richard D. Gill, Anton Zeilinger and Marek Żukowski, as well as D. M. Appleby.
See also
EPR paradox
Bohr–Einstein debates
References
Quantum measurement
Hidden variable theory | Local hidden-variable theory | [
"Physics"
] | 680 | [
"Quantum measurement",
"Quantum mechanics"
] |
894,139 | https://en.wikipedia.org/wiki/Causal%20contact | Two entities are in causal contact if there may be an event that has affected both in a causal way. Every object of mass in space, for instance, exerts a field force on all other objects of mass, according to Newton's law of universal gravitation. Because this force exerted by one object affects the motion of the other, it can be said that these two objects are in causal contact.
The only objects not in causal contact are those for which there is no event in the history of the universe that could have sent a beam of light to both. For example, if the universe were not expanding and had existed for 10 billion years, anything more than 20 billion light-years away from the earth would not be in causal contact with it. Anything less than 20 billion light-years away would be, because an event occurring 10 billion years in the past that was 10 billion light-years away from both the earth and the object in question could have affected both. Depending on the expansion history of the universe, there may be a time such that there will be no particle horizons: all matter in the universe will be in causal contact.
A good illustration of this principle is the light cone, which is constructed as follows. Taking as event a flash of light (light pulse) at time , all events that can be reached by this pulse from form the future light cone of , whilst those events that can send a light pulse to form the past light cone of .
Given an event , the light cone classifies all events in spacetime into 5 distinct categories:
Events on the future light cone of .
Events on the past light cone of .
Events inside the future light cone of are those affected by the beam of light emitted at .
Events inside the past light cone of are those that can emit a beam of light and affect what is happening at .
All other events are in the (absolute) elsewhere of and are those that will never affect and can never be affected by .
See the causal structure of Minkowski spacetime for a more detailed discussion.
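As a small illustration of the classification above, the following Python sketch (an example written for this text, with units chosen so that c = 1) computes the Minkowski interval between two events and reports whether their separation is timelike, lightlike or spacelike; only the first two cases allow causal contact.

```python
def classify(event_a, event_b):
    """Each event is a tuple (t, x, y, z) in units where c = 1."""
    dt = event_b[0] - event_a[0]
    dx2 = sum((b - a) ** 2 for a, b in zip(event_a[1:], event_b[1:]))
    interval = dt ** 2 - dx2          # signature (+, -, -, -)
    if interval > 0:
        return "timelike: inside the light cone, causal contact possible"
    if interval == 0:
        return "lightlike: on the light cone itself"
    return "spacelike: in the elsewhere, no causal contact"

print(classify((0, 0, 0, 0), (10, 3, 0, 0)))   # timelike
print(classify((0, 0, 0, 0), (5, 5, 0, 0)))    # lightlike
print(classify((0, 0, 0, 0), (1, 4, 0, 0)))    # spacelike
```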
References
Mechanics
Theoretical physics | Causal contact | [
"Physics",
"Engineering"
] | 418 | [
"Theoretical physics",
"Mechanics",
"Mechanical engineering",
"Theoretical physics stubs"
] |
894,198 | https://en.wikipedia.org/wiki/Curtain%20wall%20%28architecture%29 | A curtain wall is an exterior covering of a building in which the outer walls are non-structural, instead serving to protect the interior of the building from the elements. Because the curtain wall façade carries no structural load beyond its own dead load weight, it can be made of lightweight materials. The wall transfers lateral wind loads upon it to the main building structure through connections at floors or columns of the building.
Curtain walls may be designed as "systems" integrating frame, wall panel, and weatherproofing materials. Steel frames have largely given way to aluminum extrusions. Glass is typically used for infill because it can reduce construction costs, provide an architecturally pleasing look, and allow natural light to penetrate deeper within the building. However, glass also makes the effects of light on visual comfort and solar heat gain in a building more difficult to control. Other common infills include stone veneer, metal panels, louvres, and operable windows or vents.
Unlike storefront systems, curtain wall systems are designed to span multiple floors, taking into consideration building sway and movement and design requirements such as thermal expansion and contraction; seismic requirements; water diversion; and thermal efficiency for cost-effective heating, cooling, and interior lighting.
History
Historically, buildings were constructed of timber, masonry, or a combination of both. Their exterior walls were load-bearing, supporting much or all of the load of the entire structure. The nature of the materials resulted in inherent limits to a building's height and the maximum size of window openings.
The development and widespread use of structural steel and later reinforced concrete allowed relatively small columns to support large loads. The exterior walls could be non-load bearing, and thus much lighter and more open than load-bearing walls of the past. This led to increased use of glass as an exterior façade, and the modern-day curtain wall was born.
Post-and-beam and balloon framed timber structures effectively had an early version of curtain walls, for their frames supported loads that allowed the walls themselves to serve other functions, such as keeping weather out and allowing light in. When iron began to be used extensively in buildings in late 18th-century Britain, such as at Ditherington Flax Mill, and later when buildings of wrought iron and glass such as The Crystal Palace were built, the building blocks of structural understanding were laid for the development of curtain walls.
Oriel Chambers (1864) and 16 Cook Street (1866), both built in Liverpool, England, by local architect and civil engineer Peter Ellis, are characterised by their extensive use of glass in their facades. Toward the courtyards they boasted metal-framed glass curtain walls, which makes them two of the world's first buildings to include this architectural feature. Oriel Chambers is listed in the Guinness Book of Records as the earliest such building. The extensive glass walls allowed light to penetrate further into the building, utilizing more floor space and reducing lighting costs. Oriel Chambers is set over five floors without an elevator, which had only recently been invented and was not yet widespread. The Statue of Liberty (1886) features a thin, non-load-bearing copper skin. Extensive use of glass became required for large factory buildings to allow light for manufacture, sometimes making it seem like they had all glass facades.
An early example of an all-steel curtain wall used in the classical style is the department store on Leipziger Straße, Berlin, built in 1901 (since demolished).
Some of the first curtain walls were made with steel mullions, and the polished plate glass was attached to the mullions with asbestos- or fiberglass-modified glazing compound. Eventually silicone sealants or glazing tape were substituted for the glazing compound. Some designs included an outer cap to hold the glass in place and to protect the integrity of the seals. The landmarks of curtain wall design as it came to dominate construction were the very different systems used by the United Nations Headquarters and the Lever House completed in 1952.
Ludwig Mies van der Rohe's curtain wall is one of the most important aspects of his architectural design. Mies first began prototyping the curtain wall in his high-rise residential building designs along Chicago's lakeshore, achieving the look of a curtain wall at 860-880 Lake Shore Drive Apartments. He finally perfected the curtain wall at 900–910 Lake Shore Drive, where the curtain is an autonomous aluminum and glass skin. After 900–910, Mies's curtain wall appeared on all of his subsequent high-rise building designs, including the Seagram Building in New York.
The widespread use of aluminium extrusions for mullions began during the 1970s. Aluminum alloys offer the unique advantage of being able to be easily extruded into nearly any shape required for design and aesthetic purposes. Today, the design complexity and shapes available are nearly limitless. Custom shapes can be designed and manufactured with relative ease. The Omni San Diego Hotel curtain wall in California, designed by architectural firm Hornberger and Worstel and developed by JMI Realty, is an example of a unitized curtain-wall system with integrated sunshades.
Systems and principles
Stick systems
The vast majority of ground-floor curtain walls are installed as long pieces (referred to as sticks) between floors vertically and between vertical members horizontally. Framing members may be fabricated in a shop, but installation and glazing is typically performed at the jobsite.
Ladder systems
Very similar to a stick system, a ladder system has mullions which can be split and then either snapped or screwed together, consisting of a half box and plate. This allows sections of curtain wall to be fabricated in a shop, effectively reducing the time spent installing the system onsite. The drawbacks of using such a system are reduced structural performance and visible joint lines down the length of each mullion.
Unitized systems
Unitized curtain walls entail factory fabrication and assembly of panels and may include factory glazing. These completed units are installed on the building structure to form the building enclosure. Unitized curtain wall has the advantages of: speed; lower field installation costs; and quality control within an interior climate-controlled environment. The economic benefits are typically realized on large projects or in areas of high field labor rates.
Rainscreen principle
A common feature in curtain wall technology, the rainscreen principle theorizes that equilibrium of air pressure between the outside and inside of the "rainscreen" prevents water penetration into the building. For example, the glass is captured between an inner and an outer gasket in a space called the glazing rebate. The glazing rebate is ventilated to the exterior so that the pressure on the inner and outer sides of the outer gasket is the same. When the pressure is equal across this gasket, water cannot be drawn through joints or defects in the gasket.
Design concerns
A curtain wall system must be designed to handle all loads imposed on it as well as keep air and water from penetrating the building envelope.
Loads
The loads imposed on the curtain wall are transferred to the building structure through the anchors which attach the mullions to the building.
Dead load
Dead load is defined as the weight of structural elements and the permanent features on the structure. In the case of curtain walls, this load is made up of the weight of the mullions, anchors and other structural components of the curtain wall, as well as the weight of the infill material. Additional dead loads imposed on the curtain wall may include sunshades or signage attached to the curtain wall.
Wind load
Wind load is a normal force acting on the building as the result of wind blowing on the building. Wind pressure is resisted by the curtain wall system since it envelops and protects the building. Wind loads vary greatly throughout the world, with the largest wind loads being near the coast in hurricane-prone regions. For each project location, building codes specify the required design wind loads. Often, a wind tunnel study is performed on large or unusually-shaped buildings. A scale model of the building and the surrounding vicinity is built and placed in a wind tunnel to determine the wind pressures acting on the structure in question. These studies take into account vortex shedding around corners and the effects of surrounding topography and buildings.
Seismic load
Seismic loads in a curtain wall system are limited to the interstory drift induced on the building during an earthquake. In most situations, the curtain wall is able to naturally withstand seismic and wind induced building sway because of the space provided between the glazing infill and the mullion. In tests, standard curtain wall systems are typically able to withstand this relative floor movement without glass breakage or water leakage.
Snow load
Snow loads and live loads are not typically an issue in curtain walls, since curtain walls are designed to be vertical or slightly inclined. If the slope of a wall exceeds 20 degrees or so, these loads may need to be considered.
Thermal load
Thermal loads are induced in a curtain wall system because aluminum has a relatively high coefficient of thermal expansion. This means that over the span of a couple of floors, the curtain wall will expand and contract some distance, relative to its length and the temperature differential. This expansion and contraction is accounted for by cutting horizontal mullions slightly short and allowing a space between the horizontal and vertical mullions. In unitized curtain wall, a gap is left between units, which is sealed from air and water penetration by gaskets. Vertically, anchors carrying wind load only (not dead load) are slotted to account for movement. Incidentally, this slot also accounts for live load deflection and creep in the floor slabs of the building structure.
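As a rough illustration of the magnitude involved (the expansion coefficient, mullion length and temperature swing below are assumed, typical values rather than figures from the text), the movement to be accommodated is ΔL = α·L·ΔT:

```python
alpha_aluminum = 23e-6      # 1/degC, typical coefficient of thermal expansion for aluminum alloys (assumed)
span = 2 * 3.6              # m, roughly two storeys of mullion (assumed)
delta_T = 70.0              # degC, assumed swing between coldest and hottest exterior conditions

delta_L = alpha_aluminum * span * delta_T
print(f"expansion/contraction to accommodate: {delta_L * 1000:.1f} mm")   # roughly 12 mm for these values
```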
Blast load
Accidental explosions and terrorist threats have brought on increased concern for the fragility of a curtain wall system in relation to blast loads. The bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, has spawned much of the current research and mandates in regards to building response to blast loads. Currently, all new federal buildings in the U.S. and all U.S. embassies built on foreign soil must have some provision for resistance to bomb blasts.
Since the curtain wall is at the exterior of the building, it becomes the first line of defense in a bomb attack. As such, blast resistant curtain walls are designed to withstand such forces without compromising the interior of the building to protect its occupants. Since blast loads are very high loads with short durations, the curtain wall response should be analyzed in a dynamic load analysis, with full-scale mock-up testing performed prior to design completion and installation.
Blast resistant glazing consists of laminated glass, which is meant to break but not separate from the mullions. Similar technology is used in hurricane-prone areas for impact protection from wind-borne debris.
Air infiltration
Air infiltration is the air which passes through the curtain wall from the exterior to the interior of the building. The air is infiltrated through the gaskets, through imperfect joinery between the horizontal and vertical mullions, through weep holes, and through imperfect sealing. The American Architectural Manufacturers Association (AAMA) is an industry trade group in the U.S. that has developed voluntary specifications regarding acceptable levels of air infiltration through a curtain wall.
Water penetration
Water penetration is defined as water passing from the exterior of the building to the interior of the curtain wall system. Sometimes, depending on the building specifications, a small amount of controlled water on the interior is deemed acceptable. Controlled water penetration is defined as water that penetrates beyond the innermost vertical plane of the test specimen, but has a designed means of drainage back to the exterior. AAMA Voluntary Specifications allow for controlled water penetration while the underlying ASTM E1105 test method would define such water penetration as a failure. To test the ability of a curtain wall to withstand water penetration in the field, an ASTM E1105 water spray rack system is placed on the exterior side of the test specimen, and a positive air pressure difference is applied to the system. This setup simulates a wind-driven rain event on the curtain wall to check for field performance of the product and of the installation. Field quality control and assurance checks for water penetration have become the norm as builders and installers apply such quality programs to help reduce the number of water damage litigation suits against their work.
Deflection
One of the disadvantages of using aluminum for mullions is that its modulus of elasticity is about one-third that of steel. This translates to three times more deflection in an aluminum mullion compared to a similar steel section under a given load. Building specifications set deflection limits for perpendicular (wind-induced) and in-plane (dead load-induced) deflections. These deflection limits are not imposed due to strength capacities of the mullions. Rather, they are designed to limit deflection of the glass (which may break under excessive deflection), and to ensure that the glass does not come out of its pocket in the mullion. Deflection limits are also necessary to control movement at the interior of the curtain wall. Building construction may be such that there is a wall located near the mullion, and excessive deflection can cause the mullion to contact the wall and cause damage. Also, if deflection of a wall is quite noticeable, public perception may raise undue concern that the wall is not strong enough.
Deflection limits are typically expressed as the distance between anchor points divided by a constant number. A deflection limit of L/175 is common in curtain wall specifications, based on experience with deflection limits that are unlikely to cause damage to the glass held by the mullion. Say that a given curtain wall is anchored at 12-foot (144 in) floor heights. The allowable deflection would then be 144/175 = 0.823 inches, which means the wall is allowed to deflect inward or outward a maximum of 0.823 inches at the maximum wind pressure. However, some panels require stricter movement restrictions, particularly those that prohibit a torque-like motion.
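A minimal sketch of the arithmetic in this example; the 12-foot anchor spacing and the L/175 limit are those already quoted, and the second spacing is purely illustrative:

```python
def allowable_deflection(span_inches, divisor=175.0):
    """Perpendicular deflection limit expressed as span / divisor (e.g. L/175)."""
    return span_inches / divisor

print(round(allowable_deflection(144), 3))   # 12 ft anchor spacing -> 0.823 in, as in the text
print(round(allowable_deflection(120), 3))   # 10 ft spacing (illustrative) -> 0.686 in
```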
Deflection in mullions is controlled by different shapes and depths of curtain wall members. The depth of a given curtain wall system is usually controlled by the area moment of inertia required to keep deflection limits under the specification. Another way to limit deflections in a given section is to add steel reinforcement to the inside tube of the mullion. Since steel deflects at one-third the rate of aluminum, the steel will resist much of the load at a lower cost or smaller depth.
Deflection in curtain wall mullions also differs from deflection of the building structure, whether concrete, steel, or timber. Curtain wall anchors must be designed to allow differential movement between the building structure and the curtain wall.
Strength
Strength (or maximum usable stress) available to a particular material is not related to its material stiffness (the material property governing deflection); it is a separate criterion in curtain wall design and analysis. This often affects the selection of materials and sizes for design of the system. The allowable bending strength for certain aluminum alloys, such as those typically used in curtain wall framing, approaches the allowable bending strength of steel alloys used in building construction.
Thermal criteria
Relative to other building components, aluminum has a high heat transfer coefficient, meaning that aluminum is a very good conductor of heat. This translates into high heat loss through aluminum (or steel) curtain wall mullions. There are several ways to compensate for this heat loss, the most common way being the addition of thermal breaks. These are barriers between exterior metal and interior metal, usually made of polyvinyl chloride (PVC). These breaks provide a significant decrease in the thermal conductivity of the curtain wall. However, since the thermal break interrupts the aluminum mullion, the overall moment of inertia of the mullion is reduced and must be accounted for in the structural analysis and deflection analysis of the system.
Thermal conductivity of the curtain wall system is important because of heat loss through the wall, which affects the heating and cooling costs of the building. On a poorly performing curtain wall, condensation may form on the interior of the mullions. This could cause damage to adjacent interior trim and walls.
Rigid insulation is provided in spandrel areas to provide a higher R-value at these locations.
Thermally-broken mullions with double- or triple-glazed IGUs are often referred to as "high-performance" curtain walls. While these curtain wall systems are more energy-efficient than older, single-glazed versions, they are still significantly less efficient than opaque (solid) wall construction. For example, nearly all curtain wall systems, thermally-broken or otherwise, have a U-value of 0.2 or higher, which is equivalent to an R-value of 5 or lower.
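The quoted equivalence is simply R = 1/U (imperial units assumed here, with U in Btu/(h·ft²·°F)); a quick check, with the wall area and temperature difference below chosen only for illustration:

```python
def r_value(u_value):
    """R = 1 / U (imperial units)."""
    return 1.0 / u_value

def heat_loss_btuh(u_value, area_ft2, delta_T_F):
    """Steady-state heat loss Q = U * A * dT."""
    return u_value * area_ft2 * delta_T_F

print(r_value(0.2))                          # U = 0.2 -> R = 5, as quoted above
print(heat_loss_btuh(0.2, 1000.0, 40.0))     # illustrative: 8000 Btu/h through 1000 ft2 at a 40 F difference
```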
Infills
Infill refers to the large panels that are inserted into the curtain wall between mullions. Infills are typically glass but may be made up of nearly any exterior building element. Some common infills include metal panels, louvers, and photovoltaic panels. Infills are also referred to as spandrels or spandrel panels.
Glass
Float glass is by far the most common curtain wall glazing type. It can be manufactured in an almost infinite combination of color, thickness, and opacity. For commercial construction, the two most common configurations are 1/4 inch monolithic and 1 inch insulating glass. 1/4 inch glass is typically used only in spandrel areas, while insulating glass is used for the rest of the building (sometimes spandrel glass is specified as insulating glass as well). The 1 inch insulating glass is typically made up of two 1/4-inch lites of glass separated by a 1/2-inch airspace. The air inside is usually atmospheric air, but some inert gases, such as argon or krypton, may be used in order to offer better thermal transmittance values. In Europe, triple-pane insulating glass infill is now common. In Scandinavia, the first curtain walls with quadruple-pane glazing have been built.
Larger thicknesses are typically employed for buildings or areas with higher thermal, relative humidity, or sound transmission requirements, such as laboratory areas or recording studios. In residential construction, thinner monolithic and insulating glass is commonly used.
Glass may be used which is transparent, translucent, or opaque, or in varying degrees thereof. Transparent glass usually refers to vision glass in a curtain wall. Spandrel or vision glass may also contain translucent glass, which could be for security or aesthetic purposes. Opaque glass is used in areas to hide a column or spandrel beam or shear wall behind the curtain wall. Another method of hiding spandrel areas is through shadow box construction (providing a dark enclosed space behind the transparent or translucent glass). Shadow box construction creates a perception of depth behind the glass that is sometimes desired.
Stone veneer
Thin blocks of stone can be inset within a curtain wall system. The type of stone used is limited only by the strength of the stone and the ability to manufacture it in the proper shape and size. Common stone types used are: calcium silicate, granite, marble, travertine, limestone, and engineered stone. To reduce weight and improve strength, the natural stone may be attached to an aluminum honeycomb backing.
Panels
Metal panels can take various forms including stainless steel, aluminum plate; aluminum composite panels consisting of two thin aluminum sheets sandwiching a thin plastic interlayer; copper wall cladding, and panels consisting of metal sheets bonded to rigid insulation, with or without an inner metal sheet to create a sandwich panel. Other opaque panel materials include fiber-reinforced plastic (FRP) and terracotta. Terracotta curtain wall panels were first used in Europe, but only a few manufacturers produce high quality modern terracotta curtain wall panels.
Louvers
A louver is provided in an area where mechanical equipment located inside the building requires ventilation or fresh air to operate. They can also serve as a means of allowing outside air to filter into the building to take advantage of favorable climatic conditions and minimize the usage of energy-consuming HVAC systems. Curtain wall systems can be adapted to accept most types of louver systems to maintain the same architectural sightlines and style while providing desired functionality.
Windows and vents
Most curtain wall glazing is fixed, meaning that there is no access to the exterior of the building except through doors. However, windows or vents can be glazed into the curtain wall system as well, to provide required ventilation or operable windows. Nearly any window type can be made to fit into a curtain wall system.
Fire safety
Firestopping at the perimeter slab edge, which is a gap between the floor and the curtain wall, is essential to slow the passage of fire and combustion gases between floors. Spandrel areas must have non-combustible insulation at the interior face of the curtain wall. Some building codes require the mullion to be wrapped in heat-retarding insulation near the ceiling to prevent the mullions from melting and spreading the fire to the floor above. The firestop at the perimeter slab edge is considered a continuation of the fire-resistance rating of the floor slab. The curtain wall itself, however, is not ordinarily required to have a rating. This causes a quandary as compartmentalization (fire protection) is typically based upon closed compartments to avoid fire and smoke migrations beyond each engaged compartment. A curtain wall by its very nature prevents the completion of the compartment (or envelope). The use of fire sprinklers has been shown to mitigate this matter. As such, unless the building is sprinklered, fire may still travel up the curtain wall, if the glass on the exposed floor is shattered from heat, causing flames to lick up the outside of the building.
Falling glass can endanger pedestrians, firefighters and firehoses below. An example of this is the 1988 First Interstate Tower fire in Los Angeles, California. The fire leapfrogged up the tower by shattering the glass and then consuming the aluminum framing holding the glass. Aluminum's melting temperature is 660 °C, whereas building fires can reach 1,100 °C. The melting point of aluminum is typically reached within minutes of the start of a fire.
Fireman knock-out glazing panels are often required for venting and emergency access from the exterior. Knock-out panels are generally fully tempered glass to allow full fracturing of the panel into small pieces and relatively safe removal from the opening.
Maintenance and repair
Curtain walls and perimeter sealants require maintenance to maximize service life. Perimeter sealants, properly designed and installed, have a typical service life of 10 to 15 years. Removal and replacement of perimeter sealants require meticulous surface preparation and proper detailing.
Aluminum frames are generally painted or anodized. Care must be taken when cleaning areas around anodized material as some cleaning agents will destroy the finish. Factory applied fluoropolymer thermoset coatings have good resistance to environmental degradation and require only periodic cleaning. Recoating with an air-dry fluoropolymer coating is possible but requires special surface preparation and is not as durable as the baked-on original coating. Anodized aluminum frames cannot be "re-anodized" in place but can be cleaned and protected by proprietary clear coatings to improve appearance and durability.
Stainless steel curtain walls require no coatings, and embossed, as opposed to abrasively finished, surfaces maintain their original appearance indefinitely without cleaning or other maintenance. Some specially textured matte stainless steel surface finishes are hydrophobic and resist airborne and rain-borne pollutants. This has been valuable in the American Southwest and in the Mideast for avoiding dust, as well as avoiding soot and smoke staining in polluted urban areas.
See also
Mullion wall
Insulated glazing
Quadruple glazing
Copper in architecture
References
External links
European Commission's portal for efficient Curtain Walling
EN 13830: Curtain Walling - Product Standard
EN 13119: Curtain Walling - Terminology
Understanding Curtain Wall & Window Wall differences
Types of wall
Building engineering
Construction
Architectural elements | Curtain wall (architecture) | [
"Technology",
"Engineering"
] | 4,973 | [
"Structural engineering",
"Building engineering",
"Construction",
"Types of wall",
"Architectural elements",
"Civil engineering",
"Components",
"Architecture"
] |
2,784,213 | https://en.wikipedia.org/wiki/Escuela%20Superior%20Latinoamericana%20de%20Inform%C3%A1tica | The Escuela Superior Latinoamericana de Informática (Spanish for "Latin American Superior School of Informatics", ESLAI) was an Argentine undergraduate school of computer science established in 1986. Classes were held in a former country house at the Pereyra Iraola Park in Buenos Aires Province, located approximately 40 km from Buenos Aires.
The school had Argentine mathematician Manuel Sadosky among its main founders. In spite of its short life, it had a considerable impact on informatics teaching and research in Argentina and South America. ESLAI courses were attended by students from several Spanish-speaking countries in South America such as Argentina, Uruguay, Paraguay, Bolivia, Peru, Ecuador, Colombia, and Venezuela. All students had a full scholarship and the admission process was passed by about 15% of applicants.
ESLAI established cooperation programs with a number of foreign universities in the Americas as well as in Europe. Those agreements sponsored important visitors to the school, such as Alberto O. Mendelzon, Jean-Raymond Abrial, Ugo Montanari, Carlo Ghezzi and Giorgio Ausiello, and enabled its students to attend graduate school at foreign universities.
The school had a significant European influence and was oriented towards theoretical aspects of computer science, such as typed lambda calculus, formal verification, and Martin-Löf's intuitionistic type theory.
Unfortunately, ESLAI was never able to develop a relationship with local companies, which, in an emerging economy like Argentina's, is essential for becoming involved with more practical problems. Without financial support, ESLAI had to close down in September 1990 during the presidency of Carlos Menem.
References
1986 establishments in Argentina
1990 disestablishments in Argentina
Educational institutions established in 1986
Educational institutions disestablished in 1990
Universities in Buenos Aires Province
Computer science departments | Escuela Superior Latinoamericana de Informática | [
"Technology"
] | 360 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
2,784,317 | https://en.wikipedia.org/wiki/Poincar%C3%A9%E2%80%93Bendixson%20theorem | In mathematics, the Poincaré–Bendixson theorem is a statement about the long-term behaviour of orbits of continuous dynamical systems on the plane, cylinder, or two-sphere.
Theorem
Given a differentiable real dynamical system defined on an open subset of the plane, every non-empty compact ω-limit set of an orbit, which contains only finitely many fixed points, is either
a fixed point,
a periodic orbit, or
a connected set composed of a finite number of fixed points together with homoclinic and heteroclinic orbits connecting these.
Moreover, there is at most one orbit connecting different fixed points in the same direction. However, there could be countably many homoclinic orbits connecting one fixed point.
Discussion
A weaker version of the theorem was originally conceived by Henri Poincaré, although he lacked a complete proof, which was later given by Ivar Bendixson.
Continuous dynamical systems that are defined on two-dimensional manifolds other than the plane (or cylinder or two-sphere), as well as those defined on higher-dimensional manifolds, may exhibit ω-limit sets that defy the three possible cases under the Poincaré–Bendixson theorem. On a torus, for example, it is possible to have a recurrent non-periodic orbit, and three-dimensional systems may have strange attractors. Nevertheless, it is possible to classify the minimal sets of continuous dynamical systems on any two-dimensional compact and connected manifold due to a generalization by Arthur J. Schwartz.
Applications
One important implication is that a two-dimensional continuous dynamical system cannot give rise to a strange attractor. If a strange attractor C did exist in such a system, then it could be enclosed in a closed and bounded subset of the phase space. By making this subset small enough, any nearby stationary points could be excluded. But then the Poincaré–Bendixson theorem says that C is not a strange attractor at all—it is either a limit cycle or it converges to a limit cycle.
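A standard numerical illustration of this point (not part of the theorem itself) is the Van der Pol oscillator, a planar system whose bounded orbits settle onto a periodic limit cycle, exactly as the theorem permits; the parameter, step size and initial condition below are arbitrary choices:

```python
import numpy as np

def van_der_pol(state, mu=1.0):
    """Van der Pol oscillator written as a planar first-order system."""
    x, y = state
    return np.array([y, mu * (1.0 - x * x) * y - x])

def integrate(state, dt=0.01, steps=20000):
    """Plain RK4 integration of the planar system."""
    traj = [np.array(state, dtype=float)]
    for _ in range(steps):
        s = traj[-1]
        k1 = van_der_pol(s)
        k2 = van_der_pol(s + 0.5 * dt * k1)
        k3 = van_der_pol(s + 0.5 * dt * k2)
        k4 = van_der_pol(s + dt * k3)
        traj.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

traj = integrate([0.1, 0.0])
# After transients, the orbit's amplitude settles near a fixed value (about 2 for mu = 1),
# i.e. the omega-limit set is a periodic orbit, as Poincare-Bendixson allows.
print(round(np.abs(traj[-5000:, 0]).max(), 2))
```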
The Poincaré–Bendixson theorem does not apply to discrete dynamical systems, where chaotic behaviour can arise in two- or even one-dimensional systems.
See also
Rotation number
References
Theorems in dynamical systems | Poincaré–Bendixson theorem | [
"Mathematics"
] | 456 | [
"Theorems in dynamical systems",
"Mathematical theorems",
"Mathematical problems",
"Dynamical systems"
] |
2,784,419 | https://en.wikipedia.org/wiki/OPC%20Historical%20Data%20Access | This group of standards, created by the OPC Foundation, provides COM specifications for communicating data from devices and applications that provide historical data, such as databases. The specifications provide for access to raw, interpolated and aggregate data (data with calculations).
OPC Historical Data Access, also known as OPC HDA, is used to exchange archived process data. This is in contrast to the OPC Data Access (OPC DA) specification that deals with real-time data. OPC technology is based on client / server architecture. Therefore, an OPC client, such as a trending application or spreadsheet, can retrieve data from an OPC compliant data source, such as a historian, using OPC HDA.
Similar to the OPC Data Access specification, OPC Historical Data Access also uses Microsoft's DCOM to transport data. DCOM also provides OPC HDA with full security features such as user authentication and authorization, as well as communication encryption services. OPC HDA Clients and Servers can reside on separate PCs, even if they are separated by a firewall. To do this, system integrators must configure DCOM properly as well as open ports in the firewall. If using the Windows firewall, users only need to open a single port.
See also
OLE for process control
OPC Foundation
OPC Data Access
External links
OPC Foundation
OPC Historical Data Access specification
Industrial automation
Computer standards
Component-based software engineering | OPC Historical Data Access | [
"Technology",
"Engineering"
] | 300 | [
"Computer standards",
"Industrial engineering",
"Automation",
"Component-based software engineering",
"Industrial automation",
"Components"
] |
2,785,756 | https://en.wikipedia.org/wiki/Extinction%20%28psychology%29 | Extinction is a behavioral phenomenon observed in both operantly conditioned and classically conditioned behavior, which manifests itself as the fading of a non-reinforced conditioned response over time. When operant behavior that has been previously reinforced no longer produces reinforcing consequences, the behavior gradually returns to operant levels (to the frequency of the behavior previous to learning, which may or may not be zero). In classical conditioning, when a conditioned stimulus is presented alone, so that it no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually stops. For example, after Pavlov's dog was conditioned to salivate at the sound of a metronome, it eventually stopped salivating to the metronome after the metronome had been sounded repeatedly but no food came. Many anxiety disorders such as post-traumatic stress disorder are believed to reflect, at least in part, a failure to extinguish conditioned fear.
Theories
The dominant account of extinction involves associative models. However, there is debate over whether extinction involves simply "unlearning" the unconditional stimulus (US) – conditional stimulus (CS) association (e.g., the Rescorla–Wagner account) or, alternatively, a "new learning" of an inhibitory association that masks the original excitatory association (e.g., the Konorski, Pearce and Hall account). A third account concerns non-associative mechanisms such as habituation, modulation and response fatigue. Myers & Davis reviewed fear extinction in rodents and suggested that multiple mechanisms may be at work depending on the timing and circumstances in which the extinction occurs.
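A minimal sketch of the Rescorla–Wagner "unlearning" account mentioned above: the associative strength V is updated on each trial by ΔV = αβ(λ − V), where λ = 1 when the US is present (acquisition) and λ = 0 when it is absent (extinction), so V grows during acquisition and decays during extinction. The learning rate and trial counts below are illustrative assumptions, not empirical estimates.

```python
def rescorla_wagner(trials_acquisition=20, trials_extinction=20, alpha_beta=0.3):
    """Trial-by-trial associative strength V under the Rescorla-Wagner rule."""
    V = 0.0
    history = []
    for _ in range(trials_acquisition):
        V += alpha_beta * (1.0 - V)      # US present: lambda = 1
        history.append(V)
    for _ in range(trials_extinction):
        V += alpha_beta * (0.0 - V)      # US absent: lambda = 0 -> V decays (extinction)
        history.append(V)
    return history

curve = rescorla_wagner()
print(round(max(curve), 3), round(curve[-1], 3))   # V rises near 1, then falls back toward 0
```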
Given the competing views and difficult observations for the various accounts researchers have turned to investigations at the cellular level (most often in rodents) to tease apart the specific brain mechanisms of extinction, in particular the role of the brain structures (amygdala, hippocampus, the prefrontal cortex), and specific neurotransmitter systems (e.g., GABA, NMDA). A recent study in rodents by Amano, Unal and Paré published in Nature Neuroscience found that extinction of a conditioned fear response is correlated with synaptic inhibition in the fear output neurons of the central amygdala that project to the periaqueductal gray that controls freezing behavior. They infer that inhibition derives from the ventromedial prefrontal cortex and suggest promising targets at the cellular level for new treatments of anxiety.
Classical conditioning
Learning extinction can also occur in a classical conditioning paradigm. In this model, a neutral cue or context can come to elicit a conditioned response when it is paired with an unconditioned stimulus. An unconditioned stimulus is one that naturally and automatically triggers a certain behavioral response. A certain stimulus or environment can become a conditioned cue or a conditioned context, respectively, when paired with an unconditioned stimulus. An example of this process is a fear conditioning paradigm using a mouse. In this instance, a tone paired with a mild footshock can become a conditioned cue, eliciting a fear response when presented alone in the future. In the same way, the context in which a footshock is received such as a chamber with certain dimensions and a certain odor can elicit the same fear response when the mouse is placed back in that chamber in the absence of the footshock.
In this paradigm, extinction occurs when the animal is re-exposed to the conditioned cue or conditioned context in the absence of the unconditioned stimulus. As the animal learns that the cue or context no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually decreases, or extinguishes.
Operant conditioning
In the operant conditioning paradigm, extinction refers to the process of no longer providing the reinforcement that has been maintaining a behavior. Operant extinction differs from forgetting in that the latter refers to a decrease in the strength of a behavior over time when it has not been emitted. For example, a child who climbs under his desk, a response which has been reinforced by attention, is subsequently ignored until the attention-seeking behavior no longer occurs. In his autobiography, B. F. Skinner noted how he accidentally discovered the extinction of an operant response due to the malfunction of his laboratory equipment:
When the extinction of a response has occurred, the discriminative stimulus is then known as an extinction stimulus (SΔ or S-delta). When an S-delta is present, the reinforcing consequence which characteristically follows a behavior does not occur. This is the opposite of a discriminative stimulus, which is a signal that reinforcement will occur. For instance, in an operant conditioning chamber, if food pellets are only delivered when a response is emitted in the presence of a green light, the green light is a discriminative stimulus. If, when a red light is present, food will not be delivered, then the red light is an extinction stimulus. (Food is used here as an example of a reinforcer.) However, some distinguish between extinction stimuli and "S-delta" when the behavior has no reinforcement history: for example, in an array of three items (phone, pen, paper) presented with the instruction "Which one is the phone?", selecting "pen" or "paper" will not produce a response from the teacher, but this is not technically extinction on the first trial because those selections have no reinforcement history. They would still be considered S-deltas.
Successful extinction procedures
In order for extinction to work effectively, it must be done consistently. Extinction is considered successful when responding in the presence of an extinction stimulus (a red light or a teacher not giving a misbehaving student attention, for instance) is zero. When a behavior reappears after it has gone through extinction, it is called spontaneous recovery. Extinction is achieved when the challenging behavior no longer occurs, even without reinforcement being delivered. If there is a relapse and reinforcement is given, the problem behavior will return. Extinction can be a long process; therefore, it requires that the facilitator of the procedure be completely invested from beginning to end in order for the outcome to be successful. The fewer challenging behaviors observed after extinction, the less significant any spontaneous recovery is likely to be. While working towards extinction, different schedules for administering reinforcement may be used. Some people use an intermittent reinforcement schedule, such as fixed ratio, variable ratio, fixed interval, or variable interval; another option is continuous reinforcement. Schedules can be fixed or variable, and the number of reinforcements given during each interval can also vary.
Extinction procedures in the classroom
A positive classroom environment yields better results in learning growth. Therefore, in order for children to be successful in the classroom, their environment should be free of problem behaviors that can cause distractions. The classroom should be a place that offers consistency, structure, and stability, where the student feels empowered, supported and safe. When problem behaviors occur, learning opportunities decrease. Problem behaviors in the classroom that would benefit from extinction may include off-task behaviors, blurting, yelling, interrupting and use of inappropriate language. Extinction has been used primarily when the problem behaviors interfered with successful classroom outcomes. While other methods have been used in conjunction with extinction, positive outcomes are not likely when extinction is not used in behavior interventions.
Burst
While extinction, when implemented consistently over time, results in the eventual decrease of the undesired behavior, in the short term the subject might exhibit what is called an extinction burst. An extinction burst will often occur when the extinction procedure has just begun. This usually consists of a sudden and temporary increase in the response's frequency, followed by the eventual decline and extinction of the behavior targeted for elimination. Novel behaviors, emotional responses, or aggressive behavior may also occur.
For example, a pigeon has been reinforced to peck an electronic button. During its training history, every time the pigeon pecked the button, it will have received a small amount of bird seed as a reinforcer. Thus, whenever the bird is hungry, it will peck the button to receive food. However, if the button is turned off, the hungry pigeon will first try pecking the button just as it has in the past. When no food is forthcoming, the bird will likely try repeatedly. After a period of frantic activity, in which its pecking behavior yields no result, the pigeon's pecking will decrease in frequency.
Although not explained by reinforcement theory, the extinction burst can be understood using control theory. In perceptual control theory, the degree of output involved in any action is proportional to the discrepancy between the reference value (desired rate of reward in the operant paradigm) and the current input. Thus, when reward is removed, the discrepancy increases, and the output is increased. In the long term, 'reorganisation', the learning algorithm of control theory, would adapt the control system such that output is reduced.
The evolutionary advantage of this extinction burst is clear. In a natural environment, an animal that persists in a learned behavior, despite not resulting in immediate reinforcement, might still have a chance of producing reinforcing consequences if the animal tries again. This animal would be at an advantage over another animal that gives up too easily.
Despite the name, however, not every explosive reaction to adverse stimuli subsides to extinction. Indeed, a small minority of individuals persist in their reaction indefinitely.
Extinction-induced variability
Extinction-induced variability serves an adaptive role similar to the extinction burst. When extinction begins, subjects can exhibit variations in response topography (the movements involved in the response). Response topography is always somewhat variable due to differences in environment or idiosyncratic causes but normally a subject's history of reinforcement keeps slight variations stable by maintaining successful variations over less successful variations. Extinction can increase these variations significantly as the subject attempts to acquire the reinforcement that previous behaviors produced. If a person attempts to open a door by turning the knob, but is unsuccessful, they may next try jiggling the knob, pushing on the frame, knocking on the door or other behaviors to get the door to open. Extinction-induced variability can be used in shaping to reduce problematic behaviors by reinforcing desirable behaviors produced by extinction-induced variability.
Autism
Children with Autism Spectrum Disorder (ASD) are known to have restricted or repetitive behaviors that can cause problems when trying to function in day-to-day activities. Extinction is used as an intervention to help with problem behaviors. Some problem behaviors may include but are not limited to, self-injurious behaviors, aggression, tantrums, problems with sleep, and making choices. Ignoring certain self-injurious behaviors can lead to the extinction of said behaviors in children with ASD. Escape Extinction (EE) is commonly used in instances when having to make choices causes problem behavior. An example could be having to choose between mint or strawberry flavored toothpaste when brushing your teeth. Those would be the only two options available. When implementing EE, the interventionist will use physical and verbal prompting to help the subject make a choice.
Anxiety
Fear extinction is the fundamental principle behind exposure therapy, a common treatment for anxiety disorders. In this process, the conditioned fear responses diminish progressively over time, when the previously conditioned stimulus is presented without being paired with the unconditioned stimulus. To understand the brain changes during this, a task-based functional Magnetic Resonance Imaging (fMRI) scan can be performed. Moreover, Positron Emission Tomography (PET) can be used to quantify endogenous dopamine release. Dopamine antagonists like [11C] raclopride and [18F] fallypride can be used to study D2/D3 dopamine receptor binding potential in the brain. [11C] Raclopride is popular in studies focusing on striatal dopamine activity, partly because of its ease of use given its shorter half-life (about 20 minutes). On the other hand, [18F] fallypride is best for studying extrastriatal dopamine binding potential but has a half-life of approximately 110 minutes. Additionally, simultaneous PET and fMRI allow researchers to capture both dopamine binding potential and blood oxygen level-dependent (BOLD) signals during the task. Recent studies highlight the critical role of dorsolateral and ventromedial prefrontal cortex regions (vmPFC), together with other areas like the anterior insula, amygdala, and hippocampus, in facilitating fear extinction processes.
Neurobiology
Glutamate
Glutamate is a neurotransmitter that has been extensively implicated in the neural basis of learning. D-Cycloserine (DCS) is a partial agonist for the glutamate receptor NMDA at the glycine site, and has been trialed as an adjunct to conventional exposure-based treatments based on the principle of cue extinction.
A role for glutamate has also been identified in the extinction of a cocaine-associated environmental stimulus through testing in rats. Specifically, the metabotropic glutamate 5 receptor (mGlu5) is important for the extinction of a cocaine-associated context and a cocaine-associated cue.
Dopamine
Dopamine is another neurotransmitter implicated in learning extinction across both appetitive and aversive domains. Dopamine signaling has been implicated in the extinction of conditioned fear and the extinction of drug-related learning.
Circuitry
The brain region most extensively implicated in learning extinction is the infralimbic cortex (IL) of the medial prefrontal cortex (mPFC). The IL is important for the extinction of reward- and fear-associated behaviors, while the amygdala has been strongly implicated in the extinction of conditioned fear. The posterior cingulate cortex (PCC) and temporoparietal junction (TPJ) have also been identified as regions that may be associated with impaired extinction in adolescents.
Across development
There is a strong body of evidence to suggest that extinction alters across development. That is, learning extinction may differ during infancy, childhood, adolescence and adulthood. During infancy and childhood, learning extinction is especially persistent, which some have interpreted as erasure of the original CS-US association, but this remains contentious. In contrast, during adolescence and adulthood extinction is less persistent, which is interpreted as new learning of a CS-no US association that exists in tandem and opposition to the original CS-US memory.
See also
Operant conditioning
Post-traumatic stress disorder
References
Behavioral concepts | Extinction (psychology) | [
"Biology"
] | 3,002 | [
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
2,786,041 | https://en.wikipedia.org/wiki/Oblique%20shock | An oblique shock wave is a shock wave that, unlike a normal shock, is inclined with respect to the direction of incoming air. It occurs when a supersonic flow encounters a corner that effectively turns the flow into itself and compresses. The upstream streamlines are uniformly deflected after the shock wave. The most common way to produce an oblique shock wave is to place a wedge into supersonic, compressible flow. Similar to a normal shock wave, the oblique shock wave consists of a very thin region across which nearly discontinuous changes in the thermodynamic properties of a gas occur. While the upstream and downstream flow directions are unchanged across a normal shock, they are different for flow across an oblique shock wave.
It is always possible to convert an oblique shock into a normal shock by a Galilean transformation.
Wave theory
For a given Mach number, M1, and corner angle, θ, the oblique shock angle, β, and the downstream Mach number, M2, can be calculated. Unlike after a normal shock where M2 must always be less than 1, in oblique shock M2 can be supersonic (weak shock wave) or subsonic (strong shock wave). Weak solutions are often observed in flow geometries open to atmosphere (such as on the outside of a flight vehicle). Strong solutions may be observed in confined geometries (such as inside a nozzle intake). Strong solutions are required when the flow needs to match the downstream high pressure condition. Discontinuous changes also occur in the pressure, density and temperature, which all rise downstream of the oblique shock wave.
The θ-β-M equation
Using the continuity equation and the fact that the tangential velocity component does not change across the shock, trigonometric relations eventually lead to the θ-β-M equation which shows θ as a function of M1, β and γ, where γ is the heat capacity ratio.
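The relation itself, given here in its standard textbook form since the formula has not survived in this copy of the text, is

$$\tan\theta = 2\cot\beta\;\frac{M_1^{2}\sin^{2}\beta - 1}{M_1^{2}\left(\gamma + \cos 2\beta\right) + 2}.$$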
It is more intuitive to solve for β as a function of M1 and θ, but this approach is more complicated; its results are often contained in tables or calculated through a numerical method, as in the sketch below.
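A minimal sketch of such a numerical solution (a simple scan-and-bisection approach, not taken from any particular reference), returning the weak and strong shock angles for a given upstream Mach number and deflection angle:

```python
import numpy as np

def theta_from_beta(M1, beta, gamma=1.4):
    """Flow deflection angle theta (rad) for shock angle beta (rad), via the theta-beta-M relation."""
    return np.arctan(2.0 / np.tan(beta) * (M1**2 * np.sin(beta)**2 - 1.0)
                     / (M1**2 * (gamma + np.cos(2.0 * beta)) + 2.0))

def beta_from_theta(M1, theta, gamma=1.4, n=2000):
    """Return (weak, strong) shock angles in radians for a given deflection angle theta.
    Scans beta between the Mach angle and 90 degrees, then refines each sign change by bisection."""
    betas = np.linspace(np.arcsin(1.0 / M1) + 1e-6, np.pi / 2 - 1e-6, n)
    f = theta_from_beta(M1, betas, gamma) - theta
    roots = []
    for i in range(n - 1):
        if f[i] * f[i + 1] < 0.0:                     # bracketed root
            lo, hi = betas[i], betas[i + 1]
            for _ in range(60):                        # bisection refinement
                mid = 0.5 * (lo + hi)
                if (theta_from_beta(M1, lo, gamma) - theta) * (theta_from_beta(M1, mid, gamma) - theta) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    if not roots:
        return None                                    # theta exceeds theta_max: detached bow shock
    return min(roots), max(roots)                      # weak and strong solutions

# Example: Mach 2 flow over a 10-degree wedge; the weak solution is about 39 degrees, as in standard tables.
sol = beta_from_theta(2.0, np.radians(10.0))
if sol:
    print([round(np.degrees(b), 2) for b in sol])
```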
Maximum deflection angle
Within the θ-β-M equation, a maximum corner angle, θMAX, exists for any upstream Mach number. When θ > θMAX, the oblique shock wave is no longer attached to the corner and is replaced by a detached bow shock. A θ-β-M diagram, common in most compressible flow textbooks, shows a series of curves that will indicate θMAX for each Mach number. The θ-β-M relationship will produce two β angles for a given θ and M1, with the larger angle called a strong shock and the smaller called a weak shock. The weak shock is almost always seen experimentally.
The rise in pressure, density, and temperature after an oblique shock can be calculated as follows:
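The standard relations, stated here for a calorically perfect gas because the formulas are missing from this copy, follow from applying the normal-shock relations to the normal Mach number component M₁ sin β:

$$\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma+1}\left(M_1^{2}\sin^{2}\beta - 1\right),\qquad \frac{\rho_2}{\rho_1} = \frac{(\gamma+1)\,M_1^{2}\sin^{2}\beta}{(\gamma-1)\,M_1^{2}\sin^{2}\beta + 2},\qquad \frac{T_2}{T_1} = \frac{p_2}{p_1}\,\frac{\rho_1}{\rho_2}.$$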
M2 is solved for as follows, where θ is the post-shock flow deflection angle:
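In its usual form (again the standard expression, supplied because the formula is missing here):

$$M_2 = \frac{1}{\sin(\beta - \theta)}\sqrt{\frac{1 + \frac{\gamma-1}{2}M_1^{2}\sin^{2}\beta}{\gamma M_1^{2}\sin^{2}\beta - \frac{\gamma-1}{2}}}.$$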
Wave applications
Oblique shocks are often preferable in engineering applications when compared to normal shocks. This can be attributed to the fact that using one or a combination of oblique shock waves results in more favourable post-shock conditions (smaller increase in entropy, less stagnation pressure loss, etc.) when compared to utilizing a single normal shock. An example of this technique can be seen in the design of supersonic aircraft engine intakes or supersonic inlets. A type of these inlets is wedge-shaped to compress air flow into the combustion chamber while minimizing thermodynamic losses. Early supersonic aircraft jet engine intakes were designed using compression from a single normal shock, but this approach caps the maximum achievable Mach number to roughly 1.6. Concorde (which first flew in 1969) used variable geometry wedge-shaped intakes to achieve a maximum speed of Mach 2.2. A similar design was used on the F-14 Tomcat (the F-14D was first delivered in 1994) and achieved a maximum speed of Mach 2.34.
Many supersonic aircraft wings are designed around a thin diamond shape. Placing a diamond-shaped object at an angle of attack relative to the supersonic flow streamlines will result in two oblique shocks propagating from the front tip over the top and bottom of the wing, with Prandtl-Meyer expansion fans created at the two corners of the diamond closest to the front tip. When correctly designed, this generates lift.
Waves and the hypersonic limit
As the Mach number of the upstream flow becomes increasingly hypersonic, the equations for the pressure, density, and temperature after the oblique shock wave reach a mathematical limit. The pressure and density ratios can then be expressed as:
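In the limit M₁ → ∞ these are usually written as (standard result, supplied because the expressions are missing in this copy):

$$\frac{p_2}{p_1} \to \frac{2\gamma}{\gamma+1}\,M_1^{2}\sin^{2}\beta,\qquad \frac{\rho_2}{\rho_1} \to \frac{\gamma+1}{\gamma-1}.$$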
For a perfect atmospheric gas approximation using γ = 1.4, the hypersonic limit for the density ratio is 6. However, hypersonic post-shock dissociation of O2 and N2 into O and N lowers γ, allowing for higher density ratios in nature. The hypersonic temperature ratio is:
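The corresponding standard expression, again supplied because it is missing here, is

$$\frac{T_2}{T_1} \to \frac{2\gamma(\gamma-1)}{(\gamma+1)^{2}}\,M_1^{2}\sin^{2}\beta.$$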
See also
Bow shock (aerodynamics)
Gas dynamics
Mach reflection
Moving shock
Shock polar
Shock wave
References
External links
NASA oblique shock wave calculator (Java applet)
Supersonic wind tunnel test demonstration (Mach 2.5) with flat plate and wedge creating an oblique shock(Video)
Aerodynamics
Fluid dynamics
Shock waves | Oblique shock | [
"Physics",
"Chemistry",
"Engineering"
] | 1,080 | [
"Physical phenomena",
"Shock waves",
"Chemical engineering",
"Aerodynamics",
"Waves",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
2,789,271 | https://en.wikipedia.org/wiki/Multiphysics%20simulation | In computational modelling, multiphysics simulation (often shortened to simply "multiphysics") is defined as the simultaneous simulation of different aspects of a physical system or systems and the interactions among them. For example, simultaneous simulation of the physical stress on an object, the temperature distribution of the object and the thermal expansion which leads to the variation of the stress and temperature distributions would be considered a multiphysics simulation. Multiphysics simulation is related to multiscale simulation, which is the simultaneous simulation of a single process on either multiple time or distance scales.
As an interdisciplinary field, multiphysics simulation can span many science and engineering disciplines. Simulation methods frequently include numerical analysis, partial differential equations and tensor analysis.
Multiphysics simulation process
The implementation of a multiphysics simulation follows a typical series of steps:
Identify the aspects of the system to be simulated, including physical processes, starting conditions, and the coupling or boundary conditions among these processes.
Create a discrete mathematical model of the system.
Numerically solve the model.
Process the resulting data.
Mathematical models
Mathematical models used in multiphysics simulations are generally a set of coupled equations. The equations can be divided into three categories according to the nature and intended role: governing equation, auxiliary equations and boundary/initial conditions. A governing equation describes a major physical mechanism or process. Multiphysics simulations are numerically implemented with discretization methods such as the finite element method, finite difference method, or finite volume method.
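As a minimal, illustrative sketch (not a description of any particular code) of how two such discretized single-physics models can be coupled, the example below solves a 1-D steady heat-conduction problem with a finite difference method and then feeds the resulting temperature field into a thermal-stress estimate for a fully constrained bar; the one-way coupling and all material constants are assumptions made for illustration:

```python
import numpy as np

# --- physics 1: 1-D steady heat conduction between two fixed-temperature ends ---
n = 51
T_left, T_right = 100.0, 20.0           # degC boundary temperatures (assumed)
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0               # Dirichlet boundary conditions
b[0], b[-1] = T_left, T_right
for i in range(1, n - 1):               # central finite differences: T[i-1] - 2 T[i] + T[i+1] = 0
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
T = np.linalg.solve(A, b)

# --- physics 2: thermal stress in a fully constrained bar, driven by the temperature field ---
alpha, E, T_ref = 12e-6, 200e9, 20.0    # 1/degC, Pa, degC (assumed steel-like values)
stress = E * alpha * (T - T_ref)        # sigma = E * alpha * (T - T_ref), node by node
print(round(T.mean(), 1), round(stress.max() / 1e6, 1))   # mean temperature (degC) and peak stress (MPa)
```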
Challenges of multiphysics simulation
Generally speaking, multiphysics simulation is much harder than simulating the individual physical processes separately.
The main extra issue is how to integrate the multiple aspects of the processes with proper handling of the interactions among them.
This issue becomes especially difficult when different types of numerical methods are used for the simulation of the individual physical aspects.
For example, this is the case when simulating a fluid–structure interaction problem with a typical Eulerian finite volume method for the flow and a Lagrangian finite element method for the structural dynamics.
See also
Finite difference time-domain method
References
Susan L. Graham, Marc Snir, and Cynthia A. Patterson (Editors), Getting Up to Speed: The Future of Supercomputing, Appendix D. The National Academies Press, Washington DC, 2004.
Paul Lethbridge, Multiphysics Analysis, p. 26, The Industrial Physicist, Dec 2004/Jan 2005.
Numerical analysis
Computational physics | Multiphysics simulation | [
"Physics",
"Mathematics"
] | 473 | [
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
2,789,297 | https://en.wikipedia.org/wiki/Neutron%20economy | Neutron economy is defined as the ratio of excess neutron production divided by the rate of fission. The numbers are a weighted average based primarily on the energies of the neutrons.
Nuclear fission is a process in which the nuclei of atoms are split apart. Among the various particles released in this process are high-energy neutrons with energies spread over the neutron spectrum. Those neutrons may cause other nuclei to undergo fission, leading to the possibility of a chain reaction. However, the neutrons can only cause another fission under certain conditions based on their energy; high-energy, or "fast", neutrons will often fly right through another nucleus without causing fission. The chance that a neutron will be captured and cause another fission increases greatly once it has slowed to roughly the thermal energy of the surrounding material, at which point it is known as a "thermal neutron". In order to maintain a chain reaction in a nuclear reactor, a neutron moderator is used to slow the neutrons down. This moderator is often used as the coolant that is used for energy extraction as well, and the most common moderator is water. The neutrons also slow due to elastic and inelastic collisions with fuel and other materials in the reactor.
A fission reactor is based on the idea of maintaining criticality, where every fission event leads to another fission event, no more and no less. As fission of uranium releases two or three neutrons, this means some of the neutrons must be removed as part of the overall process. Some will be lost purely due to geometry; those released travelling outward from the outer edge of the fuel mass, for instance, will not have a chance to cause fission. Others will be absorbed through various processes in the mass, and still others will be deliberately absorbed by control rods or similar devices to maintain the correct overall balance. The process of moderating the neutrons almost always leads to some of them being absorbed as well.
Neutron economy is a measure of the number of neutrons being released that can cause fission compared to the number needed to maintain the chain reaction. This is not simply an accounting of the total number of neutrons, as it also includes a weighting based on the energy. Thus, remaining high-energy neutrons are not a major part of the "overall economy" as they do not maintain the chain reaction. The quantity that indicates how much the neutron economy is out of balance is given the term reactivity. If a reactor is exactly critical—that is, the neutron production is exactly equal to neutron destruction—the reactivity is zero. If the reactivity is positive, the reactor is supercritical. If the reactivity is negative, the reactor is subcritical.
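A small illustration of this bookkeeping, using the conventional relation between the effective multiplication factor k (neutrons produced per neutron lost) and reactivity, rho = (k - 1) / k; the sample k values below are hypothetical:

def reactivity(k: float) -> float:
    """Reactivity from the effective multiplication factor k: rho = (k - 1) / k."""
    return (k - 1.0) / k

for k in (0.98, 1.00, 1.02):   # hypothetical subcritical, critical, supercritical cases
    rho = reactivity(k)
    state = "critical" if rho == 0 else ("supercritical" if rho > 0 else "subcritical")
    print(f"k = {k:.2f}  ->  reactivity = {rho:+.4f}  ({state})")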
The term "neutron economy" is used not just for the instantaneous reactivity of a reactor, but also to describe the overall efficiency of a nuclear reactor design. Common reactor designs using conventional water as the coolant and moderator generally have poor relative neutron economies because the water will absorb some of the thermal neutrons, reducing the number available to keep the reaction going. In contrast, heavy water already has an extra neutron, and the same reaction generally causes it to be released, meaning that a reactor moderated with heavy water does not absorb neutrons and thus has a better neutron economy. Reactors with high neutron economies have more "leftover neutrons" which can be used for other purposes, like breeding additional fuel or causing sub-critical fission in nuclear waste to "burn off" some of the more radioactive components.
See also
Dollar (reactivity)
Breeder reactor
References
Economy
Nuclear technology | Neutron economy | [
"Physics"
] | 713 | [
"Nuclear technology",
"Nuclear physics"
] |
2,790,212 | https://en.wikipedia.org/wiki/Roof%20pitch | Roof pitch is the steepness of a roof expressed as a ratio of inch(es) rise per horizontal foot (or their metric equivalent), or as the angle in degrees its surface deviates from the horizontal. A flat roof has a pitch of zero in either instance; all other roofs are pitched.
A roof that rises 3 inches per foot, for example, would be described as having a pitch of 3 (or “3 in 12”).
Description
The pitch of a roof is its vertical 'rise' over its horizontal 'run’ (i.e. its span), also known as its 'slope'.
In the imperial measurement systems, "pitch" is usually expressed with the rise first and run second (in the US, run is held to number 12; e.g., 3:12, 4:12, 5:12). In metric systems either the angle in degrees or rise per unit of run, expressed as a '1 in _' slope (where a '1 in 1' equals 45°) is used. Where convenient, a reduced ratio is used (e.g., a '3 in 4' slope, for a '9 in 12' or '1 in 1 1/3').
Selection
Considerations involved in selecting a roof pitch include availability and cost of materials, aesthetics, ease or difficulty of construction, climatic factors such as wind and potential snow load, and local building codes.
The primary purpose of pitching a roof is to redirect wind and precipitation, whether in the form of rain or snow. Thus, pitch is typically greater in areas of high rain or snowfall, lower in areas of high wind. The steep roof of the tropical Papua New Guinea longhouse, for example, sweeps almost to the ground. The high, steeply-pitched gabled roofs of Northern Europe are typical in regions of heavy snowfall. In some areas building codes require a minimum slope. Buffalo, New York and Montreal, Quebec, Canada, specify 6 in 12, a pitch of approximately 26.6 degrees.
A flat roof includes pitches as low as 1/2:12 to 2:12 (1 in 24 to 1 in 6), which are barely capable of properly shedding water. Such low-slope roofs (up to 4:12 (1 in 3)) require special materials and techniques to avoid leaks. Conventional describes pitches from 4:12 (1 in 3) to 9:12 (3 in 4). Steep is above 9:12 (3 in 4), up to 21:12 (7 in 4), and may require extra fasteners.
US convention is to use whole numbers when even (e.g. "three in twelve") or the nearest single or two-digit fraction when not (e.g. either "five and a half in twelve" or "five point five in twelve", each expressed numerically as 5 1/2:12 and 5.5:12) respectively.
Definitions vary on when a roof is considered pitched. In degrees, 10° (2 in 12 or 1 in 6) is considered by at least one reference a minimum.
In trigonometric terms, the exact roof slope in degrees is given by the arctangent of the rise over the run. For example: arctan(3/12) ≈ 14.0°.
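The arctangent relation is easy to apply directly; the following sketch (with illustrative rise values) converts an imperial "rise in 12" pitch to degrees:

import math

def pitch_to_degrees(rise: float, run: float = 12.0) -> float:
    """Roof slope in degrees for a given rise per run (run defaults to 12 inches)."""
    return math.degrees(math.atan2(rise, run))

for rise in (3, 6, 9, 12):               # e.g. 3:12, 6:12, 9:12, 12:12
    print(f"{rise}:12 pitch = {pitch_to_degrees(rise):.1f} degrees")
# 3:12 -> 14.0, 6:12 -> 26.6, 9:12 -> 36.9, 12:12 -> 45.0 degrees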
Framing carpenters cut rafters on an angle to "pitch" a roof. Lower pitched roof styles allow for lower structures with a corresponding reduction in framing and sheathing materials.
Historic expressions of roof pitch
Historically, roof pitch was designated in two other ways: A ratio of the ridge height to the width of the building (span) and as a ratio of the rafter length to the width of the building.
Commonly used roof pitches were given names such as:
Greek: the ridge height is to the span (an angle of 12.5° to 16°);
Roman: the ridge height is to the span (an angle of 24° to 34°);
Common: the rafter length is the span (about 48°);
Gothic: the rafters equal the span (60°); and
Elizabethan: the rafters are longer than the span (more than 60°).
See also
List of roof shapes
Shed roof
Flat roof
Types of Pitched roof
References
External links
How to determine roof pitch
Roof pitch calculator
Online roof pitch calculator
Building
Building engineering
Roof construction | Roof pitch | [
"Engineering"
] | 878 | [
"Building engineering",
"Building",
"Construction",
"Civil engineering",
"Roof construction",
"Architecture"
] |
2,790,407 | https://en.wikipedia.org/wiki/Uranium%28IV%29%20sulfate | Uranium(IV) sulfate (U(SO4)2) is a water-soluble salt of uranium. It is a very toxic compound. Uranium sulfate minerals commonly are widespread around uranium bearing mine sites, where they usually form during the evaporation of acid sulfate-rich mine tailings which have been leached by oxygen-bearing waters. Uranium sulfate is a transitional compound in the production of uranium hexafluoride. It was also used to fuel aqueous homogeneous reactors.
Preparation
Uranyl sulfate in solution is readily photochemically reduced to uranium(IV) sulfate. The photoreduction is carried out in the sun and requires the addition of ethanol as a reducing agent. Uranium(IV) crystallizes or is precipitated by ethanol in excess. It can be obtained with different degrees of hydration. U(SO4)2 can also be prepared through electrochemical reduction of U(VI) and the addition of sulfates. Reduction of U(VI) to U(IV) occurs naturally through a variety of means, including through the actions of microorganisms. Formation of U(SO4)2 is an entropically and thermodynamically favorable reaction.
Mining and presence in the environment
In-situ leaching (ISL), a widespread technique used to mine uranium, is implicated in the artificial increase of uranium sulfate compounds. ISL was the most widely used method to mine uranium in the United States during the 1990s. The method involves pumping an extraction liquid (either sulfuric acid or an alkaline carbonate solution) into an ore deposit, where it complexes with the uranium, removing the liquid and purifying the uranium. This synthetic addition of sulfuric acid unnaturally raises the abundance of uranium sulfate complexes at the site. The lower pH caused by the introduction of acid increases the solubility of U(IV), which is typically relatively insoluble and precipitates out of solution at neutral pH. Oxidation states for uranium range from U3+ to U6+; U(III) and U(V) are rarely found, while U(VI) and U(IV) predominate. U(VI) forms stable aqueous complexes and is thus fairly mobile. Preventing the spread of toxic uranium compounds from mining sites often involves reduction of U(VI) to the far less soluble U(IV). The presence of sulfuric acid and sulfates prevents this sequestration, however, by both lowering the pH and through the formation of uranium salts. U(SO4)2 is soluble in water, and thus far more mobile. Uranium sulfate complexes also form quite readily.
Environmental and health effects
U(IV) is much less soluble, and thus less environmentally mobile, than U(VI), which also forms sulfate compounds such as UO2(SO4). Bacteria which are able to reduce uranium have been proposed as a means of eliminating U(VI) from contaminated areas, such as mine tailings and nuclear weapons manufacture sites. Contamination of groundwater by uranium is considered a serious health risk, and can be damaging to the environment as well. Several species of sulfate reducing bacteria also have the ability to reduce uranium. The ability to clear the environment of both sulfate (which solubilizes reduced uranium) and mobile U(VI) makes bioremediation of ISL mining sites a possibility.
Related compounds
U(SO4)2 is a semi-soluble compound and exists in a variety of hydration states, with up to nine coordinating waters.
U(IV) can have up to five coordinating sulfates, although nothing above U(SO4)2 has been significantly described. Kinetics data for U(SO4)2+ and U(SO4)2 reveal that the bidentate complex is strongly favored thermodynamically, with a reported K0 of 10.51, as compared to K0=6.58 for the monodentate complex. U(IV) is much more stable as a sulfate compound, particularly as U(SO4)2. Běhounekite is a recently (2011) described U(IV) mineral with the chemical composition U(SO4)2 (H2O)4. The uranium center has eight oxygen ligands, four provided by the sulfate groups and four from the water ligands. U(SO4)2 (H2O)4 forms short, green crystals. Běhounekite is the first naturally occurring U(IV) sulfate to be described.
References
Merkel, B.; Hasche-Berger, A. (2008). Uranium, Mining and Hydrogeology. Springer.
Hennig, C.; Schmeide, K.; Brendler, V.; Moll, H.; Tsushima, S.; Scheinost, A.C. (2007). “EXAFS Investigation of U(VI), U(IV), and Th(IV) Sulfato Complexes in Aqueous Solution”. Inorganic Chemistry 46 (15): 5882–5892.
Cardenas, E.; Watson, D.; Gu, B.; Ginder-Vogel, M.; Kitanidis, P.K.; Jardin, P.M.; Wu, W.; Leigh, M.B.; Carley, J.; Carroll, S.; Gentry, T.; Luoe, J.; Zhou, J.; Criddle, C.S.; Marsh, T.L.; Tiedje, J.M. (2010). “Significant Association between Sulfate-Reducing Bacteria and Uranium-Reducing Microbial Communities as Revealed by a Combined Massively Parallel Sequencing-Indicator Species Approach”. Applied and Environmental Microbiology 76 (20): 6778-6786.
Converse, B.J.; Wua, T.; Findlay, R.H.; Roden, E.E. (2013). “U(VI) Reduction in Sulfate-Reducing Subsurface Sediments Amended with Ethanol or Acetate”. Applied and Environmental Microbiology 79 (13): 4173-4177.
Day, R. A.; Wilhite, R. N.; Hamilton, F.R. (1955). “Stability of Complexes of Uranium(IV) with Chloride, Sulfate and Thiocyanate”. Journal of the American Chemical Society 77 (12):3180-3182.
Hennig, C.; Kraus, W.; Emmerling, F.; Ikeda, A.;. Scheinost, A.C. (2008). “Coordination of a Uranium(IV) Sulfate Monomer in an Aqueous Solution and in the Solid State”. Inorganic Chemistry 47 (5): 1634-1638.
Plášil, J.; Fejfarová, K.; Novák, M.; Dušek, M.; Škoda, R.; Hloušek, J.; Čejka, J.; Majzlan, J.; Sejkora, J.; Machovič, V.; Talla, D. (2011). “Běhounekite, U(SO4)2(H2O)4, from Jáchymov (St Joachimsthal), Czech Republic: the first natural U4+ sulphate”. Mineralogical Magazine 75 (6): 2739-2753.
Mudd, G.M. (2001). “Critical Review of Acid in situ leach uranium mining: 1. USA and Australia”. Environmental Geology 41 (3-4): 390-403.
Závodská, L.; Kosorínová, E.; Scerbáková, L.; Lesny, J. (2008). “Environmental Chemistry of Uranium”. HV .
Uranium(IV) compounds
Sulfates | Uranium(IV) sulfate | [
"Chemistry"
] | 1,636 | [
"Sulfates",
"Salts"
] |
2,791,110 | https://en.wikipedia.org/wiki/Anonymous%20matching | Anonymous matching is a matchmaking method facilitated by computer databases, in which each user confidentially selects people they are interested in dating and the computer identifies and reports matches to pairs of users who share a mutual attraction. Protocols for anonymous matchmaking date back to the 1980s, and one of the earliest papers on the topic is by Baldwin and Gramlich, published in 1985. From a technical perspective, the problem and solution are trivial and likely predate even this paper. The problem becomes interesting and requires more sophisticated cryptography when the matchmaker (central server) isn't trusted.
The purpose of the protocol is to allow people to initiate romantic relationships while avoiding the risk of embarrassment, awkwardness, and other negative consequences associated with unwanted romantic overtures and rejection. The general concept was patented on September 7, 1999, by David J. Blumberg and DoYouDo chief executive officer Gil S. Sudai, but several websites were already employing the methodology by that date, and thus apparently were allowed to continue using it. United States Patent 5,950,200 points out several potential flaws in traditional courtship and in conventional dating systems in which strangers meet online, promoting anonymous matching of friends and acquaintances as a better alternative:
Implementations
Some of the most notable implementations of the idea have been:
Baldwin and Gramlich, as cited above.
eCRUSH, launched on Valentine's Day, 1999, is the most successful implementation of the concept. Targeted to the teen market, it has more than 1.6 million users and claims more than 600,000 legitimate matches.
DoYOU2.com. The website's owner, DoYouDo, Inc., was incorporated 23 September 1999 and acquired by MatchNet in September 2000 in exchange for stock valued at $1,820,000. According to MatchNet's 2003 annual report, "The acquisition was made primarily for the purpose of acquiring the patent on this business model for future development."
The LiveJournal Secret Crush meme. In mid-2003, a company named Anonymous Consulting created an online quiz called "Secret Crush Meme," which would provide each user with a chart showing who on their LiveJournal friends list had a crush on them, as well as what "kind" of crush they had (public, secret or ex). The quiz was designed to harvest crushes between LiveJournal users (hence the elaborate disclaimer). In October 2003, a new quiz, called "Secret Crush Meme 2: The Revenge of Secret Crush Meme," was released, which showed users how many crushes other users had on them, as well as what kind. There was a catch: For four dollars, the company would tell someone who had crushes on them. This created controversy between couples who listed other users as crushes, as well as among people who received ex-crushes when they felt they should have gotten public crushes, and much debate over the ethics involved. Finally, a small Perl script was written and distributed to poison the database. Faced with attempts to poison the database from many different IP addresses, the project was shut down.
SecretAdmirer.com. This service claims 100,000 successful matches. Salon.com called SecretAdmirer.com "the grandfather of the concept, launched in 1997." Its methodology is different from the others in that, in order for a match to occur, the recipient must send emails out, rather than simply place names on a confidential list.
These commercial implementations all trust the central server, simplifying the solution and implementation drastically. Baldwin and Gramlich solved this case in 1985, as well as the more notable and challenging case in which the central server isn't trusted.
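A minimal sketch of the trusted-server case described above: each user confidentially submits a list of crushes and the matchmaker reports only mutual pairs. The user names and data structures are hypothetical, and this deliberately ignores the harder untrusted-server setting solved cryptographically by Baldwin and Gramlich:

def mutual_matches(crushes: dict[str, set[str]]) -> set[frozenset[str]]:
    """Return every pair {a, b} where a listed b and b listed a."""
    matches = set()
    for person, liked in crushes.items():
        for other in liked:
            if person in crushes.get(other, set()):
                matches.add(frozenset((person, other)))
    return matches

# Hypothetical confidential submissions, held only by the trusted matchmaker.
submissions = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"dave"},
}
print(mutual_matches(submissions))  # {frozenset({'alice', 'bob'})} -- only mutual interest is revealed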
Viral marketing
eCRUSH, DoYOU2.com, the LiveJournal Secret Crush meme, and SecretAdmirer.com are examples of anonymous matching services using viral marketing to increase their membership. Users are encouraged to send an anonymous email to their crush so that they will visit the site and enter their own crushes, facilitating a match. In the case of SecretAdmirer.com, the email is mandatory; this represents a more aggressive type of viral marketing.
At least one site, CrushLink, was accused by eCRUSH of sending spam emails disguised as crush notifications. According to a Salon article, "What makes SomeoneLikesYou and Crushlink different from the rest of the sites in the genre is this: they bait hopeful visitors to hand over as many e-mail addresses as possible by trading clues for e-mail addresses". Both sites are now defunct.
References
R.W. Baldwin and W.C. Gramlich, Cryptographic Protocol for Trustable Match Making, IEEE Security and Privacy, 1985.
Lester, Amelia A. and Borja, Anais A.: The Rise and Success of Sparknotes, The Harvard Crimson, 18 October 2001.
Mieszkowski, Katharine: The bot who loved me, 7 August 2002.
United States Patent 5,950,200: Method and apparatus for detection of reciprocal interests or feelings and subsequent notification, United States Patent and Trademark Office, 7 September 1999.
Matchmaking
Cryptography | Anonymous matching | [
"Mathematics",
"Engineering"
] | 1,043 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
1,364,502 | https://en.wikipedia.org/wiki/Belt%20%28mechanical%29 | A belt is a loop of flexible material used to link two or more rotating shafts mechanically, most often parallel. Belts may be used as a source of motion, to transmit power efficiently or to track relative movement. Belts are looped over pulleys and may have a twist between the pulleys, and the shafts need not be parallel.
In a two pulley system, the belt can either drive the pulleys normally in one direction (the same if on parallel shafts), or the belt may be crossed, so that the direction of the driven shaft is reversed (the opposite direction to the driver if on parallel shafts). The belt drive can also be used to change the speed of rotation, either up or down, by using different sized pulleys.
As a source of motion, a conveyor belt is one application where the belt is adapted to carry a load continuously between two points.
History
The mechanical belt drive, using a pulley machine, was first mentioned in the text of the Dictionary of Local Expressions by the Han Dynasty philosopher, poet, and politician Yang Xiong (53–18 BC) in 15 BC, used for a quilling machine that wound silk fibres onto bobbins for weavers' shuttles. The belt drive is an essential component of the invention of the spinning wheel. The belt drive was not only used in textile technologies, it was also applied to hydraulic-powered bellows dated from the 1st century AD.
Power transmission
Belts are the cheapest utility for power transmission between shafts that may not be axially aligned. Power transmission is achieved by purposely designed belts and pulleys. The variety of power transmission needs that can be met by a belt-drive transmission system are numerous, and this has led to many variations on the theme. Belt drives run smoothly and with little noise, and provide shock absorption for motors, loads, and bearings when the force and power needed changes. A drawback to belt drives is that they transmit less power than gears or chain drives. However, improvements in belt engineering allow use of belts in systems that formerly only allowed chain drives or gears.
Power transmitted between a belt and a pulley is expressed as the product of the difference of tension and the belt velocity:
P = (T₁ − T₂)v
where T₁ and T₂ are the tensions in the tight side and slack side of the belt respectively. They are related by the belt friction (capstan) equation
T₁/T₂ = e^(μα)
where μ is the coefficient of friction, and α is the angle (in radians) subtended by the contact surface at the centre of the pulley.
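A small numerical illustration of these two relations, assuming example values for the slack-side tension, friction coefficient, wrap angle, and belt speed (none of which come from the article):

import math

def belt_power(t_slack: float, mu: float, wrap_angle: float, speed: float) -> float:
    """Power transmitted by a belt: P = (T1 - T2) * v, with T1/T2 = exp(mu * alpha)."""
    t_tight = t_slack * math.exp(mu * wrap_angle)   # capstan / belt friction equation
    return (t_tight - t_slack) * speed

# Assumed example: 200 N slack tension, mu = 0.3, 165 degree wrap, 10 m/s belt speed.
p = belt_power(t_slack=200.0, mu=0.3, wrap_angle=math.radians(165), speed=10.0)
print(f"Transmitted power ≈ {p / 1000:.2f} kW")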
Power transmission loss form
Pros and cons
Belt drives are simple, inexpensive, and do not require axially aligned shafts. They help protect machinery from overload and jam, and damp and isolate noise and vibration. Load fluctuations are shock-absorbed (cushioned). They need no lubrication and minimal maintenance. They have high efficiency (90–98%, usually 95%), high tolerance for misalignment, and are of relatively low cost if the shafts are far apart. Clutch action can be achieved by shifting the belt to a free turning pulley or by releasing belt tension. Different speeds can be obtained by stepped or tapered pulleys.
The angular-velocity ratio may not be exactly constant or equal to that of the pulley diameters, due to slip and stretch. However, this problem can be largely solved by the use of toothed belts. Working temperatures range from . Adjustment of centre distance or addition of an idler pulley is crucial to compensate for wear and stretch.
Flat belts
Flat belts were widely used in the 19th and early 20th centuries in line shafting to transmit power in factories. They were also used in countless farming, mining, and logging applications, such as bucksaws, sawmills, threshers, silo blowers, conveyors for filling corn cribs or haylofts, balers, water pumps (for wells, mines, or swampy farm fields), and electrical generators. Flat belts are still used today, although not nearly as much as in the line-shaft era. The flat belt is a simple system of power transmission that was well suited for its day. It can deliver high power at high speeds (373 kW at 51 m/s; 115 mph), in cases of wide belts and large pulleys. Wide-belt-large-pulley drives are bulky, consuming much space while requiring high tension, leading to high loads, and are poorly suited to close-centers applications. V-belts have mainly replaced flat belts for short-distance power transmission; and longer-distance power transmission is typically no longer done with belts at all. For example, factory machines now tend to have individual electric motors.
Because flat belts tend to climb towards the higher side of the pulley, pulleys were made with a slightly convex or "crowned" surface (rather than flat) to allow the belt to self-center as it runs. Flat belts also tend to slip on the pulley face when heavy loads are applied, and many proprietary belt dressings were available that could be applied to the belts to increase friction, and so power transmission.
Flat belts were traditionally made of leather or fabric. Early flour mills in Ukraine had leather belt drives. After World War I, there was such a shortage of shoe leather that people cut up the belt drives to make shoes. Selling shoes was more profitable than selling flour for a time. Flour milling soon came to a standstill and bread prices rose, contributing to famine conditions. Leather drive belts were put to another use during the Rhodesian Bush War (1964–1979): To protect riders of cars and buses from land mines, layers of leather belt drives were placed on the floors of vehicles in danger zones. Today most belt drives are made of rubber or synthetic polymers. Grip of leather belts is often better if they are assembled with the hair side (outer side) of the leather against the pulley, although some belts are instead given a half-twist before joining the ends (forming a Möbius strip), so that wear can be evenly distributed on both sides of the belt. Belt ends are joined by lacing the ends together with leather thonging (the oldest of the methods), steel comb fasteners and/or lacing, or by gluing or welding (in the case of polyurethane or polyester). Flat belts were traditionally jointed, and still usually are, but they can also be made with endless construction.
Rope drives
In the mid 19th century, British millwrights discovered that multi-grooved pulleys connected by ropes outperformed flat pulleys connected by leather belts. Wire ropes were occasionally used, but cotton, hemp, manila hemp and flax rope saw the widest use. Typically, the rope connecting two pulleys with multiple V-grooves was spliced into a single loop that traveled along a helical path before being returned to its starting position by an idler pulley that also served to maintain the tension on the rope. Sometimes, a single rope was used to transfer power from one multiple-groove drive pulley to several single- or multiple-groove driven pulleys in this way.
In general, as with flat belts, rope drives were used for connections from stationary engines to the jack shafts and line shafts of mills, and sometimes from line shafts to driven machinery. Unlike leather belts, however, rope drives were sometimes used to transmit power over relatively long distances. Over long distances, intermediate sheaves were used to support the "flying rope", and in the late 19th century, this was considered quite efficient.
Round belts
Round belts are a circular cross section belt designed to run in a pulley with a 60 degree V-groove. Round grooves are only suitable for idler pulleys that guide the belt, or when (soft) O-ring type belts are used. The V-groove transmits torque through a wedging action, thus increasing friction. Nevertheless, round belts are for use in relatively low torque situations only and may be purchased in various lengths or cut to length and joined, either by a staple, a metallic connector (in the case of hollow plastic), gluing or welding (in the case of polyurethane). Early sewing machines utilized a leather belt, joined either by a metal staple or glued, to great effect.
Spring belts
Spring belts are similar to rope or round belts but consist of a long steel helical spring. They are commonly found on toy or small model engines, typically steam engines driving other toys or models or providing a transmission between the crankshaft and other parts of a vehicle. The main advantage over rubber or other elastic belts is that they last much longer under poorly controlled operating conditions. The distance between the pulleys is also less critical. Their main disadvantage is that slippage is more likely due to the lower coefficient of friction. The ends of a spring belt can be joined either by bending the last turn of the helix at each end by 90 degrees to form hooks, or by reducing the diameter of the last few turns at one end so that it "screws" into the other end.
V belts (also style V-belts, vee belts, or, less commonly, wedge rope) solved the slippage and alignment problem. It is now the basic belt for power transmission. They provide the best combination of traction, speed of movement, load of the bearings, and long service life. They are generally endless, and their general cross-section shape is roughly trapezoidal (hence the name "V"). The "V" shape of the belt tracks in a mating groove in the pulley (or sheave), with the result that the belt cannot slip off. The belt also tends to wedge into the groove as the load increases—the greater the load, the greater the wedging action—improving torque transmission and making the V-belt an effective solution, needing less width and tension than flat belts. V-belts trump flat belts with their small center distances and high reduction ratios. The preferred center distance is larger than the largest pulley diameter, but less than three times the sum of both pulleys. Optimal speed range is . V-belts need larger pulleys for their thicker cross-section than flat belts.
For high-power requirements, two or more V-belts can be joined side-by-side in an arrangement called a multi-V, running on matching multi-groove sheaves. This is known as a multiple-V-belt drive (or sometimes a "classical V-belt drive").
V-belts may be homogeneously rubber or polymer throughout, or there may be fibers embedded in the rubber or polymer for strength and reinforcement. The fibers may be of textile materials such as cotton, polyamide (such as nylon) or polyester or, for greatest strength, of steel or aramid (such as Technora, Twaron or Kevlar).
When an endless belt does not fit the need, jointed and link V-belts may be employed. Most models offer the same power and speed ratings as equivalently-sized endless belts and do not require special pulleys to operate. A link v-belt is a number of polyurethane/polyester composite links held together, either by themselves, such as Fenner Drives' PowerTwist, or Nu-T-Link (with metal studs). These provide easy installation and superior environmental resistance compared to rubber belts and are length-adjustable by disassembling and removing links when needed.
History of V-belts
Trade journal coverage of V-belts in automobiles from 1916 mentioned leather as the belt material, and mentioned that the V angle was not yet well standardized. The endless rubber V-belt was developed in 1917 by Charles C. Gates of the Gates Rubber Company. Multiple-V-belt drive was first arranged a few years later by Walter Geist of the Allis-Chalmers corporation, who was inspired to replace the single rope of multi-groove-sheave rope drives with multiple V-belts running parallel. Geist filed for a patent in 1925, and Allis-Chalmers began marketing the drive under the "Texrope" brand; the patent was granted in 1928 (). The "Texrope" brand still exists, although it has changed ownership and no longer refers to multiple-V-belt drive alone.
Multi-groove belts
A multi-groove, V-ribbed, or polygroove belt is made up of usually between 3 and 24 V-shaped sections alongside each other. This gives a thinner belt for the same drive surface, thus it is more flexible, although often wider. The added flexibility offers an improved efficiency, as less energy is wasted in the internal friction of continually bending the belt. In practice this gain of efficiency causes a reduced heating effect on the belt, and a cooler-running belt lasts longer in service. Belts are commercially available in several sizes, with usually a 'P' (sometimes omitted) and a single letter identifying the pitch between grooves. The 'PK' section with a pitch of 3.56 mm is commonly used for automotive applications.
A further advantage of the polygroove belt that makes them popular is that they can run over pulleys on the ungrooved back of the belt. Though this is sometimes done with V-belts with a single idler pulley for tensioning, a polygroove belt may be wrapped around a pulley on its back tightly enough to change its direction, or even to provide a light driving force.
Any V-belt's ability to drive pulleys depends on wrapping the belt around a sufficient angle of the pulley to provide grip. Where a single-V-belt is limited to a simple convex shape, it can adequately wrap at most three or possibly four pulleys, so can drive at most three accessories. Where more must be driven, such as for modern cars with power steering and air conditioning, multiple belts are required. As the polygroove belt can be bent into concave paths by external idlers, it can wrap any number of driven pulleys, limited only by the power capacity of the belt.
This ability to bend the belt at the designer's whim allows it to take a complex or "serpentine" path. This can assist the design of a compact engine layout, where the accessories are mounted more closely to the engine block and without the need to provide movable tensioning adjustments. The entire belt may be tensioned by a single idler pulley.
The nomenclature used for belt sizes varies by region and trade. An automotive belt with the number "740K6" or "6K740" indicates a belt in length, 6 ribs wide, with a rib pitch of (a standard thickness for a K series automotive belt would be 4.5mm). A metric equivalent would be usually indicated by "6PK1880" whereby 6 refers to the number of ribs, PK refers to the metric PK thickness and pitch standard, and 1880 is the length of the belt in millimeters.
Ribbed belt
A ribbed belt is a power transmission belt featuring lengthwise grooves. It operates from contact between the ribs of the belt and the grooves in the pulley. Its single-piece structure is reported to offer an even distribution of tension across the width of the pulley where the belt is in contact, a power range up to 600 kW, a high speed ratio, serpentine drives (possibility to drive off the back of the belt), long life, stability and homogeneity of the drive tension, and reduced vibration. The ribbed belt may be fitted on various applications: compressors, fitness bikes, agricultural machinery, food mixers, washing machines, lawn mowers, etc.
Film belts
Though often grouped with flat belts, they are actually a different kind. They consist of a very thin belt (0.5–15 millimeters or 100–4000 micrometres) strip of plastic and occasionally rubber. They are generally intended for low-power (less than 10 watts), high-speed uses, allowing high efficiency (up to 98%) and long life. These are seen in business machines, printers, tape recorders, and other light-duty operations.
Timing belts
Timing belts (also known as toothed, notch, cog, or synchronous belts) are a positive transfer belt and can track relative movement. These belts have teeth that fit into a matching toothed pulley. When correctly tensioned, they have no slippage, run at constant speed, and are often used to transfer direct motion for indexing or timing purposes (hence their name). They are often used instead of chains or gears, so there is less noise and a lubrication bath is not necessary. Camshafts of automobiles, miniature timing systems, and stepper motors often utilize these belts. Timing belts need the least tension of all belts and are among the most efficient. They can bear up to at speeds of .
Timing belts with a helical offset tooth design are available. The helical offset tooth design forms a chevron pattern and causes the teeth to engage progressively. The chevron pattern design is self-aligning and does not make the noise that some timing belts make at certain speeds, and is more efficient at transferring power (up to 98%).
The advantages of timing belts include clean operation, energy efficiency, low maintenance, low noise, non slip performance, versatile load and speed capabilities.
Disadvantages include a relatively high purchase cost, the need for specially fabricated toothed pulleys, less protection from overloading, jamming, and vibration due to their continuous tension cords, the lack of clutch action (only possible with friction-drive belts), and the fixed lengths, which do not allow length adjustment (unlike link V-belts or chains).
Specialty belts
Belts normally transmit power on the tension side of the loop. However, designs for continuously variable transmissions exist that use belts that are a series of solid metal blocks, linked together as in a chain, transmitting power on the compression side of the loop.
Rolling roads
Belts used for rolling roads for wind tunnels can be capable of .
Standards for use
The open belt drive has parallel shafts rotating in the same direction, whereas the cross-belt drive also bears parallel shafts but rotate in opposite direction. The former is far more common, and the latter not appropriate for timing and standard V-belts unless there is a twist between each pulley so that the pulleys only contact the same belt surface. Nonparallel shafts can be connected if the belt's center line is aligned with the center plane of the pulley. Industrial belts are usually reinforced rubber but sometimes leather types. Non-leather, non-reinforced belts can only be used in light applications.
The pitch line is the line between the inner and outer surfaces that is neither subject to tension (like the outer surface) nor compression (like the inner). It is midway through the surfaces in film and flat belts and dependent on cross-sectional shape and size in timing and V-belts. Standard reference pitch diameter can be estimated by taking the average of the tooth tip diameter and the tooth base diameter. The angular speed is inversely proportional to size, so the larger the wheel, the lower its angular velocity, and vice versa. Actual pulley speeds tend to be 0.5–1% less than generally calculated because of belt slip and stretch. In timing belts, the inverse ratio of the numbers of pulley teeth gives the exact speed ratio.
The speed of the belt is the pulley's circumference (based on its pitch diameter) multiplied by its rotational speed:
v = π D N
where D is the pitch diameter of the pulley and N is its rotational speed.
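A short sketch of these relations under assumed pulley sizes and motor speed: belt speed from the driving pulley, and driven-pulley speed from the inverse proportionality to diameter described above (slip and stretch ignored):

import math

def belt_speed(pitch_diameter_m: float, rpm: float) -> float:
    """Linear belt speed in m/s for a pulley of given pitch diameter turning at rpm."""
    return math.pi * pitch_diameter_m * rpm / 60.0

drive_d, driven_d, drive_rpm = 0.10, 0.25, 1750.0   # assumed example values
v = belt_speed(drive_d, drive_rpm)
driven_rpm = drive_rpm * drive_d / driven_d          # angular speed inversely proportional to diameter
print(f"belt speed ≈ {v:.2f} m/s, driven pulley ≈ {driven_rpm:.0f} rpm (ignoring ~0.5-1% slip)")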
International use standards
Standards include:
ISO 9563: This standard specifies requirements and test methods for endless power transmission V-belts and V-ribbed belts.
ISO 4184: This standard specifies the dimensions of classical and narrow V-belts for general use.
ISO 9981: This standard deals with the dimensions of rubber synchronous belt drives.
ISO 9982: This standard covers the dimensions of polyurethane synchronous belt drives.
DIN 22101: This standard covers the design principles for belt conveyors used in bulk material handling, including safety requirements and testing methods.
ASME B29.1: This standard specifies the dimensions, tolerances, and quality requirements for roller chain drives, which include belts and sprockets.
ANSI/RMA IP-20 is a standard developed by the American National Standards Institute (ANSI) and the Rubber Manufacturers Association (RMA) that focuses on elastomeric belts used in industrial applications. This standard covers important aspects such as dimensions and tolerances, ensuring that the belts perform reliably and efficiently in various industrial settings.
SAE J1459 is a standard developed by the Society of Automotive Engineers (SAE) that focuses on automotive V-belts and V-ribbed belts. These belts are used in various automotive applications, such as power transmission between the engine and different accessories, including the alternator, power steering pump, air conditioning compressor, and water pump. The standard specifies test procedures, performance requirements, and dimensions to ensure the belts are reliable, durable, and suitable for automotive use.
ASTM D378 is a standard developed by the American Society for Testing and Materials (ASTM), which focuses on the testing of conveyor belts used in various industries for specific applications. Conveyor belts are essential for material handling and transportation in industries such as mining, construction, agriculture, and manufacturing. ASTM D378 covers the testing methods to evaluate conveyor belts for performance characteristics, such as fire resistance and oil resistance, ensuring that they meet safety and operational requirements.
Selection criteria
Belt drives are built under the following required conditions: speeds of and power transmitted between drive and driven unit; suitable distance between shafts; and appropriate operating conditions. The equation for power is
Factors of power adjustment include speed ratio; shaft distance (long or short); type of drive unit (electric motor, internal combustion engine); service environment (oily, wet, dusty); driven unit loads (jerky, shock, reversed); and pulley-belt arrangement (open, crossed, turned). These are found in engineering handbooks and manufacturer's literature. When corrected, the power is compared to rated powers of the standard belt cross-sections at particular belt speeds to find a number of arrays that perform best. Now the pulley diameters are chosen. It is generally either large diameters or large cross-section that are chosen, since, as stated earlier, larger belts transmit this same power at low belt speeds as smaller belts do at high speeds. To keep the driving part at its smallest, minimal-diameter pulleys are desired. Minimum pulley diameters are limited by the elongation of the belt's outer fibers as the belt wraps around the pulleys. Small pulleys increase this elongation, greatly reducing belt life. Minimal pulley diameters are often listed with each cross-section and speed, or listed separately by belt cross-section. After the cheapest diameters and belt section are chosen, the belt length is computed. If endless belts are used, the desired shaft spacing may need adjusting to accommodate standard-length belts. It is often more economical to use two or more juxtaposed V-belts, rather than one larger belt.
In large speed ratios or small central distances, the angle of contact between the belt and pulley may be less than 180°. If this is the case, the drive power must be further increased, according to manufacturer's tables, and the selection process repeated. This is because power capacities are based on the standard of a 180° contact angle. Smaller contact angles mean less area for the belt to obtain traction, and thus the belt carries less power.
Belt friction
Belt drives depend on friction to operate, but excessive friction wastes energy and rapidly wears the belt. Factors that affect belt friction include belt tension, contact angle, and the materials used to make the belt and pulleys.
Belt tension
Power transmission is a function of belt tension. However, also increasing with tension is stress (load) on the belt and bearings. The ideal belt is that of the lowest tension that does not slip in high loads. Belt tensions should also be adjusted to belt type, size, speed, and pulley diameters. Belt tension is determined by measuring the force to deflect the belt a given distance per inch (or mm) of pulley. Timing belts need only adequate tension to keep the belt in contact with the pulley.
Belt wear
Fatigue, more so than abrasion, is the culprit for most belt problems. This wear is caused by stress from rolling around the pulleys. High belt tension; excessive slippage; adverse environmental conditions; and belt overloads caused by shock, vibration, or belt slapping all contribute to belt fatigue.
Belt vibration
Vibration signatures are widely used for studying belt drive malfunctions. Some of the common malfunctions or faults include the effects of belt tension, speed, sheave eccentricity and misalignment conditions. The effect of sheave eccentricity on the vibration signature of a belt drive is quite significant: although it does not necessarily increase the vibration magnitude, it creates strong amplitude modulation. When the top section of the belt is in resonance, the vibration of the machine increases; however, the increase is not significant when only the bottom section of the belt is in resonance. The vibration spectrum tends to move to higher frequencies as the tension force of the belt is increased.
Belt dressing
Belt slippage can be addressed in several ways. Belt replacement is an obvious solution, and eventually the mandatory one (because no belt lasts forever). Often, though, before the replacement option is executed, retensioning (via pulley centerline adjustment) or dressing (with any of various coatings) may be successful to extend the belt's lifespan and postpone replacement. Belt dressings are typically liquids that are poured, brushed, dripped, or sprayed onto the belt surface and allowed to spread around; they are meant to recondition the belt's driving surfaces and increase friction between the belt and the pulleys. Some belt dressings are dark and sticky, resembling tar or syrup; some are thin and clear, resembling mineral spirits. Some are sold to the public in aerosol cans at auto parts stores; others are sold in drums only to industrial users.
Specifications
To fully specify a belt, the material, length, and cross-section size and shape are required. Timing belts, in addition, require that the size of the teeth be given. For an open drive, the belt length is approximately the sum of twice the centre distance, half the circumference of each pulley, and the square of the difference of the pulley radii divided by the centre distance: L ≈ 2C + π(r₁ + r₂) + (r₂ − r₁)²/C. The last term is a correction that arises in a manner similar to the Pythagorean theorem; as the two radii approach each other it contributes less and less, approaching zero when the pulleys are the same size.
On the other hand, in a crossed belt drive the sum rather than the difference of the radii enters the correction term, L ≈ 2C + π(r₁ + r₂) + (r₁ + r₂)²/C, so enlarging either pulley increases the belt length.
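A sketch of the open- and crossed-drive length approximations described above, with assumed pulley radii and centre distance:

import math

def belt_length(r1: float, r2: float, centre: float, crossed: bool = False) -> float:
    """Approximate belt length: 2C + pi(r1 + r2) + (r1 ± r2)^2 / C (+ for crossed, - for open)."""
    term = (r1 + r2) if crossed else (r2 - r1)
    return 2 * centre + math.pi * (r1 + r2) + term**2 / centre

r1, r2, c = 0.05, 0.15, 0.50          # assumed radii and centre distance in metres
print(f"open:    {belt_length(r1, r2, c):.3f} m")
print(f"crossed: {belt_length(r1, r2, c, crossed=True):.3f} m")  # longer, since the sum of radii enters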
V-belt profiles
Metric v-belt profiles (note pulley angles are reduced for small radius pulleys):
* Common pulley design is to have a higher angle of the first part of the opening, above the so-called "pitch line".
E.g. the pitch line for SPZ could be 8.5 mm from the bottom of the "V". In other words, 0–8.5 mm is 35° and 45° from 8.5 and above.
See also
Belt-drive turntable
Belt-driven bicycle
Belt track
Conveyor belt
Gilmer belt
Lariat chain – a science exhibit showing the effects when a belt is run "too fast"
Roller chain
Timing belt (camshaft)
References
External links
Belt Passing Frequency Vibration Calculator | RITEC | Library & Tools
Chinese inventions
Mechanical power transmission | Belt (mechanical) | [
"Physics"
] | 5,798 | [
"Mechanical power transmission",
"Mechanics"
] |
1,364,622 | https://en.wikipedia.org/wiki/Four-dimensional%20space | Four-dimensional space (4D) is the mathematical extension of the concept of three-dimensional space (3D). Three-dimensional space is the simplest possible abstraction of the observation that one needs only three numbers, called dimensions, to describe the sizes or locations of objects in the everyday world. For example, the volume of a rectangular box is found by measuring and multiplying its length, width, and height (often labeled , , and ). This concept of ordinary space is called Euclidean space because it corresponds to Euclid's geometry, which was originally abstracted from the spatial experiences of everyday life.
The idea of adding a fourth dimension appears in Jean le Rond d'Alembert's "Dimensions", published in 1754, but the mathematics of more than three dimensions only emerged in the 19th century. The general concept of Euclidean space with any number of dimensions was fully developed by the Swiss mathematician Ludwig Schläfli before 1853. Schläfli's work received little attention during his lifetime and was published only posthumously, in 1901, but meanwhile the fourth Euclidean dimension was rediscovered by others. In 1880 Charles Howard Hinton popularized it in an essay, "What is the Fourth Dimension?", in which he explained the concept of a "four-dimensional cube" with a step-by-step generalization of the properties of lines, squares, and cubes. The simplest form of Hinton's method is to draw two ordinary 3D cubes in 2D space, one encompassing the other, separated by an "unseen" distance, and then draw lines between their equivalent vertices. This can be seen in the accompanying animation whenever it shows a smaller inner cube inside a larger outer cube. The eight lines connecting the vertices of the two cubes in this case represent a single direction in the "unseen" fourth dimension.
Higher-dimensional spaces (greater than three) have since become one of the foundations for formally expressing modern mathematics and physics. Large parts of these topics could not exist in their current forms without using such spaces. Einstein's theory of relativity is formulated in 4D space, although not in a Euclidean 4D space. Einstein's concept of spacetime has a Minkowski structure based on a non-Euclidean geometry with three spatial dimensions and one temporal dimension, rather than the four symmetric spatial dimensions of Schläfli's Euclidean 4D space.
Single locations in Euclidean 4D space can be given as vectors or 4-tuples, i.e., as ordered lists of numbers such as . It is only when such locations are linked together into more complicated shapes that the full richness and geometric complexity of higher-dimensional spaces emerge. A hint of that complexity can be seen in the accompanying 2D animation of one of the simplest possible regular 4D objects, the tesseract, which is analogous to the 3D cube.
History
Lagrange wrote in his (published 1788, based on work done around 1755) that mechanics can be viewed as operating in a four-dimensional space— three dimensions of space, and one of time. As early as 1827, Möbius realized that a fourth spatial dimension would allow a three-dimensional form to be rotated onto its mirror-image. The general concept of Euclidean space with any number of dimensions was fully developed by the Swiss mathematician Ludwig Schläfli in the mid-19th century, at a time when Cayley, Grassman and Möbius were the only other people who had ever conceived the possibility of geometry in more than three dimensions. By 1853 Schläfli had discovered all the regular polytopes that exist in higher dimensions, including the four-dimensional analogs of the Platonic solids.
An arithmetic of four spatial dimensions, called quaternions, was defined by William Rowan Hamilton in 1843. This associative algebra was the source of the science of vector analysis in three dimensions as recounted by Michael J. Crowe in A History of Vector Analysis. Soon after, tessarines and coquaternions were introduced as other four-dimensional algebras over R. In 1886, Victor Schlegel described his method of visualizing four-dimensional objects with Schlegel diagrams.
One of the first popular expositors of the fourth dimension was Charles Howard Hinton, starting in 1880 with his essay What is the Fourth Dimension?, published in the Dublin University magazine. He coined the terms tesseract, ana and kata in his book A New Era of Thought and introduced a method for visualizing the fourth dimension using cubes in the book Fourth Dimension. Hinton's ideas inspired a fantasy about a "Church of the Fourth Dimension" featured by Martin Gardner in his January 1962 "Mathematical Games column" in Scientific American.
Higher dimensional non-Euclidean spaces were put on a firm footing by Bernhard Riemann's 1854 thesis, , in which he considered a "point" to be any sequence of coordinates . In 1908, Hermann Minkowski presented a paper consolidating the role of time as the fourth dimension of spacetime, the basis for Einstein's theories of special and general relativity. But the geometry of spacetime, being non-Euclidean, is profoundly different from that explored by Schläfli and popularised by Hinton. The study of Minkowski space required Riemann's mathematics which is quite different from that of four-dimensional Euclidean space, and so developed along quite different lines. This separation was less clear in the popular imagination, with works of fiction and philosophy blurring the distinction, so in 1973 H. S. M. Coxeter felt compelled to write:
Vectors
Mathematically, a four-dimensional space is a space that needs four parameters to specify a point in it. For example, a general point might have position vector a, equal to
a = (a₁, a₂, a₃, a₄).
This can be written in terms of the four standard basis vectors (e₁, e₂, e₃, e₄), given by
e₁ = (1, 0, 0, 0), e₂ = (0, 1, 0, 0), e₃ = (0, 0, 1, 0), e₄ = (0, 0, 0, 1),
so the general vector a is
a = a₁e₁ + a₂e₂ + a₃e₃ + a₄e₄.
Vectors add, subtract and scale as in three dimensions.
The dot product of Euclidean three-dimensional space generalizes to four dimensions as
a · b = a₁b₁ + a₂b₂ + a₃b₃ + a₄b₄.
It can be used to calculate the norm or length of a vector,
|a| = √(a · a) = √(a₁² + a₂² + a₃² + a₄²),
and calculate or define the angle between two non-zero vectors as
θ = arccos( (a · b) / (|a| |b|) ).
Minkowski spacetime is four-dimensional space with geometry defined by a non-degenerate pairing different from the dot product:
a · b = a₁b₁ + a₂b₂ + a₃b₃ − a₄b₄.
As an example, the distance squared between the points (0, 0, 0, 0) and (1, 1, 1, 0) is 3 in both the Euclidean and Minkowskian 4-spaces, while the distance squared between (0, 0, 0, 0) and (1, 1, 1, 1) is 4 in Euclidean space and 2 in Minkowski space; increasing the fourth (time-like) coordinate decreases the Minkowski metric distance. This leads to many of the well-known apparent "paradoxes" of relativity.
The cross product is not defined in four dimensions. Instead, the exterior product is used for some applications, and is defined as follows:
a ∧ b = (a₁b₂ − a₂b₁)e₁₂ + (a₁b₃ − a₃b₁)e₁₃ + (a₁b₄ − a₄b₁)e₁₄ + (a₂b₃ − a₃b₂)e₂₃ + (a₂b₄ − a₄b₂)e₂₄ + (a₃b₄ − a₄b₃)e₃₄.
This is bivector valued, with bivectors in four dimensions forming a six-dimensional linear space with basis (e₁₂, e₁₃, e₁₄, e₂₃, e₂₄, e₃₄). They can be used to generate rotations in four dimensions.
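A short sketch of the operations above (Euclidean dot product, norm, angle, and the Minkowski pairing) for 4-tuples; the sample vectors are arbitrary:

import math

def dot4(a, b):
    """Euclidean dot product in four dimensions."""
    return sum(x * y for x, y in zip(a, b))

def minkowski4(a, b):
    """Minkowski pairing: spatial terms positive, fourth (time-like) term negative."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3]

def norm4(a):
    return math.sqrt(dot4(a, a))

def angle4(a, b):
    return math.acos(dot4(a, b) / (norm4(a) * norm4(b)))

p, q = (1, 1, 1, 0), (1, 1, 1, 1)                        # the example points from the text
print(dot4(p, p), minkowski4(p, p))                      # 3 3
print(dot4(q, q), minkowski4(q, q))                      # 4 2
print(math.degrees(angle4((1, 0, 0, 0), (0, 1, 0, 0))))  # 90.0: orthogonal basis vectors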
Orthogonality and vocabulary
In the familiar three-dimensional space of daily life, there are three coordinate axes—usually labeled , , and —with each axis orthogonal (i.e. perpendicular) to the other two. The six cardinal directions in this space can be called up, down, east, west, north, and south. Positions along these axes can be called altitude, longitude, and latitude. Lengths measured along these axes can be called height, width, and depth.
Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled . To describe the two additional cardinal directions, Charles Howard Hinton coined the terms ana and kata, from the Greek words meaning "up toward" and "down from", respectively.
As mentioned above, Hermann Minkowski exploited the idea of four dimensions to discuss cosmology including the finite velocity of light. In appending a time dimension to three-dimensional space, he specified an alternative perpendicularity, hyperbolic orthogonality. This notion provides his four-dimensional space with a modified simultaneity appropriate to electromagnetic relations in his cosmos. Minkowski's world overcame problems associated with the traditional absolute space and time cosmology previously used in a universe of three space dimensions and one time dimension.
Geometry
The geometry of four-dimensional space is much more complex than that of three-dimensional space, due to the extra degree of freedom.
Just as in three dimensions there are polyhedra made of two dimensional polygons, in four dimensions there are polychora made of polyhedra. In three dimensions, there are 5 regular polyhedra known as the Platonic solids. In four dimensions, there are 6 convex regular 4-polytopes, the analogs of the Platonic solids. Relaxing the conditions for regularity generates a further 58 convex uniform 4-polytopes, analogous to the 13 semi-regular Archimedean solids in three dimensions. Relaxing the conditions for convexity generates a further 10 nonconvex regular 4-polytopes.
In three dimensions, a circle may be extruded to form a cylinder. In four dimensions, there are several different cylinder-like objects. A sphere may be extruded to obtain a spherical cylinder (a cylinder with spherical "caps", known as a spherinder), and a cylinder may be extruded to obtain a cylindrical prism (a cubinder). The Cartesian product of two circles may be taken to obtain a duocylinder. All three can "roll" in four-dimensional space, each with its properties.
In three dimensions, curves can form knots but surfaces cannot (unless they are self-intersecting). In four dimensions, however, knots made using curves can be trivially untied by displacing them in the fourth direction—but 2D surfaces can form non-trivial, non-self-intersecting knots in 4D space. Because these surfaces are two-dimensional, they can form much more complex knots than strings in 3D space can. The Klein bottle is an example of such a knotted surface. Another such surface is the real projective plane.
Hypersphere
The set of points in Euclidean 4-space having the same distance R from a fixed point forms a hypersurface known as a 3-sphere. The hyper-volume of the enclosed space is:
V = ½ π² R⁴.
This is part of the Friedmann–Lemaître–Robertson–Walker metric in general relativity, where R is substituted by the function R(t), with t meaning the cosmological age of the universe. Growing or shrinking R with time means an expanding or collapsing universe, depending on the mass density inside.
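As a quick check of the formula above (a sketch; the radius value is arbitrary):

import math

def hypervolume_4ball(r: float) -> float:
    """Hyper-volume enclosed by a 3-sphere of radius r: (1/2) * pi^2 * r^4."""
    return 0.5 * math.pi**2 * r**4

print(hypervolume_4ball(1.0))   # ≈ 4.9348 for a unit 3-sphere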
Four-dimensional perception in humans
Research using virtual reality finds that humans, despite living in a three-dimensional world, can, without special practice, make spatial judgments about line segments embedded in four-dimensional space, based on their length (one-dimensional) and the angle (two-dimensional) between them. The researchers noted that "the participants in our study had minimal practice in these tasks, and it remains an open question whether it is possible to obtain more sustainable, definitive, and richer 4D representations with increased perceptual experience in 4D virtual environments". In another study, the ability of humans to orient themselves in 2D, 3D, and 4D mazes has been tested. Each maze consisted of four path segments of random length and connected with orthogonal random bends, but without branches or loops (i.e. actually labyrinths). The graphical interface was based on John McIntosh's free 4D Maze game. The participating persons had to navigate through the path and finally estimate the linear direction back to the starting point. The researchers found that some of the participants were able to mentally integrate their path after some practice in 4D (the lower-dimensional cases were for comparison and for the participants to learn the method).
However, a 2020 review underlined how these studies are composed of a small subject sample and mainly of college students. It also pointed out other issues that future research has to resolve: elimination of artifacts (these could be caused, for example, by strategies to resolve the required task that don't use 4D representation/4D reasoning and feedback given by researchers to speed up the adaptation process) and analysis on inter-subject variability (if 4D perception is possible, its acquisition could be limited to a subset of humans, to a specific critical period, or to people's attention or motivation). Furthermore, it is undetermined if there is a more appropriate way to project the 4-dimension (because there are no restrictions on how the 4-dimension can be projected). Researchers also hypothesized that human acquisition of 4D perception could result in the activation of brain visual areas and entorhinal cortex. If so they suggest that it could be used as a strong indicator of 4D space perception acquisition. Authors also suggested using a variety of different neural network architectures (with different a priori assumptions) to understand which ones are or are not able to learn.
Dimensional analogy
To understand the nature of four-dimensional space, a device called dimensional analogy is commonly employed. Dimensional analogy is the study of how (n − 1) dimensions relate to n dimensions, and then inferring how n dimensions would relate to (n + 1) dimensions.
The dimensional analogy was used by Edwin Abbott Abbott in the book Flatland, which narrates a story about a square that lives in a two-dimensional world, like the surface of a piece of paper. From the perspective of this square, a three-dimensional being has seemingly god-like powers, such as the ability to remove objects from a safe without breaking it open (by moving them across the third dimension), to see everything that from the two-dimensional perspective is enclosed behind walls, and to remain completely invisible by standing a few inches away in the third dimension.
By applying dimensional analogy, one can infer that a four-dimensional being would be capable of similar feats from the three-dimensional perspective. Rudy Rucker illustrates this in his novel Spaceland, in which the protagonist encounters four-dimensional beings who demonstrate such powers.
Cross-sections
As a three-dimensional object passes through a two-dimensional plane, two-dimensional beings in this plane would only observe a cross-section of the three-dimensional object within this plane. For example, if a sphere passed through a sheet of paper, beings in the paper would first see a single point, then a circle gradually growing larger until it reached the diameter of the sphere, and then shrinking again until it vanished to a point and disappeared. The 2D beings would not see a circle in the same way as three-dimensional beings do; rather, they only see a one-dimensional projection of the circle on their 1D "retina". Similarly, if a four-dimensional object passed through a three-dimensional (hyper)surface, one could observe a three-dimensional cross-section of the four-dimensional object. For example, a hypersphere would appear first as a point, then as a growing sphere (until it reaches the "hyperdiameter" of the hypersphere), with the sphere then shrinking to a single point and then disappearing. This means of visualizing aspects of the fourth dimension was used in the novel Flatland and also in several works of Charles Howard Hinton. And just as three-dimensional beings (such as humans with a 2D retina) can see all the sides and the insides of a 2D shape simultaneously, a 4D being could see all faces and the inside of a 3D shape at once with its 3D retina.
Projections
A useful application of dimensional analogy in visualizing higher dimensions is in projection. A projection is a way of representing an n-dimensional object in n − 1 dimensions. For instance, computer screens are two-dimensional, and all the photographs of three-dimensional people, places, and things are represented in two dimensions by projecting the objects onto a flat surface. By doing this, the dimension orthogonal to the screen (depth) is removed and replaced with indirect information. The retina of the eye is also a two-dimensional array of receptors but the brain can perceive the nature of three-dimensional objects by inference from indirect information (such as shading, foreshortening, binocular vision, etc.). Artists often use perspective to give an illusion of three-dimensional depth to two-dimensional pictures. The shadow, cast by a fictitious grid model of a rotating tesseract on a plane surface, as shown in the figures, is also the result of projections.
Similarly, objects in the fourth dimension can be mathematically projected to the familiar three dimensions, where they can be more conveniently examined. In this case, the 'retina' of the four-dimensional eye is a three-dimensional array of receptors. A hypothetical being with such an eye would perceive the nature of four-dimensional objects by inferring four-dimensional depth from indirect information in the three-dimensional images in its retina.
The perspective projection of three-dimensional objects into the retina of the eye introduces artifacts such as foreshortening, which the brain interprets as depth in the third dimension. In the same way, perspective projection from four dimensions produces similar foreshortening effects. By applying dimensional analogy, one may infer four-dimensional "depth" from these effects.
As an illustration of this principle, the following sequence of images compares various views of the three-dimensional cube with analogous projections of the four-dimensional tesseract into three-dimensional space.
Shadows
A concept closely related to projection is the casting of shadows.
If a light is shone on a three-dimensional object, a two-dimensional shadow is cast. By dimensional analogy, light shone on a two-dimensional object in a two-dimensional world would cast a one-dimensional shadow, and light on a one-dimensional object in a one-dimensional world would cast a zero-dimensional shadow, that is, a point of non-light. Going the other way, one may infer that light shining on a four-dimensional object in a four-dimensional world would cast a three-dimensional shadow.
If the wireframe of a cube is lit from above, the resulting shadow on a flat two-dimensional surface is a square within a square with the corresponding corners connected. Similarly, if the wireframe of a tesseract were lit from "above" (in the fourth dimension), its shadow would be that of a three-dimensional cube within another three-dimensional cube suspended in midair (a "flat" surface from a four-dimensional perspective). (Note that, technically, the visual representation shown here is a two-dimensional image of the three-dimensional shadow of the four-dimensional wireframe figure.)
Bounding regions
The dimensional analogy also helps in inferring basic properties of objects in higher dimensions, such as the bounding region. For example, two-dimensional objects are bounded by one-dimensional boundaries: a square is bounded by four edges. Three-dimensional objects are bounded by two-dimensional surfaces: a cube is bounded by 6 square faces.
By applying dimensional analogy, one may infer that a four-dimensional cube, known as a tesseract, is bounded by three-dimensional volumes. And indeed, this is the case: mathematics shows that the tesseract is bounded by 8 cubes. Knowing this is key to understanding how to interpret a three-dimensional projection of the tesseract. The boundaries of the tesseract project to volumes in the image, not merely two-dimensional surfaces.
Hypervolume
The 4-volume or hypervolume in 4D can be calculated in closed form for simple geometrical figures, such as the tesseract (s⁴, for side length s) and the 4-ball ((1/2)π²r⁴ for radius r).
Reasoning by analogy from familiar lower dimensions can be an excellent intuitive guide, but care must be exercised not to accept results that have not been more rigorously tested. For example, consider the formulas for the area enclosed by a circle in two dimensions (A = πr²) and the volume enclosed by a sphere in three dimensions (V = (4/3)πr³). One might guess that the volume enclosed by the sphere in four-dimensional space is a rational multiple of πr⁴, but the correct volume is (1/2)π²r⁴. The volume of an n-ball in an arbitrary dimension n is computable from a recurrence relation connecting dimension n to dimension n − 2.
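As a concrete illustration of the recurrence mentioned above, the short sketch below (Python, purely illustrative; the function name and the choice of Python are not from the source) computes n-ball volumes from the standard relation V_n(r) = (2πr²/n) · V_{n−2}(r):

    import math

    def n_ball_volume(n: int, r: float = 1.0) -> float:
        """Volume of the n-ball of radius r via the two-step recurrence.

        Base cases: V_0 = 1 (a point), V_1 = 2r (a line segment).
        Recurrence: V_n = (2 * pi * r**2 / n) * V_(n-2).
        """
        if n == 0:
            return 1.0
        if n == 1:
            return 2.0 * r
        return (2.0 * math.pi * r ** 2 / n) * n_ball_volume(n - 2, r)

    # Reproduces the familiar values quoted in the text:
    # n=2 -> pi*r^2, n=3 -> (4/3)*pi*r^3, n=4 -> (1/2)*pi^2*r^4
    for n in (2, 3, 4):
        print(n, n_ball_volume(n))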
In culture
In art
In literature
Science fiction texts often mention the concept of "dimension" when referring to parallel or alternate universes or other imagined planes of existence. This usage is derived from the idea that to travel to parallel/alternate universes/planes of existence one must travel in a direction/dimension besides the standard ones. In effect, the other universes/planes are just a small distance away from our own, but the distance is in a fourth (or higher) spatial (or non-spatial) dimension, not the standard ones.
One of the most heralded science fiction stories regarding true geometric dimensionality, and often recommended as a starting point for those just starting to investigate such matters, is the 1884 novella Flatland by Edwin A. Abbott. Isaac Asimov, in his foreword to the Signet Classics 1984 edition, described Flatland as "The best introduction one can find into the manner of perceiving dimensions."
The idea of other dimensions was incorporated into many early science fiction stories, appearing prominently, for example, in Miles J. Breuer's The Appendix and the Spectacles (1928) and Murray Leinster's The Fifth-Dimension Catapult (1931); and appeared irregularly in science fiction by the 1940s. Classic stories involving other dimensions include Robert A. Heinlein's —And He Built a Crooked House (1941), in which a California architect designs a house based on a three-dimensional projection of a tesseract; Alan E. Nourse's Tiger by the Tail and The Universe Between (both 1951); and The Ifth of Oofth (1957) by Walter Tevis. Another reference is Madeleine L'Engle's novel A Wrinkle In Time (1962), which uses the fifth dimension as a way of "tesseracting the universe" or "folding" space to move across it quickly. The fourth and fifth dimensions are also key components of the book The Boy Who Reversed Himself by William Sleator.
In philosophy
Immanuel Kant wrote in 1783: "That everywhere space (which is not itself the boundary of another space) has three dimensions and that space, in general, cannot have more dimensions is based on the proposition that not more than three lines can intersect at right angles in one point. This proposition cannot at all be shown from concepts, but rests immediately on intuition and indeed on pure intuition a priori because it is apodictically (demonstrably) certain."
"Space has Four Dimensions" is a short story published in 1846 by German philosopher and experimental psychologist Gustav Fechner under the pseudonym "Dr. Mises". The protagonist in the tale is a shadow who is aware of and able to communicate with other shadows, but who is trapped on a two-dimensional surface. According to Fechner, this "shadow-man" would conceive of the third dimension as being one of time. The story bears a strong similarity to the "Allegory of the Cave" presented in Plato's The Republic ( 380 BC).
Simon Newcomb wrote an article for the Bulletin of the American Mathematical Society in 1898 entitled "The Philosophy of Hyperspace". Linda Dalrymple Henderson coined the term "hyperspace philosophy", used to describe writing that uses higher dimensions to explore metaphysical themes, in her 1983 thesis about the fourth dimension in early-twentieth-century art. Examples of "hyperspace philosophers" include Charles Howard Hinton, the first writer, in 1888, to use the word "tesseract"; and the Russian esotericist P. D. Ouspensky.
See also
4-polytope
4-manifold
Exotic R4
Four-dimensionalism
List of four-dimensional games
Time in physics
Spacetime
Citations
References
Further reading
Andrew Forsyth (1930) Geometry of Four Dimensions, link from Internet Archive.
Extract of page 68
E. H. Neville (1921) The Fourth Dimension, Cambridge University Press, link from University of Michigan Historical Math Collection.
External links
"Dimensions" videos, showing several different ways to visualize four-dimensional objects
Science News article summarizing the "Dimensions" videos, with clips
Flatland: a Romance of Many Dimensions (second edition)
Frame-by-frame animations of 4D - 3D analogies
4 (number)
Dimension
Multi-dimensional geometry
Science fiction themes
Special relativity
Quaternions | Four-dimensional space | [
"Physics"
] | 5,009 | [
"Geometric measurement",
"Physical quantities",
"Special relativity",
"Theory of relativity",
"Dimension"
] |
1,364,686 | https://en.wikipedia.org/wiki/Cognitive%20liberty | Cognitive liberty, or the "right to mental self-determination", is the freedom of an individual to control their own mental processes, cognition, and consciousness. It has been argued to be both an extension of, and the principle underlying, the right to freedom of thought. Though a relatively recently defined concept, many theorists see cognitive liberty as being of increasing importance as technological advances in neuroscience allow for an ever-expanding ability to directly influence consciousness. Cognitive liberty is not a recognized right in any international human rights treaties, but has gained a limited level of recognition in the United States, and is argued to be the principle underlying a number of recognized rights.
Overview
The term "cognitive liberty" was coined by neuroethicist Wrye Sententia and legal theorist and lawyer Richard Glen Boire, the founders and directors of the non-profit Center for Cognitive Liberty and Ethics (CCLE). Sententia and Boire define cognitive liberty as "the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought."
The CCLE is a network of scholars dedicated to protecting freedom of thought in the modern world of accelerating neurotechnologies. They seek to develop public policies that will preserve and enhance freedom of thought, and offer guidance with regard to relevant developments in neurotechnology, psychopharmacology, cognitive sciences and law.
Sententia and Boire conceived of the concept of cognitive liberty as a response to the increasing ability of technology to monitor and manipulate cognitive function, and the corresponding increase in the need to ensure individual cognitive autonomy and privacy. Sententia divides the practical application of cognitive liberty into two principles:
As long as their behavior does not endanger others, individuals should not be compelled against their will to use technologies that directly interact with the brain or be forced to take certain psychoactive drugs.
As long as they do not subsequently engage in behavior that harms others, individuals should not be prohibited from, or criminalized for, using new mind-enhancing drugs and technologies.
These two facets of cognitive liberty are reminiscent of Timothy Leary's "Two Commandments for the Molecular Age", from his 1968 book The Politics of Ecstasy:
Supporters of cognitive liberty therefore seek to impose both a negative and a positive obligation on states: to refrain from non-consensually interfering with an individual's cognitive processes, and to allow individuals to self-determine their own "inner realm" and control their own mental functions.
Freedom from interference
This first obligation, to refrain from non-consensually interfering with an individual's cognitive processes, seeks to protect individuals from having their mental processes altered or monitored without their consent or knowledge, "setting up a defensive wall against unwanted intrusions". Ongoing improvements to neurotechnologies, such as transcranial magnetic stimulation and electroencephalography (or "brain fingerprinting"), and to pharmacology, in the form of selective serotonin reuptake inhibitors (SSRIs), nootropics, modafinil and other psychoactive drugs, are continuing to increase the ability to both monitor and directly influence human cognition. As a result, many theorists have emphasized the importance of recognizing cognitive liberty in order to protect individuals from the state using such technologies to alter those individuals' mental processes: "states must be barred from invading the inner sphere of persons, from accessing their thoughts, modulating their emotions or manipulating their personal preferences." These specific ethical concerns regarding the use of neuroscience technologies to interfere or invade the brain form the fields of neuroethics and neuroprivacy.
This element of cognitive liberty has been raised in relation to a number of state-sanctioned interventions in individual cognition, from the mandatory psychiatric 'treatment' of homosexuals in the US before the 1970s, to the non-consensual administration of psychoactive drugs to unwitting US citizens during CIA Project MKUltra, to the forcible administration of mind-altering drugs on individuals to make them competent to stand trial. Futurist and bioethicist George Dvorsky, chair of the Board of the Institute for Ethics and Emerging Technologies has identified this element of cognitive liberty as being of relevance to the debate around the curing of autism spectrum conditions. Duke University School of Law Professor Nita A. Farahany has also proposed legislative protection of cognitive liberty as a way of safeguarding the protection from self-incrimination found in the Fifth Amendment to the US Constitution, in the light of the increasing ability to access human memory. Her book 'The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology' discusses the matter in great detail.
Though this element of cognitive liberty is often defined as an individual's freedom from state interference with human cognition, Jan Christoph Bublitz and Reinhard Merkel among others suggest that cognitive liberty should also prevent other, non-state entities from interfering with an individual's mental "inner realm". Bublitz and Merkel propose the introduction of a new criminal offense punishing "interventions severely interfering with another's mental integrity by undermining mental control or exploiting pre-existing mental weakness." Direct interventions that reduce or impair cognitive capacities such as memory, concentration, and willpower; alter preferences, beliefs, or behavioral dispositions; elicit inappropriate emotions; or inflict clinically identifiable mental injuries would all be prima facie impermissible and subject to criminal prosecution. Sententia and Boire have also expressed concern that corporations and other non-state entities might utilize emerging neurotechnologies to alter individuals' mental processes without their consent.
Freedom to self-determine
Where the first obligation seeks to protect individuals from interference with cognitive processes by the state, corporations or other individuals, this second obligation seeks to ensure that individuals have the freedom to alter or enhance their own consciousness. An individual who enjoys this aspect of cognitive liberty has the freedom to alter their mental processes in any way they wish to, whether through indirect methods such as meditation, yoga or prayer, or through direct cognitive intervention through psychoactive drugs or neurotechnology.
As psychotropic drugs are a powerful method of altering cognitive function, many advocates of cognitive liberty are also advocates of drug law reform, claiming that the "war on drugs" is in fact a "war on mental states". The CCLE, as well as other cognitive liberty advocacy groups such as Cognitive Liberty UK, have lobbied for the re-examination and reform of prohibited drug law; one of the CCLE's key guiding principles is that "governments should not criminally prohibit cognitive enhancement or the experience of any mental state". Calls for reform of restrictions on the use of prescription cognitive-enhancement drugs (also called smart drugs or nootropics) such as Prozac, Ritalin and Adderall have also been made on the grounds of cognitive liberty.
This element of cognitive liberty is also of great importance to proponents of the transhumanist movement, a key tenet of which is the enhancement of human mental function. Wrye Sententia has emphasized the importance of cognitive liberty in ensuring the freedom to pursue human mental enhancement, as well as the freedom to choose against enhancement. Sententia argues that the recognition of a "right to (and not to) direct, modify, or enhance one's thought processes" is vital to the free application of emerging neurotechnology to enhance human cognition and that something beyond the current conception of freedom of thought is needed. Sententia claims that "cognitive liberty's strength is that it protects those who do want to alter their brains, but also those who do not".
Relationship with recognized human rights
Cognitive liberty is not currently recognized as a human right by any international human rights treaty. While freedom of thought is recognized by Article 18 of the Universal Declaration of Human Rights (UDHR), freedom of thought can be distinguished from cognitive liberty in that the former is concerned with protecting an individual's freedom to think whatever they want, whereas cognitive liberty is concerned with protecting an individual's freedom to think however they want. Cognitive liberty seeks to protect an individual's right to determine their own state of mind and be free from external control over their state of mind, rather than just protecting the content of an individual's thoughts.
It has been suggested that the lack of protection of cognitive liberty in previous human rights instruments was due to the relative lack of technology capable of directly interfering with mental autonomy at the time the core human rights treaties were created. As the human mind was considered invulnerable to direct manipulation, control or alteration, it was deemed unnecessary to expressly protect individuals from unwanted mental interference. With modern advances in neuroscience and in anticipation of its future development however, it is argued that such express protection is becoming increasingly necessary.
Cognitive liberty then can be seen as an extension of or an "update" to the right to freedom of thought as it has been traditionally understood. Freedom of thought should now be understood to include the right to determine one's own mental state as well as the content of one's thoughts. However, some have instead argued that cognitive liberty is already an inherent part of the international human rights framework as the principle underlying the rights to freedom of thought, expression and religion. The freedom to think in whatever manner one chooses is a "necessary precondition to those guaranteed freedoms." Daniel Waterman and Casey William Hardison have argued that cognitive liberty is fundamental to Freedom of Thought because it encompasses the ability to have certain types of experiences, including the right to experience altered or non-ordinary states of consciousness. It has also been suggested that cognitive liberty can be seen to be a part of the inherent dignity of human beings as recognized by Article 1 of the UDHR.
Most proponents of cognitive liberty agree, however, that cognitive liberty should be expressly recognized as a human right in order to properly provide protection for individual cognitive autonomy.
At least one scholar and proponent of cognitive liberty, Christoph Bublitz, has used the term 'freedom of mind' to describe cognitive liberty: "mind altering interventions primary affect another sense of freedom, freedom of mind, a concept that has not received much attention although it should rank among the most important legal and political freedoms…This freedom is not often regarded in its own right but should be recognized and more fully developed in face of emerging mind-altering technologies…Freedom of mind is the freedom of a person to use her mental capacities as she pleases, free from external interferences and internal impediments".
Legal recognition
In the United States
Richard Glen Boire of the Center for Cognitive Liberty and Ethics filed an amicus brief with the US Supreme Court in the case of Sell v. United States, in which the Supreme Court examined whether the court had the power to make an order to forcibly administer antipsychotic medication to an individual who had refused such treatment, for the sole purpose of making them competent to stand trial.
In the United Kingdom
In the case of R v Hardison, the defendant, charged with eight counts under the Misuse of Drugs Act 1971 (MDA), including the production of DMT and LSD, claimed that cognitive liberty was safeguarded by Article 9 of the European Convention on Human Rights. Hardison argued that "individual sovereignty over one's interior environment constitutes the very core of what it means to be free", and that as psychotropic drugs are a potent method of altering an individual's mental process, prohibition of them under the MDA was in opposition to Article 9. The court however disagreed, calling Hardison's arguments a "portmanteau defense" and relying upon the UN Drug Conventions and the earlier case of R v Taylor to deny Hardison's right to appeal to a superior court. Hardison was convicted and given a 20-year prison sentence, though he was released on 29 May 2013 after nine years in prison.
Criticism
The recent development of the neurosciences is increasing the possibility of controlling and influencing specific mental functions. The risks inherent in removing restrictions on controlled cognitive-enhancing drugs, including that of widening the gap between those able to afford such treatments and those unable to do so, have caused many to remain skeptical about the wisdom of recognizing cognitive liberty as a right. Political philosopher and Harvard University professor Michael J. Sandel, when examining the prospect of memory enhancement, wrote that "some who worry about the ethics of cognitive enhancement point to the danger of creating two classes of human beings – those with access to enhancement technologies, and those who must make do with an unaltered memory that fades with age."
See also
Cognitive ergonomics
Cosmetic pharmacology
Drug liberalization
Morphological freedom
Neuroenhancement
Neuroethics
Neurolaw
Personalized medicine
Psychonautics
Responsible drug use
The Rhetoric of Drugs, by Jacques Derrida
Self-ownership
Techno-progressivism
Thomas Szasz
References
Civil rights and liberties
Drug culture
Human rights
Identity politics
Medical ethics
Psychedelics, dissociatives and deliriants
Transhumanism | Cognitive liberty | [
"Technology",
"Engineering",
"Biology"
] | 2,661 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
1,364,864 | https://en.wikipedia.org/wiki/Levelling%20refraction | Levelling refraction refers to the systematic refraction effect distorting the results of line levelling over the Earth's surface.
In line levelling, short segments of a line are levelled by taking readings through a level from two staffs, one fore and one behind. By chaining together the height differences of these segments, one can compute the total height difference between the end points of a line.
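A minimal sketch of the bookkeeping described above, assuming each instrument setup yields one backsight reading (to the rear staff) and one foresight reading (to the fore staff); the function name and the numbers are illustrative only, not taken from the source:

    def line_height_difference(setups):
        """Total height difference of a levelling line.

        Each setup contributes (backsight - foresight); chaining the segments
        gives the height of the end point relative to the starting point.
        """
        return sum(back - fore for back, fore in setups)

    # Example: three instrument setups along a line (readings in metres).
    # Sign convention: a positive result means the end point is higher than the start.
    readings = [(1.432, 0.918), (1.205, 1.640), (0.987, 1.112)]
    print(line_height_difference(readings))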
The classical work on levelling refraction is that of TJ Kukkamäki in 1938–39. His analysis is based upon the understanding that the measurement beams travel within a boundary layer close to the Earth's surface, which behaves differently from the atmosphere at large. When measuring over a tilted surface, the systematic effect accumulates.
The Kukkamäki levelling refraction became notorious as the explanation of the "Palmdale Bulge", which geodesists observed in California in the 1970s.
Levelling refraction can be eliminated by either of two techniques:
Measuring the vertical temperature gradient within the atmospheric boundary layer. Typically two temperature-dependent resistors are used, mounted at two different heights above the ground on a staff and connected in a Wheatstone bridge (see the sketch after this list).
Using climatological modelling. Depending on the time of day and year, geographical location, and general weather conditions, levelling observations for which no original temperature-gradient measurements were collected can also be approximately corrected.
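A minimal sketch of the temperature-gradient measurement used in the first technique, assuming an idealized Wheatstone bridge with the two temperature-dependent resistors in one half-bridge and two equal fixed resistors in the other; the component values and sensor type are placeholders, not taken from the source:

    def bridge_output(v_in, r0, alpha, t_lower, t_upper):
        """Output voltage of an idealized Wheatstone bridge.

        One half-bridge holds the two temperature-dependent resistors
        (lower sensor on top of the divider, upper sensor on the bottom);
        the other half-bridge holds two equal fixed resistors. Exactly:
            V_out = V_in * (R_upper - R_lower) / (2 * (R_lower + R_upper)),
        which to first order is V_in * alpha * (t_upper - t_lower) / 4,
        i.e. proportional to the vertical temperature difference.
        """
        r_lower = r0 * (1.0 + alpha * t_lower)
        r_upper = r0 * (1.0 + alpha * t_upper)
        return v_in * (r_upper - r_lower) / (2.0 * (r_lower + r_upper))

    # Illustrative numbers: 5 V excitation, 100-ohm platinum sensors
    # (alpha ~ 0.00385 per deg C), air 0.8 deg C cooler at the upper sensor.
    print(bridge_output(v_in=5.0, r0=100.0, alpha=0.00385, t_lower=20.0, t_upper=19.2))
    # A few millivolts; sign and magnitude track the temperature difference.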
An alternative, hi-tech approach is dispersometry, which uses two different wavelengths of light. Only recently have blue lasers become readily available, making this a realistic proposition.
References
Kukkamäki, T.J. (1938): Über die nivellitische Refraktion. Publ. 25, Finnish Geodetic Institute, Helsinki
Kukkamäki, T.J. (1939): Formeln und Tabellen zur Berechnung der nivellitischen Refraktion. Publ. 27, Finnish Geodetic Institute, Helsinki.
Further reading
Charles T. Whalen (1982), Results of Leveling Refraction Tests by the National Geodetic Survey, U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey, 1982.
Geodesy
Surveying | Levelling refraction | [
"Mathematics",
"Engineering"
] | 451 | [
"Applied mathematics",
"Civil engineering",
"Surveying",
"Geodesy"
] |
1,364,923 | https://en.wikipedia.org/wiki/Electronic%20throttle%20control | Electronic throttle control (ETC) is an automotive technology that uses electronics to replace the traditional mechanical linkages between the driver's input such as a foot pedal to the vehicle's throttle mechanism which regulates speed or acceleration. This concept is often called drive by wire, and sometimes called accelerate-by-wire or throttle-by-wire.
Operation
A typical ETC system consists of three major components: (i) an accelerator pedal module (ideally with two or more independent sensors), (ii) a throttle valve that can be opened and closed by an electric motor (sometimes referred to as an electric or electronic throttle body (ETB)), and (iii) a powertrain or engine control module (PCM or ECM). The ECM is a type of electronic control unit (ECU), which is an embedded system that employs software to determine the required throttle position by calculations from data measured by other sensors, including the accelerator pedal position sensors, engine speed sensor, vehicle speed sensor, and cruise control switches. The electric motor is then used to open the throttle valve to the desired angle via a closed-loop control algorithm within the ECM.
Benefits
The benefits of electronic throttle control are largely unnoticed by most drivers because the aim is to make the vehicle power-train characteristics seamlessly consistent irrespective of prevailing conditions, such as engine temperature, altitude, and accessory loads. Electronic throttle control is also working 'behind the scenes' to dramatically improve the ease with which the driver can execute gear changes and deal with the dramatic torque changes associated with rapid accelerations and decelerations.
Electronic throttle control facilitates the integration of features such as cruise control, traction control, stability control, and precrash systems and others that require torque management, since the throttle can be moved irrespective of the position of the driver's accelerator pedal. ETC provides some benefit in areas such as air-fuel ratio control, exhaust emissions and fuel consumption reduction, and also works in concert with other technologies such as gasoline direct injection.
Failure modes
There is no mechanical linkage between the accelerator pedal and the throttle valve with electronic throttle control. Instead, the position of the throttle valve (i.e., the amount of air in the engine) is fully controlled by the ETC software via the electric motor. But just opening or closing the throttle valve by sending a new signal to the electric motor is an open loop condition and leads to inaccurate control. Thus, most, if not all, current ETC systems use closed loop feedback systems, such as PID control, whereby the ECU tells the throttle to open or close a certain amount. The throttle position sensor(s) are continually read and then the software makes appropriate adjustments to reach the desired amount of engine power.
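The closed-loop idea can be illustrated with a deliberately simplified discrete PID loop; the gains, the actuator model and the numbers below are placeholders for illustration and do not describe any real ECM's control law:

    def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
        """One iteration of a textbook discrete PID controller.

        state is (integral, previous_error); returns (command, new_state).
        """
        error = setpoint - measured
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        command = kp * error + ki * integral + kd * derivative
        return command, (integral, error)

    # Toy closed loop: the motor changes the plate angle at a rate proportional
    # to the command, and the throttle position sensor reading feeds back.
    target_angle = 35.0          # degrees, derived from pedal position and other inputs
    angle = 0.0                  # current throttle position reported by the TPS
    state = (0.0, 0.0)
    for _ in range(2000):        # 2000 steps of 10 ms = 20 s of simulated time
        command, state = pid_step(target_angle, angle, state)
        angle += command * 0.01  # crude actuator model (pure integrator)
    print(round(angle, 1))       # converges towards the 35-degree target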
There are two primary types of throttle position sensor (TPS): a potentiometer or a non-contact Hall effect sensor (a magnetic device). A potentiometer is satisfactory for non-critical applications such as volume control on a radio, but because it relies on a wiper contact rubbing against a resistance element, dirt or wear between the wiper and the resistor can cause erratic readings. This is an insidious failure, as it may not produce any symptoms until there is total failure. The more reliable solution is a magnetic coupling, which makes no physical contact and so will never fail through wear. All cars having a TPS have what is known as a 'limp-home mode'. The car goes into limp-home mode when the accelerator, the engine control computer and the throttle are no longer communicating in a way that lets them function together. The engine control computer then shuts down the signal to the throttle position motor, and a set of springs in the throttle set it to a fast idle, fast enough to get the transmission in gear but not so fast that driving may be dangerous.
Software or electronic failures within the ETC have been suspected by some to be responsible for alleged incidents of unintended acceleration. A series of investigations by the U.S. National Highway Traffic Safety Administration (NHTSA) were unable to get to the bottom of all of the reported incidents of unintended acceleration in 2002 and later model year Toyota and Lexus vehicles. A February 2011 report issued by a team from NASA (which studied the source code and electronics for a 2005 Camry model, at the request of NHTSA) did not rule out software malfunctions as a potential cause. In October 2013, the first jury to hear evidence about Toyota's source code (from expert witness Michael Barr) found Toyota liable for the death of a passenger in a September 2007 unintended acceleration collision in Oklahoma.
References
Vehicle technology | Electronic throttle control | [
"Engineering"
] | 950 | [
"Vehicle technology",
"Mechanical engineering by discipline"
] |
1,367,595 | https://en.wikipedia.org/wiki/Elemental%20analysis | Elemental analysis is a process where a sample of some material (e.g., soil, waste or drinking water, bodily fluids, minerals, chemical compounds) is analyzed for its elemental and sometimes isotopic composition. Elemental analysis can be qualitative (determining what elements are present), and it can be quantitative (determining how much of each is present). Elemental analysis falls within the ambit of analytical chemistry, the instruments involved in deciphering the chemical nature of our world.
History
Antoine Lavoisier is regarded as the inventor of elemental analysis as a quantitative, experimental tool to assess the chemical composition of a compound. At the time, elemental analysis was based on the gravimetric determination of specific absorbent materials before and after selective adsorption of the combustion gases. Today fully automated systems based on thermal conductivity or infrared spectroscopy detection of the combustion gases, or other spectroscopic methods are used.
CHNX analysis
For organic chemists, elemental analysis or "EA" almost always refers to CHNX analysis—the determination of the mass fractions of carbon, hydrogen, nitrogen, and heteroatoms (X) (halogens, sulfur) of a sample. This information is important to help determine the structure of an unknown compound, as well as to help ascertain the structure and purity of a synthesized compound. In present-day organic chemistry, spectroscopic techniques (NMR, both 1H and 13C), mass spectrometry and chromatographic procedures have replaced EA as the primary technique for structural determination. However, it still gives very useful complementary information.
The most common form of elemental analysis, CHNS analysis, is accomplished by combustion analysis. Modern elemental analyzers are also capable of simultaneous determination of sulfur along with CHN in the same measurement run.
Quantitative analysis
Quantitative analysis determines the mass of each element or compound present. Other quantitative methods include gravimetry, optical atomic spectroscopy, and neutron activation analysis.
In gravimetry, the sample is dissolved and the element of interest is precipitated and its mass measured, or the element of interest is volatilized and the mass loss is measured.
Optical atomic spectroscopy includes flame atomic absorption, graphite furnace atomic absorption, and inductively coupled plasma atomic emission spectroscopy, which probe the outer electronic structure of atoms.
Neutron activation analysis involves the activation of a sample matrix through the process of neutron capture. The resulting radioactive target nuclei of the sample begin to decay, emitting gamma rays of specific energies that identify the radioisotopes present in the sample. The concentration of each analyte can be determined by comparison to an irradiated standard with known concentrations of each analyte.
Qualitative analysis
To qualitatively determine which elements exist in a sample, the methods are mass spectrometric atomic spectroscopy, such as inductively coupled plasma mass spectrometry, which probes the mass of atoms; other spectroscopy, which probes the inner electronic structure of atoms such as X-ray fluorescence, particle-induced X-ray emission, X-ray photoelectron spectroscopy, and Auger electron spectroscopy; and chemical methods such as the sodium fusion test and Schöniger oxidation.
Analysis of results
The analysis of results is performed by determining the ratio of elements within the sample and working out a chemical formula that fits those results. This process is useful because it helps determine whether a submitted sample is the desired compound and confirms its purity. The accepted deviation of elemental analysis results from the calculated values is 0.3%.
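A minimal sketch of the ratio calculation described above, using caffeine (C8H10N4O2) as a worked example; the atomic masses are standard values, while the function name and rounding are illustrative choices, not from the source:

    ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

    def empirical_formula(mass_percent):
        """Convert mass fractions (in percent) to the smallest whole-number atom ratio."""
        moles = {el: pct / ATOMIC_MASS[el] for el, pct in mass_percent.items()}
        smallest = min(moles.values())
        return {el: round(n / smallest, 2) for el, n in moles.items()}

    # Measured C/H/N percentages with oxygen taken as the remainder (caffeine):
    sample = {"C": 49.48, "H": 5.19, "N": 28.85, "O": 16.48}
    print(empirical_formula(sample))
    # -> roughly C: 4, H: 5, N: 2, O: 1, i.e. C4H5N2O; doubling gives the
    #    molecular formula C8H10N4O2 once the molar mass is known.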
See also
Dumas method of molecular weight determination
References
Analytical chemistry
Materials science | Elemental analysis | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 738 | [
"Applied and interdisciplinary physics",
"Materials science",
"Chemical tests",
"nan",
"Elemental analysis"
] |
1,368,015 | https://en.wikipedia.org/wiki/Collisional%20excitation | Collisional excitation is a process in which the kinetic energy of a collision partner is converted into the internal energy of a reactant species.
Astronomy
In astronomy, collisional excitation gives rise to spectral lines in the spectra of astronomical objects such as planetary nebulae and H II regions.
In these objects, most atoms are ionised by photons from hot stars embedded within the nebular gas, stripping away electrons. The emitted electrons (called photoelectrons) may collide with atoms or ions within the gas and excite them. When these excited atoms or ions revert to their ground state, they will emit a photon. The spectral lines formed by these photons are called collisionally excited lines (often abbreviated to CELs).
CELs are only seen in gases at very low densities (typically less than a few thousand particles per cm³) for forbidden transitions. For allowed transitions, the gas density can be substantially higher. At higher densities, the reverse process of collisional de-excitation suppresses the lines. Even the hardest vacuum produced on earth is still too dense for CELs to be observed. For this reason, when CELs were first observed by William Huggins in the spectrum of the Cat's Eye Nebula, he did not know what they were, and attributed them to a hypothetical new element called nebulium. However, the lines he observed were later found to be emitted by extremely rarefied oxygen.
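The density dependence described above is usually quantified through the critical density of a transition, the electron density at which collisional de-excitation balances radiative decay. For a simple two-level ion this is the standard textbook expression (it is not stated explicitly in this article):

    n_{\mathrm{crit}} = \frac{A_{21}}{q_{21}}

where A21 is the Einstein coefficient for spontaneous emission and q21 is the collisional de-excitation rate coefficient; collisionally excited lines remain strong only where the density is well below the critical density of the transition.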
CELs are very important in the study of gaseous nebulae, because they can be used to determine the density and temperature of the gas.
Mass spectrometry
Collisional excitation in mass spectrometry is the process where an ion collides with an atom or molecule and leads to an increase in the internal energy of the ion. Molecular ions are accelerated to high kinetic energy and then collide with neutral gas molecules (e.g. helium, nitrogen or argon). In the collision some of the kinetic energy is converted into internal energy which results in fragmentation in a process known as collision-induced dissociation.
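The amount of kinetic energy that can be converted into internal energy in a single collision is bounded by the centre-of-mass collision energy, a standard kinematic result often quoted in this context (it is not given in the article itself):

    E_{\mathrm{cm}} = E_{\mathrm{lab}}\,\frac{m_{\mathrm{gas}}}{m_{\mathrm{gas}} + m_{\mathrm{ion}}}

so a heavier collision gas (for example argon rather than helium) makes a larger fraction of the laboratory-frame kinetic energy E_lab available for fragmentation.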
See also
Collision-induced absorption and emission
References
Astronomical spectroscopy
Mass spectrometry | Collisional excitation | [
"Physics",
"Chemistry"
] | 446 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Astrophysics",
"Mass spectrometry",
"Astronomical spectroscopy",
"Spectroscopy",
"Matter"
] |
20,187,427 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20cyclooxygenase%202%20inhibitors | Cyclooxygenases are enzymes that take part in a complex biosynthetic cascade that results in the conversion of polyunsaturated fatty acids to prostaglandins and thromboxane(s).
Their main role is to catalyze the transformation of arachidonic acid into the intermediate prostaglandin H2, which is the precursor of a variety of prostanoids with diverse and potent biological actions.
Cyclooxygenases have two main isoforms that are called COX-1 and COX-2 (as well as a COX-3). COX-1 is responsible for the synthesis of prostaglandin and thromboxane in many types of cells, including the gastro-intestinal tract and blood platelets. COX-2 plays a major role in prostaglandin biosynthesis in inflammatory cells and in the central nervous system. Prostaglandin synthesis in these sites is a key factor in the development of inflammation and hyperalgesia.
COX-2 inhibitors have analgesic and anti-inflammatory activity by blocking the transformation of arachidonic acid into prostaglandin H2 selectively.
The rise for development of selective COX-2 inhibitors
The impetus for development of selective COX-2 inhibitors was the adverse gastrointestinal side-effects of NSAIDs. Soon after the discovery of the mechanism of action of NSAIDs, strong indications emerged for alternative forms of COX, but little supporting evidence was found. COX enzyme proved to be difficult to purify and was not sequenced until 1988. In 1991 the existence of the COX-2 enzyme was confirmed by being cloned by Dr. Dan Simmons at Brigham Young University. Before the confirmation of COX-2 existence, the Dupont company had developed a compound, DuP-697, that was potent in many anti-inflammatory assays but did not have the ulcerogenic effects of NSAIDs. Once the COX-2 enzyme was identified, Dup-697 became the building-block for synthesis of COX-2 inhibitors. Celecoxib and rofecoxib, the first COX-2 inhibitors to reach market, were based on DuP-697. It took less than eight years to develop and market the first COX-2 inhibitor, with Celebrex (celecoxib) launched in December 1998 and Vioxx (rofecoxib) launched in May 1999. Celecoxib and other COX-2 selective inhibitors, valdecoxib, parecoxib, and mavacoxib, were discovered by a team at the Searle division of Monsanto led by John Talley.
Development of COX-2 inhibitors
Early studies showed that, when inflammation is induced, the affected organ unexpectedly develops an enormous capacity to generate prostaglandins. It was demonstrated that the increase is due to de novo synthesis of fresh enzyme. In 1991, during the investigation of the expression of early-response genes in fibroblasts transformed with Rous sarcoma virus, a novel mRNA transcript that was similar, but not identical, to the seminal COX enzyme was identified. It was suggested that an isoenzyme of COX had been discovered. Another group discovered a novel cDNA species encoding a protein with similar structure to COX-1 while studying phorbol-ester-induced genes in Swiss 3T3 cells. The same laboratory showed that this gene truly expressed a novel COX enzyme. The two enzymes were renamed COX-1, referring to the original enzyme and COX-2.
Building on those results, scientists started focusing on selective COX-2 inhibitors. Enormous effort was spent on the development of NSAIDs between the 1960s and 1980s, so there were numerous pharmacophores to test when COX-2 was discovered. Early efforts focused on modification of two lead compounds, DuP-697 and NS-398. These compounds differ greatly from NSAIDs that are arylalkanoic acid analogs. Encouraged by the "concept testing" experiments with selective inhibitors, and armed with several solid leads and a clear idea of the nature of the binding site, development of this field was rapid. In vitro recombinant enzyme assays provided powerful means for assessing COX selectivity and potency and led to the discovery and clinical development of the first rationally designed COX-2 selective inhibitor, celecoxib. Efforts have been made to convert NSAIDs such as indometacin into selective COX-2 inhibitors by lengthening the alkylcarboxylic acid side-chain, but none have been marketed.
Structure Activity Relationship (SAR)
DuP-697 was a building-block for synthesis of COX-2 inhibitors and served as the basic chemical model for the coxibs that are the only selective COX-2 inhibitors on the market today. DuP-697 is a diaryl heterocycle with a cis-stilbene moiety. Structure–activity relationship (SAR) studies for diaryl heterocyclic compounds have indicated that a cis-stilbene moiety and changes in the para-position of one of the aryl rings play an important role in COX-2 selectivity. Celecoxib and parecoxib have a sulfonamide substituent (SO2NH2) in the para-position on one of the aryl rings while etoricoxib and rofecoxib have a methylsulfone (SO2CH3). The oxidation state on the sulfur is important for selectivity; sulfones and sulfonamides are selective for COX-2 but sulfoxides and sulfides are not. The ring system that is fused in this stilbene system has been extensively manipulated to include every imaginable heterocyclic and carbocyclic skeleton of varying ring sizes. It is known that a SO2NHCOCH3 moiety as in parecoxib, which is a prodrug for valdecoxib, is a 10⁵–10⁶ times more reactive acetylating agent of enzyme serine hydroxyl groups than simple amides. Because varying kinetic mechanisms affect potency for COX-1 versus COX-2, potency and selectivity measured in human whole blood are used by many groups and have been accepted as a standard assessment of COX-2 potency and selectivity.
The relationship between amino acid profile of COX-2 enzyme and inhibition mechanism
One of the keys to developing COX-2 selective drugs is the larger active site of COX-2, which makes it possible to make molecules too large to fit into the COX-1 active site but still able to fit the COX-2. The larger active site of COX-2 is partly due to a polar hydrophilic side-pocket that forms because of substitution of Ile523, His513, and Ile434 in COX-1 by Val523, Arg513, and Val434 in COX-2. Val523 is less bulky than Ile523, which increases the volume of the active site. Substitution of Ile434 for Val434 allows the side-chain of Phe518 to move back and make some extra space. This side-pocket allows for interactions with Arg513, which is a replacement for His513 of COX-1. Arg513 is thought to be a key residue for diaryl heterocycle inhibitors such as the coxibs. The side-chain of Leu384, at the top of the receptor channel, is oriented into the active site of COX-1, but, in COX-2, it is oriented away from the active site and makes more space in the apex of the binding site.
The bulky sulfonamide group in COX-2 inhibitors such as celecoxib and rofecoxib prevent the molecule from entering the COX-1 channel.
For optimal activity and selectivity of the coxibs, a 4-methylsulfonylphenyl attached to an unsaturated (usually) five-membered ring with a vicinal lipophilic group is required (rofecoxib). The SO2CH3 can be replaced by SO2NH2, wherein the lipophilic pocket is occupied by an optionally substituted phenyl ring or a bulky alkoxy substituent (celecoxib). Within the hydrophilic side-pocket of COX-2, the oxygen of the sulfonamide (or sulfone) group interacts with His90, Arg513, and Gln192 and forms hydrogen bonds. The substituted phenyl group at the top of the channel interacts with the side-chains of amino acid residues through hydrophobic and electrostatic interactions. Tyr385 imposes some steric restrictions on this side of the binding site, so a small substituent on the phenyl group makes for better binding. Degrees of freedom are also important for the binding. The central ring of the coxibs determines the orientation of the aromatic rings and, therefore, the binding to the COX enzyme, even though it often has no electrostatic interactions with any of the amino acid residues. The high lipophilicity of the active site does require low polarity of the central scaffold of the coxibs.
Mechanism of binding
Studies on the binding mechanism of selective COX-2 inhibitors show that they have two reversible steps with both COX-1 and COX-2, but the selectivity for COX-2 is due to another step that is slow and irreversible and is seen only in the inhibition of COX-2, not COX-1. The irreversible step has been attributed to the presence of the sulfonamide (or sulfone) that fits into the side-pocket of COX-2. This has been studied using SC-58125 (an analogue of celecoxib) and mutated COX-2, wherein the valine 523 residue was replaced by isoleucine 523. The irreversible inhibition did not happen, but reversible inhibition was noticed. A model has been made to explain this three-step mechanism behind the inhibitory effects of selective COX-2 inhibitors. The first step accounts for the contact of the inhibitor with the gate of the hydrophobic channel (called the lobby region). The second step could account for the movement of the inhibitor from the lobby region to the active site of the COX enzyme. The last step probably represents repositioning of the inhibitor at the active site, which leads to strong interactions of the phenylsulfonamide or phenylsulfone group of the inhibitor and the amino acids of the side pocket. The result is direct inhibition of prostaglandin synthesis.
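The three-step model described above is commonly summarized as a kinetic scheme of the following form (a schematic paraphrase, not a quotation from the cited studies):

    E + I \;\rightleftharpoons\; E{\cdot}I \;\rightleftharpoons\; E{\cdot}I^{*} \;\xrightarrow{\text{slow}}\; E{\cdot}I^{**}

where the first two steps are reversible for both isoforms, and the final slow, effectively irreversible step occurs only with COX-2, accounting for its time-dependent selectivity.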
Pharmacokinetics of coxibs
The coxibs are widely distributed throughout the body. All of the coxibs achieve sufficient brain concentrations to have a central analgesic effect, and all reduce prostaglandin formation in inflamed joints. All are well absorbed, but peak concentrations may differ between the coxibs. The coxibs are highly protein-bound, and published estimates of their half-lives vary between the coxibs.
Celecoxib
Celecoxib was the first specific inhibitor of COX-2 approved to treat patients with rheumatism and osteoarthritis. A study showed that the absorption rate, when given orally, is moderate, and peak plasma concentration occurs after about 2–4 hours. However, the extent of absorption is not well known. Celecoxib has the affinity to bind extensively to plasma proteins, especially to plasma albumin. It has an apparent volume of distribution (VD) of 455 +/- 166 L in humans and the area under the plasma concentration-time curve (AUC) increases proportionally to increased oral doses, between 100 and 800 mg. Celecoxib is metabolized primarily by CYP2C9 isoenzyme to carboxylic acid and also by non-CYP-dependent glucuronidation to glucuronide metabolites. The metabolites are excreted in urine and feces, with a small proportion of unchanged drug (2%) in the urine. Its elimination half-life is about 11 hours (6–12 hours) in healthy individuals, but racial differences in drug disposition and pharmacokinetic changes in the elderly have been reported. People with chronic kidney disease appear to have 43% lower plasma concentration compared to healthy individuals, with a 47% increase in apparent clearance, and it can be expected that patients with mild to moderate hepatic impairment have increased steady-state AUC.
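As a rough illustration of what an elimination half-life of about 11 hours implies, the sketch below uses a generic one-compartment, first-order elimination model; this is a textbook approximation for illustration, not a model fitted to celecoxib data:

    import math

    def remaining_fraction(t_hours, half_life_hours=11.0):
        """Fraction of the peak plasma concentration left after t_hours,
        assuming first-order elimination: C(t) = C0 * exp(-k*t), k = ln(2) / t_half."""
        k = math.log(2.0) / half_life_hours
        return math.exp(-k * t_hours)

    # With an ~11 h half-life, roughly 22% of the peak level remains after 24 hours.
    print(round(remaining_fraction(24.0), 2))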
Parecoxib and valdecoxib
Parecoxib sodium is a water-soluble inactive ester amide prodrug of valdecoxib, a novel second-generation COX-2-specific inhibitor and the first such agent to be developed for injectable use. It is rapidly converted by hepatic enzymatic hydrolysis to the active form valdecoxib. The compound then undergoes another conversion, which involves both cytochrome P450-mediated pathway (CYP2C9, CYP3A4) and non-cytochrome P450-mediated pathway, to hydroxylated metabolite and glucuronide metabolite. The hydroxylated metabolite, that also has weak COX-2-specific inhibitory properties, is then further metabolized by non-cytochrome P450 pathway to a glucuronide metabolite. These metabolites are excreted in the urine.
After intra-muscular administration of Parecoxib sodium peak plasma concentration is reached within 15 minutes. The plasma concentration decreases rapidly after administration because of a rather short serum half-life, which is about 15–52 minutes. This can be explained by the rapid formation of Valdecoxib. In contrast to the rapid clearance of Parecoxib, plasma concentration of Valdecoxib declines slowly because of a longer half-life. On the other hand, when Valdecoxib is taken orally it is absorbed rapidly (1–2 hours), but presence of food can delay peak serum concentration. It then undergoes the same metabolism that is described above. It is extensively protein-bound (98%), and the plasma half-life is about 7–8 hours. Note that the half-life can be significantly prolonged in the elderly or those with hepatic impairment, and can lead to drug accumulation.
The hydroxyl metabolite reaches its highest mean plasma concentration within 3 to 4 hours from administration, but it is considerably lower than of Valdecoxib or about 1/10 of the plasma levels of Valdecoxib.
Etoricoxib
Etoricoxib, which is used for patients with chronic arthropathies and musculoskeletal and dental pain, is moderately absorbed when given orally. A study on its pharmacokinetics showed that the plasma peak concentration of etoricoxib occurs after approximately 1 hour. It has been shown to be extensively bound to plasma albumin (about 90%), and has an apparent volume of distribution (VD) of 120 L in humans. The area under the plasma concentration-time curve (AUC) increases in proportion to increased dosage (5–120 mg). The elimination half-life is about 20 hours in healthy individuals, and such a long half-life permits once-daily dosing. Etoricoxib, like the other coxibs, is excreted in urine and feces and also metabolized in a similar manner. CYP3A4 is mostly responsible for biotransformation of etoricoxib to the carboxylic acid metabolite, but a non-CYP450 metabolic pathway to a glucuronide metabolite is also at hand. A very small portion of etoricoxib (<1%) is eliminated unchanged in the urine. Patients with chronic kidney disease do not appear to have a different plasma concentration curve (AUC) compared to healthy individuals. It has, though, been reported that patients with moderate hepatic impairment have a plasma concentration curve (AUC) increased by approximately 40%. It has been stated that further study is necessary to describe precisely the relevance of pharmacokinetic properties in terms of the clinical benefits and risks of etoricoxib compared to other clinical options.
Lumiracoxib
Lumiracoxib is unique amongst the coxibs in being a weak acid. It was developed for the treatment of osteoarthritis, rheumatoid arthritis and acute pain. The acidic nature of lumiracoxib allows it to penetrate well into areas of inflammation. It has been shown to be rapidly and well absorbed, with peak plasma concentration occurring in about 1–3 hours. A study showed that when a subject was given a 400 mg dose, the amount of unchanged drug in the plasma 2.5 hours postdose suggested a modest first-pass effect. The terminal half-life in plasma ranged from 5.4 to 8.6 hours (mean = 6.5 hours). The half-life in synovial fluid is considerably longer than in plasma, and the concentration in synovial fluid 24 hours after administration would be expected to result in a substantial COX-2 inhibition. This can explain why once-daily dosing may suffice for some users despite the short plasma half-life. The major plasma metabolites are 5-carboxy, 4’-hydroxy, and 4’-hydroxy-5-carboxy derivatives. Lumiracoxib is extensively metabolized before it is excreted, and the excretion routes are the urine or feces. Peak plasma concentrations exceed those necessary to maximally inhibit COX-2, which is consistent with a longer pharmacodynamic half-life. In vitro, lumiracoxib has demonstrated greater COX-2 selectivity than any of the other coxibs.
Rofecoxib
Rofecoxib was the second selective COX-2 inhibitor to be marketed, and the first one to be taken off the market. When the pharmacokinetics were studied in healthy human subjects, the peak concentration was achieved in 9 hours with effective half-life of approximately 17 hours. A secondary peak has been observed, which might suggest that the absorption of rofecoxib varies with intestinal motility, hence leading to high variability in time until peak concentration is met. Seventy-one and a half percent of the dose was recovered in urine (less than 1% unmetabolised) and 14.2% was recovered in feces (approximately 1.8% in the bile). Among the metabolites were rofecoxib-3’,4’-dihydrodiol, 4’-hydroxyrofecoxib-O-β-D-glucuronide, 5-hydroxyrofecoxib-O-β-D-glucuronide, 5-hydroxyrofecoxib, rofecoxib-erythro-3,4-dihydrohydroxy acid, rofecoxib-threo-3,4-dihydrohydroxy acid, cis-3,4-dihydrorofecoxib and trans-3,4-dihydrorofecoxib.
Cardiovascular events associated with selective COX-2 inhibitors
Even before the first selective COX-2 inhibitor was marketed, specialists began to suspect that there might be a cardiovascular risk associated with this class of medicines. In the VIGOR study (Vioxx Gastrointestinal Outcomes Research), rofecoxib (Vioxx) was compared to naproxen. After a short time, it became evident that there was a fivefold higher risk of myocardial infarction in the rofecoxib group compared to the group that received naproxen. The authors suggested that the difference was due to the cardioprotective effects of naproxen. The APPROVe (Adenomatous Polyp Prevention on Vioxx) study was a multicentre, randomized, placebo-controlled, double-blind trial aimed at assessing the effect of three-year treatment with rofecoxib on the recurrence of neoplastic polyps in individuals with a history of colorectal adenomas. In 2000 and 2001, 2587 patients with a history of colorectal adenomas were recruited and followed. The trial was stopped early (2 months before expected completion) on the recommendation of its data safety and monitoring board because of concerns about cardiovascular toxicity. The results showed a statistically significant increase in cardiovascular risk when taking rofecoxib compared to placebo, beginning after 18 months of treatment. Then, on 30 September 2004, Merck issued a news release announcing its voluntary worldwide withdrawal of Vioxx.
Some studies of other coxibs have also shown an increase in the risk of cardiovascular events, while others have not. For instance, the Adenoma Prevention with Celecoxib (APC) study showed a dose-related increase in the risk of cardiovascular death, myocardial infarction, stroke, or heart failure when taking celecoxib compared to placebo, and the Successive Celecoxib Efficacy and Safety Study I (SUCCESS-I) showed an increased risk of myocardial infarction when taking 100 mg of celecoxib twice a day compared to diclofenac and naproxen, whereas taking 200 mg twice a day was associated with a lower incidence of myocardial infarction than diclofenac and naproxen. Nussmeier et al. (2005) showed an increased incidence of cardiovascular events when taking parecoxib and valdecoxib (compared to placebo) after coronary artery bypass surgery.
Possible mechanisms
It has been proposed that COX-2 selectivity could cause an imbalance of prostaglandins in the vasculature. If this were the explanation for the increased cardiovascular risk, then low-dose aspirin should negate this effect, which was not the case in the APPROVe trial. In addition, the non-selective COX inhibitors have also shown an increase in cardiovascular events.
Another possible explanation was studied by Li H. et al. (2008). They showed that in spontaneously hypertensive rats (SHR), non-selective NSAIDs and the coxibs produce oxidative stress, indicated by enhanced vascular superoxide (O2−) content and elevated peroxide in plasma, consistent with enhanced expression of NADPH oxidase, which was observed with the use of diclofenac and naproxen and, to a lesser degree, rofecoxib and celecoxib. Nitrite in plasma was also decreased, suggesting diminished synthesis of vascular nitric oxide (NO). This decrease in NO synthesis did not result from decreased expression of endothelial nitric oxide synthase (eNOS), because expression of eNOS mRNA was not reduced and was even upregulated for some products. The decrease in NO synthesis could, rather, be explained by loss of eNOS function. For eNOS to function normally, it needs to form a dimer and to have its cofactor BH4, which is one of the most potent naturally occurring reducing agents. BH4 is sensitive to oxidation by peroxynitrite (ONOO−), which is produced when NO reacts with O2−, so it has been hypothesized that depletion of BH4 can occur under excessive oxidative stress (such as that caused by NSAIDs) and, hence, be the cause of eNOS dysfunction. This dysfunction, which is referred to as eNOS uncoupling, causes the production of O2− by eNOS, thereby leading to further oxidative stress produced by eNOS. In the study, both the selective COX-2 inhibitors and the non-selective NSAIDs produced oxidative stress, with greater effects seen with non-selective NSAID use. This could fit with the hypothesis concerning the prostacyclin/thromboxane imbalance: although the non-selective NSAIDs produce more oxidative stress, they prevent platelet aggregation, whereas the COX-2 inhibitors reduce prostacyclin production, and, hence, the cardiovascular risk for the non-selective NSAIDs is not higher than for the coxibs.
Among other hypotheses are increased blood pressure, decreased production of epi-lipoxins (which have anti-inflammatory effects), and inhibition of vascular remodeling when using selective COX-2 inhibitors.
See also
Arachidonic acid
Cyclooxygenase
Cyclooxygenase 1
Cyclooxygenase 2
NSAID
COX-2 selective inhibitor
References
Cyclooxygenase 2 Inhibitors, Discovery And Development Of
COX-2 inhibitors | Discovery and development of cyclooxygenase 2 inhibitors | [
"Chemistry",
"Biology"
] | 5,243 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
20,191,641 | https://en.wikipedia.org/wiki/Mutually%20unbiased%20bases | In quantum information theory, a set of bases in Hilbert space Cd are said to be mutually unbiased if, when a system is prepared in an eigenstate of one of the bases, all outcomes of a measurement with respect to any other basis are predicted to occur with equal probability, namely 1/d.
Overview
The notion of mutually unbiased bases was first introduced by Julian Schwinger in 1960, and the first person to consider applications of mutually unbiased bases was I. D. Ivanovic in the problem of quantum state determination.
Mutually unbiased bases (MUBs) and their existence problem are now known to have several closely related problems and equivalent avatars in several other branches of mathematics and quantum sciences, such as SIC-POVMs, finite projective/affine planes, complex Hadamard matrices and more [see section: Related problems].
MUBs are important for quantum key distribution, more specifically in secure quantum key exchange. MUBs are used in many protocols since the outcome is random when a measurement is made in a basis unbiased to that in which the state was prepared. When two remote parties share two non-orthogonal quantum states, attempts by an eavesdropper to distinguish between these by measurements will affect the system and this can be detected. While many quantum cryptography protocols have relied on 1-qubit technologies, employing higher-dimensional states, such as qutrits, allows for better security against eavesdropping. This motivates the study of mutually unbiased bases in higher-dimensional spaces.
Other uses of mutually unbiased bases include quantum state reconstruction, quantum error correction codes, detection of quantum entanglement, and the so-called "mean king's problem".
Definition and examples
A pair of orthonormal bases {|e_1⟩, …, |e_d⟩} and {|f_1⟩, …, |f_d⟩} in Hilbert space Cd are said to be mutually unbiased, if and only if the square of the magnitude of the inner product between any basis states |e_i⟩ and |f_j⟩ equals the inverse of the dimension d: |⟨e_i|f_j⟩|² = 1/d.
These bases are unbiased in the following sense: if a system is prepared in a state belonging to one of the bases, then all outcomes of the measurement with respect to the other basis are predicted to occur with equal probability.
Example for d = 2
The three bases
provide the simplest example of mutually unbiased bases in C2. The above bases are composed of the eigenvectors of the Pauli spin matrices σ_z and σ_x, and of their product σ_z σ_x, respectively.
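As a quick numerical illustration (hypothetical code, not part of the original construction), the following Python/NumPy sketch builds the eigenbases of σ_z, σ_x and σ_y (the latter sharing its eigenvectors with the product σ_z σ_x up to phase) and verifies that every pair of bases satisfies |⟨e_i|f_j⟩|² = 1/2:

```python
import numpy as np
from itertools import combinations

# Columns are the basis vectors: eigenbases of sigma_z, sigma_x and sigma_y.
B_z = np.array([[1, 0], [0, 1]], dtype=complex)
B_x = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
B_y = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)

for A, B in combinations([B_z, B_x, B_y], 2):
    overlaps = np.abs(A.conj().T @ B) ** 2   # |<e_i|f_j>|^2 for all pairs i, j
    assert np.allclose(overlaps, 1 / 2)      # equals 1/d with d = 2
print("all three pairs of bases are mutually unbiased")
```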
Example for d = 4
For d = 4, an example of d + 1 = 5 mutually unbiased bases where each basis is denoted by Mj, 0 ≤ j ≤ 4, is given as follows:
Existence problem
Let denote the maximum number of mutually unbiased bases in the d-dimensional Hilbert space Cd. It is an open question how many mutually unbiased bases, , one can find in Cd, for arbitrary d.
In general, if
is the prime-power factorization of d, where
then the maximum number of mutually unbiased bases which can be constructed satisfies
It follows that if the dimension of a Hilbert space d is an integer power of a prime number, then it is possible to find d + 1 mutually unbiased bases. This can be seen in the previous equation, as the prime number decomposition of d simply is . Therefore,
Thus, the maximum number of mutually unbiased bases is known when d is an integer power of a prime number, but it is not known for arbitrary d.
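As a small, hypothetical illustration (the helper names and code are my own, not from the source), the lower bound implied by combining the prime-power constructions over the factors of d — at least min_k(p_k^{n_k}) + 1 bases, which reduces to d + 1 when d itself is a prime power — can be computed directly:

```python
def prime_power_factors(d):
    """Prime-power factors p**n of d, found by trial division."""
    factors, p = [], 2
    while p * p <= d:
        if d % p == 0:
            q = 1
            while d % p == 0:
                d //= p
                q *= p
            factors.append(q)
        p += 1
    if d > 1:
        factors.append(d)
    return factors

def mub_lower_bound(d):
    # Combining the constructions for each prime-power factor yields
    # at least min_k(p_k**n_k) + 1 mutually unbiased bases.
    return min(prime_power_factors(d)) + 1

print(mub_lower_bound(9))   # 10: d + 1, since 9 is a prime power
print(mub_lower_bound(6))   # 3:  min(2, 3) + 1, the best known value for d = 6
```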
The smallest dimension that is not an integer power of a prime is d = 6. This is also the smallest dimension for which the number of mutually unbiased bases is not known. The methods used to determine the number of mutually unbiased bases when d is an integer power of a prime number cannot be used in this case. Searches for a set of four mutually unbiased bases when d = 6, both by using Hadamard matrices and by numerical methods, have been unsuccessful. The general belief is that the maximum number of mutually unbiased bases for d = 6 is 3.
Related problems
The MUBs problem seems similar in nature to the symmetric property of SIC-POVMs. William Wootters points out that a complete set of unbiased bases yields a geometric structure known as a finite projective plane, while a SIC-POVM (in any dimension that is a prime power) yields a finite affine plane, a type of structure whose definition is identical to that of a finite projective plane with the roles of points and lines exchanged. In this sense, the problems of SIC-POVMs and of mutually unbiased bases are dual to one another.
In dimension d = 3, the analogy can be taken further: a complete set of mutually unbiased bases can be directly constructed from a SIC-POVM. The 9 vectors of the SIC-POVM, together with the 12 vectors of the mutually unbiased bases, form a set that can be used in a Kochen–Specker proof. However, in 6-dimensional Hilbert space, a SIC-POVM is known, but no complete set of mutually unbiased bases has yet been discovered, and it is widely believed that no such set exists.
Search methods
Weyl group method
Let and be two unitary operators in the Hilbert space Cd such that
for some phase factor . If is a primitive root of unity, for example then the eigenbases of and are mutually unbiased.
By choosing the eigenbasis of to be the standard basis, we can generate another basis unbiased to it using a Fourier matrix. The elements of the Fourier matrix are given by
Other bases which are unbiased to both the standard basis and the basis generated by the Fourier matrix can be generated using Weyl groups. The dimension of the Hilbert space is important when generating sets of mutually unbiased bases using Weyl groups. When d is a prime number, then the usual d + 1 mutually unbiased bases can be generated using Weyl groups. When d is not a prime number, then it is possible that the maximal number of mutually unbiased bases which can be generated using this method is 3.
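A minimal sketch of the Fourier-matrix step (hypothetical code, assuming the usual discrete Fourier matrix with entries ω^{jk}/√d, ω = e^{2πi/d}): the columns of this matrix form an orthonormal basis that is unbiased with respect to the standard basis in any dimension d.

```python
import numpy as np

def fourier_basis(d):
    """Columns F[:, k] with entries omega**(j*k) / sqrt(d), omega = exp(2*pi*i/d)."""
    j, k = np.indices((d, d))
    return np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

d = 5
F = fourier_basis(d)
assert np.allclose(F.conj().T @ F, np.eye(d))   # the columns are orthonormal
assert np.allclose(np.abs(F) ** 2, 1 / d)       # unbiased to the standard basis
```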
Unitary operators method using finite fields
When d = p is prime, we define the unitary operators and by
where is the standard basis and is a root of unity.
Then the eigenbases of the following d + 1 operators are mutually unbiased:
For odd d, the t-th eigenvector of the operator is given explicitly by
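The sketch below (hypothetical code) implements this construction for prime d under the standard assumption that the d + 1 operators are Z, X, XZ, XZ², …, XZ^{d−1}, where Z|j⟩ = ω^j|j⟩ and X|j⟩ = |j+1 mod d⟩; since each operator has d distinct eigenvalues, the computed eigenbases are orthonormal, and the final check confirms pairwise unbiasedness.

```python
import numpy as np
from itertools import combinations

def mubs_prime(d):
    """d + 1 mutually unbiased bases for prime d (columns are basis vectors)."""
    omega = np.exp(2j * np.pi / d)
    Z = np.diag(omega ** np.arange(d))            # Z|j> = omega^j |j>
    X = np.roll(np.eye(d), 1, axis=0)             # X|j> = |j+1 mod d>
    bases = [np.eye(d, dtype=complex)]            # eigenbasis of Z (standard basis)
    for k in range(d):
        _, vecs = np.linalg.eig(X @ np.linalg.matrix_power(Z, k))
        bases.append(vecs)                        # eigenbasis of X Z^k
    return bases

d = 5
for A, B in combinations(mubs_prime(d), 2):
    assert np.allclose(np.abs(A.conj().T @ B) ** 2, 1 / d)
print(f"{d + 1} pairwise mutually unbiased bases found for d = {d}")
```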
When is a power of a prime, we make use of the finite field to construct a maximal set of d + 1 mutually unbiased bases. We label the elements of the computational basis of Cd using the finite field:
.
We define the operators and in the following way
where
is an additive character over the field and the addition and multiplication in the kets and is that of .
Then we form d + 1 sets of commuting unitary operators:
and for each
The joint eigenbases of the operators in one set are mutually unbiased to that of any other set. We thus have d + 1 mutually unbiased bases.
Hadamard matrix method
Given that one basis in a Hilbert space is the standard basis, then all bases which are unbiased with respect to this basis can be represented by the columns of a complex Hadamard matrix multiplied by a normalization factor. For d = 3 these matrices would have the form
The problem of finding a set of k+1 mutually unbiased bases therefore corresponds to finding k mutually unbiased complex Hadamard matrices.
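As a hedged illustration of this correspondence (the code and the particular matrices are my own examples, not from the source), two complex Hadamard matrices H1 and H2 describe mutually unbiased bases exactly when H1†H2/√d is again a complex Hadamard matrix, i.e. when every overlap between the normalized columns has modulus 1/√d:

```python
import numpy as np

def is_complex_hadamard(H, tol=1e-9):
    """All entries of modulus 1, and H / sqrt(d) unitary."""
    d = H.shape[0]
    return (np.allclose(np.abs(H), 1.0, atol=tol)
            and np.allclose(H.conj().T @ H, d * np.eye(d), atol=tol))

def are_unbiased_hadamards(H1, H2, tol=1e-9):
    """The bases H1/sqrt(d) and H2/sqrt(d) are mutually unbiased iff
    H1^dagger @ H2 / sqrt(d) is itself a complex Hadamard matrix."""
    d = H1.shape[0]
    return is_complex_hadamard(H1.conj().T @ H2 / np.sqrt(d), tol)

w = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])   # Fourier/Hadamard matrix, d = 3
H2 = np.diag([1, w, w]) @ F3                              # an example second Hadamard matrix
print(is_complex_hadamard(F3), are_unbiased_hadamards(F3, H2))   # True True
```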
An example of a one parameter family of Hadamard matrices in a 4-dimensional Hilbert space is
Entropic uncertainty relations
There is an alternative characterization of mutually unbiased bases that considers them in terms of uncertainty relations.
Entropic uncertainty relations are analogous to the Heisenberg uncertainty principle, and Hans Maassen and J. B. M. Uffink found that for any two bases B1 = {|b_j^(1)⟩} and B2 = {|b_k^(2)⟩}:
H_{B1}(|ψ⟩) + H_{B2}(|ψ⟩) ≥ −log c,
where c = max_{j,k} |⟨b_j^(1)|b_k^(2)⟩|² is the largest overlap between the two bases, and H_{B1}(|ψ⟩) and H_{B2}(|ψ⟩) are the respective entropies of the measurement outcomes in the bases B1 and B2, when measuring a given state |ψ⟩.
Entropic uncertainty relations are often preferable to the Heisenberg uncertainty principle, as they are not phrased in terms of the state to be measured, but in terms of c.
In scenarios such as quantum key distribution, we aim for measurement bases such that full knowledge of a state with respect to one basis implies minimal knowledge of the state with respect to the other bases. This implies a high entropy of measurement outcomes, and thus we call these strong entropic uncertainty relations.
For two bases, the lower bound of the uncertainty relation is maximized when the measurement bases are mutually unbiased, since mutually unbiased bases are maximally incompatible: the outcome of a measurement made in a basis unbiased to that in which the state is prepared is completely random. In fact, for a d-dimensional space, we have:
H_{B1}(|ψ⟩) + H_{B2}(|ψ⟩) ≥ log d
for any pair of mutually unbiased bases B1 and B2. This bound is optimal: if we measure a state from one of the bases, then the outcome has entropy 0 in that basis and an entropy of log d in the other.
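A small numerical check of this bound (hypothetical code; entropies are taken in bits, so the bound reads H1 + H2 ≥ log2 d): random states measured in the standard and Fourier bases of C³ always satisfy the relation, and a basis state saturates it.

```python
import numpy as np

def outcome_entropy(state, basis):
    """Shannon entropy (base 2) of the measurement-outcome distribution in the given basis."""
    probs = np.abs(basis.conj().T @ state) ** 2
    probs = probs[probs > 1e-15]
    return float(-np.sum(probs * np.log2(probs)))

d = 3
omega = np.exp(2j * np.pi / d)
B1 = np.eye(d, dtype=complex)                                   # standard basis
B2 = np.array([[omega ** (j * k) for k in range(d)]
               for j in range(d)]) / np.sqrt(d)                 # Fourier basis (MUB)

rng = np.random.default_rng(1)
for _ in range(1000):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    assert outcome_entropy(psi, B1) + outcome_entropy(psi, B2) >= np.log2(d) - 1e-9

e0 = B1[:, 0]            # a basis state: entropy 0 in B1, log2(d) in B2
print(outcome_entropy(e0, B1), outcome_entropy(e0, B2), np.log2(d))
```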
If the dimension of the space is a prime power, we can construct d + 1 MUBs, and then it has been found that
which is stronger than the relation we would get from pairing up the sets and then using the Maassen and Uffink equation. Thus we have a characterization of d + 1 mutually unbiased bases as those for which the uncertainty relations are strongest.
Although the case for two bases, and for d + 1 bases is well studied, very little is known about uncertainty relations for mutually unbiased bases in other circumstances.
When considering more than two, but fewer than d + 1 bases, it is known that large sets of mutually unbiased bases exist which exhibit very little uncertainty. This means that merely being mutually unbiased does not lead to high uncertainty, except when considering measurements in only two bases. Yet there do exist other measurements that are very uncertain.
Infinite dimension Hilbert spaces
While there has been investigation into mutually unbiased bases in infinite dimension Hilbert space, their existence remains an open question. It is conjectured that in a continuous Hilbert space, two orthonormal bases and are said to be mutually unbiased if
For the generalized position and momentum eigenstates and , the value of k is
The existence of mutually unbiased bases in a continuous Hilbert space remains open for debate, as further research into their existence is required before any conclusions can be reached.
Position states and momentum states are eigenvectors of Hermitian operators and , respectively. Weigert and Wilkinson were the first to notice that a linear combination of these operators also has eigenbases with some features typical of mutually unbiased bases. An operator has eigenfunctions proportional to with and the corresponding eigenvalues . If we parametrize and as and , the overlap between any eigenstate of the linear combination and any eigenstate of the position operator (both states normalized to the Dirac delta) is constant, but dependent on :
where and stand for eigenfunctions of and .
See also
Measurement in quantum mechanics
POVM
SIC-POVM
QBism
References
Quantum measurement
Unsolved problems in physics
Unsolved problems in mathematics
Hilbert spaces
Operator theory
Incidence geometry
Euclidean plane geometry
Algebraic geometry
Hypergraphs
Computer-assisted proofs | Mutually unbiased bases | [
"Physics",
"Mathematics"
] | 2,330 | [
"Unsolved problems in mathematics",
"Computer-assisted proofs",
"Euclidean plane geometry",
"Unsolved problems in physics",
"Quantum mechanics",
"Combinatorics",
"Fields of abstract algebra",
"Quantum measurement",
"Algebraic geometry",
"Hilbert spaces",
"Planes (geometry)",
"Mathematical prob... |
20,192,616 | https://en.wikipedia.org/wiki/SIMPLE%20algorithm | In computational fluid dynamics (CFD), the SIMPLE algorithm is a widely used numerical procedure to solve the Navier–Stokes equations. SIMPLE is an acronym for Semi-Implicit Method for Pressure Linked Equations.
The SIMPLE algorithm was developed by Prof. Brian Spalding and his student Suhas Patankar at Imperial College London in the early 1970s. Since then it has been extensively used by many researchers to solve different kinds of fluid flow and heat transfer problems.
Many popular books on computational fluid dynamics discuss the SIMPLE algorithm in detail.
A modified variant is the SIMPLER algorithm (SIMPLE Revised), that was introduced by Patankar in 1979.
Algorithm
The algorithm is iterative. The basic steps in the solution update are as follows:
Set the boundary conditions.
Compute the gradients of velocity and pressure.
Solve the discretized momentum equation to compute the intermediate velocity field.
Compute the uncorrected mass fluxes at faces.
Solve the pressure correction equation to produce cell values of the pressure correction.
Update the pressure field: p = p* + urf · p′, where p* is the pressure from the previous iteration, p′ is the pressure correction, and urf is the under-relaxation factor for pressure.
Update the boundary pressure corrections .
Correct the face mass fluxes:
Correct the cell velocities: v = v* − (Vol / a_P) · ∇p′, where ∇p′ is the gradient of the pressure corrections, a_P is the vector of central coefficients for the discretized linear system representing the velocity equation, and Vol is the cell volume (this update and the pressure update above are illustrated in the sketch after this list).
Update density due to pressure changes.
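The two under-relaxed update steps can be written out directly on cell arrays. The sketch below is a minimal, hypothetical illustration in NumPy (dummy 1D data, an assumed grid spacing of 0.1, and placeholder values for the pressure correction p′, the central coefficients a_P and the cell volumes), not a complete SIMPLE solver: in practice these updates sit inside the outer iteration that repeats the listed steps until the corrections become negligibly small.

```python
import numpy as np

# Dummy cell data standing in for one outer SIMPLE iteration (illustrative only).
urf     = 0.3                                   # under-relaxation factor for pressure
p_star  = np.array([1.00, 0.98, 0.95])          # pressure from the previous iteration
u_star  = np.array([0.10, 0.12, 0.11])          # intermediate (uncorrected) velocity
p_prime = np.array([0.004, 0.002, -0.003])      # cell pressure corrections from the correction solve
a_P     = np.array([2.0, 2.1, 2.0])             # central momentum-equation coefficients
vol     = np.array([1e-3, 1e-3, 1e-3])          # cell volumes

p_new = p_star + urf * p_prime                  # "update the pressure field"
grad_p_prime = np.gradient(p_prime, 0.1)        # gradient of the corrections (assumed dx = 0.1)
u_new = u_star - (vol / a_P) * grad_p_prime     # "correct the cell velocities"
print(p_new, u_new)
```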
See also
PISO algorithm
SIMPLEC algorithm
References
Computational fluid dynamics | SIMPLE algorithm | [
"Physics",
"Chemistry"
] | 292 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
20,197,040 | https://en.wikipedia.org/wiki/Integrated%20project%20delivery | Integrated project delivery (IPD) is a construction project delivery method that seeks the efficiency and involvement of all participants (people, systems, business structures and practices) through all phases of design, fabrication, and construction. IPD combines ideas from integrated practice and lean construction. The objectives of IPD are to increase productivity, reduce waste (waste being described as resources spent on activities that do not add value to the end product), avoid time overruns, enhance final product quality, and reduce conflicts between owners, architects and contractors during construction. IPD emphasizes the use of technology to facilitate communication between the parties involved in the construction process.
Background
The construction industry has suffered from a productivity decline since the 1960s while all other non-farm industries have seen large boosts in productivity. Proponents of Integrated project delivery argue that problems in contemporary construction, such as buildings that are behind schedule and over budget, are due to adverse relations between the owner, general contractor, and architect.
Using ideas developed by Toyota in their Toyota Production System and computer technology advances, the new focus in IPD is the final value created for the owner. In essence, IPD sees all allocation of resources for any activity that does not add value to the end product (the finished building) as wasteful.
IPD in practice
In practice, the IPD system is a process where all disciplines in a construction project work as one firm. The primary team members include the architect, key technical consultants as well as a general contractor and subcontractors. The growing use of building information modeling in the construction industry is allowing for easier sharing of information between project participants using IPD and is considered a tool to increase productivity throughout the construction process.
Unlike the design–build project delivery method which typically places the contractor in the leading role on a building project, IPD represents a return to the "master builder" concept where the entire building team including the owner, architect, general contractor, building engineers, fabricators, and subcontractors work collaboratively throughout the construction process.
Multi-Party Agreements
One common way to further the goals of IPD is through a multi-party agreement among key participants. In a multi-party agreement (MPA), the primary project participants execute a single contract specifying their respective roles, rights, obligations, and liabilities. In effect, the multi-party agreement creates a temporary virtual, and in some instances formal, organization to realize a specific project. Because a single agreement is used, each party understands its role in relationship to the other participants. Compensation structures are often open-book, so each party's interests and contributions are similarly transparent. Multi-party agreements require trust, as compensation is tied to overall project success and individual success depends on the contributions of all team members.
Common forms of multi-party agreements include
project alliances, which create a project structure in which the owner guarantees the direct costs of non-owner parties, but payment of profit, overhead and bonus depends on project outcome;
a single-purpose entity, which is a temporary, but formal, legal structure created to realize a specific project;
and relational contracts, which are similar to Project Alliances in that a virtual organization is created from individual entities, but it differs in its approach to compensation, risk sharing and decision making.
The role of technology in IPD
The adoption of IPD as a standard for collaborative good practice on construction projects presents its own problems. As most construction projects involve disparate stakeholders, traditional IT solutions are not conducive to collaborative working. Sharing files behind IT firewalls, large email attachment sizes and the ability to view all manner of file types without the native software all make IPD difficult.
The need to overcome collaborative IT challenges has been one of the drivers behind the growth of online construction collaboration technology. Since 2000, a new generation of technology companies evolved using SaaS to facilitate IPD.
This collaboration software streamlines the flow of documentation, communications and workflows ensuring everyone is working from 'one version of the truth'. Collaboration software allows users from disparate locations to keep all communications, documents & drawings, forms and data, amongst other types of electronic file, in one place. Version control is assured and users are able to view and mark up files online without the need for native software. The technology also enables project confidence and mitigates risk thanks to inbuilt audit trails.
Criticism
A significant criticism of IPD is that its single-minded focus on efficiency is often associated with a lack of concern for employee safety and well-being. This has led to poor safety performance and increased stress levels among construction workers, as they strive to reach higher goals with fewer resources.
Job Order Contracting
Job Order Contracting (JOC) is a form of integrated project delivery that specifically targets repair, renovation, and minor new construction. It has proven capable of delivering over 90% of projects on time, on budget, and to the satisfaction of the owner, contractors, and customer alike.
See also
Building information modeling
Lean construction
Patrick MacLeamy - the inventor of the MacLeamy Curve
References
Selected articles on integrated project delivery
2017 NIBS Delivering Better Facilities through Lean Construction and Owner Leadership
Integrated Project Delivery: A Platform for Efficient Construction BuildingGreen, November 1, 2008
Another look: Is IPD the solution? Daily Journal of Commerce, Oct. 20, 2008
Just a month old, the BIM Addendum has won national endorsements Philadelphia Business Journal, Aug. 21, 2008
Integrated Project Delivery Improves Efficiency, Streamlines Construction – Lean Management Approach Eliminates Waste and Enhances Project Outcome – Tradeline, July 16, 2008
Red Business, Blue Business: If architects do not take the leadership role on integrated practice, they will cede this turf DesignIntelligence, May 30, 2008
AIA: American Institute of Architects delivers new contract documents to encourage Integrated Project Delivery architosh, May 21, 2008
AIA Introduces Integrated Project Delivery Agreements Contract Magazine, April 28, 2008
Integrated Project Delivery pulls together people, systems, business structures and practices Daily Commercial News and Construction Record, Mar. 12, 2008
New Colorado Law Permits IPD For State and Local Governments – Colorado Real Estate Journal, February 5, 2008
Changing Project Delivery Strategy – An Implementation Framework Journal of Public Works Management & Policy, Jan. 2008; vol. 12: pp. 483–502
AIA and AIA California Council Partner Introduce Integrated Project Delivery: A Guide Cadalyst, Nov. 6, 2007
The Integrated Agreement for Lean Project Delivery Construction Lawyer, American Bar Association, McDonough Holland & Allen PC, Number 3, Volume 26, Summer 2006
Managing Integrated Project Delivery CMAA College of Fellows, November 2009
External links
Governor Ritter Signs Integrated Project Delivery Bill into Colorado Law
Integrated Project Delivery: First Principles for Owners and Teams – 3xPT Strategy Group, July 7, 2008: Construction Users Roundtable (CURT), Associated General Contractors of America (AGC), American Institute of Architects
National Institute of Building Sciences – many related articles on Integrated Project Delivery, Building Information Modeling
Design Responsibility in Integrated Project Delivery: Looking Back and Moving Forward – Donovan/Hatem LLP, Jan. 2008
HOUSE BILL 07-1342 Colorado State Government, June 1, 2007
Construction
Building engineering
Design | Integrated project delivery | [
"Engineering"
] | 1,459 | [
"Building engineering",
"Construction",
"Civil engineering",
"Design",
"Architecture"
] |
32,848,705 | https://en.wikipedia.org/wiki/Q-Meixner%E2%80%93Pollaczek%20polynomials | In mathematics, the q-Meixner–Pollaczek polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. give a detailed list of their properties.
Definition
The polynomials are given in terms of basic hypergeometric functions and the q-Pochhammer symbol by:
References
Orthogonal polynomials
Q-analogs
Special hypergeometric functions | Q-Meixner–Pollaczek polynomials | [
"Mathematics"
] | 76 | [
"Q-analogs",
"Combinatorics"
] |
32,850,365 | https://en.wikipedia.org/wiki/Line%20integral%20convolution | In scientific visualization, line integral convolution (LIC) is a method to visualize a vector field (such as fluid motion) at high spatial resolutions. The LIC technique was first proposed by Brian Cabral and Leith Casey Leedom in 1993.
In LIC, discrete numerical line integration is performed along the field lines (curves) of the vector field on a uniform grid. The integral operation is a convolution of a filter kernel and an input texture, often white noise. In signal processing, this process is known as a discrete convolution.
Overview
Traditional visualizations of vector fields use small arrows or lines to represent vector direction and magnitude. This method has a low spatial resolution, which limits the density of presentable data and risks obscuring characteristic features in the data. More sophisticated methods, such as streamlines and particle tracing techniques, can be more revealing but are highly dependent on proper seed points. Texture-based methods, like LIC, avoid these problems since they depict the entire vector field at point-like (pixel) resolution.
Compared to other integration-based techniques that compute field lines of the input vector field, LIC has the advantage that all structural features of the vector field are displayed, without the need to adapt the start and end points of field lines to the specific vector field. In other words, it shows the topology of the vector field.
In user testing, LIC was found to be particularly good for identifying critical points.
Algorithm
Informal description
LIC causes output values to be strongly correlated along the field lines, but uncorrelated in orthogonal directions. As a result, the field lines contrast with each other and stand out visually from the background.
Intuitively, the process can be understood with the following example: the flow of a vector field can be visualized by overlaying a fixed, random pattern of dark and light paint. As the flow passes by the paint, the fluid picks up some of the paint's color, averaging it with the color it has already acquired. The result is a randomly striped, smeared texture where points along the same streamline tend to have a similar color. Other physical examples include:
whorl patterns of paint, oil, or foam on a river
visualisation of magnetic field lines using randomly distributed iron filings
fine sand being blown by strong wind
Formal mathematical description
Although the input vector field and the result image are discretized, it pays to look at the method from a continuous viewpoint. Let v be the vector field given in some domain Ω. Although the input vector field is typically discretized, we regard v as defined at every point of Ω, i.e. we assume an interpolation. Streamlines, or more generally field lines, are tangent to the vector field at each point. They end either at the boundary of Ω or at critical points where v = 0. For the sake of simplicity, critical points and boundaries are ignored in the following.
A field line σ, parametrized by arc length s, is defined as dσ(s)/ds = v(σ(s)) / |v(σ(s))|. Let σ_r be the field line that passes through the point r for s = 0. Then the image gray value at r is set to
D(r) = ∫ from −L/2 to L/2 of k(s) N(σ_r(s)) ds,
where k(s) is the convolution kernel, N is the noise image, and L is the length of the field line segment that is followed.
D(r) has to be computed for each pixel in the LIC image. If carried out naively, this is quite expensive. First, the field lines have to be computed using a numerical method for solving ordinary differential equations, like a Runge–Kutta method, and then for each pixel the convolution along a field line segment has to be calculated.
The final image will normally be colored in some way. Typically, some scalar field in (like the vector length) is used to determine the hue, while the grayscale LIC output determines the brightness.
Different choices of convolution kernels and random noise produce different textures; for example, pink noise produces a cloudy pattern where areas of higher flow stand out as smearing, suitable for weather visualization. Further refinements in the convolution can improve the quality of the image.
Programming description
Algorithmically, LIC takes a vector field and noise texture as input, and outputs a texture. The process starts by generating in the domain of the vector field a random gray level image at the desired output resolution. Then, for every pixel in this image, the forward and backward streamline of a fixed arc length is calculated. The value assigned to the current pixel is computed by a convolution of a suitable convolution kernel with the gray levels of all the noise pixels lying on a segment of this streamline. This creates a gray level LIC image.
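A minimal, unoptimized sketch of this procedure in Python/NumPy (my own illustrative code, assuming simple Euler streamline tracing with a box kernel and nearest-pixel sampling; a production implementation would use a better integrator and kernel):

```python
import numpy as np

def lic(vx, vy, noise, length=20, step=0.5):
    """Basic line integral convolution with a box kernel.

    vx, vy : 2D arrays holding the vector field components.
    noise  : 2D array (same shape) of random gray values.
    length : number of Euler steps traced forward and backward.
    step   : step size in pixels along the (normalized) field direction.
    """
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for direction in (+1.0, -1.0):           # trace forward, then backward
                y, x = float(i), float(j)
                for _ in range(length):
                    yi, xi = int(round(y)), int(round(x))
                    if not (0 <= yi < h and 0 <= xi < w):
                        break                         # left the domain
                    total += noise[yi, xi]            # box kernel: plain average
                    count += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = np.hypot(u, v)
                    if norm < 1e-12:                  # critical point: stop tracing
                        break
                    x += direction * step * u / norm
                    y += direction * step * v / norm
            out[i, j] = total / max(count, 1)
    return out

# Example: circular flow around the image centre, convolved over white noise.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w] - h / 2.0
rng = np.random.default_rng(0)
image = lic(-yy, xx, rng.random((h, w)))
```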
Versions
Basic
Basic LIC images are grayscale images, without color and animation. While such LIC images convey the direction of the field vectors, they do not indicate orientation; for stationary fields, this can be remedied by animation. Basic LIC images do not show the length of the vectors (or the strength of the field).
Color
The length of the vectors (or the strength of the field) is usually coded in color; alternatively, animation can be used.
Animation
LIC images can be animated by using a kernel that changes over time. Samples at a constant time from the streamline would still be used, but instead of averaging all pixels in a streamline with a static kernel, a ripple-like kernel constructed from a periodic function multiplied by a Hann function acting as a window (in order to prevent artifacts) is used. The periodic function is then shifted along the period to create an animation.
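A hedged sketch of such a kernel (my own illustration of the idea, not the exact kernel used in the original animated-LIC work): a cosine ripple windowed by a Hann function, whose phase t is advanced from frame to frame.

```python
import numpy as np

def ripple_kernel(s, t, period=1.0):
    """Hann-windowed periodic kernel for animated LIC.

    s : arc-length positions along the streamline, scaled to [-1, 1].
    t : animation phase in [0, 1); increasing t shifts the ripple along s,
        which makes the flow appear to move in successive frames.
    """
    hann = 0.5 * (1.0 + np.cos(np.pi * s))                          # window, zero at |s| = 1
    ripple = 0.5 * (1.0 + np.cos(2.0 * np.pi * (s / period - t)))   # periodic part
    return hann * ripple

s = np.linspace(-1.0, 1.0, 41)
frame_0 = ripple_kernel(s, t=0.00)
frame_1 = ripple_kernel(s, t=0.25)   # same samples, ripple shifted for the next frame
```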
Fast LIC (FLIC)
The computation can be significantly accelerated by re-using parts of already computed field lines, specializing to a box function as convolution kernel and avoiding redundant computations during convolution. The resulting fast LIC method can be generalized to convolution kernels that are arbitrary polynomials.
Oriented Line Integral Convolution (OLIC)
Because LIC does not encode flow orientation, it cannot distinguish between streamlines of equal direction but opposite orientation. Oriented Line Integral Convolution (OLIC) solves this issue by using a ramp-like asymmetric kernel and a low-density noise texture. The kernel asymmetrically modulates the intensity along the streamline, producing a trace that encodes orientation; the low-density of the noise texture prevents smeared traces from overlapping, aiding readability.
Fast Rendering of Oriented Line Integral Convolution (FROLIC) is a variation that approximates OLIC by rendering each trace in discrete steps instead of as a continuous smear.
Unsteady Flow LIC (UFLIC)
For time-dependent vector fields (unsteady flow), a variant called Unsteady Flow LIC has been designed that maintains the coherence of the flow animation. An interactive GPU-based implementation of UFLIC has been presented.
Parallel
Since the computation of an LIC image is expensive but inherently parallel, the process has been parallelized and, with availability of GPU-based implementations, interactive on PCs.
Multidimensional
Note that the domain does not have to be a 2D domain: the method is applicable to higher dimensional domains using multidimensional noise fields. However, the visualization of the higher-dimensional LIC texture is problematic; one way is to use interactive exploration with 2D slices that are manually positioned and rotated. The domain does not have to be flat either; the LIC texture can be computed also for arbitrarily shaped 2D surfaces in 3D space.
Applications
This technique has been applied to a wide range of problems since it first was published in 1993, both scientific and creative, including:
Representing vector fields:
visualization of steady (time-independent) flows (streamlines)
visual exploration of 2D autonomous dynamical systems
wind mapping
water flow mapping
Artistic effects for image generation and stylization:
pencil drawing (automatic pencil drawing generation technique using LIC pencil filter)
automatic generation of hair textures
creating marbling texture
Terrain generalization:
creating generalized shaded relief
Implementations
GPU Based Image Processing Tools by Raymond McGuire
ParaView : Line Integral Convolution
A 2D flow visualization tool based on LIC and RK4, developed using C++ and VTK, by Andres Bejarano
Wolfram Research (2008), LineIntegralConvolutionPlot, Wolfram Language function, https://reference.wolfram.com/language/ref/LineIntegralConvolutionPlot.html (updated 2014).
See also
Weighted arithmetic mean
References
Numerical function drawing
Vector calculus
Fluid dynamics
Complex analysis
Computer graphics | Line integral convolution | [
"Chemistry",
"Engineering"
] | 1,758 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
30,246,751 | https://en.wikipedia.org/wiki/Manley%E2%80%93Rowe%20relations | The Manley–Rowe relations are mathematical expressions developed originally for electrical engineers to predict the amount of energy in a wave that has multiple frequencies. They have since been found to describe systems in non-linear optics,
fluid mechanics and the theory of non-linear dynamical systems, as they provide a pair of invariants or conserved quantities for the three-wave equation. For example, in a resonant interaction in non-linear optics, the Manley–Rowe relations can be interpreted as saying that one photon is created as two others are destroyed (or conversely, two are created when one is destroyed). For the three-wave equation, the Manley–Rowe invariants can be related to the modular invariants g₂ and g₃ of the Weierstrass ℘-function. This essentially follows because the three-wave interaction has exact solutions that are given by elliptic functions.
History
The original papers, written by two researchers at Bell Labs, J. M. Manley and H. E. Rowe between 1956 and 1960
were for an electrical circuit containing nonlinear capacitors and inductors. One or more oscillators, operating at specified frequencies, are connected to the input of this circuit. The Manley–Rowe relations predict the energy present in waves at various frequencies, including new frequencies (such as harmonics and sidebands) that arise in the circuit due to nonlinearity. The theory is based partly on the principle of conservation of energy. It requires that energy storage in the circuit is a stationary process that varies with time only due to the oscillations and not due to some steady increase or decrease with time. More precisely, the theory describes a resonant interaction between waves at various different frequencies; the resonant interaction describes which frequencies can mix and interact, and the strengths by which they couple.
Because the Manley–Rowe relations are based on general concepts like nonlinear waves and conservation of energy, their use is not limited to the original application in radio-frequency electrical circuits. They have also found use in other scientific fields, for example nonlinear optics. In the electrical circuit for the original derivation of Manley–Rowe relations, capacitors and inductors store energy from a wave and then release it. Other physical systems that involve energy storage for waves, and nonlinear generation of new waves, can make use of the same relations.
John Manley and Harrison Rowe were protégés of Ralph Hartley at Bell Laboratories. The work with nonlinear reactances (inductors and capacitors) was started back in 1917 by John Burton and Eugene Peterson. When Hartley joined Bell Laboratories after being part of Western Electric, he started a research group on nonlinear oscillations. This group was later joined by Peterson, Manley, and Rowe.
Notes
Applied mathematics
Electrical engineering | Manley–Rowe relations | [
"Mathematics",
"Engineering"
] | 562 | [
"Applied mathematics",
"Electrical engineering"
] |
30,250,063 | https://en.wikipedia.org/wiki/Float%20%28liquid%20level%29 | Liquid level floats, also known as float balls, are spherical, cylindrical, oblong or similarly shaped objects, made from either rigid or flexible material, that are buoyant in water and other liquids. They are non-electrical hardware frequently used as visual sight-indicators for surface demarcation and level measurement. They may also be incorporated into switch mechanisms or translucent fluid-tubes as a component in monitoring or controlling liquid level.
Liquid level floats, or float switches, use the principle of material buoyancy (differential densities) to follow fluid levels. Solid floats are often made of plastics with a density less than water or other application liquid, and so they float. Hollow floats filled with air are much less dense than water or other liquids, and are appropriate for some applications.
Stainless steel magnetic floats are tubed magnetic floats, used for reed switch activation; they have a hollow tubed connection running through them. These magnetic floats have become standard equipment where strength, corrosion resistance and buoyancy are necessary. They are manufactured by welding two drawn half shells together. The welding process is critical for the strength and durability of the float. The weld is a full penetration weld providing a smoothly finished seam, hardly distinguishable from the rest of the float surface.
Liquid level floats can also be constructed from thermoplastic corrosion-resistant materials. These materials include PVC, polypropylene and PVDF. An example of an application that would require such materials is a manufacturer of metal plating and metal finishing lines that needs continuous level measurement of chromic acid tanks. Stainless steel would rapidly corrode in chromic acid, which is why one would have the option to go with a PVDF float, a material with great chemical resistance to chromic acid.
Thermoplastic level floats are a great alternative to some other forms of level sensors such as ultrasonic or radar when dealing with corrosive chemical applications. This is because some chemicals create vapor blankets or corrosive fumes inside of tanks. Liquid level floats are unaffected by any foam, vapor, turbulence or condensate inside of the tanks that would normally cause issues with an ultrasonic or radar level sensor.
See also
Level sensor
Float switch
References
Liquids
Fluid dynamics
Volumetric instruments
Mechanical engineering
Buoyancy | Float (liquid level) | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 469 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Phases of matter",
"Measuring instruments",
"Volumetric instruments",
"Mechanical engineering",
"Piping",
"Fluid dynamics",
"Matter",
"Liquids"
] |
22,724,191 | https://en.wikipedia.org/wiki/Eurocode%203%3A%20Design%20of%20steel%20structures | In the Eurocode series of European standards (EN) related to construction, Eurocode 3: Design of steel structures (abbreviated EN 1993 or, informally, EC 3) describes how to design steel structures, using the limit state design philosophy.
It was approved by the European Committee for Standardization (CEN) on 16 April 2004. Eurocode 3 comprises 20 documents dealing with the different aspects of steel structure design:
EN 1993-1-1: General rules and rules for buildings.
EN 1993-1-2: General rules - Structural fire design.
EN 1993-1-3: General rules - Supplementary rules for cold-formed members and sheeting.
EN 1993-1-4: General rules - Supplementary rules for stainless steels.
EN 1993-1-5: General rules - Plated structural elements.
EN 1993-1-6: General rules - Strength and stability of shell structures.
EN 1993-1-7: General rules - Strength and stability of planar plated structures subject to out of plane loading.
EN 1993-1-8: Design of joints.
EN 1993-1-9: Fatigue.
EN 1993-1-10: Material toughness and through-thickness properties.
EN 1993-1-11: Design of structures with tension components.
EN 1993-1-12: General - High strength steels.
EN 1993-2: Steel bridges.
EN 1993-3-1: Towers, masts and chimneys – Towers and masts.
EN 1993-3-2: Towers, masts and chimneys – Chimneys
EN 1993-4-1: Silos
EN 1993-4-2: Storage tanks
EN 1993-4-3: Pipelines
EN 1993-5: Deep foundation (piling)
EN 1993-6: Crane supporting structures
Eurocode 3 applies to the design of buildings and civil engineering works in steel. It complies with the principles and requirements for the safety and serviceability of structures, the basis of their design and verification that are given in EN 1990 – Basis of structural design. It is only concerned with requirements for resistance, serviceability, durability and fire resistance.
Eurocode 3 is intended to be used in conjunction with:
EN 1990: Eurocode - Basis of structural design;
EN 1991: Eurocode 1 - Actions on structures;
ENs, ETAGs and ETAs for construction products relevant for steel structures;
EN 1090 Execution of steel structures – Technical requirements;
EN 1992 to EN 1999 when steel structures or steel components are referred to.
Part 1-1: General rules and rules for buildings
EN 1993-1-1 gives basic design rules for steel structures with material thicknesses t ≥ 3 mm. It also gives supplementary provisions for the structural design of steel buildings.
Contents
General
Basis of design
Materials
Durability
Structural analysis
Ultimate limit states
Serviceability limit states
Part 1-2: General rules - Structural fire design
EN 1993-1-2 deals with the design of steel structures for the accidental situation of fire exposure and it has to be used in conjunction with EN 1993-1-1 and EN 1991-1-2. This part only identifies differences from, or supplements to, normal temperature design. EN 1993-1-2 deals only with passive methods of fire protection.
Part 1-3: General rules - Supplementary rules for cold-formed members and sheeting
EN 1993-1-3 gives design requirements for cold-formed thin gauge members and sheeting. It applies to cold-formed steel products made from coated or uncoated thin gauge hot or cold rolled sheet or strip, that have been cold-formed by such processes as cold-rolled forming or press-braking. It may also be used for the design of profiled steel sheeting for composite steel and concrete slabs at the construction stage, see EN 1994. The execution of steel structures made of cold-formed thin gauge members and sheeting is covered in EN 1090.
Part 1-4: General rules - Supplementary rules for stainless steels
EN 1993-1-4 deals with the additional requirements for the design of steel structures made of stainless steel and it has to be used in conjunction with EN 1993-1-1 and EN 1993-1-3.
Part 1-5: Plated structural elements
EN 1993-1-5 gives design requirements of stiffened and unstiffened plates which are subject to inplane forces.
Part 1-6: Strength and Stability of Shell Structures
EN 1993-1-6 gives design requirements for plated steel structures that have the form of a shell of revolution.
Part 1-7: General Rules - Supplementary rules for planar plated structural elements with out of plane loading
EN 1993-1-7: provides principles and rules of application for the structural design of stiffened and unstiffened plates loaded with out of plane actions and it has to be used in conjunction with EN 1993-1-1.
Part 1-8: Design of joints
EN 1993-1-8 gives design methods for the design of joints subject to predominantly static loading using steel grades S235, S275, S355 and S460. More specifically, it gives detailed application rules to determine the static design resistances of uniplanar and multiplanar joints in lattice structures composed of circular, square or rectangular hollow sections, and of uniplanar joints in lattice structures composed of combinations of hollow sections with open sections (space frames and trusses).
Part 1-9: Fatigue
EN 1993-1-9 gives methods for the assessment of fatigue resistance of members, connections and joints subjected to fatigue loading. These methods are derived from fatigue tests with large scale specimens, that include effects of geometrical and structural imperfections from material production and execution (e.g. the effects of tolerances and residual stresses from welding).
Part 1-10: Material Toughness and through-thickness properties
EN 1993-1-10 provides the guidelines for the selection of steel for fracture toughness and through-thickness properties of welded elements where there is a significant risk of lamellar tearing during the fabrication process.
Part 1-11: Design of Structures with tension components
EN 1993-1-11 gives design rules for structures with tension components made of steel which, due to their connections, are adjustable and replaceable. Because of these adjustability and replaceability properties, such components are mostly pre-fabricated, delivered on-site and installed into the structure as a whole. Non-adjustable and non-replaceable components are outside the scope of EN 1993-1-11.
Part 1-12: High Strength steels
EN 1993-1-12 gives rules that can be used in conjunction with all the other part of EN 1993 to enable steel structures to be designed with steel of grades greater than S460 up to S700.
Part 2: Steel Bridges
EN 1993-2 gives a general basis for the structural design of steel bridges and steel parts of composite bridges. It gives provisions that supplement, modify or supersede the equivalent provisions given in the various parts of EN 1993-1. This standard is concerned only with the resistance, serviceability and durability of bridge structures. Other aspects of design are not considered.
Part 3-1: Towers, masts and chimneys
EN 1993-3-1 applies to the structural design of vertical steel towers, masts and chimneys, and is concerned only with their resistance, serviceability and durability.
Part 3-2: Towers, masts and chimneys - Chimneys
EN 1993-3-2 applies to the structural design of vertical steel chimneys of circular or conical section. It covers chimneys that are cantilevered, supported at intermediate levels or guyed. It is concerned only with the requirement for resistance (strength, stability and fatigue) of steel chimneys. The term Chimney is used to refer to:
chimney structures,
steel cylindrical elements of towers,
steel cylindrical shafts of guyed masts.
Part 4-1: Silos
EN 1993-4-1 provides principles and application rules for the structural design of steel silos of circular or rectangular plan-form, being free standing or supported and is concerned only with the requirements for resistance and stability of steel silos.
Part 4-2: Tanks
EN 1993-4-2 provides principles and application rules for the structural design of vertical cylindrical above ground steel storage tanks for liquid products with the following characteristics:
characteristic internal pressures above the liquid level not less than −100 mbar and not more than 500 mbar, i.e. outside the scope of the Pressure Equipment Directive,
design metal temperature in the range of −50 °C to +300 °C,
maximum design liquid level not higher than the top of the cylindrical shell.
Part 4-3: Pipelines
EN 1993-4-3 deals with the analysis and design of steel pipelines used for the transport of liquids and gases under normal temperatures.
Part 5: Piling
EN 1993-5 gives design rules for steel sheet piling and bearing piles to supplement the generic rules in EN 1993-1 and is intended to be used with Eurocodes EN 1990 - Basis of design, EN 1991 - Actions on structures and EN 1997-1 for Geotechnical Design.
Part 6: Crane supporting structures
EN 1993-6 gives principles and application rules for the structural design of crane runway beams and other crane supporting structures, including columns and other members fabricated from steel. This part is intended to be used with Eurocode EN 1991-1, and it covers overhead crane runways inside buildings and outdoor overhead crane runways.
References
External links
The EN Eurocodes
EN 1993 - Eurocode 3: Design of steel structures - "Eurocodes: Background and applications" workshop
01993
3
Structural steel | Eurocode 3: Design of steel structures | [
"Engineering"
] | 1,947 | [
"Structural engineering",
"Structural steel"
] |
22,727,349 | https://en.wikipedia.org/wiki/Ceramic%20matrix%20composite | In materials science ceramic matrix composites (CMCs) are a subgroup of composite materials and a subgroup of ceramics. They consist of ceramic fibers embedded in a ceramic matrix. The fibers and the matrix both can consist of any ceramic material, including carbon and carbon fibers.
Introduction
The motivation to develop CMCs was to overcome the problems associated with the conventional technical ceramics like alumina, silicon carbide, aluminum nitride, silicon nitride or zirconia – they fracture easily under mechanical or thermo-mechanical loads because of cracks initiated by small defects or scratches. The crack resistance is very low, as in glass.
To increase the crack resistance or fracture toughness, particles (so-called monocrystalline whiskers or platelets) were embedded into the matrix. However, the improvement was limited, and those products found application only in some ceramic cutting tools.
The integration of long multi-strand fibers has drastically increased the crack resistance, elongation and thermal shock resistance, and resulted in several new applications. The reinforcements used in ceramic matrix composites (CMC) serve to enhance the fracture toughness of the combined material system while still taking advantage of the inherent high strength and Young's modulus of the ceramic matrix.
The most common reinforcement embodiment is a continuous-length ceramic fiber, with an elastic modulus that is typically somewhat lower than the matrix. The functional role of this fiber is (1) to increase the CMC stress for the progress of micro-cracks through the matrix, thereby increasing the energy expended during crack propagation; and then (2) when thru-thickness cracks begin to form across the CMC at higher stress (proportional limit stress, PLS), to bridge these cracks without fracturing, thereby providing the CMC with a high ultimate tensile strength (UTS). In this way, ceramic fiber reinforcements not only increase the composite structure's initial resistance to crack propagation but also allow the CMC to avoid abrupt brittle failure that is characteristic of monolithic ceramics.
This behavior is distinct from the behavior of ceramic fibers in polymer matrix composites (PMC) and metal matrix composites (MMC), where the fibers typically fracture before the matrix due to the higher failure strain capabilities of these matrices.
Carbon (C), special silicon carbide (SiC), alumina () and mullite () fibers are most commonly used for CMCs. The matrix materials are usually the same, that is C, SiC, alumina and mullite. In certain ceramic systems, including SiC and silicon nitride, processes of abnormal grain growth (AGG) may result in a microstructure exhibiting elongated large grains in a matrix of finer rounded grains. AGG-derived microstructures exhibit toughening due to crack bridging and crack deflection by the elongated grains, which can be considered as an in-situ produced fibre reinforcement. Recently, ultra-high temperature ceramics (UHTCs) have been investigated as the ceramic matrix in a new class of CMCs, the so-called ultra-high temperature ceramic matrix composites (UHTCMC) or ultra-high temperature ceramic composites (UHTCC).
Generally, CMC names include a combination of type of fiber/type of matrix. For example, C/C stands for carbon-fiber-reinforced carbon (carbon/carbon), or C/SiC for carbon-fiber-reinforced silicon carbide. Sometimes the manufacturing process is included, and a C/SiC composite manufactured with the liquid polymer infiltration (LPI) process (see below) is abbreviated as LPI-C/SiC.
The important commercially available CMCs are C/C, C/SiC, SiC/SiC and Al2O3/Al2O3. They differ from conventional ceramics in the following properties, presented in more detail below:
Elongation to rupture up to 1%
Strongly increased fracture toughness
Extreme thermal shock resistance
Improved dynamical load capability
Anisotropic properties following the orientation of fibers
Manufacture
The manufacturing processes usually consist of the following three steps:
Lay-up and fixation of the fibers, shaped like the desired component
Infiltration of the matrix material
Final machining and, if required, further treatments like coating or impregnation of the intrinsic porosity.
The first and the last step are almost the same for all CMCs:
In step one, the fibers, often named rovings, are arranged and fixed using techniques used in fiber-reinforced plastic materials, such as lay-up of fabrics, filament winding, braiding and knotting. The result of this procedure is called fiber-preform or simply preform.
For the second step, five different procedures are used to fill the ceramic matrix in between the fibers of the preform:
Deposition out of a gas mixture
Pyrolysis of a pre-ceramic polymer
Chemical reaction of elements
Sintering at a relatively low temperature in the range
Electrophoretic deposition of a ceramic powder
Procedures one, two and three find applications with non-oxide CMCs, whereas the fourth one is used for oxide CMCs; combinations of these procedures are also practiced. The fifth procedure is not yet established in industrial processes. All procedures have sub-variations, which differ in technical details. All procedures yield a porous material.
The third and final step of machining – grinding, drilling, lapping or milling – has to be done with diamond tools. CMCs can also be processed with a water jet, laser, or ultrasonic machining.
Ceramic fibers
Ceramic fibers in CMCs can have a polycrystalline structure, as in conventional ceramics. They can also be amorphous or have inhomogeneous chemical composition, which develops upon pyrolysis of organic precursors. The high process temperatures required for making CMCs preclude the use of organic, metallic or glass fibers. Only fibers stable at temperatures above can be used, such as fibers of alumina, mullite, SiC, zirconia or carbon. Amorphous SiC fibers have an elongation capability above 2% – much larger than in conventional ceramic materials (0.05 to 0.10%). The reason for this property of SiC fibers is that most of them contain additional elements like oxygen, titanium and/or aluminum yielding a tensile strength above 3 GPa. These enhanced elastic properties are required for various three-dimensional fiber arrangements (see example in figure) in textile fabrication, where a small bending radius is essential.
Manufacturing procedures
Matrix deposition from a gas phase
Chemical vapor deposition (CVD) is well suited for this purpose. In the presence of a fiber preform, CVD takes place in between the fibers and their individual filaments and therefore is called chemical vapor infiltration (CVI). One example is the manufacture of C/C composites: a C-fiber preform is exposed to a mixture of argon and a hydrocarbon gas (methane, propane, etc.) at a pressure of around or below 100 kPa and a temperature above 1000 °C. The gas decomposes, depositing carbon on and between the fibers. Another example is the deposition of silicon carbide, which is usually conducted from a mixture of hydrogen and methyltrichlorosilane (MTS, CH3SiCl3; it is also common in silicone production). Under defined conditions this gas mixture deposits fine and crystalline silicon carbide on the hot surface within the preform.
This CVI procedure leaves a body with a porosity of about 10–15%, as access of reactants to the interior of the preform is increasingly blocked by deposition on the exterior.
Matrix forming via pyrolysis of C- and Si-containing polymers
Hydrocarbon polymers shrink during pyrolysis, and upon outgassing form carbon with an amorphous, glass-like structure, which by additional heat treatment can be changed to a more graphite-like structure. Other special polymers, known as preceramic polymers, in which some carbon atoms are replaced by silicon atoms (the so-called polycarbosilanes), yield amorphous silicon carbide of more or less stoichiometric composition. A large variety of such silicon carbide, silicon oxycarbide, silicon carbonitride and silicon oxynitride precursors already exist, and more preceramic polymers for the fabrication of polymer derived ceramics are being developed. To manufacture a CMC material, the fiber preform is infiltrated with the chosen polymer. Subsequent curing and pyrolysis yield a highly porous matrix, which is undesirable for most applications. Further cycles of polymer infiltration and pyrolysis are performed until the final and desired quality is achieved. Usually, five to eight cycles are necessary.
The process is called liquid polymer infiltration (LPI), or polymer infiltration and pyrolysis (PIP). Here also a porosity of about 15% is common due to the shrinking of the polymer. The porosity is reduced after every cycle.
Matrix forming via chemical reaction
With this method, one material located between the fibers reacts with a second material to form the ceramic matrix. Some conventional ceramics are also manufactured by chemical reactions. For example, reaction-bonded silicon nitride (RBSN) is produced through the reaction of silicon powder with nitrogen, and porous carbon reacts with silicon to form reaction bonded silicon carbide, a silicon carbide which contains inclusions of a silicon phase. An example of CMC manufacture, which was introduced for the production of ceramic brake discs, is the reaction of silicon with a porous preform of C/C. The process temperature is above the melting point of silicon (about 1414 °C), and the process conditions are controlled such that the carbon fibers of the C/C-preform almost completely retain their mechanical properties. This process is called liquid silicon infiltration (LSI). Sometimes, and because of its starting point with C/C, the material is abbreviated as C/C-SiC. The material produced in this process has a very low porosity of about 3%.
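The net chemistry of this step is simply the formation of silicon carbide from molten silicon and the carbon of the preform; the equation below is a simplified summary, and, as noted above, some unreacted silicon remains as inclusions in the matrix:
Si (liquid) + C (solid) → SiC (solid)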
Matrix forming via sintering
This process is used to manufacture oxide fiber/oxide matrix CMC materials. Since most ceramic fibers cannot withstand normal sintering temperatures, special precursor liquids are used to infiltrate the preform of oxide fibers. These precursors allow sintering, that is ceramic-forming processes, at temperatures of 1000–1200 °C. They are, for example, based on mixtures of alumina powder with the liquids tetra-ethyl-orthosilicate (as Si donor) and aluminium-butylate (as Al donor), which yield a mullite matrix. Other techniques, such as sol–gel process chemistry, are also used. CMCs obtained with this process usually have a high porosity of about 20%.
Matrix formed via electrophoresis
In the electrophoretic process, electrically charged particles dispersed in a special liquid are transported through an electric field into the preform, which has the opposite electrical charge polarity. This process is under development, and is not yet used industrially. Some remaining porosity must be expected here, too.
Properties
Mechanical properties
Basic mechanism of mechanical properties
The high fracture toughness or crack resistance mentioned above is a result of the following mechanism: under load the ceramic matrix cracks, like any ceramic material, at an elongation of about 0.05%. In CMCs the embedded fibers bridge these cracks (see picture). This mechanism works only when the matrix can slide along the fibers, which means that there must be a weak bond between the fibers and matrix. A strong bond would require a very high elongation capability of the fiber bridging the crack and would result in a brittle fracture, as with conventional ceramics. The production of CMC material with high crack resistance requires a step to weaken this bond between the fibers and matrix. This is achieved by depositing a thin layer of pyrolytic carbon or boron nitride on the fibers, which weakens the bond at the fiber/matrix interface, leading to the fiber pull-out at crack surfaces, as shown in the SEM picture at the top of this article. In oxide-CMCs, the high porosity of the matrix is sufficient to establish a weak bond.
Properties under tensile and bending loads, crack resistance
The influence and quality of the fiber interface can be evaluated through mechanical properties. Measurements of the crack resistance were performed with notched specimens (see figure) in so-called single-edge-notch-bend (SENB) tests. In fracture mechanics, the measured data (force, geometry and crack surface) are normalized to yield the so-called stress intensity factor (SIF), KIc. Because of the complex crack surface (see figure at the top of this article) the real crack surface area can not be determined for CMC materials. The measurements, therefore, use the initial notch as the crack surface, yielding the formal SIF shown in the figure. This requires identical geometry for comparing different samples. The area under these curves thus gives a relative indication of the energy required to drive the crack tip through the sample (force times path length gives energy). The maxima indicate the load level necessary to propagate the crack through the sample. Compared to the sample of conventional SiSiC ceramic, two observations can be made:
All tested CMC materials need up to several orders of magnitude more energy to propagate the crack through the material.
The force required for crack propagation varies between different types of CMCs.
In the table, CVI, LPI, and LSI denote the manufacturing process of the C/SiC-material. Data on the oxide CMC and SiSiC are taken from manufacturer datasheets. The tensile strengths of SiSiC and Al2O3 were calculated from measurements of elongation to fracture and Young's modulus, since generally only bending strength data are available for those ceramics. Averaged values are given in the table, and significant differences, even within one manufacturing route, are possible.
Tensile tests of CMCs usually show nonlinear stress-strain curves, which look as if the material deforms plastically. It is called quasi-plastic, because the effect is caused by the microcracks, which are formed and bridged with increasing load. Since the Young's modulus of the load-carrying fibers is generally lower than that of the matrix, the slope of the curve decreases with increasing load.
Curves from bending tests look similar to those of the crack resistance measurements shown above.
The following features are essential in evaluating bending and tensile data of CMCs:
CMC materials with a low matrix content (down to zero) have a high tensile strength (close to the tensile strength of the fiber), but low bending strength.
CMC materials with a low fiber content (down to zero) have a high bending strength (close to the strength of the monolithic ceramic), but no elongation beyond 0.05% under tensile load.
The primary quality criterion for CMCs is crack resistance behavior or fracture toughness.
High-temperature creep properties
Although CMCs are able to operate at very high temperatures, creep deformation still occurs around 1000 °C, in the range of certain high-temperature applications. Creep acts on either the matrix or the fibers, depending on the creep mismatch ratio (CMR) between the effective fiber strain rate and the effective matrix strain rate. The component with the smaller strain rate bears the load and is the one susceptible to creep.
The three main creep stages are governed by the creep mismatch ratio. During primary creep, internal stresses are redistributed, allowing the CMR to approach unity; the CMR remains close to unity during the secondary creep stage. The tertiary creep stage, where failure occurs, can be governed either by fiber creep, in which failure occurs through fiber fracture, or by matrix creep, which leads to matrix cracking. Usually, the creep strength of the matrix is lower than that of the fibers, so the fibers bear the load. However, matrix cracking can still occur at weak fiber regions, which in oxidizing atmospheres results in oxidation and weakening of the material. Increasing temperature, applied stress, and defect density lead to greater creep deformation and earlier failure.
A rule of mixtures may be applied to find the strain rate of the composite given the strain rates of the constituents. For particulates, a simple sum of the product of the cross-sectional area fraction and creep response of each constituent can determine the composite's total creep response. For fibers, a sum of the constituents’ creep response divided by the cross-sectional area fraction determines the total creep response.
Particulates: ε̇_composite = Σ_i A_i · ε̇_i
Fibers: ε̇_composite = Σ_i ε̇_i / A_i
where ε̇_i is the creep response of constituent i and A_i is its cross-sectional area fraction.
Other mechanical properties
In many CMC components the fibers are arranged as 2-dimensional (2D) stacked plain or satin weave fabrics. Thus the resulting material is anisotropic or, more specifically, orthotropic. A crack between the layers is not bridged by fibers. Therefore, the interlaminar shear strength (ILS) and the strength perpendicular to the 2D fiber orientation are low for these materials. Delamination can occur easily under certain mechanical loads. Three-dimensional fiber structures can improve this situation (see micrograph above).
The compressive strengths shown in the table are lower than those of conventional ceramics, where values above 2000 MPa are common; this is a result of porosity.
The composite structure allows high dynamical loads. In the so-called low-cycle-fatigue (LCF) or high-cycle-fatigue (HCF) tests the material experiences cyclic loads under tensile and compressive (LCF) or only tensile (HCF) load. The higher the initial stress the shorter the lifetime and the smaller the number of cycles to rupture. With an initial load of 80% of the strength, a SiC/SiC sample survived about 8 million cycles (see figure).
The Poisson's ratio shows an anomaly when measured perpendicular to the plane of the fabric because interlaminar cracks increase the sample thickness.
Thermal and electrical properties
The thermal and electrical properties of the composite are a result of its constituents, namely fibers, matrix, and pores as well as their composition. The orientation of the fibers yields anisotropic data. Oxide CMCs are very good electrical insulators, and because of their high porosity, their thermal insulation is much better than that of conventional oxide ceramics.
The use of carbon fibers increases the electrical conductivity, provided the fibers contact each other and the voltage source. The silicon carbide matrix is a good thermal conductor. Electrically, it is a semiconductor, and its resistance therefore decreases with increasing temperature. Compared to (poly)crystalline SiC, the amorphous SiC fibers are relatively poor conductors of heat and electricity.
Comments for the table: (p) and (v) refer to data parallel and vertical to fiber orientation of the 2D-fiber structure, respectively. LSI material has the highest thermal conductivity because of its low porosity – an advantage when using it for brake discs. These data are subject to scatter depending on details of the manufacturing processes.
Conventional ceramics are very sensitive to thermal stress because of their high Young's modulus and low elongation capability. Temperature differences and low thermal conductivity create locally different elongations, which together with the high Young's modulus generate high stress. This results in cracks, rupture, and brittle failure. In CMCs, the fibers bridge the cracks, and the components show no macroscopic damage, even if the matrix has cracked locally. The application of CMCs in brake disks demonstrates the effectiveness of ceramic composite materials under extreme thermal shock conditions.
Corrosion properties
Data on the corrosion behaviour of CMCs are scarce except for oxidation at temperatures above 1000 °C. These properties are determined by the constituents, namely the fibers and matrix. Ceramic materials, in general, are very stable to corrosion. The broad spectrum of manufacturing techniques with different sintering additives, mixtures, glass phases, and porosities is crucial for the results of corrosion tests. Fewer impurities and exact stoichiometry lead to less corrosion. Amorphous structures and non-ceramic chemicals frequently used as sintering aids are starting points of corrosive attack.
Alumina
Pure alumina shows excellent corrosion resistance against most chemicals. Amorphous glass and silica phases at the grain boundaries determine the speed of corrosion in concentrated acids and bases and result in creep at high temperatures. These characteristics limit the use of alumina. For molten metals, alumina is used only with gold and platinum.
Alumina fibers
These fibers demonstrate corrosion properties similar to alumina, but commercially available fibers are not very pure and therefore less resistant. Because of creep at temperatures above 1000 °C, there are only a few applications for oxide CMCs.
Carbon
The most significant corrosion of carbon occurs in the presence of oxygen at elevated temperatures (above several hundred degrees Celsius). It burns to form carbon dioxide and/or carbon monoxide. It also oxidizes in strong oxidizing agents like concentrated nitric acid. In molten metals, it dissolves and forms metal carbides. Carbon fibers do not differ from carbon in their corrosion behavior.
Silicon carbide
Pure silicon carbide is one of the most corrosion-resistant materials. Only strong bases, oxygen at high temperatures, and molten metals react with it to form carbides and silicides. The reaction with oxygen forms SiO2 and CO2, whereby a surface layer of SiO2 slows down subsequent oxidation (passive oxidation). High temperatures combined with a low partial pressure of oxygen result in so-called active oxidation, in which CO, CO2 and gaseous SiO are formed, causing rapid loss of SiC. If the SiC matrix is produced other than by CVI, corrosion resistance is not as good. This is a consequence of porosity in the amorphous LPI matrix and of residual silicon in the LSI matrix.
Silicon carbide fibers
Silicon carbide fibers are produced via pyrolysis of organic polymers, and therefore their corrosion properties are similar to those of the silicon carbide found in LPI-matrices. These fibers are thus more sensitive to bases and oxidizing media than pure silicon carbide.
Applications
CMC materials overcome the major disadvantages of conventional technical ceramics, namely brittle failure, low fracture toughness, and limited thermal shock resistance. Therefore, their applications are in fields requiring reliability at high temperatures (beyond the capability of metals) and resistance to corrosion and wear. These include:
Heat shield systems for space vehicles, which are needed during the re-entry phase, where high temperatures, thermal shock conditions and heavy vibration loads take place.
Components for high-temperature gas turbines such as combustion chambers, stator vanes, exhaust mixers and turbine blades.
Components for burners, flame holders, and hot gas ducts, where oxide CMCs have found application.
Brake disks and brake system components, which experience extreme thermal shock (greater than throwing a glowing part of any material into water).
Components for slide bearings under heavy loads requiring high corrosion and wear resistance.
In addition to the foregoing, CMCs can be used in applications which employ conventional ceramics or in which metal components have limited lifetimes due to corrosion or high temperatures.
Developments for applications in space
During the re-entry phase of space vehicles, the heat shield system is exposed to extremely high temperatures for a few minutes. Only ceramic materials can survive such conditions without significant damage, and among ceramics, only CMCs can adequately handle thermal shocks. The development of CMC-based heat shield systems promises the following advantages:
Reduced weight
Higher load-carrying capacity of the system
Reusability for several re-entries
Better steering during the re-entry phase with CMC flap systems
In these applications, the high temperatures preclude the use of oxide fiber CMCs, because under the expected loads the creep would be too high. Amorphous silicon carbide fibers lose their strength due to re-crystallization at such high temperatures. Therefore, carbon fibers in a silicon carbide matrix (C/SiC) are used in development programs for these applications. The European program HERMES of ESA, started in the 1980s and abandoned for financial reasons in 1992, produced the first results. Several follow-up programs focused on the development, manufacture, and qualification of the nose cap, leading edges and steering flaps for the NASA X-38 space vehicle.
This development program qualified the use of C/SiC bolts and nuts and the bearing system of the flaps. The latter were ground-tested at the DLR in Stuttgart, Germany, under the conditions expected during the re-entry phase: re-entry temperatures, a load of 4 tonnes, an oxygen partial pressure similar to re-entry conditions, and simultaneous bearing movements of four cycles per second. A total of five re-entry phases were simulated. Design and manufacture of the two steering flaps and their bearings, screws and nuts were performed by MT Aerospace in Augsburg, Germany, based on the CVI process for the production of carbon-fiber-reinforced silicon carbide (see manufacturing procedures above).
Furthermore, oxidation protection systems were developed and qualified to prevent burnout of the carbon fibers. After mounting of the flaps, mechanical ground tests were performed successfully by NASA in Houston, Texas, US. The next test – a real re-entry of the unmanned vehicle X-38 – was canceled for financial reasons. One of the Space Shuttles would have brought the vehicle into orbit, from where it would have returned to the Earth.
These qualifications were promising for only this application. The high-temperature load lasts only around 20 minutes per re-entry, and for reusability, only about 30 cycles would be sufficient. For industrial applications in a hot gas environment, though, several hundred cycles of thermal loads and up to many thousands of hours of lifetime are required.
The Intermediate eXperimental Vehicle (IXV), a project initiated by ESA in 2009, is Europe's first lifting body reentry vehicle. Developed by Thales Alenia Space, the IXV was scheduled to make its first flight in 2014 on the fourth Vega mission (VV04) over the Gulf of Guinea. More than 40 European companies contributed to its construction. The thermal protection system for the underside of the vehicle, comprising the nose, leading edges and lower surface of the wing, was designed and made by Herakles using a ceramic matrix composite (CMC), carbon/silicon-carbide (C/SiC), in this case based on the liquid silicon infiltration (LSI) process (see manufacturing procedures above). These components were designed to function as the vehicle's heat shield during its atmospheric reentry.
The European Commission funded a research project, C3HARME, under the NMP-19-2015 call of Framework Programmes for Research and Technological Development (H2020) in 2016 for the design, development, production, and testing of a new class of ultra-high temperature ceramic matrix composites (UHTCMC) reinforced with silicon carbide fibers and carbon fibers suitable for applications in severe aerospace environments such as propulsion and Thermal protection systems (TPSs).
Developments for gas turbine components
The use of CMCs in gas turbines permits higher turbine inlet temperatures, which improves engine efficiency. Because of the complex shape of stator vanes and turbine blades, the development was first focused on the combustion chamber. In the US, a combustor made of SiC/SiC with a special SiC fiber of enhanced high-temperature stability was successfully tested for 15,000 hours. SiC oxidation was substantially reduced by the use of an oxidation protection coating consisting of several layers of oxides.
The engine collaboration between General Electric and Rolls-Royce studied the use of CMC stator vanes in the hot section of the F136, a turbofan engine which failed to beat the Pratt & Whitney F135 for use in the F-35 Joint Strike Fighter. An engine joint venture, CFM International, is using CMCs to manufacture the high-temperature turbine shrouds. General Electric is using CMCs in combustor liners, nozzles, and the high-temperature turbine shroud for its upcoming GE9X engine. CMC parts are also being studied for stationary applications in both the cold and hot sections of the engines, since stresses imposed on rotating parts would require further development effort. Generally, development of CMCs for use in turbines continues, aimed at resolving the remaining technical issues and reducing costs.
After substantial investment and 20 years of research and development, GE Aviation aimed by 2020 to produce per year a large quantity of CMC prepreg and 10 t of silicon carbide fiber. Chemical vapor deposition can apply coatings on a laid-able fiber tape in large quantities, and GE managed to infiltrate and cast parts with very high silicon densities, higher than 90%, for cyclic fatigue environments, thanks to thermal processing.
EBCs to protect gas turbine components
Environmental barrier coatings (EBCs) provide a barrier that reduces the diffusion of oxygen and other corrosive substances through the surface of CMC components.
Design Requirements for EBCs:
Coefficient of thermal expansion (CTE) match with the CMC component to reduce the probability of cracking
Low volatility to minimize steam-induced corrosion/recession
Resistance to ingested molten particulates
High temperature capability
Phase stability at high temperatures
Chemical compatibility with the CMC and additional layers
High hardness and toughness to protect against foreign object damage (FOD) and erosion
Typically, when coating with an EBC, a bond coat is required to support good adhesion to the CMC component. NASA has developed a slurry-based EBC which starts with a mullite-based coating before being layered with an additional 2–3 layers. In order for EBCs to actively protect the CMC surface, sintering aids must be added to the slurry coat to create a dense coating that blocks the penetration of oxygen and of gaseous and molten deposits from the engine. Sintering creates a densified coating and enhances the bonding and performance of the coating.
Currently, research is being done to combat common failure modes such as delamination, erosion, and cracking caused by steam or molten deposits. Delamination and cracking due to molten deposits are typically caused by reaction with the EBC, creating an unexpected microstructure that leads to CTE mismatch and low toughness in that phase. Steam degradation is caused by the volatilization of the thermally grown oxide layer between the EBC and the ceramic. The steam produced from this leads to a rapid recession of SiC, i.e. degradation of the EBC. The success of EBCs is imperative to the overall success of CMC components in the gas flow of the turbine in jet engines.
Overall benefits of EBCs:
Extends the life of CMC components allowing for overall cost savings in jet engine production
Improves oxidation resistance of CMC components
Provides greater oxidation resistance to CMC components exposed to gaseous compounds from the jet engine
Application of oxide CMC in burner and hot gas ducts
Oxygen-containing gas at high temperatures is rather corrosive for metal and silicon carbide components. Such components, which are not exposed to high mechanical stress, can be made of oxide CMCs, which withstand considerably higher temperatures. The gallery below shows the flame holder of a crispbread bakery after 15,000 hours of testing; it subsequently operated for a total of more than 20,000 hours.
Flaps and ventilators circulating hot, oxygen-containing gases can be fabricated in the same shape as their metal equivalents. The lifetime for these oxide CMC components is several times longer than for metals, which often deform. A further example is an oxide CMC lifting gate for a sintering furnace, which has survived more than 260,000 opening cycles.
Application in brake discs
Carbon/carbon (C/C) materials are used in the disc brakes of racing cars and airplanes, and C/SiC brake disks manufactured by the LSI process were qualified and are commercially available for sports cars. The advantages of these C/SiC disks are:
Very little wear, resulting in lifetime use for a car with a normal driving load, is forecast by manufacturers.
No fading is experienced, even under high load.
No effect of surface humidity on the friction coefficient shows up, unlike with C/C brake disks.
The corrosion resistance, for example to road salt, is much better than that of metal disks.
The disk mass is only 40% of a metal disk. This translates into less unsprung and rotating mass.
The weight reduction improves shock absorber response, road-holding comfort, agility, fuel economy, and thus driving comfort.
The SiC matrix of LSI material has a very low porosity, which protects the carbon fibers quite well. Brake disks experience temperatures high enough to oxidize the carbon for no more than a few hours over their lifetime. Oxidation is therefore not a problem in this application. The reduction of manufacturing costs will decide the success of this application for middle-class cars.
Application in slide bearings
Conventional SiC, or sometimes the less expensive SiSiC, has been used successfully for more than 25 years in slide or journal bearings of pumps. The pumped liquid itself provides the lubricant for the bearing. Very good corrosion resistance against practically all kinds of media, very low wear and low friction coefficients are the basis of this success. These bearings consist of a static bearing, shrink-fitted in its metallic environment, and a rotating shaft sleeve, mounted on the shaft. Under compressive stress, the ceramic static bearing has a low risk of failure, but a SiC shaft sleeve is not in this favourable stress state and must, therefore, have a large wall thickness and/or be specially designed. In large pumps with large shaft diameters, the risk of failure is higher due to the changing requirements on pump performance – for example, load changes during operation. The introduction of SiC/SiC as a shaft sleeve material has proven to be very successful. Test rig experiments showed an almost threefold specific load capability of the bearing system with a shaft sleeve made of SiC/SiC, sintered SiC as the static bearing, and water as the lubricant. The specific load capacity of a bearing is usually given in W/mm2 and calculated as the product of the load (MPa), the surface speed of the bearing (m/s) and the friction coefficient; it is equal to the power loss of the bearing system due to friction.
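As an illustrative plausibility check of this definition (the numbers are chosen arbitrarily and are not taken from the tests described above): with a bearing load of 1 MPa, a surface speed of 10 m/s and a friction coefficient of 0.05, the specific load capacity is
1 MPa × 10 m/s × 0.05 = 0.5 W/mm²,
using the unit identity 1 MPa × 1 m/s = 10^6 W/m² = 1 W/mm², i.e. the frictional power dissipated per unit bearing surface area.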
This slide bearing concept, namely a SiC/SiC shaft sleeve and a SiC bearing, has been used since 1994 in applications such as the boiler feedwater pumps of power stations, which pump several thousand cubic meters of hot water against high delivery heads, and in tubular casing pumps for water works or seawater desalination plants, which move very large volumes of water against comparatively low delivery heads.
This bearing system has been tested in pumps for liquid oxygen, for example in oxygen turbopumps for thrust engines of space rockets, with the following results. SiC and SiC/SiC are compatible with liquid oxygen. In an auto-ignition test according to the French standard NF 28-763, no auto-ignition was observed with powdered SiC/SiC in 20 bar pure oxygen, even at elevated temperatures. Tests have shown that the friction coefficient is half, and the wear one-fiftieth, of those of standard metals used in this environment. A hydrostatic bearing system (see picture) has survived several hours at a speed up to 10,000 revolutions per minute, various loads, and 50 cycles of start/stop transients without any significant traces of wear.
Other applications and developments
Thrust control flaps for military jet engines
Components for fusion and fission reactors
Friction systems for various applications
Nuclear applications
Solar furnaces
Heat treatment, high temperature, soldering fixtures
See also
Polymer matrix composites
Metal matrix composites
References
Further reading
Ceramic materials
Composite materials | Ceramic matrix composite | [
"Physics",
"Engineering"
] | 7,307 | [
"Composite materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
22,728,536 | https://en.wikipedia.org/wiki/Heat%20number | A heat number is a unique identification coupon number that is stamped on a material plate after it is removed from the ladle and rolled at a steel mill. It serves as a traceable identifier that links the metal product to its specific batch or "heat," allowing access to detailed records about the material's composition, manufacturing process, and quality assurance.
Industry quality standards require materials to be tested at the manufacturer and the results of these tests be submitted through a report, also called a mill sheet, mill certificate or mill test certificate (MTC). The only way to trace a steel plate back to its mill sheet is the heat number. A heat number is similar to a lot number, which is used to identify production runs of any other product for quality control purposes.
Numerical significance
Usually, but not universally, the numbers indicate:
the first digit corresponds to the furnace number
the second digit indicates the year in which the material was melted
the last three (and sometimes four) indicate the melt number.
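As an illustration of this digit scheme (the heat number, the field widths and the split into furnace, year and melt below are hypothetical, since the encoding varies between mills), a five-digit heat number could be decoded as follows:
# Hypothetical decoding of a five-digit heat number following the scheme above.
def decode_heat_number(heat_number: str) -> dict:
    return {
        "furnace": heat_number[0],       # first digit: furnace number
        "year_digit": heat_number[1],    # second digit: last digit of the melt year
        "melt": heat_number[2:],         # remaining digits: melt number
    }

print(decode_heat_number("34127"))  # {'furnace': '3', 'year_digit': '4', 'melt': '127'}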
References
Steelmaking
Metallurgical processes
Quality control | Heat number | [
"Chemistry",
"Materials_science"
] | 215 | [
"Metallurgical processes",
"Steelmaking",
"Metallurgy"
] |
27,506,488 | https://en.wikipedia.org/wiki/FreeFlyer | FreeFlyer is a commercial off-the-shelf software application for satellite mission analysis, design, and operations. Its architecture revolves around its native scripting language, known as FreeForm Script. As a mission planning tool, it encompasses several capabilities, including precise orbit modeling, 2D and 3D visualization, sensor modeling, maneuver modeling, maneuver estimation, plotting, orbit determination, tracking data simulation, and space environment modeling.
FreeFlyer implements standard astrodynamics models such as the JGM-2, EGM-96, and LP-165 gravity potential models; atmospheric density models like Jacchia-Roberts, Harris-Priester, and NRL-MSIS; the International Reference Ionosphere model; and the International Geomagnetic Reference Field magnetic field model.
Background
FreeFlyer is owned and developed by a.i. solutions, Inc. and is utilized by NASA, NOAA, and the USAF for space mission operations, mission assurance, and analysis support.
Operational and analysis support
FreeFlyer has been used to support many spacecraft missions, for mission planning analysis, operational analysis, or both. Specific mission examples include the International Space Station (ISS), the JSpOC Mission System, the Earth Observing System, Solar Dynamics Observatory (SDO), and Magnetospheric Multiscale Mission (MMS).
FreeFlyer has also been successfully used to conduct analysis in both the high-performance computing (HPC) and service-oriented architecture (SOA) environments.
Software tiers
FreeFlyer is one stand-alone product with two tiers of rising functionality.
FreeFlyer scripting
FreeFlyer contains an object-oriented scripting language and an accompanying integrated development environment.
Below is a basic FreeFlyer script that creates and displays a spacecraft:
// Create a spacecraft object
Spacecraft sc1;
// Create a ViewWindow, passing sc1 as part of an array of objects to view
ViewWindow vw({sc1});
// Propagate and view the spacecraft for two days
While (sc1.ElapsedTime < TimeSpan.FromDays(2));
    sc1.Step();
    vw.Update();
End;
References
External links
3D graphics software
Aerospace engineering software
Astronomy software
Mathematical software
Physics software
Science software for Windows | FreeFlyer | [
"Physics",
"Astronomy",
"Mathematics",
"Engineering"
] | 473 | [
"Works about astronomy",
"Physics software",
"Computational physics",
"Astronomy software",
"Aerospace engineering",
"Aerospace engineering software",
"Mathematical software"
] |
27,509,304 | https://en.wikipedia.org/wiki/Water%20pyramid | A Water pyramid or WaterPyramid is a village-scale solar still, designed to distill water using solar energy for remote communities without easy access to clean, fresh water. It provides a means whereby communities can produce potable drinking water from saline, brackish or polluted water sources.
History
Martijn Nitzsche, an engineer from the Netherlands, founded Aqua-Aero Water Systems to develop water treatment and purification systems. In the early 2000s, the company invented the WaterPyramid technology. The first WaterPyramid was engineered and installed in collaboration with MWH Global, an international environmental engineering firm, in the country of Gambia in 2005. The WaterPyramid desalination systems were awarded the World Bank Development Marketplace award in 2006.
Description
The pyramid is a large structure with a conical shape. It is constructed of plastic sheeting, which is inflated using a fan powered by solar energy generated by the pyramid. Within the pyramid, high temperatures are reached, which evaporate water pumped into a thin layer inside the cone. Distilled water condenses on the pyramid wall, runs down its sides and is collected by gutters that feed into a collection tank. When sunshine is replaced by rain, the falling water is also collected around the edge of the base of the cone and stored for use in dry weather. Each pyramid can desalinate approximately 265 gallons (1,000 liters) of water each day. To increase water production, a village simply adds additional pyramids.
Operation of each pyramid is the responsibility of the local community, generating employment opportunities for the village. Since the water produced by the Pyramid is distilled water, there are also business uses for excess water production, such as the filling of batteries, which provide additional income to the village.
See also
Desalination
Solar-powered desalination unit
Solar still
Seawater Greenhouse
Notes
External links
Inventor of the WaterPyramid, Aqua-Aero Water Systems: www.aaws.nl
Collaborative engineering and consulting firm MWH: mwhglobal.com
Water treatment
Water technology
Water supply
Hydrology
Appropriate technology
Drinking water
Water and the environment | Water pyramid | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 434 | [
"Hydrology",
"Water treatment",
"Water pollution",
"Environmental engineering",
"Water technology",
"Water supply"
] |
21,280,576 | https://en.wikipedia.org/wiki/Mechanism%20%28engineering%29 | In engineering, a mechanism is a device that transforms input forces and movement into a desired set of output forces and movement. Mechanisms generally consist of moving components which may include Gears and gear trains; Belts and chain drives; cams and followers; Linkages; Friction devices, such as brakes or clutches; Structural components such as a frame, fasteners, bearings, springs, or lubricants; Various machine elements, such as splines, pins, or keys.
German scientist Franz Reuleaux defines machine as "a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinate motion". In this context, his use of machine is generally interpreted to mean mechanism.
The combination of force and movement defines power, and a mechanism manages power to achieve a desired set of forces and movement.
A mechanism is usually a piece of a larger process, known as a mechanical system or machine. Sometimes an entire machine may be referred to as a mechanism; examples are the steering mechanism in a car, or the winding mechanism of a wristwatch.
However, typically, a set of multiple mechanisms is called a machine.
Kinematic pairs
From the time of Archimedes to the Renaissance, mechanisms were viewed as constructed from simple machines, such as the lever, pulley, screw, wheel and axle, wedge, and inclined plane. Reuleaux focused on bodies, called links, and the connections between these bodies, called kinematic pairs, or joints.
To use geometry to study the movement of a mechanism, its links are modelled as rigid bodies. This means that distances between points in a link are assumed to not change as the mechanism moves—that is, the link does not flex. Thus, the relative movement between points in two connected links is considered to result from the kinematic pair that joins them.
Kinematic pairs, or joints, are considered to provide ideal constraints between two links, such as the constraint of a single point for pure rotation, or the constraint of a line for pure sliding, as well as pure rolling without slipping and point contact with slipping. A mechanism is modelled as an assembly of rigid links and kinematic pairs.
Links and joints
Reuleaux called the ideal connections between links kinematic pairs. He distinguished between higher pairs, with line contact between the two links, and lower pairs, with area contact between the links. However, there are many ways to construct pairs that do not fit this simple model.
Lower pair: A lower pair is an ideal joint that has surface contact between the pair of elements, as in the following cases:
A revolute pair, or hinged joint, requires that a line in the moving body remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body must maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore gives the pair one degree of freedom.
A prismatic joint, or slider, requires that a line in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body must maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore gives the pair one degree of freedom.
A cylindrical joint requires that a line in the moving body remain co-linear with a line in the fixed body. It combines a revolute joint and a sliding joint. This joint has two degrees of freedom.
A spherical joint, or ball joint, requires that a point in the moving body maintain contact with a point in the fixed body. This joint has three degrees of freedom.
A planar joint requires that a plane in the moving body maintain contact with a plane in a fixed body. This joint has three degrees of freedom.
A screw joint, or helical joint, has only one degree of freedom because the sliding and rotational motions are related by the helix angle of the thread.
Higher pairs: Generally, a higher pair is a constraint that requires a line or point contact between the elemental surfaces. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears are cam joints.
Kinematic diagram
A kinematic diagram reduces machine components to a skeleton diagram that emphasises the joints and reduces the links to simple geometric elements. This diagram can also be formulated as a graph by representing the links of the mechanism as edges and the joints as vertices of the graph. This version of the kinematic diagram has proven effective in enumerating kinematic structures in the process of machine design.
An important consideration in this design process is the degree of freedom of the system of links and joints, which is determined using the Chebychev–Grübler–Kutzbach criterion.
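As an illustration, the planar form of this criterion (quoted here in its standard textbook form, not from a specific source cited in this article) reads
M = 3(N − 1) − 2 j1 − j2,
where N is the number of links including the fixed link, j1 is the number of one-degree-of-freedom joints and j2 the number of two-degree-of-freedom joints. For a planar four-bar linkage, N = 4 and j1 = 4 (four hinges), so M = 3(4 − 1) − 2·4 = 1: a single input coordinate determines the configuration of the whole mechanism.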
Planar mechanisms
While all mechanisms in a mechanical system are three-dimensional, they can be analysed using plane geometry if the movement of the individual components is constrained so that all point trajectories are parallel or in a series connection to a plane. In this case the system is called a planar mechanism. The kinematic analysis of planar mechanisms uses the subset of the Special Euclidean group SE(3) consisting of planar rotations and translations, denoted by SE(2).
The group SE(2) is three-dimensional, which means that every position of a body in the plane is defined by three parameters. The parameters are often the x and y coordinates of the origin of a coordinate frame in M, measured from the origin of a coordinate frame in F, and the angle measured from the x-axis in F to the x-axis in M. This is often described by saying a body in the plane has three degrees of freedom.
The pure rotation of a hinge and the linear translation of a slider can be identified with subgroups of SE(2), and define the two joints as one-degree-of-freedom joints of planar mechanisms. The cam joint formed by two surfaces in sliding and rotating contact is a two-degree-of-freedom joint.
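For concreteness, a planar displacement in SE(2) can be written as the homogeneous transform below (a standard parametrization; the symbols x, y and θ are chosen here for illustration and correspond to the three parameters mentioned above):
[ cos θ   −sin θ   x ]
[ sin θ    cos θ   y ]
[   0        0     1 ]
which maps a point (px, py, 1) of the moving body M to its position in the fixed frame F; the three parameters x, y and θ are exactly the three degrees of freedom of a planar body.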
Spherical mechanisms
It is possible to construct a mechanism such that the point trajectories in all components lie in concentric spherical shells around a fixed point. An example is the gimbaled gyroscope. These devices are called spherical mechanisms. Spherical mechanisms are constructed by connecting links with hinged joints such that the axes of each hinge pass through the same point. This point becomes the centre of the concentric spherical shells. The movement of these mechanisms is characterised by the group SO(3) of rotations in three-dimensional space. Other examples of spherical mechanisms are the automotive differential and the robotic wrist.
The rotation group SO(3) is three-dimensional. An example of the three parameters that specify a spatial rotation are the roll, pitch and yaw angles used to define the orientation of an aircraft.
Spatial mechanisms
A mechanism in which a body moves through a general spatial movement is called a spatial mechanism. An example is the RSSR linkage, which can be viewed as a four-bar linkage in which the hinged joints of the coupler link are replaced by rod ends, also called spherical joints or ball joints. The rod ends let the input and output cranks of the RSSR linkage be misaligned to the point that they lie in different planes, which causes the coupler link to move in a general spatial movement. Robot arms, Stewart platforms, and humanoid robotic systems are also examples of spatial mechanisms.
Bennett's linkage is an example of a spatial overconstrained mechanism, which is constructed from four hinged joints.
The group SE(3) is six-dimensional, which means the position of a body in space is defined by six parameters. Three of the parameters define the origin of the moving reference frame relative to the fixed frame. Three other parameters define the orientation of the moving frame relative to the fixed frame.
Linkages
A linkage is a collection of links connected by joints. Generally, the links are the structural elements and the joints allow movement. Perhaps the single most useful example is the planar four-bar linkage. There are, however, many more special linkages:
Watt's linkage is a four-bar linkage that generates an approximate straight line. It was critical to the operation of his design for the steam engine. This linkage also appears in vehicle suspensions to prevent side-to-side movement of the body relative to the wheels.
The success of Watt's linkage led to the design of similar approximate straight-line linkages, such as Hoeken's linkage and Chebyshev's linkage.
The Peaucellier linkage generates a true straight-line output from a rotary input.
The Sarrus linkage is a spatial linkage that generates straight-line movement from a rotary input.
The Klann linkage and the Jansen linkage are recent inventions that provide interesting walking movements. They are respectively a six-bar and an eight-bar linkage.
Compliant mechanisms
A compliant mechanism is a series of rigid bodies connected by compliant elements. These mechanisms have many advantages, including reduced part-count, reduced "slop" between joints (no parasitic motion because of gaps between parts), energy storage, low maintenance (they don't require lubrication and there is low mechanical wear), and ease of manufacture.
Flexure bearings (also known as flexure joints) are a subset of compliant mechanisms that produce a geometrically well-defined motion (rotation) on application of a force.
Cam and follower mechanisms
A cam and follower mechanism is formed by the direct contact of two specially shaped links. The driving link is called the cam and the link that is driven through the direct contact of their surfaces is called the follower. The shape of the contacting surfaces of the cam and follower determines the movement of the mechanism. In general, energy is transferred from the cam to the follower: the camshaft is rotated and, according to the cam profile, the follower moves up and down. Slightly different types of eccentric cam-follower mechanisms are also available, in which energy is transferred from the follower to the cam; the claimed benefit of this arrangement is that a slight movement of the follower drives the cam through about six times the circumferential travel with roughly 70% of the force.
Gears and gear trains
The transmission of rotation between contacting toothed wheels can be traced back to the Antikythera mechanism of Greece and the south-pointing chariot of China. Illustrations by the Renaissance scientist Georgius Agricola show gear trains with cylindrical teeth. The implementation of the involute tooth yielded a standard gear design that provides a constant speed ratio. Some important features of gears and gear trains are:
The ratio of the pitch circles of mating gears defines the speed ratio and the mechanical advantage of the gear set.
A planetary gear train provides high gear reduction in a compact package.
It is possible to design gear teeth for gears that are non-circular, yet still transmit torque smoothly.
The speed ratios of chain and belt drives are computed in the same way as gear ratios.
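A short worked example of the speed-ratio rule above (tooth counts chosen for illustration): for meshing involute gears the pitch-circle radii are proportional to the numbers of teeth, so a 20-tooth pinion driving a 60-tooth gear gives a speed ratio of 60/20 = 3; the driven gear turns at one third of the pinion speed while, neglecting friction, the torque (and hence the mechanical advantage) is multiplied by three. The same calculation applies to the sprockets of a chain drive or the pulleys of a belt drive.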
Mechanism synthesis
The design of mechanisms to achieve a particular movement and force transmission is known as the kinematic synthesis of mechanisms. This is a set of geometric techniques which yield the dimensions of linkages, cam and follower mechanisms, and gears and gear trains to perform a required mechanical movement and power transmission.
See also
Gear train
Linkage (mechanical)
Machine (mechanical)
Mechanical system
Mechanical watch
Outline of machines
Virtual work
Hoberman mechanism
Moving parts
References
External links
Balanced hinge-lever mechanism
Machines and Mechanisms Wiki
Kinematic Models for Design Digital Library (KMODDL) collections of movies and photos of hundreds of mechanism models
A six-bar straight-line linkage in the collection of Reuleaux models at Cornell University
Animations of a variety of mechanisms
Example of a six-bar function generator that computes the angle for a given range
A variety of linkage animations
A variety of six-bar linkage designs
Animation of a spherical deployable mechanism
Machines | Mechanism (engineering) | [
"Physics",
"Technology",
"Engineering"
] | 2,492 | [
"Physical systems",
"Machines",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
28,793,951 | https://en.wikipedia.org/wiki/Milne-Thomson%20circle%20theorem | In fluid dynamics the Milne-Thomson circle theorem or the circle theorem is a statement giving a new stream function for a fluid flow when a cylinder is placed into that flow. It was named after the English mathematician L. M. Milne-Thomson.
Let w = f(z) be the complex potential for a fluid flow, where all singularities of f(z) lie in |z| > a. If a circle |z| = a is placed into that flow, the complex potential for the new flow is given by
w = f(z) + f*(a²/z), where f* denotes the conjugate function, f*(z) = conj(f(conj(z))). The new potential has the same singularities as f(z) in |z| > a, and |z| = a is a streamline. On the circle |z| = a, z·conj(z) = a², so a²/z = conj(z) and w = f(z) + conj(f(z)) = 2 Re f(z); the stream function, the imaginary part of w, therefore vanishes on the circle.
Example
Consider a uniform irrotational flow with velocity U flowing in the positive x direction, and place an infinitely long cylinder of radius a in the flow with the center of the cylinder at the origin. Then f(z) = Uz, hence, using the circle theorem,
w = Uz + U a²/z represents the complex potential of uniform flow over a cylinder.
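A quick numerical check of this statement (an illustrative sketch; Python with NumPy is assumed, and the values of U and a are arbitrary) confirms that the stream function, the imaginary part of w, vanishes on the circle |z| = a:
import numpy as np

U, a = 2.0, 1.5                      # arbitrary flow speed and cylinder radius
theta = np.linspace(0.0, 2.0 * np.pi, 400)
z = a * np.exp(1j * theta)           # points on the circle |z| = a

w = U * z + U * a**2 / z             # complex potential of uniform flow past the cylinder
print(np.max(np.abs(w.imag)))        # maximum |stream function| on the circle: ~0 (machine precision)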
See also
Potential flow
Conformal mapping
Velocity potential
Milne-Thomson method for finding a holomorphic function
References
Fluid mechanics
Fluid dynamics
Equations of fluid dynamics | Milne-Thomson circle theorem | [
"Physics",
"Chemistry",
"Engineering"
] | 191 | [
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Civil engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
28,794,596 | https://en.wikipedia.org/wiki/Lead%20titanate | Lead(II) titanate is an inorganic compound with the chemical formula PbTiO3. It is the lead salt of titanic acid. Lead(II) titanate is a yellow powder that is insoluble in water.
At high temperatures, lead titanate adopts a cubic perovskite structure. At 760 K, the material undergoes a second-order phase transition to a tetragonal perovskite structure which exhibits ferroelectricity. Lead titanate is one of the end members of the lead zirconate titanate (Pb(ZrxTi1−x)O3, 0 ≤ x ≤ 1; PZT) system, which is technologically one of the most important ferroelectric and piezoelectric ceramics.
Lead titanate occurs in nature as mineral macedonite.
Toxicity
Lead titanate is toxic, like other lead compounds. It irritates skin, mucous membranes and eyes. It may also cause harm to unborn babies and might have effects on fertility.
Solubility in water
The solubility of hydrothermally synthesized perovskite-phase PbTiO3 in water was experimentally determined at 25 and 80 °C to depend on pH and vary from 4.9 × 10^−4 mol/kg at pH ≈ 3, to 1.9 × 10^−4 mol/kg at pH ≈ 7.7, to "undetectable" (< 3.2 × 10^−7 mol/kg) in the range 10 < pH < 11. At still higher pH values, the solubility increased again. The solubility was apparently incongruent and was quantified as the analytical concentration of Pb.
References
Lead(II) compounds
Titanates
Ferroelectric materials
Perovskites | Lead titanate | [
"Physics",
"Materials_science"
] | 356 | [
"Physical phenomena",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Hysteresis",
"Matter"
] |
28,796,495 | https://en.wikipedia.org/wiki/Zoptarelin%20doxorubicin | Zoptarelin doxorubicin (developmental code names AEZS-108, AN-152) consists of doxorubicin linked to a small peptide agonist to the luteinizing hormone-releasing hormone (LHRH) receptor. It has been developed as a potential treatment for a number of human cancers. The LHRH receptor is aberrantly present on the cell surface of approximately 80% of endometrial and ovarian cancers, 86% of prostate cancers and about 50% of breast cancers. Whereas in normal tissues, expression of this receptor is mainly confined to the pituitary gland, reproductive organs and hematopoietic stem cells. To a lesser extent the LHRH receptor is also found on the surface of bladder, colorectal, and pancreatic cancers, sarcomas, lymphomas, melanomas, and renal cell carcinomas.
The proposed method of action is that upon administration zoptarelin doxorubicin binds to the LHRH receptor and is subsequently internalized, concentrating the toxic doxorubicin within cancer cells and the small subset of normal tissues, as opposed to the completely systemic distribution observed with untargeted chemotherapeutics. The specific targeting of the doxorubicin to LHRH receptor bearing cells is also proposed to reduce the cardiotoxicity observed in the administration of unconjugated doxorubicin.
Zoptarelin doxorubicin was invented by Andrew V. Schally while at the Tulane University School of Medicine, New Orleans and subsequently at the Sylvester Comprehensive Cancer Center, University of Miami. It has been subsequently developed by AEterna Zentaris Inc. In June 2016, Aeterna stated that it plans to submit an NDA to the FDA by mid 2017. Zoptarelin doxorubicin was discontinued for all indications under development in May 2017.
The U.S. Food and Drug Administration (FDA) has granted it orphan drug status for ovarian cancer and endometrial cancer.
Clinical trials
Promising results have been reported from a phase II clinical trial for ovarian cancer and endometrial cancer. Phase II trials have also been undertaken for prostate, breast and bladder cancer, although no results for these trials have been reported in peer-reviewed literature. A phase I trial in prostate cancer indicated that nine out of ten evaluable patients achieved disease stabilization through administration of zoptarelin doxorubicin.
A phase III trial for endometrial cancer was initiated in April 2013, with a primary completion date estimated to be December 2016. In May 2017 the results were disclosed, showing that the drug did not extend overall survival, nor did it improve the safety profile compared to doxorubicin.
References
Abandoned drugs
Anthracyclines
Peptides | Zoptarelin doxorubicin | [
"Chemistry"
] | 600 | [
"Biomolecules by chemical classification",
"Drug safety",
"Molecular biology",
"Peptides",
"Abandoned drugs"
] |
28,801,798 | https://en.wikipedia.org/wiki/Active%20learning%20%28machine%20learning%29 | Active learning is a special case of machine learning in which a learning algorithm can interactively query a human user (or some other information source), to label new data points with the desired outputs. The human user must possess knowledge/expertise in the problem domain, including the ability to consult/research authoritative sources when necessary. In statistics literature, it is sometimes also called optimal experimental design. The information source is also called teacher or oracle.
There are situations in which unlabeled data is abundant but manual labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples to learn a concept can often be much lower than the number required in normal supervised learning. With this approach, there is a risk that the algorithm is overwhelmed by uninformative examples. Recent developments are dedicated to multi-label active learning, hybrid active learning and active learning in a single-pass (on-line) context, combining concepts from the field of machine learning (e.g. conflict and ignorance) with adaptive, incremental learning policies in the field of online machine learning. Using active learning allows for faster development of a machine learning algorithm, when comparative updates would require a quantum or super computer.
Large-scale active learning projects may benefit from crowdsourcing frameworks such as Amazon Mechanical Turk that include many humans in the active learning loop.
Definitions
Let T be the total set of all data under consideration. For example, in a protein engineering problem, T would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity.
During each iteration, i, T is broken up into three subsets:
T_K,i: Data points where the label is known.
T_U,i: Data points where the label is unknown.
T_C,i: A subset of T_U,i that is chosen to be labeled.
Most of the current research in active learning involves the best method to choose the data points for T_C,i.
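A minimal sketch of one such iteration, using uncertainty-based selection of T_C,i on a toy problem, is given below (illustrative only: NumPy and scikit-learn are assumed to be available, the data set and query budget are arbitrary, and the oracle is simulated by a stored label array):
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool T: 200 two-dimensional points with a hidden binary label (normally supplied by a human oracle).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# T_K: indices whose labels are known (seeded with a few examples of each class); T_U: the rest.
T_K = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
T_U = [i for i in range(len(X)) if i not in set(T_K)]

for _ in range(20):                                    # query budget of 20 labels
    model = LogisticRegression().fit(X[T_K], y[T_K])
    proba = model.predict_proba(X[T_U])
    uncertainty = 1.0 - proba.max(axis=1)              # least-confident (uncertainty) sampling
    T_C = [T_U[int(np.argmax(uncertainty))]]           # points chosen to be labeled this iteration
    T_K += T_C                                         # the oracle's labels for T_C become known
    T_U = [i for i in T_U if i not in T_C]

print(model.score(X, y))                               # fit quality after the active-learning loop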
Scenarios
Pool-Based Sampling: In this approach, which is the most well known scenario, the learning algorithm attempts to evaluate the entire dataset before selecting data points (instances) for labeling. It is often initially trained on a fully labeled subset of the data using a machine-learning method such as logistic regression or SVM that yields class-membership probabilities for individual data instances. The candidate instances are those for which the prediction is most ambiguous. Instances are drawn from the entire data pool and assigned a confidence score, a measurement of how well the learner "understands" the data. The system then selects the instances for which it is the least confident and queries the teacher for the labels. The theoretical drawback of pool-based sampling is that it is memory-intensive and is therefore limited in its capacity to handle enormous datasets, but in practice, the rate-limiting factor is that the teacher is typically a (fatiguable) human expert who must be paid for their effort, rather than computer memory.
Stream-Based Selective Sampling: Here, each consecutive unlabeled instance is examined one at a time with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint. As contrasted with Pool-based sampling, the obvious drawback of stream-based methods is that the learning algorithm does not have sufficient information, early in the process, to make a sound assign-label-vs ask-teacher decision, and it does not capitalize as efficiently on the presence of already labeled data. Therefore, the teacher is likely to spend more effort in supplying labels than with the pool-based approach.
Membership Query Synthesis: This is where the learner generates synthetic data from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and query whether this appendage belongs to an animal or human. This is particularly useful if the dataset is small. The challenge here, as with all synthetic-data-generation efforts, is in ensuring that the synthetic data is consistent in terms of meeting the constraints on real data. As the number of variables/features in the input data increases, and strong dependencies between variables exist, it becomes increasingly difficult to generate synthetic data with sufficient fidelity. For example, to create a synthetic data set for human laboratory-test values, the sum of the various white blood cell (WBC) components in a white blood cell differential must equal 100, since the component numbers are really percentages. Similarly, the enzymes alanine transaminase (ALT) and aspartate transaminase (AST) measure liver function (though AST is also produced by other tissues, e.g., lung, pancreas). A synthetic data point with AST at the lower limit of the normal range (8–33 units/L) and an ALT several times above the normal range (4–35 units/L) in a simulated chronically ill patient would be physiologically impossible.
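The pool-based scenario above amounts to a simple loop: train on the labeled set, score the pool, ask the teacher about the least-confident instances, and repeat. A minimal sketch using scikit-learn's logistic regression is given below; the batch size and the `oracle_label` callable standing in for the human teacher are illustrative assumptions, not part of the original description.

```python
# Illustrative pool-based active learning loop with uncertainty sampling.
# `oracle_label` stands in for the human teacher/oracle; dataset and batch size are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_query(model, X_pool, batch_size=10):
    """Return indices of the pool instances whose predicted class membership is most ambiguous."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:batch_size]          # least confident first

def active_learning_loop(X_labeled, y_labeled, X_pool, oracle_label, rounds=5):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)                  # retrain on everything labeled so far
        idx = uncertainty_query(model, X_pool)
        new_y = np.array([oracle_label(x) for x in X_pool[idx]])   # ask the teacher
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.concatenate([y_labeled, new_y])
        X_pool = np.delete(X_pool, idx, axis=0)          # queried points leave the pool
    return model
```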
Query strategies
Algorithms for determining which data points should be labeled can be organized into a number of different categories, based upon their purpose:
Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between the exploration and the exploitation over the data space representation. This strategy manages this compromise by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al. propose a sequential algorithm named Active Thompson Sampling (ATS), which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point label.
Expected model change: label those points that would most change the current model.
Expected error reduction: label those points that would most reduce the model's generalization error.
Exponentiated Gradient Exploration for Active Learning: In this paper, the author proposes a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by an optimal random exploration.
Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be.
Query by committee: a variety of models are trained on the current labeled data, and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most.
Querying from diverse subspaces or partitions: When the underlying model is a forest of trees, the leaf nodes might represent (overlapping) partitions of the original feature space. This offers the possibility of selecting instances from non-overlapping or minimally overlapping partitions for labeling.
Variance reduction: label those points that would minimize output variance, which is one of the components of error.
Conformal prediction: predicts that a new data point will have a label similar to old data points in some specified way, and the degree of similarity within the old examples is used to estimate the confidence in the prediction.
Mismatch-first farthest-traversal: The primary selection criterion is the prediction mismatch between the current model and nearest-neighbour prediction, targeting wrongly predicted data points. The second selection criterion is the distance to previously selected data, farthest first, which aims at optimizing the diversity of the selected data.
User Centered Labeling Strategies: Learning is accomplished by applying dimensionality reduction to graphs and figures like scatter plots. Then the user is asked to label the compiled data (categorical, numerical, relevance scores, relation between two instances).
A wide variety of algorithms have been studied that fall into these categories. While the traditional AL strategies can achieve remarkable performance, it is often challenging to predict in advance which strategy is the most suitable in a particular situation. In recent years, meta-learning algorithms have been gaining in popularity. Some of them have been proposed to tackle the problem of learning AL strategies instead of relying on manually designed strategies. A benchmark comparing meta-learning approaches to active learning with traditional heuristic-based active learning can indicate whether 'learning active learning' offers a real advantage.
Minimum marginal hyperplane
Some active learning algorithms are built upon support-vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin, W, of each unlabeled datum in T_U,i and treat W as an n-dimensional distance from that datum to the separating hyperplane.
Minimum Marginal Hyperplane methods assume that the data with the smallest W are those that the SVM is most uncertain about and therefore should be placed in T_C,i to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest W. Tradeoff methods choose a mix of the smallest and largest Ws.
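A hedged sketch of the minimum-marginal-hyperplane idea using scikit-learn's linear SVC; treating the signed decision-function value as the margin W is an assumption of this illustration.

```python
# Minimum Marginal Hyperplane selection: query the unlabeled points that lie
# closest to the SVM separating hyperplane (smallest |W|).
import numpy as np
from sklearn.svm import SVC

def min_margin_query(X_known, y_known, X_unknown, batch_size=5):
    svm = SVC(kernel="linear").fit(X_known, y_known)
    margins = np.abs(svm.decision_function(X_unknown))   # distance-like margin per instance
    return np.argsort(margins)[:batch_size]              # most uncertain instances first
```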
See also
List of datasets for machine learning research
Sample complexity
Bayesian Optimization
Reinforcement learning
Literature
Improving Generalization with Active Learning, David Cohn, Les Atlas & Richard Ladner, Machine Learning 15, 201–221 (1994). https://doi.org/10.1007/BF00993277
Balcan, Maria-Florina; Hanneke, Steve; Wortman, Jennifer (2008). The True Sample Complexity of Active Learning. 45–56. https://link.springer.com/article/10.1007/s10994-010-5174-y
Active Learning and Bayesian Optimization: a Unified Perspective to Learn with a Goal, Francesco Di Fiore, Michela Nardelli, Laura Mainini, https://arxiv.org/abs/2303.01560v2
Learning how to Active Learn: A Deep Reinforcement Learning Approach, Meng Fang, Yuan Li, Trevor Cohn, https://arxiv.org/abs/1708.02383v1
References
Machine learning | Active learning (machine learning) | [
"Engineering"
] | 2,028 | [
"Artificial intelligence engineering",
"Machine learning"
] |
28,802,225 | https://en.wikipedia.org/wiki/Suessite | Suessite is a rare iron silicide mineral with chemical formula: Fe3Si. The mineral was named after Professor Hans E. Suess. It was discovered in 1982 during the chemical analysis of The North Haig olivine pigeonite achondrite (ureilite). It is a cream white color in reflected light, and ranges in size from 1 μm "blebs" to elongated grains that can reach up to 0.45 cm in length. This mineral belongs in the isometric crystal class. The isometric class has crystallographic axes that are all the same length and each of the three axes perpendicular to the other two. It is isotropic, has a structural type of DO3 and a crystal lattice of BiF3.
Optical properties
Suessite is an isotropic mineral. Isotropy is an optical property of a mineral that stays the same from whatever direction it is observed. In thin-section microscopy, an isotropic mineral has only one refractive index: light that passes through the mineral is not split into two different directions, but passes through unchanged. Suessite therefore has only one index of refraction. When Keil, Fuchs, and Berkley first discovered the mineral they described it as having relatively low optical relief, but did not determine the index of refraction. In plane polarized light, suessite is a reddish-brown color and shows no pleochroism.
Importance
"Suessite can form under highly reducing conditions" say the scientists who discovered this mineral. Only one out of eight ureilites studied (the North Haig ureilite) by this group contained suessite. Most contained trace amounts of kamacite which is the mineral from which Suessite is formed. In this particular study, the meteorite that contained suessite contained the highest amounts of shock metamorphism, which can be determined from the size of a shatter cone created from the impact. This could mean that suessite is formed due to the extreme increase in temperature combined with reduction of silicate rims, shortly followed by a rapid decrease in temperature. This means that, in meteorites, the abundance of suessite can be used to identify deformation associated with shock metamorphism, which could be used to determine various characteristics of the studied meteorites.
Other iron silicide minerals
The other natural iron silicides include gupeiite (Fe3Si), hapkeite (Fe2Si), linzhiite (FeSi2), luobusaite (Fe0.84Si2), naquite (FeSi), xifengite (Fe5Si3), and zangboite (TiFeSi2).
References
Native element minerals
Iron minerals
Ferromagnetic materials
Transition metal silicides
Magnetic minerals
Cubic minerals
Minerals in space group 229 | Suessite | [
"Physics"
] | 608 | [
"Materials",
"Ferromagnetic materials",
"Matter"
] |
35,533,594 | https://en.wikipedia.org/wiki/Automation%20integrator | An automation integrator is a systems integrator company or individual who makes different versions of automation hardware and software work together, generally combining several subsystems to work together as one large system.
The title may refer to those who only integrate hardware, although these will often work with software integrators. Software created by automation integrators allows devices to communicate with each other, as well as collecting and reporting data.
The magazine Control Engineering publishes an annual “Automation Integrator Guide” which lists over 2,000 automation integrators. They also give an annual system integrator of the year award to three automation integration firms.
The Control System Integrators Association (CSIA) maintains a buyers' guide of over 1200 member and nonmember systems integrators known as the Industrial Automation Exchange, or CSIA Exchange for short.
Certification
The Control System Integrators Association (CSIA) certifies automation integrators, through an audit based on 79 critical criteria from the best practices manual. Companies must be associate members of the CSIA to be eligible for certification. Integrators can also receive certification through a program launched in 2012 by the Robotics Industries Association.
Industries
Automation Integrators work in a wide variety of industries which use robotics and automation. Some of the most common include:
Automotive
Water and Wastewater
Manufacturing
Packaging
Electrical equipment
Food and beverage
HVAC Controls
Oil and gas
Chemicals
Pharmaceuticals
Power
Utilities
References
External links
The International Society of Automation
Control System Integrators Association
CSIA Industrial Automation Exchange
Industrial Automation Solutions
Automation Integrator Guide
System integration | Automation integrator | [
"Engineering"
] | 326 | [
"Systems engineering",
"Control engineering",
"System integration",
"Automation"
] |
35,534,124 | https://en.wikipedia.org/wiki/C21H26N2O4 | {{DISPLAYTITLE:C21H26N2O4}}
The molecular formula C21H26N2O4 (molar mass: 370.44 g/mol) may refer to:
Ciladopa (AY-27,110)
Samidorphan (ALKS-33)
Scholarine
Molecular formulas | C21H26N2O4 | [
"Physics",
"Chemistry"
] | 71 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,537,980 | https://en.wikipedia.org/wiki/Acoustic%20attenuation | In acoustics, acoustic attenuation is a measure of the energy loss of sound propagation through an acoustic transmission medium. Most media have viscosity and are therefore not ideal media. When sound propagates in such media, there is always thermal consumption of energy caused by viscosity. This effect can be quantified through the Stokes's law of sound attenuation. Sound attenuation may also be a result of heat conductivity in the media as has been shown by G. Kirchhoff in 1868. The Stokes-Kirchhoff attenuation formula takes into account both viscosity and thermal conductivity effects.
For heterogeneous media, besides media viscosity, acoustic scattering is another main reason for removal of acoustic energy. Acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields, such as medical ultrasonography, vibration and noise reduction.
Power-law frequency-dependent acoustic attenuation
Many experimental and field measurements show that the acoustic attenuation coefficient of a wide range of viscoelastic materials, such as soft tissue, polymers, soil, and porous rock, can be expressed as the following power law with respect to frequency:
P(x + Δx) = P(x) e^(−α(ω)Δx),  with α(ω) = α₀ ω^η,
where P is the pressure, x the position, Δx the wave propagation distance, ω the angular frequency, α the attenuation coefficient, and α₀ and the frequency-dependent exponent η are real, non-negative material parameters obtained by fitting experimental data; the value of η ranges from 0 to 4. Acoustic attenuation in water is frequency-squared dependent, namely η = 2. Acoustic attenuation in many metals and crystalline materials is frequency-independent, namely η = 0. In contrast, it is widely noted that the η of viscoelastic materials is between 0 and 2. For example, the exponent η of sediment, soil, and rock is about 1, and the exponent η of most soft tissues is between 1 and 2.
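A small numeric illustration of the power law above; the coefficient, frequency, and distance values are placeholders chosen only to show the roles of α₀ and η, not measured material data.

```python
# Power-law attenuation: P(x + dx) = P(x) * exp(-alpha(omega) * dx),
# with alpha(omega) = alpha0 * omega**eta.  All parameter values are illustrative.
import numpy as np

def attenuated_pressure(p0, alpha0, eta, omega, dx):
    alpha = alpha0 * omega ** eta        # attenuation coefficient at this angular frequency
    return p0 * np.exp(-alpha * dx)      # pressure after propagating a distance dx

omega = 2 * np.pi * 1e6                  # 1 MHz
for eta in (0.0, 1.0, 2.0):              # metal-like, tissue-like, water-like exponents
    print(eta, attenuated_pressure(p0=1.0, alpha0=1e-13, eta=eta, omega=omega, dx=0.1))
```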
The classical dissipative acoustic wave propagation equations are confined to the frequency-independent and frequency-squared dependent attenuation, such as the damped wave equation and the approximate thermoviscous wave equation. In recent decades, increasing attention and efforts have been focused on developing accurate models to describe general power-law frequency-dependent acoustic attenuation. Most of these recent frequency-dependent models are established via the analysis of the complex wave number and are then extended to transient wave propagation. The multiple relaxation model considers the power law viscosity underlying different molecular relaxation processes. Szabo proposed a time convolution integral dissipative acoustic wave equation. On the other hand, acoustic wave equations based on fractional derivative viscoelastic models are applied to describe the power law frequency dependent acoustic attenuation. Chen and Holm proposed the positive fractional derivative modified Szabo's wave equation and the fractional Laplacian wave equation. See for a paper which compares fractional wave equations which model power-law attenuation. This book on power-law attenuation also covers the topic in more detail.
The phenomenon of attenuation obeying a frequency power-law may be described using a causal wave equation, derived from a fractional constitutive equation between stress and strain. This wave equation incorporates fractional time derivatives:
See also and the references therein.
Such fractional derivative models are linked to the commonly recognized hypothesis that multiple relaxation phenomena (see Nachman et al.) give rise to the attenuation measured in complex media. This link is further described in and in the survey paper.
For frequency band-limited waves, Ref. describes a model-based method to attain causal power-law attenuation using a set of discrete relaxation mechanisms within the Nachman et al. framework.
In porous fluid-saturated sedimentary rocks, such as sandstone, acoustic attenuation is primarily caused by the wave-induced flow of the pore fluid relative to the solid frame, with η varying between 0.5 and 1.5.
See also
Absorption (acoustics)
Fractional calculus
References
Sound
Sound measurements
Acoustics
Physical phenomena | Acoustic attenuation | [
"Physics",
"Mathematics"
] | 824 | [
"Physical phenomena",
"Sound measurements",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Acoustics"
] |
35,538,241 | https://en.wikipedia.org/wiki/Topological%20rigidity | In the mathematical field of topology, a manifold M is called topologically rigid if every manifold homotopically equivalent to M is also homeomorphic to M.
Motivation
A central problem in topology is determining when two spaces are the same, i.e., homeomorphic or diffeomorphic. Constructing a morphism explicitly is almost always impractical. If we put further conditions on one or both spaces (manifolds), we can exploit this additional structure in order to show that the desired morphism must exist.
A rigidity theorem asserts that a fairly weak equivalence between two manifolds (usually a homotopy equivalence) implies the existence of a stronger equivalence: a homeomorphism, diffeomorphism or isometry.
Definition.
A closed topological manifold M is called topologically rigid if any homotopy equivalence f : N → M with some manifold N as source and M as target is homotopic to a homeomorphism.
Examples
Example 1.
If closed 2-manifolds M and N are homotopically equivalent then they are homeomorphic. Moreover, any homotopy equivalence of closed surfaces deforms to a homeomorphism.
Example 2.
If a closed manifold Mn (n ≠ 3) is homotopy-equivalent to Sn then Mn is homeomorphic to Sn.
Rigidity theorem in geometry
Definition.
A diffeomorphism of flat Riemannian manifolds is said to be affine iff it carries geodesics to geodesics.
Theorem (Bieberbach)
If f : M → N is a homotopy equivalence between flat closed connected Riemannian manifolds then f is homotopic to an affine homeomorphism.
Mostow's rigidity theorem
Theorem: Let M and N be compact, locally symmetric Riemannian manifolds with everywhere non-positive curvature having no closed one- or two-dimensional geodesic subspaces which are local direct factors. If f : M → N is a homotopy equivalence then f is homotopic to an isometry.
Theorem (Mostow's theorem for hyperbolic n-manifolds, n ≥ 3): If M and N are complete hyperbolic n-manifolds, n ≥ 3 with finite volume and f : M → N is a homotopy equivalence then f is homotopic to an isometry.
These results are named after George Mostow.
Algebraic form
Let Γ and Δ be discrete subgroups of the isometry group of hyperbolic n-space H, where n ≥ 3, whose quotients H/Γ and H/Δ have finite volume. If Γ and Δ are isomorphic as discrete groups then they are conjugate.
Remarks
(1) In the 2-dimensional case any manifold of genus at least two has a hyperbolic structure. Mostow's rigidity theorem does not apply in this case. In fact, there are many hyperbolic structures on any such manifold; each such structure corresponds to a point in Teichmüller space.
(2) On the other hand, if M and N are 2-manifolds of finite volume then it is easy to show that they are homeomorphic exactly when their fundamental groups are the same.
Application
The group of isometries of a finite-volume hyperbolic n-manifold M (for n ≥ 3) is finitely generated and isomorphic to π1(M).
References
Topology
Maps of manifolds
Homotopy theory | Topological rigidity | [
"Physics",
"Mathematics"
] | 699 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
35,538,934 | https://en.wikipedia.org/wiki/Saturation%20dome | A saturation dome is a graphical representation of the combination of vapor and gas that is used in thermodynamics. It can be used to find either the pressure or the specific volume as long as one already has at least one of these properties.
Description
A saturation dome uses the projection of a P–v–T diagram (pressure, specific volume, and temperature) onto the P–v plane. The points that create the left-hand side of the dome represent the saturated liquid states, while the points on the right-hand side represent the saturated vapor states (commonly referred to as the “dry” region). On the left-hand side of the dome there is compressed liquid and on the right-hand side there is superheated gas.
Within the dome itself, there is a liquid–vapor mixture. This two-phase region is commonly referred to as the “wet” region. The percentage of liquid and vapor can be calculated using vapor quality. The triple state line is where the three phases (solid, liquid, and vapor) exist in equilibrium.
Critical point
The point at the very top of the dome is called the critical point. This point is where the saturated liquid and saturated vapor lines meet. Past this point, it is impossible for a liquid–vapor transformation to occur. It is also where the critical temperature and critical pressure meet. Beyond this point, it is also impossible to distinguish between the liquid and vapor phases.
States
A saturation state is the point where a phase change begins or ends. For example, the saturated liquid line represents the point where any further addition of energy will cause a small portion of the liquid to convert to vapor. Likewise, along the saturated vapor line, any removal of energy will cause some of the vapor to condense back into a liquid, producing a mixture. When a substance reaches the saturated liquid line it is commonly said to be at its boiling point. The temperature will remain constant while it is at constant pressure underneath the saturation dome (boiling water at atmospheric pressure stays at a constant 212 °F) until it reaches the saturated vapor line. This line is where the mixture has converted completely to vapor. Further heating of the saturated vapor will result in a superheated vapor state. This is because the vapor will be at a temperature higher than the saturation temperature (212 °F for water at atmospheric pressure) for a given pressure.
Vapor quality
Vapor quality refers to the vapor–liquid mixture that is contained underneath the dome. This quality is defined as the fraction of the total mixture which is vapor, based on mass. A fully saturated vapor has a quality of 100% while a saturated liquid has a quality of 0%.
Quality can be estimated graphically as it is related to the specific volume, or how far horizontally across the dome the point exists. At the saturated liquid state, the specific volume is denoted as vf, while at the saturated vapor state it is denoted as vg.
Quality can be calculated by the equation:
x = (v − vf) / (vg − vf)
where v is the specific volume of the mixture.
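A one-line computation of the quality relation above; the specific-volume values are placeholders, not steam-table data.

```python
# Vapor quality from specific volumes: x = (v - v_f) / (v_g - v_f).
def vapor_quality(v, v_f, v_g):
    return (v - v_f) / (v_g - v_f)

x = vapor_quality(v=0.5, v_f=0.001, v_g=1.5)   # illustrative values in m^3/kg
print(f"quality = {x:.1%}")                    # 0% = saturated liquid, 100% = saturated vapor
```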
References
Phase transitions | Saturation dome | [
"Physics",
"Chemistry"
] | 601 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
35,539,188 | https://en.wikipedia.org/wiki/Multifunction%20tester | A multifunction tester or MFT is an electronic device used by electricians to test electrical circuits that use the "low" and "extra-low voltages" typically used by consumers in domestic, commercial and agricultural settings.
Multifunction testers are able to perform continuity tests (or low ohms resistance tests) and insulation resistance tests (or high ohms resistance tests) and they may also be able to perform earth fault loop impedance tests, prospective short-circuit current tests, earth electrode tests and RCD tests.
Electrical circuits | Multifunction tester | [
"Engineering"
] | 112 | [
"Electrical engineering",
"Electronic engineering",
"Electrical circuits"
] |
35,539,811 | https://en.wikipedia.org/wiki/Orbit%20modeling | Orbit modeling is the process of creating mathematical models to simulate motion of a massive body as it moves in orbit around another massive body due to gravity. Other forces such as gravitational attraction from tertiary bodies, air resistance, solar pressure, or thrust from a propulsion system are typically modeled as secondary effects. Directly modeling an orbit can push the limits of machine precision due to the need to model small perturbations to very large orbits. Because of this, perturbation methods are often used to model the orbit in order to achieve better accuracy.
Background
The study of orbital motion and mathematical modeling of orbits began with the first attempts to predict planetary motions in the sky, although in ancient times the causes remained a mystery. Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation.
Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for purposes of navigation at sea.
The complex motions of orbits can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is typically a conic section, and can be readily modeled with the methods of geometry. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between the Keplerian orbit and the actual motion of the body are caused by perturbations. These perturbations are caused by forces other than the gravitational effect between the primary and secondary body and must be modeled to create an accurate orbit simulation. Most orbit modeling approaches model the two-body problem and then add models of these perturbing forces and simulate these models over time. Perturbing forces may include gravitational attraction from other bodies besides the primary, solar wind, drag, magnetic fields, and propulsive forces.
Analytical solutions (mathematical expressions to predict the positions and motions at any future time) for simple two-body and three-body problems exist; none have been found for the n-body problem except for certain special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape.
Due to the difficulty in finding analytic solutions to most problems of interest, computer modeling and simulation is typically used to analyze orbital motion. A wide variety of software is available to simulate orbits and trajectories of spacecraft.
Keplerian orbit model
In its simplest form, an orbit model can be created by assuming that only two bodies are involved, both behave as spherical point-masses, and that no other forces act on the bodies. For this case the model is simplified to a Kepler orbit.
Keplerian orbits follow conic sections. The mathematical model of the orbit which gives the distance between a central body and an orbiting body can be expressed as:
Where:
is the distance
is the semi-major axis, which defines the size of the orbit
is the eccentricity, which defines the shape of the orbit
is the true anomaly, which is the angle between the current position of the orbiting object and the location in the orbit at it is closest to the central body (called the periapsis)
Alternately, the equation can be expressed as:
Where is called the semi-latus rectum of the curve. This form of the equation is particularly useful when dealing with parabolic trajectories, for which the semi-major axis is infinite.
An alternate approach uses Isaac Newton's law of universal gravitation as defined below:
F = G m₁ m₂ / r²
where:
F is the magnitude of the gravitational force between the two point masses
G is the gravitational constant
m₁ is the mass of the first point mass
m₂ is the mass of the second point mass
r is the distance between the two point masses
Making an additional assumption that the mass of the primary body is much greater than the mass of the secondary body and substituting in Newton's second law of motion results in the following differential equation:
d²r/dt² + (μ/r³) r = 0, with μ ≈ G m₁.
Solving this differential equation results in Keplerian motion for an orbit.
In practice, Keplerian orbits are typically only useful for first-order approximations, special cases, or as the base model for a perturbed orbit.
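A minimal sketch of the conic-section relation above; the semi-major axis and eccentricity below are illustrative low-Earth-orbit values.

```python
# Keplerian orbit: r = a (1 - e^2) / (1 + e cos(nu)).  Parameter values are illustrative.
import numpy as np

def kepler_radius(a, e, nu):
    """Distance from the focus for semi-major axis a, eccentricity e, true anomaly nu (rad)."""
    p = a * (1.0 - e ** 2)                # semi-latus rectum
    return p / (1.0 + e * np.cos(nu))

a, e = 7000e3, 0.1                         # metres, dimensionless
print(kepler_radius(a, e, 0.0))            # periapsis: a (1 - e)
print(kepler_radius(a, e, np.pi))          # apoapsis:  a (1 + e)
```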
Orbit simulation methods
Orbit models are typically propagated in time and space using special perturbation methods. This is performed by first modeling the orbit as a Keplerian orbit. Then perturbations are added to the model to account for the various perturbations that affect the orbit.
Special perturbations can be applied to any problem in celestial mechanics, as it is not limited to cases where the perturbing forces are small. Special perturbation methods are the basis of the most accurate machine-generated planetary ephemerides; see, for instance, the Jet Propulsion Laboratory Development Ephemeris.
Cowell's method
Cowell's method is a special perturbation method;
mathematically, for n mutually interacting bodies, Newtonian forces on body i from the other bodies j are simply summed thus,
r̈ᵢ = Σ over j ≠ i of G mⱼ (rⱼ − rᵢ) / rᵢⱼ³
where
r̈ᵢ is the acceleration vector of body i
G is the gravitational constant
mⱼ is the mass of body j
rᵢ and rⱼ are the position vectors of objects i and j
rᵢⱼ is the distance from object i to object j
with all vectors being referred to the barycenter of the system. This equation is resolved into components in x, y, z and these are integrated numerically to form the new velocity and position vectors as the simulation moves forward in time. The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large.
Another disadvantage is that in systems with a dominant central body, such as the Sun, it is necessary to carry many significant digits in the arithmetic because of the large difference in the forces of the central body and the perturbing bodies.
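A hedged sketch of Cowell's method as summarized above: sum the pairwise Newtonian accelerations of all bodies about the barycenter and step them forward numerically. The leapfrog integrator and the way bodies are passed in are illustrative choices, not prescribed by the method.

```python
# Cowell's method sketch: sum pairwise Newtonian accelerations over all bodies and
# integrate numerically.  The leapfrog (kick-drift-kick) stepping is an illustrative choice.
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def accelerations(positions, masses):
    acc = np.zeros_like(positions)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r_ij = positions[j] - positions[i]
                acc[i] += G * masses[j] * r_ij / np.linalg.norm(r_ij) ** 3
    return acc

def step(positions, velocities, masses, dt):
    """Advance barycentric positions and velocities by one time step dt."""
    v_half = velocities + 0.5 * dt * accelerations(positions, masses)
    new_pos = positions + dt * v_half
    new_vel = v_half + 0.5 * dt * accelerations(new_pos, masses)
    return new_pos, new_vel
```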
Encke's method
Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time.
Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with resulting lesser errors), and the method is much less affected by extreme perturbations than Cowell's method. Its disadvantage is complexity; it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as rectification.
Letting ρ be the radius vector of the osculating orbit, r the radius vector of the perturbed orbit, and δr = r − ρ the variation from the osculating orbit,
δr̈ = r̈ − ρ̈, where the equations of motion of r and ρ are
r̈ = a_per − (μ/r³) r for the perturbed orbit and
ρ̈ = −(μ/ρ³) ρ for the osculating orbit,
where μ = G(M + m) is the gravitational parameter with M and m the masses of the central body and the perturbed body, a_per is the perturbing acceleration, and r and ρ are the magnitudes of r and ρ.
Substituting the equations of motion into the expression for δr̈ gives
δr̈ = a_per + μ (ρ/ρ³ − r/r³),
which, in theory, could be integrated twice to find δr. Since the osculating orbit is easily calculated by two-body methods, ρ and ρ̈ are accounted for and δr can be solved. In practice, the quantity in the brackets, ρ/ρ³ − r/r³, is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits.
Sperling–Burdet method
In 1991 Victor R. Bond and Michael F. Fraietta created an efficient and highly accurate method for solving the two-body perturbed problem. This method uses the linearized and regularized differential equations of motion derived by Hans Sperling and a perturbation theory based on these equations developed by C.A. Burdet in the year 1864. In 1973, Bond and Hanssen improved Burdet's set of differential equations by using the total energy of the perturbed system as a parameter instead of the two-body energy and by reducing the number of elements to 13. In 1989 Bond and Gottlieb embedded the Jacobian integral, which is a constant when the potential function is explicitly dependent upon time as well as position in the Newtonian equations. The Jacobian constant was used as an element to replace the total energy in a reformulation of the differential equations of motion. In this process, another element which is proportional to a component of the angular momentum is introduced. This brought the total number of elements back to 14. In 1991, Bond and Fraietta made further revisions by replacing the Laplace vector with another vector integral as well as another scalar integral which removed small secular terms which appeared in the differential equations for some of the elements.
The Sperling–Burdet method is executed in a 5 step process as follows:
Step 1: Initialization
Given an initial position r₀, an initial velocity v₀, and an initial time t₀, the following variables are initialized:
Perturbations due to perturbing masses, defined as and , are evaluated
Perturbations due to other accelerations, defined as , are evaluated
Step 2: Transform elements to coordinates
where are Stumpff functions
Step 3: Evaluate differential equations for the elements
Step 4: Integration
Here the differential equations are integrated over a period to obtain the element value at
Step 5: Advance
Set and return to step 2 until simulation stopping conditions are met.
Perturbations
Perturbing forces cause orbits to become perturbed from a perfect Keplerian orbit. Models for each of these forces are created and executed during the orbit simulation so their effects on the orbit can be determined.
Non-spherical gravity
The Earth is not a perfect sphere nor is mass evenly distributed within the Earth. This results in the point-mass gravity model being inaccurate for orbits around the Earth, particularly Low Earth orbits. To account for variations in gravitational potential around the surface of the Earth, the gravitational field of the Earth is modeled with spherical harmonics which are expressed through the equation:
where
is the gravitational parameter defined as the product of G, the universal gravitational constant, and the mass of the primary body.
is the unit vector defining the distance between the primary and secondary bodies, with being the magnitude of the distance.
represents the contribution to of the spherical harmonic of degree n and order m, which is defined as:
where:
is the mean equatorial radius of the primary body.
is the magnitude of the position vector from the center of the primary body to the center of the secondary body.
and are gravitational coefficients of degree n and order m. These are typically found through gravimetry measurements.
The unit vectors define a coordinate system fixed on the primary body. For the Earth, lies in the equatorial plane parallel to a line intersecting Earth's geometric center and the Greenwich meridian, points in the direction of the North polar axis, and
is referred to as a derived Legendre polynomial of degree n and order m. They are solved through the recurrence relation:
is sine of the geographic latitude of the secondary body, which is .
are defined with the following recurrence relation and initial conditions:
When modeling perturbations of an orbit around a primary body only the sum of the terms need to be included in the perturbation since the point-mass gravity model is accounted for in the term
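In practice the expansion above is often truncated; keeping only the J2 (Earth-oblateness) zonal term gives the commonly used perturbing acceleration sketched below. The closed-form J2 expression and the rounded constants are assumptions of this illustration rather than part of the general formulation in the text.

```python
# Perturbing acceleration from the J2 zonal term alone, a common truncation of the
# spherical-harmonic gravity field.  Constants are rounded, illustrative values.
import numpy as np

MU = 3.986004e14   # m^3/s^2, Earth's gravitational parameter
RE = 6.378137e6    # m, Earth's mean equatorial radius
J2 = 1.08263e-3    # dimensionless oblateness coefficient

def j2_acceleration(r_vec):
    x, y, z = r_vec
    r = np.linalg.norm(r_vec)
    k = -1.5 * J2 * MU * RE ** 2 / r ** 5
    zr2 = (z / r) ** 2
    return k * np.array([x * (1 - 5 * zr2),
                         y * (1 - 5 * zr2),
                         z * (3 - 5 * zr2)])
```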
Third-body perturbations
Gravitational forces from third bodies can cause perturbations to an orbit. For example, the Sun and Moon cause perturbations to Orbits around the Earth. These forces are modeled in the same way that gravity is modeled for the primary body by means of direct gravitational N-body simulations. Typically, only a spherical point-mass gravity model is used for modeling effects from these third bodies.
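For a point-mass third body, the perturbing acceleration is the difference between the third body's pull on the spacecraft and its pull on the primary; a sketch with an approximate lunar gravitational parameter and illustrative position vectors is below.

```python
# Point-mass third-body perturbation: attraction of the third body on the spacecraft
# minus its attraction on the primary.  mu and the position vectors are illustrative.
import numpy as np

def third_body_acceleration(r_sc, r_third, mu_third):
    """Positions are measured from the primary body's centre, in metres."""
    d = r_third - r_sc                                   # spacecraft -> third body
    return mu_third * (d / np.linalg.norm(d) ** 3
                       - r_third / np.linalg.norm(r_third) ** 3)

MU_MOON = 4.9028e12                                      # m^3/s^2, approximate
a = third_body_acceleration(np.array([7.0e6, 0.0, 0.0]),
                            np.array([3.844e8, 0.0, 0.0]), MU_MOON)
print(a)
```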
Some special cases of third-body perturbations have approximate analytic solutions. For example, perturbations for the right ascension of the ascending node and argument of perigee for a circular Earth orbit are:
where:
is the change to the right ascension of the ascending node in degrees per day.
is the change to the argument of perigee in degrees per day.
is the orbital inclination.
is the number of orbital revolutions per day.
Solar radiation
Solar radiation pressure causes perturbations to orbits. The magnitude of acceleration it imparts to a spacecraft in Earth orbit is modeled using the equation below:
where:
is the magnitude of acceleration in meters per second-squared.
is the cross-sectional area exposed to the Sun in meters-squared.
is the spacecraft mass in kilograms.
is the reflection factor which depends on material properties. for absorption, for specular reflection, and for diffuse reflection.
For orbits around the Earth, solar radiation pressure becomes a stronger force than drag above altitude.
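Since the article's own expression did not survive extraction, the sketch below uses one common textbook form of the solar-radiation-pressure acceleration, a = (S/c)(1 + q)A/m with S the solar flux near Earth; treat both the formula and the numbers as assumptions of this illustration.

```python
# One common form of solar-radiation-pressure acceleration near Earth:
# a = (S / c) * (1 + q) * A / m.  Formula choice and values are illustrative assumptions.
S = 1367.0      # W/m^2, approximate solar flux at 1 AU
C = 2.998e8     # m/s, speed of light

def srp_acceleration(area, mass, q):
    """area in m^2, mass in kg, q the reflection factor (0 for full absorption)."""
    return (S / C) * (1.0 + q) * area / mass

print(srp_acceleration(area=20.0, mass=1000.0, q=0.6))   # roughly 1e-7 m/s^2
```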
Propulsion
There are many different types of spacecraft propulsion. Rocket engines are one of the most widely used. The force of a rocket engine is modeled by the equation:
F = ṁ v_e = ṁ v_e-act + A_e (p_e − p_amb)
where:
ṁ = exhaust gas mass flow
v_e = effective exhaust velocity
v_e-act = actual jet velocity at nozzle exit plane
A_e = flow area at nozzle exit plane (or the plane where the jet leaves the nozzle if separated flow)
p_e = static pressure at nozzle exit plane
p_amb = ambient (or atmospheric) pressure
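A direct transcription of the thrust equation above into code; the engine numbers are placeholders, not data for any particular engine.

```python
# Rocket thrust: F = mdot * v_exit + (p_exit - p_amb) * A_exit.  Values are placeholders.
def rocket_thrust(mdot, v_exit, a_exit, p_exit, p_amb):
    return mdot * v_exit + (p_exit - p_amb) * a_exit

F = rocket_thrust(mdot=250.0, v_exit=3000.0, a_exit=1.5,
                  p_exit=70e3, p_amb=101.3e3)            # sea-level example
print(F)   # newtons; the pressure term is negative here because p_exit < p_amb
```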
Another possible method is a solar sail. Solar sails use radiation pressure in a way to achieve a desired propulsive force. The perturbation model due to the solar wind can be used as a model of propulsive force from a solar sail.
Drag
The primary non-gravitational force acting on satellites in low Earth orbit is atmospheric drag. Drag will act in opposition to the direction of velocity and remove energy from an orbit. The force due to drag is modeled by the following equation:
F_D = ½ ρ v² C_d A
where
F_D is the force of drag,
ρ is the density of the fluid,
v is the velocity of the object relative to the fluid,
C_d is the drag coefficient (a dimensionless parameter, e.g. 2 to 4 for most satellites),
A is the reference area.
Orbits with an altitude below generally have such high drag that the orbits decay too rapidly to give a satellite a sufficient lifetime to accomplish any practical mission. On the other hand, orbits with an altitude above have relatively small drag so that the orbit decays slow enough that it has no real impact on the satellite over its useful life. Density of air can vary significantly in the thermosphere where most low Earth orbiting satellites reside. The variation is primarily due to solar activity, and thus solar activity can greatly influence the force of drag on a spacecraft and complicate long-term orbit simulation.
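The drag model above, applied opposite to the velocity vector and divided by the spacecraft mass to give an acceleration; the density and spacecraft numbers are rough, illustrative values.

```python
# Atmospheric drag: F_D = 0.5 * rho * v^2 * C_d * A, directed against the relative velocity.
import numpy as np

def drag_acceleration(rho, v_rel, c_d, area, mass):
    speed = np.linalg.norm(v_rel)
    return -0.5 * rho * speed * c_d * area / mass * v_rel   # opposes the motion

a = drag_acceleration(rho=1e-12,          # kg/m^3, rough value for a few hundred km altitude
                      v_rel=np.array([7.7e3, 0.0, 0.0]),
                      c_d=2.2, area=10.0, mass=1000.0)
print(a)
```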
Magnetic fields
Magnetic fields can play a significant role as a source of orbit perturbation as was seen in the Long Duration Exposure Facility. Like gravity, the magnetic field of the Earth can be expressed through spherical harmonics as shown below:
where
is the magnetic field vector at a point above the Earth's surface.
represents the contribution to of the spherical harmonic of degree n and order m, defined as:
where:
is the mean equatorial radius of the primary body.
is the magnitude of the position vector from the center of the primary body to the center of the secondary body.
is a unit vector in the direction of the secondary body with its origin at the center of the primary body.
and are Gauss coefficients of degree n and order m. These are typically found through magnetic field measurements.
The unit vectors define a coordinate system fixed on the primary body. For the Earth, lies in the equatorial plane parallel to a line intersecting Earth's geometric center and the Greenwich meridian, points in the direction of the North polar axis, and
is referred to as a derived Legendre polynomial of degree n and order m. They are solved through the recurrence relation:
is defined as: 1 if m = 0, for and , and for and
is sine of the geographic latitude of the secondary body, which is .
are defined with the following recurrence relation and initial conditions:
See also
n-body problem
Orbital resonance
Osculating orbit
Perturbation (astronomy)
Sphere of influence (astrodynamics)
Two-body problem
Notes
References
External links
Gravity maps of the Earth
Orbital perturbations
Dynamical systems
Dynamics of the Solar System | Orbit modeling | [
"Physics",
"Astronomy",
"Mathematics"
] | 3,327 | [
"Dynamics of the Solar System",
"Mechanics",
"Solar System",
"Dynamical systems"
] |
35,542,982 | https://en.wikipedia.org/wiki/Xenia%20%28plants%29 | Xenia (also known as the xenia effect) in plants is the effect of pollen on seeds and fruit of the fertilized plant. The effect is separate from the contribution of the pollen towards the next generation.
The term was coined in 1881 by the botanist Wilhelm Olbers Focke to refer to effects on maternal tissues, including the seed coat and pericarp, but at that time endosperm was also thought to be a maternal tissue, and the term became closely associated with endosperm effects. The term metaxenia was later coined and is still sometimes used to describe the effects on purely maternal tissues.
Endosperm effects in the seed
One of the most familiar examples of xenia is the different colours that can be produced in maize (Zea mays) by assortment of alleles via individual pollen grains. Such maize cobs are cultivated for decorative purposes.
The endosperm tissue, which makes up most of the bulk of a maize seed, is not produced by the mother plant, but is the product of fertilization, and genetic factors carried by the pollen affect its colour. For example, a yellow-seeded race may have its yellow colour determined by a recessive allele. If it receives pollen from a purple-seeded race that has one copy of a dominant allele for purple colour and one copy of the recessive allele for yellow seed, the resulting cob will have some yellow and some purple seeds.
Qualities affected in the endosperm of sorghum may include starchiness, sweetness, waxiness, or other aspects.
Fruit-growth effects
The vigour of the seeds forming inside a fruit can affect the growth of the fruit itself. For example, in two plant species whose fruit ripen asynchronously (Vaccinium corymbosum and Amelanchier arborea) the fruit with more seeds ripened faster.
Genetically engineered crops
Because there is concern about pollen from genetically modified (GM) crops, male-sterile forms are being considered, particularly of maize. Male-fertile non-GM plants must then be grown with the GM crop to ensure pollination. In some cases, a xenia effect due to the genetic difference between the two strains has been observed that increases grain yield, and could make it financially viable to grow the male-sterile plants in such a mixture.
See also
Extranuclear inheritance
Maternal effect
Plasmid
References
Plant physiology | Xenia (plants) | [
"Biology"
] | 498 | [
"Plant physiology",
"Plants"
] |
35,543,722 | https://en.wikipedia.org/wiki/Macroscopic%20quantum%20phenomena | Macroscopic quantum phenomena are processes showing quantum behavior at the macroscopic scale, rather than at the atomic scale where quantum effects are prevalent. The best-known examples of macroscopic quantum phenomena are superfluidity and superconductivity; other examples include the quantum Hall effect, Josephson effect and topological order. Since 2000 there has been extensive experimental work on quantum gases, particularly Bose–Einstein condensates.
Between 1996 and 2016 six Nobel Prizes were given for work related to macroscopic quantum phenomena. Macroscopic quantum phenomena can be observed in superfluid helium and in superconductors, but also in dilute quantum gases, dressed photons such as polaritons and in laser light. Although these media are very different, they are all similar in that they show macroscopic quantum behavior, and in this respect they all can be referred to as quantum fluids.
Quantum phenomena are generally classified as macroscopic when the quantum states are occupied by a large number of particles (of the order of the Avogadro number) or the quantum states involved are macroscopic in size (up to kilometer-sized in superconducting wires).
Consequences of the macroscopic occupation
The concept of macroscopically occupied quantum states was introduced by Fritz London. In this section it will be explained what it means if a single state is occupied by a very large number of particles. We start with the wave function of the state written as
Ψ = Ψ₀ exp(iφ)
with Ψ₀ the amplitude and φ the phase. The wave function is normalized so that
∫ ΨΨ* dV = Nₛ,
with Nₛ the total number of particles in the state.
The physical interpretation of the quantity
ΨΨ* ΔV
depends on the number of particles. Fig. 1 represents a container with a certain number of particles with a small control volume ΔV inside. We check from time to time how many particles are in the control box. We distinguish three cases:
There is only one particle. In this case the control volume is empty most of the time. However, there is a certain chance to find the particle in it given by Eq. (). The probability is proportional to ΔV. The factor ΨΨ∗ is called the chance density.
If the number of particles is a bit larger there are usually some particles inside the box. We can define an average, but the actual number of particles in the box has relatively large fluctuations around this average.
In the case of a very large number of particles there will always be a lot of particles in the small box. The number will fluctuate but the fluctuations around the average are relatively small. The average number is proportional to ΔV and ΨΨ∗ is now interpreted as the particle density.
In quantum mechanics the particle probability flow density Jp (unit: particles per second per m2), also called probability current, can be derived from the Schrödinger equation to be
Jp = (1/2m) [Ψ* (−iħ∇ − qA) Ψ + cc]
with q the charge of the particle and A the vector potential; cc stands for the complex conjugate of the other term inside the brackets. For neutral particles q = 0; for superconductors q = −2e (with e the elementary charge), the charge of Cooper pairs. With Eq. () this becomes
Jp = (Ψ₀²/m) (ħ∇φ − qA).
If the wave function is macroscopically occupied the particle probability flow density becomes a particle flow density. We introduce the fluid velocity vs via the mass flow density
m Jp = ρₛ vs.
The density (mass per volume) is
ρₛ = m Ψ₀²,
so Eq. () results in
vs = (1/m) (ħ∇φ − qA).
This important relation connects the velocity, a classical concept, of the condensate with the phase of the wave function, a quantum-mechanical concept.
Superfluidity
At temperatures below the lambda point, helium shows the unique property of superfluidity. The fraction of the liquid that forms the superfluid component is a macroscopic quantum fluid. The helium atom is a neutral particle, so q = 0. Furthermore, when considering helium-4, the relevant particle mass is the mass of the helium-4 atom, m = m₄, so Eq. () reduces to
vs = (ħ/m₄) ∇φ.
For an arbitrary loop in the liquid, this gives
∮ vs · dl = (ħ/m₄) (φ_end − φ_begin).
Due to the single-valued nature of the wave function
φ_end − φ_begin = 2πn
with n an integer, we have
∮ vs · dl = (h/m₄) n.
The quantity
κ = h/m₄
is the quantum of circulation. For a circular motion with radius r,
∮ vs · dl = 2πr vs.
In case of a single quantum (n = 1),
vs = h / (2π m₄ r).
When superfluid helium is put in rotation, Eq. () will not be satisfied for all loops inside the liquid unless the rotation is organized around vortex lines (as depicted in Fig. 2). These lines have a vacuum core with a diameter of about 1 Å (which is smaller than the average particle distance). The superfluid helium rotates around the core with very high speeds. Just outside the core (r = 1 Å), the velocity is as large as 160 m/s. The cores of the vortex lines and the container rotate as a solid body around the rotation axes with the same angular velocity. The number of vortex lines increases with the angular velocity (as shown in the upper half of the figure). Note that the two right figures both contain six vortex lines, but the lines are organized in different stable patterns.
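A quick numerical check of the figure quoted above (about 160 m/s just outside a ~1 Å vortex core), using κ = h/m₄; the constants below are rounded.

```python
# Velocity around a single-quantum vortex line: v = kappa / (2 pi r), kappa = h / m4.
import math

H = 6.626e-34      # J s, Planck constant (rounded)
M4 = 6.646e-27     # kg, helium-4 atomic mass (rounded)

kappa = H / M4                         # quantum of circulation, ~1.0e-7 m^2/s
v = kappa / (2 * math.pi * 1e-10)      # just outside a core of radius ~1 angstrom
print(v)                               # ~160 m/s, consistent with the value quoted above
```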
Superconductivity
In the original paper Ginzburg and Landau observed the existence of two types of superconductors depending
on the energy of the interface between the normal and superconducting states. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957.
He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices. For this and related work, he was awarded the Nobel Prize in 2003 with Ginzburg and Leggett.
Fluxoid quantization
For superconductors the bosons involved are the so-called Cooper pairs which are quasiparticles formed by two electrons. Hence m = 2me and q = −2e where me and e are the mass of an electron and the elementary charge. It follows from Eq. () that
Integrating Eq. () over a closed loop gives
As in the case of helium we define the vortex strength
and use the general relation
where Φ is the magnetic flux enclosed by the loop. The so-called fluxoid is defined by
In general the values of κ and Φ depend on the choice of the loop. Due to the single-valued nature of the wave function and Eq. () the fluxoid is quantized
The unit of quantization is called the flux quantum
Φ₀ = h/(2e) = 2.067833848... × 10⁻¹⁵ Wb.
The flux quantum plays a very important role in superconductivity. The earth magnetic field is very small (about 50 μT), but it generates one flux quantum in an area of 6 μm by 6 μm. So, the flux quantum is very small. Yet it was measured to an accuracy of 9 digits as shown in Eq. (). Nowadays the value given by Eq. () is exact by definition.
In Fig. 3 two situations are depicted of superconducting rings in an external magnetic field. One case is a thick-walled ring and in the other case the ring is also thick-walled, but is interrupted by a weak link. In the latter case we will meet the famous Josephson relations. In both cases we consider a loop inside the material. In general a superconducting circulation current will flow in the material. The total magnetic flux in the loop is the sum of the applied flux Φa and the self-induced flux Φs induced by the circulation current
Thick ring
The first case is a thick ring in an external magnetic field (Fig. 3a). The currents in a superconductor only flow in a thin layer at the surface. The thickness of this layer is determined by the so-called London penetration depth. It is of μm size or less. We consider a loop far away from the surface so that vs = 0 everywhere so κ = 0. In that case the fluxoid is equal to the magnetic flux (Φv = Φ). If vs = 0 Eq. () reduces to
Taking the rotation gives
Using the well-known relations and shows that the magnetic field in the bulk of the superconductor is zero as well. So, for thick rings, the total magnetic flux in the loop is quantized according to
Interrupted ring, weak links
Weak links play a very important role in modern superconductivity. In most cases weak links are oxide barriers between two superconducting thin films, but it can also be a crystal boundary (in the case of high-Tc superconductors). A schematic representation is given in Fig. 4. Now consider the ring which is thick everywhere except for a small section where the ring is closed via a weak link (Fig. 3b). The velocity is zero except near the weak link. In these regions the velocity contribution to the total phase change in the loop is given by (with Eq. ())
The line integral is over the contact from one side to the other in such a way that the end points of the line are well inside the bulk of the superconductor where . So the value of the line integral is well-defined (e.g. independent of the choice of the end points). With Eqs. (), (), and ()
Without proof we state that the supercurrent through the weak link is given by the so-called DC Josephson relation
iₛ = i_c sin(Δφ),
with i_c the critical current of the weak link and Δφ the phase difference over the link. The voltage over the contact is given by the AC Josephson relation
V = (ħ/2e) dΔφ/dt.
The names of these relations (DC and AC relations) are misleading since they both hold in DC and AC situations. In the steady state (constant Δφ) Eq. () shows that V = 0 while a nonzero current flows through the junction. In the case of a constant applied voltage (voltage bias) Eq. () can be integrated easily and gives
Δφ = Δφ₀ + (2eV/ħ) t.
Substitution in Eq. () gives
iₛ = i_c sin(Δφ₀ + (2eV/ħ) t).
This is an AC current. The frequency
ν = 2eV/h
is called the Josephson frequency. One μV gives a frequency of about 500 MHz. By using Eq. () the flux quantum is determined with the high precision as given in Eq. ().
The energy difference of a Cooper pair, moving from one side of the contact to the other, is ΔE = 2eV. With this expression Eq. () can be written as ΔE = hν, which is the relation for the energy of a photon with frequency ν.
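A quick check of the statement that one microvolt corresponds to roughly 500 MHz, using ν = 2eV/h with rounded constants.

```python
# AC Josephson relation: a constant voltage V gives an oscillation at nu = 2 e V / h.
E = 1.602e-19    # C, elementary charge (rounded)
H = 6.626e-34    # J s, Planck constant (rounded)

def josephson_frequency(voltage):
    return 2 * E * voltage / H

print(josephson_frequency(1e-6))   # ~4.8e8 Hz for one microvolt, i.e. about 500 MHz
```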
The AC Josephson relation (Eq. ()) can be easily understood in terms of Newton's law, (or from one of the London equation's). We start with Newton's law
Substituting the expression for the Lorentz force and using the general expression for the co-moving time derivative gives
Eq. () gives so
Take the line integral of this expression. In the end points the velocities are zero so the ∇v2 term gives no contribution. Using and Eq. (), with and , gives Eq. ().
DC SQUID
Fig. 5 shows a so-called DC SQUID. It consists of two superconductors connected by two weak links. The fluxoid quantization of a loop through the two bulk superconductors and the two weak links demands
If the self-inductance of the loop can be neglected the magnetic flux in the loop Φ is equal to the applied flux
with B the magnetic field, applied perpendicular to the surface, and A the surface area of the loop. The total supercurrent is given by
Substitution of Eq() in () gives
Using a well known geometrical formula we get
Since the sine function can vary only between −1 and +1, a steady solution is only possible if the applied current is below a critical current given by
I_max = 2 i_c |cos(π Φa/Φ₀)|.
Note that the critical current is periodic in the applied flux with period Φ₀. The dependence of the critical current on the applied flux is depicted in Fig. 6. It has a strong resemblance with the interference pattern generated by a laser beam behind a double slit. In practice the critical current is not zero at half integer values of the flux quantum of the applied flux. This is due to the fact that the self-inductance of the loop cannot be neglected.
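A sketch of the flux dependence of the DC-SQUID critical current for negligible self-inductance; the junction critical current i_c below is an illustrative value.

```python
# DC SQUID critical current versus applied flux (negligible self-inductance):
# I_max(Phi) = 2 * i_c * |cos(pi * Phi / Phi_0)|.  i_c below is illustrative.
import numpy as np

PHI0 = 2.067833848e-15   # Wb, magnetic flux quantum h / 2e

def squid_critical_current(applied_flux, i_c=1e-6):
    return 2 * i_c * np.abs(np.cos(np.pi * applied_flux / PHI0))

flux = np.linspace(0.0, 3.0 * PHI0, 7)
print(squid_critical_current(flux))   # periodic in the applied flux with period Phi_0
```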
Type II superconductivity
Type-II superconductivity is characterized by two critical fields called Bc1 and Bc2. At a magnetic field Bc1 the applied magnetic field starts to penetrate the sample, but the sample is still superconducting. Only at a field of Bc2 is the sample completely normal. For fields in between Bc1 and Bc2 magnetic flux penetrates the superconductor in well-organized patterns, the so-called Abrikosov vortex lattice similar to the pattern shown in Fig. 2. A cross section of the superconducting plate is given in Fig. 7. Far away from the plate the field is homogeneous, but in the material superconducting currents flow which squeeze the field in bundles of exactly one flux quantum. The typical field in the core is as big as 1 tesla. The currents around the vortex core flow in a layer of about 50 nm with current densities on the order of 15 × 10¹² A/m². That corresponds to 15 million amperes in a wire of one mm².
Dilute quantum gases
The classical types of quantum systems, superconductors and superfluid helium, were discovered in the beginning of the 20th century. Near the end of the 20th century, scientists discovered how to create very dilute atomic or molecular gases, cooled first by laser cooling and then by evaporative cooling. They are trapped using magnetic fields or optical dipole potentials in ultrahigh vacuum chambers. Isotopes which have been used include rubidium (Rb-87 and Rb-85), strontium (Sr-87, Sr-86, and Sr-84) potassium (K-39 and K-40), sodium (Na-23), lithium (Li-7 and Li-6), and hydrogen (H-1). The temperatures to which they can be cooled are as low as a few nanokelvin. The developments have been very fast in the past few years. A team of NIST and the University of Colorado has succeeded in creating and observing vortex quantization in these systems. The concentration of vortices increases with the angular velocity of the rotation, similar to the case of superfluid helium and superconductivity.
See also
Charge density wave
Chiral magnetic effect
Domain wall (magnetism)
Flux pinning
Flux quantization
Ginzburg–Landau theory
Husimi Q representation
Josephson effect
Magnetic flux quantum
Meissner effect
N-slit interferometric equation
Quantum boomerang effect
Quantum turbulence
Quantum vortex
Schrödinger's cat paradox
Second sound
SQUID
Superconductivity
Topological defect
Type-I superconductor
Type-II superconductor
References and footnotes
Atomic, molecular, and optical physics
Condensed matter physics
Exotic matter
Phases of matter
Quantum phases | Macroscopic quantum phenomena | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,278 | [
"Quantum phases",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
" molecular",
"Exotic matter",
"Atomic",
"Matter",
" and optical physics"
] |
34,345,534 | https://en.wikipedia.org/wiki/State%20Hydrological%20Institute | The State Hydrological Institute (SHI; , ГГИ) is a research institute of Russia in the field of developing methods for locating hydrological networks and river hydrometry, creating modern models and methods for accelerated measurements of water discharge, runoff accounting at hydroelectric power plants and other hydrology structures.
SHI was created on the initiative of the Russian Academy of Sciences in October 1919.
The institute cooperates with international organizations such as UNESCO and the World Meteorological Organization.
Structure
SHI has about 350 employees, of whom about 60 research staff hold doctoral degrees. The institute currently includes the following departments:
Department of Valday for hydrometeorological experimental research
Major experimental laboratory in the town of Ilichovo
Department of runoff calculation and water management problems
Department of channel processes
Department of metrology and standardization
Department of hydrophysics
Department of hydroecology research
Department of water resources and water balance
Department of scientific-technical information
Department of flooding research
Department of climate change research
Department of remote sensing methods and geoinformation systems
Department of river network
Department of water cadastre
Notable staff
Vladimir Wiese (1886–1954), a Russian and Soviet oceanographer and explorer of the Arctic
Mikhail Budyko (1920–2001), a Soviet and Russian climatologist
Valeryan Uryvaev (1908–1968), a Soviet hydrologist and director of the institute from 1942 to 1968
Igor Shiklomanov (1939–2010), a Soviet and Russian hydrologist and director of the institute from 1981 to 2010
Oleg Anisimov (1957–), a Russian climate scientist
References
External links
1919 establishments in Russia
Hydraulic engineering
Hydrology organizations
Research institutes established in 1919
Research institutes in Saint Petersburg
Research institutes in the Soviet Union
Water in the Soviet Union
Meteorology in the Soviet Union | State Hydrological Institute | [
"Physics",
"Engineering",
"Environmental_science"
] | 354 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydrology organizations",
"Hydraulic engineering"
] |
34,346,707 | https://en.wikipedia.org/wiki/Holomorphic%20Embedding%20Load-flow%20method | The Holomorphic Embedding Load-flow Method (HELM) is a solution method for the power-flow equations of electrical power systems. Its main features are that it is direct (that is, non-iterative) and that it mathematically guarantees a consistent selection of the correct operative branch of the multivalued problem, also signalling the condition of voltage collapse when there is no solution. These properties are relevant not only for the reliability of existing off-line and real-time applications, but also because they enable new types of analytical tools that would be impossible to build with existing iterative load-flow methods (due to their convergence problems). An example of this would be decision-support tools providing validated action plans in real time.
The HELM load-flow algorithm was invented by Antonio Trias and has been granted two US Patents.
A detailed description was presented at the 2012 IEEE PES General Meeting and subsequently published.
The method is founded on advanced concepts and results from complex analysis, such as holomorphicity, the theory of algebraic curves, and analytic continuation. However, the numerical implementation is rather straightforward as it uses standard linear algebra and the Padé approximation. Additionally, since the limiting part of the computation is the factorization of the admittance matrix and this is done only once, its performance is competitive with established fast-decoupled loadflows. The method is currently implemented into industrial-strength real-time and off-line packaged EMS applications.
Background
The load-flow calculation is one of the most fundamental components in the analysis of power systems and is the cornerstone for almost all other tools used in power system simulation and management. The load-flow equations can be written in the following general form:
where the given (complex) parameters are the admittance matrix, the bus shunt admittances, and the bus power injections representing constant-power loads and generators.
To solve this non-linear system of algebraic equations, traditional load-flow algorithms were developed based on three iterative techniques: the Gauss–Seidel method, which has poor convergence properties but very low memory requirements and is straightforward to implement; the full Newton–Raphson method, which has fast (quadratic) iterative convergence properties but is computationally costly; and the Fast Decoupled Load-Flow (FDLF) method, which is based on Newton–Raphson but greatly reduces its computational cost by means of a decoupling approximation that is valid in most transmission networks. Many other incremental improvements exist; however, the underlying technique in all of them is still an iterative solver, either of Gauss–Seidel or of Newton type. There are two fundamental problems with all iterative schemes of this type. On the one hand, there is no guarantee that the iteration will always converge to a solution; on the other, since the system has multiple solutions, it is not possible to control which solution will be selected. As the power system approaches the point of voltage collapse, spurious solutions get closer to the correct one, and the iterative scheme may be easily attracted to one of them because of the phenomenon of Newton fractals: when the Newton method is applied to complex functions, the basins of attraction for the various solutions show fractal behavior. As a result, no matter how close the chosen initial point of the iterations (seed) is to the correct solution, there is always some non-zero chance of straying off to a different solution. These fundamental problems of iterative load-flow methods have been extensively documented; a simple illustration for the two-bus model is provided in the references. Although there exist homotopic continuation techniques that alleviate the problem to some degree, the fractal nature of the basins of attraction precludes a 100% reliable method for all electrical scenarios.
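As a concrete illustration of this seed dependence, the following sketch solves a hypothetical two-bus case (one slack bus, one load bus) with a plain Newton iteration in real variables; the impedance and load values are invented for the example and are not from the article. The same equations have a high-voltage and a low-voltage solution, and which one is returned depends entirely on the starting point:

```python
import numpy as np

# Illustrative two-bus case (slack bus V0 = 1, one load bus); values are invented.
z = 0.01 + 0.10j          # line impedance (pu)
y = 1.0 / z               # line admittance
S = 0.5 + 0.20j           # load drawn at bus 1 (pu)

def mismatch(x):
    """Real/imaginary power-balance residual at bus 1 for V1 = x[0] + j x[1]."""
    V = x[0] + 1j * x[1]
    r = np.conj(V) * y * (V - 1.0) + np.conj(S)   # = 0 at a load-flow solution
    return np.array([r.real, r.imag])

def newton(x0, tol=1e-12, itmax=50):
    x = np.array(x0, dtype=float)
    for _ in range(itmax):
        f = mismatch(x)
        if np.linalg.norm(f) < tol:
            break
        # finite-difference 2x2 Jacobian, adequate for a toy example
        J = np.empty((2, 2))
        h = 1e-7
        for j in range(2):
            dx = np.zeros(2); dx[j] = h
            J[:, j] = (mismatch(x + dx) - f) / h
        x = x - np.linalg.solve(J, f)
    return x[0] + 1j * x[1]

# Which root Newton finds depends on the seed.
print("flat start (V = 1)      ->", newton([1.0, 0.0]))     # operative high-voltage root
print("seed near the low root  ->", newton([0.03, -0.05]))  # spurious low-voltage root
```

HELM avoids exactly this ambiguity by construction, since its analytic continuation always selects the branch connected to the trivial no-load solution.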
The key differential advantage of the HELM is that it is fully deterministic and unambiguous: it guarantees that the solution always corresponds to the correct operative solution, when it exists; and it signals the non-existence of the solution when the conditions are such that there is no solution (voltage collapse). Additionally, the method is competitive with the FDNR method in terms of computational cost. It brings a solid mathematical treatment of the load-flow problem that provides new insights not previously available with the iterative numerical methods.
Methodology and applications
HELM is grounded on a rigorous mathematical theory, and in practical terms it could be summarized as follows:
Define a specific (holomorphic) embedding for the equations in terms of a complex parameter , such that for the system has an obvious correct solution, and for one recovers the original problem.
Given this holomorphic embedding, it is now possible to compute univocally power series for voltages as analytic functions of . The correct load-flow solution at will be obtained by analytic continuation of the known correct solution at .
Perform the analytic continuation using algebraic approximants, which in this case are guaranteed to either converge to the solution if it exists, or not converge if the solution does not exist (voltage collapse).
HELM provides a solution to a long-standing problem of all iterative load-flow methods, namely the unreliability of the iterations in finding the correct solution (or any solution at all).
This makes HELM particularly suited to real-time applications, and mandatory for any EMS software based on exploratory algorithms, such as contingency analysis, as well as for tools that, under alert and emergency conditions, resolve operational-limit violations and guide restoration through validated action plans.
Holomorphic embedding
For the purposes of the discussion, we will omit the treatment of controls, but the method can accommodate all types of controls. For the constraint equations imposed by these controls, an appropriate holomorphic embedding must also be defined.
The method uses an embedding technique by means of a complex parameter .
The first key ingredient in the method lies in requiring the embedding to be holomorphic, that is, that the system of equations for voltages is turned into a system of equations for functions in such a way that the new system defines as holomorphic functions (i.e. complex analytic) of the new complex variable . The aim is to be able to use the process of analytic continuation which will allow the calculation of at . Looking at equations (), a necessary condition for the embedding to be holomorphic is that is replaced under the embedding with , not . This is because complex conjugation itself is not a holomorphic function. On the other hand, it is easy to see that the replacement does allow the equations to define a holomorphic function . However, for a given arbitrary embedding, it remains to be proven that is indeed holomorphic. Taking into account all these considerations, an embedding of this type is proposed:
With this choice, at s = 0 the right-hand-side terms become zero (provided that the denominator is not zero); this corresponds to the case where all the injections are zero, which has a well-known and simple operational solution: all voltages are equal and all flow intensities are zero. Therefore, this choice of embedding provides a well-known operational solution at s = 0.
Now, using classical techniques for variable elimination in polynomial systems (results from the theory of resultants and Gröbner bases), it can be proven that equations () do in fact define as holomorphic functions. More significantly, they define as algebraic curves. It is this specific fact, which holds because the embedding is holomorphic, that guarantees the uniqueness of the result. The solution at determines uniquely the solution everywhere (except on a finite number of branch cuts), thus getting rid of the multi-valuedness of the load-flow problem.
The technique to obtain the coefficients for the power series expansion (on ) of voltages is quite straightforward, once one realizes that equations () can be used to obtain them order after order. Consider the power series expansion for and . By substitution into equations () and identifying terms at each order in , one obtains:
It is then straightforward to solve the sequence of linear systems () successively order after order, starting from . Note that the coefficients of the expansions for and are related by the simple convolution formulas derived from the following identity:
so that the right-hand side in () can always be calculated from the solution of the system at the previous order. Note also how the procedure works by solving just linear systems, in which the matrix remains constant.
A more detailed discussion about this procedure is offered in Ref.
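The following sketch carries out this order-by-order recursion for a hypothetical two-bus case (slack bus plus one PQ bus), using one commonly cited tutorial form of the embedding, y(V(s) − 1) = s S*/V*(s*); the line and load values are illustrative, not taken from the article or the referenced papers:

```python
import numpy as np

# Hypothetical two-bus system: slack bus (V0 = 1) feeding one PQ bus through a
# line of series admittance y; S1 is the complex power injected at the PQ bus
# (negative for a load).  All values are illustrative.
y  = 1.0 / (0.01 + 0.10j)          # line admittance (pu)
S1 = -(0.5 + 0.2j)                 # injected power at bus 1 (pu), i.e. a load

N = 30                             # number of power-series terms
c = np.zeros(N, dtype=complex)     # coefficients of V1(s)   = sum c[n] s^n
d = np.zeros(N, dtype=complex)     # coefficients of 1/V1(s) = sum d[n] s^n

# Embedded equation (one tutorial form): y*(V1(s) - 1) = s*conj(S1)*W(s), where
# W(s) is the series 1/V1 with conjugated coefficients.  At s = 0 this yields
# the trivial no-load solution V1(0) = 1.
c[0] = 1.0
d[0] = 1.0 / c[0]
for n in range(1, N):
    c[n] = np.conj(S1) * np.conj(d[n - 1]) / y          # order-n balance
    d[n] = -np.dot(c[1:n + 1], d[n - 1::-1]) / c[0]     # convolution for 1/V1

V1 = c.sum()                       # naive evaluation of the series at s = 1
mismatch = np.conj(V1) * y * (V1 - 1.0) - np.conj(S1)
print("V1 at s = 1:", V1)
print("|power-balance mismatch|:", abs(mismatch))
```

The convolution step for the coefficients of 1/V(s) is exactly the identity mentioned above, and the last two lines check the power mismatch of the recovered solution.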
Analytic continuation
Once the power series at are calculated to the desired order, the problem of calculating them at becomes one of analytic continuation. It should be strongly remarked that this does not have anything in common with the techniques of homotopic continuation. Homotopy is powerful since it only makes use of the concept of continuity and thus it is applicable to general smooth nonlinear systems, but on the other hand it does not always provide a reliable method to approximate the functions (as it relies on iterative schemes such as Newton-Raphson).
It can be proven that algebraic curves are complete global analytic functions, that is, knowledge of the power series expansion at one point (the so-called germ of the function) uniquely determines the function everywhere on the complex plane, except on a finite number of branch cuts. Stahl's extremal domain theorem further asserts that there exists a maximal domain for the analytic continuation of the function, which corresponds to the choice of branch cuts with minimal logarithmic capacity measure. In the case of algebraic curves the number of cuts is finite, therefore it would be feasible to find maximal continuations by finding the combination of cuts with minimal capacity. For further improvements, Stahl's theorem on the convergence of Padé approximants states that the diagonal and supra-diagonal Padé approximants (or equivalently, the continued-fraction approximants to the power series) converge to the maximal analytic continuation. The zeros and poles of the approximants remarkably accumulate on the set of branch cuts having minimal capacity.
These properties confer the load-flow method with the ability to unequivocally detect the condition of voltage collapse: the algebraic approximations are guaranteed to either converge to the solution if it exists, or not converge if the solution does not exist.
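The practical effect of using Padé approximants instead of the raw power series can be seen on a deliberately simple toy function with a branch cut; this only illustrates the numerical technique, not HELM itself, and the test function and orders below are arbitrary choices:

```python
import numpy as np
from math import comb

def pade_eval(c, L, M, s):
    """Evaluate the [L/M] Pade approximant of sum c[n] s^n at the point s."""
    # Denominator coefficients b[1..M] (with b[0] = 1) from the standard linear
    # system that cancels the Taylor terms of order L+1 .. L+M.
    A = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -np.array([c[L + i] for i in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator coefficients follow by convolution of c with b.
    a = [sum(b[j] * c[n - j] for j in range(0, min(n, M) + 1)) for n in range(L + 1)]
    num = sum(a[n] * s ** n for n in range(L + 1))
    den = sum(b[m] * s ** m for m in range(M + 1))
    return num / den

# Taylor coefficients of f(s) = 1/sqrt(1 + 2s) about s = 0; the radius of
# convergence is 1/2, so the raw series is useless at s = 1.
N = 21
c = np.array([comb(2 * n, n) * (-0.5) ** n for n in range(N)], dtype=float)

s = 1.0
print("truncated Taylor sum :", np.polyval(c[::-1], s))    # diverges badly
print("[10/10] Pade value   :", pade_eval(c, 10, 10, s))   # close to 1/sqrt(3)
print("exact 1/sqrt(3)      :", 1 / np.sqrt(3.0))
```

The truncated Taylor sum fails at s = 1 because that point lies outside the series' disc of convergence, whereas the diagonal Padé approximant, built from the same coefficients, reproduces the function to many digits.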
See also
Power-flow study
Power system simulation
Unit commitment problem in electrical power production
Notes
References
Power engineering | Holomorphic Embedding Load-flow method | [
"Engineering"
] | 2,162 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
34,353,944 | https://en.wikipedia.org/wiki/Nitroxide-mediated%20radical%20polymerization | Nitroxide-mediated radical polymerization is a method of radical polymerization that makes use of an nitroxide initiator to generate polymers with well controlled stereochemistry and a very low dispersity. It is a type of reversible-deactivation radical polymerization.
Alkoxyamine Initiators
The initiating materials for nitroxide-mediated radical polymerization (NMP) are a family of compounds referred to as alkoxyamines. An alkoxyamine can essentially be viewed as an alcohol bound to a secondary amine by an N-O single bond. The utility of this functional group is that under certain conditions, homolysis of the C-O bond can occur, yielding a stable radical in the form of a 2-center 3-electron N-O system and a carbon radical which serves as an initiator for radical polymerization. For the purposes of NMP, the R groups attached to the nitrogen are always bulky, sterically hindering groups, and the R group in the O-position, which forms the initiating carbon radical, is generally benzylic for polymerization to occur successfully. NMP allows for excellent control of chain length and structure, as well as a relative lack of true termination that allows polymerization to continue as long as there is available monomer. Because of this it is said to be "living".
Persistent radical effect
The living nature of NMP is due to the persistent radical effect (PRE). The PRE is a phenomenon observable in some radical systems in which one product is formed to the near exclusion of other radical couplings, because one of the radical species is particularly stable and builds up to greater and greater concentrations as the reaction progresses, while the other is transient, reacting quickly either with itself in a termination step or with the persistent radical to form the desired product. As time goes on, a higher concentration of the persistent radical is present; it does not couple with itself, but couples reversibly with the transient radical, meaning that any transient radical still present tends to couple with the persistent radical rather than with itself, due to the persistent radical's greater availability. This leads to a greater proportion of cross-coupling than self-coupling in radical species.
In the case of a nitroxide-mediated polymerization reaction, the persistent radical is the nitroxide species and the transient radical is always the carbon radical. This leads to repeated coupling of the nitroxide to the growing end of the polymer chain, which would ordinarily be considered a termination step, but is in this case reversible. Because of the high rate of coupling of the nitroxide to the growing chain end, there is little coupling of two active growing chains, which would be an irreversible terminating step limiting the chain length. The nitroxide binds and unbinds to the growing chain, protecting it from termination steps. This ensures that any available monomer can be easily scavenged by active chains. Because this polymerization process does not naturally self-terminate, this polymerization process is described as “living,” as the chains continue to grow under suitable reaction conditions whenever there is reactive monomer to “feed” them. Because of the PRE, it can be assumed that at any given time, almost all of the growing chains are “capped” by a mediating nitroxide, meaning that they dissociate and grow at very similar rates, creating a largely uniform chain length and structure.
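The buildup of the persistent species can be illustrated with a toy rate-equation model of the three reactions described above (reversible dissociation of the alkoxyamine, reversible cross-coupling, irreversible self-termination of the transient radical). The rate constants and initial concentration below are rough order-of-magnitude guesses chosen only to show the qualitative behaviour, not measured values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy rate-equation model of the persistent radical effect (illustrative values):
#   alkoxyamine A  <->  transient radical R + persistent nitroxide Y   (kd, kc)
#   R + R -> dead chains                                               (kt)
kd, kc, kt = 1.0e-3, 1.0e7, 1.0e8       # s^-1, M^-1 s^-1, M^-1 s^-1

def rates(t, y):
    A, R, Y = y
    diss = kd * A          # homolysis of the alkoxyamine
    rec  = kc * R * Y      # reversible cross-coupling (deactivation)
    term = kt * R * R      # irreversible self-termination of transient radicals
    return [-diss + rec, diss - rec - 2 * term, diss - rec]

sol = solve_ivp(rates, (0.0, 1.0e4), [1.0e-2, 0.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-14)
A, R, Y = sol.y[:, -1]
print(f"after {sol.t[-1]:.0f} s:  [A] = {A:.3e} M, [R] = {R:.3e} M, [Y] = {Y:.3e} M")
```

Even in this crude model the persistent nitroxide ends up several orders of magnitude more abundant than the transient carbon radical, which is the imbalance that keeps irreversible termination rare and the polymerization "living".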
Nitroxide stability
As stated above, nitroxide radicals are effective mediators of well-controlled radical polymerization because they are quite stable, allowing them to act as persistent radicals in a reaction mixture. This stability is a result of their unique structure. In most diagrams the radical is depicted on the oxygen, but another resonance structure exists, in which the radical is on the nitrogen and the nitrogen has a double bond to the oxygen, that is more helpful in explaining their stability. In addition to this resonance stability, nitroxides used in NMRP always contain bulky, sterically hindering groups in the R1 and R2 positions. The significant steric bulk of these substituents entirely prevents radical coupling in the N-centered resonance form while significantly reducing it in the O-centered form. These bulky groups contribute stability, but only if there is no resonance provided by allyl or aromatic groups α to the N. These result in decreased stability of the nitroxide, presumably because they offer less sterically hindered sites for radical coupling to take place. The resulting inactivity of the radical makes homolytic cleavage of the alkoxyamine quite fast in more sterically hindered species.
Nitroxide choice
The choice of a specific nitroxide species to use has a large effect on the efficacy of an attempted polymerization. An effective polymerization (fast rate of chain growth, consistent chain length) results from a nitroxide with a fast C-O homolysis and relatively few side reactions. A more polar solvent lends itself better to C-O homolysis, so polar solvents which cannot bind to a labile nitroxide are the most effective for NMP. It is generally agreed that the structural factor that has the greatest effect on the ability of a nitroxide to mediate a radical polymerization is steric bulk. Generally speaking, greater steric bulk on the nitroxide leads to greater strain on the alkoxyamine, leading to the most easily broken bond, the C-O single bond, cleaving homolytically.
Ring size
In the case of cyclic nitroxides, five-membered ring systems have been shown to cleave more slowly than six-membered rings, and acyclic nitroxides with t-butyl moieties as their R groups cleave fastest of all. This difference in the rate of cleavage was determined to result not from a difference in C-O bond lengths, but from a difference in the C-O-N bond angle in the alkoxyamine. The smaller the bond angle, the greater the steric interaction between the nitroxide and the alkyl fragment, and the more easily the initiator species breaks apart.
Steric bulk
The efficiency of polymerization increases more and more with increased steric bulk of the nitroxide, up to a point. TEMPO ((2,2,6,6-Tetramethylpiperidin-1-yl)oxyl) is capable of inducing the polymerization of styrene and styrene derivatives fairly easily, but is not sufficiently labile to induce polymerization of butyl acrylate under most conditions. TEMPO derivatives with even bulkier groups at the positions α to N have a rate of homolysis great enough to induce NMP of butyl acrylate, and the bulkier the α groups, the faster polymerization occurs. This indicates that the steric bulk of the nitroxide fragment can be a good indicator of the strength of an alkoxyamine initiator, at least up to a point. Beyond that point, the equilibrium between homolysis and reformation favors the radical form to the extent that recombination to re-form an alkoxyamine over the course of NMP occurs too slowly to maintain control of chain length.
Preparation methods
Because TEMPO, which is commercially available, is a sufficient nitroxide mediator for the synthesis of polystyrene derivatives, the preparation of alkoxyamine initiators for NMP of copolymers is in many cases a matter of attaching a nitroxide group (TEMPO) to a specifically synthesized alkyl fragment. Several methods have been reported to achieve this transformation.
Jacobsen's catalyst
Jacobsen's catalyst is a manganese-based catalyst commonly used for the stereoselective epoxidation of alkenes. This epoxidation proceeds by a radical addition mechanism, which can be taken advantage of by introducing the radical TEMPO group into the reaction mixture. After treatment with a mild reducing agent such as sodium borohydride, this yields the product of a Markovnikov addition of nitroxide to the alkene. Jacobsen's catalyst is fairly mild, and a wide variety of functionalities on the alkene substrate can be tolerated. Practical yields are not necessarily as high as those reported by Dao et al., however.
Hydrazine
An alternative method is to react a substrate with a C-Br bond at the desired location of the nitroxide with hydrazine, generating an alkyl-substituted hydrazine which is then exposed to a nitroxide radical and a mild oxidizing agent such as lead dioxide. This generates a carbon-centered radical which couples with the nitroxide to generate the desired alkoxyamine. This method has the disadvantage of being relatively inefficient for some species, as well as the inherent danger of having to work with extremely toxic hydrazine and the inconvenience of having to run reactions in an inert atmosphere.
Treatment of aldehydes with hydrogen peroxide
Yet another published alkoxyamine synthesis involves treatment of aldehydes with hydrogen peroxide, which adds to the carbonyl group. The resulting species rearranges in situ in the presence of CuCl, forming formic acid and the desired alkyl radical, which couples with TEMPO to produce the target alkoxyamine. The reaction appears to give fairly good yields and tolerates a variety of functional groups in the alkyl chain.
Electrophilic bromination and nucleophilic attack
A synthesis has been described by Moon and Kang consisting of a one-electron reduction of a nitroxide radical by metallic sodium to yield a nucleophilic nitroxide. The nitroxide nucleophile is then added to an appropriate alkyl bromide, yielding the alkoxyamine by a simple SN2 reaction. This technique has the advantage of requiring only the appropriate alkyl bromide to be synthesized, without the inconvenient reaction conditions and extremely hazardous reagents of Braso et al.'s method.
References
Polymerization reactions | Nitroxide-mediated radical polymerization | [
"Chemistry",
"Materials_science"
] | 2,064 | [
"Polymerization reactions",
"Polymer chemistry"
] |
5,078,656 | https://en.wikipedia.org/wiki/Spackling%20paste | Spackling paste or spackle is a putty used to fill holes, small cracks, and other minor surface defects in wood, drywall, and plaster. Typically, spackling is composed of gypsum plaster from hydrated calcium sulfate and glue.
Comparison with joint compound
Spackling paste is often compared and contrasted with joint compound, as both look similar and serve the similar purpose of filling in low spots in walls and ceilings. The chief differences are that spackling paste typically dries faster, shrinks less during drying, and is meant for smaller repairs, not for a whole room or house. It is not uncommon for the general public to call any of these products "spackle", but tradespersons usually specify joint compound (drywall mud) when that is specifically meant.
Spackle trademark
Spackle is an abandoned trademark of the Muralo Company, located in Bayonne, New Jersey. Muralo's product, a dry powder to be mixed with water by the user to form a putty or paste, was brought to market in 1927, then patented and trademarked in 1928. The term spackle has since become a genericized trademark applied in the United States to a variety of household hole-filling products.
The first written appearance of the generic use of the word spackle was around 1940. The product name was likely derived from the German word , meaning "putty knife" or "filler." Other possible origins include Russian (tr. ; to fill holes with putty or caulk), Polish (spatula or putty knife), and Yiddish (to fill in small holes in plaster), all of which are likely derived from German.
Polyfilla
In the UK, Ireland, South Africa, Australia, and Canada, the brand "Polyfilla", multipurpose filler, is used as a generic term for spackling paste, even though it differs from spackle in being cellulose based. The manufacturers claim that it has an advantage over spackle in that it does not shrink or crack.
See also
Caulking
Putty
Wood putty
Home repair
Joint compound
Plastering
References
External links
Official Website
Home improvement
Building materials
Plastering | Spackling paste | [
"Physics",
"Chemistry",
"Engineering"
] | 446 | [
"Building engineering",
"Materials stubs",
"Coatings",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
5,080,727 | https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20quantum%20computer | Nuclear magnetic resonance quantum computing (NMRQC) is one of the several proposed approaches for constructing a quantum computer, that uses the spin states of nuclei within molecules as qubits. The quantum states are probed through the nuclear magnetic resonances, allowing the system to be implemented as a variation of nuclear magnetic resonance spectroscopy. NMR differs from other implementations of quantum computers in that it uses an ensemble of systems, in this case molecules, rather than a single pure state.
Initially the approach was to use the spin properties of atoms of particular molecules in a liquid sample as qubits - this is known as liquid state NMR (LSNMR). This approach has since been superseded by solid state NMR (SSNMR) as a means of quantum computation.
Liquid state NMR
The ideal picture of liquid state NMR (LSNMR) quantum information processing (QIP) is based on a molecule in which some of its atoms' nuclei behave as spin- systems. Depending on which nuclei we are considering, they will have different energy levels and different interactions with their neighbours, and so we can treat them as distinguishable qubits. In this system we tend to consider the inter-atomic bonds as the source of interactions between qubits and exploit these spin-spin interactions to perform 2-qubit gates such as CNOTs that are necessary for universal quantum computation. In addition to the spin-spin interactions native to the molecule, an external magnetic field can be applied (in NMR laboratories), and these fields impose single-qubit gates. By exploiting the fact that different spins will experience different local fields, we have control over the individual spins.
The picture described above is far from realistic, since we are treating a single molecule. NMR is performed on an ensemble of molecules, usually with as many as 10^15 molecules. This introduces complications to the model, one of which is the introduction of decoherence. In particular we have the problem of an open quantum system interacting with a macroscopic number of particles near thermal equilibrium (~mK to ~300 K). This has led to the development of decoherence-suppression techniques that have spread to other disciplines such as trapped ions. The other significant issue with regard to working close to thermal equilibrium is the mixedness of the state. This required the introduction of ensemble quantum processing, whose principal limitation is that as we introduce more logical qubits into our system, we require larger samples in order to attain discernible signals during measurement.
Solid state NMR
Solid state NMR (SSNMR), unlike LSNMR, uses a solid-state sample, for example a nitrogen-vacancy diamond lattice rather than a liquid sample. This has many advantages, such as the absence of molecular-diffusion decoherence, the ability to reach temperatures low enough to suppress phonon decoherence, and a greater variety of control operations, which allow one of the major problems of LSNMR, initialisation, to be overcome. Moreover, as the qubits in a crystal structure can be localized precisely, each qubit can be measured individually, instead of the ensemble measurement used in LSNMR.
History
The use of nuclear spins for quantum computing was first discussed by Seth Lloyd and by David DiVincenzo.
Manipulation of nuclear spins for quantum computing using liquid state NMR was introduced independently by Cory, Fahmy and Havel and Gershenfeld and Chuang in 1997. Some early success was obtained in performing quantum algorithms in NMR systems due to the relative maturity of NMR technology. For instance, in 2001 researchers at IBM reported the successful implementation of Shor's algorithm in a 7-qubit NMR quantum computer. However, even from the early days, it was recognized that NMR quantum computers would never be very useful due to the poor scaling of the signal-to-noise ratio in such systems. More recent work, particularly by Caves and others, shows that all experiments in liquid state bulk ensemble NMR quantum computing to date do not possess quantum entanglement, thought to be required for quantum computation. Hence NMR quantum computing experiments are likely to have been only classical simulations of a quantum computer.
Mathematical representation
The ensemble is initialized to be the thermal equilibrium state (see quantum statistical mechanics). In mathematical parlance, this state is given by the density matrix:
where H is the Hamiltonian matrix of an individual molecule and
where is the Boltzmann constant and the temperature. That the initial state in NMR quantum computing is in thermal equilibrium is one of the main differences compared to other quantum computing techniques, where they are initialized in a pure state. Nevertheless, suitable mixed states are capable of reflecting quantum dynamics, which led Gershenfeld and Chuang to term them "pseudo-pure states".
Operations are performed on the ensemble through radio frequency (RF) pulses applied perpendicular to a strong, static magnetic field, created by a very large magnet. See nuclear magnetic resonance.
Consider applying a magnetic field along the z axis, fixing this as the principal quantization axis, on a liquid sample. The Hamiltonian for a single spin would be given by the Zeeman or chemical shift term:
where is the operator for the z component of the nuclear angular momentum, and is the resonance frequency of the spin, which is proportional to the applied magnetic field.
Considering the molecules in the liquid sample to contain two spin- nuclei, the system Hamiltonian will have two chemical shift terms and a dipole coupling term:
Control of a spin system can be realized by means of selective RF pulses applied perpendicular to the quantization axis. In the case of a two-spin system as described above, we can distinguish two types of pulses: "soft" or spin-selective pulses, whose frequency range encompasses one of the resonant frequencies only and which therefore affect only that spin; and "hard" or nonselective pulses, whose frequency range is broad enough to contain both resonant frequencies and which therefore couple to both spins. For detailed examples of the effects of pulses on such a spin system, the reader is referred to Section 2 of work by Cory et al.
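A minimal numerical sketch of these ingredients, assuming a weak-coupling (secular) Iz·Iz form for the spin-spin term and purely illustrative frequencies (none of the numbers are taken from the article):

```python
import numpy as np

# Two-spin-1/2 Hamiltonian (two chemical-shift terms plus a secular coupling)
# and the thermal-equilibrium density matrix rho = exp(-H/kT)/Z.
hbar = 1.054571817e-34
kB = 1.380649e-23

Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)   # z angular momentum / hbar
I2 = np.eye(2, dtype=complex)

w1 = 2 * np.pi * 500.0e6      # resonance frequency of spin 1, rad/s (illustrative)
w2 = 2 * np.pi * 125.0e6      # resonance frequency of spin 2, rad/s (illustrative)
J12 = 2 * np.pi * 200.0       # coupling strength, rad/s (illustrative)

H = hbar * (w1 * np.kron(Iz, I2) + w2 * np.kron(I2, Iz)
            + J12 * np.kron(Iz, Iz))                    # diagonal in this basis

T = 300.0                                               # room temperature, K
boltz = np.diag(np.exp(-np.diag(H).real / (kB * T)))    # exp(-H/kT), H is diagonal
rho = boltz / np.trace(boltz)

print("populations:", np.round(np.diag(rho).real, 6))
print("polarization of spin 1:", np.trace(rho @ np.kron(2 * Iz, I2)).real)
```

The populations differ from the uniform value 1/4 only at roughly the 10^-5 level at room temperature, which is the highly mixed thermal starting point discussed above.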
See also
Kane quantum computer
References
Quantum information science
Nuclear magnetic resonance | Nuclear magnetic resonance quantum computer | [
"Physics",
"Chemistry"
] | 1,262 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
5,081,444 | https://en.wikipedia.org/wiki/Extraordinary%20optical%20transmission | Extraordinary optical transmission (EOT) is the phenomenon of greatly enhanced transmission of light through a subwavelength aperture in an otherwise opaque metallic film which has been patterned with a regularly repeating periodic structure. Generally when light of a certain wavelength falls on a subwavelength aperture, it is diffracted isotropically in all directions evenly, with minimal far-field transmission. This is the understanding from classical aperture theory as described by Bethe. In EOT however, the regularly repeating structure enables much higher transmission efficiency to occur, up to several orders of magnitude greater than that predicted by classical aperture theory. It was first described in 1998.
This phenomenon, which was fully analyzed with a microscopic scattering model, is partly attributed to the presence of surface plasmon resonances and constructive interference. A surface plasmon (SP) is a collective excitation of the electrons at the junction between a conductor and an insulator, and is one of a series of interactions between light and a metal surface collectively called plasmonics.
Currently, there is experimental evidence of EOT outside the optical range. Analytical approaches also predict EOT on perforated plates with a perfect-conductor model. Holes can somewhat emulate plasmons in other regions of the electromagnetic spectrum where plasmons do not exist. The plasmonic contribution is therefore a very particular peculiarity of the EOT resonance and should not be taken as the main contribution to the phenomenon. More recent work has shown a strong contribution from overlapping evanescent wave coupling, which explains why surface plasmon resonance enhances the EOT effect on both sides of a metallic film at optical frequencies, and also accounts for the terahertz-range transmission.
Simple analytical explanations of this phenomenon have been elaborated, emphasizing the similarity between arrays of particles and arrays of holes, and establishing that the phenomenon is dominated by diffraction.
Applications
EOT is expected to play an important role in the creation of components of efficient photonic integrated circuits (PICs). Photonic integrated circuits are analogous to electronic circuits but based upon photons instead of electrons.
One of the most ground-breaking results linked to EOT is the possibility to implement a Left-Handed Metamaterial (LHM) by simply stacking hole arrays.
EOT-based chemical and biological sensing (for example, improving ELISA based antibody detection) is another major area of research. Much like in a traditional surface plasmon resonance sensor, the EOT efficiency varies with the wavelength of the incident light, and the value of the in-plane wavevector component. This can be exploited as a means of transducing chemical binding events by measuring a change in the local dielectric constant (due to binding of the target species) as a shift in the spectral location and/or intensity of the EOT peak. Variation of the hole geometry alters the spectral location of the EOT peak such that the chemical binding events can be optically detected at a desired wavelength. EOT-based sensing offers one key advantage over a Kretschmann-style SPR chemical sensor, that of being an inherently nanometer-micrometer scale device; it is therefore particularly amenable to miniaturization.
References
Quantum optics
Electromagnetism
Plasmonics
Metamaterials
Diffraction | Extraordinary optical transmission | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 671 | [
"Plasmonics",
"Electromagnetism",
"Physical phenomena",
"Spectrum (physical sciences)",
"Metamaterials",
"Quantum optics",
"Quantum mechanics",
"Surface science",
"Materials science",
"Diffraction",
"Crystallography",
"Fundamental interactions",
"Condensed matter physics",
"Nanotechnology"... |
5,081,545 | https://en.wikipedia.org/wiki/Electromechanical%20coupling%20coefficient | The electromechanical coupling coefficient is a numerical measure of the conversion efficiency between electrical and acoustic energy in piezoelectric materials.
Qualitatively the electromechanical coupling coefficient, k, can be described as:
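The expression itself is not reproduced in this text; a commonly quoted qualitative definition, given here as an assumption of what was intended, is

$$ k^{2} = \frac{\text{mechanical energy stored (converted from the electrical input)}}{\text{total electrical energy supplied}}, $$

with the reciprocal ratio (stored electrical energy over supplied mechanical energy) used for the converse effect.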
References
Vilnius University, Laboratory of Physical Acoustics
Dimensionless numbers of physics
Condensed matter physics
Electrical phenomena | Electromechanical coupling coefficient | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 67 | [
"Physical phenomena",
"Materials science stubs",
"Phases of matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
5,082,809 | https://en.wikipedia.org/wiki/Mutual%20standardisation | Mutual Standardisation is a term used within spatial epidemiology to refer to when ecological bias results as a consequence of adjusting disease rates for confounding at the area level but leaving the exposure unadjusted and vice versa. This bias is prevented by adjusting in the same way both the exposure and disease rates. This adjustment is rarely possible as it requires data on within-area distribution of the exposure and confounder variables. (Elliot, 2001)
See also
Outline of public health
References
Elliott, P., J. C. Wakefield, N. G. Best and D. J. Briggs (eds.) 2001. Spatial Epidemiology: Methods and Applications. Oxford University Press, Oxford.
Epidemiology | Mutual standardisation | [
"Environmental_science"
] | 150 | [
"Epidemiology",
"Environmental social science"
] |
5,084,075 | https://en.wikipedia.org/wiki/Titanium%28II%29%20chloride | Titanium(II) chloride is the chemical compound with the formula TiCl2. The black solid has been studied only moderately, probably because of its high reactivity. Ti(II) is a strong reducing agent: it has a high affinity for oxygen and reacts irreversibly with water to produce H2. The usual preparation is the thermal disproportionation of TiCl3 at 500 °C. The reaction is driven by the loss of volatile TiCl4:
2 TiCl3 → TiCl2 + TiCl4
The method is similar to that for the conversion of VCl3 into VCl2 and VCl4.
TiCl2 crystallizes as the layered CdI2 structure. Thus, the Ti(II) centers are octahedrally coordinated to six chloride ligands.
Derivatives
Molecular complexes are known such as TiCl2(chel)2, where chel is DMPE (CH3)2PCH2CH2P(CH3)2 and TMEDA ((CH3)2NCH2CH2N(CH3)2). Such species are prepared by reduction of related Ti(III) and Ti(IV) complexes.
Unusual electronic effects have been observed in these species: TiCl2[(CH3)2PCH2CH2P(CH3)2]2 is paramagnetic with a triplet ground state, but Ti(CH3)2[(CH3)2PCH2CH2P(CH3)2]2 is diamagnetic.
A solid-state derivative of TiCl2 is Na2TiCl4, which has been prepared by the reaction of Ti metal with TiCl3 in a NaCl flux. This species adopts a linear chain structure wherein again the Ti(II) centers are octahedral with terminal, axial halides.
References
Titanium halides
Chlorides
Titanium(II) compounds | Titanium(II) chloride | [
"Chemistry"
] | 398 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
6,697,465 | https://en.wikipedia.org/wiki/LNG%20storage%20tank | A liquefied natural gas storage tank or LNG storage tank is a specialized type of storage tank used for the storage of Liquefied Natural Gas. LNG storage tanks can be found in ground, above ground or in LNG carriers. The common characteristic of LNG Storage tanks is the ability to store LNG at the very low temperature of -162 °C (-260 °F). LNG storage tanks have double containers, where the inner contains LNG and the outer container contains insulation materials. The most common tank type is the full containment tank. Tanks vary greatly in size, depending on usage.
In-ground LNG tanks are also used; these are lined or unlined tanks beneath ground level. The low temperature of the LNG freezes the soil and provides effective containment. The tank is sealed with an aluminium alloy roof at ground level. Historically there have been problems with some unlined tanks, including the escape of LNG into fissures, the gradual expansion of the extent of the frozen ground, and ice heave, which have limited the operational capability of in-ground tanks. All piping connected to the LNG tanks, whether above ground or in-ground, is routed through the top of the vessel. This mitigates the loss of containment in the event of a piping breach.
During storage, heat leaking into the tank causes the pressure and temperature within it to rise. LNG is a cryogen, and is kept in its liquid state at very low temperatures. The temperature within the tank will remain constant if the pressure is kept constant by allowing the boil-off gas to escape from the tank. This is known as auto-refrigeration.
The world's largest above-ground tank (delivered in 2000) is the 180-million-liter full-containment type for Osaka Gas Co., Ltd. The world's largest tank overall (delivered in 2001) is the 200-million-liter membrane type for Toho Gas Co., Ltd.
See also
Liquefied natural gas terminal
References
Storage
Storage tanks | LNG storage tank | [
"Chemistry",
"Engineering"
] | 411 | [
"Chemical equipment",
"Storage tanks",
"Petroleum",
"Petroleum stubs"
] |
31,366,108 | https://en.wikipedia.org/wiki/Lubachevsky%E2%80%93Stillinger%20algorithm | Lubachevsky-Stillinger (compression) algorithm (LS algorithm, LSA, or LS protocol) is a numerical procedure suggested by F. H. Stillinger and Boris D. Lubachevsky that simulates or imitates a physical process of compressing an assembly of hard particles. As the LSA may need thousands of arithmetic operations even for a few particles, it is usually carried out on a computer.
Phenomenology
A physical process of compression often involves a contracting hard boundary of the container, such as a piston pressing against the particles. The LSA is able to simulate such a scenario. However, the LSA was originally introduced in the setting without a hard boundary where the virtual particles were "swelling" or expanding in a fixed, finite virtual volume with periodic boundary conditions. The absolute sizes of the particles were increasing but particle-to-particle relative sizes remained constant. In general, the LSA can handle an external compression and an internal particle expansion, both occurring simultaneously and possibly, but not necessarily, combined with a hard boundary. In addition, the boundary can be mobile.
In a final, compressed, or "jammed" state, some particles are not jammed, they are able to move within "cages" formed by their immobile, jammed neighbors and the hard boundary, if any. These free-to-move particles are not an artifact, or pre-designed, or target feature of the LSA, but rather a real phenomenon. The simulation revealed this phenomenon, somewhat unexpectedly for the authors of the LSA. Frank H. Stillinger coined the term "rattlers" for the free-to-move particles, because if one physically shakes a compressed bunch of hard particles, the rattlers will be rattling.
In the "pre-jammed" mode when the density of the configuration is low and when the particles are mobile, the compression and expansion can be stopped, if so desired. Then the LSA, in effect, would be simulating a granular flow. Various dynamics of the instantaneous collisions can be simulated such as: with or without a full restitution, with or without tangential friction. Differences in masses of the particles can be taken into account. It is also easy and sometimes proves useful to "fluidize" a jammed configuration, by decreasing the sizes of all or some of the particles. Another possible extension of the LSA is replacing the hard collision force potential (zero outside the particle, infinity at or inside) with a piece-wise constant force potential. The LSA thus modified would approximately simulate molecular dynamics with continuous
short range particle-particle force interaction. External force fields, such as gravitation, can be also introduced, as long as the inter-collision motion of each particle can be represented by a simple one-step calculation.
Using the LSA for spherical particles of different sizes and/or for jamming in a container of non-commensurate size proved to be a useful technique for generating and studying micro-structures formed under conditions of a crystallographic defect or geometrical frustration. It should be added that the original LS protocol was designed primarily for spheres of the same or different sizes.
Any deviation from the spherical (or circular in two dimensions) shape, even a simplest one, when spheres are replaced with ellipsoids (or ellipses in two dimensions), causes thus modified LSA to slow down substantially.
But as long as the shape is spherical, the LSA is able to handle particle assemblies of tens to hundreds of thousands on today's (2011) standard personal computers. Only very limited experience has been reported in using the LSA in dimensions higher than 3.
Implementation
The state of particle jamming is achieved via simulating a granular flow. The flow is rendered as a discrete event simulation, the events being particle-particle or particle-boundary collisions. Ideally, the calculations should be performed with infinite precision; the jamming process would then continue ad infinitum. In practice, the precision is finite, as is the available resolution of representing real numbers in computer memory, for example, double-precision resolution. The real calculations are stopped when the inter-collision runs of the non-rattler particles become smaller than an explicitly or implicitly specified small threshold. For example, it is useless to continue the calculations when inter-collision runs are smaller than the roundoff error.
The LSA is efficient in the sense that the events are processed essentially in an event-driven fashion, rather than in a time-driven fashion. This means almost no calculation is wasted on computing or maintaining the positions and velocities of the particles between the collisions. Among the event-driven algorithms intended for the same task of simulating granular flow, like, for example, the algorithm of D.C. Rapaport, the LSA is distinguished by a simpler data structure and data handling.
For any particle at any stage of the calculations, the LSA keeps a record of only two events: an old, already processed, committed event, which comprises the committed event time stamp and the particle state (including position and velocity), and, perhaps, the "partner", which could be another particle or a boundary identification, the one with which the particle collided in the past; and a new event proposed for future processing, with a similar set of parameters. The new event is not committed. The maximum of the committed old event times must never exceed the minimum of the non-committed new event times.
The next particle to be examined by the algorithm is the one with the current minimum of new event times. On examining the chosen particle, what was previously the new event is declared to be the old one and is committed, while the next new event is scheduled, with its new time stamp, new state, and new partner, if any. As the next new event for a particle is being set, some of the neighboring particles may update their non-committed new events to better account for the new information.
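The sketch below illustrates the flavour of such an event-driven compression in a deliberately simplified form: hard rods on a line segment whose radii grow while collisions are processed event by event. It is globally synchronized (every particle is advanced to each event time) rather than using the asynchronous per-particle committed/proposed records described above, the collision rule is one simple choice that guarantees separation of growing particles, and all parameter values are arbitrary:

```python
import numpy as np

# Simplified 1-D sketch of the Lubachevsky-Stillinger idea: hard rods on [0, L]
# whose common radius grows at rate g while collisions are handled event by event.
rng = np.random.default_rng(0)
L, n, g = 1.0, 6, 0.02                     # box length, rod count, growth rate
x = np.sort(rng.uniform(0.1, 0.9, n))      # rod centres (order is preserved in 1-D)
v = rng.normal(0.0, 0.1, n)                # initial velocities
r, t, events = 0.0, 0.0, 0                 # common radius, time, event counter

def next_event():
    """Return (dt, i, j) for the earliest contact; j = -1/-2 mark the walls."""
    best = (np.inf, None, None)
    if g - v[0] > 0:                                   # left wall closing in
        best = min(best, ((x[0] - r) / (g - v[0]), 0, -1))
    if v[-1] + g > 0:                                  # right wall closing in
        best = min(best, ((L - x[-1] - r) / (v[-1] + g), n - 1, -2))
    for i in range(n - 1):                             # neighbouring rods
        c = (v[i] - v[i + 1]) + 2 * g                  # closing speed incl. growth
        if c > 0:
            best = min(best, ((x[i + 1] - x[i] - 2 * r) / c, i, i + 1))
    return best

while 2 * n * r / L < 0.90 and events < 5000:
    dt, i, j = next_event()
    x += v * dt; r += g * dt; t += dt                  # advance everyone to the event
    if j == -1:   v[i] = 2 * g - v[i]                  # bounce off the left wall
    elif j == -2: v[i] = -v[i] - 2 * g                 # bounce off the right wall
    else:                                              # rod-rod collision: swap and
        v[i], v[j] = v[j] - 2 * g, v[i] + 2 * g        # push apart faster than growth
    events += 1

print(f"{events} events, t = {t:.3f}, packing fraction = {2 * n * r / L:.3f}")
```

As the packing fraction rises, the gaps shrink and the event rate climbs, which is the qualitative approach to jamming described in this section.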
As the calculations of the LSA progress, the collision rates of particles may, and usually do, increase. Still, the LSA successfully approaches the jamming state as long as those rates remain comparable among all the particles, except for the rattlers. (Rattlers experience consistently low collision rates; this property allows one to detect them.) However, it is possible for a few particles, even just a single particle, to experience a very high collision rate as the simulation approaches a certain simulated time. The rate will then increase without bound, out of proportion to the rates of collisions in the rest of the particle ensemble. If this happens, the simulation will be stuck in time; it won't be able to progress toward the state of jamming.
The stuck-in-time failure can also occur when simulating a granular flow without particle compression or expansion. This failure mode was recognized by the practitioners of granular flow simulations as an "inelastic collapse" because it often occurs in such simulations when the restitution coefficient in collisions is low (i.e. inelastic). The failure is not specific to only the LSA algorithm. Techniques to avoid the failure have been proposed.
History
The LSA was a by-product of an attempt to find a fair measure of speedup in parallel simulations. The Time Warp parallel simulation algorithm by David Jefferson was advanced as a method to simulate asynchronous spatial interactions of fighting units in combat models on a parallel computer. Colliding-particle models offered similar simulation tasks with spatial interactions of particles, but free of the details that are non-essential for exposing the simulation techniques. The speedup was presented as the ratio of the execution time on a uniprocessor over that on a multiprocessor, when executing the same parallel Time Warp algorithm. Boris D. Lubachevsky noticed that such a speedup assessment might be faulty, because executing a parallel algorithm for a task on a uniprocessor is not necessarily the fastest way to perform the task on such a machine. The LSA was created in an attempt to produce a faster uniprocessor simulation and hence to have a fairer assessment of the parallel speedup. Later on, a parallel simulation algorithm, different from the Time Warp, was also proposed that, when run on a uniprocessor, reduces to the LSA.
References
External links
LSA in action. A collection of animations by Alexander Donev
Source C++ codes of a version of the LSA in arbitrary dimensions
Volume fluctuation distribution in granular packs studied using the LSA
LSA generalized for particles of arbitrary shape
LSA used for production of representative volumes of microscale failures in packed granular materials
Computational physics
Simulation
Granularity of materials | Lubachevsky–Stillinger algorithm | [
"Physics",
"Chemistry"
] | 1,773 | [
"Computational physics",
"Materials",
"Particle technology",
"Granularity of materials",
"Matter"
] |
31,367,072 | https://en.wikipedia.org/wiki/Frog%20battery | A frog battery is an electrochemical battery consisting of a number of dead frogs (or sometimes live ones), which form the cells of the battery connected in a series arrangement. It is a kind of biobattery. It was used in early scientific investigations of electricity and academic demonstrations.
The principle behind the battery is the injury potential created in a muscle when it is damaged, although this was not fully understood in the 18th and 19th centuries; the potential was created incidentally by the dissection of the frog's muscles.
The frog battery is an example of a class of biobatteries which can be made from any number of animals. The general term for an example of this class is the muscular pile.
The first well-known frog battery was created by Carlo Matteucci in 1845, but there had been others before him. Matteucci also created batteries out of other animals, and Giovanni Aldini created a battery from ox heads.
Background
In the early days of electrical research, a common method of detecting electric current was by means of a frog's leg galvanoscope. A good supply of live frogs was kept to hand by the researcher ready to have their legs prepared for the galvanoscope. Frogs were therefore a convenient material to use in other experiments. They were small, easily handled, the legs were especially sensitive to electric current, and they carried on responding longer than other animal candidates for this role.
Preparation
It was usual to use the thighs of frogs for the battery construction. The legs of the frog were first skinned, then the lower leg was cut off at the knee joint and discarded. Damaging the muscle during this procedure would detract from the results. The thigh muscle was then cut in two transversely to produce two half-thighs. Only the lower, conical shaped piece was kept. The half-thighs were then laid on an insulator of varnished wood so arranged that the inside surface of one was in contact with the outside surface of the next, with the conical ends of the outside surface being pushed into the cavity of the cut surface. The ends of the pile were placed in cups of water sunk into the wood and formed the terminals of the battery.
The arrangement of inside surface connected to outside surface was on the basis of the incorrect theory that there was an electric current in muscles continually flowing from the inside to the outside. It is now known that the half-thighs were more successful at generating electricity because they had suffered the greatest injury to the muscle. This effect of increased electric potential due to injury is known as demarcation potential or injury potential.
Other constructions could also be used. For instance the complete rear legs could be used with the sciatic nerves exposed so that the nerve of one frog could be connected to the feet of the next. Whole frogs too could be used. Although it was more time-consuming to prepare the thigh muscles, most experimenters preferred to do this since it gave better results.
History
The first frog battery was constructed by Eusebio Valli in the 1790s with a chain of 10 frogs. Valli had difficulty understanding all of his own results; he followed Luigi Galvani in believing that animal electricity (or galvanic electricity) was a different phenomenon from metal-metal electricity (or voltaic electricity), even denying its existence. Alessandro Volta's theory was proved correct when he succeeded in constructing the voltaic pile without the use of any animal material. Because Valli found himself on the wrong side in this dispute, and refused to change his opinion despite the evidence, his work has become a bit of a backwater and his frog battery is little known and poorly documented.
Leopoldo Nobili built a frog battery in 1818 out of complete frog legs which he called a frog pile. He used this to investigate animal electricity but his experiments were strongly criticised by Volta who argued that the true source of electricity was dissimilar metals in the external circuit. According to Volta, fluids in the frog merely provided the electrolyte.
The first well-known frog battery was constructed by Carlo Matteucci which was described in a paper presented to the Royal Society in 1845 by Michael Faraday on his behalf. It later also appeared in the popular medical student physics textbook Elements of Natural Philosophy by Golding Bird. Matteucci constructed his battery from a pile of 12 to 14 half-thighs of frogs. Despite the misguided theory behind the half-thigh battery, Matteucci's frog battery was nevertheless sufficiently powerful to decompose potassium iodide. Matteucci aimed with this apparatus to address Volta's criticism of Nobili by constructing a circuit, as far as possible, entirely out of biological material and hence prove the existence of animal electricity. Matteucci also studied the effects vacuum, various gases, and poisons had on the frog battery, concluding that in many cases its operation was not affected even when the substance would be toxic or lethal to the living animal.
Frogs were not the only creatures to be press-ganged into serving as battery components. In 1803, Giovanni Aldini demonstrated that electricity could be obtained from an ox head from a freshly killed animal. A frog galvanoscope connected between the ox's tongue and ear showed a reaction when the circuit was completed through the experimenter's own body. A greater reaction was obtained when Aldini joined two or three heads together into a battery. Later, in the 1840s, Matteucci also created eel batteries, pigeon batteries and rabbit batteries. Further, he created a battery out of living pigeons by connecting a wound made on the breast of one pigeon to the body of the next. Matteucci states that this design was based on a pre-existing battery of living frogs.
References
Bibliography
Bird, Golding Elements of Natural Philosophy, London: John Churchill 1848.
Bird, Golding Lectures on Electricity and Galvanism, London: Longman, Brown, Green, & Longmans 1849.
Clarke, Edwin; Jacyna, L. S. Nineteenth-Century Origins of Neuroscientific Concepts, University of California Press, 1992 .
Clarke, Edwin; O'Malley, Charles Donald The Human Brain and Spinal Cord: a historical study illustrated by writings from antiquity to the twentieth century, Norman Publishing, 1996 .
Hellman, Hal Great Feuds in Medicine, John Wiley and Sons, 2001
Matteucci, Carlo "The muscular current" Philosophical Transactions, pp. 283–295, 1845.
Matteucci, Carlo "Matteucci's lectures on living beings", American Journal of Science and Arts, series 2, vol.5, pp. 390–398, May 1848.
Kipnis, Nahum "Changing a theory: the case of Volta's contact electricity", Nuova Voltiana, vol.5 (2003), pp. 143–162, Università degli studi di Pavia, 2003 .
Rutter, J. O. N. Human Electricity, J.W. Parker and Son, 1854.
Valli, Eusebio; Moorcroft W. (trans.), Experiments on Animal Electricity, With Their Application to Physiology, London: Printed for J. Johnson, 1793.
Battery types
Bioelectrochemistry
History of technology | Frog battery | [
"Chemistry",
"Technology"
] | 1,473 | [
"Bioelectrochemistry",
"Science and technology studies",
"Electrochemistry",
"History of technology",
"History of science and technology"
] |
31,367,574 | https://en.wikipedia.org/wiki/Hellenic%20Aeronautical%20Engineers%20Society | The Hellenic Aeronautical Engineers Society (HAES) (Greek: Σύλλογος Ελλήνων Αεροναυπηγών) is the society of professional licensed Aeronautical Engineers in Greece.
The purpose of HAES is to provide a basis where Greek-licensed Aeronautical Engineers can fraternize and coordinate scientific and professional efforts to assist the state and support, develop, and promote aviation and space activities.
HAES was first registered in 1975 as a society (noncommercial) and it is a branch organization of the Hellenic Technical (Engineering) Chambers (Τεχνικό Επιμελητήριο Ελλάδας) in Greece. The Society is a member and the national representative of the International Council of the Aeronautical Sciences (ICAS), the Council of European Aerospace Societies (CEAS), and the European Federation of National Engineering Associations (FEANI).
The society numbers approximately 250 members, almost all of whom hold university degrees in Aeronautical Engineering from countries outside Greece (mostly the United Kingdom, Italy, Germany, France, and the United States), because such academic programs were practically non-existent in Greece until recently.
The main requirement for one to become a member is to have a Professional License in Aeronautical Engineering from the Hellenic Technical Chambers in Greece and be in good standing with the chamber.
References
External links
Official website
Engineering societies based in Greece
Aerospace engineering organizations | Hellenic Aeronautical Engineers Society | [
"Engineering"
] | 300 | [
"Aeronautics organizations",
"Aerospace engineering organizations",
"Aerospace engineering"
] |
31,370,522 | https://en.wikipedia.org/wiki/Tsuji%E2%80%93Trost%20reaction | The Tsuji–Trost reaction (also called the Trost allylic alkylation or allylic alkylation) is a palladium-catalysed substitution reaction involving a substrate that contains a leaving group in an allylic position. The palladium catalyst first coordinates with the allyl group and then undergoes oxidative addition, forming the π-allyl complex. This allyl complex can then be attacked by a nucleophile, resulting in the substituted product.
This work was first pioneered by Jirō Tsuji in 1965 and, later, adapted by Barry Trost in 1973 with the introduction of phosphine ligands.
The scope of this reaction has been expanded to many different carbon, nitrogen, and oxygen-based nucleophiles, many different leaving groups, many different phosphorus, nitrogen, and sulfur-based ligands, and many different metals (although palladium is still preferred).
The introduction of phosphine ligands led to improved reactivity and numerous asymmetric allylic alkylation strategies. Many of these strategies are driven by the advent of chiral ligands, which are often able to provide high enantioselectivity and high diastereoselectivity under mild conditions. This modification greatly expands the utility of this reaction for many different synthetic applications. The ability to form carbon-carbon, carbon-nitrogen, and carbon-oxygen bonds under these conditions, makes this reaction very appealing to the fields of both medicinal chemistry and natural product synthesis.
History
In 1962, Smidt published work on the palladium-catalysed oxidation of alkenes to carbonyl groups. In this work, it was determined that the palladium catalyst activated the alkene for the nucleophilic attack of hydroxide. Gaining insight from this work, Tsuji hypothesized that a similar activation could take place to form carbon-carbon bonds.
In 1965, Tsuji reported work that confirmed his hypothesis. By reacting an allylpalladium chloride dimer with the sodium salt of diethyl malonate, the group was able to form a mixture of monoalkylated and dialkylated product.
The scope of the reaction was expanded only gradually until Trost discovered the next big breakthrough in 1973. While attempting to synthesize acyclic sesquiterpene homologs, Trost ran into problems with the initial procedure and was not able to alkylate his substrates. These problems were overcome with the addition of triphenylphosphine to the reaction mixture.
These conditions were then tested out for other substrates and some led to "essentially instantaneous reaction at room temperature." Soon after, he developed a way to use these ligands for asymmetric synthesis. Not surprisingly, this spurred on many other investigations of this reaction and has led to the important role that this reaction now holds in synthetic chemistry.
Mechanism
Starting with a zerovalent palladium species and a substrate containing a leaving group in the allylic position, the Tsuji–Trost reaction proceeds through the catalytic cycle outlined below.
First, the palladium coordinates to the alkene, forming an η2-allyl-Pd0 π complex. The next step is oxidative addition, in which the leaving group is expelled with inversion of configuration and an η3-allyl-PdII complex is created (also called ionization). The nucleophile then adds to the allyl group, regenerating the η2-allyl-Pd0 complex. At the completion of the reaction, the palladium detaches from the alkene and can start again in the catalytic cycle.
"Hard" versus "soft" nucleophiles
The nucleophiles used are typically generated from precursors (pronucleophiles) in situ after their deprotonation with base. These nucleophiles are then subdivided into "hard" and "soft" nucleophiles using a paradigm for describing nucleophiles that largely rests on the pKa of their conjugate acids. "Hard" nucleophiles typically have conjugate acids with pKa greater than 25, while "soft" nucleophiles typically have conjugate acids with pKa less than 25.
This descriptor is important because of the impact these nucleophiles have on the stereoselectivity of the product. Stabilized or "soft" nucleophiles invert the stereochemistry of the π-allyl complex. This inversion in conjunction with the inversion in stereochemistry associated with the oxidative addition of palladium yields a net retention of stereochemistry. Unstabilized or "hard" nucleophiles, on the other hand, retain the stereochemistry of the π-allyl complex, resulting in a net inversion of stereochemistry.
This trend is explained by examining the mechanisms of nucleophilic attack. "Soft" nucleophiles attack the carbon of the allyl group, while "hard" nucleophiles attack the metal center, followed by reductive elimination.
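The pKa cut-off and the resulting net stereochemical outcomes described above can be summarized in a few lines of code. The following Python sketch is illustrative only (it is not from the source text), and the example pKa values are approximate literature figures.

```python
# Illustrative sketch of the "hard"/"soft" classification used in Tsuji-Trost chemistry.
# The ~25 cut-off and the retention/inversion outcomes follow the rules stated above;
# the example pKa values are approximate.

def classify_nucleophile(conjugate_acid_pka: float) -> str:
    """Return 'soft' for stabilized nucleophiles (conjugate-acid pKa < ~25), else 'hard'."""
    return "soft" if conjugate_acid_pka < 25 else "hard"

def net_stereochemistry(conjugate_acid_pka: float) -> str:
    """Soft nucleophiles attack the allyl carbon (a second inversion -> net retention);
    hard nucleophiles attack the metal and reductively eliminate (net inversion)."""
    kind = classify_nucleophile(conjugate_acid_pka)
    return "net retention" if kind == "soft" else "net inversion"

# Diethyl malonate (conjugate-acid pKa roughly 13) is a classic soft nucleophile:
print(classify_nucleophile(13), "->", net_stereochemistry(13))   # soft -> net retention
# A much more basic pronucleophile behaves as a hard nucleophile:
print(classify_nucleophile(35), "->", net_stereochemistry(35))   # hard -> net inversion
```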
Phosphine ligands
Phosphine ligands, such as triphenylphosphine or the Trost ligand, have been used to greatly expand the scope of the Tsuji–Trost reaction. These ligands can modulate the properties of the palladium catalyst such as steric bulk as well as the electronic properties. Importantly, these ligands can also instill chirality to the final product, making it possible for these reactions to be carried out asymmetrically as shown below.
Allylic asymmetric substitution
The enantioselective version of the Tsuji–Trost reaction is called the Trost asymmetric allylic alkylation (Trost AAA) or simply, asymmetric allylic alkylation (AAA). These reactions are often used in asymmetric synthesis. The reaction was originally developed with a palladium catalyst supported by the Trost ligand, although suitable conditions have greatly expanded since then.
Enantioselectivity can be imparted to the reaction during any of the steps aside from the decomplexation of the palladium from the alkene since the stereocenter is already set at that point. Five main ways have been conceptualized to take advantage of these steps and yield enantioselective reaction conditions.
These methods of enantiodiscrimination were previously reviewed by Trost:
Preferential ionization via enantioselective olefin complexation
Enantiotopic ionization of leaving groups
Attack at enantiotopic ends of the allyl complex
Enantioface exchange in the π-allyl complex
Differentiation of prochiral nucleophile faces
The favored method for enantiodiscrimination is largely dependent on the substrate of interest, and in some cases, the enantioselectivity may be influenced by several of these factors.
Scope
Nucleophiles
Many different nucleophiles have been reported to be effective for this reaction. Some of the most common nucleophiles include malonates, enolates, primary alkoxides, carboxylates, phenoxides, amines, azide, sulfonamides, imides, and sulfones.
Leaving groups
The scope of leaving groups has also been expanded to include a number of different leaving groups, although carbonates, phenols, phosphates, halides and carboxylates are the most widely used.
"Hard" and "soft" nucleophiles
Recent work has demonstrated that the scope of "soft" nucleophiles can be expanded to include some pronucleophiles that have pKa values much higher than ~25. Some of these "soft" nucleophiles have pKa values ranging all the way to 32, and even more basic pronucleophiles (pKa ~44) have been shown to act as soft nucleophiles with the addition of Lewis acids that help to facilitate deprotonation. The improved pKa range of "soft" nucleophiles is critical because these nucleophiles are the only ones that have been explored for enantioselective reactions until very recently (although non-enantioselective reactions of "hard" nucleophiles have been known for some time). By increasing the scope of pronucleophiles that act as "soft" nucleophiles, these substrates can also be incorporated into enantioselective reactions using previously reported and well characterized methods.
Ligands
Building on the reactivity of the triphenylphosphine ligand, the structure of ligands used for the Tsuji–Trost reaction quickly became more complex. Today, these ligands may contain phosphorus, sulfur, nitrogen or some combination of these elements, but most studies have concentrated on the mono- and diphosphine ligands. These ligands can be further classified based on the nature of their chirality, with some ligands containing central chirality on the phosphorus or carbon atoms, some containing biaryl axial chirality, and others containing planar chirality.
Diphosphine ligands with central chirality emerged as an effective type of ligand (particularly for asymmetric allylic alkylation procedures) with the Trost Ligand being one such example.
Phosphinooxazolines (PHOX) ligands have been employed in the AAA, particularly with carbon-based nucleophiles.
Additional substrates
The reaction substrate has also been extended to allenes. In this specific ring expansion the AAA reaction is also accompanied by a Wagner–Meerwein rearrangement:
Applications
Pharmaceutical/natural products synthesis
The ability to form carbon-carbon, carbon-nitrogen, and carbon-oxygen bonds enantioselectively under mild conditions makes the Trost asymmetric allylic alkylation extremely appealing for the synthesis of complex molecules.
An example of this reaction is the synthesis of an intermediate in the combined total synthesis of galantamine and morphine with 1 mol% [π-allylpalladium chloride dimer], 3 mol% (S,S) Trost ligand, and triethylamine in dichloromethane at room temperature. These conditions result in the formation of the (−)-enantiomer of the aryl ether in 72% chemical yield and 88% enantiomeric excess.
Another Tsuji–Trost reaction was used during the initial stages of the synthesis of (−)-neothiobinupharidine. This recent work demonstrates the ability of this reaction to give highly diastereoselective (10:1) and enantioselective (97.5:2.5) products from achiral starting material with only a small amount of catalyst (1%).
Palladium detection
Aside from the practical application of this reaction in medicinal chemistry and natural product synthesis, recent work has also used the Tsuji–Trost reaction to detect palladium in various systems. This detection system is based on a non-fluorescent fluorescein-derived sensor (longer-wavelength sensors have also recently been developed for other applications) that becomes fluorescent only in the presence of palladium or platinum.
This palladium/platinum sensing ability is driven by the Tsuji–Trost reaction. The sensor contains an allyl group with the fluorescein functioning as the leaving group. The π-allyl complex is formed and after a nucleophile attacks, the fluorescein is released, yielding a dramatic increase in fluorescence.
This simple, high-throughput method to detect palladium by monitoring fluorescence has been shown to be useful in monitoring palladium levels in metal ores, pharmaceutical products, and even in living cells. With the ever-increasing popularity of palladium catalysis, this type of quick detection should be very useful in reducing the contamination of pharmaceutical products and preventing the pollution of the environment with palladium and platinum.
References
External links
Org. Synth. 1989, 67, 105
Org. Synth. 2009, 86, 47
Example of the Tsuji–Trost reaction in total synthesis: http://www.biocis.u-psud.fr/IMG/pdf/concise_total_synthesis_of_Minfiensine.pdf (the second reaction found on the website of the BioCIS team: http://www.biocis.u-psud.fr/spip.php?article332)
Organic reactions
Substitution reactions
Palladium
Name reactions
Allyl complexes | Tsuji–Trost reaction | [
"Chemistry"
] | 2,625 | [
"Name reactions",
"Organic reactions"
] |
20,198,857 | https://en.wikipedia.org/wiki/Thomas%E2%80%93Fermi%20model | The Thomas–Fermi (TF) model, named after Llewellyn Thomas and Enrico Fermi, is a quantum mechanical theory for the electronic structure of many-body systems developed semiclassically shortly after the introduction of the Schrödinger equation. It stands separate from wave function theory as being formulated in terms of the electronic density alone and as such is viewed as a precursor to modern density functional theory. The Thomas–Fermi model is correct only in the limit of an infinite nuclear charge. Using the approximation for realistic systems yields poor quantitative predictions, even failing to reproduce some general features of the density such as shell structure in atoms and Friedel oscillations in solids. It has, however, found modern applications in many fields through the ability to extract qualitative trends analytically and the ease with which the model can be solved. The kinetic energy expression of Thomas–Fermi theory is also used as a component in more sophisticated density approximations to the kinetic energy within modern orbital-free density functional theory.
Working independently, Thomas and Fermi used this statistical model in 1927 to approximate the distribution of electrons in an atom. Although electrons are distributed nonuniformly in an atom, an approximation was made that the electrons are distributed uniformly in each small volume element ΔV (i.e. locally) but the electron density can still vary from one small volume element to the next.
Kinetic energy
For a small volume element ΔV, and for the atom in its ground state, we can fill out a spherical momentum space volume VF up to the Fermi momentum pF, and thus,
where r is the position vector of a point in ΔV.
The corresponding phase space volume is
The electrons in ΔVph are distributed uniformly with two electrons per h3 of this phase space volume, where h is the Planck constant. Then the number of electrons in ΔVph is
The number of electrons in ΔV is
where n(r) is the electron number density.
Equating the number of electrons in ΔV to that in ΔVph gives
The fraction of electrons at r that have momentum between p and p + dp is
Using the classical expression for the kinetic energy of an electron with mass me, the kinetic energy per unit volume at r for the electrons of the atom is
where a previous expression relating n(r) to pF(r) has been used and
Integrating the kinetic energy per unit volume over all space, results in the total kinetic energy of the electrons,
This result shows that the total kinetic energy of the electrons can be expressed in terms of only the spatially varying electron density according to the Thomas–Fermi model. As such, they were able to calculate the energy of an atom using this expression for the kinetic energy combined with the classical expressions for the nuclear-electron and electron-electron interactions (which can both also be represented in terms of the electron density).
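For reference, the result this subsection arrives at is usually written as the Thomas–Fermi kinetic-energy functional. The expression below is the conventional textbook form, using the electron mass me and the Planck constant h already introduced; it is stated here as a summary rather than quoted from the article.

```latex
% Thomas-Fermi kinetic-energy functional (conventional textbook form)
T_{\mathrm{TF}}[n] \;=\; C_F \int n^{5/3}(\mathbf{r})\,\mathrm{d}^{3}r ,
\qquad
C_F \;=\; \frac{3h^{2}}{40\,m_e}\left(\frac{3}{\pi}\right)^{2/3}
      \;=\; \frac{3}{10}\,\bigl(3\pi^{2}\bigr)^{2/3}\,\frac{\hbar^{2}}{m_e}.
```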
Potential energies
The potential energy of an atom's electrons, due to the electric attraction of the positively charged nucleus is
where VN(r) is the potential energy of an electron at r that is due to the electric field of the nucleus.
For the case of a nucleus centered at the origin with charge Ze, where Z is a positive integer and e is the elementary charge,
The potential energy of the electrons due to their mutual electric repulsion is,
Total energy
The total energy of the electrons is the sum of their kinetic and potential energies,
Thomas–Fermi equation
In order to minimize the energy E while keeping the number of electrons constant, we add a Lagrange multiplier term of the form
−μ (∫ n(r) d3r − N), with N the total number of electrons,
to E. Letting the variation with respect to n vanish then gives the equation
which must hold wherever n(r) is nonzero. If we define the total potential by
then
If the nucleus is assumed to be a point with charge Ze at the origin, then the density and the total potential will both be functions only of the radius r, and we can define φ(r) by
where a0 is the Bohr radius. From using the above equations together with Gauss's law, φ(r) can be seen to satisfy the Thomas–Fermi equation
For chemical potential μ = 0, this is a model of a neutral atom, with an infinite charge cloud where n(r) is everywhere nonzero and the overall charge is zero, while for μ < 0, it is a model of a positive ion, with a finite charge cloud and positive overall charge. The edge of the cloud is where φ(r) = 0. For μ > 0, it can be interpreted as a model of a compressed atom, so that negative charge is squeezed into a smaller space. In this case the atom ends at the radius r where dφ/dr = φ/r.
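In its usual dimensionless form, with x = r/b and b ≈ 0.8853 a0 Z−1/3, the Thomas–Fermi equation reads φ''(x) = φ(x)3/2/√x with φ(0) = 1, and the neutral atom corresponds to φ(x) → 0 as x → ∞. The Python sketch below is a minimal shooting-method solution of this boundary-value problem; the step size, integration range and bisection bracket are illustrative choices rather than values from the article, and the initial slope it converges to, φ'(0) ≈ −1.588, is the standard numerical result.

```python
# Minimal shooting-method sketch for the dimensionless Thomas-Fermi equation
#   phi''(x) = phi(x)**1.5 / sqrt(x),  phi(0) = 1,  phi(x) -> 0 as x -> inf  (neutral atom).
# Step size, integration range and bisection bracket are illustrative choices.
import math

def outcome(slope, x_max=25.0, h=1e-3):
    """Integrate outward with phi'(0) = slope (classical RK4).
    Return -1 if phi crosses zero (slope too negative),
           +1 if phi turns upward (slope not negative enough)."""
    def f(x, phi, dphi):
        return dphi, max(phi, 0.0) ** 1.5 / math.sqrt(x)
    # Start a little away from x = 0 using the series phi = 1 + B*x + (4/3)*x**1.5 + ...
    x = 1e-2
    phi = 1.0 + slope * x + (4.0 / 3.0) * x ** 1.5
    dphi = slope + 2.0 * math.sqrt(x)
    while x < x_max:
        k1p, k1d = f(x, phi, dphi)
        k2p, k2d = f(x + h / 2, phi + h / 2 * k1p, dphi + h / 2 * k1d)
        k3p, k3d = f(x + h / 2, phi + h / 2 * k2p, dphi + h / 2 * k2d)
        k4p, k4d = f(x + h, phi + h * k3p, dphi + h * k3d)
        phi += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        dphi += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        x += h
        if phi <= 0.0:
            return -1
        if dphi >= 0.0:
            return +1
    return +1

lo, hi = -2.0, -1.0              # bracket for the initial slope phi'(0)
for _ in range(40):              # bisection on the shooting parameter
    mid = 0.5 * (lo + hi)
    if outcome(mid) > 0:
        hi = mid                 # phi turned upward: slope must be more negative
    else:
        lo = mid                 # phi hit zero: slope must be less negative
print("phi'(0) ~", 0.5 * (lo + hi))   # converges near the standard value, about -1.588
```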
Inaccuracies and improvements
Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting expression for the kinetic energy is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli exclusion principle. A term for the exchange energy was added by Dirac in 1930, which significantly improved its accuracy.
However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and due to the complete neglect of electron correlation.
In 1962, Edward Teller showed that Thomas–Fermi theory cannot describe molecular bonding – the energy of any molecule calculated with TF theory is higher than the sum of the energies of the constituent atoms. More generally, the total energy of a molecule decreases when the bond lengths are uniformly increased. This can be overcome by improving the expression for the kinetic energy.
One notable historical improvement to the Thomas–Fermi kinetic energy is the Weizsäcker (1935) correction,
which is the other notable building block of orbital-free density functional theory. The problem with the inaccurate modelling of the kinetic energy in the Thomas–Fermi model, as well as other orbital-free density functionals, is circumvented in Kohn–Sham density functional theory with a fictitious system of non-interacting electrons whose kinetic energy expression is known.
See also
Thomas–Fermi screening
Further reading
R. P. Feynman, N. Metropolis, and E. Teller. "Equations of State of Elements Based on the Generalized Thomas-Fermi Theory". Physical Review 75, #10 (May 15, 1949), pp. 1561–1573.
References
Atomic physics
Density functional theory | Thomas–Fermi model | [
"Physics",
"Chemistry"
] | 1,293 | [
"Density functional theory",
"Quantum chemistry",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
20,206,325 | https://en.wikipedia.org/wiki/Dose%20area%20product | Dose area product (DAP) is a quantity used in assessing the radiation risk from diagnostic X-ray radiography examinations and interventional procedures, like angiography. It is defined as the absorbed dose multiplied by the area irradiated, expressed in gray-centimetres squared (Gy·cm2 – sometimes the prefixed units dGy·cm2, mGy·cm2 or cGy·cm2 are also used). Gray (Gy) is the SI unit of absorbed dose of ionizing radiation, while the milligray (mGy) is its subunit, numerically equivalent to the millisievert (mSv) as used to quantify equivalent and effective doses for gamma (γ) and X-rays.
Manufacturers of DAP meters usually calibrate them in terms of absorbed dose to air. DAP reflects not only the dose within the radiation field but also the area of tissue irradiated. Therefore, it may be a better indicator of the overall risk of inducing cancer than the dose within the field. It also has the advantage of being easily measured, with the permanent installation of a DAP meter on the X-ray set.
Due to the divergence of a beam emitted from a "point source", the area irradiated (A) increases with the square of distance from the source (A ∝ d2), while radiation intensity (I) decreases according to the inverse square of distance (I ∝ 1/d2). Consequently, the product of intensity and area, and therefore DAP, is independent of distance from the source.
DICOM "X-Ray Acquisition Dose Module" metadata within each medical imaging study often includes various DAP and dose length product (DLP) parameters.
How DAP is measured
An ionization chamber is placed beyond the X-ray collimators and must intercept the entire X-ray field for an accurate reading. Different parameters of the X-ray set, such as peak voltage (kVp), X-ray tube current (mA), exposure time, or the area of the field, can also be changed.
For example, a 5 cm × 5 cm X-ray field with an entrance dose of 1 mGy will yield a 25 mGy·cm2 DAP value. When the field is increased to 10 cm × 10 cm with the same entrance dose, the DAP increases to 100 mGy·cm2, which is four times the previous value.
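The arithmetic of this example, together with the distance-independence argument from the previous section, can be checked with a few lines of Python. This is an illustrative sketch only; the field sizes and the 1 mGy entrance dose are the values used in the example above.

```python
def dap(entrance_dose_mgy: float, width_cm: float, height_cm: float) -> float:
    """Dose-area product in mGy*cm^2: absorbed dose multiplied by the irradiated area."""
    return entrance_dose_mgy * width_cm * height_cm

print(dap(1.0, 5, 5))     # 25.0 mGy*cm^2 for a 5 cm x 5 cm field at 1 mGy
print(dap(1.0, 10, 10))   # 100.0 mGy*cm^2 -> four times the previous value

# Distance independence: doubling the distance quarters the dose (inverse square law)
# but quadruples the irradiated area, so the product is unchanged.
print(dap(1.0 / 4, 10, 10) == dap(1.0, 5, 5))   # True
```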
Kerma Area Product
Kerma area product (KAP) is a related quantity, which for all practical radiation protection purposes is equal to dose area product. However, strictly speaking DAP = KAP × (1 − g), where g is the fraction of energy of liberated charged particles that is lost in radiative processes in the material and the dose is expressed in absorbed dose to air. The value of g for diagnostic X-rays is only a fraction of a percent.
Adult coronary angiography and PCI procedures expose patients to an average DAP in the range of 20 to 106 Gy·cm2 and 44 to 143 Gy·cm2 respectively.
See also
Effective dose
Equivalent dose
Computed tomography dose index
References
Radiology
Medical terminology
Nuclear medicine
Medical physics | Dose area product | [
"Physics"
] | 636 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
20,206,762 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20angiotensin%20receptor%20blockers | The angiotensin receptor blockers (ARBs), also called angiotensin (AT1) receptor antagonists or sartans, are a group of antihypertensive drugs that act by blocking the effects of the hormone angiotensin II (Ang II) in the body, thereby lowering blood pressure. Their structure is similar to Ang II and they bind to Ang II receptors as inhibitors.
ARBs are widely used drugs in the clinical setting today, their main indications being mild to moderate hypertension, chronic heart failure, secondary stroke prevention and diabetic nephropathy.
The discovery and development of ARBs is a demonstrative example of modern rational drug design and how design can be used to gain further knowledge of physiological systems, in this case, the characterization of the subtypes of Ang II receptors.
History
In 1898, the physiologist Robert Tigerstedt and his student, Per Bergman, experimented with rabbits by injecting them with kidney extracts. Their results suggested the kidneys produced a protein, which they named renin, that caused a rise in blood pressure. In the 1930s, Goldblatt conducted experiments where he constricted the renal blood flow in dogs; he found the ischaemic kidneys did in fact secrete a chemical that caused vasoconstriction. In 1939, renin was found not to cause the rise in blood pressure, but was an enzyme which catalyzed the formation of the substances that were responsible, namely, angiotensin I (Ang I) and Ang II.
In the 1970s, scientists first observed Ang II to harm the heart and kidneys, and individuals with high levels of renin activity in plasma were at increased risk of myocardial infarction and stroke.
With the introduction of angiotensin converting enzyme (ACE) inhibitors in the late 1970s it was confirmed that Ang II plays an important role in regulating blood pressure and electrolyte and fluid balance.
Before that attempts had been made to develop useful Ang II receptor antagonists and initially, the main focus was on angiotensin peptide analogues. Saralasin and other Ang II analogues were potent Ang II receptor blockers but the main problem was a lack of oral bioavailability.
In the early 1980s it was noted that a series of imidazole-5-acetic acid derivatives diminished blood pressure responses to Ang II in rats. Two compounds, S-8307 and S-8308, were later found to be highly specific and promising non-peptide Ang II receptor antagonists but using molecular modeling it was seen that their structures would have to mimic more closely the pharmacophore of Ang II. Structural modifications were made and the orally active, potent and selective nonpeptide AT1 receptor blocker losartan was developed. In 1995 losartan was approved for clinical use in the United States and since then six additional ARBs have been approved. These drugs are known for their excellent side-effects profiles, which clinical trials have shown to be similar to those of placebos.
The angiotensin II receptor
The actions of Ang II are mediated by angiotensin receptors, AT1 and AT2. These receptors are members of the G protein-coupled receptors family which are seven transmembrane helices, connected by interchanging extracellular and intracellular loops.
Each G protein-coupled receptor couples to a specific G-protein which leads to activation of a special effector system. AT1 receptors are for instance primarily coupled through the Gq/11 group of G-proteins.
Two more angiotensin receptors have been described, AT3 and AT4, but their role is still unknown.
Distribution in the body
AT1 receptors are mainly found in the heart, adrenal glands, brain, liver and kidneys. Their main role is to regulate blood pressure as well as fluid and electrolyte balance.
AT2 receptors are highly expressed in the developing fetus but they decline rapidly after birth. In the adult, AT2 receptors are present only at low levels and are mostly found in the heart, adrenal glands, uterus, ovaries, kidneys and brain.
Functions
Most of the known actions of Ang II are mediated through the AT1 receptors, for example vasoconstriction, aldosterone release, renal sodium reabsorption and vasopressin secretion. The AT2 receptor also takes part in regulation of blood pressure and renal function but mediates antagonistic effects compared to the AT1 receptor.
Binding pockets
Ang II binds to AT1 receptors via various binding sites. The primary binding site is at the extracellular region of the AT1 receptor where Ang II interacts with residues in the N-terminus of the AT1 receptor and its first and third extracellular loops. The transmembrane helices also contribute to the binding via the C-terminal carboxyl group that interacts with Lys199 in the upper part of helix 5 of the receptor; see figure 1 for details.
The ionic bridge formed between Lys199 and the carboxyl terminal group of the Phe8 residue of Ang II is most likely stabilized by the Trp253 residue. In addition, Phe259 and Asp263 in transmembrane helix 6 and Lys102 and Ser105 in the outer region of transmembrane helix 3 have also been implicated in Ang II binding. This region may possibly participate in the stabilization of the receptor's conformation and in the formation of the intramembrane binding pocket.
Mechanism of action
Blood pressure and fluid and electrolyte homeostasis are regulated by the renin–angiotensin–aldosterone system.
Renin, an enzyme released from the kidneys, converts the inactive plasma protein angiotensinogen into angiotensin I (Ang I). Then Ang I is converted to Ang II with angiotensin converting enzyme (ACE), see figure 2. Ang II in plasma then binds to AT-receptors.
ARBs block the last part of the renin–angiotensin pathway and do so more specifically than ACE inhibitors.
The AT1 receptor mediates Ang II to cause increased cardiac contractility, sodium reabsorption and vasoconstriction which all lead to increased blood pressure. By blocking AT1 receptors, ARBs lead to lower blood pressure.
An insurmountable inhibition of the AT1 receptor is achieved when the maximum response of Ang II cannot be restored in the presence of the ARB, no matter how high the concentration of Ang II is.
The angiotensin receptor blockers can inhibit the receptor in a competitive surmountable, competitive insurmountable or noncompetitive fashion, depending upon the rate at which they dissociate from the receptor.
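The distinction between surmountable and insurmountable inhibition can be illustrated with a simple concentration–response model. The sketch below uses the standard Gaddum/Schild description of competitive (surmountable) antagonism and a receptor-depletion picture for insurmountable antagonism; the EC50, KB and concentration values are arbitrary illustrative numbers, not parameters taken from the article.

```python
def response_surmountable(agonist, ec50=1.0, antagonist=0.0, kb=1.0):
    """Competitive, surmountable block: the apparent EC50 is shifted by (1 + [B]/KB)
    (Gaddum/Schild), but the maximal response can still be reached."""
    ec50_apparent = ec50 * (1.0 + antagonist / kb)
    return agonist / (agonist + ec50_apparent)

def response_insurmountable(agonist, ec50=1.0, fraction_blocked=0.5):
    """Insurmountable block: a fraction of receptors is effectively removed,
    so the maximal response falls and cannot be restored by more agonist."""
    return (1.0 - fraction_blocked) * agonist / (agonist + ec50)

very_high_ang2 = 1e6   # "no matter how high" the angiotensin II concentration (arbitrary units)
print(response_surmountable(very_high_ang2, antagonist=100.0))   # ~1.0: maximum restored
print(response_insurmountable(very_high_ang2))                   # ~0.5: maximum not restored
```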
Drug discovery and development
Development from saralasin to losartan and eprosartan
For a simple overview of the development of ARBs, see figure 3.
Because of saralasin, the first Ang II antagonist, and the development of the first ACE inhibitor captopril, it was generally acknowledged that Ang II receptor antagonists might be promising as effective antihypertensive agents.
Saralasin was developed in the early 1970s and is an octapeptide analogue of Ang II, where the amino acids Asp1, Ile5 and Phe8 have been replaced with Sar1 (sarcosine), Val5 and Ala8, respectively. Saralasin was not orally bioavailable, had a short duration of action and showed partial agonist activity and therefore it was not suitable as a drug.
Thus the goal was to develop a smaller nonpeptide substance with similar inhibition and binding features. At this time, a group at DuPont had already started the screening of nonpeptide mimics of Ang II using existing substances from chemical libraries.
Research investigators at Takeda discovered in 1982 the weak nonpeptide Ang II antagonists S-8307 and S-8308 from a group of 1-benzylimidazole-5-acetic acid derivatives. S-8307 and S-8308 have moderate potency, short duration of action and limited oral bioavailability, however they are selective and competitive AT1 receptor antagonists without partial agonist activity. A group at DuPont postulated that both Ang II and the Takeda leads were bound at the same receptor site. These two substances served as lead compounds for further optimization of AT1 receptor blockers.
Using nuclear magnetic resonance studies on the spatial structure of Ang II, scientists at DuPont discovered that the Takeda structures had to be enlarged at a particular position to resemble more closely the much larger peptide Ang II.
Computer modeling was used to compare S-8308 and S-8307 with Ang II and it was seen that Ang II contains two acidic residues near the NH2 terminus. These groups were not mimicked by the Takeda leads and therefore it was hypothesized that acidic functional groups would have to be added to the compounds.
The 4-carboxy-derivative EXP-6155 had a binding activity which was ten-fold greater than that of S-8308 which further strengthened this hypothesis.
By replacing the 4-carboxy-group with a 2-carboxy-benzamido-moiety the compound EXP-6803 was synthesized. It had highly increased binding affinity but was only active when administered intravenously.
Replacing the 2-carboxy-benzamido-group with a 2-carboxy-phenyl-group created the lipophilic biphenyl-containing EXP-7711, which exhibited good oral activity but slightly less affinity for the AT1 receptor.
Then the polar carboxyl group was replaced with a more lipophilic tetrazole group in order to increase oral bioavailability and duration of action further and the compound thus formed was named losartan. This development took place in 1986 and losartan became the first successful Ang II antagonist drug, approved as such in the United States in 1995 and was marketed by Merck under the brand name Cozaar.
This development was an extensive program and it is estimated that the process from the Takeda structures to the final substance, losartan, took more than fifty person-years of work in biological testing and chemical modifications. This represents an excellent investment given that a recent study estimated that losartan administration in the European Union may reduce health care provision costs by 2.5 billion euro over 3.5 years.
Using a different lead, optimization from S-8308, eprosartan was developed by SmithKline Beecham in 1992. Eprosartan does not have a biphenyl-methyl structure but in order to mimic the C-terminal end of Ang II the 5-acetic acid group was replaced with an α-thienylacrylic acid and a 4-carboxy-moiety. Eprosartan is a selective, potent and competitive AT1 antagonist and its binding to AT1 receptors is rapid, reversible, saturable and of high affinity.
Development from losartan to other drugs
Losartan, valsartan, candesartan, irbesartan, telmisartan and olmesartan all contain a biphenyl-methyl group.
Losartan is partly metabolized to its 5-carboxylic acid metabolite EXP 3174, which is a more potent AT1 receptor antagonist than its parent compound and has been a model for the continuing development of several other ARBs.
Valsartan, candesartan and irbesartan were all developed in 1990.
Valsartan, first marketed by Novartis, is a nonheterocyclic ARB, where the imidazole of losartan has been replaced by an acylated amino acid.
Irbesartan was developed by Sanofi Research and is longer acting than valsartan and losartan. It has an imidazolinone ring in which a carbonyl group functions as a hydrogen bond acceptor instead of the hydroxymethyl group in losartan. Irbesartan is a non-competitive inhibitor.
Candesartan cilexetil (TCV 116) is a benzimidazole which was developed at Takeda and is an ester carbonate prodrug. In vivo, it is rapidly converted to the much more potent corresponding 7-carboxylic acid, candesartan. In the interaction of candesartan with AT1 receptor the carboxyl group of the benzimidazole ring plays an important role. Candesartan and its prodrug have stronger blood pressure lowering effects than EXP 3174 and losartan.
Telmisartan, which was discovered and developed in 1991 by Boehringer Ingelheim, has carboxylic acid as the biphenyl acidic group. It has the longest elimination half-life of the ARBs or about 24 hours.
Olmesartan medoxomil was developed by Sankyo in 1995 and is the newest ARB on the market, marketed in 2002. It is an ester prodrug like candesartan cilexetil. In vivo, the prodrug is completely and rapidly hydrolyzed to the active acid form, olmesartan (RNH-6270). It has a hydroxyisopropyl group connected to the imidazole ring in addition to the carboxyl group.
Pharmacophore and structure-activity relationship
Pharmacophore
There are three functional groups that are the most important parts for the bioactivity of ARBs, see figure 1 for details.
The first one is the imidazole ring that binds to amino acids in helix 7 (Asn295). The second group is the biphenyl-methyl group that binds to amino acids in both helices 6 and 7 (Phe301, Phe300, Trp253 and His256). The third one is the tetrazole group that interacts with amino acids in helices 4 and 5 (Arg167 and Lys199).
The tetrazole group has been successfully replaced by a carboxylic acid group as is the case with telmisartan.
Structure-activity relationship (SAR)
Most of the ARBs have the same pharmacophore, so the differences in their biochemical and physiological effects are mostly due to different substituents. The activity of a drug depends on its affinity for the substrate site and the length of time it binds to the site.
Lipophilic substituents like the linear alkyl group at the 2-position on the imidazole ring together with the biphenyl-methyl group, associate with hydrophobic pockets of the receptor. An acidic group like tetrazole, CO2H or NHSO2CF3 at the 1-position of the biphenyl-methyl group will bind to a basic position in the receptor and are required for potent antagonistic activity.
In valsartan, the imidazole ring of losartan has been replaced with an acylated amino acid.
Several substituents have been tried at the 4- and 5- positions on the imidazole ring. The chloro and hydroxymethyl groups connected to these positions in losartan are probably not of much importance in receptor binding since the other ARBs do not possess these functional groups and have comparable or better binding affinities than losartan. Irbesartan has a carbonyl group at the 5-position, functioning as a hydrogen bond acceptor in place of the hydroxymethyl group of losartan, resulting in a longer binding to the receptor.
The structure of eprosartan is the one that differs most from the other ARBs, the usual biphenyl-methyl group has been replaced by a carboxy benzyl group that mimics more closely the phenolic moiety of Tyr4 group of Ang II. This change results in a stronger binding to the receptor but the biochemical and physiological effects are not significantly improved.
Telmisartan has a carboxylic acid at the 2-position of the biphenyl-methyl group and is more potent than the tetrazole analogue.
It has been reported that imidazoles that have hydroxymethyl and carboxy groups at the 4- and 5 position, possessed potent antagonistic activity, caused by the hydrogen bonding and hydrophilicity of the hydroxymethyl group.
It has also been reported that a hydroxy group in the 4-position on the imidazole ring plays an important role in the binding affinity and compensates for the disadvantage of the lipophilicity of the bulky alkyl group.
These results show that a medium-sized hydroxy alkyl group, such as CHMeOH and CMe2OH, is favorable for the substituent of the 4-position on the imidazole ring. Furthermore, the ionizable group is favorable for the binding affinity.
Candesartan and olmesartan have the highest affinity for the AT1 receptors, followed by irbesartan and eprosartan. Valsartan, telmisartan and EXP 3174 have similar affinities that are about ten-fold less than that of candesartan. Losartan has the least affinity. ARBs' affinity for the AT2 receptor is generally much lower (or around 10,000 times less) than for the AT1 subtype. Therefore, they allow unhindered stimulation of the AT2 receptor.
Drug comparison and pharmacokinetics
ARBs have a large therapeutic index and therefore their (mostly low) oral bioavailability does not appear to be of clinical significance.
As can be seen in table 1, these drugs are highly plasma protein-bound and therefore oral administration once a day should provide sufficient antihypertensive effects.
Around 14% of orally ingested losartan is metabolized to its 5-carboxylic acid metabolite EXP 3174. As mentioned before, candesartan cilexetil and olmesartan medoxomil are inactive ester prodrugs that are completely hydrolyzed to their active forms by esterases during absorption from the gastrointestinal tract. These three metabolites are more potent AT1 receptor antagonists than their prodrugs. The other ARBs do not have active metabolites.
All of the ARBs, except for valsartan and olmesartan, are metabolized in some way by the cytochrome P450 (CYP) enzyme 2C9, that is found in the human liver. CYP2C9 is for example responsible for the metabolizing of losartan to EXP 3174 and the slow metabolizing of valsartan and candesartan to their inactive metabolites. Telmisartan is, on the other hand, in part metabolized by glucuronidation and olmesartan is excreted as the unchanged drug.
Telmisartan is the only ARB that can cross the blood–brain barrier and can therefore inhibit centrally mediated effects of Ang II, contributing to even better blood pressure control.
All of the ARBs have the same mechanism of action and differences in their potency can be related to their different pharmacokinetic profiles. A few clinical head-to-head comparisons have been made and candesartan, irbesartan and telmisartan appear to be slightly more effective than losartan in lowering blood pressure. This difference may be related to different strengths of activity at the receptor level, such as duration and strength of receptor binding.
ARBs under development
Several new nonpeptide ARBs are undergoing clinical trials or are at pre-clinical stages of development. Among these are embusartan (BAY 10-6734), KRH-594, fonsartan (HR 720) and pratosartan (KT3-671). Pratosartan, for example, has a novel structure: a seven-membered ring that bears an oxo moiety (C=O) fused to the imidazole ring (figure 4), and its affinity for the AT1 receptor is about 7 times higher than losartan's. The purpose of the oxo group is similar to that of the carboxylic acid groups on other ARBs.
Other attributes of ARBs are also under investigation, such as the positive effects of telmisartan on lipid and glucose metabolism and losartan's effects of lowering uric acid levels. Such effects might lead to new indications for these drugs but further research is needed.
See also
Discovery and development of renin inhibitors
References
Angiotensin II receptor antagonists
Angiotensin Receptor Blockers, Discovery And Development Of | Discovery and development of angiotensin receptor blockers | [
"Chemistry",
"Biology"
] | 4,406 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
20,207,005 | https://en.wikipedia.org/wiki/List%20of%20A1%20genes%2C%20proteins%20or%20receptors | This is a list of genes, proteins or receptors named A1 or Alpha-1 :
Actin, alpha 1
Actinin, alpha 1
Adaptor-related protein complex 2, alpha 1
Aldehyde dehydrogenase 3 family, member A1
Aldehyde dehydrogenase 4 family, member A1
Aldehyde dehydrogenase 5 family, member A1
Aldehyde dehydrogenase 6 family, member A1
Aldehyde dehydrogenase 9 family, member A1
Aldehyde dehydrogenase 18 family, member A1
Aldo-keto reductase family 1, member A1
Alpha-1-microglobulin/bikunin precursor
Apolipoprotein A1 and ApoA-1 Milano
ATPase, H+ transporting, lysosomal V0 subunit a1
ATPase, Na+/K+ transporting, alpha 1
ATP synthase, H+ transporting, mitochondrial F1 complex, alpha 1
BCL2-related protein A1
Butyrophilin, subfamily 1, member A1
Butyrophilin, subfamily 3, member A1
Capping protein (actin filament) muscle Z-line, alpha 1
Carboxypeptidase A1
Casein kinase 1, alpha 1
Casein kinase 2, alpha 1
Catenin (cadherin-associated protein), alpha 1
Centaurin, alpha 1
Cholinergic receptor, nicotinic, alpha 1
Coagulation factor XIII, A1 polypeptide
collagen, type I, alpha 1
collagen, type II, alpha 1
Collagen, type III, alpha 1
Collagen, type IV, alpha 1
Collagen, type V, alpha 1
Collagen, type VI, alpha 1
Collagen, type VII, alpha 1
Collagen, type VIII, alpha 1
Collagen, type IX, alpha 1
Collagen, type X, alpha 1
Collagen, type XI, alpha 1
Collagen, type XII, alpha 1
Collagen, type XIII, alpha 1
Collagen, type XIV, alpha 1
Collagen, type XV, alpha 1
Collagen, type XVI, alpha 1
Collagen, type XVII, alpha 1
Collagen, type XVIII, alpha 1
Collagen, type XIX, alpha 1
Collagen, type XXV, alpha 1
Collagen, type XXVII, alpha 1
Crystallin, beta A1
Cyclic nucleotide-gated channel alpha 1
Cyclin A1
Cytochrome P450, family 1, member A1
Defensin, alpha 1
Dystrophin-associated protein A1
Ephrin A1
Eukaryotic translation elongation factor 1 alpha 1
Family with sequence similarity 13, member A1
Family with sequence similarity 19 (chemokine (C-C motif)-like), member A1
Gamma-aminobutyric acid (GABA) A receptor, alpha 1
Gap junction protein, alpha 1
GDNF family receptor alpha 1
Glutathione S-transferase A1
Glycine receptor, alpha 1
Heat shock protein 90kDa alpha (cytosolic), member A1
Hemoglobin, alpha 1
Heterogeneous nuclear ribonucleoprotein A1
Homeobox A1
Immunoglobulin heavy constant alpha 1
Importin alpha 1
Interferon, alpha 1
Interleukin 13 receptor, alpha 1
Karyopherin alpha 1
Laminin, alpha 1
Major histocompatibility complex, class II, DP alpha 1
Major histocompatibility complex, class II, DQ alpha 1
Myosin light chain A1, an actin-binding protein
NADH dehydrogenase (ubiquinone), alpha 1
Nucleolar protein, member A1
PCDHA4
Phospholipase A1
Phosphorylase kinase, alpha 1
Plexin A1
Polymerase (DNA directed), alpha 1
Potassium large conductance calcium-activated channel, subfamily M, alpha 1
Proteasome (prosome, macropain) subunit, alpha 1
Protein kinase, AMP-activated, alpha 1
Protein tyrosine phosphatase, receptor type, f polypeptide (PTPRF), interacting protein (liprin), alpha 1
Protocadherin alpha 1
Pulmonary surfactant-associated protein A1
Pyruvate dehydrogenase (lipoamide) alpha 1
RNA binding motif protein, Y-linked, family 1, member A1
Replication protein A1
S100 calcium binding protein A1
Sec61 alpha 1
Serum amyloid A1
Solute carrier family 35 (CMP-sialic acid transporter), member A1
Spectrin, alpha 1
Sperm protein associated with the nucleus, X-linked, family member A1
Syntrophin, alpha 1
Transient receptor potential cation channel, member A1
UDP glucuronosyltransferase 1 family, polypeptide A1
Urea Transporter A1
a gene found in maize encoding dihydroflavonol 4-reductase (which reduces dihydroflavonols into flavan-4-ols) in the phlobaphene metabolic pathway
proteins
α-1-antitrypsin, an acute-phase protein whose deficiency causes alpha-1 antitrypsin deficiency, a genetic disorder
Annexin A1, a human protein
Outer membrane phospholipase A1, a bacterial protein
receptors
α-1-Adrenoceptor, an adrenergic receptor with the primary effect of vasoconstriction
Alpha-1 blocker, a variety of drugs which block α1-adrenergic receptors in arteries and smooth muscles
Adenosine A1 receptor
EPH receptor A1
A1, a subfamily of rhodopsin-like receptors
SR-A1, a type of scavenger receptors
domains
CTA1, a portion of the cholera toxin chain
alleles
A1, an allele in the DRD2 TaqI polymorphism that could be involved in alcoholism
Receptor
Molecular-biology-related lists | List of A1 genes, proteins or receptors | [
"Chemistry"
] | 1,242 | [
"Molecular-biology-related lists",
"Molecular biology"
] |
20,207,520 | https://en.wikipedia.org/wiki/Histamine%20trifluoromethyl%20toluidide | Histamine trifluoromethyl toluidide (HTFMT) is a mixed H1/H2 histamine agonist which is significantly more potent than histamine itself.
It also produces additional actions which appear to be independent of histamine receptors.
References
Histamine agonists
Imidazoles
Trifluoromethyl compounds | Histamine trifluoromethyl toluidide | [
"Chemistry"
] | 78 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
20,207,531 | https://en.wikipedia.org/wiki/Immethridine | Immethridine is a histamine agonist selective for the H3 subtype.
References
Histamine agonists
Imidazoles
4-Pyridyl compounds | Immethridine | [
"Chemistry"
] | 39 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
20,208,066 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20triptans | Triptans are a family of tryptamine-based drugs used as abortive medication in the treatment of migraines and cluster headaches. They are selective 5-hydroxytryptamine/serotonin1B/1D (5-HT1B/1D) agonists. Migraine is a complex disease which affects about 15% of the population and can be highly disabling.
Triptans have advantages over ergotamine and dihydroergotamine, such as selective pharmacology, well established safety record and evidence-based prescribing instructions. Triptans are therefore often preferred treatment in migraine.
History
The search for a new anti-migraine drug started at Glaxo in 1972. Studies in the 1960s showed that vasoconstriction from 5-HT, ergotamine and noradrenaline could reduce migraine attacks. Research also showed that the platelet 5-HT level is reduced during migraine. Because there are too many side-effects for 5-HT to be used as a drug, scientists started research on the receptors of 5-HT in order to discover and develop a more specific agonist for 5-HT receptors. Research on the 5-HT receptors and their effects led to the discovery of several types and subtypes of 5-HT receptors. AH24167 showed a vasodilation effect instead of vasoconstriction due to its agonist effect on another type of 5-HT receptor, later assigned the name 5-HT7. AH25086 was the second compound developed and showed a vasoconstriction effect but was not released as a drug due to low oral bioavailability. Continued research led to the discovery of the first triptan drug, sumatriptan, which had both the vasoconstriction effect and better oral bioavailability. Sumatriptan was first launched in the Netherlands in 1991 and became available in the United States during 1993.
Mechanism
Triptans are specific and selective agonists for the 5-HT1 receptors. Sumatriptan binds to 5-HT1D receptors; zolmitriptan, rizatriptan, naratriptan, almotriptan, and frovatriptan bind to 5-HT1B/1D receptors; and eletriptan binds to 5-HT1B/1D/1F receptors. Triptans are believed to exert their effects through vasoconstriction, leading to reduced carotid arterial circulation without affecting cerebral blood flow, peripheral neuronal inhibition, or inhibition of transmission through second order neurons of the trigeminocervical complex.
Receptors
5-HT receptors are all G-protein coupled receptors (GPCR) except for 5-HT3, which is a ligand-gated ion channel. The receptors that have been found to be involved in migraine are the 5-HT1B, 5-HT1D and 5-HT1F receptors. 5-HT1B receptors are found in meningeal arteries; agonism of 5-HT1B causes vasoconstriction of cranial blood vessels. The 5-HT1D receptors are located primarily in the trigeminal nerve in the central nervous system (CNS). They are also found in vascular smooth muscles, mediating contraction. Agonism of 5-HT1D receptors subdues the release of inflammatory mediators. It has been shown that both 5-HT1B and 5-HT1D receptors in humans have very similar amino acid structures, from which the similarities in binding properties can be expected.
Design
All triptans have an indole structure identical to the neurotransmitter 5-HT. The classic triptan structure contains a side chain on the indole ring and a basic nitrogen at a similar distance from the indole core. The main structural difference among the triptans is the position of the sulfonamide and the side chain attached to it (see figure 1 and table 1). Rizatriptan and zolmitriptan have, instead of a sulfonamide, a triazole and a 2-oxazolidone, respectively. Another exception to the classic structure is seen in eletriptan, where the nitrogen-alkyl chain connected to the indole ring is replaced with a dimethyl-pyrrolidine, and in naratriptan, where the nitrogen-alkyl chain is replaced with a 1-methyl-piperidine ring.
One of the frovatriptan side chains forms an additional ring with the indole, resulting in a carbazole ring system.
Structures of the triptans
The 5-HT1B/D pharmacophore
5-HT1B and 5-HT1D receptors are considered very similar: they share amino acid homology and their ligands exhibit similar binding properties, thus they have a similar pharmacophore. The pharmacophore model for these receptors' ligands is qualitative and defines the relative positions of important groups. It is defined by the following five main features: an aromatic group (usually the indole), a protonated amine (a hydrogen bond donor), a hydrogen bond acceptor, an additional hydrogen bond site (both donor and acceptor) and a hydrophobic region located between the two hydrogen bond sites, see figure 2. The main binding points were concluded to be the protonated amine and the hydrogen bond site. It was observed that the double bond region in the indole was necessary for the agonism in this series of compounds. Figure 3 shows how different drugs fit the pharmacophore, with C- and N-linked analogues of a 5-HT1D agonist. The marked sites on the figure are responsible for the affinity. The pharmacophore can be characterized as amphipathic, meaning that the structure has both hydrophobic and hydrophilic groups.
Relevant structural features of triptans and binding to the receptor
Triptan structures were designed from the structure of 5-HT to attain affinity to 5-HT receptors, hence the identical indole structure. The hydroxyl group (-OH) on the six-membered ring of the indole core and the alkyl-amine side chain at position C3 of 5-HT have been replaced with other groups, such as sulfonamides or azole-ring derivatives and different amine-alkyl side chains. An electro-negative group can form a hydrogen bond with Thr in the pocket of the receptor. Sulfonamide derivatives attached to the six-membered ring of the indole structure have electro-negative properties, as do the triazole and 2-oxazolidone on rizatriptan and zolmitriptan respectively. This can increase the binding ability of the compound and the efficacy, especially with the 5-HT1D receptor.
A schematic drawing of the binding of sumatriptan to the 5-HT1D receptor can be seen in figure 4. One study showed that sumatriptan fits better in the binding site of the receptor when the side chain with the protonated nitrogen atom is folded back over the indole structure. This alignment contributes to the hydrogen bonding between the nitrogen in the sulfonamide and Ser138 in the binding site. It is also favorable to the formation of the hydrogen bond between the oxygen of the sulfonamide and Thr202. Other binding in the pocket of the binding site occurs between the nitrogen atom in the five-membered ring of the indole structure of the triptan and the amino acid Ser352. This energetically favorable position of the agonist makes it possible for additional binding of the ligand to other Ser residues in the binding site, along with additional anchoring between Phe in the pocket of the binding site and the indole of the agonist. The binding of Phe and the triptan is caused by π stacking interactions of the indole and the amino acid, and this interaction is further strengthened by the dispersive effect of the amino acid leucine (Leu; not shown in figure 4). The amino acids Trp343 and Tyr346 both have electron-rich π-systems in their aromatic structures. With their position in the binding site they create a sort of aromatic cage around the protonated nitrogen atom of the side chain at position C3 of the triptans (this nitrogen atom is protonated under physiological conditions), and thereby stabilize the ionic bond the nitrogen atom has formed with a carboxylate on aspartic acid. Side chains of the surrounding amino acids can have an effect on the binding of the nitrogen atom; mainly, three Phe residues can affect the methyl groups bound to the nitrogen atom (not shown in figure 4).
Eletriptan has higher affinity for the receptor, which is probably a result of the bulky substituents of the structure. The amine is protonated at physiological pH, triggering better uptake. The uptake rate of the agonist differs depending on whether the amine in R2 is primary, secondary or tertiary, but the latter seems to give the best results. For the R1 substituent, electron-rich sulfonamide and amide groups have shown the best results in receptor binding and activity. It has been observed that there is a relationship between absorption and molecular size; hence larger hydrophilic molecules tend to have poor absorption. A small R1 substituent is necessary to maintain the rapid oral bioavailability of triptans.
By placing an electron-withdrawing group or a large group at position C2 of the indole structure, the 5-HT agonist is converted into an antagonist. This is thought to be because the indole ring is then unable to occupy the aromatic part of the binding site.
Triptan drugs
Properties of formulations
Sumatriptan was the pioneer drug in this class. Second-generation triptans such as zolmitriptan, naratriptan, rizatriptan, almotriptan, eletriptan and frovatriptan soon became available.
Different triptans are available in different formulations and in different strengths (see table 2). They have been formulated as subcutaneous injections, oral tablets, orally disintegrating tablets, nasal spray and as rectal suppositories.
The delivery system of a triptan may play an important role in its onset of action. The selection of an anti-migraine drug for a patient depends on their symptoms. The first selective 5-HT1B/1D agonist, sumatriptan, was first marketed as a subcutaneous injection, then as oral tablets and more recently as a nasal spray; it is also available in some countries as suppositories. The subcutaneous injection is the fastest way to stop a rapidly progressing migraine attack. The sumatriptan nasal spray provides faster onset of action than the tablets but produces a similar headache response at 2 hours. Some patients prefer the nasal spray as it works more rapidly than the tablets and does not have as many adverse effects as the subcutaneous injection. The nasal spray is, however, not suitable for all patients, because some experience a bad taste and a lack of consistency of response. Zolmitriptan was developed with the strategy of creating a more lipophilic compound, with faster absorption and better ability to cross the blood–brain barrier than sumatriptan. It is available as tablets, orally disintegrating tablets and, in some countries, as a nasal spray. Rizatriptan is available as tablets and orally disintegrating tablets, but naratriptan, almotriptan, eletriptan and frovatriptan are currently only available as tablets.
The U.S. Food and Drug Administration (FDA) approved a new drug on April 15, 2008, which is a combination of sumatriptan 85 mg and naproxen 500 mg (an NSAID).
Triptans and NSAIDs act on distinct mechanisms involved in migraine and may therefore offer improved treatment when administered together.
Pharmacokinetics
Pharmacokinetic properties (see table 3) are important when new drugs are developed.
Patients seek rapid onset of action to relieve the headache. A relatively short tmax, good bioavailability and lipophilicity are pharmacokinetic properties that have been associated with rapid onset of action. It has been speculated that a good ability to cross the blood–brain barrier and a relatively long terminal elimination half-life may result in a lower incidence of headache recurrence. Sumatriptan and rizatriptan undergo first-pass hepatic metabolism, which results in lower bioavailability. A minimal model illustrating how t1/2 and tmax arise from absorption and elimination rates is sketched after the abbreviation key below.
t1/2 = Elimination half-life;
tmax = Time to reach peak plasma drug concentration;
ClR = Renal Clearance;
LogDpH7.4 = Measure of lipophilicity at pH 7.4; a higher value indicates greater lipophilicity;
VD = Volume of distribution
M = Male; F = Female
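To make the relationship between these parameters concrete, the following is a minimal one-compartment sketch with first-order absorption and elimination; the rate constants, dose and volume of distribution are illustrative values chosen for the example, not data for any particular triptan.

```python
import numpy as np

# One-compartment model with first-order absorption (ka) and elimination (ke).
# All parameter values below are illustrative, not measured triptan data.
ka = 2.0        # absorption rate constant, 1/h
ke = 0.35       # elimination rate constant, 1/h
F  = 0.4        # oral bioavailability (fraction reaching circulation)
dose_mg = 50.0  # oral dose
Vd_L = 150.0    # volume of distribution

t = np.linspace(0, 12, 500)  # hours after dosing
C = (F * dose_mg * ka) / (Vd_L * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t_half = np.log(2) / ke              # elimination half-life
t_max = np.log(ka / ke) / (ka - ke)  # time of peak plasma concentration
print(f"t1/2 ≈ {t_half:.1f} h, tmax ≈ {t_max:.1f} h, Cmax ≈ {C.max():.3f} mg/L")
```

In this toy model a shorter tmax follows from faster absorption (larger ka), while a longer t1/2 follows from slower elimination (smaller ke), matching the qualitative statements above.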
Future research
Most triptans were developed and introduced in the 1990s. Further studies have not shown much promise regarding the development of new triptans with better duration of action, efficacy and safety profile. Therefore, it is unlikely that further variations will be developed and new anti-migraine drugs are likely to have another mechanism of action.
References
Triptans, Discovery And Development Of | Discovery and development of triptans | [
"Chemistry",
"Biology"
] | 2,822 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
20,208,243 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20dipeptidyl%20peptidase-4%20inhibitors | Dipeptidyl peptidase-4 inhibitors (DPP-4 inhibitors) are enzyme inhibitors that inhibit the enzyme dipeptidyl peptidase-4 (DPP-4). They are used in the treatment of type 2 diabetes mellitus. Inhibition of the DPP-4 enzyme prolongs and enhances the activity of incretins that play an important role in insulin secretion and blood glucose control regulation.
Type 2 diabetes mellitus is a chronic metabolic disease that results from inability of the β-cells in the pancreas to secrete sufficient amounts of insulin to meet the body's needs. Insulin resistance and increased hepatic glucose production can also play a role by increasing the body's demand for insulin. Current treatments, other than insulin supplementation, are sometimes not sufficient to achieve control and may cause undesirable side effects, such as weight gain and hypoglycemia. In recent years, new drugs have been developed, based on continuing research into the mechanism of insulin production and regulation of the metabolism of sugar in the body. The enzyme DPP-4 has been found to play a significant role.
History
Since its discovery in 1967, serine protease DPP-4 has been a popular subject of research. Inhibitors of DPP-4 have long been sought as tools to elucidate the functional significance of the enzyme. The first inhibitors were characterized in the late 1980s and 1990s. Each inhibitor was important to establish an early structure activity relationship (SAR) for subsequent investigation. The inhibitors fall into two main classes, those that interact covalently with DPP-4 and those that do not. DPP-4 is a dipeptidase that selectively binds substrates that contain proline at the P1-position, thus many DPP-4 inhibitors have 5-membered heterocyclic rings that mimic proline, e.g. pyrrolidine, cyanopyrrolidine, thiazolidine and cyanothiazolidine. These compounds commonly form covalent bonds to the catalytic residue Ser630.
In 1994, researchers from Zeria Pharmaceuticals unveiled cyanopyrrolidines with a nitrile functional group that was assumed to form an imidate with the catalytic serine. Concurrently, other DPP-4 inhibitors without a nitrile group were published, but they contained other serine-interacting motifs, e.g. boronic acids, phosphonates or diacyl hydroxylamines. These compounds were not as potent because of the similarity of DPP-4 and prolyl oligopeptidase (PEP), and they also suffered from chemical instability. Ferring Pharmaceuticals filed for a patent on two cyanopyrrolidine DPP-4 inhibitors, which they published in 1995. These compounds had excellent potency and improved chemical stability.
In 1995, Edwin B. Villhauer at Novartis started to explore N-substituted glycinyl-cyanopyrrolidines based on the fact that DPP-4 identifies N-methylglycine as an N-terminal amino acid. This group of new cyanopyrrolidines became an extremely popular field of research in the following years. Some trials with dual inhibitors of DPP-4 and vasopeptidase have been reported, since vasopeptidase inhibition is believed to enhance the antidiabetic effect of DPP-4 inhibition by stimulating insulin secretion. The vasopeptidase-inhibiting motif is connected to the DPP-4 inhibitor at the N-substituent.
DPP-4 mechanism
Fig.1: During a meal, the incretins glucagon-like peptide 1 (GLP-1) and glucose-dependent gastric inhibitory polypeptide (GIP) are released by the small intestine into the blood stream. These hormones regulate insulin secretion in a glucose-dependent manner. (GLP-1 has many roles in the human body. It stimulates insulin biosynthesis, inhibits glucagon secretion, slows gastric emptying, reduces appetite and stimulates regeneration of islet β-cells.)
GLP-1 and GIP have extremely short plasma half-lives due to very rapid inactivation, catalyzed by the enzyme DPP-4. Inhibition of DPP-4 slows their inactivation, thereby potentiating their action, leading to lower plasma glucose levels, hence its utility in the treatment of type 2 diabetes. (Figure 1).
DPP-4 distribution and function
DPP-4 is attached to the plasma membrane of the endothelium of almost every organ in the body. Tissues which strongly express DPP-4 include the exocrine pancreas, sweat glands, salivary and mammary glands, thymus, lymph nodes, biliary tract, kidney, liver, placenta, uterus, prostate, skin, and the capillary bed of the gut mucosa (where most GLP-1 is inactivated locally). It is also present, in soluble form, in body fluids, such as blood plasma and cerebrospinal fluid. (It also happens that DPP-4 is the CD26 T-cell activating antigen.)
DPP-4 selectively cleaves two amino acids from peptides, such as GLP-1 and GIP, which have proline or alanine in the second position (Figure 2). At the active site where DPP-4 has its effect, there is a characteristic arrangement of three amino acids, Asp-His-Ser. Since alanine and proline are crucial for the biological activity of GLP-1 and GIP, they are inactivated by cleaving away these amino acids. Thus, preventing the degradation of the incretin hormones GLP-1 and GIP by inhibition of DPP-4 has potential as a therapeutic strategy in the treatment of type 2 diabetes.
DPP-4 characteristics
Since DPP-4 is a protease, it is not unexpected that inhibitors would likely have a peptide nature and this theme has carried through to contemporary research.
Structure
X-ray structures of DPP-4 that have been published since 2003 give rather detailed information about the structural characteristics of the binding site. Many structurally diverse DPP-4 inhibitors have been discovered and it is not that surprising considering the properties of the binding site:
1. A deep lipophilic pocket combined with several exposed aromatic side chains for achieving high affinity small molecule binding.
2. A significant solvent access that makes it possible to tune the physico-chemical properties of the inhibitors that leads to better pharmacokinetic behavior.
DPP-4 is a 766-amino acid transmembrane glycoprotein that belongs to the prolyloligopeptidase family. It consists of three parts: a cytoplasmic tail, a transmembrane region and an extracellular part. The extracellular part is divided into a catalytic domain and an eight-bladed β-propeller domain. The latter contributes to the inhibitor binding site. The catalytic domain shows an α/β-hydrolase fold and contains the catalytic triad Ser630–Asp708–His740. The S1-pocket is very hydrophobic and is composed of the side chains of Tyr631, Val656, Trp662, Tyr666 and Val711. Existing X-ray structures show little difference in the size and shape of the pocket, which indicates that the S1-pocket has high specificity for proline residues.
Binding site
DPP-4 inhibitors usually have an electrophilic group that can interact with the hydroxyl of the catalytic serine in the active binding site (Figure 3). Frequently that group is a nitrile group, but it can also be a boronic acid or a diphenyl phosphonate. This electrophilic group can form a covalent imidate adduct with the catalytic serine, giving slow, tight-binding kinetics, but it is also responsible for stability issues due to reactions with the free amino group of the P2-amino acid. Therefore, inhibitors without the electrophilic group have also been developed, but these molecules have shown toxicity due to affinity for other dipeptidyl peptidases, e.g. DPP-2, DPP-8 and DPP-9.
DPP-4 inhibitors span diverse structural types. As of 2007, a few of the most potent compounds contained a proline-mimetic cyanopyrrolidine P1 group. This group enhances the potency, probably due to transient covalent trapping of the nitrile group by the active-site Ser630 hydroxyl, leading to delayed dissociation and slow, tight binding of certain inhibitors. When these potency enhancements were achieved, some chemical stability issues were noted and more advanced molecules had to be made. To avoid these stability issues, the possibility of excluding the nitrile group was investigated. Amino acids with aryl or polar side chains did not show appreciable DPP-4 inhibition, and in fact all compounds without the nitrile group in this research suffered a 20- to 50-fold loss of potency compared to the compounds containing the nitrile group.
Discovery and development
It is important to find a fast and accurate system to discover new DPP-4 inhibitors with ideal therapeutic profiles. High throughput screening (HTS) usually gives low hit rates in identifying the inhibitors, but virtual screening (VS) can give higher rates. VS has, for example, been used to screen for small primary aliphatic amines to identify fragments that could be placed in the S1 and S2 sites of DPP-4. These fragments were not very potent on their own and were therefore used as starting points for designing better ones.
Three-dimensional models can provide a useful tool for designing novel DPP-4 inhibitors. Pharmacophore models have been made based on key chemical features of compounds with DPP-4 inhibitory activity. These models can provide a hypothetical picture of the primary chemical feature responsible for inhibitory activity.
The first DPP-4 inhibitors were reversible inhibitors and came with bad side effects because of low selectivity. Researchers suspected that inhibitors with short half-lives would be preferred in order to minimize possible side effects. However, since clinical trials showed the opposite, the latest DPP-4 inhibitors have a long-lasting effect. One of the first reported DPP-4 inhibitors was P32/98 from Merck. It used a thiazolidide as the P1-substituent and was the first DPP-4 inhibitor to show effects in both animals and humans, but it was not developed into a marketed drug due to side effects. Another early inhibitor is DPP-728 from Novartis, where 2-cyanopyrrolidine is used as the P1-substituent. The addition of the cyano group generally increases the potency. Therefore, researchers' attention was directed to those compounds. Usually, DPP-4 inhibitors are either substrate-like or non-substrate-like.
Substrate-like inhibitors
Substrate-like inhibitors (Figure 4) are more common than the non-substrate-likes. They bind either covalently or non-covalently and have a basic structure where the P1-substituent occupies the S1-pocket and the P2-substituent occupies the S2-pocket. Usually they contain a proline mimetic that occupies the S1-pocket. Large substituents on the 2-cyanopyrrolidine ring are normally not tolerated since the S1-pocket is quite small.
Since DPP-4 is identical to the T-cell activation marker CD26 and DPP-4 inhibitors are known to inhibit T-cell proliferation, these compounds were initially thought to be potential immunomodulators. When their potential against type 2 diabetes was discovered, the cyanopyrrolidines became a highly popular research subject. A little later vildagliptin and saxagliptin, which are the most developed cyanopyrrolidine DPP-4 inhibitors to date, were discovered.
Cyanopyrrolidines
Cyanopyrrolidines have two key interactions to the DPP-4 complex:
1. Nitrile in the position of the scissile bond of the peptidic substrate that is important for high potency. The nitrile group forms reversible covalent bonds with the catalytically active serine hydroxyl (Ser630), i.e. cyanopyrrolidines are competitive inhibitors with slow dissociation kinetics.
2. Hydrogen bonding network between the protonated amino group and a negatively charged region of the protein surface, Glu205, Glu206 and Tyr662. All cyanopyrrolidines have a basic primary or secondary amine, which makes this network possible, but these compounds usually drop in potency if these amines are changed. Nonetheless, two patent applications reveal that the amino group can be changed, i.e. replaced by a hydrazine; it is claimed that these compounds act not only via DPP-4 inhibition but also prevent diabetic vascular complications by acting as radical scavengers.
Structure-activity relationship (SAR)
Important structure-activity relationship:
1. Strict steric constraint exists around the pyrrolidine ring of cyanopyrrolidine-based inhibitors, with only hydrogen, fluoro, acetylene, nitrile, or methano substitution permitted.
2. Presence of a nitrile moiety on the pyrrolidine ring is critical to achieving potent activity
Also, systematic SAR investigation has shown that the ring size and stereochemistry at the P2 position are quite constrained. A 5-membered ring with L-configuration has shown better results than a 4-membered or 6-membered ring with D-configuration. Only minor changes on the pyrrolidine ring can be tolerated, since the good fit of the ring with the hydrophobic S1 pocket is very important for high affinity. Some trials have been made, e.g. by replacing the pyrrolidine with a thiazoline. That led to improved potency but also loss of chemical stability. Efforts to improve chemical stability often led to loss of specificity because of interactions with DPP-8 and DPP-9. These interactions have been connected with increased toxicity and mortality in animals. There are strict limitations in the P1 position and hardly any changes are tolerated. On the other hand, a variety of changes can be made in the P2 position. In fact, substitution with quite big branched side chains, e.g. tert-butylglycine, normally increased activity and chemical stability, which could lead to longer-lasting inhibition of the DPP-4 enzyme. It has also been noted that biaryl-based side chains can give highly active inhibitors. It was originally believed that only lipophilic substitution would be tolerated; it is now recognized that polar, negatively charged or hydrophilic side chains can also lead to excellent inhibitory activity.
Chemical stability
In general, DPP-4 inhibitors are not very stable compounds. Therefore, many researchers have focused on enhancing the stability of cyanopyrrolidines. The most widespread technique to improve chemical stability is to incorporate steric bulk. The two most prominent cyanopyrrolidines, vildagliptin and saxagliptin, were created in this manner. K579 is a DPP-4 inhibitor discovered by researchers at Kyowa Hakko Kogyo. It had not only improved chemical stability but also a longer-lasting action. That long-lasting action was most likely due to slow dissociation of the enzyme-inhibitor complex and an active oxide metabolite that undergoes enterohepatic circulation. The discovery of the active oxide was in fact a big breakthrough, as it led to the development of vildagliptin and saxagliptin. One major problem in DPP-4 inhibitor stability is intramolecular cyclization. The precondition for the intramolecular cyclization is conversion of the trans-rotamer, which is the DPP-4-binding rotamer, into the cis-rotamer (Figure 5). Thus, preventing this conversion will increase stability. This prevention was successful when an amide group was incorporated into a ring, creating a compound that kept its DPP-4 inhibitory activity, did not undergo intramolecular cyclization, and was even more selective over different DPP enzymes. It has also been reported that a cyanoazetidine in the P1 position and a β-amino acid in the P2 position increased stability.
Vildagliptin
Vildagliptin (Galvus) (Figure 6) was first synthesized in May 1998 and was named after Edwin B. Villhauer. It was discovered when researchers at Novartis examined adamantyl derivatives that had proven to be very potent. The adamantyl group worked as steric bulk and slowed intramolecular cyclization while increasing chemical stability. Furthermore, the primary metabolites were highly active. To avoid an additional chiral center, hydroxylation of the adamantyl ring was carried out (Figure 6). The product, vildagliptin, was even more stable, undergoing intramolecular cyclization 30 times more slowly, and had high DPP-4 inhibitory activity and a longer-lasting pharmacodynamic effect.
Saxagliptin
Researchers at Bristol-Myers Squibb found that increased steric bulk of the N-terminal amino acid side-chain led to increased stability. To further increase stability, the trans-rotamer was stabilized with a cis-4,5-methano substitution of the pyrrolidine ring, resulting in an intramolecular van der Waals interaction and thus preventing intramolecular cyclization. Because of that increased stability, the researchers continued their investigation of cis-4,5-methano cyanopyrrolidines and came across a new adamantyl derivative, which showed extraordinary ex vivo DPP-4 inhibition in rat plasma. A high microsomal turnover rate was also noted, which indicated that the derivative was quickly converted to an active metabolite. After hydroxylation of the adamantyl group they had a product with better microsomal stability and improved chemical stability. That product was named saxagliptin (Onglyza) (Figure 6). In June 2008 AstraZeneca and Bristol-Myers Squibb submitted a new drug application for Onglyza in the United States and a marketing authorization application in Europe. Approval was granted in the United States by the FDA in July 2009 for Onglyza 5 mg and Onglyza 2.5 mg. This was later combined with extended-release metformin (taken once daily) and approved by the FDA in January 2011 under the trade name Kombiglyze XR.
Denagliptin
Denagliptin (Figure 6) is an advanced compound with a branched side-chain at the P2 position, and it also has a (4S)-fluoro substitution on the cyanopyrrolidine ring. It is a well-known DPP-4 inhibitor developed by GlaxoSmithKline (GSK). Biological evaluations have shown that the S-configuration of the amino acid portion is essential for the inhibitory activity, since the R-configuration showed only weak inhibition. These findings will be useful in the future design and synthesis of DPP-4 inhibitors. GSK suspended Phase III clinical trials in October 2008.
Azetidine based compounds
Information on this group of inhibitors is quite limited. Azetidine-based DPP-4 inhibitors can roughly be grouped into three main subcategories: 2-cyanoazetidines, 3-fluoroazetidines, and 2-ketoazetidines. The most potent ketoazetidines and cyanoazetidines have large hydrophobic amino acid groups bound to the azetidine nitrogen and are active below 100 nM.
Non-substrate-like inhibitors
Non-substrate-like inhibitors do not mimic the dipeptidic nature of DPP-4 substrates. They are non-covalent inhibitors and usually have an aromatic ring that occupies the S1-pocket instead of a proline mimetic.
In 1999, Merck started a drug development program on DPP-4 inhibitors. When they started their internal screening and medicinal chemistry program, two DPP-4 inhibitors were already in clinical trials, isoleucyl thiazolidide (P32/98) and NVP-DPP728 from Novartis. Merck in-licensed L-threo-isoleucyl thiazolidide and its allo stereoisomer. In animal studies, they found that both isomers had similar affinity for DPP-4, similar in vivo efficacy, and similar pharmacokinetic and metabolic profiles. Nevertheless, the allo isomer was 10-fold more toxic. The researchers found that this difference in toxicity was due to the allo isomer's greater inhibition of DPP-8 and DPP-9, not to selective DPP-4 inhibition. Further research also supported the view that DPP-4 inhibition would not cause compromised immune function. Once this link between affinity for DPP-8/DPP-9 and toxicity was discovered, Merck decided to identify an inhibitor with more than a thousandfold selectivity for DPP-4 over the other dipeptidases. For this purpose, they used positional scanning libraries. From scanning these libraries, the researchers discovered that both DPP-4 and DPP-8 showed a strong preference for breaking down peptides with a proline at the P1 position, but they found a great difference at the P2 site; i.e., they found that acidic functionality at the P2 position could provide a greater affinity for DPP-4 over DPP-8. Merck continued with further research and screening. They stopped working on compounds from the α-amino acid series related to isoleucyl thiazolidide due to lack of selectivity, but instead discovered a very selective β-amino acid piperazine series through SAR studies on two screening leads. When trying to stabilize the piperazine moiety, a group of bicyclic derivatives was made, which led to the identification of a potent and selective triazolopiperazine series. Most of these analogs showed excellent pharmacokinetic properties in preclinical species. Optimization of these compounds finally led to the discovery of sitagliptin.
Sitagliptin
Sitagliptin (Januvia) has a novel β-amino amide structure (Figure 7). Its excellent selectivity and in vivo efficacy prompted researchers to investigate DPP-4 inhibitors with an appended β-amino acid moiety. Further studies are being conducted to optimize these compounds for the treatment of diabetes.
In October 2006 sitagliptin became the first DPP-4 inhibitor that got FDA approval for the treatment of type 2 diabetes. Crystallographic structure of sitagliptin along with molecular modeling has been used to continue the search for structurally diverse inhibitors. A new potent, selective and orally bioavailable DPP-4 inhibitor was discovered by replacing the central cyclohexylamine in sitagliptin with 3-aminopiperidine. A 2-pyridyl substitution was the initial SAR breakthrough since that group plays a significant role in potency and selectivity for DPP-4.
X-ray crystallography has shown how sitagliptin binds to the DPP-4 complex:
1. The trifluorophenyl group occupies the S1-pocket
2. The trifluoromethyl group interacts with the side chains of residues Arg358 and Ser209.
3. The amino group forms a salt bridge with Tyr662 and the carboxylate groups of the two glutamate residues, Glu205 and Glu206.
4. The triazolopiperazine group collides with the phenyl group of residue Phe357
Constrained phenylethylamine compounds
Researchers at Abbott Laboratories identified three novel series of DPP-4 inhibitors using HTS. After further research and optimization, ABT-341 was discovered (Figure 8). It is a potent and selective DPP-4 inhibitor with a 2D-structure very similar to sitagliptin. However, the 3D-structure is quite different. ABT-341 also has a trifluorophenyl group that occupies the S1-pocket and a free amino group, but the two carbonyl groups are oriented 180° away from each other. ABT-341 is also believed to interact with Tyr547, probably because of steric hindrance between the cyclohexenyl ring and the tyrosine side-chain. Omarigliptin is one such compound, which is in Phase III development by Merck & Co.
Pyrrolidine compounds
The pyrrolidine type of DPP-4 inhibitors was first discovered after HTS. Research showed that the pyrrolidine rings were the part of the compounds that fit into the binding site. Further development has led to fluoro-substituted pyrrolidines that show superior activity, as well as pyrrolidines with fused cyclopropyl rings that are highly active.
Xanthine-based compounds
This is a different class of inhibitors that was identified with HTS. Aromatic heterocyclic-based DPP-4 inhibitors have gained increased attention recently. The first patents describing xanthines (Figure 10) as DPP-4 inhibitors came from Boehringer Ingelheim (BI) and Novo Nordisk.
When xanthine-based DPP-4 inhibitors are compared with sitagliptin and vildagliptin, they show a superior profile. Xanthines are believed to have higher potency, longer-lasting inhibition and longer-lasting improvement of glucose tolerance.
Alogliptin
Alogliptin (Figure 9) is a novel DPP-4 inhibitor developed by the Takeda Pharmaceutical Company. Researchers hypothesized that a quinazolinone-based structure (Figure 9) would have the necessary groups to interact with the active site on the DPP-4 complex. Quinazolinone-based compounds interacted effectively with the DPP-4 complex, but suffered from a low metabolic half-life. It was found that when the quinazolinone was replaced with a pyrimidinedione, the metabolic stability was increased, and the result was a potent, selective, bioavailable DPP-4 inhibitor named alogliptin. The quinazolinone-based compounds showed potent inhibition and excellent selectivity over the related protease DPP-8. However, the short metabolic half-life due to oxidation of the A-ring phenyl group was problematic. At first, the researchers tried to make a fluorinated derivative. The derivative showed improved metabolic stability and excellent inhibition of the DPP-4 enzyme. However, it was also found to inhibit CYP 450 3A4 and block the hERG channel. The solution to this problem was to replace the quinazolinone with other heterocycles; it turned out that the quinazolinone could be replaced without any loss of DPP-4 inhibition. Alogliptin was discovered when the quinazolinone was replaced with a pyrimidinedione. Alogliptin has shown excellent inhibition of DPP-4 and extraordinary selectivity, greater than 10,000-fold over the closely related serine proteases DPP-8 and DPP-9. Also, it does not inhibit the CYP 450 enzymes nor block the hERG channel at concentrations up to 30 μM. Based on these data, alogliptin was chosen for preclinical evaluation. In January 2007 alogliptin was undergoing phase III clinical trials, and in October 2008 it was being reviewed by the U.S. Food and Drug Administration.
Linagliptin
Researchers at BI discovered that using a but-2-ynyl group resulted in a potent candidate, called BI-1356 (Figure 10). In 2008 BI-1356 was undergoing phase III clinical trials; it was released as linagliptin in May 2011. X-ray crystallography has shown that the xanthine type binds the DPP-4 complex in a different way than other inhibitors:
1. The amino group also interacts with the Glu205, Glu206 and Tyr662
2. The but-2-ynyl group occupies the S1-pocket
3. The uracil group undergoes a π-stacking interaction with the Tyr547 residue
4. The quinazoline group undergoes a π-stacking interaction with the Trp629 residue
Pharmacology
The pharmacokinetic properties of sitagliptin and vildagliptin appear unaffected by age, sex or BMI. Clinical research has shown that sitagliptin and vildagliptin do not have the side effects that tend to accompany type 2 diabetes treatment, e.g. weight gain and hypoglycemia; however, other side effects have been observed, including upper respiratory tract infections, sore throat and diarrhea.
See also
Dipeptidyl peptidase-4
Dipeptidyl peptidase-4 inhibitors
Linagliptin
Vildagliptin
Sitagliptin
Saxagliptin
Berberine
Teneligliptin
Gosogliptin
References
Dipeptidyl peptidase-4 inhibitors
Dipeptidyl peptidase-4 inhibitors | Discovery and development of dipeptidyl peptidase-4 inhibitors | [
"Chemistry",
"Biology"
] | 6,260 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
20,209,037 | https://en.wikipedia.org/wiki/WebScarab | WebScarab is a web security application testing tool. It serves as a proxy that intercepts and allows people to alter web browser web requests (both HTTP and HTTPS) and web server replies. WebScarab also may record traffic for further review.
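To illustrate the basic idea of an intercepting proxy in general terms (WebScarab itself is a Java application; the following Python sketch is not WebScarab code and omits HTTPS interception, request editing and error handling), a minimal forward proxy can log and relay a browser's plain-HTTP requests:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class InterceptingProxy(BaseHTTPRequestHandler):
    """Minimal plain-HTTP forward proxy: log, forward, and relay each GET.
    HTTPS (CONNECT), request alteration, and error handling are omitted."""

    def do_GET(self):
        # When a browser is configured to use a proxy, self.path holds the
        # absolute URL of the request; this is where interception happens.
        print(f"[intercepted] GET {self.path}")
        upstream = urlopen(Request(self.path, headers={"User-Agent": "proxy-demo"}))
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8008), InterceptingProxy).serve_forever()
```

A real intercepting proxy such as WebScarab additionally pauses each request so the tester can modify headers and parameters before forwarding, and records the traffic for later review.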
Overview
WebScarab is an open source tool developed by The Open Web Application Security Project (OWASP), and was implemented in Java so it could run across multiple operating systems.
In 2013 official development of WebScarab slowed, and it appears that OWASP's Zed Attack Proxy ("ZAP") Project (another Java-based, open source proxy tool but with more features and active development) is WebScarab's official successor, although ZAP itself was forked from the Paros Proxy, not WebScarab.
References
External links
Official Webpage
Web development
Software testing tools
Computer network security
Software using the GNU General Public License
Free software programmed in Java (programming language) | WebScarab | [
"Engineering"
] | 200 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Web development",
"Software engineering",
"Computer network security"
] |
20,209,852 | https://en.wikipedia.org/wiki/Intracavernous%20injection | An intracavernous (or intracavernosal) injection is an injection into the base of the penis. This injection site is often used to administer medications to check for or treat erectile dysfunction in adult men (in, for example, a combined intracavernous injection and stimulation test). The more common medications administered in this manner include Caverject, Trimix (prostaglandin, papaverine, and phentolamine), Bimix (papaverine and phentolamine), and Quadmix (prostaglandin, papaverine, phentolamine, and either atropine or forskolin). These medications are all types of vasodilators and cause tumescence within 15 minutes.
Common side effects include priapism, bruising, fibrosis, Peyronie's disease, and pain.
Priapism is also often treated with intracavernous injections, usually with sympathomimetic vasoconstricting drugs like adrenaline or phenylephrine.
References
Male genital procedures
Routes of administration
Dosage forms | Intracavernous injection | [
"Chemistry"
] | 236 | [
"Pharmacology",
"Routes of administration"
] |
32,853,038 | https://en.wikipedia.org/wiki/Ampelmann%20system | The Ampelmann system is an offshore personnel transfer system developed by a company founded in 2008 as a spin-off of the Delft University of Technology. The motion compensation platform allows access from a moving vessel to offshore structures, even in high wave conditions, transferring offshore crew from various types of vessels to offshore oil & gas platforms, offshore wind turbines and other fixed and floating structures at sea.
Ampelmann technology
Accessing any offshore structure can be problematic due to the movement of a vessel relative to the structure. The Ampelmann eliminates this relative motion by taking instant measurements of the ship's motions and then compensating for the movement using a Stewart platform. This means that the top of the Ampelmann remains completely stationary relative to the structure. The offshore gangway can then be extended towards the structure, so personnel can walk to work offshore safely, even in high wave conditions. The system operates at a maximum wind speed of 20 m/s (38 knots).
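As a toy illustration of the compensation principle only (not Ampelmann's actual control law), the platform can simply be commanded with the inverse of the measured vessel motion so that the transfer deck stays level and at constant height; surge, sway and yaw are ignored in this assumed, simplified sketch.

```python
def compensation_setpoint(heave_m, roll_deg, pitch_deg):
    """Toy motion-compensation law: command the Stewart platform with the
    inverse of the measured vessel motion so the transfer deck stays level
    and at constant height in the world frame. Surge, sway and yaw are
    ignored; this is an illustration, not the real Ampelmann controller."""
    return {
        "heave_cmd_m":  -heave_m,      # extend/retract to cancel vertical motion
        "roll_cmd_deg": -roll_deg,     # counter-rotate to keep the deck level
        "pitch_cmd_deg": -pitch_deg,
    }

# Example: vessel heaves +1.5 m while rolling 4 deg and pitching -2 deg.
print(compensation_setpoint(1.5, 4.0, -2.0))
```

In the real system the measured pose is converted into six actuator lengths for the Stewart platform at a high update rate, which is what keeps the gangway stationary relative to the structure.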
Besides transferring people, the system can also be used for cargo transfers of up to 1,000 kg.
Clients
The customers that use this system mainly operate in the offshore oil & gas industry and the offshore wind industry. They use the system to enable their employees to perform maintenance on offshore wind turbines or to work on offshore oil rigs. Both market segments are growing at a rapid pace; the offshore wind sector, for example, grew 54% in 2009. This growth has mainly been driven by the shift towards more sustainable forms of energy generation by governments all over the world.
Ampelmann's customer relationships are exclusively business-to-business (B2B). Platforms are usually owned by private companies, and wind farms are likewise run by private companies. There is no independent intermediary between Ampelmann and the user of the system.
External links
PhD thesis
Google Patents
Offshore Access
References
Offshore engineering | Ampelmann system | [
"Engineering"
] | 376 | [
"Construction",
"Offshore engineering"
] |
32,854,451 | https://en.wikipedia.org/wiki/Ti-6Al-4V | Ti-6Al-4V (UNS designation R56400), also sometimes called TC4, Ti64, or ASTM Grade 5, is an alpha-beta titanium alloy with a high specific strength and excellent corrosion resistance. It is one of the most commonly used titanium alloys and is applied in a wide range of applications where low density and excellent corrosion resistance are necessary such as e.g. aerospace industry and biomechanical applications (implants and prostheses).
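As a rough illustration of what "high specific strength" means, the sketch below divides strength by density using typical handbook values assumed for the example (about 4.43 g/cm³ density and roughly 880 MPa yield strength for annealed material), with a generic structural steel for comparison.

```python
# Specific strength = strength / density; illustrative handbook values only.
density_kg_m3 = 4430.0        # ~4.43 g/cm3 for Ti-6Al-4V
yield_strength_pa = 880e6     # ~880 MPa, typical annealed value

specific_strength = yield_strength_pa / density_kg_m3   # (N*m)/kg
print(f"Ti-6Al-4V specific strength ≈ {specific_strength/1e3:.0f} kN·m/kg")

# For comparison, a generic structural steel (~350 MPa yield, 7850 kg/m3):
print(f"Generic mild steel          ≈ {350e6/7850/1e3:.0f} kN·m/kg")
```

On these assumed figures the alloy carries roughly four times as much load per unit mass as a common structural steel, which is why it is attractive for aerospace and biomedical structures.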
Studies of titanium alloys used in armors began in the 1950s at the Watertown Arsenal, which later became a part of the Army Research Laboratory.
A 1948 graduate of MIT, Stanley Abkowitz (1927-2017) was a pioneer in the titanium industry and is credited for the invention of the Ti-6Al-4V during his time at the US Army’s Watertown Arsenal Laboratory in the early 1950s.
Titanium/Aluminum/Vanadium alloy was hailed as a major breakthrough with strategic military significance. It is the most commercially successful titanium alloy and is still in use today, having shaped numerous industrial and commercial applications.
Increased use of titanium alloys as biomaterials is occurring due to their lower modulus, superior biocompatibility and enhanced corrosion resistance when compared to more conventional stainless steels and cobalt-based alloys. These attractive properties were a driving force for the early introduction of α (cpTi) and α+β (Ti-6Al-4V) alloys as well as for the more recent development of new Ti-alloy compositions and orthopaedic metastable β titanium alloys. The latter possess enhanced biocompatibility, reduced elastic modulus, and superior strain-controlled and notch fatigue resistance. However, the poor shear strength and wear resistance of titanium alloys have nevertheless limited their biomedical use. Although the wear resistance of β-Ti alloys has shown some improvement when compared to α+β alloys, the ultimate utility of orthopaedic titanium alloys as wear components will require a more complete fundamental understanding of the wear mechanisms involved.
Chemistry
(in wt. %)
Physical and mechanical properties
Ti-6Al-4V titanium alloy commonly exists in an alpha phase, with an hcp crystal structure (space group P63/mmc), and a beta phase, with a bcc crystal structure (space group Im-3m). Mechanical properties are a function of the heat-treatment condition of the alloy and can vary with processing; typical property ranges for well-processed Ti-6Al-4V are shown below. Aluminum stabilizes the alpha phase, while vanadium stabilizes the beta phase.
Ti-6Al-4V has a very low thermal conductivity at room temperature of 6.7 to 7.5 W/m·K, which contributes to its relatively poor machinability.
The alloy is vulnerable to cold dwell fatigue.
Heat treatment of Ti-6Al-4V
Ti-6Al-4V is heat treated to vary the amounts and microstructure of the α and β phases in the alloy. The microstructure will vary significantly depending on the exact heat treatment and method of processing. Three common heat treatment processes are mill annealing, duplex annealing, and solution treating and aging.
Applications
Aerospace structures. The Boeing 787 is 15% titanium by weight, and the Airbus A350 is 14%.
Biomedical implants and prostheses
High-performance race cars
High-end bicycles
Additive manufacturing
Apple iPhone 15 Pro (Max) case, iPhone 16 Pro and Pro Max cases and Apple Watch Series 10 titanium and Ultra 2 cases
Marine applications: Ti-6Al-4V Grade 5 is extensively used in marine applications due to its exceptional corrosion resistance in seawater environments. Ti-6Al-4V is applied in components exposed to marine atmospheres and underwater conditions, such as shipbuilding, offshore oil and gas platforms, and subsea equipment. Its resistance to corrosion helps in reducing maintenance costs and extending the lifespan of marine equipment.
Specifications
UNS: R56400
AMS Standard: 4928
ASTM Standard: F1472
ASTM Standard: B265 Grade 5
References
Titanium alloys | Ti-6Al-4V | [
"Chemistry"
] | 846 | [
"Titanium alloys",
"Alloys"
] |
32,855,569 | https://en.wikipedia.org/wiki/Thermochemical%20nanolithography | Thermochemical nanolithography (TCNL) or thermochemical scanning probe lithography (tc-SPL) is a scanning probe microscopy-based nanolithography technique which triggers thermally activated chemical reactions to change the chemical functionality or the phase of surfaces. Chemical changes can be written very quickly through rapid probe scanning, since no mass is transferred from the tip to the surface, and writing speed is limited only by the heat transfer rate. TCNL was invented in 2007 by a group at the Georgia Institute of Technology. Riedo and collaborators demonstrated that TCNL can produce local chemical changes with feature sizes down to 12 nm at scan speeds up to 1 mm/s.
TCNL was used in 2013 to create a nano-scale replica of the Mona Lisa "painted" with different probe tip temperatures. Called the Mini Lisa, the portrait was about 1/25,000th the size of the original.
Technique
The AFM thermal cantilevers are generally made from silicon wafers using traditional bulk and surface micro-machining processes. When an electric current is applied through the highly doped silicon wings, resistive heating occurs in the lightly doped region around the probe tip, where the largest fraction of the heat is dissipated. The tip is able to change its temperature very quickly due to its small volume; an average tip in contact with polycarbonate has a time constant of 0.35 ms. The tips can be cycled between ambient temperature and 1100 °C at up to 10 MHz, while the distance of the tip from the surface and the tip temperature can be controlled independently.
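Because of this small thermal mass, the tip's temperature step response is well approximated by a first-order exponential. The sketch below uses the 0.35 ms time constant quoted above; the ambient and set-point temperatures are arbitrary example values.

```python
import numpy as np

tau_s = 0.35e-3      # thermal time constant of a tip on polycarbonate (from text)
T_ambient = 25.0     # °C, example value
T_set = 400.0        # °C, example heater set-point

def tip_temperature(t_s):
    """First-order step response: T(t) = T_amb + (T_set - T_amb) * (1 - exp(-t/tau))."""
    return T_ambient + (T_set - T_ambient) * (1.0 - np.exp(-t_s / tau_s))

for t in (0.1e-3, 0.35e-3, 1.0e-3, 2.0e-3):
    print(f"t = {t*1e3:4.2f} ms -> T ≈ {tip_temperature(t):6.1f} °C")
```

After roughly three to five time constants (about 1–2 ms here) the tip has essentially reached its set-point, which is what allows fast thermochemical writing during rapid scanning.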
Applications
Thermally activated reactions have been triggered in proteins, organic semiconductors, electroluminescent conjugated polymers, and nanoribbon resistors. Deprotection of functional groups (sometimes involving a temperature gradient) and the reduction of graphene oxide have been demonstrated. The wettability of a polymer surface at the nanoscale has been modified, and nanostructures of poly(p-phenylene vinylene) (an electroluminescent conjugated polymer) have been created. Nanoscale templates on polymer films for the assembly of nano-objects such as proteins and DNA have also been created, and crystallization of ferroelectric ceramics with storage densities up to 213 Gb/in2 has been produced.
The use of a material that can undergo multiple chemical reactions at significantly different temperatures could lead to a multi-state system, wherein different functionalities can be addressed at different temperatures. Synthetic polymers, such as PMCC, have been used as functional layers on substrate, which allow for high-resolution patterning.
Comparison with other lithographic techniques
Thermo-mechanical scanning probe lithography relies on the application of heat and force in order to create indentations for patterning purposes (see also: Millipede memory). Thermal scanning probe lithography (t-SPL) specializes in removing material from a substrate without the intent of chemically altering the created topography. Local oxidation nanolithography relies on oxidation reactions in a water meniscus around the probe tip.
See also
Atomic force microscopy
Dip-pen nanolithography
Local oxidation nanolithography
Nanolithography
Nanotechnology
Scanning probe lithography
Scanning probe microscopy
References
External links
picoForce Laboratory at the Georgia Institute of Technology
http://www.picoforcelab.org/thermochemical-nanolithography-tcnl
Nanotechnology | Thermochemical nanolithography | [
"Materials_science",
"Engineering"
] | 718 | [
"Nanotechnology",
"Materials science"
] |
32,861,742 | https://en.wikipedia.org/wiki/Health%20and%20usage%20monitoring%20systems | Health and usage monitoring systems (HUMS) is a generic term given to activities that utilize data collection and analysis techniques to help ensure availability, reliability and safety of vehicles. Activities similar to, or sometimes used interchangeably with, HUMS include condition-based maintenance (CBM) and operational data recording (ODR). This term HUMS is often used in reference to airborne craft and in particular rotor-craft – the term is cited as being introduced by the offshore oil industry after a commercial Chinook crashed in the North Sea, killing all but one passenger and one crew member in 1986.
HUMS technology and regulation continues to be developed.
HUMS are now used not only for safety but for a number of other reasons including
Maintenance: reduced mission aborts, fewer instances of aircraft on ground (AOG), simplified logistics for fleet deployment
Cost: “maintain as you fly” maintenance flights are not required. Performing repairs when the damage is minor increases the aircraft mean time before failure (MTBF) and decreases the mean time to repair (MTTR); a simple availability calculation based on these two quantities is sketched after this list
Operational: improved flight safety, mission reliability and effectiveness
Performance: improved aircraft performance and reduced fuel consumption
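As a hedged illustration of how MTBF and MTTR feed into fleet readiness, the standard steady-state availability formula can be applied; the figures below are invented for the example and do not describe any real aircraft.

```python
def availability(mtbf_h, mttr_h):
    """Steady-state (inherent) availability: fraction of time the aircraft
    is ready for tasking, given mean time between/before failures and mean
    time to repair. Illustrative figures only."""
    return mtbf_h / (mtbf_h + mttr_h)

baseline = availability(mtbf_h=300.0, mttr_h=24.0)   # without HUMS (example)
with_hums = availability(mtbf_h=450.0, mttr_h=12.0)  # earlier, smaller repairs (example)
print(f"baseline availability ≈ {baseline:.1%}, with HUMS ≈ {with_hums:.1%}")
```

Even modest improvements in MTBF and MTTR compound into noticeably fewer aircraft-on-ground hours across a fleet, which is the economic argument behind condition-based maintenance.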
Recent advances in the technology include predictive algorithms providing remaining useful life (RUL) estimates for components, and automated wireless data transfer from the aircraft via Wi-Fi or cellular networks.
References
External links
United Electronic Industries
BAE Systems
GE Aviation
GPMS Foresight
Maintenance | Health and usage monitoring systems | [
"Engineering"
] | 279 | [
"Maintenance",
"Mechanical engineering"
] |
25,663,206 | https://en.wikipedia.org/wiki/Psychoacoustics | Psychoacoustics is the branch of psychophysics involving the scientific study of the perception of sound by the human auditory system. It is the branch of science studying the psychological responses associated with sound including noise, speech, and music. Psychoacoustics is an interdisciplinary field including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.
Background
Hearing is not a purely mechanical phenomenon of wave propagation, but is also a sensory and perceptual event. When a person hears something, that something arrives at the ear as a mechanical sound wave traveling through the air, but within the ear it is transformed into neural action potentials. These nerve pulses then travel to the brain where they are perceived. Hence, in many problems in acoustics, such as for audio processing, it is advantageous to take into account not just the mechanics of the environment, but also the fact that both the ear and the brain are involved in a person's listening experience.
The inner ear, for example, does significant signal processing in converting sound waveforms into neural stimuli; this processing renders certain differences between waveforms imperceptible. Data compression techniques, such as MP3, make use of this fact. In addition, the ear has a nonlinear response to sounds of different intensity levels; this nonlinear response is called loudness. Telephone networks and audio noise reduction systems make use of this fact by nonlinearly compressing data samples before transmission and then expanding them for playback. Another effect of the ear's nonlinear response is that sounds that are close in frequency produce phantom beat notes, or intermodulation distortion products.
Limits of perception
The human ear can nominally hear sounds in the range of about 20 Hz to 20 kHz. The upper limit tends to decrease with age; most adults are unable to hear above about 16 kHz. Under ideal laboratory conditions, the lowest frequency that has been identified as a musical tone is 12 Hz. Tones between 4 and 16 Hz can be perceived via the body's sense of touch.
Human perception of audio signal time separation has been measured to be less than 10 microseconds. This does not mean that correspondingly high frequencies are audible, but that time discrimination is not directly coupled with frequency range.
Frequency resolution of the ear is about 3.6 Hz within the octave of 1,000–2,000 Hz. That is, changes in pitch larger than 3.6 Hz can be perceived in a clinical setting. However, even smaller pitch differences can be perceived through other means. For example, the interference of two pitches can often be heard as a repetitive variation in the volume of the tone. This amplitude modulation occurs with a frequency equal to the difference in frequencies of the two tones and is known as beating.
The semitone scale used in Western musical notation is not a linear frequency scale but logarithmic. Other scales have been derived directly from experiments on human hearing perception, such as the mel scale and Bark scale (these are used in studying perception, but not usually in musical composition), and these are approximately logarithmic in frequency at the high-frequency end, but nearly linear at the low-frequency end.
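One widely used analytic approximation of the mel scale (the O'Shaughnessy form is assumed here; other variants exist) makes the near-linear low-frequency and logarithmic high-frequency behaviour concrete:

```python
import numpy as np

def hz_to_mel(f_hz):
    """Common mel-scale approximation: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * np.log10(1.0 + np.asarray(f_hz, dtype=float) / 700.0)

def mel_to_hz(m):
    """Inverse of the approximation above."""
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)

for f in (100, 500, 1000, 4000, 10000):
    print(f"{f:6d} Hz -> {hz_to_mel(f):7.1f} mel")
```

Below roughly 1 kHz equal steps in hertz map to nearly equal steps in mel, while above that the same perceptual step requires an ever larger step in hertz, matching the description above.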
The intensity range of audible sounds is enormous. Human eardrums are sensitive to variations in sound pressure and can detect pressure changes ranging from as small as a few micropascals (μPa) up to pressures many orders of magnitude greater. For this reason, sound pressure level is also measured logarithmically, with all pressures referenced to 20 μPa. The lower limit of audibility is therefore defined as 0 dB SPL, but the upper limit is not as clearly defined. The upper limit is more a question of the level at which the ear will be physically harmed, or of the potential to cause noise-induced hearing loss.
A more rigorous exploration of the lower limits of audibility determines that the minimum threshold at which a sound can be heard is frequency dependent. By measuring this minimum intensity for testing tones of various frequencies, a frequency-dependent absolute threshold of hearing (ATH) curve may be derived. Typically, the ear shows a peak of sensitivity (i.e., its lowest ATH) between about 1 and 5 kHz, though the threshold changes with age, with older ears showing decreased sensitivity above 2 kHz.
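A frequently quoted analytic fit to the ATH (Terhardt's approximation, used in many perceptual audio coders; it is assumed here rather than taken from the measurements discussed above) reproduces the characteristic dip of sensitivity at a few kilohertz:

```python
import numpy as np

def ath_db_spl(f_hz):
    """Terhardt's analytic approximation of the absolute threshold of hearing,
    with frequency in Hz and the result in dB SPL."""
    f = np.asarray(f_hz, dtype=float) / 1000.0   # convert to kHz
    return (3.64 * f**-0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3)**2)
            + 1e-3 * f**4)

for f in (100, 500, 1000, 3000, 10000, 15000):
    print(f"{f:6d} Hz: threshold ≈ {ath_db_spl(f):6.1f} dB SPL")
```

The curve is high at low frequencies, dips to its minimum near 3–4 kHz, and rises steeply again towards the upper end of the audible range.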
The ATH is the lowest of the equal-loudness contours. Equal-loudness contours indicate the sound pressure level (dB SPL), over the range of audible frequencies, that are perceived as being of equal loudness. Equal-loudness contours were first measured by Fletcher and Munson at Bell Labs in 1933 using pure tones reproduced via headphones, and the data they collected are called Fletcher–Munson curves. Because subjective loudness was difficult to measure, the Fletcher–Munson curves were averaged over many subjects.
Robinson and Dadson refined the process in 1956 to obtain a new set of equal-loudness curves for a frontal sound source measured in an anechoic chamber. The Robinson-Dadson curves were standardized as ISO 226 in 1986. In 2003, ISO 226 was revised using equal-loudness data collected from 12 international studies.
Sound localization
Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in loudness, tone and timing between the two ears to allow us to localize sound sources. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds). Humans, like most four-legged animals, are adept at detecting direction in the horizontal plane, but less so in the vertical direction, because the ears are placed symmetrically. Some species of owls have their ears placed asymmetrically and can detect sound in all three planes, an adaptation for hunting small mammals in the dark.
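One of the timing cues, the interaural time difference, can be estimated with the classic spherical-head (Woodworth) approximation; the head radius and speed of sound below are typical assumed values, not measurements from this article:

```python
import numpy as np

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Woodworth spherical-head approximation of the interaural time difference
    for a distant source at the given azimuth (0° = straight ahead, up to 90°)."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c_m_s) * (theta + np.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:3d}°: ITD ≈ {itd_seconds(az)*1e6:5.0f} µs")
```

With these assumed values the maximum delay, for a source directly to one side, comes out at roughly 650 µs, which is the order of magnitude the brain exploits for horizontal localization.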
Masking effects
Suppose a listener can hear a given acoustical signal under silent conditions. When a signal is playing while another sound is being played (a masker), the signal has to be stronger for the listener to hear it. The masker does not need to have the frequency components of the original signal for masking to happen. A masked signal can be heard even though it is weaker than the masker. Masking happens when a signal and a masker are played together (for instance, when one person whispers while another person shouts), and the listener does not hear the weaker signal because it has been masked by the louder masker. Masking can also happen to a signal before a masker starts or after a masker stops. For example, a single sudden loud clap can render sounds that immediately precede or follow it inaudible. The effect of backward masking is weaker than that of forward masking. The masking effect has been widely studied in psychoacoustical research. One can change the level of the masker and measure the threshold, then create a diagram of a psychophysical tuning curve that will reveal similar features. Masking effects are also used in lossy audio encoding, such as MP3.
Missing fundamental
When presented with a harmonic series of frequencies in the relationship 2f, 3f, 4f, 5f, etc. (where f is a specific frequency), humans tend to perceive that the pitch is f. An audible example can be found on YouTube.
Software
The psychoacoustic model provides for high quality lossy signal compression by describing which parts of a given digital audio signal can be removed (or aggressively compressed) safely—that is, without significant losses in the (consciously) perceived quality of the sound.
It can explain how a sharp clap of the hands might seem painfully loud in a quiet library but is hardly noticeable after a car backfires on a busy, urban street. This provides great benefit to the overall compression ratio, and psychoacoustic analysis routinely leads to compressed music files that are one-tenth to one-twelfth the size of high-quality masters, but with discernibly less proportional quality loss. Such compression is a feature of nearly all modern lossy audio compression formats. Some of these formats include Dolby Digital (AC-3), MP3, Opus, Ogg Vorbis, AAC, WMA, MPEG-1 Layer II (used for digital audio broadcasting in several countries), and ATRAC, the compression used in MiniDisc and some Walkman models.
Psychoacoustics is based heavily on human anatomy, especially the ear's limitations in perceiving sound as outlined previously. To summarize, these limitations are:
High-frequency limit
Absolute threshold of hearing
Temporal masking (forward masking, backward masking)
Simultaneous masking (also known as spectral masking)
A compression algorithm can assign a lower priority to sounds outside the range of human hearing. By carefully shifting bits away from the unimportant components and toward the important ones, the algorithm ensures that the sounds a listener is most likely to perceive are most accurately represented.
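As a deliberately simplified toy sketch of "shifting bits away from the unimportant components" (not the actual MP3 allocation loop), extra bits can be handed out greedily to the sub-band whose quantization noise most exceeds its masking threshold; all numbers are invented for illustration:

```python
# Toy greedy bit allocation: each extra bit buys roughly 6 dB of quantization
# noise reduction, so give it to the sub-band whose noise currently exceeds
# its masking threshold by the largest margin. Values are invented examples.
signal_db = [60, 55, 70, 40, 30]   # per-band signal energy
mask_db   = [35, 50, 45, 38, 45]   # per-band masking threshold

bits = [0] * len(signal_db)
budget = 20
for _ in range(budget):
    # Noise-to-mask ratio after `b` bits: (signal - 6*b) - mask, in dB.
    nmr = [s - 6 * b - m for s, m, b in zip(signal_db, mask_db, bits)]
    worst = max(range(len(nmr)), key=lambda i: nmr[i])
    if nmr[worst] <= 0:            # all quantization noise already masked
        break
    bits[worst] += 1

print("bits per band:", bits)
```

Bands whose signal already lies below the masking threshold receive no bits at all, which is exactly the behaviour that lets perceptual coders discard inaudible detail without obvious quality loss.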
Music
Psychoacoustics includes topics and studies that are relevant to music psychology and music therapy. Theorists such as Benjamin Boretz consider some of the results of psychoacoustics to be meaningful only in a musical context.
Irv Teibel's Environments series LPs (1969–79) are an early example of commercially available sounds released expressly for enhancing psychological abilities.
Applied psychoacoustics
Psychoacoustics has long enjoyed a symbiotic relationship with computer science. Internet pioneers J. C. R. Licklider and Bob Taylor both completed graduate-level work in psychoacoustics, while BBN Technologies originally specialized in consulting on acoustics issues before it began building the first packet-switched network.
Licklider wrote a paper entitled "A duplex theory of pitch perception".
Psychoacoustics is applied within many fields of software development, where developers map proven and experimental mathematical patterns in digital signal processing. Many audio compression codecs such as MP3 and Opus use a psychoacoustic model to increase compression ratios. The success of conventional audio systems for the reproduction of music in theatres and homes can be attributed to psychoacoustics, and psychoacoustic considerations gave rise to novel audio systems, such as psychoacoustic sound field synthesis. Furthermore, scientists have experimented with limited success in creating new acoustic weapons, which emit frequencies that may impair, harm, or kill. Psychoacoustics is also leveraged in sonification to make multiple independent data dimensions audible and easily interpretable. This enables auditory guidance without the need for spatial audio, and is used in sonification-based computer games and other applications such as drone flying and image-guided surgery. It is also applied today within music, where musicians and artists continue to create new auditory experiences by masking unwanted frequencies of instruments, causing other frequencies to be enhanced. Yet another application is in the design of small or lower-quality loudspeakers, which can use the phenomenon of missing fundamentals to give the effect of bass notes at lower frequencies than the loudspeakers are physically able to produce (see references).
Automobile manufacturers engineer their engines and even doors to have a certain sound.
See also
Related fields
Cognitive neuroscience of music
Music psychology
Psychoacoustic topics
A-weighting, a commonly used perceptual loudness transfer function
ABX test
Audiology
Auditory illusion
Auditory scene analysis incl. 3D-sound perception, localization
Binaural beats
Blind signal separation
Combination tone (also Tartini tone)
Deutsch's Scale illusion
Equivalent rectangular bandwidth (ERB)
Franssen effect
Glissando illusion
Hypersonic effect
Language processing
Levitin effect
Misophonia
Musical tuning
Noise health effects
Octave illusion
Pitch (music)
Precedence effect
Psycholinguistics
Rate-distortion theory
Sound localization
Sound of fingernails scraping chalkboard
Sound masking
Speech perception
Speech recognition
Timbre
Tritone paradox
References
Notes
Sources
E. Larsen and R.M. Aarts (2004), Audio Bandwidth extension. Application of Psychoacoustics, Signal Processing and Loudspeaker Design., J. Wiley.
External links
The Musical Ear—Perception of Sound
—Simulation of Free-field Hearing by Head Phones
GPSYCHO—An Open-source Psycho-Acoustic and Noise-Shaping Model for ISO-Based MP3 Encoders.
Definition of: perceptual audio coding
Java appletdemonstrating masking
Temporal Masking
HyperPhysics Concepts—Sound and Hearing
The MP3 as Standard Object
Cognitive musicology
Music psychology
Acoustics | Psychoacoustics | [
"Physics"
] | 2,508 | [
"Classical mechanics",
"Acoustics"
] |
25,668,599 | https://en.wikipedia.org/wiki/Pentazenium | In chemistry, the pentazenium cation (also known as pentanitrogen) is a positively charged polyatomic ion with the chemical formula N5+. Together with solid nitrogen polymers and the azide anion, it is one of only three poly-nitrogen species obtained in bulk quantities.
History
Within the High Energy Density Matter research program, run by the U.S. Air Force since 1986, systematic attempts to approach polynitrogen compounds began in 1998, when the Air Force Research Laboratory at Edwards AFB became interested in researching alternatives to the highly toxic hydrazine-based rocket fuel and simultaneously funded several such proposals. Karl O. Christe, then a senior investigator at AFRL, chose to attempt building the linear N5+ cation from N2F+ and HN3, based on the proposed bond structure.
The reaction succeeded, and the N5+ salt of AsF6− was created in sufficient quantities to be fully characterized by NMR, IR and Raman spectroscopy in 1999. The salt was highly explosive, but when AsF5 was replaced by SbF5, a stronger Lewis acid, a much more stable hexafluoroantimonate salt was produced, shock-resistant and thermally stable up to 60–70 °C. This made bulk quantities, easy handling, and X-ray crystal structure analysis possible.
Actually N5+ had been predicted by ab initio calculations as a member of the dicyanamide isoelectronic series by Pyykkö and Runeberg in 1991 and this was quoted as ref. [10] of Christe [2] in 1999.
Preparation
Reaction of an N2F+ salt with HN3 in dry HF at −78 °C is the only known method so far.
Chemistry
N5+ is capable of oxidizing water and NO, among other species; its electron affinity is 10.44 eV (1018.4 kJ/mol). Because it oxidizes water, it must be prepared and handled in a dry environment.
Due to the stability of the fluoroantimonate, it is used as the precursor for all other known salts, typically via metathesis reactions in non-aqueous solvents such as HF, chosen so that the displaced hexafluoroantimonates are insoluble.
The most stable salts of N5+ decompose when heated to 50–60 °C, while the most unstable salts that were obtained and studied were extremely shock- and temperature-sensitive, exploding in solutions as dilute as 0.5 mmol. A number of salts, such as the fluoride, azide, nitrate, or perchlorate, cannot be formed.
Structure and bonding
In valence bond theory, pentazenium can be described by six resonance structures, three of which dominate; the remaining three contribute less to the overall structure because they have less favorable formal charge distributions.
According to both ab initio calculations and the experimental X-ray structure, the cation is planar, symmetric, and approximately V-shaped, with bond angles 111° at the central atom (angle N2–N3–N4) and 168° at the second and fourth atoms (angles N1–N2–N3 and N3–N4–N5). The bond lengths for N1–N2 and N4–N5 are 1.10 Å and the bond lengths N2–N3 and N3–N4 are 1.30 Å.
See also
Pentazole
Azide
Pentazenium tetraazidoborate
References
Cations
Nitrogen
Explosive chemicals | Pentazenium | [
"Physics",
"Chemistry"
] | 702 | [
"Cations",
"Ions",
"Explosive chemicals",
"Matter"
] |