id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
45,555,423 | https://en.wikipedia.org/wiki/Active%20thermography | Active thermography is an advanced nondestructive testing procedure which uses a thermographic measurement of a tested material's thermal response after external excitation. This principle can also be used for non-contact infrared non-destructive testing (IRNDT) of materials.
The IRNDT method is based on excitation of a tested material by an external source, which delivers energy to the material. Halogen lamps, flash lamps, ultrasonic horns or other sources can be used as the excitation source for IRNDT. The excitation causes a thermal response in the tested material, which is measured by an infrared camera. Information about surface and sub-surface defects or material inhomogeneities can be obtained by using a suitable combination of excitation source, excitation procedure, infrared camera and evaluation method.
Modern thermographic systems with high-speed and high-sensitivity IR cameras extend the possibilities of the inspection method. Modularity of the systems allows their usage for research and development applications as well as in modern industrial production lines.
Thermographic nondestructive testing of components can be carried out on a wide range of materials. Thermographic inspection of a material can be regarded as a method of infrared defectoscopy that is capable of revealing material imperfections such as cracks, defects, voids, cavities and other inhomogeneities. The testing can be performed on individual components in a laboratory or directly on equipment in service.
Theory
Active thermography uses an external source to excite the measured object, that is, to introduce energy into it. The excitation sources can be classified by their physical principle:
absorption of optical radiation or microwaves,
electromagnetic induction,
elastic waves transformation (e.g. ultrasound),
convection (e.g. hot air),
plastic deformation transformation (thermoplastic effect during mechanical loading).
Various excitation sources can be used for active thermography and nondestructive testing, for example laser heating, flash lamps, halogen lamps, electrical heating, ultrasonic horns, eddy currents and microwaves. The measured object can be heated by an external source directly, e.g. by halogen lamps or hot air. Material inhomogeneities or defects then cause a distortion of the temperature field, which is detected as temperature differences on the material surface. Another possibility is to use thermophysical processes in the material, in which mechanical or electrical energy is transformed into thermal energy at defects and inhomogeneities. This creates local temperature sources, which cause temperature differences detected on the object surface by infrared techniques, as in the case of ultrasonic excitation.
Methods
Many evaluation methods have been developed for active thermography in nondestructive testing. The choice of evaluation method depends on the application, the excitation source and the excitation type (pulse, periodic, continuous). In the simplest case, the response is evident directly from a thermogram. In most cases, however, advanced analysis techniques are necessary. The most common methods include Lock-In, Pulse and Transient (step thermography) evaluation techniques, with continuous excitation used in some cases:
Lock-In thermography (periodic excitation method). A modulated periodic source is used for the excitation. The phase and amplitude shift of the measured signal are evaluated, and the analysis can be done by various techniques. Halogen lamps, LED lamps, ultrasound, a laser or an electric current are suitable excitation sources. The method has the advantage that it can be used on large surfaces and that it puts a low thermal load on the part being inspected. The disadvantages are a longer measurement time and the dependence of detection capabilities on the geometrical orientation of defects (except for indirect excitation such as ultrasound). The Lock-In method is suitable for testing components with a low thermal diffusivity, and it has many modifications for various specific applications (such as Lock-In Ref, Lock-In Online, etc.); a minimal numerical sketch of the lock-in evaluation follows this list.
Pulse thermography (pulse method). A very short pulse – usually a few milliseconds – is used to excite the object, and the subsequent cooling process is analyzed. A flash lamp is typically used as the excitation source. The advantages of this method are the speed of the analysis and the possibility to estimate the depth of defects. The disadvantages are a limited analysis depth, a limited inspection area (with regard to the usable power of excitation sources) and a dependence of detection capabilities on the geometrical orientation of defects.
Transient thermography (step thermography, thermal wave method). In principle, the excitation and evaluation are similar to pulse thermography, but the pulse is much longer. Less powerful excitation sources are required than for pulse thermography. It is therefore possible to analyze larger areas, and the measurement time is shorter than for Lock-In thermography. As in pulse thermography, the sensitivity of the method is limited by the geometrical orientation of defects. Halogen lamps are a suitable excitation source for this type of evaluation.
Continuous excitation. The simplest method, usable only in special applications.
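The lock-in evaluation mentioned above can be sketched numerically. The following minimal example is an illustration only (not taken from the source); the frame rate, modulation frequency and array shapes are assumed values. It demodulates a stack of thermograms at the known modulation frequency to obtain per-pixel amplitude and phase images, in which a defect shows up as phase contrast:

```python
import numpy as np

def lockin_demodulate(frames, frame_rate, f_mod):
    """Per-pixel lock-in demodulation of a thermogram stack.

    frames: ndarray of shape (n_frames, height, width), surface temperatures
    frame_rate: acquisition rate in Hz
    f_mod: excitation modulation frequency in Hz
    Returns (amplitude, phase) images.
    """
    n = frames.shape[0]
    t = np.arange(n) / frame_rate
    # Reference signal at the modulation frequency (single-bin DFT)
    ref = np.exp(-2j * np.pi * f_mod * t)
    # Correlate every pixel time-trace with the reference
    z = np.tensordot(ref, frames, axes=(0, 0)) * 2.0 / n
    return np.abs(z), np.angle(z)

# Synthetic demo: 10 s of data at 50 Hz, 0.5 Hz modulation,
# with one "defect" pixel responding with a phase lag
fs, f_mod, n = 50.0, 0.5, 500
t = np.arange(n) / fs
frames = np.zeros((n, 4, 4))
frames[:, :, :] = np.cos(2 * np.pi * f_mod * t)[:, None, None]
frames[:, 2, 2] = 0.8 * np.cos(2 * np.pi * f_mod * t - 0.6)  # delayed response
amp, phase = lockin_demodulate(frames, fs, f_mod)
print(phase[2, 2], phase[0, 0])  # the defect pixel shows a clear phase shift
```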
A high-speed cooled infrared camera with a high sensitivity is commonly used for IRNDT applications. However, an uncooled bolometric infrared camera can be used for specific applications. It can significantly reduce acquisition costs of the measurement system.
IR nondestructive testing systems are usually modular: various excitation sources can be combined with various infrared cameras and evaluation methods depending on the application, tested material, measuring-time demands, size of the tested area, etc. The modularity allows universal usage of the system in industrial, scientific and research applications.
Applications
The IRNDT (infrared nondestructive testing) method is suitable for the detection and inspection of cracks, defects, cavities, voids and inhomogeneities in materials. It can also be used for inspection of welded joints of metal and plastic parts, inspection of solar cells and solar panels, determination of the internal structure of materials, etc.
The main advantage of the IRNDT method is its applicability to various materials in a wide range of industrial and research applications. IRNDT measurement is fast, nondestructive and noncontact. The restrictive condition for the method is the inspection depth in combination with the dimension and orientation of the defect/crack/inhomogeneity in the material.
Inspection of laser welded plastic parts
Laser welding of plastics is a progressive technology for joining materials with different optical properties. Classical methods for testing weld performance and joint quality – such as microscopic analysis of metallographic cuts or X-ray tomography – are not suitable for routine measurements. Pulse IRNDT analysis can be used successfully for weld inspection in many cases.
The images show an example of plastic parts inspection with a defective weld and with a correct weld. The gaps in the defective weld and the correct uninterrupted weld line are both clearly visible in the results of the IRNDT flash-pulse analysis.
Inspection of laser welded joints
Laser beam welding is a modern fusion welding technology. It currently finds wide usage not only in scientific research but also in a variety of industries. Among its most frequent users is the automotive industry, whose continuous innovation enables fast implementation of advanced technologies in production. Laser welding significantly enhances engineering designs and thus enables a number of new products which previously could not be made by conventional methods.
Laser welding can produce quality welds of different types, on both extremely thin and thick blanks. Weldable materials include common carbon steels, stainless steels, aluminium and its alloys, copper, titanium and, last but not least, special materials and their combinations.
Quality control is an integral part of weldment production. Unlike conventional non-destructive test methods, IRNDT can be used not only after the laser welding process but also during it. This makes it possible to decide during the manufacturing process whether or not the weldment complies with the established quality criteria.
Solar cells testing
Active thermography, particularly lock-in thermography, is widely employed for inspecting solar cells. While effective, lock-in thermography often requires physical contact with the solar cell for excitation. However, techniques that involve periodic excitation using light sources allow for non-contact testing of electrode-free cells. Common methods such as Illuminated Lock-In Thermography (ILIT) and Open Circuit Voltage Illuminated Lock-In Thermography (VOC-ILIT) are used to investigate defects or issues like ohmic shunts, cracks, open or short circuits, and degradation in photovoltaic materials. Pulsed thermography, another method under investigation, provides a non-contact alternative with significantly reduced inspection times; however, it usually offers lower detectability than the ILIT method.
References
Materials science | Active thermography | [
"Physics",
"Materials_science",
"Engineering"
] | 1,870 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
45,558,284 | https://en.wikipedia.org/wiki/Geometric%20phase%20analysis | Geometric phase analysis is a method of digital signal processing used to determine crystallographic quantities such as d-spacing or strain from high-resolution transmission electron microscope images. The analysis needs to be performed using a specialized computer program.
Principle
In geometric phase analysis, local changes in the periodicity of a high resolution image of a crystalline material are quantified, resulting in a two-dimensional map. Quantities which can be mapped with geometric phase analysis include interplanar distances (d-spacing), two-dimensional deformation and strain tensors and displacement vectors. This allows strain fields to be determined at very high resolution, down to the unit cell of the material. Importantly, GPA performed on images that have sub unit-cell resolution can produce erroneous results. For example, a change in composition may appear as a component of the deformation tensor, with the result that an interface appears to have a strain field associated with it when in fact there is none.
Since the calculations are performed in the frequency domain, the input image, with the periodicity of the crystal lattice, must be transformed into a spatial frequency representation using a 2D Fourier transform. From a mathematical point of view, the frequency image is a complex matrix with the same size as the original image. From a crystallographic point of view, the 2D Fourier transform is analogous to the diffraction pattern and the reciprocal lattice. The intensity peaks (or power peaks) in the Fourier transform correspond to crystallographic planes depicted in the original image, specifically to a sine wave with the orientation and period of the corresponding planes. A change in the phase of this sine wave indicates a change in the position of its peaks and troughs, which can be interpreted as a component of a 2D deformation tensor.
Due to the complex nature of the frequency image, it can be used to calculate amplitude and phase. Together with a vector of one crystallographic plane depicted in the image, the amplitude and phase can be used to generate a 2D map of d-spacing. If two vectors of non-parallel planes are known, the method can be used to generate maps of strain and displacement.
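The principle can be made concrete with a minimal numerical sketch (an illustration only, not the algorithm of Strain++ or CrysTBox; the reciprocal-lattice vector g and the mask radius are assumed inputs). The Fourier transform of the image is masked around one power peak, inverse-transformed, and the geometric phase of the corresponding lattice fringes is extracted:

```python
import numpy as np

def geometric_phase(image, g, mask_radius):
    """Geometric phase of the lattice fringes with reciprocal vector g.

    image: 2D array (high-resolution micrograph)
    g: reciprocal-lattice vector in cycles/pixel, e.g. (0.0, 0.25)
    mask_radius: radius (cycles/pixel) of the circular filter around g
    """
    ny, nx = image.shape
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Circular mask selecting the power peak at g
    mask = (fx - g[1]) ** 2 + (fy - g[0]) ** 2 < mask_radius ** 2
    h = np.fft.ifft2(F * mask)          # complex image of one fringe set
    y, x = np.mgrid[0:ny, 0:nx]
    # Subtract the phase ramp 2*pi*(g . r) of the ideal, undistorted lattice
    return np.angle(h * np.exp(-2j * np.pi * (g[0] * y + g[1] * x)))

# Synthetic lattice whose d-spacing is 1% larger on the right half
ny = nx = 128
y, x = np.mgrid[0:ny, 0:nx]
img = np.cos(2 * np.pi * 0.25 * x)
img[:, nx // 2:] = np.cos(2 * np.pi * (0.25 * 0.99) * x[:, nx // 2:])
P = geometric_phase(img, g=(0.0, 0.25), mask_radius=0.05)
# The gradient of P is proportional to the local change in g,
# and thus to the local change in d-spacing (strain).
```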
Software
In order to perform geometric phase analysis, a computer tool is needed. Firstly, manual evaluation of the transforms between the spatial and frequency domains would be highly impractical. Secondly, the vector of the crystallographic plane is an important input parameter, and the analysis is sensitive to the accuracy of its localization. The accuracy and repeatability of the analysis therefore require precise localization of the diffraction spots.
The required functionalities are available in several software packages including Strain++ and the crystallographic suite CrysTBox. The latter offers an interactive geometric phase analysis called gpaGUI. In both packages it is possible to locate peaks in the Fourier transform with sub-pixel precision (e.g. diffractGUI).
See also
High-resolution transmission electron microscopy
Fourier transform
Transmission electron microscope
CrysTBox
References
Crystallography
Electron microscopy
Geometric measurement
Digital signal processing
Applied mathematics | Geometric phase analysis | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 614 | [
"Geometric measurement",
"Electron",
"Electron microscopy",
"Physical quantities",
"Applied mathematics",
"Quantity",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Geometry",
"Microscopy"
] |
58,600,169 | https://en.wikipedia.org/wiki/Hill%20limit%20%28solid-state%29 | In solid-state physics, the Hill limit is a critical distance defined in a lattice of actinide or rare-earth atoms. These atoms have partially filled 4f or 5f levels in their valence shell, which are therefore responsible for the main interaction between each atom and its environment. In this context, the Hill limit is defined as twice the radius of the f-orbital. Therefore, if two atoms of the lattice are separated by a distance greater than the Hill limit, the overlap of their f-orbitals becomes negligible. A direct consequence is the absence of hopping for the f electrons, i.e. their localization on the ion sites of the lattice.
Localized f electrons lead to paramagnetic materials, since the remaining unpaired spins are stuck in their orbitals. However, when the rare-earth lattice (or a single atom) is embedded in a metallic one (an intermetallic compound), interactions with the conduction band allow the f electrons to move through the lattice even for interatomic distances above the Hill limit.
See also
Anderson impurity model
References
Solid-state chemistry
Rare earth alloys
Electrical conductors | Hill limit (solid-state) | [
"Physics",
"Chemistry",
"Materials_science"
] | 223 | [
"Rare earth alloys",
"Materials",
"Alloys",
"Electrical conductors",
"Condensed matter physics",
"nan",
"Matter",
"Solid-state chemistry"
] |
58,604,145 | https://en.wikipedia.org/wiki/EUCMOS | EUCMOS is the abbreviation of the conference series "European Congress on Molecular Spectroscopy".
Scope
The European Congress on Molecular Spectroscopy (EUCMOS) is held every two years. The first Congress in this series was held in Basel (Switzerland) in 1951. It focuses on all aspects of spectroscopic methods and techniques (including applications), as well as computational and theoretical approaches for the investigation of structure, dynamics, and properties of molecular systems.
This Congress covers various scientific topics including vibrational, electronic and rotational spectroscopies, spectroscopy of surfaces and interfaces, spectroscopy of biological molecules, computational methods in spectroscopy, applied spectroscopies (archaeology, geology, mineralogy, arts, environmental analysis, food analysis, and processing), new materials, and time-resolved spectroscopy.
History
The European Molecular Spectroscopy Group, which was constituted informally after the Second World War to bring together spectroscopists from across Europe, met for the first time in Konstanz in 1947. Reinhard Mecke was at the time working in temporary accommodation at Wallhausen, a small village on the shores of Lake Constance, and the meeting (initiated by invitation of Professors Jean Lecomte and Alfred Kastler from Paris) was attended by French, German and Austrian spectroscopists.
However, the meeting which has since become regarded as the first of the EUCMOS series was organised under the auspices of Ernst Miescher in Basel in 1951, followed every two years by conferences in Paris (1953), Oxford (1955), Freiburg (1957), Bologna (1959), Amsterdam (1961), Budapest (1963), Copenhagen (1965), Madrid (1967) and Liège (1969). The next meeting was not held until 1973, when it was organized in Tallinn. The 1975 meeting in Strasbourg was devoted to the molecular spectroscopy of dense phases. The biennial meetings were perturbed for the second time in 1991, when EUCMOS XX, which was due to be held in Zagreb, had to be cancelled because of the civil war in Yugoslavia. The following meeting, in Vienna, was brought forward by a year, and the meetings have since been held in even years. At EUCMOS XXII, held in Essen (1994), William James Orville-Thomas retired as President of the International Committee and Austin Barnes was elected to this post. During his mandate, 11 conferences of the series were held, including the one organized in Coimbra in 2000 (EUCMOS XXV). At EUCMOS XXXIII (2016, Szeged) Barnes retired as President of the International Committee and Rui Fausto (vice-President since EUCMOS XXVII in Cracow, together with Henryk Ratajczak) was elected to this position. The new President had already been chosen as the organizer of the following EUCMOS meeting in Coimbra, 2018 (EUCMOS XXXIV). Sylvia Turrel and Michael Schmitt are the current vice-Presidents of the International Committee.
Over the years, EUCMOS has gathered present and future Nobel Prize winners from all areas of molecular physics as plenary speakers. This started in 1953 with Alfred Kastler in Paris (Nobel Prize 1966), followed by Gerhard Herzberg in 1989 in Leipzig (Nobel Prize 1971), Harold Kroto in 2000 in Coimbra (Nobel Prize 1996) and Theodor W. Hänsch in 2010 in Florence (Nobel Prize 2005).
Overview
* Not held because of the civil war in Yugoslavia.
References
Attribution: contains material licensed under CC-BY-3.0 from http://www.qui.uc.pt/eucmos2018/EUCMOS_history.html © 2016 EUCMOS 2018
International conferences
Spectroscopy | EUCMOS | [
"Physics",
"Chemistry"
] | 768 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
49,916,168 | https://en.wikipedia.org/wiki/Cavity%20optomechanics | Cavity optomechanics is a branch of physics which focuses on the interaction between light and mechanical objects on low-energy scales. It is a cross field of optics, quantum optics, solid-state physics and materials science. The motivation for research on cavity optomechanics comes from fundamental effects of quantum theory and gravity, as well as technological applications.
The name of the field relates to the main effect of interest: the enhancement of the radiation pressure interaction between light (photons) and matter using optical resonators (cavities). It first became relevant in the context of gravitational wave detection, since optomechanical effects must be taken into account in interferometric gravitational wave detectors. Furthermore, one may envision optomechanical structures allowing the realization of Schrödinger's-cat-like states. Macroscopic objects consisting of billions of atoms share collective degrees of freedom which may behave quantum mechanically (e.g. a sphere of micrometer diameter being in a spatial superposition between two different places). Such a quantum state of motion would allow researchers to experimentally investigate decoherence, which describes the transition of objects from states described by quantum mechanics to states described by Newtonian mechanics. Optomechanical structures provide new methods to test the predictions of quantum mechanics and decoherence models and thereby might allow some of the most fundamental questions in modern physics to be answered.
There is a broad range of experimental optomechanical systems which are almost equivalent in their description, but completely different in size, mass, and frequency. Cavity optomechanics was featured as the most recent "milestone of photon history" in Nature Photonics, along with well-established concepts and technology like quantum information, Bell inequalities and the laser.
Concepts of cavity optomechanics
Physical processes
Stokes and anti-Stokes scattering
The most elementary light–matter interaction is a light beam scattering off an arbitrary object (atom, molecule, nanobeam etc.). There is always elastic light scattering, with the outgoing light frequency identical to the incoming frequency, ω_out = ω_in. Inelastic scattering, in contrast, is accompanied by excitation or de-excitation of the material object (e.g. internal atomic transitions may be excited). However, it is always possible to have Brillouin scattering independent of the internal electronic details of atoms or molecules due to the object's mechanical vibrations:

ω_out = ω_in ∓ Ω_m,

where Ω_m is the vibrational frequency. The vibrations gain or lose energy, respectively, for these Stokes/anti-Stokes processes, while optical sidebands are created around the incoming light frequency at ω_in ∓ Ω_m.
If Stokes and anti-Stokes scattering occur at an equal rate, the vibrations will only heat up the object. However, an optical cavity can be used to suppress the (anti-)Stokes process, which reveals the principle of the basic optomechanical setup: a laser-driven optical cavity is coupled to the mechanical vibrations of some object. The purpose of the cavity is to select optical frequencies (e.g. to suppress the Stokes process) that resonantly enhance the light intensity and to enhance the sensitivity to the mechanical vibrations. The setup displays features of a true two-way interaction between light and mechanics, which is in contrast to optical tweezers, optical lattices, or vibrational spectroscopy, where the light field controls the mechanics (or vice versa) but the loop is not closed.
Radiation pressure force
Another but equivalent way to interpret the principle of optomechanical cavities is by using the concept of radiation pressure. According to the quantum theory of light, every photon with wavenumber k carries a momentum p = ħk, where ħ is the reduced Planck constant. This means that a photon reflected off a mirror surface transfers a momentum of 2ħk onto the mirror due to the conservation of momentum. This effect is extremely small and cannot be observed on most everyday objects; it becomes more significant when the mass of the mirror is very small and/or the number of photons is very large (i.e. high intensity of the light). Since the momentum of photons is extremely small and not enough to change the position of a suspended mirror significantly, the interaction needs to be enhanced. One possible way to do this is by using optical cavities. If a photon is enclosed between two mirrors, where one is the oscillator and the other a heavy fixed one, it will bounce off the mirrors many times and transfer its momentum every time it hits them. The number of times a photon can transfer its momentum is directly related to the finesse of the cavity, which can be improved with highly reflective mirror surfaces. The radiation pressure of the photons does not simply shift the suspended mirror further and further away, as the effect on the cavity light field must be taken into account: if the mirror is displaced, the cavity's length changes, which also alters the cavity resonance frequency. Therefore, the detuning between the changed cavity and the unchanged laser driving frequency is modified. The detuning determines the light amplitude inside the cavity: at smaller levels of detuning, more light actually enters the cavity because it is closer to the cavity resonance frequency. Since the light amplitude, i.e. the number of photons inside the cavity, causes the radiation pressure force and consequently the displacement of the mirror, the loop is closed: the radiation pressure force effectively depends on the mirror position. Another advantage of optical cavities is that the modulation of the cavity length through an oscillating mirror can directly be seen in the spectrum of the cavity.
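As a rough numerical sketch of these statements (all parameter values below are assumed for illustration, not taken from the source): the momentum of a single near-infrared photon is tiny, while a high-finesse cavity enhances the time-averaged radiation-pressure force by letting each photon bounce many times, the circulating power being enhanced by roughly the finesse over π.

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

wavelength = 1064e-9          # assumed Nd:YAG laser wavelength, m
k = 2 * np.pi / wavelength    # wavenumber, 1/m
p_photon = hbar * k           # single-photon momentum p = hbar*k, kg m/s

P_in = 1e-3                       # assumed input power: 1 mW
finesse = 1e4                     # assumed cavity finesse
P_circ = P_in * finesse / np.pi   # approximate circulating power, W
F_rad = 2 * P_circ / c            # time-averaged radiation-pressure force, N

print(f"photon momentum:          {p_photon:.2e} kg m/s")
print(f"radiation-pressure force: {F_rad:.2e} N")
```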
Optical spring effect
Some first effects of the light on the mechanical resonator can be captured by converting the radiation pressure force into a potential,

V_rad(x) = −∫ F_rad(x′) dx′,

and adding it to the intrinsic harmonic oscillator potential of the mechanical oscillator. This combined potential reveals the possibility of static multi-stability in the system, i.e. the potential can feature several stable minima. In addition, the slope of the radiation pressure force, ∂F_rad/∂x, can be understood as a modification of the mechanical spring constant,

k_eff = k₀ − ∂F_rad/∂x.

This effect is known as the optical spring effect (light-induced spring constant).
However, the model is incomplete, as it neglects retardation effects due to the finite cavity photon decay rate κ. The force follows the motion of the mirror only with some time delay, which leads to effects like friction. For example, assume the equilibrium position sits somewhere on the rising slope of the resonance. In thermal equilibrium, there will be oscillations around this position that do not follow the shape of the resonance because of retardation. The consequence of this delayed radiation force during one cycle of oscillation is that work is performed; in this particular case it is negative, i.e. the radiation force extracts mechanical energy (there is extra, light-induced damping). This can be used to cool down the mechanical motion and is referred to as optical or optomechanical cooling. It is important for reaching the quantum regime of the mechanical oscillator, where thermal noise effects on the device become negligible. Similarly, if the equilibrium position sits on the falling slope of the cavity resonance, the work is positive and the mechanical motion is amplified. In this case the extra, light-induced damping is negative and leads to amplification of the mechanical motion (heating). Radiation-induced damping of this kind was first observed in pioneering experiments by Braginsky and coworkers in 1970.
Quantized energy transfer
Another explanation for the basic optomechanical effects of cooling and amplification can be given in a quantized picture: by detuning the incoming light from the cavity resonance to the red sideband, the photons can only enter the cavity if they take phonons with energy ħΩ_m from the mechanics; this effectively cools the device until a balance with heating mechanisms from the environment and laser noise is reached. Similarly, it is also possible to heat structures (amplify the mechanical motion) by detuning the driving laser to the blue side; in this case the laser photons scatter into a cavity photon and create an additional phonon in the mechanical oscillator.
The principle can be summarized as: phonons are converted into photons when cooled and vice versa in amplification.
Three regimes of operation: cooling, heating, resonance
The basic behaviour of the optomechanical system can generally be divided into different regimes, depending on the detuning Δ = ω_L − ω_cav between the laser frequency ω_L and the cavity resonance frequency ω_cav:
Red-detuned regime, Δ < 0 (most prominent effects on the red sideband, Δ = −Ω_m): In this regime state exchange between two resonant oscillators can occur (i.e. a beam-splitter in quantum optics language). This can be used for state transfer between phonons and photons (which requires the so-called "strong coupling regime") or the above-mentioned optical cooling.
Blue-detuned regime, Δ > 0 (most prominent effects on the blue sideband, Δ = +Ω_m): This regime describes "two-mode squeezing". It can be used to achieve quantum entanglement, squeezing, and mechanical "lasing" (amplification of the mechanical motion to self-sustained optomechanical oscillations / limit cycle oscillations), if the growth of the mechanical energy overwhelms the intrinsic losses (mainly mechanical friction).
On-resonance regime, Δ = 0: In this regime the cavity is simply operated as an interferometer to read out the mechanical motion.
The optical spring effect also depends on the detuning. It can be observed at large detunings, and its strength varies with the detuning and the laser drive.
Mathematical treatment
Hamiltonian
The standard optomechanical setup is a Fabry–Pérot cavity, where one mirror is movable and thus provides an additional mechanical degree of freedom. This system can be mathematically described by a single optical cavity mode coupled to a single mechanical mode. The coupling originates from the radiation pressure of the light field that eventually moves the mirror, which changes the cavity length and resonance frequency. The optical mode is driven by an external laser. This system can be described by the following effective Hamiltonian:

Ĥ = ħω_cav(x̂) â†â + ħΩ_m b̂†b̂ + Ĥ_drive,

where â and b̂ are the bosonic annihilation operators of the given cavity mode and the mechanical resonator respectively, ω_cav is the frequency of the optical mode, x̂ is the position of the mechanical resonator, Ω_m is the mechanical mode frequency, ω_L is the driving laser frequency, and E is the driving amplitude. The operators satisfy the commutation relations

[â, â†] = [b̂, b̂†] = 1.

The cavity frequency ω_cav is now dependent on x̂. The last term describes the driving, given by

Ĥ_drive = ħE (â e^{iω_L t} + â† e^{−iω_L t}),

where E = √(2Pκ/ħω_L), P is the input power coupled to the optical mode under consideration and κ its linewidth. The system is coupled to the environment, so the full treatment of the system would also include optical and mechanical dissipation (denoted by κ and Γ_m respectively) and the corresponding noise entering the system.
The standard optomechanical Hamiltonian is obtained by getting rid of the explicit time dependence of the laser driving term and separating the optomechanical interaction from the free optical oscillator. This is done by switching into a reference frame rotating at the laser frequency (in which case the optical mode annihilation operator undergoes the transformation â → â e^{−iω_L t}) and applying a Taylor expansion on ω_cav(x̂). Quadratic and higher-order coupling terms are usually neglected, such that the standard Hamiltonian becomes

Ĥ = −ħΔ â†â + ħΩ_m b̂†b̂ − ħg₀ â†â (b̂ + b̂†) + ħE (â + â†),

where Δ = ω_L − ω_cav is the laser detuning and x̂ = x_ZPF (b̂ + b̂†) the position operator. The first two terms (−ħΔâ†â and ħΩ_m b̂†b̂) are the free optical and mechanical Hamiltonians respectively. The third term contains the optomechanical interaction, where g₀ is the single-photon optomechanical coupling strength (also known as the bare optomechanical coupling). It determines the amount of cavity resonance frequency shift if the mechanical oscillator is displaced by the zero-point uncertainty x_ZPF = √(ħ/2m_eff Ω_m), where m_eff is the effective mass of the mechanical oscillator. It is sometimes more convenient to use the frequency pull parameter, G = −∂ω_cav/∂x, to determine the frequency change per displacement of the mirror.
For example, the optomechanical coupling strength of a Fabry–Pérot cavity of length L with a moving end-mirror can be directly determined from the geometry to be g₀ = (ω_cav/L) x_ZPF.
This standard Hamiltonian is based on the assumption that only one optical and one mechanical mode interact. In principle, each optical cavity supports an infinite number of modes, and mechanical oscillators have more than a single oscillation/vibration mode. The validity of this approach relies on the possibility to tune the laser in such a way that it only populates a single optical mode (implying that the spacing between the cavity modes needs to be sufficiently large). Furthermore, scattering of photons to other modes is supposed to be negligible, which holds if the mechanical (motional) sidebands of the driven mode do not overlap with other cavity modes; i.e. if the mechanical mode frequency is smaller than the typical separation of the optical modes.
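As a minimal numerical sketch (all parameter values are assumed for illustration, not taken from the source), the bare coupling g₀ = (ω_cav/L)·x_ZPF of an idealized Fabry–Pérot cavity with a moving end-mirror can be estimated as follows:

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

# Assumed example parameters
wavelength = 1064e-9        # laser wavelength, m
L = 1e-2                    # cavity length: 1 cm
m_eff = 1e-12               # effective mirror mass: 1 ng
Omega_m = 2 * np.pi * 1e6   # mechanical frequency: 1 MHz

omega_cav = 2 * np.pi * c / wavelength
x_zpf = np.sqrt(hbar / (2 * m_eff * Omega_m))   # zero-point motion, m
g0 = (omega_cav / L) * x_zpf                    # bare optomechanical coupling

print(f"x_zpf  = {x_zpf:.2e} m")
print(f"g0/2pi = {g0 / (2 * np.pi):.1f} Hz")    # of order 100 Hz here
```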
Linearization
The single-photon optomechanical coupling strength g₀ is usually a small frequency, much smaller than the cavity decay rate κ, but the effective optomechanical coupling can be enhanced by increasing the drive power. With a strong enough drive, the dynamics of the system can be considered as quantum fluctuations around a classical steady state, i.e. â = α + δâ, where α is the mean light field amplitude and δâ denotes the fluctuations. Expanding the photon number â†â = |α|² + α*δâ + αδâ† + δâ†δâ, the term |α|² can be omitted as it leads to a constant radiation pressure force which simply shifts the resonator's equilibrium position. The linearized optomechanical Hamiltonian can be obtained by neglecting the second-order term δâ†δâ:

Ĥ_lin = −ħΔ δâ†δâ + ħΩ_m b̂†b̂ − ħg (δâ + δâ†)(b̂ + b̂†),

where g = g₀|α|. While this Hamiltonian is a quadratic function, it is considered "linearized" because it leads to linear equations of motion. It is a valid description of many experiments, where g₀ is typically very small and needs to be enhanced by the driving laser. For a realistic description, dissipation should be added to both the optical and the mechanical oscillator. The driving term from the standard Hamiltonian is not part of the linearized Hamiltonian, since it is the source of the classical light amplitude α around which the linearization was executed.
With a particular choice of detuning, different phenomena can be observed (see also the section about physical processes). The clearest distinction can be made between the following three cases:
Δ = −Ω_m: a rotating wave approximation of the linearized Hamiltonian, where one omits all non-resonant terms, reduces the coupling Hamiltonian to a beamsplitter operator, Ĥ_int = −ħg(δâ† b̂ + δâ b̂†). This approximation works best on resonance, i.e. if the detuning becomes exactly equal to the negative mechanical frequency. Negative detuning (red detuning of the laser from the cavity resonance) by an amount equal to the mechanical mode frequency favors the anti-Stokes sideband and leads to a net cooling of the resonator; a numerical sketch of this case follows the list. Laser photons absorb energy from the mechanical oscillator by annihilating phonons in order to become resonant with the cavity.
Δ = +Ω_m: a rotating wave approximation of the linearized Hamiltonian leads to the other resonant terms. The coupling Hamiltonian takes the form Ĥ_int = −ħg(δâ† b̂† + δâ b̂), which is proportional to the two-mode squeezing operator. Therefore, two-mode squeezing and entanglement between the mechanical and optical modes can be observed with this parameter choice. Positive detuning (blue detuning of the laser from the cavity resonance) can also lead to instability. The Stokes sideband is enhanced, i.e. the laser photons shed energy, increasing the number of phonons and becoming resonant with the cavity in the process.
Δ = 0: In this case of driving on resonance, all the terms in the coupling −ħg(δâ + δâ†)(b̂ + b̂†) must be considered. The optical mode experiences a shift proportional to the mechanical displacement, which translates into a phase shift of the light transmitted through (or reflected off) the cavity. The cavity serves as an interferometer, augmented by the factor of the optical finesse, and can be used to measure very small displacements. This setup has enabled LIGO to detect gravitational waves.
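The red-detuned (beamsplitter) case can be illustrated with a small open-system simulation. The following sketch uses the open-source QuTiP library with assumed, illustrative parameters (in units of Ω_m = 1): it evolves the linearized Hamiltonian at Δ = −Ω_m with optical decay and mechanical thermalization, and shows the mean phonon number decreasing, i.e. optomechanical sideband cooling:

```python
import numpy as np
import qutip as qt

# Truncated Hilbert spaces for the optical fluctuations and the mechanics
N_opt, N_mech = 5, 20
a = qt.tensor(qt.destroy(N_opt), qt.qeye(N_mech))   # optical fluctuation mode
b = qt.tensor(qt.qeye(N_opt), qt.destroy(N_mech))   # mechanical mode

# Illustrative parameters (units of Omega_m = 1)
Omega_m = 1.0
Delta = -Omega_m     # red-detuned drive
g = 0.05             # enhanced coupling g = g0*|alpha|
kappa = 0.2          # optical decay (resolved sideband: kappa < Omega_m)
Gamma_m = 1e-5       # mechanical damping
n_th = 5             # thermal phonon occupation of the mechanical bath

# Linearized Hamiltonian, including counter-rotating terms
H = (-Delta * a.dag() * a + Omega_m * b.dag() * b
     - g * (a + a.dag()) * (b + b.dag()))

# Collapse operators: cavity decay plus mechanical thermal bath
c_ops = [np.sqrt(kappa) * a,
         np.sqrt(Gamma_m * (n_th + 1)) * b,
         np.sqrt(Gamma_m * n_th) * b.dag()]

# Start with the mechanics in a thermal state, cavity fluctuations in vacuum
rho0 = qt.tensor(qt.fock_dm(N_opt, 0), qt.thermal_dm(N_mech, n_th))
tlist = np.linspace(0, 200, 100)
result = qt.mesolve(H, rho0, tlist, c_ops, e_ops=[b.dag() * b])

print("initial phonons:", result.expect[0][0])
print("final phonons:  ", result.expect[0][-1])   # strongly reduced by cooling
```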
Equations of motion
From the linearized Hamiltonian, the so-called linearized quantum Langevin equations, which govern the dynamics of the optomechanical system, can be derived when dissipation and noise terms are added to the Heisenberg equations of motion:

dδâ/dt = (iΔ − κ/2) δâ + ig (b̂ + b̂†) + √κ â_in,
db̂/dt = (−iΩ_m − Γ_m/2) b̂ + ig (δâ + δâ†) + √Γ_m b̂_in.

Here â_in and b̂_in are the input noise operators (either quantum or thermal noise) and −(κ/2)δâ and −(Γ_m/2)b̂ are the corresponding dissipative terms. For optical photons, thermal noise can be neglected due to the high frequencies, such that the optical input noise can be described by quantum noise only; this does not apply to microwave implementations of the optomechanical system. For the mechanical oscillator, thermal noise has to be taken into account and is the reason why many experiments are placed in additional cooling environments to lower the ambient temperature.
These first order differential equations can be solved easily when they are rewritten in frequency space (i.e. a Fourier transform is applied).
Two main effects of the light on the mechanical oscillator can then be expressed in the following ways:

δΩ_m = g² [ (Δ + Ω_m) / ((Δ + Ω_m)² + κ²/4) + (Δ − Ω_m) / ((Δ − Ω_m)² + κ²/4) ]

The equation above is termed the optical spring effect and may lead to significant frequency shifts in the case of low-frequency oscillators, such as pendulum mirrors. In the case of higher resonance frequencies (MHz regime), it does not significantly alter the frequency. For a harmonic oscillator, the relation between a frequency shift and a change in the spring constant originates from Hooke's law.

Γ_opt = g² [ κ / ((Δ + Ω_m)² + κ²/4) − κ / ((Δ − Ω_m)² + κ²/4) ]

The equation above shows the optical damping, i.e. the intrinsic mechanical damping Γ_m becomes stronger (or weaker) due to the optomechanical interaction, with effective damping Γ_eff = Γ_m + Γ_opt. From the formula, in the case of negative detuning and large coupling, mechanical damping can be greatly increased, which corresponds to cooling of the mechanical oscillator. In the case of positive detuning the optomechanical interaction reduces the effective damping. Instability can occur when the effective damping drops below zero (Γ_eff < 0), which means that it turns into an overall amplification rather than a damping of the mechanical oscillator.
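A short numerical sketch of these two formulas (with assumed, illustrative parameter values in units of Ω_m = 1) makes the sign structure explicit: red detuning gives positive optical damping (cooling), blue detuning gives negative damping (amplification):

```python
import numpy as np

# Illustrative parameters in units of the mechanical frequency Omega_m = 1
Omega_m, kappa, g = 1.0, 0.2, 0.05

def optical_spring(Delta):
    """Light-induced mechanical frequency shift delta(Omega_m)."""
    return g**2 * ((Delta + Omega_m) / ((Delta + Omega_m)**2 + kappa**2 / 4)
                   + (Delta - Omega_m) / ((Delta - Omega_m)**2 + kappa**2 / 4))

def optical_damping(Delta):
    """Light-induced damping Gamma_opt (positive means cooling)."""
    return g**2 * (kappa / ((Delta + Omega_m)**2 + kappa**2 / 4)
                   - kappa / ((Delta - Omega_m)**2 + kappa**2 / 4))

for Delta in (-Omega_m, +Omega_m):
    print(f"Delta = {Delta:+.1f}: dOmega = {optical_spring(Delta):+.2e}, "
          f"Gamma_opt = {optical_damping(Delta):+.2e}")
# Delta = -1 gives Gamma_opt > 0 (cooling); Delta = +1 gives Gamma_opt < 0
# (anti-damping / amplification), matching the regimes described above.
```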
Important parameter regimes
The most basic regimes in which the optomechanical system can be operated are defined by the laser detuning and described above. The resulting phenomena are either cooling or heating of the mechanical oscillator. However, additional parameters determine what effects can actually be observed.
The good/bad cavity regime (also called the resolved/unresolved sideband regime) relates the mechanical frequency Ω_m to the optical linewidth κ. The good cavity regime (resolved sideband limit) is of experimental relevance since it is a necessary requirement to achieve ground state cooling of the mechanical oscillator, i.e. cooling to an average mechanical occupation number below 1. The term "resolved sideband regime" refers to the possibility of distinguishing the motional sidebands from the cavity resonance, which is true if the linewidth of the cavity, κ, is smaller than the distance from the cavity resonance to the sideband (κ < Ω_m). This requirement leads to a condition for the so-called sideband parameter: Ω_m/κ > 1. If Ω_m/κ < 1, the system resides in the bad cavity regime (unresolved sideband limit), where the motional sideband lies within the peak of the cavity resonance. In the unresolved sideband regime, many motional sidebands can be included in the broad cavity linewidth, which allows a single photon to create more than one phonon and leads to greater amplification of the mechanical oscillator.
Another distinction can be made depending on the optomechanical coupling strength. If the (enhanced) optomechanical coupling becomes larger than the cavity linewidth (g > κ), a strong-coupling regime is achieved. There the optical and mechanical modes hybridize and normal-mode splitting occurs. This regime must be distinguished from the (experimentally much more challenging) single-photon strong-coupling regime, where the bare optomechanical coupling becomes of the order of the cavity linewidth, g₀ ~ κ. Effects of the full non-linear interaction described by −ħg₀ â†â (b̂ + b̂†) only become observable in this regime. For example, it is a precondition to create non-Gaussian states with the optomechanical system. Typical experiments currently operate in the linearized regime (small g₀) and only investigate effects of the linearized Hamiltonian.
Experimental realizations
Setup
The strength of the optomechanical Hamiltonian is the large range of experimental implementations to which it can be applied, which results in wide ranges for the optomechanical parameters. For example, the size of optomechanical systems can be on the order of micrometers or, in the case of LIGO, kilometers (although LIGO is dedicated to the detection of gravitational waves and not to the investigation of optomechanics specifically).
Examples of real optomechanical implementations are:
Cavities with a moving mirror: the archetype of an optomechanical system. The light is reflected from the mirrors and transfers momentum onto the movable one, which in turn changes the cavity resonance frequency.
Membrane-in-the-middle system: a micromechanical membrane is brought into a cavity consisting of fixed massive mirrors. The membrane takes the role of the mechanical oscillator. Depending on the positioning of the membrane inside the cavity, this system behaves like the standard optomechanical system.
Levitated system: an optically levitated nanoparticle is brought into a cavity consisting of fixed massive mirrors. The levitated nanoparticle takes the role of the mechanical oscillator. Depending on the positioning of the particle inside the cavity, this system behaves like the standard optomechanical system.
Microtoroids that support an optical whispering gallery mode can be either coupled to a mechanical mode of the toroid or evanescently to a nanobeam that is brought in proximity.
Optomechanical crystal structures: patterned dielectrics or metamaterials can confine optical and/or mechanical (acoustic) modes. If the patterned material is designed to confine light, it is called a photonic crystal cavity. If it is designed to confine sound, it is called a phononic crystal cavity. Either can be used respectively as the optical or mechanical component. Hybrid crystals, which confine both sound and light to the same area, are especially useful, as they form a complete optomechanical system.
Electromechanical implementations of an optomechanical system use superconducting LC circuits with a mechanically compliant capacitance, like a membrane with metallic coating or a tiny capacitor plate glued onto it. By using movable capacitor plates, mechanical motion (physical displacement) of the plate or membrane changes the capacitance C, which transforms mechanical oscillation into electrical oscillation. LC oscillators have resonances in the microwave frequency range; therefore, LC circuits are also termed microwave resonators. The physics is exactly the same as in optical cavities, but the range of parameters is different because microwave radiation has a larger wavelength than optical light or infrared laser light.
A purpose of studying different designs of the same system is the different parameter regimes that are accessible by different setups and their different potential to be converted into tools of commercial use.
Measurement
The optomechanical system can be measured by using a scheme like homodyne detection. Either the light of the driving laser is measured, or a two-mode scheme is followed where a strong laser is used to drive the optomechanical system into the state of interest and a second laser is used for the read-out of the state of the system. This second "probe" laser is typically weak, i.e. its optomechanical interaction can be neglected compared to the effects caused by the strong "pump" laser.
The optical output field can also be measured with single photon detectors to achieve photon counting statistics.
Relation to fundamental research
One of the questions which are still subject to current debate is the exact mechanism of decoherence. In the Schrödinger's cat thought experiment, the cat would never be seen in a quantum state: there needs to be something like a collapse of the quantum wave functions, which brings it from a quantum state to a pure classical state. The question is where the boundary lies between objects with quantum properties and classical objects. Taking spatial superpositions as an example, there might be a size limit to objects which can be brought into superpositions, there might be a limit to the spatial separation of the centers of mass of a superposition or even a limit to the superposition of gravitational fields and its impact on small test masses. Those predictions can be checked with large mechanical structures that can be manipulated at the quantum level.
Some easier to check predictions of quantum mechanics are the prediction of negative Wigner functions for certain quantum states, measurement precision beyond the standard quantum limit using squeezed states of light, or the asymmetry of the sidebands in the spectrum of a cavity near the quantum ground state.
Applications
Years before cavity optomechanics gained the status of an independent field of research, many of its techniques were already used in gravitational wave detectors where it is necessary to measure displacements of mirrors on the order of the Planck scale. Even if these detectors do not address the measurement of quantum effects, they encounter related issues (photon shot noise) and use similar tricks (squeezed coherent states) to enhance the precision. Further applications include the development of quantum memory for quantum computers, high precision sensors (e.g. acceleration sensors) and quantum transducers e.g. between the optical and the microwave domain (taking advantage of the fact that the mechanical oscillator can easily couple to both frequency regimes).
Related fields and expansions
In addition to the standard cavity optomechanics explained above, there are variations of the simplest model:
Pulsed optomechanics: the continuous laser driving is replaced by pulsed laser driving. It is useful for creating entanglement and allows backaction-evading measurements.
Quadratic coupling: a system with quadratic optomechanical coupling can be investigated beyond the linear coupling term −ħg₀ â†â (b̂ + b̂†). The interaction Hamiltonian would then feature a term proportional to â†â (b̂ + b̂†)². In membrane-in-the-middle setups it is possible to achieve quadratic coupling in the absence of linear coupling by positioning the membrane at an extremum of the standing wave inside the cavity. One possible application is to carry out a quantum nondemolition measurement of the phonon number.
Reversed dissipation regime: in the standard optomechanical system the mechanical damping is much smaller than the optical damping (Γ_m ≪ κ). A system where this hierarchy is reversed can be engineered, i.e. the optical damping is much smaller than the mechanical damping (κ ≪ Γ_m). Within the linearized regime, symmetry implies an inversion of the effects described above; for example, cooling of the mechanical oscillator in the standard optomechanical system is replaced by cooling of the optical oscillator in a system with reversed dissipation hierarchy. This effect was also seen in optical fiber loops in the 1970s.
Dissipative coupling: the coupling between optics and mechanics arises from a position-dependent optical dissipation rate κ(x̂) instead of a position-dependent cavity resonance frequency ω_cav(x̂), which changes the interaction Hamiltonian and alters many effects of the standard optomechanical system. For example, this scheme allows the mechanical resonator to be cooled to its ground state without the requirement of the good cavity regime.
Extensions to the standard optomechanical system include coupling to more and physically different systems:
Optomechanical arrays: coupling several optomechanical systems to each other (e.g. using evanescent coupling of the optical modes) allows multi-mode phenomena like synchronization to be studied. So far many theoretical predictions have been made, but only a few experiments exist. The first optomechanical array (with more than two coupled systems) consisted of seven optomechanical systems.
Hybrid systems: an optomechanical system can be coupled to a system of a different nature (e.g. a cloud of ultracold atoms and a two-level system), which can lead to new effects on both the optomechanical and the additional system.
Cavity optomechanics is closely related to trapped ion physics and Bose–Einstein condensates. These systems share very similar Hamiltonians, but have fewer particles (about 10 for ion traps and 10^5–10^8 for Bose–Einstein condensates) interacting with the field of light. It is also related to the field of cavity quantum electrodynamics.
See also
Quantum harmonic oscillator
Optical cavity
Laser cooling
Coherent control
References
Further reading
Daniel Steck, Classical and Modern Optics
Michel Devoret, Benjamin Huard, Robert Schoelkopf, Leticia F. Cugliandolo (2014). Quantum Machines: Measurement and Control of Engineered Quantum Systems. Lecture Notes of the Les Houches Summer School: Volume 96, July 2011. Oxford University Press
Demir, Dilek, "A table-top demonstration of radiation pressure", 2011, Diploma thesis, E-Theses univie. doi:10.25365/thesis.16381
Quantum optics | Cavity optomechanics | [
"Physics"
] | 5,971 | [
"Quantum optics",
"Quantum mechanics"
] |
49,924,075 | https://en.wikipedia.org/wiki/Western%20North%20American%20Naturalist | Western North American Naturalist, formerly The Great Basin Naturalist, is a peer-reviewed scientific journal focusing on biodiversity and conservation of western North America. The journal's geographic coverage includes "from northernmost Canada and Alaska to southern Mexico, and from the Mississippi River to the Pacific Ocean." Established in 1939, it is published by the Monte L. Bean Life Science Museum (Brigham Young University). The journal is published quarterly, with monographs published irregularly in Monographs of the Western North American Naturalist.
History
Vasco M. Tanner founded the journal after a term as editor of Proceedings magazine. His hope for the journal was to have a publication that covered a wide range of biology-related topics, in addition to having a place to publish his own research. From 1939 through 1966, the journal limited publication of its issues to once or twice a year, due in part to World War II. Franklin Harris encouraged the journal to continue publication, and it was one of the first journals to be used "for exchange purposes" by university libraries. From 1967 on, the journal published quarterly issues. Tanner served as editor of the Great Basin Naturalist until 1970. Steven Wood, Tanner's successor as editor, established an editorial board for the journal, which allowed it to use an improved peer review process.
In 1975, the journal moved its editorial offices to the Monte L. Bean Life Science Museum. In 1976, articles too long for publication in the journal started being published in The Great Basin Naturalist Memoirs series. In 1990, Jim Barnes succeeded Steven Wood as editor. The journal's editor changed again in 1994 to Richard Baumann. In 1999, the publication of The Great Basin Naturalist ended. The journal's title changed to Western North American Naturalist, which started publishing in 2000. In 2006, Mark C. Belk became the journal's new editor. Belk was still the editor in 2017.
Impact
According to Journal Citation Reports, Western North American Naturalist had a 2016 impact factor of 0.311, ranking 147th of 153 journals in the Ecology category.
References
External links
The Great Basin Naturalist archive at the Biodiversity Heritage Library
Ecology journals
Academic journals established in 1939
English-language journals
Delayed open access journals
Quarterly journals
Academic journals published by museums
1939 establishments in the United States | Western North American Naturalist | [
"Environmental_science"
] | 453 | [
"Environmental science journals",
"Ecology journals"
] |
49,924,144 | https://en.wikipedia.org/wiki/Aquation | Aquation is the chemical reaction involving "incorporation of one or more integral molecules of water" with or without displacement of other atoms or groups. The term is typically employed to refer to reactions of metal complexes where an anion is displaced by water. For example, bromopentaamminecobalt(III) undergoes the following aquation reaction to give a metal aquo complex:
[Co(NH3)5Br]2+ + H2O → [Co(NH3)5(H2O)]3+ + Br−
This aquation reaction is catalyzed both by acid and by base. Acid catalysis involves protonation of the bromide, converting it to a better leaving group. Base hydrolysis proceeds by the SN1cB mechanism, which begins with deprotonation of an ammonia ligand.
See also
Hydration reaction
References
Substitution reactions
Coordination chemistry
Reaction mechanisms
Water chemistry | Aquation | [
"Chemistry"
] | 188 | [
"Reaction mechanisms",
"Coordination chemistry",
"nan",
"Physical organic chemistry",
"Chemical kinetics"
] |
48,565,132 | https://en.wikipedia.org/wiki/Cooling%20load | Cooling load is the rate at which sensible and latent heat must be removed from a space to maintain a constant space dry-bulb air temperature and humidity. Sensible heat into the space causes its air temperature to rise, while latent heat is associated with the rise of the moisture content in the space. The building design, internal equipment, occupants, and outdoor weather conditions may affect the cooling load in a building through different heat transfer mechanisms. The SI unit is the watt.
Overview
The cooling load is calculated to select HVAC equipment that has the appropriate cooling capacity to remove heat from the zone. A zone is typically defined as an area with similar heat gains, similar temperature and humidity control requirements, or an enclosed space within a building with the purpose to monitor and control the zone's temperature and humidity with a single sensor e.g. thermostat. Cooling load calculation methodologies take into account heat transfer by conduction, convection, and radiation. Methodologies include heat balance, radiant time series, cooling load temperature difference, transfer function, and sol-air temperature. Methods calculate the cooling load in either steady state or dynamic conditions and some can be more involved than others. These methodologies and others can be found in ASHRAE handbooks, ISO Standard 11855, European Standard (EN) 15243, and EN 15255. ASHRAE recommends the heat balance method and radiant time series methods.
Differentiation from heat gains
The cooling load of a building should not be confused with its heat gains. Heat gains refer to the rate at which heat is transferred into or generated inside a building. Just like cooling loads, heat gains can be separated into sensible and latent heat gains that can occur through conduction, convection, and radiation. Thermophysical properties of walls, floors, ceilings, and windows, lighting power density (LPD), plug load density, occupant density, and equipment efficiency play an important role in determining the magnitude of heat gains in a building. ASHRAE handbook of fundamentals refers to the following six modes of entry for heat gains:
Solar radiation through transparent surfaces
Heat conduction through exterior walls and roofs
Heat conduction through ceilings, floors, and interior partitions
Heat generated in the space by occupants, lights, and appliances
Energy transfer through direct-with-space ventilation and infiltration of outdoor air
Miscellaneous heat gains
Furthermore, the heat extraction rate is the rate at which heat is actually being removed from the space by the cooling equipment. Heat gain, heat extraction rate, and cooling load values are often not equal due to thermal inertia effects: heat is stored in the mass of the building and furnishings, delaying the time at which it becomes a heat gain that can be extracted by the cooling equipment to maintain the desired indoor conditions. Another reason is the inability of the cooling system to keep the dry-bulb temperature and humidity constant.
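As an illustrative example of the sensible/latent split (a minimal sketch with assumed, hypothetical inputs, not a method prescribed by the source), the loads contributed by outdoor ventilation air can be estimated from the airflow rate, the temperature difference, and the humidity-ratio difference:

```python
# Sensible and latent cooling load of a ventilation airstream (SI units)
RHO_AIR = 1.2     # air density, kg/m^3 (approximate, near sea level)
CP_AIR = 1006.0   # specific heat of air, J/(kg*K)
H_FG = 2.45e6     # latent heat of vaporization of water, J/kg (~25 degC)

def ventilation_loads(flow_m3_s, t_out, t_in, w_out, w_in):
    """Return (sensible, latent) loads in watts for outdoor-air ventilation.

    flow_m3_s: volumetric airflow, m^3/s
    t_out, t_in: outdoor / indoor dry-bulb temperatures, degC
    w_out, w_in: outdoor / indoor humidity ratios, kg water per kg dry air
    """
    m_dot = RHO_AIR * flow_m3_s                   # air mass flow, kg/s
    q_sensible = m_dot * CP_AIR * (t_out - t_in)  # W
    q_latent = m_dot * H_FG * (w_out - w_in)      # W
    return q_sensible, q_latent

# Assumed example: 0.5 m^3/s of 32 degC, humid outdoor air into a 24 degC zone
qs, ql = ventilation_loads(0.5, t_out=32.0, t_in=24.0, w_out=0.016, w_in=0.010)
print(f"sensible: {qs:.0f} W, latent: {ql:.0f} W")
```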
Cooling loads in air systems
In air systems, convective heat gains are assumed to become a cooling load instantly. Radiative heat gains are absorbed by walls, floors, ceilings, and furnishings causing an increase in their temperature which will then transfer heat to the space's air by convection. Conductive heat gains are converted to convective and radiative heat gains. If the space's air temperature and humidity are kept constant then heat extraction rate and space cooling load are equal. The resulting cooling load through different air system types in the same built environment can be different.
Cooling loads in radiant systems
In radiant systems, not all convective heat gains become a cooling load instantly, because a radiant system has limitations on how much heat can be removed from the zone through convection. Radiative heat gains are absorbed by active and non-active cooling surfaces. If absorbed by active surfaces, the heat gains become an instant cooling load; otherwise a temperature increase occurs in the non-active surface, which eventually causes heat transfer to the space by convection and radiation.
References
Heating, ventilation, and air conditioning
Building engineering
Heat transfer | Cooling load | [
"Physics",
"Chemistry",
"Engineering"
] | 797 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Building engineering",
"Civil engineering",
"Thermodynamics",
"Architecture"
] |
56,710,028 | https://en.wikipedia.org/wiki/Rydberg%20polaron | A Rydberg polaron is an exotic quasiparticle, created at low temperatures, in which a very large atom contains other ordinary atoms in the space between the nucleus and the electrons. For the formation of this atom, scientists had to combine two fields of atomic physics: Bose–Einstein condensates and Rydberg atoms. Rydberg atoms are formed by exciting a single atom into a high-energy state, in which the electron is very far from the nucleus. Bose–Einstein condensates are a state of matter that is produced at temperatures close to absolute zero.
Polarons are induced by using a laser to excite Rydberg atoms contained as impurities in a Bose–Einstein condensate. In those Rydberg atoms, the average distance between the electron and its nucleus can be as large as several hundred nanometres, which is more than a thousand times the radius of a hydrogen atom. Under these circumstances, the distance between the nucleus and the electron of the excited Rydberg atoms is greater than the average distance between the atoms of the condensate. As a result, some atoms lie inside the orbit of the Rydberg atom's electron.
As the atoms have no electric charge, they exert only a minimal force on the electron. However, the electron is slightly scattered by the neutral atoms, without leaving its orbit, and the weak bond generated between the Rydberg atom and the atoms inside it, tying them together, is known as the Rydberg polaron. The excitation was predicted by theorists at Harvard University in 2016 and confirmed in 2018 by spectroscopy in an experiment using a strontium Bose–Einstein condensate. Theoretically, up to 170 ordinary strontium atoms could fit closely inside the new orbital of the Rydberg atom, depending on the radius of the Rydberg atom and the density of the Bose–Einstein condensate. The theoretical work for the experiment was performed by theorists at Vienna University of Technology and Harvard University, while the actual experiment and observation took place at Rice University in Houston, Texas.
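The scale of these figures can be checked with a back-of-the-envelope estimate. The sketch below assumes a hydrogen-like orbit radius r ~ n^2 * a0 and a uniform condensate; the principal quantum number and density are illustrative stand-ins, not values from the actual experiment.

```python
import math

A0 = 5.29e-11  # Bohr radius [m]

def atoms_inside_rydberg_orbit(n, density):
    """Hydrogen-like orbit radius r ~ n^2 * a0, and the number of ground-state
    atoms of a uniform condensate (number density in m^-3) inside that orbit."""
    r = n ** 2 * A0
    return r, density * (4.0 / 3.0) * math.pi * r ** 3

r, count = atoms_inside_rydberg_orbit(n=100, density=1e20)
print(f"orbit radius ~ {r * 1e9:.0f} nm, ~{count:.0f} condensate atoms inside")
# ~529 nm and ~62 atoms: hundreds of nanometres, tens to hundreds of atoms
```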
See also
Bose–Einstein condensates
Rydberg atoms
References
Condensed matter physics
Exotic matter
Quasiparticles | Rydberg polaron | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 441 | [
"Matter",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Exotic matter",
"Quasiparticles",
"Subatomic particles"
] |
56,715,856 | https://en.wikipedia.org/wiki/Triplet-triplet%20annihilation | Triplet-triplet annihilation (TTA) is an energy transfer mechanism where two molecules in their triplet excited states interact to form a ground state molecule and an excited molecule in its singlet state. This mechanism is example of Dexter energy transfer mechanism. In triplet-triplet annihilation, one molecule transfers its excited state energy to the second molecule, resulting in the first molecule returning to its ground state and the second molecule being promoted to a higher excited singlet state.
Triplet-triplet annihilation was first discovered in the 1960s to explain the observation of delayed fluorescence in anthracene derivatives.
Photon upconversion
Triplet-triplet annihilation combines the energy of two triplet-excited molecules onto one molecule to produce a higher excited state. Since the higher excited state is an emissive singlet state, TTA can be used to achieve photon upconversion which is a process that converts the energy of two photons into one photon of higher energy.
To achieve photon upconversion through triplet-triplet annihilation two types of molecules are often combined: a sensitizer and an emitter (annihilator). The sensitizer absorbs the low energy photon and populates its first excited triplet state (T1) through intersystem crossing. The sensitizer then transfers the excitation energy to the emitter, resulting in a triplet excited emitter and a ground state sensitizer. Two triplet-excited emitters can then undergo triplet-triplet annihilation to produce a singlet excited state (S1) of the emitter, which can emit an upconverted photon.
Requirements
For efficient TTA upconversion, the sensitizer should absorb strongly in the desired excitation range and have high conversion efficiency from the singlet excited state to the triplet excited state. The emitter should have a singlet energy level just below twice the energy of the first triplet excited state. Both the emitter and sensitizer should have long triplet-state lifetimes so that the TTA mechanism has enough time to occur.
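The singlet-level requirement amounts to a one-line energy check, sketched below; the energies are illustrative values roughly matching those quoted for anthracene-type emitters such as DPA, not numbers taken from this article.

```python
def tta_upconversion_feasible(triplet_eV, singlet_eV):
    """Energy condition from the text: the emitter's singlet level must lie
    just below twice the energy of its first triplet excited state."""
    return singlet_eV <= 2.0 * triplet_eV

# Illustrative, DPA-like emitter levels (roughly T1 ~ 1.8 eV, S1 ~ 3.1 eV):
print(tta_upconversion_feasible(triplet_eV=1.8, singlet_eV=3.1))  # True
```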
Applications
Triplet-triplet annihilation upconversion (TTA-UC) materials have the advantages of needing low excitation power and having changeable emission and excitation light wavelengths. Due to these advantages, many applications of TTA-UC materials have been explored.
Solar cells
Solar cells are electrical devices that convert sunlight to electricity. However, due to the properties of the materials composing solar cells, many solar cells do not efficiently harvest low-energy photons (with wavelengths above 800 nm). Thus, the ability of TTA-UC materials to combine the energy of two low-energy photons into one high-energy photon is desirable to capture more of the energy in sunlight.
Organic light-emitting diodes
Light-emitting materials that can convert non-emissive triplet states into emissive singlet states are crucial in organic light-emitting diodes (OLEDs) as, statistically, 75% of the excited states formed in an OLED are triplet states. TTA materials are well suited to use in OLEDs due to their low operational voltage, small drop-off in efficiency when increasing brightness, and low cost. However, most TTA materials emit photons that are blue to deep blue, which limits their applications in OLEDs until the colour variety diversifies.
Cancer therapy
In photolysis cancer therapy, light is used to selectively break bonds, releasing and activating a target drug molecule. The drug molecule can be released near or in tumour sites to combat the disease. TTA-UC materials that can be excited by near-infrared light are desirable for this application since near-infrared light penetrates tissue well.
References
Spectroscopy
Atomic physics | Triplet-triplet annihilation | [
"Physics",
"Chemistry"
] | 785 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
42,272,365 | https://en.wikipedia.org/wiki/Gausson%20%28physics%29 | The Gausson is a soliton which is the solution of the logarithmic Schrödinger equation, which describes a quantum particle in a possible nonlinear quantum mechanics. The logarithmic Schrödinger equation preserves
the dimensional homogeneity of the equation, i.e. the product of the independent solutions in one dimension remain the solution in multiple dimensions.
While the nonlinearity alone cannot cause the quantum entanglement between dimensions, the logarithmic Schrödinger equation can be solved by the separation of variables.
Let the nonlinear logarithmic Schrödinger equation in one dimension be given by (unit mass):
Assume Galilean invariance, i.e.
Substituting
The first equation can be written as
Substituting additionally
and assuming
we get the normal Schrödinger equation for the quantum harmonic oscillator:
The solution is therefore the normal ground state of the harmonic oscillator if only
or
The full solitonic solution is therefore given by
where
This solution describes a soliton moving with constant velocity without changing the shape (modulus) of its Gaussian envelope. When a potential is added, not only can a single Gausson provide an exact solution to a number of cases of the logarithmic Schrödinger equation, but a linear combination of Gaussons has also been found to approximate excited states very accurately.
This superposition property of Gaussons has been demonstrated for quadratic
potentials.
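Since the displayed formulas above did not survive, the following LaTeX fragment records the standard form of the equation and its Gausson solution as commonly given in the literature; conventions for the constants vary between authors, so this is a reference sketch rather than a reconstruction of the original notation.

```latex
% Logarithmic Schrodinger equation in one dimension (hbar = m = 1) and its
% Gausson solution in one common convention; constants differ between authors.
\[
  i\,\partial_t \psi = -\tfrac{1}{2}\,\partial_x^2 \psi
                       - b\,\ln\!\bigl(|\psi|^2\bigr)\,\psi ,
\]
\[
  \psi(x,t) = A\,\exp\!\bigl[\, i v x - i \omega t - b\,(x - v t)^2 \,\bigr],
  \qquad \omega = \tfrac{v^2}{2} + b - 2 b \ln A ,
\]
% where the width parameter b and the frequency omega follow by direct
% substitution of the Gaussian ansatz into the equation.
```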
References
Quantum mechanics | Gausson (physics) | [
"Physics"
] | 317 | [
"Theoretical physics",
"Quantum mechanics"
] |
42,279,970 | https://en.wikipedia.org/wiki/Andreotti%E2%80%93Norguet%20formula | The Andreotti–Norguet formula, first introduced by , is a higher–dimensional analogue of Cauchy integral formula for expressing the derivatives of a holomorphic function. Precisely, this formula express the value of the partial derivative of any multiindex order of a holomorphic function of several variables, in any interior point of a given bounded domain, as a hypersurface integral of the values of the function on the boundary of the domain itself. In this respect, it is analogous and generalizes the Bochner–Martinelli formula, reducing to it when the absolute value of the multiindex order of differentiation is . When considered for functions of complex variables, it reduces to the ordinary Cauchy formula for the derivative of a holomorphic function: however, when , its integral kernel is not obtainable by simple differentiation of the Bochner–Martinelli kernel.
Historical note
The Andreotti–Norguet formula was first published in the research announcement : however, its full proof was only published later in the paper . Another, different proof of the formula was given by . In 1977 and 1978, Lev Aizenberg gave still another proof and a generalization of the formula, based on the Cauchy–Fantappiè–Leray kernel instead of the Bochner–Martinelli kernel.
The Andreotti–Norguet integral representation formula
Notation
The notation adopted in the following description of the integral representation formula is the one used by and by : the notations used in the original works and in other references, though equivalent, are significantly different. Precisely, it is assumed that
is a fixed natural number,
are complex vectors,
is a multiindex whose absolute value is ,
is a bounded domain whose closure is ,
is the function space of functions holomorphic on the interior of and continuous on its boundary .
the iterated Wirtinger derivatives of order of a given complex valued function are expressed using the following simplified notation:
The Andreotti–Norguet kernel
For every multiindex , the Andreotti–Norguet kernel is the following differential form in of bidegree :
where and
The integral formula
For every function , every point and every multiindex , the following integral representation formula holds
See also
Bergman–Weil formula
Notes
References
, revised translation of the 1990 Russian original.
. Collection of articles dedicated to Giovanni Sansone on the occasion of his eighty-fifth birthday.
. The notes form a course, published by the Accademia Nazionale dei Lincei, held by Martinelli during his stay at the Accademia as "Professore Linceo".
Theorems in complex analysis
Several complex variables | Andreotti–Norguet formula | [
"Mathematics"
] | 554 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Several complex variables",
"Theorems in complex analysis",
"Mathematical objects",
"Mathematical relations"
] |
42,283,768 | https://en.wikipedia.org/wiki/Doctor%20in%20a%20cell | Doctor in a cell is an advanced biotechnology concept introduced in 1998 by Ehud Shapiro of the Weizmann Institute. It envisions autonomous, programmable molecular devices within the human body performing diagnostic and therapeutic functions. The initial design proposed molecular Turing machines capable of replacing traditional drugs with "smart drugs" composed of molecular computing devices programmed with medical knowledge to analyze their environment and release appropriate treatments.
To realize this vision, Shapiro and colleague Kobi Benenson developed molecular implementations at the Weizmann Institute. These included a programmable autonomous automaton with DNA-encoded input and transition rules, and a stochastic automaton that interacts with its environment to release drug molecules in response to cancer markers.
In 2009, Shapiro and PhD student Tom Ran created a prototype molecular system capable of simple logical deductions using DNA strands. This system represented the first molecular-scale implementation of a simple programming language, demonstrating potential for precise targeting and treatment of specific cell types by performing millions of calculations simultaneously.
Further advancements aimed to make DNA computing devices accessible through a compiler bridging high-level programming languages with DNA computing code. Shapiro and Ran also developed a genetic device operating within bacterial cells to identify transcription factors and produce visible markers or therapeutic proteins.
Initial work
The concept was first presented in 1998 by Ehud Shapiro from the Weizmann Institute as a conceptual design for an autonomous, programmable molecular Turing machine, realized at the time as a mechanical device, and a vision of how such machines can cause a revolution in medicine.
The vision suggested that smart drugs, made of autonomous molecular computing devices, programmed with medical knowledge, could supplant present day drugs by analyzing the molecular state of their environment (input) based on programmed medical knowledge (program), and if deemed necessary release a drug molecule in response (output).
First steps towards realization of the vision
To realize this vision, Shapiro set up a wet lab at Weizmann and was joined by Kobi Benenson. Within a few years, Benenson, Shapiro and colleagues made steps towards realizing this vision: (1) a molecular implementation of a programmable autonomous automaton, in which the input was encoded as a DNA molecule, the “software” (automaton transition rules) was encoded by short DNA molecules, and the “hardware” was made of DNA-processing enzymes; (2) a simplified implementation of an automaton in which the DNA input molecule is used as fuel; (3) a stochastic molecular automaton in which transition probabilities can be programmed by varying the concentration of “software” molecules, specifically the relative concentrations of molecules encoding competing transition rules; and (4) an extension of the stochastic automaton with input and output mechanisms, allowing it to interact with the environment in a pre-programmed way and release a specific drug molecule for cancer upon detecting expression levels of mRNA characteristic of a specific cancer. These biomolecular computers were demonstrated in a test tube, wherein a number of cancer markers were pre-mixed to emulate different marker combinations, and they identified the presence of cancer markers (simultaneously and independently identifying small-cell lung cancer and prostate cancer markers). The computer, equipped with medical knowledge, analysed the situation, diagnosed the type of cancer and then released the appropriate drug.
DNA computers capable of simple logical deductions
In 2009, Shapiro and PhD student Tom Ran presented the prototype of an autonomous programmable molecular system, based on the manipulation of DNA strands, which is capable of performing simple logical deductions. This prototype is the first simple programming language implemented at the molecular scale. Introduced into the body, this system has immense potential to accurately target specific cell types and administer the appropriate treatment, as it can perform millions of calculations at the same time and ‘think’ logically. Prof. Shapiro’s team aims to make these computers perform highly complex actions and answer complicated questions, following a logical model first proposed by Aristotle over 2000 years ago. The biomolecular computers are extremely small: three trillion computers can fit into a single drop of water. If the computers were given the rule ‘All men are mortal’ and the fact ‘Socrates is a man’, they would answer ‘Socrates is mortal’. Multiple rules and facts were tested by the team, and the biomolecular computers answered them correctly each time.
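The Aristotelian deduction described above can be mimicked in a few lines of forward chaining. The sketch below is a software analogue only, with invented names; it says nothing about how the molecular implementation encodes rules and facts.

```python
def deduce(rules, facts):
    """Forward chaining over 'all X are Y' rules, the kind of syllogism the
    text says the DNA computers answered (a software analogue only)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for x, y in rules:                      # rule: all x are y
            new = {(who, y) for who, what in facts if what == x}
            if not new <= facts:                # any genuinely new conclusions?
                facts |= new
                changed = True
    return facts

rules = [("man", "mortal")]                     # "All men are mortal"
facts = [("Socrates", "man")]                   # "Socrates is a man"
print(("Socrates", "mortal") in deduce(rules, facts))  # True: Socrates is mortal
```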
‘User-friendly’ DNA computers
The team has also found a way to make these microscopic computing devices ‘user-friendly’ by creating a compiler – a program for bridging between a high-level computer programming language and DNA computing code. They sought to develop a hybrid in silico/in vitro system that supports the creation and execution of molecular logic programs in a similar way to electronic computers, enabling anyone who knows how to operate an electronic computer, with absolutely no background in molecular biology, to operate a biomolecular computer.
DNA computers via computing bacteria
In 2012, Prof. Ehud Shapiro and Dr. Tom Ran succeeded in creating a genetic device that operates independently in bacterial cells. The device has been programmed to identify certain parameters and mount an appropriate response. The device searches for transcription factors, proteins that control the expression of genes in the cell. A malfunction of these molecules can disrupt gene expression. In cancer cells, for example, the transcription factors regulating cell growth and division do not function properly, leading to increased cell division and the formation of a tumor. The device, composed of a DNA sequence inserted into a bacterium, performs a "roll call" of transcription factors. If the results match pre-programmed parameters, it responds by creating a protein that emits a green light, supplying a visible sign of a "positive" diagnosis. In follow-up research, the scientists plan to replace the light-emitting protein with one that will affect the cell's fate, for example, a protein that can cause the cell to commit suicide. In this manner, the device will cause only "positively" diagnosed cells to self-destruct. Following the success of the study in bacterial cells, the researchers are planning to test ways of recruiting such bacteria as an efficient system to be conveniently inserted into the human body for medical purposes (which shouldn't be problematic given our natural microbiome; recent research reveals that bacterial cells in the human body already outnumber human cells roughly ten to one, sharing our body space in a symbiotic fashion). Yet another research goal is to operate a similar system inside human cells, which are much more complex than bacteria.
References
Implants (medicine)
Medical technology
DNA nanotechnology
Israeli inventions | Doctor in a cell | [
"Materials_science",
"Biology"
] | 1,337 | [
"Nanotechnology",
"DNA nanotechnology",
"Medical technology"
] |
32,456,459 | https://en.wikipedia.org/wiki/Hopper%20%28particulate%20collection%20container%29 | A hopper is a large, inverted pyramidal or conical container used in industrial processes to hold particulate matter or flowable material of any sort (e.g. dust, gravel, nuts, or seeds) and dispense these from the bottom when needed. In some specialized applications even small metal or plastic assembly components can be loaded and dispensed by small hopper systems. In the case of dust collection hoppers the dust can be collected from expelled air. Hoppers for dust collection are often installed in groups to allow for a greater collection quantity. Hoppers are used in many industries to hold material until it is needed, such as flour, sugar or nuts for food manufacturing, food pellets for livestock, crushed ores for refining, etc. Dust hoppers are employed in industrial processes that use air pollution control devices such as dust collectors, electrostatic precipitators, and baghouses/fabric filters. Most hoppers are made of steel.
Process
Materials can be added either manually or automatically to the top of a hopper. For dust collection, the dust enters the hopper from a collection device. For example, baghouses are shaken or blown with compressed air to release caked-on dust from the bag. Precipitators use a rapping system to release the dirt. The crumbling dust falls into the hopper. Once the material in the hopper reaches capacity, it is released through an opening in the bottom with a diameter of about . Hoppers are rectangular or circular in cross section but have sides that slope at about a 60° angle. Slanted sides make it easier for the contained material to flow out. Conveyors are sometimes used to carry away the particulate matter.
Important components
Hopper walls are often insulated to protect the outside environment and personnel from the contents. Often, the bottom 1/4 – 1/3 of the container is heated to eliminate the possibility of condensation inside the hopper.
The greatest difficulty associated with the removal of very fine material, like flour or dust, from a hopper is the compaction of the material. Moisture content, particle shape and size, and vibration are all factors that contribute to compaction. Typically, with such fine materials, vibrators are installed on the outer walls of a hopper to shake and release the material; however, other kinds of discharge aids can be considered if necessary.
References
Air pollution control systems
Material handling | Hopper (particulate collection container) | [
"Physics"
] | 483 | [
"Materials",
"Material handling",
"Matter"
] |
32,457,161 | https://en.wikipedia.org/wiki/Shelling%20%28topology%29 | In mathematics, a shelling of a simplicial complex is a way of gluing it together from its maximal simplices (simplices that are not a face of another simplex) in a well-behaved way. A complex admitting a shelling is called shellable.
Definition
A d-dimensional simplicial complex is called pure if its maximal simplices all have dimension d. Let be a finite or countably infinite simplicial complex. An ordering of the maximal simplices of is a shelling if, for all , the complex
is pure and of dimension one smaller than . That is, the "new" simplex meets the previous simplices along some union of top-dimensional simplices of the boundary of . If is the entire boundary of then is called spanning.
For not necessarily countable, one can define a shelling as a well-ordering of the maximal simplices of having analogous properties.
Properties
A shellable complex is homotopy equivalent to a wedge sum of spheres, one for each spanning simplex of corresponding dimension.
A shellable complex may admit many different shellings, but the number of spanning simplices and their dimensions do not depend on the choice of shelling. This follows from the previous property.
Examples
Every Coxeter complex, and more generally every building (in the sense of Tits), is shellable.
The boundary complex of a (convex) polytope is shellable. Note that here, shellability is generalized to the case of polyhedral complexes (that are not necessarily simplicial).
There is an unshellable triangulation of the tetrahedron.
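The definition can be checked mechanically on small complexes. The sketch below is a minimal Python check of the shelling condition for pure complexes; it verifies that the four triangles bounding a tetrahedron, a shellable polytope boundary per the example above, admit a shelling in the order given. The function names and representation are invented for illustration.

```python
from itertools import combinations

def faces(simplex):
    """All nonempty proper faces of a simplex, as frozensets of vertices."""
    s = list(simplex)
    return {frozenset(c) for r in range(1, len(s)) for c in combinations(s, r)}

def is_shelling(order):
    """Check the shelling condition for an ordering of the maximal simplices
    of a pure d-dimensional complex: each new simplex must meet the union of
    its predecessors in a pure (d-1)-dimensional subcomplex."""
    d = len(order[0]) - 1
    seen = set()
    for k, simplex in enumerate(order):
        if k > 0:
            common = faces(simplex) & seen
            maximal = [f for f in common if not any(f < g for g in common)]
            # each maximal face of the intersection must have d vertices,
            # i.e. dimension d-1
            if not maximal or any(len(f) != d for f in maximal):
                return False
        seen |= faces(simplex)
    return True

# Boundary of the tetrahedron on vertices 1..4: four triangles.
print(is_shelling([(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]))  # True
```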
Notes
References
Algebraic topology
Properties of topological spaces
Topology | Shelling (topology) | [
"Physics",
"Mathematics"
] | 353 | [
"Properties of topological spaces",
"Space (mathematics)",
"Algebraic topology",
"Topological spaces",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
32,458,543 | https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy%20of%20nucleic%20acids | Nucleic acid NMR is the use of nuclear magnetic resonance spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. It is useful for molecules of up to 100 nucleotides, and as of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy.
NMR has advantages over X-ray crystallography, which is the other method for high-resolution nucleic acid structure determination, in that the molecules are being observed in their natural solution state rather than in a crystal lattice that may affect the molecule's structural properties. It is also possible to investigate dynamics with NMR. This comes at the cost of slightly less accurate and detailed structures than crystallography.
Nucleic acid NMR uses techniques similar to those of protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. Nucleic acids also tend to have resonances distributed over a smaller range than proteins, making the spectra potentially more crowded and difficult to interpret.
Experimental methods
Two-dimensional NMR methods are almost always used with nucleic acids. These include correlation spectroscopy (COSY) and total correlation spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. The types of NMR usually done with nucleic acids are 1H NMR, 13C NMR, 15N NMR, and 31P NMR. 19F NMR is also useful if nonnatural nucleotides such as 2'-fluoro-2'-deoxyadenosine are incorporated into the nucleic acid strand, as natural nucleic acids do not contain any fluorine atoms.
1H and 31P have near 100% natural abundance, while 13C and 15N have low natural abundances. For these latter two nuclei, there is the capability of isotopically enriching desired atoms within the molecules, either uniformly or in a site-specific manner. Nucleotides uniformly enriched in 13C and/or 15N can be obtained through biochemical methods, by performing polymerase chain reaction using dNTPs or NTPs derived from bacteria grown in an isotopically enriched environment. Site-specific isotope enrichment must be done through chemical synthesis of the labeled nucleoside phosphoramidite monomer and of the full strand; however these are difficult and expensive to synthesize.
Because nucleic acids have a relatively large number of protons which are solvent-exchangeable, nucleic acid NMR is generally not done in D2O solvent as is common with other types of NMR. This is because the deuterium in the solvent would replace the exchangeable protons and extinguish their signal. H2O is used as a solvent, and other methods are used to eliminate the strong solvent signal, such as saturating the solvent signal before the normal pulse sequence ("presaturation"), which works best at low temperature to prevent exchange of the saturated solvent protons with the nucleic acid protons; or exciting only resonances of interest ("selective excitation"), which has the additional, potentially undesired effect of distorting the peak amplitudes.
Structure determination
The exchangeable and non-exchangeable protons are usually assigned to their specific peaks as two independent groups. For exchangeable protons, which are for the most part the protons involved in base pairing, NOESY can be used to find through-space correlations between protons on neighboring bases, allowing an entire duplex molecule to be assigned through sequential walking. For nonexchangeable protons, many of which are on the sugar moiety of the nucleic acid, COSY and TOCSY are used to identify systems of coupled nuclei, while NOESY is again used to correlate the sugar to the base and each base to its neighboring base. For duplex DNA, the nonexchangeable H6/H8 protons on each base correlate to their counterparts on neighboring bases and to the H1' proton on the sugar, allowing sequential walking to be done. For RNA, the differences in chemical structure and helix geometry make this assignment more technically difficult, but still possible. The sequential walking methodology is not possible for non-double helical nucleic acid structures, nor for the Z-DNA form, making assignment of resonances more difficult.
Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation), and sugar pucker conformations. The presence or absence of imino proton resonances, or of coupling between 15N atoms across a hydrogen bond, indicates the presence or absence of basepairing. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. However, long-range orientation information can be obtained through residual dipolar coupling experiments in a medium which imposes a weak alignment on the nucleic acid molecules.
Recently, solid-state NMR methodology has been introduced for the structure determination of nucleic acids. The protocol implies two approaches: nucleotide-type selective labeling of RNA and usage of heteronuclear correlation experiments.
NMR is also useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick basepairing, and coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots. Interactions between RNA and metal ions can be probed by a number of methods, including observing changes in chemical shift upon ion binding, observing line broadening for paramagnetic ion species, and observing intermolecular NOE contacts for organometallic mimics of the metal ions. NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs. This can be done by chemical-shift mapping, observing which resonances shift upon binding of the other molecule, or by cross-saturation experiments where one of the binding molecules is selectively saturated and, if bound, the saturation transfers to the other molecule in the complex.
Dynamic properties such as duplex–single strand equilibria and binding rates of other molecules to duplexes can also be determined by their effect on the spin–lattice relaxation time T1, but these methods are insensitive to intermediate rates of 10^4–10^8 s^−1, which must be investigated with other methods such as solid-state NMR. Dynamics of mechanical properties of a nucleic acid double helix such as bending and twisting can also be studied using NMR. Pulsed field gradient NMR experiments can be used to measure diffusion constants.
History
Nucleic acid NMR studies were performed as early as 1971, and focused on using the low-field imino proton resonances to probe base pairing interactions. These early studies focused on tRNA because these nucleic acids were the only samples available at that time with low enough molecular weight that the NMR spectral line-widths were practical. The studies focused on the low-field protons because they were the only protons that could be reliably observed in aqueous solution using the best spectrometers available at that time. It was quickly realized that spectra of the low-field imino protons were providing clues to the tertiary structure of tRNA in solution. The first NMR spectrum of a double-helical DNA was published in 1977 using a synthetic, 30-base-pair double helix. To overcome severe line-broadening in native DNA, shear-degraded natural DNA was prepared and studied to learn about the persistence length of double-helical DNA. At the same time, nucleosome core particles were studied to gain further insight into the flexibility of the double helix. The first NMR spectra reported for a uniform low-molecular-weight native-sequence DNA, made with restriction enzymes, appeared in 1981. This work was also the first report of nucleic acid NMR spectra obtained at high field. Two-dimensional NMR studies began to be reported in 1982 and then, with the advent of oligonucleotide synthesis and more sophisticated instrumentation, many detailed structural studies were reported starting in 1983.
References
Nuclear magnetic resonance spectroscopy
Nucleic acids
Biophysics | Nuclear magnetic resonance spectroscopy of nucleic acids | [
"Physics",
"Chemistry",
"Biology"
] | 1,840 | [
"Biomolecules by chemical classification",
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance spectroscopy",
"Biophysics",
"Spectroscopy",
"Nucleic acids"
] |
32,463,736 | https://en.wikipedia.org/wiki/Cold-shock%20domain | In molecular biology, the cold-shock domain (CSD) is a protein domain of about 70 amino acids which has been found in prokaryotic and eukaryotic DNA-binding proteins. Part of this domain is highly similar to the RNP-1 RNA-binding motif.
When Escherichia coli is exposed to a temperature drop from 37 to 10 degrees Celsius, a 4–5 hour lag phase occurs, after which growth is resumed at a reduced rate. During the lag phase, the expression of around 13 proteins containing cold-shock domains is increased 2–10 fold. These so-called cold shock proteins induced in the cold shock response are thought to help the cell to survive at temperatures lower than the optimum growth temperature, by contrast with heat shock proteins induced in the heat shock response, which help the cell to survive at temperatures greater than the optimum, possibly by condensation of the chromosome and organisation of the prokaryotic nucleoid.
References
Protein domains | Cold-shock domain | [
"Biology"
] | 207 | [
"Protein domains",
"Protein classification"
] |
32,463,873 | https://en.wikipedia.org/wiki/Copper%20type%20II%20ascorbate-dependent%20monooxygenase | In molecular biology, the copper type II ascorbate-dependent monooxygenases are a class of enzymes that require copper as a cofactor and which use ascorbate as an electron donor. This family contains two related enzymes, dopamine beta-monooxygenase and peptidylglycine alpha-amidating monooxygenase . There are a few regions of sequence similarities between these two enzymes, two of these regions contain clusters of conserved histidine residues which are most probably involved in binding copper.
References
External links
Protein domains | Copper type II ascorbate-dependent monooxygenase | [
"Biology"
] | 115 | [
"Protein domains",
"Protein classification"
] |
36,650,277 | https://en.wikipedia.org/wiki/Stress%20resultants | Stress resultants are simplified representations of the stress state in structural elements such as beams, plates, or shells. The geometry of typical structural elements allows the internal stress state to be simplified because of the existence of a "thickness'" direction in which the size of the element is much smaller than in other directions. As a consequence the three traction components that vary from point to point in a cross-section can be replaced with a set of resultant forces and resultant moments. These are the stress resultants (also called membrane forces, shear forces, and bending moment) that may be used to determine the detailed stress state in the structural element. A three-dimensional problem can then be reduced to a one-dimensional problem (for beams) or a two-dimensional problem (for plates and shells).
Stress resultants are defined as integrals of stress over the thickness of a structural element. The integrals are weighted by integer powers of the thickness coordinate z (or x3). Stress resultants are defined so as to represent the effect of stress as a membrane force N (zeroth power of z) and a bending moment M (first power of z) on a beam or shell. Stress resultants are necessary to eliminate the z-dependency of the stress from the equations of the theory of plates and shells.
Stress resultants in beams
Consider the element shown in the adjacent figure. Assume that the thickness direction is x3. If the element has been extracted from a beam, the width and thickness are comparable in size. Let x2 be the width direction. Then x1 is the length direction.
Membrane and shear forces
The resultant force vector due to the traction in the cross-section (A) perpendicular to the x1 axis is
where e1, e2, e3 are the unit vectors along x1, x2, and x3, respectively. We define the stress resultants such that
where N11 is the membrane force and V2, V3 are the shear forces. More explicitly, for a beam of height t and width b,
Similarly the shear force resultants are
Bending moments
The bending moment vector due to stresses in the cross-section A perpendicular to the x1-axis is given by
Expanding this expression we have,
We can write the bending moment resultant components as
Stress resultants in plates and shells
For plates and shells, the x1 and x2 dimensions are much larger than the size in the x3 direction. Integration over the area of cross-section would have to include one of the larger dimensions and would lead to a model that is too simple for practical calculations. For this reason the stresses are only integrated through the thickness and the stress resultants are typically expressed in units of force per unit length (or moment per unit length) instead of the true force and moment as is the case for beams.
Membrane and shear forces
For plates and shells we have to consider two cross-sections. The first is perpendicular to the x1 axis and the second is perpendicular to the x2 axis. Following the same procedure as for beams, and keeping in mind that the resultants are now per unit length, we have
We can write the above as
where the membrane forces are defined as
and the shear forces are defined as
Bending moments
For the bending moment resultants, we have
where r = x3 e3.
Expanding these expressions we have,
Define the bending moment resultants such that
Then, the bending moment resultants are given by
These are the resultants that are often found in the literature but care has to be taken to make sure that the signs are correctly interpreted.
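As a numerical illustration of these definitions, the sketch below integrates a through-thickness stress distribution to obtain the membrane force and bending moment per unit width; the linear pure-bending stress profile and all numbers are assumptions chosen for illustration.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule (kept explicit to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def stress_resultants(sigma, z):
    """Membrane force N and bending moment M per unit width: the zeroth and
    first moments of the through-thickness stress distribution sigma(z)."""
    return trapezoid(sigma, z), trapezoid(sigma * z, z)

t = 0.01                                # plate thickness [m]
z = np.linspace(-t / 2, t / 2, 201)     # thickness coordinate x3
sigma11 = 200e6 * z / (t / 2)           # pure bending: +/-200 MPa at surfaces

N11, M11 = stress_resultants(sigma11, z)
print(N11)   # ~0 N/m: a pure bending distribution carries no membrane force
print(M11)   # ~3333 N*m/m, i.e. sigma_max * t**2 / 6 per unit width
```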
See also
Shear force
Bending moment
Plate theory
Bending of plates
Kirchhoff–Love plate theory
Mindlin–Reissner plate theory
Vibration of plates
References
Continuum mechanics
Mechanics
Solid mechanics
Composite materials | Stress resultants | [
"Physics",
"Engineering"
] | 756 | [
"Solid mechanics",
"Continuum mechanics",
"Composite materials",
"Classical mechanics",
"Materials",
"Mechanics",
"Mechanical engineering",
"Matter"
] |
36,652,063 | https://en.wikipedia.org/wiki/Lightening%20holes | Lightening holes are holes in structural components of machines and buildings used by a variety of engineering disciplines to make structures lighter. The edges of the hole may be flanged to increase the rigidity and strength of the component. The holes can be circular, triangular, elliptical, or rectangular and should have rounded edges, but they should never have sharp corners, to avoid the risk of stress risers, and they must not be too close to the edge of a structural component.
Usage
Aviation
Lightening holes are often used in the aviation industry. This allows an aircraft to be as lightweight as possible, retaining the durability and airworthiness of the aircraft structure.
Maritime
Lightening holes have also been used in marine engineering to increase seaworthiness of the vessel.
Motorsports
Lightening holes became a prominent feature of motor racing in the 1920s and 1930s. Chassis members, suspension components, engine housings and even connecting rods were drilled with a range of holes, of sizes almost as large as the component.
Military
Lightening holes have been used in various military vehicles, aircraft, equipment and weaponry platforms. This allows equipment to be lighter in weight as well as increase the ruggedness and durability. They are usually made by drilling holes, pressed stamping or machining and can also save strategic materials and cost during wartime production.
Architecture
Lightening holes have been used in various architectural designs. During the 1980s and early 1990s, lightening holes were fashionable and somewhat seen as futuristic, and were used in the likes of industrial units, car showrooms, shopping precincts, sports centres, etc. Parsons House in London is a notable building that has featured lightening holes since its renovation in 1988. Ringwood Health & Leisure Centre in Hampshire is another notable example.
See also
Honeycomb structure
Hollow structural section
Isogrid
Truss
References
External links
Tests Of Beams Having Webs With Large Circular Lightening Holes, by L. Ross Levin, National Advisory Committee for Aeronautics
The Strength And Stiffness Of Shear Webs With And Without Lightening Holes, by Paul Kuhn, National Advisory Committee for Aeronautics
The Strength And Stiffness Of Shear Webs With Round Lightening Holes Having 45° Flanges, by Paul Kuhn, National Advisory Committee for Aeronautics
Mechanical engineering
Civil engineering
Structural engineering
Aerospace engineering
Marine engineering
Military engineering | Lightening holes | [
"Physics",
"Engineering"
] | 458 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Construction",
"Military engineering",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering",
"Marine engineering"
] |
36,653,939 | https://en.wikipedia.org/wiki/Pozzolanic%20activity | The pozzolanic activity is a measure for the degree of reaction over time or the reaction rate between a pozzolan and Ca2+ or calcium hydroxide (Ca(OH)2) in the presence of water. The rate of the pozzolanic reaction is dependent on the intrinsic characteristics of the pozzolan such as the specific surface area, the chemical composition and the active phase content.
Physical surface adsorption is not considered as being part of the pozzolanic activity, because no irreversible molecular bonds are formed in the process.
Reaction
The pozzolanic reaction is the chemical reaction that occurs in portland cement upon the addition of pozzolans. It is the main reaction involved in the Roman concrete invented in Ancient Rome and used to build, for example, the Pantheon. The pozzolanic reaction converts a silica-rich precursor with no cementing properties, to a calcium silicate, with good cementing properties.
In chemical terms, the pozzolanic reaction occurs between calcium hydroxide, also known as portlandite (Ca(OH)2), and silicic acid (written as H4SiO4, or Si(OH)4, in the geochemical notation):
Ca(OH)2 + H4SiO4 → CaH2SiO4·2 H2O
or summarized in abbreviated cement chemist notation:
CH + SH → C-S-H
The pozzolanic reaction can also be written in the older industrial silicate notation as:
+ →
or even directly:
+ →
Both notations still coexist in the literature, depending on the research field considered. However, the more recent geochemical notation in which the Si atom is tetracoordinated by four hydroxyl groups (, also commonly noted ) is more correct than the ancient industrial silicate notation for which silicic acid () was represented in the same way as carbonic acid () whose geometrical configuration is trigonal planar. When only considering mass balance, they are equivalent and both are used.
The product CaH2SiO4·2 H2O is a calcium silicate hydrate, also abbreviated as C-S-H in cement chemist notation, the hyphenation denotes the variable stoichiometry. The atomic (or molar) ratio Ca/Si, CaO/SiO2, or C/S, and the number of water molecules can vary and the above-mentioned stoichiometry may differ.
Many pozzolans may also contain aluminate, or Al(OH)4−, that will react with calcium hydroxide and water to form calcium aluminate hydrates such as C4AH13, C3AH6 or hydrogarnet, or in combination with silica C2ASH8 or strätlingite (cement chemist notation). In the presence of anionic groups such as sulfate, carbonate or chloride, AFm phases and AFt or ettringite phases can form.
The pozzolanic reaction is a long-term reaction in which dissolved silicic acid, water, and CaO or Ca(OH)2 combine to form a strong cementing matrix. This process is often irreversible. A sufficient amount of free calcium ions and a high pH of 12 and above are needed to initiate and maintain the pozzolanic reaction. This is because at a pH of around 12, the solubility of silicon and aluminium ions is high enough to support the pozzolanic reaction.
Activity determining parameters
Particle properties
Prolonged grinding results in increased pozzolanic activity by creating a larger specific surface area available for reaction. Moreover, grinding also creates crystallographic defects at and below the particle surface. The dissolution rate of the strained or partially disconnected silicate moieties is strongly enhanced. Even materials not commonly regarded as pozzolanic, such as quartz, can become reactive once ground below a certain critical particle diameter.
Composition
The overall chemical composition of a pozzolan is considered one of the parameters governing the long-term performance (e.g. compressive strength) of the blended cement binder; ASTM C618 prescribes that a pozzolan should contain SiO2 + Al2O3 + Fe2O3 ≥ 70 wt.%. In the case of a (quasi) single-phase material such as blast-furnace slag, the overall chemical composition can be considered a meaningful parameter; for multi-phase materials, only a correlation between the pozzolanic activity and the chemistry of the active phases can be sought.
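The ASTM C618 requirement amounts to a simple mass-fraction check, sketched below with a hypothetical fly ash composition.

```python
def meets_astm_c618(oxides_wt_pct):
    """ASTM C618 chemical requirement for a pozzolan:
    SiO2 + Al2O3 + Fe2O3 >= 70 wt.%."""
    total = sum(oxides_wt_pct[k] for k in ("SiO2", "Al2O3", "Fe2O3"))
    return total, total >= 70.0

# Hypothetical fly ash composition in wt.%:
fly_ash = {"SiO2": 55.0, "Al2O3": 26.0, "Fe2O3": 7.0, "CaO": 5.0}
print(meets_astm_c618(fly_ash))  # (88.0, True)
```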
Many pozzolans consist of a heterogeneous mixture of phases of different pozzolanic activity. Obviously, the content of reactive phases is an important property determining the overall reactivity. In general, the pozzolanic activity of phases thermodynamically stable at ambient conditions is low when compared, on an equal specific surface area basis, to that of less thermodynamically stable phase assemblages. Volcanic ash deposits containing large amounts of volcanic glass or zeolites are more reactive than quartz sands or detrital clay minerals. In this respect, the thermodynamic driving force behind the pozzolanic reaction serves as a rough indicator of the potential reactivity of an (alumino)silicate material. Similarly, materials showing structural disorder such as glasses show higher pozzolanic activities than crystalline ordered compounds.
Reaction conditions
The rate of the pozzolanic reaction can also be controlled by external factors such as the mix proportions, the amount of water or space available for the formation and growth of hydration products and the temperature of reaction. Therefore, typical blended cement mix design properties such as the replacement ratio of pozzolan for Portland cement, the water to binder ratio and the curing conditions strongly affect the reactivity of the added pozzolan.
Pozzolanic activity tests
Mechanical tests
Mechanical evaluation of the pozzolanic activity is based upon a comparison of the compressive strength of mortar bars containing pozzolans as a partial replacement for Portland cement to reference mortar bars containing only Portland cement as binder. The mortar bars are prepared, cast, cured and tested following a detailed set of prescriptions. Compressive strength testing is carried out at fixed moments, typically 3, 7, and 28 days after mortar preparation. A material is considered pozzolanically active when it contributes to the compressive strength, taking into account the effect of dilution. Most national and international technical standards or norms include variations of this methodology.
Chemical tests
A pozzolanic material is by definition capable of binding calcium hydroxide in the presence of water. Therefore, the chemical measurement of this pozzolanic activity represents a way of evaluating pozzolanic materials. This can be done by directly measuring the amount of calcium hydroxide a pozzolan consumes over time. At high water to binder ratios (suspended solutions), this can be measured by titrimetry or by spectroscopic techniques. At lower water to binder ratios (pastes), thermal analysis or X-ray powder diffraction techniques are commonly used to determine remaining calcium hydroxide contents. Other direct methods have been developed that aim to directly measure the degree of reaction of the pozzolan itself. Here, selective dissolution, X-ray powder diffraction or scanning electron microscopy image analysis methods have been used.
Indirect methods comprise on the one hand methods that investigate which material properties are responsible for the pozzolan's reactivity with portlandite. Material properties of interest are the (re)active silica and alumina content, the specific surface area and/or the reactive mineral and amorphous phases of the pozzolanic material. Other methods indirectly determine the extent of the pozzolanic activity by measuring an indicative physical property of the reacting system. Measurements of the electrical conductivity, chemical shrinkage of the pastes or the heat evolution by heat flow calorimetry reside in the latter category.
See also
Aerated autoclaved concrete
Alkali-aggregate reaction
Alkali-carbonate reaction
Alkali-silica reaction
Calcium silicate hydrate (C-S-H)
Calthemite
Cement
Cement chemist notation
Cenospheres
Concrete
Concrete degradation
Energetically modified cement (EMC)
Fly ash
Geopolymer
Metakaolin
Portland cement
Pozzolan
Pozzolana
Rice husk ash
Roman concrete
Silica fume
Sodium silicate
References
Further reading
Cook D.J. (1986) Natural pozzolanas. In: Swamy R.N., Editor (1986) Cement Replacement Materials, Surrey University Press, p. 200.
Lechtman H. and Hobbs L. (1986) "Roman Concrete and the Roman Architectural Revolution", Ceramics and Civilization Volume 3: High Technology Ceramics: Past, Present, Future, edited by W.D. Kingery and published by the American Ceramics Society, 1986; and Vitruvius, Book II:v,1; Book V:xii2.
McCann A.M. (1994) "The Roman Port of Cosa" (273 BC), Scientific American, Ancient Cities, pp. 92–99, by Anna Marguerite McCann. Covers, hydraulic concrete, of "Pozzolana mortar" and the 5 piers, of the Cosa harbor, the Lighthouse on pier 5, diagrams, and photographs. Height of Port city: 100 BC.
Cement
Concrete
Masonry | Pozzolanic activity | [
"Engineering"
] | 1,943 | [
"Structural engineering",
"Concrete",
"Construction",
"Masonry"
] |
40,854,179 | https://en.wikipedia.org/wiki/Neutron%20microscope | Neutron microscopes use neutrons focused by small-angle neutron scattering to create images by passing neutrons through an object to be investigated. The neutrons that aren't absorbed by the object hit scintillation targets where induced nuclear fission of lithium-6 can be detected and be used to produce an image.
Neutrons have no electric charge, enabling them to penetrate substances to gain information about structure that is not accessible through other forms of microscopy. As of 2013, neutron microscopes offered four-fold magnification and 10-20 times better illumination than pinhole neutron cameras. The system increases the signal rate at least 50-fold.
Neutrons interact with atomic nuclei via the strong force. This interaction can scatter neutrons from their original path and can also absorb them. Thus, a neutron beam becomes progressively less intense as it moves deeper within a substance. In this way, neutrons are analogous to x-rays for studying object interiors.
Darkness in an x-ray image corresponds to the amount of matter the x-rays pass through. The density of a neutron image provides information on neutron absorption. Absorption rates vary by many orders of magnitude among the chemical elements.
While neutrons have no charge, they do have spin and therefore a magnetic moment that can interact with external magnetic fields.
Applications
Neutron imaging has potential for studying so-called soft materials, as small changes in the location of hydrogen within a material can produce highly visible changes in a neutron image.
Neutrons also offer unique capabilities for research in magnetic materials. The neutron's lack of electric charge means there is no need to correct magnetic measurements for errors caused by stray electric fields and charges. Polarized neutron beams orient neutron spins in one direction. This allows measurement of the strength and characteristics of a material's magnetism.
Neutron-based instruments have the ability to probe inside metal objects — such as fuel cells, batteries and engines to study their internal structure. Neutron instruments are also uniquely sensitive to lighter elements that are important in biological materials.
Shadowgraphs
Shadowgraphs are images produced by casting a shadow on a surface; they are usually taken with a pinhole camera and are widely used for nondestructive testing. Such cameras provide low illumination levels that require long exposure times, and they also provide poor spatial resolution: the resolution cannot be finer than the hole diameter. A good balance between illumination and resolution is obtained when the pinhole diameter is about 100 times smaller than the distance between the pinhole and the image screen, effectively making the pinhole an f/100 lens. The resolution of an f/100 pinhole is about half a degree.
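The quoted half-degree figure follows directly from the f/100 geometry, as the small sketch below verifies.

```python
import math

# f/100 pinhole: pinhole-to-screen distance ~ 100x the pinhole diameter,
# so the angular resolution is roughly diameter / distance = 0.01 rad.
theta_deg = math.degrees(math.atan(1.0 / 100.0))
print(f"~{theta_deg:.2f} degrees")  # ~0.57 deg, i.e. about half a degree
```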
Wolter mirror
Glass lenses and conventional mirrors are useless for working with neutrons, because they pass through such materials without refraction or reflection. Instead, the neutron microscope employs a Wolter mirror, similar in principle to grazing incidence mirrors used for x-ray and gamma-ray telescopes.
When a neutron grazes the surface of a metal at a sufficiently small angle, it is reflected away from the metal surface at the same angle. When this occurs with light, the effect is called total internal reflection. The critical angle for grazing reflection is large enough (a few tenths of a degree for thermal neutrons) that a curved mirror can be used. Curved mirrors then allow an imaging system to be made.
The microscope uses several reflective cylinders nested inside each other, to increase the surface area available for reflection.
Measurement
The neutron flux at the imaging focal plane is measured by a CCD imaging array with a neutron scintillation screen in front of it. The scintillation screen is made of zinc sulfide, a fluorescent compound, laced with lithium. When a thermal neutron is absorbed by a lithium-6 nucleus, it causes a fission reaction that produces helium, tritium and energy. These fission products cause the ZnS phosphor to light up, producing an optical image for capture by the CCD array.
See also
Electron microscope
ISIS neutron and muon source
LARMOR neutron microscope
Microscope image processing
X-ray microscope
References
Neutron instrumentation
Microscopes | Neutron microscope | [
"Chemistry",
"Technology",
"Engineering"
] | 819 | [
"Microscopes",
"Neutron instrumentation",
"Measuring instruments",
"Microscopy"
] |
40,858,397 | https://en.wikipedia.org/wiki/Cantellated%2024-cell%20honeycomb | In four-dimensional Euclidean geometry, the cantellated 24-cell honeycomb is a uniform space-filling honeycomb. It can be seen as a cantellation of the regular 24-cell honeycomb, containing rectified tesseract, cantellated 24-cell, and tetrahedral prism cells.
Alternate names
Cantellated icositetrachoric tetracomb/honeycomb
Small rhombated icositetrachoric tetracomb (sricot)
Small prismatodisicositetrachoric tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 112
o3o3x4o3x - sricot - O112
5-polytopes
Honeycombs (geometry) | Cantellated 24-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 378 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,858,426 | https://en.wikipedia.org/wiki/Cantitruncated%2024-cell%20honeycomb | In four-dimensional Euclidean geometry, the cantitruncated 24-cell honeycomb is a uniform space-filling honeycomb. It can be seen as a cantitruncation of the regular 24-cell honeycomb, containing truncated tesseract, cantitruncated 24-cell, and tetrahedral prism cells.
Alternate names
Cantitruncated icositetrachoric tetracomb/honeycomb
Great rhombated icositetrachoric tetracomb (gricot)
Great prismatodisicositetrachoric tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 114
o3o3x4x3x - gricot - O114
5-polytopes
Honeycombs (geometry)
Truncated tilings | Cantitruncated 24-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 389 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
34,060,358 | https://en.wikipedia.org/wiki/Heat%20kernel%20signature | A heat kernel signature (HKS) is a feature descriptor for use in deformable shape analysis and belongs to the group of spectral shape analysis methods. For each point in the shape, HKS defines its feature vector representing the point's local and global geometric properties. Applications include segmentation, classification, structure discovery, shape matching and shape retrieval.
HKS was introduced in 2009 by Jian Sun, Maks Ovsjanikov and Leonidas Guibas. It is based on heat kernel, which is a fundamental solution to the heat equation. HKS is one of the many recently introduced shape descriptors which are based on the Laplace–Beltrami operator associated with the shape.
Overview
Shape analysis is the field of automatic digital analysis of shapes, e.g., 3D objects. For many shape analysis tasks (such as shape matching/retrieval), feature vectors for certain key points are used instead of using the complete 3D model of the shape. An important requirement of such feature descriptors is for them to be invariant under certain transformations. For rigid transformations, commonly used feature descriptors include shape context, spin images, integral volume descriptors and multiscale local features, among others. HKS is invariant under isometric transformations, which generalize rigid transformations.
HKS is based on the concept of heat diffusion over a surface. Given an initial heat distribution u₀ over the surface, the heat kernel k_t(x, y) relates the amount of heat transferred from x to y after time t. The heat kernel is invariant under isometric transformations and stable under small perturbations to the isometry. In addition, the heat kernel fully characterizes shapes up to an isometry and represents increasingly global properties of the shape with increasing time. Since k_t(x, y) is defined for a pair of points over a temporal domain, using heat kernels directly as features would lead to a high complexity. HKS instead restricts itself to just the temporal domain by considering only k_t(x, x). HKS inherits most of the properties of heat kernels under certain conditions.
Technical details
The heat diffusion equation over a compact Riemannian manifold M (possibly with a boundary) is given by
Δu(x, t) = −∂u(x, t)/∂t,
where Δ is the Laplace–Beltrami operator and u(x, t) is the heat distribution at a point x at time t. The solution to this equation can be expressed as
u(x, t) = ∫_M k_t(x, y) u₀(y) dy.
The eigendecomposition of the heat kernel is expressed as
k_t(x, y) = Σ_{i≥0} e^{−λ_i t} φ_i(x) φ_i(y),
where λ_i and φ_i are the i-th eigenvalue and eigenfunction of Δ. The heat kernel fully characterizes a surface up to an isometry: for any surjective map T : M → N between two Riemannian manifolds M and N, if k_t(x, y) = k_t(T(x), T(y)) for all x, y and t, then T is an isometry, and vice versa. For a concise feature descriptor, HKS restricts the heat kernel only to the temporal domain,
HKS(x, t) = k_t(x, x) = Σ_{i≥0} e^{−λ_i t} φ_i(x)².
HKS, similar to the heat kernel, characterizes surfaces under the condition that the eigenvalues of Δ for M and N are non-repeating. The terms e^{−λ_i t} can be intuited as a bank of low-pass filters, with t determining the cutoff frequencies.
Practical considerations
Since HKS(x, ·) is, in general, a non-parametric continuous function, HKS is in practice represented as a discrete sequence of values sampled at times t₁ < t₂ < … < t_n.
In most applications, the underlying manifold for an object is not known. The HKS can be computed if a mesh representation of the manifold is available, by using a discrete approximation to Δ and the discrete analogue of the heat equation. In the discrete case, the Laplace–Beltrami operator is a sparse matrix and can be written as
L = A⁻¹ W,
where A is a positive diagonal matrix whose entries A(i, i) correspond to the area of the triangles in the mesh sharing the vertex i, and W is a symmetric semi-definite weighting matrix. L can be decomposed into Φ Λ Φᵀ, where Λ is a diagonal matrix of the eigenvalues of L arranged in ascending order, and Φ is the matrix with the corresponding orthonormal eigenvectors. The discrete heat kernel is the matrix given by
Hₜ = Φ e^{−tΛ} Φᵀ.
The elements Hₜ(i, j) represent the heat diffusion between vertices i and j after time t. The HKS is then given by the diagonal entries of this matrix, sampled at discrete time intervals. Similar to the continuous case, the discrete HKS is robust to noise.
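As an illustration, a minimal sketch of this computation in Python follows. The weight matrix W and vertex-area matrix A are assumed to have already been assembled from the mesh (e.g. with cotangent weights); the function name and the toy graph Laplacian standing in for a mesh are illustrative assumptions, not a fixed API.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def heat_kernel_signature(W, A, times, k=100):
    """HKS at every vertex for the given times, using k eigenpairs.

    Solves the generalized eigenproblem W phi = lambda A phi, the discrete
    analogue of the Laplace-Beltrami eigenproblem for L = A^{-1} W.
    """
    # Smallest eigenvalues via shift-invert; thanks to the exponential decay
    # e^{-lambda t}, fewer than ~100 eigenpairs usually approximate HKS well.
    lam, phi = spla.eigsh(W, k=k, M=A, sigma=-1e-8, which="LM")
    # hks[i, j] = sum_l exp(-lam_l * t_j) * phi_l(i)^2
    return (phi ** 2) @ np.exp(-np.outer(lam, times))

# Toy example: a random graph Laplacian standing in for a mesh Laplacian.
rng = np.random.default_rng(0)
n = 50
adj = sp.random(n, n, density=0.1, random_state=0).tocsr()
adj = adj + adj.T                         # symmetrize the adjacency
adj.setdiag(0)                            # no self-loops
W = sp.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj
A = sp.identity(n, format="csc")          # unit vertex "areas" for the toy
t = np.logspace(-2, 1, 16)                # logarithmic time samples
print(heat_kernel_signature(W, A, t, k=10).shape)   # (50, 16)
```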
Limitations
Non-repeating eigenvalues
The main property that characterizes surfaces using HKS up to an isometry holds only when the eigenvalues of the surfaces are non-repeating. There are certain surfaces (especially those with symmetry) where this condition is violated. A sphere is a simple example of such a surface.
Time parameter selection
The time parameter in the HKS is closely related to the scale of global information. However, there is no direct way to choose the time discretization. The existing method chooses time samples logarithmically, which is a heuristic with no guarantees.
Time complexity
The discrete heat kernel requires eigendecomposition of a matrix of size n × n, where n is the number of vertices in the mesh representation of the manifold. Computing the eigendecomposition is an expensive operation, especially as n increases.
Note, however, that because of the inverse exponential dependence on the eigenvalue, typically only a small number (fewer than 100) of eigenvectors is sufficient to obtain a good approximation of the HKS.
Non-isometric transformations
The performance guarantees for HKS only hold for truly isometric transformations. However, deformations of real shapes are often not isometric. A simple example of such a transformation is the closing of a person's fist, where the geodesic distance between two fingers changes.
Relation with other methods
Curvature
The (continuous) HKS at a point x on the Riemannian manifold M is related to the scalar curvature s(x) by the small-time expansion
HKS(x, t) = k_t(x, x) ≈ (4πt)^{−d/2} (1 + (1/6) s(x) t + O(t²)),
where d is the dimension of the manifold. Hence, HKS can be interpreted as the curvature of M at x at scale t.
Wave kernel signature (WKS)
The WKS follows a similar idea to the HKS, replacing the heat equation with the Schrödinger wave equation,
∂ψ(x, t)/∂t = i Δψ(x, t),
where ψ(x, t) is the complex wave function. The average probability of measuring the particle at a point x is given by
p(x) = Σ_{i≥0} f_E(λ_i)² φ_i(x)²,
where f_E is the initial energy distribution. By fixing a family of these energy distributions {f_{E_k}}, the WKS can be obtained as a discrete sequence {p_{E_k}(x)}. Unlike HKS, the WKS can be intuited as a set of band-pass filters leading to better feature localization. However, the WKS does not represent large-scale features well (as they are filtered out), yielding poor performance in shape matching applications.
Global point signature (GPS)
Similar to the HKS, the GPS is based on the Laplace–Beltrami operator. GPS at a point x is a vector of scaled eigenfunctions of the Laplace–Beltrami operator computed at x. The GPS is a global feature, whereas the scale of the HKS can be varied by varying the time parameter for heat diffusion. Hence, the HKS can be used in partial shape matching applications whereas the GPS cannot.
Spectral graph wavelet signature (SGWS)
SGWS provides a general form for spectral descriptors, from which the HKS can be obtained by specifying the filter function. SGWS is a multiresolution local descriptor that is not only isometry invariant, but also compact and easy to compute, and it combines the advantages of both band-pass and low-pass filters.
Extensions
Scale invariance
Even though the HKS represents the shape at multiple scales, it is not inherently scale invariant. For example, the HKS for a shape and its scaled version are not the same without pre-normalization. A simple way to ensure scale invariance is by pre-scaling each shape to have the same surface area (e.g. 1).
Alternatively, a scale-invariant version of the HKS can be constructed by generating a scale space representation. In the scale space, the HKS of a scaled shape corresponds to a time translation up to a multiplicative factor. Taking the logarithm and a discrete derivative removes the multiplicative factor, and the Fourier transform turns the remaining time translation into a phase factor, whose dependency on translation can be eliminated by considering the modulus of the transform.
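A compact sketch of this construction, assuming an HKS already sampled at logarithmically spaced times (as in the practical-considerations section above); the function name and the number of retained frequencies are illustrative choices:

```python
import numpy as np

def scale_invariant_hks(hks_row, n_freq=8):
    # Scaling the shape shifts the log-sampled HKS in time and multiplies it
    # by a constant: the log turns the factor into an additive offset, the
    # discrete derivative removes the offset, and the modulus of the Fourier
    # transform discards the phase produced by the time shift.
    h = np.log(hks_row)
    h = np.diff(h)
    return np.abs(np.fft.fft(h))[:n_freq]
```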
An alternative scale-invariant HKS can be established by carrying out the construction with respect to a scale-invariant metric.
Volumetric HKS
The HKS is defined for a boundary surface of a 3D shape, represented as a 2D Riemannian manifold. Instead of considering only the boundary, the entire volume of the 3D shape can be considered to define the volumetric version of the HKS. The volumetric HKS is defined analogously to the normal HKS by considering the heat equation over the entire volume (as a 3-submanifold) and defining a Neumann boundary condition over the 2-manifold boundary of the shape. The volumetric HKS characterizes transformations up to a volume isometry, which represents transformations of real 3D objects more faithfully than boundary isometry.
Shape Search
The scale-invariant HKS features can be used in the bag-of-features model for shape retrieval applications. The features are used to construct geometric words by taking into account their spatial relations, from which shapes can be constructed (analogous to using features as words and shapes as sentences). Shapes themselves are represented using compact binary codes to form an indexed collection. Given a query shape, similar shapes in the index with possibly isometric transformations can be retrieved by using the Hamming distance of the code as the nearness-measure.
References
Image processing
Heat transfer
Digital geometry
Topology
Differential geometry | Heat kernel signature | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,897 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Topology",
"Space",
"Thermodynamics",
"Geometry",
"Spacetime"
] |
34,063,376 | https://en.wikipedia.org/wiki/Matrix%20Chernoff%20bound | For certain applications in linear algebra, it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices. Suppose {X_k} is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter t:
Pr{ λ_max( Σ_k X_k ) ≥ t }.
The following theorems answer this general question under various assumptions; these assumptions are named below by analogy to their classical, scalar counterparts. All of these theorems can be found in the work of Tropp, as the specific application of a general result which is derived below. A summary of related works is given.
Matrix Gaussian and Rademacher series
Self-adjoint matrices case
Consider a finite sequence {A_k} of fixed,
self-adjoint matrices with dimension d, and let {ξ_k} be a finite sequence of independent standard normal or independent Rademacher random variables.
Then, for all t ≥ 0,
Pr{ λ_max( Σ_k ξ_k A_k ) ≥ t } ≤ d · e^{−t²/(2σ²)},
where
σ² = ‖ Σ_k A_k² ‖.
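A quick Monte Carlo sanity check of this bound is sketched below; the matrices, dimensions and threshold are arbitrary illustrative choices, not taken from any reference:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, trials, t = 5, 20, 20000, 25.0

# Fixed self-adjoint matrices A_k.
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2

# sigma^2 = || sum_k A_k^2 || (spectral norm of a PSD matrix).
sigma2 = np.linalg.norm(np.einsum("kij,kjl->il", A, A), ord=2)

hits = 0
for _ in range(trials):
    xi = rng.standard_normal(n)          # standard normal series
    S = np.einsum("k,kij->ij", xi, A)    # sum_k xi_k * A_k
    hits += np.linalg.eigvalsh(S).max() >= t

print("empirical tail :", hits / trials)
print("theoretical bnd:", d * np.exp(-t**2 / (2 * sigma2)))
```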
Rectangular case
Consider a finite sequence {B_k} of fixed matrices with dimension d₁ × d₂, and let {ξ_k} be a finite sequence of independent standard normal or independent Rademacher random variables.
Define the variance parameter
σ² = max{ ‖ Σ_k B_k B_k* ‖, ‖ Σ_k B_k* B_k ‖ }.
Then, for all t ≥ 0,
Pr{ ‖ Σ_k ξ_k B_k ‖ ≥ t } ≤ (d₁ + d₂) · e^{−t²/(2σ²)}.
Matrix Chernoff inequalities
The classical Chernoff bounds concern the sum of independent, nonnegative, and uniformly bounded random variables.
In the matrix setting, the analogous theorem concerns a sum of positive-semidefinite random matrices subjected to a uniform eigenvalue bound.
Matrix Chernoff I
Consider a finite sequence {X_k} of independent, random, self-adjoint matrices with dimension d.
Assume that each random matrix satisfies
X_k ⪰ 0 and λ_max(X_k) ≤ R
almost surely.
Define
μ_min = λ_min( Σ_k E X_k ) and μ_max = λ_max( Σ_k E X_k ).
Then
Pr{ λ_min( Σ_k X_k ) ≤ (1 − δ) μ_min } ≤ d · [ e^{−δ} / (1 − δ)^{1−δ} ]^{μ_min/R} for δ ∈ [0, 1), and
Pr{ λ_max( Σ_k X_k ) ≥ (1 + δ) μ_max } ≤ d · [ e^{δ} / (1 + δ)^{1+δ} ]^{μ_max/R} for δ ≥ 0.
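A hedged numerical illustration of the lower tail, using rank-one summands v_k v_kᵀ with v_k uniform on a sphere of radius √R (so that Σ_k E X_k = (nR/d)·I); all figures are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, R, trials, delta = 4, 60, 1.0, 5000, 0.5

mu_min = n * R / d          # lambda_min of sum_k E[X_k] for this ensemble

hits = 0
for _ in range(trials):
    v = rng.standard_normal((n, d))
    v *= np.sqrt(R) / np.linalg.norm(v, axis=1, keepdims=True)
    S = v.T @ v             # sum of rank-one PSD X_k with lambda_max(X_k) = R
    hits += np.linalg.eigvalsh(S).min() <= (1 - delta) * mu_min

bound = d * (np.exp(-delta) / (1 - delta) ** (1 - delta)) ** (mu_min / R)
print("empirical:", hits / trials, " bound:", bound)
```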
Matrix Chernoff II
Consider a sequence {X_k : k = 1, …, n} of independent, random, self-adjoint matrices with dimension d that satisfy
0 ⪯ X_k ⪯ I
almost surely.
Compute the minimum and maximum eigenvalues of the average expectation,
μ̄_min = λ_min( (1/n) Σ_k E X_k ) and μ̄_max = λ_max( (1/n) Σ_k E X_k ).
Then
Pr{ λ_min( (1/n) Σ_k X_k ) ≤ α } ≤ d · e^{−n·D(α‖μ̄_min)} for 0 ≤ α ≤ μ̄_min, and
Pr{ λ_max( (1/n) Σ_k X_k ) ≥ α } ≤ d · e^{−n·D(α‖μ̄_max)} for μ̄_max ≤ α ≤ 1.
The binary information divergence is defined as
D(a‖u) = a·(log a − log u) + (1 − a)·(log(1 − a) − log(1 − u))
for a, u ∈ [0, 1].
Matrix Bennett and Bernstein inequalities
In the scalar setting, Bennett and Bernstein inequalities describe the upper tail of a sum of independent, zero-mean random variables that are either bounded or subexponential. In the matrix
case, the analogous results concern a sum of zero-mean random matrices.
Bounded case
Consider a finite sequence {X_k} of independent, random, self-adjoint matrices with dimension d.
Assume that each random matrix satisfies
E X_k = 0 and λ_max(X_k) ≤ R
almost surely.
Compute the norm of the total variance,
σ² = ‖ Σ_k E(X_k²) ‖.
Then, the following chain of inequalities holds for all t ≥ 0:
Pr{ λ_max( Σ_k X_k ) ≥ t } ≤ d · exp( −(σ²/R²)·h(Rt/σ²) )
 ≤ d · exp( −t²/(2σ² + 2Rt/3) )
 ≤ d · exp( −3t²/(8σ²) ) for t ≤ σ²/R;
 ≤ d · exp( −3t/(8R) ) for t ≥ σ²/R.
The function h(u) is defined as h(u) = (1 + u)·log(1 + u) − u for u ≥ 0.
Consider a sequence of independent and identically distributed random column vectors in . Assume that each random vector satisfies almost surely, and . Then, for all ,
Subexponential case
Consider a finite sequence {X_k} of independent, random, self-adjoint matrices with dimension d.
Assume that
E X_k = 0 and E(X_k^p) ⪯ (p!/2)·R^{p−2}·A_k²
for p = 2, 3, 4, ….
Compute the variance parameter,
σ² = ‖ Σ_k A_k² ‖.
Then, the following chain of inequalities holds for all t ≥ 0:
Pr{ λ_max( Σ_k X_k ) ≥ t } ≤ d · exp( −t²/(2σ² + 2Rt) )
 ≤ d · exp( −t²/(4σ²) ) for t ≤ σ²/R;
 ≤ d · exp( −t/(4R) ) for t ≥ σ²/R.
Rectangular case
Consider a finite sequence {Z_k} of independent, random matrices with dimension d₁ × d₂.
Assume that each random matrix satisfies
E Z_k = 0 and ‖Z_k‖ ≤ R
almost surely.
Define the variance parameter
σ² = max{ ‖ Σ_k E(Z_k Z_k*) ‖, ‖ Σ_k E(Z_k* Z_k) ‖ }.
Then, for all t ≥ 0,
Pr{ ‖ Σ_k Z_k ‖ ≥ t } ≤ (d₁ + d₂) · exp( −t²/(2σ² + 2Rt/3) )
holds.
Matrix Azuma, Hoeffding, and McDiarmid inequalities
Matrix Azuma
The scalar version of Azuma's inequality states that a scalar martingale exhibits normal concentration about its mean value, and the scale for deviations is controlled by the total maximum squared range of the difference sequence.
The following is the extension in matrix setting.
Consider a finite adapted sequence {X_k} of self-adjoint matrices with dimension d, and a fixed sequence {A_k} of self-adjoint matrices that satisfy
E_{k−1} X_k = 0 and X_k² ⪯ A_k²
almost surely.
Compute the variance parameter
σ² = ‖ Σ_k A_k² ‖.
Then, for all t ≥ 0,
Pr{ λ_max( Σ_k X_k ) ≥ t } ≤ d · e^{−t²/(8σ²)}.
The constant 1/8 can be improved to 1/2 when there is additional information available. One case occurs when each summand is conditionally symmetric.
Another example requires the assumption that X_k commutes almost surely with A_k.
Matrix Hoeffding
Placing the additional assumption that the summands in Matrix Azuma are independent gives a matrix extension of Hoeffding's inequalities.
Consider a finite sequence {X_k} of independent, random, self-adjoint matrices with dimension d, and let {A_k} be a sequence of fixed self-adjoint matrices.
Assume that each random matrix satisfies
E X_k = 0 and X_k² ⪯ A_k²
almost surely.
Then, for all t ≥ 0,
Pr{ λ_max( Σ_k X_k ) ≥ t } ≤ d · e^{−t²/(8σ²)},
where
σ² = ‖ Σ_k A_k² ‖.
An improvement of this result, which sharpens the constant in the exponent, was established in subsequent work.
Matrix bounded difference (McDiarmid)
In the scalar setting, McDiarmid's inequality provides one common way of bounding the differences by applying Azuma's inequality to a Doob martingale. A version of the bounded differences inequality holds in the matrix setting.
Let {Z_k : k = 1, 2, …, n} be an independent family of random variables, and let H be a function that maps n variables to a self-adjoint matrix of dimension d.
Consider a sequence {A_k} of fixed self-adjoint matrices that satisfy
( H(z₁, …, z_k, …, z_n) − H(z₁, …, z_k′, …, z_n) )² ⪯ A_k²,
where z_i and z_i′ range over all possible values of Z_i for each index i.
Compute the variance parameter
σ² = ‖ Σ_k A_k² ‖.
Then, for all t ≥ 0,
Pr{ λ_max( H(z) − E H(z) ) ≥ t } ≤ d · e^{−t²/(8σ²)},
where z = (Z₁, …, Z_n).
A corresponding improvement of this result was also established in subsequent work.
Survey of related theorems
The first bounds of this type were derived by Ahlswede and Winter. Recall the theorem above for self-adjoint matrix Gaussian and Rademacher bounds:
For a finite sequence {A_k} of fixed,
self-adjoint matrices with dimension d and for a finite sequence {ξ_k} of independent standard normal or independent Rademacher random variables, then
Pr{ λ_max( Σ_k ξ_k A_k ) ≥ t } ≤ d · e^{−t²/(2σ²)},
where
σ² = ‖ Σ_k A_k² ‖.
Ahlswede and Winter would give the same result, except with
σ²_AW = Σ_k λ_max( A_k² ).
By comparison, the σ² in the theorem above exchanges the order of the sum and the largest-eigenvalue operation; that is, it is the largest eigenvalue of the sum rather than the sum of the largest eigenvalues. It is never larger than the Ahlswede–Winter value (by the norm triangle inequality), but can be much smaller. Therefore, the theorem above gives a tighter bound than the Ahlswede–Winter result.
The chief contribution of Ahlswede and Winter was the extension of the Laplace-transform method used to prove the scalar Chernoff bound (see Chernoff bound#Additive form (absolute error)) to the case of self-adjoint matrices. The procedure is given in the derivation below. All of the recent works on this topic follow this same procedure, and the chief differences follow from subsequent steps. Ahlswede & Winter use the Golden–Thompson inequality to proceed, whereas Tropp uses Lieb's theorem.
Suppose one wished to vary the length of the series (n) and the dimensions of the
matrices (d) while keeping the right-hand side approximately constant. Then
n must vary approximately as the log of d. Several papers have attempted to establish a bound without a dependence on dimensions. Rudelson and Vershynin give a result for matrices which are the outer product of two vectors. Later work provides a result without the dimensional dependence for low-rank matrices; the original result was derived independently of the Ahlswede–Winter approach, but a similar result can also be proved using that approach.
Finally, Oliveira proves a result for matrix martingales independently of the Ahlswede–Winter framework. Tropp slightly improves on the result using the Ahlswede–Winter framework. Neither result is presented in this article.
Derivation and proof
Ahlswede and Winter
The Laplace transform argument found in Ahlswede & Winter is a significant result in its own right:
Let X be a random self-adjoint matrix. Then
Pr{ λ_max(X) ≥ t } ≤ inf_{θ>0} { e^{−θt} · E[tr e^{θX}] }.
To prove this, fix θ > 0. Then
Pr{ λ_max(X) ≥ t } = Pr{ λ_max(θX) ≥ θt } = Pr{ e^{λ_max(θX)} ≥ e^{θt} } ≤ e^{−θt} · E[e^{λ_max(θX)}] ≤ e^{−θt} · E[tr e^{θX}].
The second-to-last inequality is Markov's inequality. The last inequality holds since e^{λ_max(θX)} = λ_max(e^{θX}) ≤ tr e^{θX}. Since the left-most quantity is independent of θ, the infimum over θ > 0 remains an upper bound for it.
Thus, our task is to understand E[tr e^{θX}]. Nevertheless, since trace and expectation are both linear, we can commute them, so it is sufficient to consider M_X(θ) := E[e^{θX}], which we call the matrix generating function. This is where the methods of Ahlswede–Winter and of Tropp diverge. The immediately following presentation follows Ahlswede & Winter.
The Golden–Thompson inequality implies that, for independent X₁ and X₂,
E[tr e^{θ(X₁+X₂)}] ≤ E[tr( e^{θX₁} e^{θX₂} )] = tr( E[e^{θX₁}] · E[e^{θX₂}] ),
where we used the linearity of expectation several times.
Suppose M(θ) = E[tr e^{θ Σ_k X_k}]. We can find an upper bound for M(θ) by iterating this result. Noting that tr(AB) ≤ tr(A)·λ_max(B) for positive semi-definite A and B, then
M(θ) ≤ E[tr e^{θ Σ_{k=1}^{n−1} X_k}] · λ_max( E[e^{θX_n}] ).
Iterating this, we get
M(θ) ≤ d · Π_k λ_max( E[e^{θX_k}] ).
So far we have found a bound with an infimum over θ. In turn, this can be bounded. At any rate, one can see how the Ahlswede–Winter bound arises as the sum of largest eigenvalues.
Tropp
The major contribution of Tropp is the application of Lieb's theorem where Ahlswede & Winter had applied the Golden–Thompson inequality. Tropp's corollary is the following: If H is a fixed self-adjoint matrix and X is a random self-adjoint matrix, then
E[tr e^{H+X}] ≤ tr e^{H + log( E[e^X] )}.
Proof: Let M = e^X. Then Lieb's theorem tells us that
f(M) = tr e^{H + log M}
is concave.
The final step is to use Jensen's inequality to move the expectation inside the function:
E[tr e^{H + log M}] ≤ tr e^{H + log E[M]} = tr e^{H + log E[e^X]}.
This gives us the major result of the paper: the subadditivity of the log of the matrix generating function.
Subadditivity of log mgf
Let {X_k} be a finite sequence of independent, random self-adjoint matrices. Then for all θ ∈ ℝ,
E[tr e^{Σ_k θX_k}] ≤ tr e^{Σ_k log M_{X_k}(θ)}.
Proof: It is sufficient to let θ = 1. Expanding the definitions, we need to show that
E[tr e^{Σ_k X_k}] ≤ tr e^{Σ_k log E[e^{X_k}]}.
To complete the proof, we use the law of total expectation. Let E_m be the expectation conditioned on X₁, …, X_m. Since we assume all the X_k are independent,
E_{k−1}[e^{X_k}] = E[e^{X_k}].
Define Ξ_k = log E_{k−1}[e^{X_k}] = log E[e^{X_k}].
Finally, we have
E[tr e^{Σ_{k=1}^n X_k}] = E₀ ⋯ E_{n−1} tr e^{Σ_{k=1}^{n−1} X_k + X_n}
 ≤ E₀ ⋯ E_{n−2} tr e^{Σ_{k=1}^{n−1} X_k + Ξ_n}
 ⋮
 ≤ tr e^{Σ_{k=1}^n Ξ_k},
where at every step m we use Tropp's corollary with
H_m = Σ_{k=1}^{m−1} X_k + Σ_{k=m+1}^{n} Ξ_k.
Master tail bound
The following is immediate from the previous result:
Pr{ λ_max( Σ_k X_k ) ≥ t } ≤ inf_{θ>0} { e^{−θt} · tr exp( Σ_k log E[e^{θX_k}] ) }.
All of the theorems given above are derived from this bound; the theorems consist in various ways to bound the infimum. These steps are significantly simpler than the original proofs.
References
Linear algebra | Matrix Chernoff bound | [
"Mathematics"
] | 1,940 | [
"Linear algebra",
"Algebra"
] |
34,066,960 | https://en.wikipedia.org/wiki/C3H2F4 |
The molecular formula C3H2F4 (molar mass: 114.04 g/mol, exact mass: 114.0093 u) may refer to:
Various isomers of tetrafluoropropene, sometimes used as refrigerants
Various isomers of tetrafluorocyclopropane | C3H2F4 | [
"Chemistry"
] | 85 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
34,067,386 | https://en.wikipedia.org/wiki/Saglin | Saglin is a protein produced by the salivary glands of mosquitoes. It is thought that this protein allows the malarial sporozoite to bind to the salivary glands, allowing invasion. It is currently under investigation as a potential drug target for controlling transmission of the disease in the mosquito vector.
References
Proteins | Saglin | [
"Chemistry"
] | 69 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
34,069,020 | https://en.wikipedia.org/wiki/Pump%20drill | A pump drill is a simple hand-powered device used to impart a rapid rotating motion to a rod (the spindle or drill shaft). It can be used for fire making or as a drill to make holes in various materials. It consists of: the drill shaft, a narrow board with a hole through the center, a weight (usually a heavy disc) acting as a flywheel, and a length of cord. The weight is attached to the shaft, near the bottom end, and the hole board is slipped over the top. The cord is run through a hole or slot near the top of the shaft and attached to both ends of the hole board. The length of the cord is such that, at its lowest position, the board lies just above the weight.
The end of the shaft usually has a slot or hole that can hold a hard bit that does the actual drilling, either by abrasion or by cutting. For wood, the bit may be an auger bit or a simple triangular blade that can cut while rotating in either direction.
To use, the shaft is first turned by hand so that the cord wraps around the top part as much as possible and the board is at the highest position. A smooth downward pressure is exerted on the board, causing the shaft to spin rapidly. Once the bottom is reached, the pressure is relieved. The weight then keeps the shaft spinning so that the cord winds around it again, in the opposite sense, pulling the board up to the starting position, much like a yo-yo or button whirligig. The process can then be repeated.
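The flywheel's role can be made concrete with a back-of-the-envelope estimate of the rotational energy available to rewind the cord; all figures below are illustrative assumptions, not measurements:

```python
import math

m = 0.5            # flywheel mass, kg (assumed)
r = 0.05           # flywheel radius, m (assumed)
rpm = 600          # spin rate reached at the bottom of the stroke (assumed)

I = 0.5 * m * r**2                  # moment of inertia of a solid disc
omega = rpm * 2 * math.pi / 60      # angular speed in rad/s
E = 0.5 * I * omega**2              # stored rotational kinetic energy, J
print(f"I = {I:.2e} kg*m^2, E = {E:.2f} J")   # about 1.2 J at these numbers
```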
See also
Bow drill, strap drill
Hand drill
References
Fire making
Mechanical hand tools | Pump drill | [
"Physics"
] | 344 | [
"Mechanics",
"Mechanical hand tools"
] |
57,164,125 | https://en.wikipedia.org/wiki/DNVGL-ST-E271 | The DNVGL-ST-E271 (formerly DNV 2.7-1) is a standard issued by DNV (now DNV GL) specifying requirements for offshore containers.
DNV 2.7-1 was initially issued in 1989 and the most recent version “DNV Standard for Certification No. 2.7-1 Offshore Containers” was released in June 2013. It is a set of transport related requirements for offshore containers.
It covers design, manufacture, testing, certification, marking and periodic inspection. The purpose is to ensure that containers are safe and suitable for repeated use.
Prior to 1989 there was no specific regulation for offshore equipment handling and lifting, although offshore container handling is significantly more dangerous than onshore handling. For offshore containers, the rate of wear and tear is higher than in most other environments. Containers are required to be constructed to withstand the forces encountered in offshore operations and to avoid complete failure even if subjected to more extreme loads.
DNV 2.7-1 is fully compliant with EN 12079 part 1 (offshore containers) and part 2 (lifting sets), and is distinct as regards part 3 (periodic inspection).
References
Offshore engineering | DNVGL-ST-E271 | [
"Engineering"
] | 257 | [
"Construction",
"Offshore engineering"
] |
57,164,504 | https://en.wikipedia.org/wiki/Power%20plant%20engineering | Power plant engineering is a branch of the field of energy engineering, defined as the engineering and technology required for the production of an electric power station. The discipline is focused on power generation for industry and community, not just for household electricity production, and draws on the theoretical bases of mechanical and electrical engineering. The engineering aspects of power generation have developed with technology and are becoming more and more complicated. The introduction of nuclear technology and other technological advances have made it possible for power to be created in more ways and on a larger scale than was previously possible. Different types of engineers are assigned to the design, construction, and operation of new power plants depending on the type of system being built, such as whether it is fueled by fossil fuels, nuclear power, hydropower, or solar power.
History
Power plant engineering got its start in the 1800s, when small systems were used by individual factories to provide electrical power. Originally the only source of power came from DC, or direct current, systems. While this was suitable for business, electricity was not accessible to most of the public. During this time, the coal-powered steam engine was costly to run, and there was no way for the power to be transmitted over long distances. Hydroelectricity was one of the most utilized forms of power generation, as water mills could be used to create power to transmit to small towns.
It wasn't until the introduction of AC, or alternating current, power systems that power plants as we know them today became possible. AC systems allowed power to be transmitted over larger distances than DC systems allowed, and thus large power stations could be built. One of the progenitors of long-distance power transmission was the Lauffen-to-Frankfurt line, which spanned 109 miles. The Lauffen–Frankfurt demonstration showed how three-phase power could be effectively applied to transmit power over long distances. Three-phase power was the product of years of research in power distribution, and Lauffen–Frankfurt was the first exhibition to show its future potential.
The engineering knowledge needed to perform these tasks draws on several fields of engineering, including mechanical, electrical, nuclear and civil engineering. When power plants were first being built, the engineering tasks needed to create these facilities fell mainly to mechanical, civil, and electrical engineers, whose disciplines allowed for the planning and construction of power plants. The creation of nuclear power plants then introduced nuclear engineers, who perform the calculations necessary to maintain safety standards.
Governing principles
First Law of Thermodynamics
In simple terms, the first law of thermodynamics states that energy cannot be created nor destroyed; however, energy can be converted from one form to another. This is especially important in power generation because power production in nearly all types of power plants relies upon the use of a generator. Generators are used to convert mechanical energy into electrical energy; for example, wind turbines utilize large blades connected to a shaft which turns the generator when rotated. The generator then creates electricity due to the interaction of a conductor within a magnetic field. In this case, the mechanical energy generated by the wind is converted, through the generator, into electric energy. Most power plants rely on these conversions to create usable electric power.
Second law of thermodynamics
The second law of thermodynamics states that the entropy of a closed system can never decrease. As the law relates to power plants, it dictates that heat flows from a body at high temperature to a body at low temperature, and it is this flow that is harnessed by the device in which electricity is being generated. This law is particularly pertinent to thermal power plants, which derive their energy from the combustion of a fuel source.
Types of power plants
All power plants are created with the same goal: to produce electric power as efficiently as possible. However, as technology has evolved, the sources of energy used in power plants have evolved as well. The introduction of more renewable and sustainable forms of energy has spurred the improvement and creation of certain types of power plants.
Hydroelectric power plants
Hydroelectric power plants generate power using the force of water to turn generators. They can be categorized into three different types: impoundment, diversion and pumped storage. Impoundment and diversion hydroelectric power plants operate similarly in that each involves creating a barrier to keep water from flowing at an uncontrollable rate, and then controlling the flow rate of water passing through turbines to create electricity at an ideal level. Hydraulic civil engineers perform the flow-rate and other volumetric calculations necessary to turn the generators to the electrical engineers' specifications. Pumped-storage hydroelectric power plants operate in a similar manner but only function at peak hours of power demand: during off-peak hours the water is pumped uphill, then released at peak hours to flow from a high to a low elevation to turn turbines. The engineering knowledge required to assess the performance of pumped-storage hydroelectric power plants is very similar to that of the impoundment and diversion power plants.
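The core sizing relation these calculations rest on is the standard hydropower formula P = η·ρ·g·Q·H; the sketch below applies it with made-up plant figures:

```python
rho, g = 1000.0, 9.81   # water density (kg/m^3) and gravity (m/s^2)
Q = 250.0               # flow rate through the turbines, m^3/s (assumed)
H = 80.0                # net head (height difference), m (assumed)
eta = 0.90              # combined turbine/generator efficiency (assumed)

P = eta * rho * g * Q * H           # power in watts
print(f"{P / 1e6:.0f} MW")          # about 177 MW for these figures
```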
Thermal power plants
Thermal power plants are split into two different categories: those that create electricity by burning fuel and those that create electricity via a prime mover. A common example of a thermal power plant that produces electricity by the consumption of fuel is the nuclear power plant. Nuclear power plants use a nuclear reactor's heat to turn water into steam. This steam is sent through a turbine which is connected to an electric generator to generate electricity. Nuclear power plants account for 20% of America's electricity generation. Another example of a fuel-burning power plant is the coal power plant. Coal power plants generate 50% of the United States' electricity supply. Coal power plants operate in a manner similar to nuclear power plants in that the heat from the burning coal powers a steam turbine and electric generator. Several types of engineers work in a thermal power plant. Mechanical engineers maintain the performance of the plant while keeping it in operation. Nuclear engineers generally handle fuel efficiency and the disposal of nuclear waste; in nuclear power plants they work directly with nuclear equipment. Electrical engineers deal with the power-generating equipment as well as the associated calculations.
Solar power plants
Solar power plants derive their energy from sunlight, which is made accessible via photovoltaics (PVs). Photovoltaic panels, or solar panels, are constructed using photovoltaic cells made of silicon-based materials that release electrons when energized by sunlight. The resulting flow of electrons generates electricity within the cell. While PVs are an efficient method of producing electricity, they do wear out after roughly a decade and thus must be replaced; however, their efficiency, cost of operation, and lack of noise and physical pollutants make them one of the cleanest and least expensive forms of energy. Solar power plants require the work of many facets of engineering: electrical engineers are especially crucial in constructing the solar panels and connecting them into a grid, computer engineers program the systems that manage the cells so that electricity can be produced effectively and efficiently, and civil engineers play the very important role of identifying areas where solar plants are able to collect the most energy.
Wind power plants
Wind power plants, also known as wind turbines, derive their energy from the wind by connecting a generator to the fan blades and using the rotational motion caused by wind to power the generator. The generated power is then fed into the power grid. Wind power plants can be implemented on large, open expanses of land or on large bodies of water such as the ocean; they rely on being in areas that experience significant amounts of wind. Technically, wind turbines are a form of solar power in that they rely on pressure differentials caused by uneven heating of the Earth's atmosphere. Wind turbines draw on the knowledge of mechanical, electrical, and civil engineers. Knowledge of fluid dynamics, supplied by mechanical engineers, is crucial in determining the viability of locations for wind turbines. Electrical engineers ensure that power generation and transmission are possible. Civil engineers are important in the construction and utilization of wind turbines.
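The viability calculation rests on the standard wind-power relation P = ½·ρ·A·v³·C_p; a sketch with illustrative turbine figures:

```python
import math

rho = 1.225        # air density at sea level, kg/m^3
radius = 50.0      # rotor radius, m (assumed)
v = 10.0           # wind speed, m/s (assumed)
cp = 0.40          # power coefficient; the Betz limit is about 0.593

area = math.pi * radius**2          # swept rotor area, m^2
P = 0.5 * rho * area * v**3 * cp    # extracted power, watts
print(f"{P / 1e6:.1f} MW")          # about 1.9 MW for these figures
```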
Education
Power plant engineering covers a broad spectrum of engineering disciplines. The field draws on mechanical, chemical, electrical, nuclear, and civil engineering.
Mechanical
Mechanical engineers work to maintain and control the machinery that is used to power the plant. To work in this field, mechanical engineers require a bachelor's degree in engineering and a license, obtained by passing both the Fundamental Engineering Exam (FE) and the Professional Engineering Exam (PE). Mechanical engineers take on additional roles depending on their career path. When working in thermal power plants, mechanical engineers make sure heavy machinery, like boilers and turbines, is working in optimal condition and power is continually generated. Mechanical engineers also work with the operations of the plant. In nuclear and hydraulic power plants the engineers ensure that heavy machinery is maintained and preventive maintenance is performed.
Electrical
Electrical engineers work with electrical appliances, making sure electronic instruments and equipment perform to company and state-level standards. They require licenses, obtained by passing both the Fundamental Engineering Exam (FE) and the Professional Engineering Exam (PE). It is also preferred that they have a bachelor's degree approved by the Accreditation Board for Engineering and Technology, Inc. (ABET) and field experience before getting an entry-level position.
Nuclear
Nuclear engineers develop and research methods, machinery and systems concerning radiation and energy at the subatomic level. They require on-site experience and a bachelor's degree in engineering. These engineers work in nuclear power plants and require licenses for practice while working in the power plant: work experience, passing the Fundamental Engineering Exam (FE) and the Professional Engineering Exam (PE), and a degree from a school approved by the Accreditation Board for Engineering and Technology, Inc. (ABET). Nuclear engineers work with the handling of nuclear material and the operations of a nuclear power plant. These operations can range from the handling of nuclear waste to nuclear material experiments and the design of nuclear equipment.
Civil
Civil engineers focus on the power plant's construction, expenses, and building. Civil engineers require passing the Fundamental Engineering Exam (FE) and the Professional Engineering Exam (PE), and a degree from a school approved by the Accreditation Board for Engineering and Technology, Inc. (ABET). They ensure the soundness of the power plant's structure, its location, and its design and safety.
Associations
While there are many disparities between the aforementioned engineering disciplines, they all cover material related to heat or electricity transmission. Obtaining a degree from an ABET accredited school in any one of these disciplines is essential to becoming a power plant engineer. There are also many associations which qualified engineers can join, including the American Society of Mechanical Engineers (ASME), the Institute of Electric and Electronic Engineers (IEEE), and the American Society of Power Engineers (ASOPE).
Fields
Power plant operation and maintenance consists of optimizing the efficiency and power output of power plants and ensuring long-term operation. These power plants are large-scale and used to supply power for communities and industry; individual household electric power generators are not included.
Power station design consists of the design of new power plant systems. There are many types of power plants, and each type requires specific expertise, as well as interdisciplinary teamwork, to build a modern system.
See also
Power engineering
Mechanical engineering
Electrical engineering
Civil engineering
Photovoltaics
Thermal power station
Hydroelectricity
First law of thermodynamics
Second law of thermodynamics
Wind power
References
Brighthub Engineering. Retrieved 2018-04-18.
External links
American Society of Power Engineers
American Society of Mechanical Engineers
Institute of Electric and Electronics Engineers
Industrial engineering
Power engineering
Engineering disciplines | Power plant engineering | [
"Engineering"
] | 2,344 | [
"Energy engineering",
"Industrial engineering",
"nan",
"Power engineering",
"Electrical engineering"
] |
57,169,339 | https://en.wikipedia.org/wiki/Bing%E2%80%93Borsuk%20conjecture | In mathematics, the Bing–Borsuk conjecture states that every n-dimensional homogeneous absolute neighborhood retract space is a topological manifold. The conjecture has been proved for dimensions 1 and 2, and it is known that the 3-dimensional version of the conjecture implies the Poincaré conjecture.
Definitions
A topological space X is homogeneous if, for any two points m₁, m₂ ∈ X, there is a homeomorphism of X which takes m₁ to m₂.
A metric space M is an absolute neighborhood retract (ANR) if, for every closed embedding f : M → N (where N is a metric space), there exists an open neighbourhood U of the image f(M) which retracts to f(M).
There is an alternate statement of the Bing–Borsuk conjecture: suppose M is embedded in a Euclidean space of sufficiently high dimension and this embedding can be extended to an embedding of M × (−ε, ε). If M has a mapping cylinder neighbourhood N = C_φ of some map φ : ∂N → M with mapping cylinder projection π : N → M, then π is an approximate fibration.
History
The conjecture was first made in a paper by R. H. Bing and Karol Borsuk in 1965, who proved it for n = 1 and 2.
Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true.
The Busemann conjecture states that every Busemann G-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4.
References
Topology
Conjectures
Unsolved problems in mathematics
Manifolds | Bing–Borsuk conjecture | [
"Physics",
"Mathematics"
] | 312 | [
"Unsolved problems in mathematics",
"Space (mathematics)",
"Topological spaces",
"Conjectures",
"Topology",
"Space",
"Manifolds",
"Geometry",
"Spacetime",
"Mathematical problems"
] |
57,172,319 | https://en.wikipedia.org/wiki/Busemann%20G-space | In mathematics, a Busemann G-space is a type of metric space first described by Herbert Busemann in 1942.
If (X, d) is a metric space such that
for every two distinct x, y ∈ X there exists z ∈ X distinct from x and y such that d(x, z) + d(z, y) = d(x, y) (Menger convexity),
every d-bounded set of infinite cardinality possesses accumulation points,
for every w ∈ X there exists ρ_w > 0 such that for any distinct points x, y ∈ B(w, ρ_w) there exists z distinct from x and y such that d(x, y) + d(y, z) = d(x, z) (geodesics are locally extendable),
for any distinct points x, y ∈ X, if points z₁ and z₂ satisfy d(x, y) + d(y, z_i) = d(x, z_i) for i = 1, 2 and d(y, z₁) = d(y, z₂), then z₁ = z₂ (geodesic extensions are unique),
then X is said to be a Busemann G-space. Every Busemann G-space is a homogeneous space.
The Busemann conjecture states that every Busemann G-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4.
References
Metric spaces
Topology
Manifolds | Busemann G-space | [
"Physics",
"Mathematics"
] | 178 | [
"Mathematical structures",
"Space (mathematics)",
"Metric spaces",
"Topological spaces",
"Topology",
"Space",
"Manifolds",
"Geometry",
"Spacetime"
] |
45,559,478 | https://en.wikipedia.org/wiki/Dioxidanylium | Dioxidanylium, which is protonated molecular oxygen, or just protonated oxygen, is an ion with the formula HO2+ (also written O2H+).
It is formed when hydrogen-containing substances combust, and it exists in the ionosphere and in plasmas that contain oxygen and hydrogen. Oxidation by O2 in superacids could proceed by way of the production of protonated molecular oxygen.
It is the conjugate acid of dioxygen. The proton affinity of dioxygen (O2) is 4.4 eV.
Significance
Protonated molecular oxygen is of interest in trying to detect dioxygen in space. Because Earth's atmosphere is full of O2, its spectrum from a space object is impossible to observe from the ground. However, HO2+ should be much more detectable.
Formation
Reaction of dioxygenyl with hydrogen:
O2+ + H2 → HO2+ + H•
The reaction of the trihydrogen cation with dioxygen is approximately thermoneutral:
O2 + H3+ → HO2+ + H2
When atomic hydrogen, created in an electric discharge, is rapidly cooled with oxygen and condensed in solid neon, several reactive ions and molecules are produced. These include HO2 (hydroperoxyl), HOHOH−, H2O(HO) and HOHO−, as well as HO2+. This reaction also forms hydrogen peroxide (H2O2) and hydrogen tetroxide (H2O4).
Properties
In the infrared spectrum, the ν1 band due to the O–H stretch has a band head at 3016.73 cm−1.
Reactions
A helium complex (He–O2H+) is also known.
HO2+ appears to react rapidly with hydrogen:
HO2+ + H2 → O2 + H3+
HO2+ also reacts with dinitrogen and water:
HO2+ + H2O → O2 + H3O+
Related
The protonated molecular oxygen dimer has a lower energy than that of protonated molecular oxygen.
References
Reactive oxygen species
Cations
Oxoacids | Dioxidanylium | [
"Physics",
"Chemistry"
] | 397 | [
"Cations",
"Ions",
"Matter"
] |
45,563,730 | https://en.wikipedia.org/wiki/Miscibility%20gap | A miscibility gap is a region in a phase diagram for a mixture of components where the mixture exists as two or more phases – any region of composition of mixtures where the constituents are not completely miscible.
The IUPAC Gold Book defines miscibility gap as "Area within the coexistence curve of an isobaric phase diagram (temperature vs composition) or an isothermal phase diagram (pressure vs composition)."
A miscibility gap between isostructural phases may be described as the solvus, a term also used to describe the boundary on a phase diagram between a miscibility gap and other phases.
Thermodynamically, miscibility gaps indicate a maximum (e.g. of Gibbs energy) in the composition range.
Named miscibility gaps
A number of miscibility gaps in phase systems are named, including
The Huttenlocher (found in bytownite, anorthite composition An55–95), Bøggild (in labradorite, An39–48 and An53–63) and peristerite (in oligoclase, ~An5–15) miscibility gaps in the plagioclase feldspars.
A Nishizawa horn, a term for a miscibility gap that exists when phases with different magnetic properties co-exist in the phase diagram.
Miscibility gaps in liquid states can cause spinodal decomposition, commonly referred to as oiling out, as commonly occurs in oil/water mixtures.
See also
Miscibility
Solid solution
Incongruent melting
References
Materials science
Phase transitions
Geochemistry | Miscibility gap | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 337 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Critical phenomena",
"Materials science",
"Phases of matter",
"nan",
"Statistical mechanics",
"Matter"
] |
45,570,695 | https://en.wikipedia.org/wiki/YASA%20Limited | YASA is a British manufacturer of electric motors and motor controllers for use in automotive and industrial applications. The company was founded in 2009 by its CTO, Dr Tim Woolmer, who holds a number of related motor technology patents.
Although initial commercial adoption was in high-performance cars, markets for YASA e-motors and generators now include the off-road, marine, industrial and aerospace sectors.
History
YASA Limited (formerly YASA Motors Limited) was founded in September 2009 to commercialise a permanent-magnet axial-flux electric motor (YASA stands for Yokeless and Segmented Armature). The motor was developed for the Morgan LIFEcar in 2008 by Dr Malcolm McCulloch and Dr Tim Woolmer, then a PhD student, at the University of Oxford. In 2015, YASA Motors launched the P400 Series of motors in serial production for volume manufacturers. In January 2018, YASA's first series-production facility, capable of 100,000 units per year, was officially opened by Greg Clark, UK Secretary of State for Business, Energy and Industrial Strategy.
In May 2019, the company announced "Ferrari selects YASA electric motor for SF90 Stradale, the company's first hybrid production series supercar"
On 22 July 2021 YASA Limited was acquired by Mercedes-Benz.
On 21 May 2024, YASA received approval to establish its new headquarters at a former RAF base in Bicester, Oxfordshire.
Products and applications
YASA offer a range of off-the-shelf and custom motors for use in a number of applications such as electric (BEV) and hybrid vehicle drivetrain, power generation and hydraulics replacement systems.
Standard Motors and Controllers
YASA's standard electric motors have been used in several high-performance cars such as the Drive eO PP03 (the first EV to win the Pikes Peak International Hill Climb outright), Jaguar C-X75, Koenigsegg Regera, and a Lola Le Mans Prototype converted by Drayson Racing, which set a world electric land speed record in 2013.
YASA P400 Series
The YASA P400 series of electric motors produces up to peak power at 700 V and of peak torque at 450 Amps. At this peak power, the off-the-shelf P400 Series achieves a power density of , with continuous rating of up to . The stator of the P400 series motors is oil cooled, and can optionally include additional air cooling.
YASA 750 R
The YASA 750 R is the larger and more powerful electric motor in YASA's standard range, producing up to of peak power at 700 V and of peak torque. Continuous operating power for the 750 R is stated at up to .
Custom Powertrain Solutions: E-Motors, Controllers & Integrated Electric Drive Units (EDU)
As well as standard products, YASA designs and manufactures e-motors that are fully integrated into the drivetrain of their OEM and Tier 1 automotive customers. The e-motors feature power densities up to in vehicle applications that include P2 Hybrid Vehicle Powertrain, P4 traction motor for e-axle and REx (range-extension).
References
External links
Engineering companies of the United Kingdom
Manufacturing companies of the United Kingdom
Manufacturing companies established in 2009
Electric motor manufacturers
Electric vehicle industry
Electrical engineering companies
Industrial machine manufacturers
2009 establishments in England
British brands | YASA Limited | [
"Engineering"
] | 684 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
43,707,763 | https://en.wikipedia.org/wiki/Hot%20hardness | In materials engineering and metallurgy, hot hardness or red hardness (so called because a metal glows a dull red from the heat) refers to the hardness of a material at high temperatures. As the temperature of the material increases, hardness decreases, and at some point a drastic change in hardness occurs. The hardness at this point is termed the hot or red hardness of that material. Such changes can be seen in materials such as heat-treated alloys.
References
Hardness tests
Solid mechanics | Hot hardness | [
"Physics",
"Materials_science"
] | 93 | [
"Solid mechanics",
"Classical mechanics stubs",
"Classical mechanics",
"Materials testing",
"Mechanics",
"Hardness tests"
] |
43,708,627 | https://en.wikipedia.org/wiki/MPMC | Massively Parallel Monte Carlo (MPMC) is a Monte Carlo method package primarily designed to simulate liquids, molecular interfaces, and functionalized nanoscale materials. It was developed originally by Jon Belof and is now maintained by a group of researchers in the Department of Chemistry and SMMARTT Materials Research Center at the University of South Florida. MPMC has been applied to the scientific research challenges of nanomaterials for clean energy, carbon sequestration, and molecular detection. Developed to run efficiently on the most powerful supercomputing platforms, MPMC can scale to extremely large numbers of CPUs or GPUs (with support provided for NVidia's CUDA architecture). Since 2012, MPMC has been released as an open-source software project under the GNU General Public License (GPL) version 3, and the repository is hosted on GitHub.
History
MPMC was originally written by Jon Belof (then at the University of South Florida) in 2007 for applications toward the development of nanomaterials for hydrogen storage. Since then MPMC has been released as an open source project and been extended to include a number of simulation methods relevant to statistical physics. The code is now further maintained by a group of researchers (Christian Cioce, Keith McLaughlin, Brant Tudor, Adam Hogan and Brian Space) in the Department of Chemistry and SMMARTT Materials Research Center at the University of South Florida.
Features
MPMC is optimized for the study of nanoscale interfaces. MPMC supports simulation of Coulomb and Lennard-Jones systems, many-body polarization, coupled-dipole van der Waals, quantum rotational statistics, semi-classical quantum effects, advanced importance sampling methods relevant to fluids, and numerous tools for the development of intermolecular potentials. The code is designed to efficiently run on high-performance computing resources, including the network of some of the most powerful supercomputers in the world made available through the National Science Foundation supported project Extreme Science and Engineering Discovery Environment (XSEDE).
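To illustrate the class of calculation involved (not MPMC's own input format or API), here is a generic Metropolis Monte Carlo sketch for a small Lennard-Jones fluid in reduced units (ε = σ = 1) with periodic boundaries; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, T, steps = 30, 6.0, 1.2, 5000   # particles, box length, temperature

def total_energy(pos):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                          # minimum-image convention
    r2 = (d**2).sum(-1)[np.triu_indices(len(pos), 1)]
    inv6 = 1.0 / r2**3
    return float(np.sum(4.0 * (inv6**2 - inv6)))      # Lennard-Jones 12-6

pos = rng.uniform(0, L, (N, 3))
E, accepted = total_energy(pos), 0
for _ in range(steps):
    trial = pos.copy()
    i = rng.integers(N)
    trial[i] = (trial[i] + rng.uniform(-0.2, 0.2, 3)) % L
    dE = total_energy(trial) - E
    if dE <= 0 or rng.random() < np.exp(-dE / T):     # Metropolis criterion
        pos, E, accepted = trial, E + dE, accepted + 1

print(f"acceptance = {accepted / steps:.2f}, U/N = {E / N:.3f}")
```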
Applications
MPMC has been applied to the scientific challenges of discovering nanomaterials for clean energy applications, capturing and sequestering carbon dioxide, designing tailored organometallic materials for chemical weapons detection, and quantum effects in cryogenic hydrogen for spacecraft propulsion. Also simulated and published have been the solid, liquid, supercritical, and gaseous states of matter of nitrogen (N2) and carbon dioxide (CO2).
See also
References
External links
Monte Carlo particle physics software
Science software for Linux
Computational physics
Monte Carlo methods
Theoretical chemistry
Stochastic models
Molecular modelling
Free software programmed in C | MPMC | [
"Physics",
"Chemistry"
] | 537 | [
"Molecular physics",
"Monte Carlo methods",
"Computational physics",
"Theoretical chemistry",
"Molecular modelling",
"nan"
] |
39,519,079 | https://en.wikipedia.org/wiki/Electromagnetic%20pulse | An electromagnetic pulse (EMP), also referred to as a transient electromagnetic disturbance (TED), is a brief burst of electromagnetic energy. The origin of an EMP can be natural or artificial, and can occur as an electromagnetic field, as an electric field, as a magnetic field, or as a conducted electric current. The electromagnetic interference caused by an EMP can disrupt communications and damage electronic equipment. An EMP such as a lightning strike can physically damage objects such as buildings and aircraft. The management of EMP effects is a branch of electromagnetic compatibility (EMC) engineering.
The first recorded damage from an electromagnetic pulse came with the solar storm of August 1859, or the Carrington Event.
In modern warfare, weapons delivering a high-energy EMP are designed to disrupt communications equipment, the computers needed to operate modern warplanes, or even to put the entire electrical network of a target country out of commission.
General characteristics
An electromagnetic pulse is a short surge of electromagnetic energy. Its short duration means that it will be spread over a range of frequencies. Pulses are typically characterized by:
The mode of energy transfer (radiated, electric, magnetic or conducted).
The range or spectrum of frequencies present.
Pulse waveform: shape, duration and amplitude.
The frequency spectrum and the pulse waveform are interrelated via the Fourier transform which describes how component waveforms may sum to the observed frequency spectrum.
Types of energy
EMP energy may be transferred in any of four forms:
Electric field
Magnetic field
Electromagnetic radiation
Electrical conduction
According to Maxwell's equations, a pulse of electric energy will always be accompanied by a pulse of magnetic energy. In a typical pulse, either the electric or the magnetic form will dominate. It can be shown that the non-linear Maxwell's equations can have time-dependent self-similar electromagnetic shock wave solutions where the electric and the magnetic field components have a discontinuity.
In general, only radiation acts over long distances, with the magnetic and electric fields acting over short distances. There are a few exceptions, such as a solar magnetic flare.
Frequency ranges
A pulse of electromagnetic energy typically comprises many frequencies from very low to some upper limit depending on the source. The range defined as EMP, sometimes referred to as "DC [direct current] to daylight", excludes the highest frequencies comprising the optical (infrared, visible, ultraviolet) and ionizing (X and gamma rays) ranges.
Some types of EMP events can leave an optical trail, such as lightning and sparks, but these are side effects of the current flow through the air and are not part of the EMP itself.
Pulse waveforms
The waveform of a pulse describes how its instantaneous amplitude (field strength or current) changes over time. Real pulses tend to be quite complicated, so simplified models are often used. Such a model is typically described either in a diagram or as a mathematical equation.
Most electromagnetic pulses have a very sharp leading edge, building up quickly to their maximum level. The classic model is a double-exponential curve which climbs steeply, quickly reaches a peak and then decays more slowly. However, pulses from a controlled switching circuit often approximate the form of a rectangular or "square" pulse.
EMP events usually induce a corresponding signal in the surrounding environment or material. Coupling usually occurs most strongly over a relatively narrow frequency band, leading to a characteristic damped sine wave. Visually it is shown as a high frequency sine wave growing and decaying within the longer-lived envelope of the double-exponential curve. A damped sinewave typically has much lower energy and a narrower frequency spread than the original pulse, due to the transfer characteristic of the coupling mode. In practice, EMP test equipment often injects these damped sinewaves directly rather than attempting to recreate the high-energy threat pulses.
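For illustration, the two model waveforms described above can be generated and related to their spectrum numerically; the time constants below are illustrative choices, not taken from any particular standard:

```python
import numpy as np

t = np.linspace(0, 1e-6, 4000)                     # 1 microsecond window
alpha, beta = 4.0e7, 6.0e8                         # decay / rise rates, 1/s
pulse = np.exp(-alpha * t) - np.exp(-beta * t)     # double-exponential pulse

f0, tau = 20e6, 2e-7                               # ring frequency and decay
ring = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)   # damped sine response

# The Fourier transform links the pulse waveform to its frequency spectrum.
spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
rolloff = freqs[np.argmax(spectrum < spectrum[0] / np.sqrt(2))]
print(f"approx. -3 dB point near {rolloff / 1e6:.0f} MHz")
```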
In a pulse train, such as from a digital clock circuit, the waveform is repeated at regular intervals. A single complete pulse cycle is sufficient to characterise such a regular, repetitive train.
Types
An EMP arises where the source emits a short-duration pulse of energy. The energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the surrounding environment. Some types are generated as repetitive and regular pulse trains.
Different types of EMP arise from natural, man-made, and weapons effects.
Types of natural EMP events include:
Lightning electromagnetic pulse (LEMP). The discharge is typically an initial current flow of perhaps millions of amps, followed by a train of pulses of decreasing energy.
Electrostatic discharge (ESD), as a result of two charged objects coming into proximity or even contact.
Meteoric EMP. The discharge of electromagnetic energy resulting from either the impact of a meteoroid with a spacecraft or the explosive breakup of a meteoroid passing through the Earth's atmosphere.
Coronal mass ejection (CME), sometimes referred to as a solar EMP. A burst of plasma and accompanying magnetic field, ejected from the solar corona and released into the solar wind.
Types of (civil) man-made EMP events include:
Switching action of electrical circuitry, whether isolated or repetitive (as a pulse train).
Electric motors can create a train of pulses as the internal electrical contacts make and break connections as the armature rotates.
Gasoline engine ignition systems can create a train of pulses as the spark plugs are energized or fired.
Continual switching actions of digital electronic circuitry.
Power line surges. These can be up to several kilovolts, enough to damage electronic equipment that is insufficiently protected.
Types of military EMP include:
Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion. A variant of this is the high altitude nuclear EMP (HEMP), which produces a secondary pulse due to particle interactions with the Earth's atmosphere and magnetic field.
Non-nuclear electromagnetic pulse (NNEMP) weapons.
Lightning
Lightning is unusual in that it typically has a preliminary "leader" discharge of low energy building up to the main pulse, which in turn may be followed at intervals by several smaller bursts.
Electrostatic discharge (ESD)
ESD events are characterized by high voltages of many kV but small currents, and they sometimes cause visible sparks. ESD is treated as a small, localized phenomenon, although technically a lightning flash is a very large ESD event. ESD can also be man-made, as in the shock received from a Van de Graaff generator.
An ESD event can damage electronic circuitry by injecting a high-voltage pulse, besides giving people an unpleasant shock. Such an ESD event can also create sparks, which may in turn ignite fires or fuel-vapour explosions. For this reason, before refueling an aircraft or exposing any fuel vapor to the air, the fuel nozzle is first connected to the aircraft to safely discharge any static.
Switching pulses
The switching action of an electrical circuit creates a sharp change in the flow of electricity. This sharp change is a form of EMP.
Simple electrical sources include inductive loads such as relays, solenoids, and brush contacts in electric motors. These typically send a pulse down any electrical connections present, as well as radiating a pulse of energy. The amplitude is usually small and the signal may be treated as "noise" or "interference". The switching off or "opening" of a circuit causes an abrupt change in the current flowing. This can in turn cause a large pulse in the electric field across the open contacts, causing arcing and damage. It is often necessary to incorporate design features to limit such effects.
Electronic devices such as vacuum tubes or valves, transistors, and diodes can also switch on and off very quickly, causing similar issues. One-off pulses may be caused by solid-state switches and other devices used only occasionally. However, the many millions of transistors in a modern computer may switch repeatedly at frequencies above 1 GHz, causing interference that appears to be continuous.
Nuclear electromagnetic pulse (NEMP)
A nuclear electromagnetic pulse is the abrupt pulse of electromagnetic radiation resulting from a nuclear explosion. The resulting rapidly changing electric fields and magnetic fields may couple with electrical/electronic systems to produce damaging current and voltage surges.
The intense gamma radiation emitted can also ionize the surrounding air, creating a secondary EMP as the atoms of air first lose their electrons and then regain them.
NEMP weapons are designed to maximize such EMP effects as the primary damage mechanism, and some are capable of destroying susceptible electronic equipment over a wide area.
A high-altitude electromagnetic pulse (HEMP) weapon is a NEMP warhead designed to be detonated far above the Earth's surface. The explosion releases a blast of gamma rays into the mid-stratosphere, which ionizes the atmosphere as a secondary effect; the resultant energetic free electrons interact with the Earth's magnetic field to produce a much stronger EMP than is normally produced in the denser air at lower altitudes.
Non-nuclear electromagnetic pulse (NNEMP)
Non-nuclear electromagnetic pulse (NNEMP) is a weapon-generated electromagnetic pulse without use of nuclear technology. Devices that can achieve this objective include a large low-inductance capacitor bank discharged into a single-loop antenna, a microwave generator, and an explosively pumped flux compression generator. To achieve the frequency characteristics of the pulse needed for optimal coupling into the target, wave-shaping circuits or microwave generators are added between the pulse source and the antenna. Vircators are vacuum tubes that are particularly suitable for microwave conversion of high-energy pulses.
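As a back-of-the-envelope sketch of why such wave shaping is needed, a capacitor bank discharged into a loop antenna rings at its natural frequency f = 1/(2π√(LC)); with the illustrative (assumed) component values below, this lands in the kilohertz range, far below microwave frequencies, which is one reason converters such as vircators are inserted between the pulse source and the antenna.

```python
import math

C = 100e-6   # capacitor bank, farads (assumed value)
L = 50e-9    # single-turn loop inductance, henries (assumed value)

# Natural ringing frequency of the discharge circuit
f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"ring frequency: {f / 1e3:.0f} kHz")   # roughly 71 kHz here
```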
NNEMP generators can be carried as a payload of bombs, cruise missiles (such as the CHAMP missile) and drones, with diminished mechanical, thermal and ionizing radiation effects, but without the consequences of deploying nuclear weapons.
The range of NNEMP weapons is much less than nuclear EMP. Nearly all NNEMP devices used as weapons require chemical explosives as their initial energy source, producing only one millionth the energy of nuclear explosives of similar weight. The electromagnetic pulse from NNEMP weapons must come from within the weapon, while nuclear weapons generate EMP as a secondary effect. These facts limit the range of NNEMP weapons, but allow finer target discrimination. The effect of small e-bombs has proven to be sufficient for certain terrorist or military operations. Examples of such operations include the destruction of electronic control systems critical to the operation of many ground vehicles and aircraft.
The concept of the explosively pumped flux compression generator for generating a non-nuclear electromagnetic pulse was conceived as early as 1951 by Andrei Sakharov in the Soviet Union, but nations kept work on non-nuclear EMP classified until similar ideas emerged in other nations.
Effects
Minor EMP events, and especially pulse trains, cause low levels of electrical noise or interference which can affect the operation of susceptible devices. For example, a common problem in the mid-twentieth century was interference emitted by the ignition systems of gasoline engines, which caused radio sets to crackle and TV sets to show stripes on the screen. CISPR 25 was established to set threshold standards that vehicles must meet for electromagnetic interference (EMI) emissions.
At a high voltage level an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions and precautions must be taken to prevent them.
A large and energetic EMP can induce high currents and voltages in the victim unit, temporarily disrupting its function or even permanently damaging it.
A powerful EMP can also directly affect magnetic materials and corrupt the data stored on media such as magnetic tape and computer hard drives. Hard drives are usually shielded by heavy metal casings. Some IT asset disposal service providers and computer recyclers use a controlled EMP to wipe such magnetic media.
A very large EMP event, such as a lightning strike or an air-burst nuclear weapon, is also capable of damaging objects such as trees, buildings and aircraft directly, either through heating effects or the disruptive effects of the very large magnetic field generated by the current. An indirect effect can be electrical fires caused by heating. Most engineered structures and systems require some form of protection against lightning to be designed in. A Faraday shield is a good means of protecting particularly sensitive items from destruction.
Control
Like any electromagnetic interference, the threat from EMP is subject to control measures. This is true whether the threat is natural or man-made.
Natural events and enemy weapons cannot be controlled at the source, so most control measures focus on the susceptibility of equipment to EMP effects, and on hardening or protecting it from harm. Man-made sources, other than weapons, are also subject to control measures in order to limit the amount of pulse energy emitted.
The discipline of ensuring correct equipment operation in the presence of EMP and other RF threats is known as electromagnetic compatibility (EMC).
Test simulation
To test the effects of EMP on engineered systems and equipment, an EMP simulator may be used.
Induced pulse simulation
Induced pulses are of much lower energy than threat pulses and so are more practicable to create, but they are less predictable. A common test technique is to use a current clamp in reverse, to inject a range of damped sine wave signals into a cable connected to the equipment under test. The damped sine wave generator is able to reproduce the range of induced signals likely to occur.
Threat pulse simulation
Sometimes the threat pulse itself is simulated in a repeatable way. The pulse may be reproduced at low energy in order to characterise the subject's response prior to damped sinewave injection, or at high energy to recreate the actual threat conditions. A small-scale ESD simulator may be hand-held. Bench- or room-sized simulators come in a range of designs, depending on the type and level of threat to be generated.
At the top end of the scale, large outdoor test facilities incorporating high-energy EMP simulators have been built by several countries. The largest facilities are able to test whole vehicles including ships and aircraft for their susceptibility to EMP. Nearly all of these large EMP simulators used a specialized version of a Marx generator. Examples include the huge wooden-structured ATLAS-I simulator (also known as TRESTLE) at Sandia National Labs, New Mexico, which was at one time the world's largest EMP simulator. Papers on this and other large EMP simulators used by the United States during the latter part of the Cold War, along with more general information about electromagnetic pulses, are now in the care of the SUMMA Foundation, which is hosted at the University of New Mexico. The US Navy also has a large facility called the Electro Magnetic Pulse Radiation Environmental Simulator for Ships I (EMPRESS I).
Safety
High-level EMP signals can pose a threat to human safety. In such circumstances, direct contact with a live electrical conductor should be avoided. Where this occurs, such as when touching a Van de Graaff generator or other highly charged object, care must be taken to release the object and then discharge the body through a high resistance, in order to avoid the risk of a harmful shock pulse when stepping away.
Very high electric field strengths can cause breakdown of the air and a potentially lethal arc current similar to lightning to flow, but electric field strengths of up to 200 kV/m are regarded as safe.
According to science writer Edd Gent, a 2019 report by the Electric Power Research Institute, which is funded by utility companies, found that a large EMP attack would probably cause regional blackouts but not a nationwide grid failure, and that recovery times would be similar to those of other large-scale outages. It is not known how long such blackouts would last, or what extent of damage would occur across the country. Neighboring countries of the U.S. could also be affected by such an attack, depending on the area targeted.
According to an article by Naureen Malik, with North Korea's increasingly successful missile and warhead tests in mind, Congress moved to renew funding for the Commission to Assess the Threat to the U.S. from Electromagnetic Pulse Attack as part of the National Defense Authorization Act.
According to reporting by Yoshida Reiji, Onizuka warned in a 2016 article for the Tokyo-based nonprofit organization Center for Information and Security Trade Control that a high-altitude EMP attack would damage or destroy Japan's power, communications and transport systems, as well as disable banks, hospitals and nuclear power plants.
In popular culture
By 1981, a number of articles on electromagnetic pulse in the popular press spread knowledge of the EMP phenomenon into the popular culture. EMP has been subsequently used in a wide variety of fiction and other aspects of popular culture. Popular media often depict EMP effects incorrectly, causing misunderstandings among the public and even professionals. Official efforts have been made in the U.S. to remedy these misconceptions.
The novel One Second After by William R. Forstchen and its sequels One Year After, The Final Day and Five Years After portray the story of a fictional character named John Matherson and his community in Black Mountain, North Carolina, after the US loses a war in which an EMP attack "sends our nation [the US] back to the Dark Ages".
See also
References
Citations
Sources
Katayev, I. G. (1966). Electromagnetic Shock Waves. London: Iliffe Books Ltd.
External links
TRESTLE: Landmark of the Cold War, a short documentary film on the SUMMA Foundation website
Electromagnetic compatibility
Electromagnetic radiation
Electronic warfare
Energy weapons
Nuclear weapons
Pulsed power
Nuclear warfare | Electromagnetic pulse | [
"Physics",
"Chemistry",
"Engineering"
] | 3,610 | [
"Radio electronics",
"Physical phenomena",
"Electromagnetic compatibility",
"Physical quantities",
"Electromagnetic radiation",
"Power (physics)",
"Radiation",
"Nuclear warfare",
"Electrical engineering",
"Pulsed power",
"Radioactivity"
] |
39,521,700 | https://en.wikipedia.org/wiki/Biconic%20cusp | The biconic cusp, also known as the picket fence reactor, was one of the earliest suggestions for plasma confinement in a fusion reactor. It consists of two parallel electromagnets with the current running in opposite directions, creating oppositely directed magnetic fields. The two fields interact to form a "null area" between them where the fusion fuel can be trapped.
The concept arose as a reaction to an issue raised by Edward Teller in 1953. Teller noted that any design that had the plasma held on the inside of concave magnetic fields would be naturally unstable. The cusp concept had fields that were convex, and the plasma was held within an area of little or no field in the inside of the device. The concept was independently presented in 1954 by both Harold Grad at the Courant Institute in New York and James L. Tuck at Los Alamos.
At first there was little interest in the design because Teller's problem was not being seen in other early fusion machines. By the late 1950s it was clear these machines all had serious problems, and Teller's was only one of many. This led to renewed interest in the cusp, and several machines were built to test the concept through the early 1960s. All of these devices leaked their fuel plasma at rates much greater than predicted and most work on the concept ended by the mid-1960s. Mikhail Ioffe later demonstrated why these problems arose.
A later device that shares some design with the cusp is the polywell concept of the 1990s. This can be thought of as multiple cusps arranged in three dimensions.
History
Early development
In 1953, at a now-famous but then-secret meeting, Edward Teller raised the theoretical issue of the flute instability. This suggested that any fusion machine that confined the plasma on the inside of a curved field, as opposed to the outside of the curvature, would be naturally unstable and rapidly eject its plasma. This sort of "bad curvature" was part of almost all designs of the era, including the z-pinch, the stellarator and the magnetic mirrors. All of these designs had curves with the plasma on the inside of concave fields and were expected to be unstable.
At the time, the very early machines being built did not show evidence of this problem, but were too small to conclusively show it anyway. Other instabilities were being seen, some very serious, but the flute was just not appearing. Nevertheless, a number of researchers began considering new concepts that did not use this sort of field arrangement and would thus be naturally stable. The cusp concept was independently developed in 1954 by James L. Tuck at Los Alamos and Harold Grad at New York University. Tuck's design differed from Grad's largely in that it consisted of a series of cusps placed in a line. A single-cusp version was seen as a simpler device to test the concept, and a magnet assembly for one such machine was built at Los Alamos.
Calculations at Los Alamos noted that the plasma would escape the reactor because the magnetic lines were "open" and ions following a certain trajectory would be free to leave the core. This meant the picket fence would lose plasma at a fast rate, no matter how stable it was, and it would not be useful as a power-producing reactor. Despite this, it could still be useful for experimental purposes if it retained its plasma longer than unstable devices, giving them time to perform measurements that might be impossible in other devices. Grad's work found another solution; although the plasma leakage was fast at low density, at higher density the self-repulsion between the ions and electrons would trap it for much longer times. There appeared to be several ways this might be accomplished.
Before the system was considered further, results from newer versions of the other designs all seemed to be suggesting Teller's issue was simply not being seen, or was at least far below predictions. Among them, the pinch concept had been demonstrating serious problems, but Tuck and others had continued studying the system and were introducing new solutions. The resulting "stabilized pinch" appeared to solve the stability problems and a new series of much larger pinch machines began to be built, headlined by the ZETA reactor in the UK. Interest in the cusp declined as the other approaches appeared to be on the brink of producing fusion. Los Alamos' magnet assembly was placed in storage. Grad's group had also largely abandoned the concept by late 1956.
Renewed interest
In early 1958, the British announced ZETA had produced fusion. Months later, they were forced to publish a retraction, noting that the neutrons they saw were not from fusion events, but a new type of instability that had not been previously seen. Over the next year, similar problems were seen in all of the designs and the illusion of progress was shattered.
As the problems were being studied, the original work on the cusp design was reconsidered. The power supply for the early machine at Los Alamos had been sitting in a warehouse for years, and was then taken out of storage and used to build the single-cusp Picket Fence I. Its simplicity meant similar systems were built at General Atomics, Livermore, Harwell, the University of Utrecht, the Kharkov Institute, the Stevens Institute of Technology, and others.
By 1960, Picket Fence had overcome a number of early problems. Initial results measuring the light being emitted by the hot plasma suggested it was stable for up to 1 millisecond, but further diagnostics demonstrated this was only a few microseconds and the light was the result of a sort of afterglow. Improvements in the device resulted in significant gains, and plasma confinement improved to about 50 microseconds, but this was still far less than desired.
Description
The magnetic fields in this system were made by electromagnets placed close together. This was a theoretical construct used to model how to contain plasma. The fields were made by two coils of wire facing one another. These electromagnets had poles which faced one another and in the center was a null point in the magnetic field. This was also termed a zero point field. These devices were explored theoretically by Dr. Harold Grad at NYU's Courant Institute in the late 1950s and early 1960s. Because the fields were planar symmetric this plasma system was simple to model.
Particle behavior
Simulations of these geometries revealed the existence of three classes of particles. The first class moved back and forth far away from the null point. These particles would be reflected close to the poles of the electromagnets and the plane cusp in the center. This reflection was due to the magnetic mirror effect. These are very stable particles, but their motion changes as they radiate energy over time. This radiation loss arose from acceleration or deceleration by the field and can be calculated using the Larmor formula. The second class moved close to the null point in the center. Because these particles passed through locations with no magnetic field, their motions could be straight, with an infinite gyroradius. This straight motion caused the particle to make a more erratic path through the fields. The third class of particles was a transition between these types. Biconic cusps have recently been revived because of their geometric similarity to the Polywell fusion reactor.
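For reference, two standard results invoked above (given here in SI form, not specific to any cusp geometry) are the Larmor power radiated by a nonrelativistic charge $q$ undergoing acceleration $a$, and the gyroradius of a particle of mass $m$, charge $q$ and perpendicular speed $v_\perp$ in a field $B$:

$$P = \frac{q^2 a^2}{6\pi\varepsilon_0 c^3}, \qquad r_g = \frac{m v_\perp}{|q| B}.$$

The gyroradius diverges as $B \to 0$, which is why motion through the central null region is effectively straight.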
References
Citations
Bibliography
Further reading
Biconic cusp simulation work
Magnetic devices
Fusion power | Biconic cusp | [
"Physics",
"Chemistry"
] | 1,505 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
39,527,337 | https://en.wikipedia.org/wiki/Metric-affine%20gravitation%20theory | In comparison with General Relativity, dynamic variables of metric-affine gravitation theory are both a pseudo-Riemannian metric and a general linear connection on a world manifold . Metric-affine gravitation theory has been suggested as a natural generalization of Einstein–Cartan theory of gravity with torsion where a linear connection obeys the condition that a covariant derivative of a metric equals zero.
Metric-affine gravitation theory straightforwardly comes from gauge gravitation theory where a general linear connection plays the role of a gauge field. Let $TX$ be the tangent bundle over a manifold $X$ provided with bundle coordinates $(x^\mu, \dot x^\mu)$. A general linear connection on $TX$ is represented by a connection tangent-valued form

$$\Gamma = dx^\lambda \otimes \left(\partial_\lambda + \Gamma_\lambda{}^\mu{}_\nu\, \dot x^\nu \dot\partial_\mu\right).$$

It is associated to a principal connection on the principal frame bundle $FX$ of frames in the tangent spaces to $X$ whose structure group is the general linear group $GL(4,\mathbb{R})$. Consequently, it can be treated as a gauge field. A pseudo-Riemannian metric $g = g_{\mu\nu}\, dx^\mu \otimes dx^\nu$ on $X$ is defined as a global section of the quotient bundle $FX/SO(1,3) \to X$, where $SO(1,3)$ is the Lorentz group. Therefore, one can regard it as a classical Higgs field in gauge gravitation theory. Gauge symmetries of metric-affine gravitation theory are general covariant transformations.

It is essential that, given a pseudo-Riemannian metric $g$, any linear connection $\Gamma$ on $TX$ admits a splitting

$$\Gamma_{\mu\nu\alpha} = \{_{\mu\nu\alpha}\} + S_{\mu\nu\alpha} + \tfrac{1}{2} C_{\mu\nu\alpha}$$

in the Christoffel symbols

$$\{_{\mu\nu\alpha}\} = -\tfrac{1}{2}\left(\partial_\mu g_{\nu\alpha} + \partial_\alpha g_{\nu\mu} - \partial_\nu g_{\mu\alpha}\right),$$

a nonmetricity tensor

$$C_{\mu\nu\alpha} = C_{\mu\alpha\nu} = \nabla^\Gamma_\mu g_{\nu\alpha} = \partial_\mu g_{\nu\alpha} + \Gamma_{\mu\nu\alpha} + \Gamma_{\mu\alpha\nu}$$

and a contorsion tensor

$$S_{\mu\nu\alpha} = \tfrac{1}{2}\left(T_{\nu\mu\alpha} + T_{\nu\alpha\mu} + T_{\mu\nu\alpha} + C_{\alpha\nu\mu} - C_{\nu\alpha\mu}\right),$$

where

$$T_{\mu\nu\alpha} = \tfrac{1}{2}\left(\Gamma_{\mu\nu\alpha} - \Gamma_{\alpha\nu\mu}\right)$$

is the torsion tensor of $\Gamma$.
Due to this splitting, metric-affine gravitation theory possesses a different collection of dynamic variables which are a pseudo-Riemannian metric, a nonmetricity tensor and a torsion tensor. As a consequence, a Lagrangian of metric-affine gravitation theory can contain different terms expressed both in a curvature of a connection and in its torsion and nonmetricity tensors. In particular, metric-affine $f(R)$ gravity, whose Lagrangian is an arbitrary function $f$ of a scalar curvature $R$ of $\Gamma$, is considered.
A linear connection $\Gamma$ is called the metric connection for a pseudo-Riemannian metric $g$ if $g$ is its integral section, i.e., the metricity condition

$$\nabla^\Gamma_\mu g_{\nu\alpha} = 0$$

holds. A metric connection reads

$$\Gamma_{\mu\nu\alpha} = \{_{\mu\nu\alpha}\} + \tfrac{1}{2}\left(T_{\nu\mu\alpha} + T_{\nu\alpha\mu} + T_{\mu\nu\alpha}\right).$$
For instance, the Levi-Civita connection in General Relativity is a torsion-free metric connection.
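As a concrete check of the metricity condition, the following sketch (the round 2-sphere metric is chosen purely for illustration) builds the Christoffel symbols of the Levi-Civita connection with sympy and verifies that its nonmetricity $\nabla_\mu g_{\nu\alpha}$ vanishes identically.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on the 2-sphere
ginv = g.inv()
dim = 2

# Christoffel symbols Gamma^l_{m n} of the Levi-Civita connection
Gamma = [[[sum(ginv[l, s] * (sp.diff(g[s, m], x[n]) + sp.diff(g[s, n], x[m])
                             - sp.diff(g[m, n], x[s])) for s in range(dim)) / 2
           for n in range(dim)] for m in range(dim)] for l in range(dim)]

def nonmetricity(mu, nu, alpha):
    """C_{mu nu alpha} = nabla_mu g_{nu alpha} for this connection."""
    expr = sp.diff(g[nu, alpha], x[mu])
    expr -= sum(Gamma[l][mu][nu] * g[l, alpha] for l in range(dim))
    expr -= sum(Gamma[l][mu][alpha] * g[nu, l] for l in range(dim))
    return sp.simplify(expr)

assert all(nonmetricity(m, n, a) == 0
           for m in range(dim) for n in range(dim) for a in range(dim))
print("nonmetricity vanishes: the Levi-Civita connection is metric")
```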
A metric connection is associated to a principal connection on a Lorentz reduced subbundle $F^gX$ of the frame bundle $FX$ corresponding to a section $g$ of the quotient bundle $FX/SO(1,3) \to X$. Restricted to metric connections, metric-affine gravitation theory comes to the above-mentioned Einstein–Cartan gravitation theory.
At the same time, any linear connection $\Gamma$ defines a principal adapted connection $\Gamma^g$ on a Lorentz reduced subbundle $F^gX$ by its restriction to a Lorentz subalgebra of the Lie algebra of the general linear group $GL(4,\mathbb{R})$. For instance, the Dirac operator in metric-affine gravitation theory in the presence of a general linear connection $\Gamma$ is well defined, and it depends only on the adapted connection $\Gamma^g$. Therefore, Einstein–Cartan gravitation theory can be formulated as the metric-affine one, without appealing to the metricity constraint.
In metric-affine gravitation theory, in comparison with the Einstein–Cartan one, the question of a matter source of a nonmetricity tensor arises. This is the so-called hypermomentum, e.g., a Noether current of a scaling symmetry.
See also
Gauge gravitation theory
Einstein–Cartan theory
Affine gauge theory
Classical unified field theories
References
G. Sardanashvily, Classical gauge gravitation theory, Int. J. Geom. Methods Mod. Phys. 8 (2011) 1869–1895;
C. Karahan, A. Altas, D. Demir, Scalars, vectors and tensors from metric-affine gravity, General Relativity and Gravitation 45 (2013) 319–343;
Theories of gravity | Metric-affine gravitation theory | [
"Physics"
] | 802 | [
"Theoretical physics",
"Theories of gravity"
] |
58,609,821 | https://en.wikipedia.org/wiki/Airport%20Connector | Airport Connector is a typical name for roads connecting major highways to airports. It may refer to:
Airport Connector (Harrisburg), a short freeway connecting Pennsylvania Route 283 to Harrisburg International Airport
T. F. Green Airport Connector Road, a short freeway connecting Interstate 95 to T. F. Green Airport near Warwick, Rhode Island
Bradley Airport Connector, a freeway connecting Interstate 91 to Bradley International Airport near Hartford, Connecticut
Harry Reid Airport Connector, a partially limited-access road designated Nevada State Route 171, connecting Harry Reid International Airport to Interstate 215 and Nevada State Route 593 (Tropicana Avenue) in Paradise, Nevada
Hardy Airport Connector, a tolled connection from the Hardy Toll Road to George Bush Intercontinental Airport in Houston, Texas
See also
Airport Tunnel (disambiguation)
Connector | Airport Connector | [
"Engineering"
] | 158 | [
"Airport infrastructure",
"Aerospace engineering"
] |
58,611,869 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28angular%20momentum%29 | The following table lists various orders of magnitude for angular momentum, in Joule-seconds.
Table
See also
Orders of magnitude (rotational speed)
Orders of magnitude (momentum)
Orders of magnitude (magnetic moment)
References
External links
Angular momentum | Orders of magnitude (angular momentum) | [
"Physics",
"Mathematics"
] | 50 | [
"Orders of magnitude",
"Units of measurement",
"Physical quantities",
"Quantity",
"Angular momentum",
"Momentum",
"Moment (physics)"
] |
58,616,210 | https://en.wikipedia.org/wiki/Lu%20Jiaxi%20%28mathematician%29 | Lu Jiaxi (; June 10, 1935 – October 31, 1983) was a self-taught Chinese mathematician who made important contributions in combinatorial design theory. He was a high school physics teacher in a remote city and worked in his spare time on the problem of large sets of disjoint Steiner triple systems.
Biography
Background
Lu Jiaxi was born in a poor family in Shanghai. His father was a seller of soy sauce concentrate. His parents had four children, but the three older children all died early from illness, and Lu Jiaxi was the only surviving child.
When he was in junior middle school, his father died from an illness that the family could not afford to treat, so he started working after finishing junior middle school in 1949 to earn a living. He served an apprenticeship at an automobile hardware firm in Shanghai. In October 1951, he was admitted to a statistics training course in Shenyang offered by the administration for electrical equipment industry of Northeast China, and he finished first in his class. He was then assigned to a motor factory in Harbin.
While working at the factory, he self-studied high school materials. He also learned Russian at a night school, and later English and Japanese to be able to look up literature. In 1956, he joined the fight against the flooding of Songhua River, for which he was commended. In 1957, he passed the college entrance exam and was admitted to Department of Physics of Jilin Normal University, now called Northeast Normal University (not the university that took the same name in 2002).
After graduation in 1961, he was assigned to Baotou Steel and Iron Institute, now called Inner Mongolia University of Science and Technology, as a teaching assistant. In 1962, after reorganization of the institute, he was assigned first to the Teaching and Research Office of Baotou Education Bureau, then to several middle schools in Baotou as a physics teacher. He worked at Baotou Eighth Middle School, Baotou Fifth Middle School, Baotou Twenty-fourth Middle School from 1965 to 1973, and Baotou Ninth Middle School from 1973 to his death in 1983. Because of his physics background and his past experience as a factory worker, he was also in charge of a school-run factory which produced radio components. He married in the summer of 1972 to a doctor introduced by his colleague.
Mathematical research
In the summer of 1956, he read a popular science book on mathematical problems written by Chinese mathematician Sun Zeying (, published under the name J. Tseying Sun) called Shuxue Fangfa Qu Yin () and was fascinated by the Kirkman's schoolgirl problem. He devoted himself to solving the generalized version of the problem, studying relevant areas of mathematics on his own and spending a lot of time on research.
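To make the combinatorial object concrete, the following sketch verifies the 9-girl analogue of Kirkman's 15-schoolgirl problem, a resolvable Steiner triple system of order 9, in which every pair of girls shares a row exactly once over four days. (The schedule below is a standard one, shown here only for illustration.)

```python
from itertools import combinations

days = [
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)],
    [(0, 3, 6), (1, 4, 7), (2, 5, 8)],
    [(0, 4, 8), (1, 5, 6), (2, 3, 7)],
    [(0, 5, 7), (1, 3, 8), (2, 4, 6)],
]

# Each day is a partition of all nine girls into rows of three.
assert all(sorted(g for row in day for g in row) == list(range(9)) for day in days)

# Every pair of girls walks in the same row exactly once across the four days.
pair_counts = {pair: 0 for pair in combinations(range(9), 2)}
for day in days:
    for row in day:
        for pair in combinations(sorted(row), 2):
            pair_counts[pair] += 1
assert all(count == 1 for count in pair_counts.values())
print("valid Kirkman (resolvable Steiner) triple system of order 9")
```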
In December 1961, he wrote up a paper on its solution and another paper on Latin squares and sent them to Institute of Mathematics of the Chinese Academy of Sciences. The reply letter came in February 1963. They did not directly comment on the papers but suggested that he check them himself and send them to journals if the results were new. They also included some references on the latest developments. He revised his paper on the generalized Kirkman's schoolgirl problem and submitted it to Shuxue Tongbao () in March 1963. His paper was too long and technical for the journal, which aimed at middle school teachers. However, it took the journal one year to reply to him, stating that he should submit it elsewhere. After further revisions, he submitted the paper to Acta Mathematica Sinica on March 14, 1965. On February 7, 1966, he received the rejection letter from the journal criticizing it as "basically not really new results, worthless". It was realized after his death that this rejection was a serious mistake. In 1966, he sent two other papers to journals with no response, since with the start of the Cultural Revolution all academic activities were disrupted. He was discouraged by the rejections and wrote in his diary that he "had since then given up the thought of submitting papers".
After the Cultural Revolution, he resubmitted some revised papers, but they were not accepted. In April 1979, in some journal issues of 1974 and 1975 that he managed to borrow from Beijing, he unexpectedly learned from a paper of Haim Hanani that the problem which he solved in his 1965 paper had been solved and first published in 1971 by Ray-Chaudhuri and R. M. Wilson, which was a big blow to him.
He went on to tackle the problem of large sets of disjoint Steiner triple systems. Zhu Lie (), a professor of mathematics at Soochow University working also in combinatorial mathematics, realized the importance of his work and suggested that he submit it to the international journal Journal of Combinatorial Theory, Series A. He wrote to its editorial board that he had essentially solved the problem, and the editors replied to him that if what he said was true, it would be a major achievement. (Many leaders in the field had worked on the problem, starting as early as 1917. Only a few special cases were solved at the time. A note published in 1981 said: "[A]n extensive amount of work has been done on [this] problem ... This problem remains far from settled however".) So he brushed up his English and borrowed a typewriter to type up his work. It was a tremendous task for him as he could type at most four pages a night. He submitted a total of nearly 200 typed pages to the journal.
The journal received six of his series of papers between September 1981 and March 1983 and published the first three in March 1983. The editors informed him that they would also publish his next three papers. He also sent a paper on resolvable balanced incomplete block designs to Acta Mathematica Sinica in August 1979, and a revised version was received by the journal in September 1983. This paper was published in July 1984 and was regarded as equally important by international experts.
In spite of his heavy teaching duties, he carried on with his private mathematical research, often working until after midnight. He also made occasional trips to Beijing to find library resources. He told his colleagues that while he liked physics more, certain material conditions were required for physics research, but he only needed paper to do mathematics.
Unfortunately, laborious work and harsh living conditions made his health deteriorate over time. His family of four lived in a small house of ten-odd square meters. The only table at home was used by his daughters, so he had to do his calculations on a broken kang bed-stove. On his trips to Beijing, he bought hard seat tickets since he could not afford sleeper tickets. He ate his dried food in a library in the daytime and slept on a bench in a train station at night. He sometimes wrote in his diary about how his mental fatigue affected his research and his teaching, and that he needed to get healthier for his research. After he had received dozens of copies of the journal issue containing his first three papers, his family and friends reminded him to get some rest, but he said that he could not since he had not much time. To have a better research environment, he tried to get transferred to university with his friends' help since 1978, but he could not find any suitable position after several years of effort.
Late recognition and death
His Western peers discovered a leader in the field with exceptional achievements, while he still remained largely unknown to the Chinese mathematical community. At the first ever combinatorial mathematics conference in China, held in Dalian in July 1983, when two Canadian mathematicians, Eric Mendelsohn and John Adrian Bondy, who were the referees of Lu's papers, arrived and asked for Lu Jiaxi, one of the organizers thought they were looking for the President of the Chinese Academy of Sciences, who had the same-sounding name. It was the first time the Chinese combinatorialists got to know him. Upon hearing Lu present his work in a session, Wu Lisheng of Soochow University recommended that he give a talk on it at the closing ceremony. After his talk, he received a unanimous accolade. In August, he took part in a combinatorics workshop in Hefei as a helper and gave a talk there.
Although he had gained recognition from scholars in his field, he was still living in poverty. School leaders did not appreciate his engagement in research, seeing it as a deviation from his work duties. They even assigned him more duties such as timekeeping on sports day to keep him occupied. In a school general meeting, the principal reprimanded him, saying, "We are a middle school. Someone wants to be a scientist, he may as well be transferred to the Academy of Sciences." When he was finally invited to mathematical conferences in China, his middle school refused to support his travel fee, saying that according to the rules allowances were provided only for teaching related activities. He had to borrow from his friends to attend the conferences.
The conferences brought him into the mathematical community. He was invited as a speaker to the fourth national conference of the Chinese Mathematical Society in late October. Several universities in China wanted to offer him positions, and he decided to go to South China Normal University. The Canadian mathematicians were planning to invite him for a visit to the University of Toronto.
After attending the national conference of the Chinese Mathematical Society in Wuhan, which ended on October 27, he hurried back to Baotou by train in order not to miss his classes. After stopping briefly at Beijing for the libraries, he arrived home at about 6 in the evening of Sunday, October 30, 1983. He told his wife joyfully about the praises he received in the last few months and his future research plan. At about 1 am that night, he suffered from a sudden heart attack in his sleep and died. Although his wife was a doctor, she did not have any equipment to save him, not even a telephone at home to call for help. He was survived by his wife Zhang Shuqin () and two daughters.
Only two days before his death, his middle school received a letter from the President of University of Toronto David Strangway written on September 30. In the letter, he asked the school principal for permission to transfer Lu to a university for the development of Chinese mathematics. On November 23, David Strangway sent a letter of condolences to Lu's middle school and family, in which he said that people would greatly miss "Prof. Lu Jiaxi" for his knowledge and contributions.
In January 1984, an editor of Mathematical Reviews sent Lu an invitation letter to be a reviewer, not knowing of his death.
After his death, the Inner Mongolia government helped his family repay their debts and honored him for his achievements. On the first anniversary of his death, a memorial gathering was held by the government officials at Baotou First Workers' Cultural Palace, and the Chairman of the Autonomous Region issued a document titled "Learn from Comrade Lu Jiaxi" (). He was given Special Class Award in the first Inner Mongolia Autonomous Region Science and Technology Progress Award in 1985.
An investigation on his research was carried out by a number of Chinese mathematicians. Although his 1961 and 1963 papers were lost, they found the manuscript of his 1965 paper and confirmed that it was the first paper to solve the generalized Kirkman's schoolgirl problem completely. In fact, according to Zhu Lie, it contained an asymptotically stronger result than that in the paper by Ray-Chaudhuri and Wilson.
Eric Mendelsohn praised Lu's large set theorem as one of the most significant achievements in the field in the past 20 years. He wrote an article on Lu's work, in which he regretted that "Lu Jia-xi should have had a distinguished career as a mathematician. ... He, however, spent too many years as a high school teacher with virtually no time to do research and virtually no contact with the research community".
The seventh paper in the series on disjoint Steiner triple systems treating the cases for the last six values in the theorem, which he had announced to have solved in the Dalian combinatorial mathematics conference, was left unfinished as a 24-page manuscript, with an outline and a few results. (Actually, once the cases for three of the values were proved, the cases for the other three values would follow immediately from previous results.) A few Chinese mathematicians tried to use the procedure sketched in the manuscript or other methods to solve the cases without success. The very last part of the theorem the proof of which Lu had not finished writing down before his death was finally completed by Luc Teirlinck in 1989. Although Teirlinck's proof did not follow the outline in the manuscript, it nevertheless made use of the combinatorial structures that Lu had constructed.
Lu Jiaxi was awarded posthumously in 1987 the First Class Award of the State Natural Science Award, then the highest honor in science in China, for his work on large sets of disjoint Steiner triple systems.
Bibliography
Published papers
[An English translation: ]
Some unpublished papers
See the reference for a more complete list.
Book
This book contains his published papers and his 1965 paper, with the two papers originally in Chinese translated into English.
References
1935 births
1983 deaths
20th-century Chinese mathematicians
Amateur mathematicians
Combinatorialists
Mathematicians from Shanghai
Northeast Normal University alumni | Lu Jiaxi (mathematician) | [
"Mathematics"
] | 2,726 | [
"Combinatorialists",
"Combinatorics"
] |
46,884,197 | https://en.wikipedia.org/wiki/Chaotic%20rotation | Chaotic rotation involves the irregular and unpredictable rotation of an astronomical body. Unlike Earth's rotation, a chaotic rotation may not have a fixed axis or period. Because of the conservation of angular momentum, chaotic rotation is not seen in objects that are spherically symmetric or well isolated from gravitational interaction, but is the result of the interactions within a system of orbiting bodies, similar to those associated with orbital resonance.
Examples of chaotic rotation include Hyperion, a moon of Saturn, which rotates so unpredictably that the Cassini probe could not be reliably scheduled to pass by unexplored regions, and Pluto's moons Nix, Hydra, and possibly Styx and Kerberos, as well as Neptune's Nereid. According to Mark R. Showalter, author of a 2015 study, "Nix can flip its entire pole. It could actually be possible to spend a day on Nix in which the sun rises in the east and sets in the north. It is almost random-looking in the way it rotates." Another example is that of galaxies; from careful observation by the Keck and Hubble telescopes of hundreds of galaxies, a trend was discovered that suggests galaxies such as our own Milky Way used to have a very chaotic rotation, with planetary bodies and stars rotating randomly. New evidence suggests that our galaxy and others have settled into an orderly, disk-like rotation over the past 8 billion years and that other galaxies are slowly following suit over time.
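A minimal numerical sketch of the planar spin-orbit equation often used to illustrate such tumbling follows; the eccentricity and asphericity below are assumed, Hyperion-like round numbers, with units chosen so that the semi-major axis and mean motion equal 1. Two nearly identical initial orientations diverge, the hallmark of chaos.

```python
import numpy as np
from scipy.integrate import solve_ivp

e = 0.1       # orbital eccentricity (assumed)
asym = 0.26   # (B - A) / C, out-of-roundness of the body (assumed)

def kepler(M, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def rhs(t, y):
    """theta'' = -(3/2) * asym * (a/r)^3 * sin(2*(theta - f)) on a fixed Kepler orbit."""
    theta, omega = y
    E = kepler(t % (2.0 * np.pi))
    r = 1.0 - e * np.cos(E)                      # orbital radius (a = 1)
    f = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                         np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    return [omega, -1.5 * asym * np.sin(2.0 * (theta - f)) / r**3]

# Two nearby initial orientations in the chaotic zone separate rapidly.
sol_a = solve_ivp(rhs, (0, 200), [0.0, 1.2], rtol=1e-10, max_step=0.1)
sol_b = solve_ivp(rhs, (0, 200), [1e-6, 1.2], rtol=1e-10, max_step=0.1)
print("final orientation difference:", abs(sol_a.y[0, -1] - sol_b.y[0, -1]))
```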
See also
List of orbits
References
Astrophysics
Rotation in three dimensions
Chaotic maps | Chaotic rotation | [
"Physics",
"Astronomy",
"Mathematics"
] | 312 | [
"Functions and mappings",
"Mathematical objects",
"Astrophysics",
"Mathematical relations",
"Chaotic maps",
"Astronomical sub-disciplines",
"Dynamical systems"
] |
46,885,960 | https://en.wikipedia.org/wiki/Economics%20of%20networks | Economics of networks is a discipline in the fields of economics and network sciences. It is primarily concerned with the understanding of economic phenomena by using network concepts and the tools of network science. Prominent authors in the field include Sanjeev Goyal, Matthew O. Jackson, and Rachel Kranton.
This term should not be confused with network economics or network externality.
Models of networked markets
The concept of networks enables a better understanding of the functioning of markets. On the border of network science and market theory, several models have emerged to explain different aspects of markets.
Exchange theory
Exchange theory explains how economic transactions, trade in favor, communication of information, or other exchanges are affected by the structure of the relationships among the involved participants. The main idea is that the act of exchange is influenced by the agents' opportunities and their environment. For example, the position of a given agent in the network can endow them with power in the auctions and deals they make with their partners.
Bilateral Trading Models
As part of exchange theory, bilateral trading models consider sellers and buyers. These models use game-theoretic models of bargaining in networks to help predict the behavior of agents depending on the type of network. The outcome of transactions can be determined by, for instance, the number of sellers a buyer is connected to, or vice versa (Corominas-Bosch model). Another case occurs when the agents agree on a transaction through an auction and their decision-making during the auction depends on the link structure. Kranton and Minehart concluded that if markets were considered networks, it would enable sellers to pool uncertainty in demand. Building links is costly, however, due to trade-offs not all links are necessary for the network, resulting in a sparse, efficiency-enhancing network.
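A toy sketch of this idea follows; it is not the full Corominas-Bosch analysis, whose graph decomposition is more refined, but it captures the simplest comparative static: within each connected piece of a bipartite buyer-seller network, the short side of the market captures the surplus, and a balanced piece splits it evenly. The network below is made up for illustration.

```python
import networkx as nx

G = nx.Graph()
sellers = {"s1", "s2"}
buyers = {"b1", "b2", "b3"}
G.add_edges_from([("s1", "b1"), ("s1", "b2"), ("s2", "b2"), ("s2", "b3")])

for component in nx.connected_components(G):
    n_sellers = len(component & sellers)
    n_buyers = len(component & buyers)
    if n_sellers < n_buyers:
        outcome = "sellers capture the surplus (buyers compete)"
    elif n_sellers > n_buyers:
        outcome = "buyers capture the surplus (sellers compete)"
    else:
        outcome = "even split (balanced bargaining power)"
    print(sorted(component), "->", outcome)
```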
Informal exchange
The study of networks in economics started before the development of network science. Károly Polányi, Claude Lévi-Strauss, and Bronislaw Malinowski studied tribes where complicated gift exchange mechanisms constructed networks between groups, such as families or islands. Although modern trade systems differ fundamentally, such systems based on reciprocity can still survive, and reciprocity-based or personalized exchange deals persist even when a market would be more efficient. According to Kranton, informal exchange can exist in networks if transactions are more reciprocal than market-based. In this case, market exchange is hard to find and is associated with high search costs, therefore yielding low utility. Personalized exchange agreements ensure the possibility of long-term agreements.
Scale-free property and economics
Recent studies have tried to examine the deeper connection between socio-economic factors and phenomena and the scale-free property. They found that business networks have the scale-free property and that mergers among companies decrease the average separation between firms and increase cliquishness. In another research paper, scientists found that payment flows in an online payment system exhibit the scale-free property, a high clustering coefficient, and the small-world phenomenon, and that after the September 11 attacks the connectivity of the network reduced and the average path length increased. These results were found to be useful in order to understand how to overcome a possible contagion of similar disturbances in payment networks.
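As a hedged illustration of the kind of check such studies perform, the sketch below generates a synthetic preferential-attachment network, which is heavy-tailed by construction, and tabulates its degree distribution; empirical work on business or payment networks applies the same kind of analysis to real transaction data.

```python
import collections
import networkx as nx

# Barabasi-Albert preferential attachment produces a power-law degree tail.
G = nx.barabasi_albert_graph(n=10000, m=2, seed=42)
degree_counts = collections.Counter(d for _, d in G.degree())

# On a log-log plot a power law p(k) ~ k^(-gamma) appears roughly linear;
# here we just print the head of the distribution.
for k in sorted(degree_counts)[:8]:
    print(k, degree_counts[k])
```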
World trade web
World trade is generally highlighted as a typical example of large networks. The interconnectedness of the countries can have both positive and negative externalities. It has been shown that the world trade web exhibits scale-free properties, where the main hub is the United States. Eighteen out of the twenty-one developed countries that were analyzed showed synchronization in economic performance and cycles with the US during 1975-2000. The remaining three countries are exceptions. Austria’s performance correlates highly with that of Germany, while Germany and Japan took differing economic paths after World War II as a result of their unique situations. Despite the embeddedness in the global economy that Germany and Japan experienced, the unusual economic measures following Germany’s unification in 1992 and the Plaza Accord in 1985 (which appreciated the Japanese Yen), resulted in a different economic trajectory compared to the majority of developed countries. The importance of regional economic and political cooperation is also highlighted in the analysis.
See also
Critical mass (Sociodynamics)
Critical Mass (Book)
Network effect
References
Literature
External links
Economics of Networks, Duke University
Interdisciplinary subfields of economics
Network theory
Networks
Network science | Economics of networks | [
"Mathematics",
"Technology"
] | 873 | [
"Graph theory",
"Network theory",
"Computer science",
"Mathematical relations",
"Network science"
] |
46,891,994 | https://en.wikipedia.org/wiki/Demiregular%20tiling | In geometry, the demiregular tilings are a set of Euclidean tessellations made from 2 or more regular polygon faces. Different authors have listed different sets of tilings. A more systematic approach looking at symmetry orbits are the 2-uniform tilings of which there are 20. Some of the demiregular ones are actually 3-uniform tilings.
20 2-uniform tilings
Grünbaum and Shephard enumerated the full list of 20 2-uniform tilings in Tilings and Patterns (1987).
Ghyka's list (1946)
Ghyka lists 10 of them with 2 or 3 vertex types, calling them semiregular polymorph partitions.
Steinhaus's list (1969)
Steinhaus gives 5 examples of non-homogeneous tessellations of regular polygons beyond the 11 regular and semiregular ones. (All of them have 2 types of vertices, while one is 3-uniform.)
Critchlow's list (1970)
Critchlow identifies 14 demi-regular tessellations, with 7 being 2-uniform, and 7 being 3-uniform.
He codes letter names for the vertex types, with superscripts to distinguish face orders. He recognizes A, B, C, D, F, and J can't be a part of continuous coverings of the whole plane.
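Underlying all of these lists is the constraint that each vertex type must be geometrically realizable: the interior angles of the regular polygons meeting at a vertex must sum to exactly 360 degrees. The sketch below checks this for a few example types (chosen for illustration).

```python
from fractions import Fraction

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees, as an exact fraction."""
    return Fraction(180 * (n - 2), n)

def vertex_type_ok(polygons):
    """A vertex type is realizable only if its angles sum to exactly 360."""
    return sum(interior_angle(n) for n in polygons) == 360

print(vertex_type_ok([3, 3, 4, 12]))   # True: 60 + 60 + 90 + 150 = 360
print(vertex_type_ok([3, 3, 6, 6]))    # True: 60 + 60 + 120 + 120 = 360
print(vertex_type_ok([3, 3, 4, 4]))    # False: angles sum to only 300
```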
References
Ghyka, M. The Geometry of Art and Life, (1946), 2nd edition, New York: Dover, 1977.
Keith Critchlow, Order in Space: A design source book, 1970, pp. 62–67
pp. 35–43
Steinhaus, H. Mathematical Snapshots 3rd ed, (1969), Oxford University Press, and (1999) New York: Dover
p. 65
In Search of Demiregular Tilings, Helmer Aslaksen
External links
n-uniform tilings Brian Galebach
Tessellation
Semiregular tilings | Demiregular tiling | [
"Physics",
"Mathematics"
] | 402 | [
"Semiregular tilings",
"Tessellation",
"Euclidean plane geometry",
"Planes (geometry)",
"Symmetry"
] |
53,781,296 | https://en.wikipedia.org/wiki/Aspergillus%20germanicus | Aspergillus germanicus is a species of fungus in the genus Aspergillus which has been isolated from indoor air in Germany. It is from the Usti section.
Growth and morphology
A. germanicus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
Further reading
germanicus
Fungi described in 2011
Fungus species | Aspergillus germanicus | [
"Biology"
] | 104 | [
"Fungi",
"Fungus species"
] |
53,781,950 | https://en.wikipedia.org/wiki/Star%20Athletica%2C%20LLC%20v.%20Varsity%20Brands%2C%20Inc. | Star Athletica, LLC v. Varsity Brands, Inc., 580 U.S. 405 (2017), was a U.S. Supreme Court case in which the court decided under what circumstances aesthetic elements of "useful articles" can be restricted by copyright law. The Court created a two-prong "separability" test, granting copyrightability based on separate identification and independent existence; the aesthetic elements must be identifiable as art if mentally separated from the article's practical use, and must qualify as copyrightable pictorial, graphic, or sculptural works if expressed in any medium.
The case was a dispute between two clothing manufacturers, Star Athletica and Varsity Brands. Star Athletica began creating cheerleading uniforms with stripes, zigzags, and chevron insignia similar to those made by a Varsity subsidiary, but at a lower price. Varsity sued Star Athletica for copyright infringement, and Star Athletica said that the clothing designs were uncopyrightable because their aesthetic designs were tied closely to (and guided by) their utilitarian purpose as uniforms. The court rejected this argument with a close reading of the statute and established that the clothing designs, as aesthetic elements of a useful article of clothing, could be copyrightable. It declined to hear Star Athletica's follow-up question about whether Varsity's designs were original enough to be copyrightable, so that part of the case remained unaddressed and Varsity's copyright registrations stood.
The court's conclusion that aesthetic elements of useful articles (and, thereby, clothing-design elements) could be copyrighted intrigued fashion designers and intellectual property scholars. Some were pleased with the decision because they saw extending copyright to clothes as parity with other creative industries which had had copyrights for much longer. Others denounced the court's opinion because of ambiguities in how to enforce the new rules and because of its potential to end fashion trends in generic clothing.
Background
Fashion was uncopyrightable
Clothing designs were originally not subject to copyright law ("uncopyrightable") in the United States. In 1941, the court heard Fashion Originators' Guild of America v. FTC. This case considered the fashion industry's practice of boycotting the sale of its "high fashion" works at places which would sell knock-offs made by other companies for lower prices, known as "style piracy". The court ruled against the Guild, saying that its practice of attempting to create a monopoly outside the copyright system suppressed competition and violated the Sherman Antitrust Act. Outside fashion, Mazer v. Stein established in 1954 that an artistic statue created to adorn a lamp base could be copyrightable separately from the lamp under expansions of the Copyright Act of 1909; the statue's mass production with the lamp did not invalidate that.
Another barrier to copyrightability in the United States is a vague threshold of originality which must be met to be eligible for an intellectual-property monopoly like a copyright or patent. In 1964's Sears, Roebuck & Co. v. Stiffel Co., the court upheld a lower-court ruling that Stiffel's popular lamp design was not original enough to warrant a patent, rescinding that restriction and passing the design into the public domain. The court's opinion indicated that the same logic would apply to an inappropriate copyright.
In the Copyright Act of 1976, Congress changed the copyright law to allow copyrighting aesthetic features of "useful articles" or "an article having an intrinsic utilitarian function that is not merely to portray the appearance of the article or to convey information." Congress intended to better incorporate the Mazer v. Stein ruling by doing so, clarifying the difference between the copyrightability of "applied art" and the traditional, lesser restriction of "industrial design" (the combination of features provided by design patents or trade dress). According to the Act, "pictorial, graphic, or sculptural features" of useful articles were copyrightable only if "separable" from the utilitarian aspects of the design and capable of existing independently of the article. This broad, definitional language led to about ten competing, inconsistent legal tests for that separability, a state of affairs which was criticized for appearing to require judges to be art critics.
Because clothes have both aesthetic and utilitarian features in their design, they fall into this useful-article category; therefore, the shapes and cuts of clothing are not copyrightable. Designs placed on clothing were opened up to the possibility of copyrightability, subject to those tests. The law was construed to mean that copyrighted two-dimensional designs could be placed on clothing and fabric-pattern sheets could be copyrighted before being cut to make clothing, but an article of clothing's overall color scheme and design could not be copyrighted because it was not capable of existing independently of the final useful article. Some fashion designers bristled under the rules, wondering why other creative industries like films or music were allowed to restrict access to their products with copyright and they were not. Others interpreted fashion's successes as an industry thriving in the absence of copyright, perhaps in part because of that. Members of Congress introduced several bills to remove the separability requirement from the law, but none were signed into law.
As alternatives, fashion designers turned to other forms of intellectual property: design patents and trade dress, an aspect of trademark. These generally provided designers causes of action to sue suspected infringers. However, they were critical of the hurdles necessary to acquire these. The process to acquire a design patent could last longer than the trend on which the designer wanted to capitalize. Trade dress required the public to recognize a secondary meaning associating the design with its origin, and was subject to contradictory rulings from the Supreme Court. It was also vulnerable to dilution if courts determined that it was not being policed sufficiently. Defenders of the slow design-patent process said that the design patent's hurdle benefited society as a whole because the extra time prevented ill-advised patents which would disrupt innovation. Extending trademark to fashion had its critics, who argued that the court was out of line when it applied the trade-dress doctrine to fashion after Congress declined to extend trade dress to it in the past. Apart from intellectual property, there were also remedies under laws banning the sale of counterfeits and post-sale protection confusion.
Varsity Brands
Varsity Brands was the parent company of Varsity Spirit, which had become the largest cheerleading and sports-uniform manufacturer in the world by the end of the 2000s. Because of the law, Varsity could not register copyrights for its cheerleading-uniform designs as clothing. Instead, Varsity applied for copyrights on drawings and photographs of those designs as "two-dimensional artwork" or "fabric design (artwork)." The design in the images would then be applied to the clothing with sewing or sublimation, a process where designs are printed on paper, placed on the fabric, and heated so the ink sinks in. After rejections by the Copyright Office, Varsity described the uniforms in extremely specific detail to make the registration appear limited and improve its registration chances. The Copyright Office approved over 200 of these copyrights with meticulous descriptions like "has a central field of black bordered at the bottom by a gray/white/black multistripe forming a shallow 'vee' of which the left-hand leg is horizontal, while the right-hand leg stretches 'northeast' at approximately a forty-five degree angle." Varsity frequently filed lawsuits alleging infringement with accusations of general copying to halt other companies from merchandising competing uniforms. The competitors regarded the lawsuits as frivolous because the claimed designs were so simple.
Star Athletica lawsuit
The Liebe Company founded Star Athletica as a subsidiary in January 2010. Varsity Brands had cancelled an agreement with The Liebe Company's sports-lettering subsidiary, and Varsity accused The Liebe company of founding Star Athletica to retaliate by leveraging former Varsity employees' knowledge of Varsity designs. Later that year, Varsity Brands sued Star Athletica for infringing five of its copyrighted designs for cheerleading uniforms. The Star Athletica designs were not exactly identical (physically or graphically), but Varsity's general description of allegedly-copied elements in court filings ("the lines, stripes, coloring, angles, V's [or chevrons], and shapes and the arrangement and placement of those elements") suited both designs and the case moved forward. Varsity also sued for trademark infringement under the Lanham Act and Star Athletica counter-sued Varsity under the Sherman Antitrust Act for allegedly monopolizing the cheerleading industry, but those claims were dismissed.
In 2014, the United States District Court for the Western District of Tennessee ruled in Star Athletica's favor on the grounds that the designs were not eligible for copyright restriction. According to Judge Robert Hardy Cleland, a design without distinctive marks (like chevrons and zigzags) would not be identifiable as a cheerleading uniform, so the designs were not separately identifiable. They were not conceptually separable because the marks, outside the context of the clothing, would still have evoked the idea of a cheerleading uniform.
The district court's decision was reversed on appeal by the United States Court of Appeals for the Sixth Circuit. Judge Karen Nelson Moore's majority opinion said that the district court should have deferred to the fact that the Copyright Office's trained personnel had granted the copyright registrations. On the questions of the case, Moore evaluated the competing separability tests and created a new five-step test for the Circuit Court analysis. The court found that the designs were copyrightable because the clothes were usable as athletic wear and removing the designs did not affect their utility. Moore said that the design could be separately identifiable because it could be held "side by side" with a blank dress and there would be no utilitarian difference; it could exist independently, because individual aspects (such as chevrons) could appear in designs of other clothing items. She also said that a ruling in favor of Star Athletica would have rendered all paintings uncopyrightable because they decorated the rooms in which they hung. Judge David McKeague dissented, disagreeing about the application of one of the test's steps. The third step asked the court to determine the useful article's "utilitarian aspects." Instead of the majority's more-general assessment of athletic wear, McKeague would have defined the uniforms as clothing the body "in an attractive way for a special occasion" and "identify[ing] the wearer as a cheerleader;" their aesthetic features, therefore, could not be separated from the utilitarian.
Star Athletica petitioned to be heard by the United States Supreme Court in January 2016. On May 2 of that year, the court granted certiorari "to resolve widespread disagreement over the proper test for implementing § 101's separate-identification and independent-existence requirements." Star Athletica also wanted the court to decide whether Varsity's designs were sufficiently original to be copyrighted, but the court declined.
Amicus curiae briefs
The case attracted the attention of interest groups which filed fifteen amicus curiae briefs. Among Star Athletica's advocates was Public Knowledge, which helped draft a brief representing the views of costuming groups (particularly cosplayers of the Royal Manticoran Navy and the International Costuming Guild) which were concerned that a ruling in Varsity's favor could endanger their craft. Much of cosplaying involved recreating designs recognizable from pop culture. When the legality of creating costumes based on pop culture had been questioned, the Copyright Office decided that costumes were uncopyrightable, useful articles for the practical purpose of covering the body; there was debate over this rationale. Cosplayers also cited fair use to justify their hobby. The Royal Manticoran Navy filed a separate supporting brief in Star Athletica which emphasized fair use in costuming, voicing a concern that allowing clothing-design copyrights would further strengthen Varsity Brands's position in the cheerleading industry, one commonly described as monopolistic because of its 80-percent market share.
Public Knowledge was involved in a brief from Shapeways, the Open Source Hardware Association, Formlabs and the Organization for Transformative Works, who were concerned that copyright restriction would impact 3D printing by making it difficult to share designs and by creating a fiscal incentive for media companies to crack down on derivative works. Another group of supporters ("Intellectual Property Professors") objected to broadly expanding copyright to useful-article designs because they considered design patents sufficient. Citing examples of what Congress considered copyrightable in drafting the 1976 law, they argued that extending copyright to uniform designs would unduly stretch Congress's intent to copyright minor detailing on industrial designs, such as floral engravings on silverware, carvings on the backs of chairs, or printing on T-shirts.
Varsity was endorsed by the Council of Fashion Designers of America, which believed that extending copyright to clothing designs was critical to prevent exploitative copyists and preserve the United States' rapid rate of expansion in the worldwide fashion industry: $370 billion in domestic consumer spending and 1.8 million jobs. The Fashion Law Institute shared these interests, saying that a decision to copyright clothing designs would be a proper reading of the Mazer v. Stein ruling's incorporation into the 1976 Copyright Act. Both criticized the "fast fashion" industry for duplicating expensive designs with increasingly-cheap 3D printing technology without payment to their original creators. The Institute cited "geek fashion," including cosplay, as a burgeoning part of the industry.
The United States government also supported Varsity. The government said that the question of a proper separability analysis was unnecessary because, in creating the designs as drawings, Varsity had received a copyright for them and reserved the ability to reproduce that design however it chose in any medium. It pointed to a concession from Star Athletica that if Varsity (hypothetically) controlled The Starry Night, the company would be able to restrict the painting's printing on dresses. Star Athletica had conceded this because it was an abstract painting (not a dress design), but the government said that the painting would cover the entire dress surface and was no different from the Varsity designs. It also said that, in applying the requested conceptual-separability analysis, what mattered was whether a uniform stripped of the design "remain[ed] similarly useful" compared to the original; a blank dress was equivalent to a designed one, so the design was copyrightable.
Oral arguments
Oral arguments began on October 31, 2016, with Star Athletica represented by John J. Bursch and Varsity by William M. Jay. Eric Feigin also spoke on Varsity's behalf, representing the United States as an amicus curiae.
Star Athletica's lawyers gave the court examples of the graphic designs' utility. The designs' colors and shapes were arranged to create optical effects such as the Müller-Lyer illusion, changing a cheerleader's appearance to make them look taller, thinner, and generally more appealing. The company considered this distinct from applying a pre-existing two-dimensional image to the uniform because the lines required for the illusions needed to be properly located on a properly-fitted uniform; people often made utilitarian decisions about their clothing to make themselves look better. Those designs on another object, such as a lunchbox, would not serve that utilitarian purpose. Justice Ruth Bader Ginsburg rejected that line of argument, citing the fact that the examples presented in evidence were two-dimensional works. In her view, it did not matter that the submitted designs were "superimposed" on three-dimensional uniforms; they were submitted in two-dimensional images separated from the uniforms and copyrightable. Both parties agreed that the physical, three-dimensional uniform's cut and how it physically framed the body were not copyrightable, and they were interested in the colors and aesthetic designs as applied to the useful article. Ginsburg was uncomfortable with the vagueness she perceived in Star Athletica wanting the court to decide when a given two-dimensional design "is what makes an article utilitarian" when that design could conceivably be placed on anything. Chief Justice John Roberts agreed, adding that the designs did more than sit on the body; they sent a "particular message" (that the wearer was "a member of a cheerleading squad"), and Roberts leaned toward thinking of them as copyrightable.
The court also considered more abstract aspects of the case. For example, it was unclear how a decision in Varsity's favor might affect military-style camouflage patterns, and whether they could be restricted if fashion designs were copyrightable. Varsity supported the idea of camouflage copyrights, although Justice Elena Kagan pointed out the clearly utilitarian function of camouflage patterns: concealment. On the industry side, women's fashion was a concern worth hundreds of billions of dollars worldwide. Justice Stephen Breyer speculated that the price of dresses could conceivably double if copyright terms were applied to designs, and knock-off brands could not compete at lower prices. Breyer and Justice Sonia Sotomayor questioned Varsity about possible monopolization; a uniform design could become part of a school's identity, compelling it to buy exclusively from Varsity for a century of copyright restriction. Breyer was concerned that designers or lawyers might sue over the design of any dress or suit based on generic drawings. Sotomayor, who once represented Fendi in cases brought against knock-offs, wondered if a decision for Varsity would destroy those knock-off brands, and was unsure if that would be a bad thing. Justice Anthony Kennedy wondered if it was "the domain of copyright to [restrict] the way people present themselves to the world." Breyer received media attention for saying of the purpose of fashion, "The clothes on the hanger do nothing; the clothes on the woman do everything," a sentiment Kagan thought was "so romantic."
Opinion of the court
Majority opinion
Justice Clarence Thomas delivered the majority opinion, which was joined by Chief Justice John Roberts and Justices Alito, Sotomayor, and Kagan. The court defined its task as "whether the lines, chevrons, and colorful shapes appearing on the surface of [Varsity Brands'] cheerleading uniforms are eligible for copyright restriction as separable features of the design of those cheerleading uniforms", and did not consider whether the designs in the case met copyright's threshold of originality. Thomas rejected arguments from Varsity and the United States that separability analysis was unneeded, and did away with all previous lower-court tests. The opinion provided a two-part test, based on the 1976 statute and the Mazer v. Stein decision: a feature incorporated into the design of a useful article is copyrightable only if it (1) can be perceived as a two- or three-dimensional work of art separate from the useful article, and (2) would qualify as a protectable pictorial, graphic, or sculptural work, either on its own or fixed in some other tangible medium of expression, if imagined separately from the useful article.
After applying this test to the cheerleading uniforms, the court ruled in Varsity's favor that the designs were separable from the useful article and could be copyrighted. The separability analysis started with an admittedly-permissive first requirement, describing the designs as separately-identifiable "pictorial, graphic, or sculptural works." The design needed to exist independently, and Thomas concluded that it did when it appeared in other media (such as the two-dimensional drawings submitted to the Copyright Office). In his view, this conceptual separation would not necessarily recreate the useful dress because the design's elements (like the chevrons) could appear on items in different contexts; the graphic design itself did not make a garment a cheerleader uniform, even if it appeared on a different kind of clothing. This analysis moved the consideration away from whether the item left after separation was useful, and to whether or not the design itself was useful. A feature incapable of separation was a utilitarian feature, Thomas said.
Addressing concerns that this would grant control over more than the design, Thomas said that the separated aesthetic element could not be a useful article; someone could not copyright a design and then exert control over its physical representation. A drawing (or small model) of a car, copyrighted, could not restrict production of a functional automobile with the same body by a competitor. The car drawing would not suppress a rival car manufacturer in the automobile market, so Varsity's uniform drawing would not suppress Star Athletica in the uniform market because their uniforms could have the same cut.
The final section of the opinion discussed objections to the decision raised by the parties in their briefs. There were no requirements that there be an equivalent useful article remaining after the design element was conceptually removed or that the removed element be "solely artistic." Thomas said that discussions of the blank dress were unnecessary because the statute did not require the remaining work to be useful (or "similarly useful", as the government had put it); all that mattered was whether the separated element was a pictorial, graphic, or sculptural work. He said that adopting this requirement would have overruled Mazer; the statue in that case was considered "applied art" because the 1909 act had removed an earlier distinction between aesthetic and useful works of art. That distinction was not reinstated by the 1976 act, so there was no distinguishing between "conceptual" and "physical" separability.
Thomas rejected Star Athletica's additional, "objective" considerations from preexisting tests: that a work be identifiable as an artistic contribution from a designer independent of its utilitarian purpose, and that it be marketable without its utilitarian function. These were not within the statute and Thomas dismissed them, saying that all that mattered was consumer perception, not the designer's intent. About Congress's reluctance to apply copyright to useful articles in general, Thomas said that congressional inaction was not usually a significant judicial argument. He found much of the discussion moot; copyright could not restrict the cut of the design, and copyright coverage did not prevent design patenting.
Thomas rejected the arguments of Justice Breyer's dissent and Star Athletica's similar contention that the designs were uncopyrightable because they would have the same outline as the useful article. He analogized the uniform's design to a mural on a curved dome, saying that the contour of the dome would not make the mural uncopyrightable. He thought that Breyer's traditional view that a preexisting two-dimensional artwork applied to a portion of the clothing could be copyrighted was contradictory; the statute would provide copyright restriction to designs which covered part of the clothing surface, but not to designs that covered all of it. Ginsburg's concurrence agreed on the second point in its notes; portions of Varsity's claimed uniform designs appeared on other merchandise, such as T-shirts.
Concurring opinion
Justice Ginsburg wrote an opinion, concurring that the cheerleading uniform designs were separable without joining in the majority's reasoning, and emphasized that the copyrights were not registered for the useful articles of clothing; the registrations were for pictorial and graphic works which were then reproduced on the clothing. Because the Copyright Act of 1976 provided copyright claimants "the right to reproduce the work in or on any kind of article, whether useful or otherwise," the claimant of a pictorial, graphic, or sculptural work's copyright could restrict others from reproducing the work's elements on their useful articles. According to Ginsburg, there was no need for the court to address the separability-analysis issue. She attached to her decision several pages of applications submitted by Varsity Brands to the Copyright Office, pointing to their claimed types of work: "2-dimensional artwork" or "fabric design (artwork)." In her notes, Ginsburg said that she did not take a stand about whether or not Varsity's designs were original enough for copyright; she referred to Feist Publications, Inc., v. Rural Telephone Service Co., quoting its conclusion that "the requisite level of creativity [for copyrightability] is extremely low; even a slight amount will suffice."
Dissent
Justice Breyer dissented, joined by Justice Kennedy. While Breyer agreed with much of the majority's reasoning, he disagreed with the framing and application of the majority's test and concluded that the design was not separable from the uniform as a useful article. Breyer also criticized what he considered vagueness in the majority's test. He thought that under it, "virtually any industrial design" could be considered separable as soon as it was thought of in terms of art, whether giving it a picture frame or merely calling an object "art" (like a Marcel Duchamp series). Breyer's approach to the problem was to interpret what "identified separately" meant in the context of the statute. His reading was that to be separable, the design features needed to be physically separable from the article (leaving the utilitarian object functional) or the design features needed to be conceivably separable without conjuring a picture of the utilitarian object in a person's mind. He returned to Mazer v. Stein and applied his reasoning to two lamps, one with a Siamese cat statuette for a pole and one with a brass-rod pole and a cat statuette attached to its base. On the base, the cat could be physically separated and was copyrightable as a figurine. When the cat was the pole, it could not be physically separated; it could be conceptually separated from the context of the lamp without conjuring the idea of a lamp, however, and was copyrightable as a figurine. Applying his version of the test to the cheerleader uniforms, he found that the design was not physically separable. Picturing the design separately would reveal a cheerleader uniform "coextensive with that design and cut", so the design and useful article were not conceptually separable either.
Breyer then considered shoes painted by Vincent van Gogh and turned to the examples of Congress's intended targets of copyright in the amicus curiae brief filed by the Intellectual Property Professors. He found that copyrighting those embellishments was not the same as copyrighting an entire cheerleading uniform design; those examples were conceptually separable, while the uniform design was not. Breyer reiterated that van Gogh could certainly have received a copyright to prevent people from reproducing his painting, but the request in Star Athletica was an injunction against reproducing uniforms; he felt that this decision would be equivalent to giving van Gogh a design copyright which could prevent others from producing those shoes. He accused Varsity Brands of trying to acquire copyrights to "prevent its competitors from making useful three-dimensional cheerleader uniforms by submitting plainly unoriginal chevrons and stripes as cut and arranged on a useful article."
Breyer studied the state of the fashion industry at the time of the decision. Recent Congresses had rejected 70 bills to extend copyright to cover designs on useful articles, which he interpreted as an unwillingness of lawmakers to enact the change. He cited the metrics provided by the Varsity amici Council of Fashion Designers of America to show that the fashion industry was successful without copyright and quoted warnings from Thomas Jefferson and Thomas Babington Macaulay against wantonly expanding copyright monopolies. Seeing no pressing need to extend the restriction, he did not want to overstep the bounds of the Constitution's Copyright Clause, especially when the available design patents afforded fifteen years of restriction and copyright could offer more than a century.
Subsequent developments
Immediate reactions
Varsity Brands's leadership and supporters were pleased by the decision. Varsity founder Jeff Webb said that it was a win for "the basic idea that designers everywhere can create excellent work and make investments in their future without fear of having it stolen or copied." Susan Scafidi, founder of the Fashion Law Institute, had been involved with the case from the district-court level and was sorry that it had to go all the way to the Supreme Court. However, she praised Thomas's decision as a maintenance of the status quo based on the copyrightability of fabric patterns. Although it was important to her because she believed that fashion designers deserved to restrict their designs with copyright, she did not think that it would change things for designers because it was based on the language of the preexisting statute.
On March 31, 2017, Puma sued Forever 21 for alleged violations of Puma's intellectual-property rights. Puma based the copyright-infringement portion of its case on the nine-day-old precedent, and said that Forever 21 shoes included copyrighted elements of similar Puma products. Forever 21, a supplier of knock-offs, had been sued for copyright infringement in the past; this was among the first times that a company argued that, in the case of Puma's Fenty Fur Slides, their "wide plush fur strap extending to the base of the sandal" was capable of being represented in another medium and was covered by copyright as separable from the shoe itself. Puma claimed "a casually knotted satin bow with pointed endings atop a satin-lined side strap that extends to the base of the sandal" was a copyrighted element on its Bow Slides. Forever 21 responded with a detailed motion to dismiss which said that the Fenty line resembled prior art. The companies settled in November 2018.
The United States Copyright Office, arbiter of copyright registration, updated its Compendium of rules for validating registrations with preliminary rules taking the Star Athletica developments into account. The report, published on September 29, 2017, said that useful articles and (specifically) clothing articles were not copyrightable. About two-dimensional visual designs applied to useful articles, the Compendium reduced its 2014 discussion of the copyrightability of designs of useful articles to one section in the 2017 guide which quoted Star Athletica's two-step separability test. A note indicated that the office was "developing updated guidance" on the matter for a future version of the report. The office released a draft of the new edition of the Compendium on March 15, 2019, including new material which addressed Star Athletica.
Case resolution
The case was remanded to the district court in Tennessee and, in August 2017, was settled out of court in favor of Varsity Brands (over Star Athletica's objection) by Star Athletica's insurance company. Star Athletica wanted to press a counter-claim after the Supreme Court's ruling that the uniform designs could be copyrightable, arguing that the Varsity designs at issue were too simple to be copyrightable. The settlement precluded that argument and closed the case with prejudice.
Legal analyses
Intellectual property attorneys were split about the opinion; some thought that it clarified the law, and others thought that it made the law more ambiguous. Clarity notwithstanding, many have noted that Star Athletica was an important case for the fashion industry because it overturned the prevailing wisdom that fashion designs were generally uncopyrightable. The effects of this shift in thought remain to be seen, however, as more designers apply for copyrights and awareness of the change grows. Commentators have speculated about negative effects on fashion trends (which involve some degree of copying basic styles among designers throughout the industry) and an anticipated increase in infringement lawsuits. Generic or "knock-off" clothing could cease to exist due to the restriction of the designer brands' designs, although designer brands were also accused of copying independent artists before the decision.
In its broad interpretation of the statute, the ruling did not make conclusive determinations about competition and copyright. Columbia Law School professor Ronald Mann analyzed the decision for SCOTUSblog, saying that the court's opinion did not address the minimal threshold of creativity required for copyright restriction under Feist v. Rural. Mann called Thomas's dismissal of the opposing arguments "half-hearted" and predicted that scholarly debate of the separability test's shift in copyright law would continue.
Professors Jeanne C. Fromer and Mark P. McKenna criticized the decision's ambiguity; the three major stages of litigation resulted in three different majority decisions on three different grounds, with more divergent opinions in the dissents and concurrence. The courts allowed Varsity to define extremely narrow copyright restrictions in the registration and then sue others (such as Star Athletica) with court filings that only described the designs generally, so Fromer and McKenna were concerned that this disconnect in requirements would lead to more controversial lawsuits (even outside the useful-article realm). A model car could be copyrighted as a sculpture, a drawing of that model could be copyrighted, and the claimant could use the features of either to file copyright claims. Which features of either were actually restricted was left up to debate, because the registration's description could diverge from a lawsuit. Fromer and McKenna said that it would be impossible to know what the copyright holder considered restricted before they described it in a lawsuit or before the second party began copying. In the absence of a description, they said that it was impossible to perform a separability analysis and determine if the feature was copyrightable before litigation began.
Expanded separability
The Harvard Law Review said that Star Athletica was an important step towards removing subjectivity from the tests in this area of the law, removing the framing problem which changed the outcome of the analysis based on the definition of article usefulness. The decision may not fully resolve conflicting lower-court rulings, however, because its majority and dissent were based on close readings of the statute without enough differentiating examples in the majority to discredit the alternative view. Potential contradictions in Thomas's majority opinion (assertions that surface designs are "inherently separable" from useful articles without being useful articles themselves, and that other clothing with the design does not conjure the original useful article) may muddy the waters. According to the Review, "These dicta imply that the independently existing work can have the shape and look of the article, evoke the same concepts, and even perform the same function and still be separable" (making it copyrightable).
Silvertop Assocs., Inc. v. Kangaroo Mfg., Inc., a 2018 district court case, ruled that a banana costume's physical features were separable from the costume and copyrightable because they could be painted on a canvas. It was upheld on appeal the following year. In February 2019, however, the Copyright Office's review board used Star Athletica as a justification for refusing to register the design of a work glove. The office determined that it failed the second step of the test because panels on the back of the hand and other features of the glove were "apparently deliberately engineered and repeatedly tested to qualify with ANSI cut-level standards while allowing finger and hand movement." The office determined that the design was not sufficiently original to be copyrightable because its "common and familiar uncopyrightable shapes" conformed to the human hand "in the most predictable manner."
In 2019, the office's decision to register the Adidas Yeezy Boost 350 shoe design was considered a significant expansion of the copyrightability of useful articles in the wake of Star Athletica. The Copyright Office rejected the designs twice, followed by requests for reconsideration by Adidas. The 2017 refusal, immediately after Star Athletica, was because the shoes were a useful article (a common response from the office then). The 2018 refusal was because the Copyright Office determined that the shoes' design did not meet the originality requirement. On its third consideration, the office determined that the two- and three-dimensional designs could be perceived separately from the shoes and their design's individually-uncopyrightable elements combined to overcome the originality requirement. The Yeezy's color design overcoming the originality requirement may spur fashion companies to pursue copyright more aggressively for designs more complex than basic shape variations. The Yeezy designs had already been restricted by the design-patent system, so the Copyright Office's decision was also read to establish that copyright was an acceptable addition to design patents for useful articles in general and clothing in particular. This was an outcome the Intellectual Property Professors and Justice Breyer feared while Star Athletica was under consideration, although Justice Thomas said that they were "not mutually exclusive" according to Mazer v. Stein.
Other analyses
For cosplayers, the decision made lawsuits by copyright holders and official licensees a more realistic possibility. Different parts of costumes may be subject to different levels of restriction, where fair use and utility are not clear; the shape of a superhero's mask could be considered more ornamental than useful. Cosplay props which are not clothing might be even more easily restricted because they are not a necessary element of a costume's function as clothing. Unauthorized replicas of these items may involve more legal hazard than before Star Athletica. Meredith Rose, policy counsel of Public Knowledge and involved in the group's cosplay amicus brief, later wrote for the group that fair-use rights could still apply to cosplay. Rose agreed that ornamental designs and props could be restricted more easily because "when copyright law looks at props, cosplay armor, and accessories, it sees sculptures", but said that cosplay was not going anywhere because the companies behind pop culture had embraced and encouraged it.
Star Athletica caused uncertainty in the 3D-printing community; 3D printing was a relatively-new field, and the rules could have outsized effects on the development of its cultural norms. Shapeways, one of the amici, criticized the court's test because it prioritized artistic considerations over the utility of an item. In its view, this made the test easier but inappropriately expanded copyright in ways which would impact its interests. The company said that a better test would have first considered an item's function, removing parts which accomplished that task from copyright consideration.
Sara Benson, a lawyer who agreed with the decision, wondered if the court's rejection of a copyrightability test which valued artistic effort on the designer's part may harm the perception of a designer's value to their clients. Benson said that the test had allowed designers to leverage their creativity for respect and credibility during the corporate design process, and its removal may have removed some of their negotiating power.
David Kluft of Foley Hoag noted that the new ability to copyright design elements carried risks: if an entity applying to register a design knew that the elements had utility, the application could be considered a false representation of a material fact in its copyright registration, which carries criminal penalties.
Uncertainty exists about how this decision may impact the copyrightability of food. Top chefs had been seeking copyrightability for years before Star Athletica, and some prohibited customers from taking photographs of their food, citing a supposed copyright restriction. According to the pre-Star Athletica interpretation of separability, food could be copyrightable as a sculpture whose artistic features did not contribute to its purpose as a consumable. James P. Flynn of Epstein Becker & Green wondered if Star Athletica might have changed the fate of served food.
References
External links
SCOTUSblog case page
2017 in United States case law
United States copyright case law
United States Supreme Court cases
United States Supreme Court cases of the Roberts Court
Cheerleading
Fashion design
Copyrightability case law | Star Athletica, LLC v. Varsity Brands, Inc. | [
"Engineering"
] | 8,031 | [
"Design",
"Fashion design"
] |
53,786,432 | https://en.wikipedia.org/wiki/Jorge%20Sahade | Jorge Sahade (born February 17, 1915, in Cordoba, Argentina, died December 18, 2012) was an Argentine astronomer with more than 200 publications in journals and conferences. He was the first Latin American to achieve the presidency of the International Astronomical Union (IAU) between 1985 and 1988, and was also the first director of the Comisión Nacional de Actividades Espaciales. He held this position between 1991 and 1994.
Career
He was born in Cordoba into a family of Syrian origin. In Cordoba, Sahade wished to study mathematics, but at that time there were only university degrees in engineering and surveying. Sahade chose to study the latter at the National University of Córdoba, where he received his degree in 1937. While working at the Military Geographic Institute in La Plata, he learned about astronomy and chose to study it at the National University of La Plata, where in 1941 he became an astronomical assistant at its observatory and in 1943 became a Doctor of Astronomical and Related Sciences. After finishing his degree, he and Carlos Ulrrico Cesco (the first astronomy graduate in the country) obtained scholarships to go to the United States to learn astrophysics. While in the United States, Sahade decided to study binary stars.
He promoted the purchase of a 215 cm diameter telescope, today located in the Leoncito Astronomical Complex. The telescope was built in the United States and modeled after the one at Kitt Peak National Observatory; its blueprints were a gift from Kitt Peak's director Nicholas Mayall. Between 1953 and 1955, Sahade served as Director of the Astronomical Observatory of Cordoba, and between March 1968 and July 1969 he served as director of the Observatory of La Plata. In 1969 he became the first dean of the Faculty of Exact Sciences of the National University of La Plata.
He founded the Institute of Astronomy and Physics of Space (IAFE) in the first Pavilion of the University of Buenos Aires, where he was director and driving force between 1971 and 1974. After leaving CONICET and the direction of the IAFE, he continued as an independent IAFE researcher and also worked at the Argentine Institute of Radio Astronomy (IAR).
One of his publications was a study of the binary star system Beta Lyrae, published by the American Philosophical Society. The publication provided solutions to old problems about close binary star systems; the astronomer Helmut Abt in the United States later confirmed that the work was correct.
Awards and Acknowledgments
Merit Diploma of the Konex Foundation.
1983 - Konex Prize for Physics and Astronomy
Career award from the Argentine Astronomy Association.
1986 - Asteroid 2605 Sahade (1974 QA) was named in his honor.
1988 - Medal of Scientific Consecration (in Astronomy) of the Council of Advanced International Studies.
1993 - Ricardo P. Platzeck Prize in Astronomy, National Academy of Exact, Physical and Natural Sciences.
1995 - Researcher Emeritus of CONICET.
1999 - Gold Medal of the Argentine Friends of Astronomy Association.
2011 - Illustrious Citizen of the City of La Plata.
Selected publications
with Su-Shu Huang, Otto Struve, and Velta Zebergs:
with Frank Bradshaw Wood: 2015 reprint
as editor with George Eadon McCluskey, Jr. and Yoji Kondo:
References
Argentine people of Syrian descent
1915 births
2012 deaths
20th-century Argentine astronomers
Presidents of the International Astronomical Union
National University of Córdoba alumni
National University of La Plata alumni | Jorge Sahade | [
"Astronomy"
] | 787 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
53,788,372 | https://en.wikipedia.org/wiki/Kuramoto%E2%80%93Sivashinsky%20equation | In mathematics, the Kuramoto–Sivashinsky equation (also called the KS equation or flame equation) is a fourth-order nonlinear partial differential equation. It is named after Yoshiki Kuramoto and Gregory Sivashinsky, who derived the equation in the late 1970s to model the diffusive–thermal instabilities in a laminar flame front. It was also derived independently by G. M. Homsy and A. A. Nepomnyashchii in 1974, in connection with the stability of a liquid film on an inclined plane, and by R. E. LaQuey et al. in 1975, in connection with trapped-ion instability. The Kuramoto–Sivashinsky equation is known for its chaotic behavior.
Definition
The 1d version of the Kuramoto–Sivashinsky equation is
\[ u_t + u_{xx} + u_{xxxx} + \tfrac{1}{2}(u_x)^2 = 0. \]
An alternate form is
\[ v_t + v v_x + v_{xx} + v_{xxxx} = 0, \]
obtained by differentiating with respect to x and substituting v = u_x. This is the form used in fluid dynamics applications.
The Kuramoto–Sivashinsky equation can also be generalized to higher dimensions. In spatially periodic domains, one possibility is
\[ u_t + \Delta u + \Delta^2 u + \tfrac{1}{2}|\nabla u|^2 = 0, \]
where \Delta is the Laplace operator and \Delta^2 is the biharmonic operator.
Properties
The Cauchy problem for the 1d Kuramoto–Sivashinsky equation is well-posed in the sense of Hadamard—that is, for given initial data u(x, 0) = u_0(x), there exists a unique solution u(x, t) that depends continuously on the initial data.
The 1d Kuramoto–Sivashinsky equation possesses Galilean invariance—that is, if u(x, t) is a solution, then so is u(x - ct, t) + c, where c is an arbitrary constant. Physically, since u is a velocity, this change of variable describes a transformation into a frame that is moving with constant relative velocity c. On a periodic domain, the equation also has a reflection symmetry: if u(x, t) is a solution, then -u(-x, t) is also a solution.
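The source of the instability can be read off from a standard linearization (a brief sketch for orientation, using the fluid-dynamics form of the equation). Linearizing about u = 0 drops the nonlinear term, leaving u_t = -u_{xx} - u_{xxxx}; substituting a Fourier mode u \propto e^{ikx + \sigma t} gives the growth rate
\[ \sigma(k) = k^2 - k^4, \]
so long-wavelength modes with 0 < |k| < 1 grow (the negative-diffusion term destabilizes them) while the fourth-order term damps short wavelengths, with maximum growth at k = 1/\sqrt{2}. On a periodic domain of length L the admissible wavenumbers are k = 2\pi n/L, so the uniform state first loses stability once L > 2\pi, which is why the dynamics become progressively richer as the domain size increases.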
Solutions
Solutions of the Kuramoto–Sivashinsky equation possess rich dynamical characteristics. Considered on a periodic domain 0 ≤ x ≤ L, the dynamics undergo a series of bifurcations as the domain size L is increased, culminating in the onset of chaotic behavior. Depending on the value of L, solutions may include equilibria, relative equilibria, and traveling waves—all of which typically become dynamically unstable as L is increased. In particular, the transition to chaos occurs by a cascade of period-doubling bifurcations.
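Numerically, the stiffness of the fourth-derivative term makes pseudo-spectral methods with an exact treatment of the linear part the usual choice. The following is a minimal illustrative sketch (not drawn from any particular reference) that integrates the fluid-dynamics form with a first-order exponential time-differencing step; the grid size, domain length, time step, and initial condition are arbitrary demonstration choices:

```python
# Minimal pseudo-spectral sketch for u_t + u u_x + u_xx + u_xxxx = 0
# on a periodic domain [0, L); parameters are illustrative only.
import numpy as np

N, L = 256, 22.0                 # grid points, domain length (L > 2*pi allows instability)
dt, steps = 0.05, 4000           # time step and number of steps
x = L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # spectral wavenumbers

lin = k**2 - k**4                # linear growth rates sigma(k) = k^2 - k^4
E = np.exp(dt * lin)             # exact propagator for the linear part
den = np.where(lin == 0, 1.0, lin)
coef = np.where(lin == 0, dt, (E - 1) / den)  # ETD1 weight, with the dt limit at k = 0

u = 0.1 * np.cos(2 * np.pi * x / L)           # small initial perturbation
v = np.fft.fft(u)

for _ in range(steps):
    # nonlinear term u u_x written as (1/2) d/dx (u^2), evaluated pseudo-spectrally
    Nv = -0.5j * k * np.fft.fft(np.real(np.fft.ifft(v)) ** 2)
    v = E * v + coef * Nv        # exponential-Euler (ETD1) update

u = np.real(np.fft.ifft(v))      # for sufficiently large L the field is spatiotemporally chaotic
```

Higher-order exponential integrators such as ETDRK4 are standard in practice; the first-order step above is kept only for brevity, and no dealiasing is applied.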
Modified Kuramoto–Sivashinsky equation
Dispersive Kuramoto–Sivashinsky equations
A third-order derivative term representing dispersion of wavenumbers is often encountered in many applications. The dispersively modified Kuramoto–Sivashinsky equation, often called the Kawahara equation, is given by
\[ u_t + u u_x + u_{xx} + \delta u_{xxx} + u_{xxxx} = 0, \]
where \delta is a real parameter. A fifth-order derivative term is also often included, giving the modified Kawahara equation.
Sixth-order equations
Three forms of the sixth-order Kuramoto–Sivashinsky equation are encountered in applications involving tricritical points. The last of these is referred to as the Nikolaevsky equation, named after V. N. Nikolaevsky, who introduced it in 1989, whereas the first two were introduced by P. Rajamanickam and J. Daou in the context of transitions near tricritical points, i.e., a change in the sign of the fourth-derivative term, with the plus sign approaching a Kuramoto–Sivashinsky type and the minus sign approaching a Ginzburg–Landau type.
Applications
Applications of the Kuramoto–Sivashinsky equation extend beyond its original context of flame propagation and reaction–diffusion systems. These additional applications include flows in pipes and at interfaces, plasmas, chemical reaction dynamics, and models of ion-sputtered surfaces.
See also
Michelson–Sivashinsky equation
List of nonlinear partial differential equations
List of chaotic maps
Clarke's equation
Laminar flame speed
G-equation
References
External links
Differential equations
Fluid dynamics
Combustion
Chaotic maps
Functions of space and time | Kuramoto–Sivashinsky equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 828 | [
"Functions and mappings",
"Dynamical systems",
"Functions of space and time",
"Chemical engineering",
"Mathematical objects",
"Differential equations",
"Equations",
"Combustion",
"Mathematical relations",
"Piping",
"Spacetime",
"Chaotic maps",
"Fluid dynamics"
] |
53,790,076 | https://en.wikipedia.org/wiki/Conformon | From a biological standpoint, the goal-directed molecular motions inside living cells are carried out by biopolymers acting like molecular machines (e.g. myosin, RNA/DNA polymerase, ion pumps, etc.). These molecular machines are driven by conformons, that is, sequence-specific mechanical strains generated by free energy released in chemical reactions or by stress-induced destabilisations in supercoiled biopolymer chains. Therefore, conformons can be defined as packets of conformational energy generated from substrate binding or chemical reactions and confined within biopolymers.
On the other hand, from a physics standpoint, the conformon is a localization of elastic and electronic energy which may propagate in space with or without dissipation. The mechanism which involves dissipationless propagation is a form of molecular superconductivity. On the quantum mechanical level, both elastic/vibrational and electronic energy can be quantised, so the conformon carries a fixed portion of energy. This has led to the definition of a quantum of conformation (shape).
References
Biophysics | Conformon | [
"Physics",
"Biology"
] | 227 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
50,947,918 | https://en.wikipedia.org/wiki/YInMn%20Blue | YInMn Blue (/jɪnmɪn/; for the chemical symbols Y for yttrium, In for indium, and Mn for manganese), also known as Oregon Blue or Mas Blue, is an inorganic blue pigment that was discovered by Mas Subramanian and his (then) graduate student, Andrew Smith, at Oregon State University in 2009. The pigment is noteworthy for its vibrant, near-perfect blue color and unusually high NIR reflectance. The chemical compound has a unique crystal structure in which trivalent manganese ions in trigonal bipyramidal coordination are responsible for the observed intense blue color. Since the initial discovery, the fundamental principles of color science have been explored extensively by the Subramanian research team at Oregon State University, resulting in a wide range of rationally designed novel green, purple, and orange pigments, all through intentional addition of a chromophore in the trigonal bipyramidal coordination environment.
Historical pigments
The discovery of the first known synthetic blue pigment, Egyptian blue (CaCuSi4O10), was promoted by the Egyptian pharaohs who sponsored the creation of new pigments to be used in art. Other civilizations combined organic and mineral materials to create blue pigments ranging from azure-blue like the Maya blue to the Han blue (BaCuSi4O10), which was developed by the Chinese Han dynasty and manipulated to produce a light or dark blue color.
A number of pigments are used to impart the blue color. Cobalt blue (CoAl2O4) was first described in 1777; it is extremely stable and has been traditionally used as a coloring agent in ceramics. Ultramarine (Na8-10Al6Si6O24S2-4) was made by grinding the forbiddingly expensive lapis lazuli into a powder until a cheaper synthetic form was invented in 1826 by the French industrialist Jean Baptiste Guimet and in 1828 by the German chemist Christian Gmelin. Prussian blue (Fe4[Fe(CN)6]3) was first described by the German polymath Johann Leonhard Frisch and the president of the Prussian Academy of Sciences, Gottfried Wilhelm Leibniz, in 1708. Azurite (Cu3(CO3)2(OH)2) is a soft, deep-blue copper mineral produced by weathering copper ore deposits; it has been used since ancient times and was first recorded by the first-century Roman writer Pliny the Elder. Phthalocyanine Blue BN was first prepared in 1927 and has a wide range of applications.
Most known pigments have detrimental health and environmental effects or durability problems. Cobalt blue causes cobalt poisoning when inhaled or ingested. Prussian blue is known to liberate hydrogen cyanide under certain acidic conditions. Ultramarine and azurite are not stable particularly in high-temperature and acidic conditions; additionally, ultramarine production involves the emission of a large amount of the toxic sulfur dioxide. The newer Phthalocyanine Blue BN is non-biodegradable and has been found to cause neuroanatomical defects in developing chicken embryos when injected directly into incubating eggs.
Inorganic blue pigments in which manganese (in the pentavalent oxidation state and in a tetrahedral coordination) is the chromophore have been employed since the Middle Ages (e.g., the fossil bone odontolite, which is isostructural with apatite). Synthetic alternatives, such as barium manganate sulfate (or Manganese Blue, developed in 1907 and patented in 1935), have been phased out industrially due to safety and regulatory concerns, hence YInMn Blue fills the niche of an inorganic, environmentally safe alternative to the traditionally used blue pigments, and offers a durable intense blue color.
Discovery
In 2008, Mas Subramanian received a National Science Foundation grant to explore novel materials for electronics applications. Under this project, he was particularly interested in synthesizing multiferroics based on manganese oxides. He guided Andrew E. Smith, the first graduate student in his lab, to research an oxide solid solution between YInO3 (a ferroelectric material) and YMnO3 (an antiferromagnetic material) at high temperature. The compound Smith synthesized was, by coincidence, a vibrant blue material. Because of Subramanian's experience at DuPont, he recognized the compound's potential use as a blue pigment and together they filed a patent disclosure covering the invention. After publishing their results, Shepherd Color Company successfully contacted Subramanian for possible collaboration in commercialization efforts. For his outstanding contributions to inorganic color pigment chemistry, Subramanian was awarded the Perkin Medal from the Society of Dyers and Colourists in 2019.
The color may be adjusted by varying the In/Mn ratio in the pigment's base formula YIn1-xMnxO3, and the bluest composition has a color comparable to standard cobalt blue pigments.
Properties and preparation
YInMn Blue is chemically stable, does not fade, and is non-toxic. It is more durable than alternative blue pigments such as ultramarine or Prussian blue, retaining its vibrant color in oil and water, and is safer than cobalt blue, which is a suspected carcinogen and may cause cobalt poisoning.
The pigment is resistant to acids such as nitric acid, and is difficult to combust. When YInMn Blue does ignite, it burns with a violet color attributed to the indium atoms.
Infrared radiation is strongly reflected by YInMn Blue, which makes this pigment suitable for energy-saving, cool coatings. It can be prepared by heating the oxides of the elements yttrium, indium, and manganese to high temperatures.
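As a purely illustrative aid (not taken from the source), the sketch below computes oxide masses for a small hypothetical batch, assuming the base formula YIn1-xMnxO3 and the balanced solid-state reaction 1/2 Y2O3 + (1-x)/2 In2O3 + x/2 Mn2O3 → YIn1-xMnxO3; the composition x = 0.1 and the batch size are arbitrary choices:

```python
# Hypothetical batch calculation for YIn(1-x)Mn(x)O3 from Y2O3, In2O3, and Mn2O3.
# The base formula and the choices of x and n_product are illustrative assumptions.
M_O, M_Y, M_In, M_Mn = 15.999, 88.906, 114.818, 54.938   # molar masses, g/mol

M = {"Y2O3": 2 * M_Y + 3 * M_O,
     "In2O3": 2 * M_In + 3 * M_O,
     "Mn2O3": 2 * M_Mn + 3 * M_O}

x = 0.1            # Mn fraction on the In site
n_product = 0.01   # moles of YIn(1-x)Mn(x)O3 desired

# Each formula unit needs 1/2 Y2O3, (1-x)/2 In2O3, and x/2 Mn2O3.
masses = {"Y2O3": 0.5 * n_product * M["Y2O3"],
          "In2O3": 0.5 * (1 - x) * n_product * M["In2O3"],
          "Mn2O3": 0.5 * x * n_product * M["Mn2O3"]}

for oxide, grams in masses.items():
    print(f"{oxide}: {grams:.3f} g")
```

Mn2O3 is used here because the chromophore is trivalent manganese, as noted above.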
Commercialization
In popular culture
After Subramanian, Smith, and other colleagues published their results, companies began inquiring about commercial uses. Shepherd Color Company eventually won the license to commercialize the pigment in May 2015. Many companies such as AMD and Crayola rushed to use the new pigment name in product announcements and press releases. It is unclear when the first commercial application of YInMn blue reached the consumer market.
AMD announced in July 2016 that the pigment would be used on new Radeon Pro WX and Pro SSG professional GPUs for the energy efficiency that stems from its near-infrared reflecting property.
The American art supplies company Crayola announced in May 2017 that it planned to replace its retired Dandelion color (a yellow) with a new color "inspired by" YInMn. The new color does not contain any YInMn. Crayola held a contest for more pronounceable name ideas, and announced the new color name, "Bluetiful", on 14 September 2017. The new crayon color was made available in late 2017.
In artists' pigments
In June 2016, an Australian company, Derivan, published experiments using YInMn within their artist range (Matisse acrylics), and subsequently released the pigment for purchase.
As of April 2021, Golden Paints has commercially licensed and sourced the pigment from Shepherd Color Company. According to Golden, the supply of the raw pigment is extremely limited. Shepherd Color Company received the required environmental and safety approvals to sell the pigment in the U.S. in 2020.
Gamblin Artists Colors made a first Limited Edition batch of YInMn Blue in November 2020.
See also
International Klein Blue
List of inorganic pigments
Notes
References
External links
United States patent 8282728: "Materials with trigonal bipyramidal coordination and methods of making the same"
YInMn Blue at Shepherd Color Company
Medal for YInMnBlue and Dr. Mas Subramanian
Shades of blue
Yttrium compounds
Indium compounds
Manganese(III) compounds
Transition metal oxides
Inorganic pigments | YInMn Blue | [
"Chemistry"
] | 1,563 | [
"Inorganic pigments",
"Inorganic compounds"
] |
50,951,733 | https://en.wikipedia.org/wiki/Human%20interactions%20with%20microbes | Human interactions with microbes include both practical and symbolic uses of microbes, and negative interactions in the form of human, domestic animal, and crop diseases.
Practical use of microbes began in ancient times with fermentation in food processing; bread, beer and wine have been produced by yeasts from the dawn of civilisation, such as in ancient Egypt. More recently, microbes have been used in activities from biological warfare to the production of chemicals by fermentation, as industrial chemists discover how to manufacture a widening variety of organic chemicals including enzymes and bioactive molecules such as hormones and competitive inhibitors for use as medicines. Fermentation is used, too, to produce substitutes for fossil fuels in forms such as ethanol and methane; fuels may also be produced by algae. Anaerobic microorganisms are important in sewage treatment. In scientific research, yeasts and the bacterium Escherichia coli serve as model organisms especially in genetics and related fields.
On the symbolic side, an early poem about brewing is the Sumerian "Hymn to Ninkasi", from 1800 BC. In the Middle Ages, Giovanni Boccaccio's The Decameron and Geoffrey Chaucer's The Canterbury Tales addressed people's fear of deadly contagion and the moral decline that could result. Novelists have exploited the apocalyptic possibilities of pandemics from Mary Shelley's 1826 The Last Man and Jack London's 1912 The Scarlet Plague onwards. Hilaire Belloc wrote a humorous poem to "The Microbe" in 1912. Dramatic plagues and mass infection have formed the story lines of many Hollywood films, starting with Nosferatu in 1922. In 1971, The Andromeda Strain told the tale of an extraterrestrial microbe threatening life on Earth. Microbiologists since Alexander Fleming have used coloured or fluorescing colonies of bacteria to create miniature artworks.
Microorganisms such as bacteria and viruses are important as pathogens, causing disease to humans, crop plants, and domestic animals.
Context
Culture consists of the social behaviour and norms found in human societies and transmitted through social learning. Cultural universals in all human societies include expressive forms like art, music, dance, ritual, religion, and technologies like tool usage, cooking, shelter, and clothing. The concept of material culture covers physical expressions such as technology, architecture and art, whereas immaterial culture includes principles of social organization, mythology, philosophy, literature, and science. This article describes the roles played by microorganisms in human culture.
Since microbes were not known until the Early Modern period, they appear in earlier literature indirectly, through descriptions of baking and brewing. Only with the invention of the microscope, as used by Robert Hooke in his 1665 book Micrographia, and by Antonie van Leeuwenhoek in the 1670s, the germ theory of disease, and progress in microbiology in the 19th century were microbes observed directly, identified as living organisms, and put to use on a scientific basis. The same knowledge also allowed microbes to appear explicitly in literature and the arts.
Practical uses
Food production
Controlled fermentation with microbes in brewing, wine making, baking, pickling and cultured dairy products such as yogurt and cheese, is used to modify ingredients to make foods with desirable properties. The principal microbes involved are yeasts, in the case of beer, wine, and ordinary bread; and bacteria, in the case of anaerobically fermented vegetables, dairy products, and sourdough bread. The cultures variously provide flavour and aroma, inhibit pathogens, increase digestibility and palatability, make bread rise, reduce cooking time, and create useful products including alcohol, organic acids, vitamins, amino acids, and carbon dioxide. Safety is maintained with the help of food microbiology.
Water treatment
Oxidative sewage treatment processes rely on microorganisms to oxidise organic constituents. Anaerobic microorganisms reduce sludge solids producing methane gas and a sterile mineralised residue. In potable water treatment, one method, the slow sand filter, employs a complex gelatinous layer composed of a wide range of microorganisms to remove both dissolved and particulate material from raw water.
Energy
Microorganisms are used in fermentation to produce ethanol, and in biogas reactors to produce methane. Scientists are researching the use of algae to produce liquid fuels, and bacteria to convert various forms of agricultural and urban waste into usable fuels.
Chemicals, enzymes
Microorganisms are used for many commercial and industrial purposes, including the production of chemicals, enzymes and other bioactive molecules, often through protein engineering. For example, acetic acid is produced by the bacterium Acetobacter aceti, while citric acid is produced by the fungus Aspergillus niger. Microorganisms are used to prepare a widening range of bioactive molecules and enzymes. For example, Streptokinase produced by the bacterium Streptococcus and modified by genetic engineering is used to remove clots from the blood vessels of patients who have suffered a heart attack. Cyclosporin A is an immunosuppressive agent in organ transplantation, while statins produced by the yeast Monascus purpureus serve as blood cholesterol lowering agents, competitively inhibiting the enzyme that synthesizes cholesterol.
Science
Microorganisms are essential tools in biotechnology, biochemistry, genetics, and molecular biology. The yeasts brewer's yeast (Saccharomyces cerevisiae) and fission yeast (Schizosaccharomyces pombe) are important model organisms in science, since they are simple eukaryotes that can be grown rapidly in large numbers and are easily manipulated. They are particularly valuable in genetics, genomics and proteomics, for example in protein production. The easily cultured gut bacterium Escherichia coli, a prokaryote, is similarly widely used as a model organism.
Endosymbiosis
Microbes can form an endosymbiotic relationship with larger organisms. For example, the bacteria that live within the human digestive system contribute to human health through gut immunity, the synthesis of vitamins such as folic acid and biotin, and the fermentation of complex indigestible carbohydrates. Future drugs and food chemicals may need to be tested on the gut microbiota; it is already clear that probiotic supplements can promote health, and that gut microbes are affected by both diet and medicines.
Warfare
Pathogenic microbes, and toxins that they produce, have been developed as possible agents of warfare. Crude forms of biological warfare have been practiced since antiquity. In the 6th century BC, the Assyrians poisoned enemy wells with a fungus said to render the enemy delirious. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa, possibly assisting the spread of the Black Death into Europe.
Advances in bacteriology in the 20th century increased the sophistication of possible bio-agents in war. Biological sabotage—in the form of anthrax and glanders—was undertaken on behalf of the Imperial German government during World War I, with indifferent results. In World War II, Britain weaponised tularemia, anthrax, brucellosis, and botulism toxins, but never used them.
The USA similarly explored biological warfare agents, developing anthrax spores, brucellosis, and botulism toxins for possible military use. Japan developed biological warfare agents, with the use of experiments on human prisoners, and was about to use them when the war ended.
Symbolic uses
Being very small, and unknown until the invention of the microscope, microbes do not feature directly in art or literature before Early Modern times (though they appear indirectly in works about brewing and baking), when Antonie van Leeuwenhoek observed microbes in water in 1676; his results were soon confirmed by Robert Hooke. A few major diseases such as tuberculosis appear in literature, art, film, opera and music.
In literature
The literary possibilities of post-apocalyptic stories about pandemics (worldwide outbreaks of disease) have been explored in novels and films from Mary Shelley's 1826 The Last Man and Jack London's 1912 The Scarlet Plague onwards. Medieval writings that deal with plague include Giovanni Boccaccio's The Decameron and Geoffrey Chaucer's The Canterbury Tales: both treat the people's fear of contagion and the resulting moral decline, as well as bodily death.
The making of beer has been celebrated in verse since the time of Ancient Sumeria, c. 1800 BC, when the "Hymn to Ninkasi" was inscribed on a clay tablet. Ninkasi, tutelary goddess of beer, and daughter of the creator Enki and the "queen of the sacred lake" Ninki, "handles the dough and with a big shovel, mixing in a pit, the bappir with [date] honey, ... waters the malt set on the ground, ... soaks the malt in a jar, ... spreads the cooked mash on large reed mats, coolness overcomes, ... holds with both hands the great sweet wort, brewing it with honey".
Wine is a frequent topic in English literature, from the spiced French and Italian "ypocras", "claree", and "vernage" in Chaucer's The Merchant's Tale onwards. William Shakespeare's Falstaff drank Spanish "sherris sack", in contrast to Sir Toby Belch's preference for "canary". Wine references in later centuries branch out to more winegrowing regions.
The Microbe is a humorous 1912 poem by Hilaire Belloc, starting with the lines "The microbe is so very small / You cannot make him out at all, / But many sanguine people hope / To see him through a microscope." Microbes and Man is an admired "classic" book, first published in 1969, by the "father figure of British microbiology" John Postgate on the whole subject of microorganisms and their relationships with humans.
In film
Microbes feature in many highly dramatized films. Hollywood was quick to exploit the possibilities of deadly disease, mass infection and drastic government reaction, starting as early as 1922 with Nosferatu, in which a Dracula-like figure, Count Orlok, sleeps in unhallowed ground contaminated with the Black Death, which he brings with him wherever he goes. Another classic film, Ingmar Bergman's 1957 The Seventh Seal, deals with the plague theme very differently, with the grim reaper directly represented by an actor in a hood. More recently, the 1971 The Andromeda Strain, based on a novel by Michael Crichton, portrayed an extraterrestrial microbe contaminating the Earth.
In music
"A Very Cellular Song," a song from the British psychedelic folk band The Incredible String Band's 1968 album The Hangman's Beautiful Daughter, is told partially from the point of view of an amoeba, a protistan. The COVID-19 pandemic inspired several songs and albums.
In art
Microbial art is the creation of artworks by culturing bacteria, typically on agar plates, to form desired patterns. These may be chosen to fluoresce under ultraviolet light in different colours. Alexander Fleming, the discoverer of penicillin, created "germ paintings" using different species of bacteria that were naturally pigmented in different colours.
An instance of a protist in an artwork is the artist Louise Bourgeois's bronze sculpture Amoeba. It has a white patina resembling plaster, and was designed in 1963–5, based on drawings of a pregnant woman's belly that she made as early as the 1940s. According to the Tate Gallery, the work "is a roughly modelled organic form, its bulges and single opening suggesting a moving, living creature in the stages of evolution."
Negative interactions
Disease
Microorganisms are the causative agents (pathogens) in many infectious diseases of humans and domestic animals. Pathogenic bacteria cause diseases such as plague, tuberculosis and anthrax. Protozoa cause diseases including malaria, sleeping sickness, dysentery and toxoplasmosis. Microscopic fungi cause diseases such as ringworm, candidiasis and histoplasmosis. Pathogenic viruses cause diseases such as influenza, yellow fever and AIDS.
The practice of hygiene was created to prevent infection or food spoilage by eliminating microbes, especially bacteria, from the surroundings.
In agriculture and horticulture
Microorganisms including bacteria, fungi, and viruses are important as plant pathogens, causing disease to crop plants. Fungi cause serious crop diseases such as maize leaf rust, wheat stem rust, and powdery mildew. Bacteria cause plant diseases including leaf spot and crown galls. Viruses cause plant diseases such as leaf mosaic. The oomycete Phytophthora infestans causes potato blight, contributing to the Great Irish Famine of the 1840s.
The tulip breaking virus played a role in the tulip mania of the Dutch Golden Age. The famous Semper Augustus tulip, in particular, owed its striking pattern to infection with the plant disease, a kind of mosaic virus, making it the most expensive of all the tulip bulbs sold.
References
Microbiology
Biology and culture
Bacteria and humans | Human interactions with microbes | [
"Chemistry",
"Biology"
] | 2,783 | [
"Bacteria and humans",
"Microbiology",
"Bacteria",
"Microscopy"
] |
50,955,330 | https://en.wikipedia.org/wiki/Phylomedicine | Phylomedicine is an emerging discipline at the intersection of medicine, genomics, and evolution. It focuses on the use of evolutionary knowledge to predict functional consequences of mutations found in personal genomes and populations.
History
Modern technologies have made genome sequencing accessible, and biomedical scientists have profiled genomic variation in apparently healthy individuals and individuals diagnosed with a variety of diseases. This work has led to the discovery of thousands of disease-associated genes and genetic variants, elucidating a more robust picture of the amount and types of variations found within and between humans.
Proteins are encoded in genomic DNA by exons, and these comprise only ~1% of the human genomic sequence (aka the exome). The exome of an individual carries about 6,000–10,000 amino-acid-altering nonsynonymous single-nucleotide variants (nSNVs), and many of these variants are already known to be associated with more than 1000 diseases. Although only a small fraction of these personal variants are likely to impact health, the sheer volume of known genomic and exomic variants is too large to apply traditional laboratory or experimental techniques to explore their functional consequences. Translating a personal genome into useful phenotypic information (e.g. relating to predisposition to disease, differential drug response, or other health concerns) is therefore a grand challenge in the field of genomic medicine.
Fortunately, results from the natural experiment of molecular evolution are recorded in the genomes of humans and other living species. All genomic variation is subjected to the process of natural selection which generally reduces mutations with negative effects on phenotype over time. With the availability of a large number of genomes from the tree of life, evolutionary conservation of individual genomic positions and the sets of mutations permitted among species informs the functional and health consequences of these mutations.
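Positions under strong purifying selection change little across species, so even a simple per-column variability measure separates highly conserved sites (where new mutations are more likely to be deleterious) from tolerant ones. The following sketch is a minimal illustration of this idea, not a production method: the toy alignment, function names and scoring choice are all invented for the example.

```python
from collections import Counter
from math import log2

# Toy multiple-sequence alignment of one protein region across species
# (illustrative data, not a real alignment).
alignment = [
    "MKTAYIA",  # human
    "MKTAYIA",  # chimp
    "MKSAYIA",  # mouse
    "MKTAFIA",  # zebrafish
]

def column_conservation(column):
    """Return 1 - normalized Shannon entropy of one alignment column.

    1.0 means perfectly conserved; values near 0 mean highly variable.
    """
    counts = Counter(column)
    n = len(column)
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return 1.0 - entropy / log2(n)   # log2(n) is the maximum entropy

scores = [column_conservation(col) for col in zip(*alignment)]
for pos, score in enumerate(scores, start=1):
    print(f"position {pos}: conservation = {score:.2f}")
```

Real phylomedicine pipelines use far more species and evolutionary-rate-based scores, but the underlying logic, ranking positions by how well they are preserved across the tree of life, is the same.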
Consequently, phylomedicine has emerged as an important discipline at the intersection of molecular evolution and genomic medicine with a focus on understanding the inherited component of human disease and health. Examples include studies of retinal disease, auditory diseases, and common diseases more generally. Phylomedicine expands the purview of contemporary evolutionary medicine to use evolutionary patterns beyond short-term history (e.g. populations within a species) to the long-term evolutionary history of multispecies genomics.
References
Bioinformatics
Genomics | Phylomedicine | [
"Engineering",
"Biology"
] | 475 | [
"Bioinformatics",
"Biological engineering"
] |
50,956,705 | https://en.wikipedia.org/wiki/Coarse-grained%20modeling | Coarse-grained modeling, coarse-grained models, aim at simulating the behaviour of complex systems using their coarse-grained (simplified) representation. Coarse-grained models are widely used for molecular modeling of biomolecules at various granularity levels.
A wide range of coarse-grained models have been proposed. They are usually dedicated to computational modeling of specific molecules: proteins, nucleic acids, lipid membranes, carbohydrates or water. In these models, molecules are represented not by individual atoms, but by "pseudo-atoms" approximating groups of atoms, such as a whole amino acid residue. By decreasing the number of degrees of freedom, much longer simulation times can be studied at the expense of molecular detail. Coarse-grained models have found practical applications in molecular dynamics simulations. Another case of interest is the simplification of a given discrete-state system, as very often descriptions of the same system at different levels of detail are possible. An example is given by the chemomechanical dynamics of a molecular machine, such as kinesin.
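As an illustration of the mapping step only (the coordinates, masses and bead grouping below are invented for the example), groups of atoms can be collapsed into single pseudo-atoms placed at each group's center of mass:

```python
import numpy as np

# Atomistic coordinates (nm) and masses (amu) for six atoms (toy values).
coords = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.1, 0.0],
                   [0.5, 0.5, 0.5], [0.6, 0.5, 0.5], [0.6, 0.6, 0.5]])
masses = np.array([12.0, 1.0, 16.0, 12.0, 1.0, 16.0])

# Each pseudo-atom ("bead") represents a group of atom indices,
# e.g. one bead per amino acid residue or per chemical moiety.
beads = [[0, 1, 2], [3, 4, 5]]

def coarse_grain(coords, masses, beads):
    """Map atomistic coordinates onto bead positions (centers of mass)."""
    positions = []
    for group in beads:
        m = masses[group]
        r = coords[group]
        positions.append((m[:, None] * r).sum(axis=0) / m.sum())
    return np.array(positions)

print(coarse_grain(coords, masses, beads))  # one 3D position per bead
```

A force field acting between the beads must then be parameterized separately, which is where the various coarse-grained models mentioned above differ.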
The coarse-grained modeling originates from work by Michael Levitt and Ariel Warshel in 1970s. Coarse-grained models are presently often used as components of multiscale modeling protocols in combination with reconstruction tools (from coarse-grained to atomistic representation) and atomistic resolution models. Atomistic resolution models alone are presently not efficient enough to handle large system sizes and simulation timescales.
Coarse graining and fine graining in statistical mechanics addresses the subject of entropy, and thus the second law of thermodynamics. One has to realise that the concept of temperature cannot be attributed to an arbitrarily microscopic particle since this does not radiate thermally like a macroscopic or "black body". However, one can attribute a nonzero entropy to an object with as few as two states like a "bit" (and nothing else). The entropies of the two cases are called thermal entropy and von Neumann entropy respectively. They are also distinguished by the terms coarse grained and fine grained respectively. This latter distinction is related to the aspect spelled out above and is elaborated on below.
The Liouville theorem (sometimes also called the Liouville equation) states that a phase space volume $\Delta q\,\Delta p$ (spanned by the coordinate $q$ and the momentum $p$, here in one spatial dimension) remains constant in the course of time, no matter where the point contained in $\Delta q\,\Delta p$ moves. This is a consideration in classical mechanics. In order to relate this view to macroscopic physics one surrounds each point e.g. with a sphere of some fixed volume - a procedure called coarse graining which lumps together points or states of similar behaviour. The trajectory of this sphere in phase space then covers also other points and hence its volume in phase space grows. The entropy associated with this consideration, whether zero or not, is called coarse grained entropy or thermal entropy. A large number of such systems, i.e. the one under consideration together with many copies, is called an ensemble. If these systems do not interact with each other or anything else, and each has the same energy $E$, the ensemble is called a microcanonical ensemble. Each replica system appears with the same probability, and temperature does not enter.
Now suppose we define a probability density $\rho(q,p,t)$ describing the motion of the point with phase space element $\Delta q\,\Delta p$. In the case of equilibrium or steady motion the equation of continuity implies that the probability density $\rho$ is independent of time $t$. We take $\rho$ as nonzero only inside the phase space volume $V$. One then defines the entropy $S$ by the relation

$S = -k \sum \rho \ln\rho \,\Delta q\,\Delta p,$ where $\sum \rho\,\Delta q\,\Delta p = 1.$

Then, by maximisation of $S$ for a given energy $E$, i.e. linking $\delta S = 0$ with the variation of the other sum (set equal to zero) via a Lagrange multiplier $\lambda$, one obtains (as in the case of a lattice of spins or with a bit at each lattice point)

$\rho = \mathrm{const.}$ and $S = k \ln\frac{V}{\Delta q\,\Delta p},$

the volume of $V$ being proportional to the exponential of $S$.
This is again a consideration in classical mechanics.
In quantum mechanics the phase space becomes a space of states, and the probability density $\rho$ an operator with a subspace of states of dimension or number of states $N$ specified by a projection operator $P$. Then the entropy is (obtained as above)

$S = k \ln N$

and is described as fine grained or von Neumann entropy. If $N = 1$, the entropy vanishes and the system is said to be in a pure state. Here the exponential of $S$ is proportional to the number of states. The microcanonical ensemble is again a large number of noninteracting copies of the given system, and $S$, energy $E$ etc. become ensemble averages.
Now consider interaction of a given system with another one - or in ensemble terminology - the given system and the large number of replicas all immersed in a big one called a heat bath characterised by a distribution function $\rho(E)$. Since the systems interact only via the heat bath, the individual systems of the ensemble can have different energies depending on which energy state they are in. This interaction is described as entanglement and the ensemble as canonical ensemble (the macrocanonical ensemble permits also exchange of particles).

The interaction of the ensemble elements via the heat bath leads to temperature $T$, as we now show. Considering two elements with energies $E_1$ and $E_2$, the probability of finding these in the heat bath is proportional to $\rho(E_1)\,\rho(E_2)$, and this is proportional to $\rho(E_1 + E_2)$ if we consider the binary system as a system in the same heat bath defined by the function $\rho(E)$. It follows that

$\rho(E) \propto e^{-\beta E}$

(the only way to satisfy the proportionality), where $\beta$ is a constant. Normalisation then implies

$\rho(E) = \frac{e^{-\beta E}}{\sum e^{-\beta E}} \equiv \frac{e^{-\beta E}}{Z}.$

Then in terms of ensemble averages

$\langle E \rangle = -\frac{\partial}{\partial \beta}\ln Z$, and $S = -k\,\mathrm{Tr}\,(\rho\ln\rho) = k\left(\ln Z + \beta\,\langle E\rangle\right),$

so that $\beta = 1/kT$ by comparison with the second law of thermodynamics. $S$ is now the entanglement entropy or fine grained von Neumann entropy. This is zero if the system is in a pure state, and is nonzero when in a mixed (entangled) state.
Above we considered a system immersed in another huge one called heat bath with the possibility of allowing heat exchange between them. Frequently one considers a different situation, i.e. two systems A and B with a small hole in the partition between them. Suppose B is originally empty but A contains an explosive device which fills A instantaneously with photons. Originally A and B have energies and respectively, and there is no interaction. Hence originally both are in pure quantum states and have zero fine grained entropies. Immediately after explosion A is filled with photons, the energy still being and that of B also (no photon has yet escaped). Since A is filled with photons, these obey a Planck distribution law and hence the coarse grained thermal entropy of A is nonzero (recall: lots of configurations of the photons in A, lots of states with one maximal), although the fine grained quantum mechanical entropy is still zero (same energy state), as also that of B. Now allow photons to leak slowly (i.e. with no disturbance of the equilibrium) from A to B. With fewer photons in A, its coarse grained entropy diminishes but that of B increases. This entanglement of A and B implies they are now quantum mechanically in mixed states, and so their fine grained entropies are no longer zero. Finally when all photons are in B, the coarse grained entropy of A as well as its fine grained entropy vanish and A is again in a pure state but with new energy. On the other hand B now has an increased thermal entropy, but since the entanglement is over it is quantum mechanically again in a pure state, its ground state, and that has zero fine grained von Neumann entropy. Consider B: In the course of the entanglement with A its fine grained or entanglement entropy started and ended in pure states (thus with zero entropies). Its coarse grained entropy, however, rose from zero to its final nonzero value. Roughly half way through the procedure the entanglement entropy of B reaches a maximum and then decreases to zero at the end.
The classical coarse grained thermal entropy of the second law of thermodynamics is not the same as the (mostly smaller) quantum mechanical fine grained entropy. The difference is called information. As may be deduced from the foregoing arguments, this difference is roughly zero before the entanglement entropy (which is the same for A and B) attains its maximum. An example of coarse graining is provided by Brownian motion.
Software packages
Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)
Extensible Simulation Package for Research on Soft Matter ESPResSo (external link)
References
Molecular modelling
Biomolecules | Coarse-grained modeling | [
"Chemistry",
"Biology"
] | 1,737 | [
"Natural products",
"Molecular physics",
"Organic compounds",
"Theoretical chemistry",
"Molecular modelling",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |
48,574,319 | https://en.wikipedia.org/wiki/Circular%20triangle | In geometry, a circular triangle is a triangle with circular arc edges.
Examples
The intersection of three circular disks forms a convex circular triangle. For instance, a Reuleaux triangle is a special case of this construction where the three disks are centered on the vertices of an equilateral triangle, with radius equal to the side length of the triangle. However, not every convex circular triangle is formed as an intersection of disks in this way.
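The defining property of the Reuleaux case, constant width, can be checked numerically by sampling the three bounding arcs and measuring the spread of their projections onto many directions. This is an illustrative sketch with unit side length; the names and the discretization are invented for the example.

```python
import numpy as np

s = 1.0  # side length = radius of each disk = the constant width
A = np.array([0.0, 0.0])
B = np.array([s, 0.0])
C = np.array([s / 2, s * np.sqrt(3) / 2])

def arc(center, start_angle, end_angle, n=400):
    t = np.linspace(start_angle, end_angle, n)
    return center + s * np.stack([np.cos(t), np.sin(t)], axis=1)

# Boundary of the Reuleaux triangle: three circular arcs, each centered
# at one vertex and joining the two opposite vertices.
boundary = np.vstack([
    arc(A, 0.0, np.pi / 3),                 # from B to C, centered at A
    arc(B, 2 * np.pi / 3, np.pi),           # from C to A, centered at B
    arc(C, 4 * np.pi / 3, 5 * np.pi / 3),   # from A to B, centered at C
])

# Width in a direction = spread of the projections onto that direction.
widths = []
for theta in np.linspace(0.0, np.pi, 180, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    p = boundary @ u
    widths.append(p.max() - p.min())

print(f"min width {min(widths):.6f}, max width {max(widths):.6f}")  # both ~ s
```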
A circular horn triangle has all internal angles equal to zero. One way of forming some of these triangles is to place three circles, externally tangent to each other in pairs; then the central triangular region surrounded by these circles is a horn triangle. However, other horn triangles, such as the arbelos (with three collinear vertices and three semicircles as its sides) are interior to one of the three tangent circles that form it, rather than exterior to all three.
A cardioid-like circular triangle found by Roger Joseph Boscovich has three vertices equally spaced on a line, two equal semicircles on one side of the line, and a third semicircle of twice the radius on the other side of the line. Since each semicircle meets the line at a right angle at the ends of its diameter, the two outer vertices have interior angle $\pi$ and the middle vertex has interior angle $2\pi$ (a cusp). It has the curious property that all lines through the middle vertex bisect its perimeter.
Other circular triangles can have a mixture of convex and concave circular arc edges.
Characterization of angles
Three given angles $\theta_1$, $\theta_2$, and $\theta_3$ in a suitable interval form the interior angles of a circular triangle (without self-intersections) if and only if they obey a certain system of inequalities.
All circular triangles with the same interior angles as each other are equivalent to each other under Möbius transformations.
Isoperimetry
Circular triangles give the solution to an isoperimetric problem in which one seeks a curve of minimum length that encloses three given points and has a prescribed area. When the area is at least as large as the circumcircle of the points, the solution is any circle of that area surrounding the points. For smaller areas, the optimal curve will be a circular triangle with the three points as its vertices, and with circular arcs of equal radii as its sides, down to the area at which one of the three interior angles of such a triangle reaches zero. Below that area, the curve degenerates to a circular triangle with "antennae", straight segments reaching from its vertices to one or more of the specified points. In the limit as the area goes to zero, the circular triangle shrinks towards the Fermat point of the given three points.
See also
Hart circle, a circle associated with certain circular triangles
Hyperbolic triangle, a triangle that has straight sides in hyperbolic geometry, but is drawn as circular in some models of hyperbolic geometry
Lune and Lens, two-sided figures bounded by circular arcs
Sine-triple-angle circle
Trefoil, a circular triangle bulging outward from its three vertices, used in architecture
References
Piecewise-circular curves
Types of triangles | Circular triangle | [
"Mathematics"
] | 616 | [
"Planes (geometry)",
"Euclidean plane geometry",
"Piecewise-circular curves"
] |
48,577,408 | https://en.wikipedia.org/wiki/System-level%20simulation | System-level simulation (SLS) is a collection of practical methods used in the field of systems engineering, in order to simulate, with a computer, the global behavior of large cyber-physical systems.
Cyber-physical systems (CPS) are systems composed of physical entities regulated by computational elements (e.g. electronic controllers).
System-level simulation is mainly characterized by:
a level of detail adapted to the practical simulation of large and complex cyber-physical systems (e.g. plants, aircraft, industrial facilities)
the possibility to use the simulation even if the system is not fully specified, i.e. simulation does not necessarily require a detailed knowledge of each part of the system. This makes it possible to use the simulation for conception or study phases, even at an early stage in this process
These two characteristics have several implications in terms of modeling choices (see further).
System-level simulation has some other characteristics, that it shares with CPS simulation in general:
SLS involves multi-physics models (thermo-fluidic, mechanical, electrical, etc.)
SLS is frequently cross-disciplinary, i.e. it is frequently the result of a collaboration between people with different expertises
SLS is generally built upon a hierarchy of models; an organized modeling is usually necessary to make the whole model envisagable; the conceptual decomposition of the system into sub-systems is related to the notion of system of systems
SLS is mainly about computing the evolution over time of the physical quantities that characterize the system of interest, but other aspects can be added like failure modeling or requirement verification.
Motivations and benefit
The main motivation for SLS is the application of the holistic principle to computer simulation, which would state that simulating the system as a whole tells more than simulating parts of the system separately.
Indeed, simulating the different parts of a complex system separately means neglecting all the possible effects of their mutual interactions.
In many applications, these interactions cannot be ignored because of strong dependencies between the parts. For instance, many CPSs contain feedbacks that cannot be broken without modifying the system behavior. Feedbacks can be found in most modern industrial systems, which generally include one or more control systems. Another example of the benefit of system-level simulation is its accuracy: in the case of a solar thermal system, such a simulation achieved less than 1% cumulative validation error over 6 months of operation.
On the other hand, simply connecting existing simulation tools, each built specifically to simulate one of the system parts, is not possible for large systems since it would lead to unacceptable computation times.
SLS aims at developing new tools and choosing relevant simplifications in order to be able to simulate the whole cyber-physical system.
SLS has many benefits compared to detailed co-simulation of the system sub-parts.
The results of a simulation at the system level are not as accurate as those of simulations at a finer level of detail but, with adapted simplifications, it is possible to simulate at an early stage, even when the system is not fully specified yet. Early bugs or design flaws can then be detected more easily.
SLS is also useful as a common tool for cross-discipline experts, engineers and managers and can consequently enhance the cooperative efforts and communication.
Improving the quality of exchanges reduces the risk of miscommunication or misconception between engineers and managers, which are known to be major sources of design errors in complex system engineering.
More generally SLS must be contemplated for all applications whenever only the simulation of the whole system is meaningful, while the computation times are constrained.
For instance, simulators for plant operators training must imitate the behavior of the whole plant while the simulated time must run faster than real time.
Modeling choices
Cyber-physical systems are hybrid systems, i.e. they exhibit a mix of discrete and continuous dynamics.
The discrete dynamics mostly originates from digital sensing or computational sub-systems (e.g. controllers, computers, signal converters).
The adopted models must consequently be capable of modeling such a hybrid behavior.
It is common in SLS to use 0D (sometimes 1D) equations to model physical phenomena with space variables, instead of 2D or 3D equations. The reason for such a choice is the size of the simulated systems, which is generally too large (i.e. too many elements and/or too large a spatial extent) for the simulation to be computationally tractable. Another reason is that 3D models require the detailed geometry of each part to be modeled. This detailed knowledge might not be available to the modeler, especially if the modelling is done at an early step in the development process.
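For instance, a heated vessel that a 3D tool would resolve as a full temperature field reduces, at system level, to a single thermal capacitance exchanging heat with its surroundings. The sketch below is a minimal 0D example with invented parameter values, integrated with a simple forward-Euler scheme for brevity:

```python
# 0D lumped thermal model: C * dT/dt = P_in - (T - T_amb) / R,
# with C the heat capacity [J/K], R the thermal resistance to the
# ambient [K/W] and P_in the heating power [W]. (Toy values.)
C = 5_000.0      # J/K
R = 0.05         # K/W
T_amb = 293.15   # K
P_in = 2_000.0   # W

T = T_amb        # initial temperature
dt = 1.0         # time step [s]
for _ in range(36_000):                   # simulate 10 hours
    T += dt * (P_in - (T - T_amb) / R) / C

print(f"steady-state temperature: {T - 273.15:.1f} degC")
# Analytically, T_steady = T_amb + P_in * R, i.e. 100 K above ambient.
```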
The complexity of large CPSs makes them difficult to describe and visualize. A representation that can be arranged so that its structure looks like the structure of the original system is a great help in terms of legibility and ease of comprehension. Therefore, acausal modeling is generally preferred to causal block-diagram modeling. Acausal modeling is also preferred because component models can be reused, contrary to models developed as block diagrams.
Domains of application
System-level simulation is used in various domains like:
building engineering for heating, ventilating and air conditioning simulation
automotive engineering
power plants (solar, combined-cycle)
MEMS
naval architecture
aircraft architecture
offshore oil production
Usages
In an early stage of the development cycle, SLS can be used for dimensioning or to test different designs.
For instance, in automotive applications, "engineers use simulation to refine the specification before building a physical test vehicle".
Engineers run simulations with this system-level model to verify performance against requirements and to optimize tunable parameters.
System-level simulation is used to test controllers connected to the simulated system instead of the real one.
If the controller is a hardware controller like an ECU, the method is called hardware-in-the-loop. If the controller is run as a computer program on an ordinary PC, the method is called software-in-the-loop. Software-in-the-loop is faster to deploy and releases the constraint of real time imposed by the use of a hardware controller.
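A software-in-the-loop setup can be reduced to its essentials: the controller is ordinary code that exchanges values with a simulated plant at each sampling instant, free of any real-time constraint. The sketch below is illustrative only; the first-order plant model, the gains and all names are invented.

```python
# Software-in-the-loop: a discrete PI controller regulating a simulated
# first-order plant, tau * dy/dt = -y + u. (All values illustrative.)
tau, dt = 2.0, 0.01
kp, ki = 3.0, 1.5
setpoint = 1.0

y, integral = 0.0, 0.0
for _ in range(int(20.0 / dt)):
    # Controller code (the part that would run on an ECU in a
    # hardware-in-the-loop test).
    error = setpoint - y
    integral += error * dt
    u = kp * error + ki * integral

    # Simulated plant, advanced with forward Euler.
    y += dt * (-y + u) / tau

print(f"output after 20 s: {y:.4f} (setpoint {setpoint})")
```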
SLS is used to build plant models that can be simulated fast enough to be integrated in an operator training simulator or in an MPC controller. Systems with a faster dynamics can also be simulated, like a vehicle in a driving simulator.
Another example of SLS use is to couple the system-level simulation to a CFD simulation.
The system-level model provides the boundary conditions of the fluid domain in the CFD model.
Methods and tools
Specific languages are used for specification and requirement modeling, like SysML or FORM-L. They are not meant to model the system physics, but tools exist that can combine specification models and multi-physics models written in hybrid system modeling languages like Modelica.
If a model is too complex or too large to be simulated in a reasonable time, mathematical techniques can be utilized to simplify the model. For instance, model order reduction gives an approximate model, which has a lower accuracy but can be computed in a shorter time.
Reduced order models can be obtained from finite element models, and have been successfully used for system-level simulation of MEMS.
SLS can benefit from parallel computing architectures.
For instance, existing algorithms to generate code from high-level modeling languages can be adapted to many-core processors like GPUs. Parallel co-simulation is another approach to enable numerical integration speed-ups. In this approach, the global system is partitioned into sub-systems. The subsystems are integrated independently of each other and are synchronized at discrete synchronization points. Data exchange between subsystems occurs only at the synchronization points. This results in a loose coupling between the sub-systems.
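The loose-coupling scheme can be illustrated with two coupled subsystems that are advanced independently over each macro step and exchange their coupling variables only at the synchronization points. The sketch below shows the data flow sequentially; in a real tool the two inner loops would run in parallel, and the models and step sizes here are invented.

```python
# Co-simulation of two coupled first-order subsystems:
#   dx1/dt = -x1 + c2,    dx2/dt = -2*x2 + c1.
# Each subsystem integrates with its own micro step and sees the other
# only through coupling values frozen at the last synchronization point.
H = 0.1          # macro (synchronization) step [s]
h = 0.001        # micro step inside each subsystem [s]
x1, x2 = 1.0, 0.0

for _ in range(int(5.0 / H)):
    c1, c2 = x1, x2                  # data exchange at the sync point
    for _ in range(int(H / h)):      # subsystem 1 over [t, t + H]
        x1 += h * (-x1 + c2)         # uses the frozen input c2
    for _ in range(int(H / h)):      # subsystem 2 over [t, t + H]
        x2 += h * (-2.0 * x2 + c1)   # uses the frozen input c1

print(f"x1 = {x1:.4f}, x2 = {x2:.4f} after 5 s")
```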
Optimization can be used to identify unknown system parameters, i.e. to calibrate a CPS model, matching its performance to actual system operation. In cases when the exact physical equations governing the processes are unknown, approximate empirical equations can be derived, e.g. using multiple linear regression.
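A minimal sketch of the regression variant (the data are synthetic and all names invented): unknown coefficients of an approximate linear model are identified from logged operating data with an ordinary least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Logged operating data: two inputs (say, flow rate and inlet
# temperature) and one measured output, generated here from a hidden
# "true" model plus measurement noise.
n = 200
flow = rng.uniform(0.5, 2.0, n)
t_in = rng.uniform(280.0, 320.0, n)
y = 4.0 * flow + 0.8 * t_in + 12.0 + rng.normal(0.0, 0.5, n)

# Empirical model y ~ a*flow + b*t_in + c, solved by least squares.
X = np.column_stack([flow, t_in, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"identified a={a:.2f}, b={b:.2f}, c={c:.2f}")  # ~ 4.0, 0.8, 12.0
```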
Possible future evolutions
If the simulation can be deployed on a supercomputing architecture, many of the modeling choices that are commonly adopted today (see above) might become obsolete.
For instance, the future supercomputers might be able to "move beyond the loosely coupled, forward-simulation paradigm". In particular, "exascale computing will enable a more holistic treatment of complex problems". To exploit exascale computers, it will however be necessary to rethink the design of today's simulation algorithms.
For embedded system applications, safety considerations will probably lead the evolution of SLS. For instance, unlike synchronous languages, the modeling languages currently used for SLS (see above) are not predictable and may exhibit unexpected behaviors. It is then not possible to use them in a safety-critical context.
The languages should be rigorously formalized first. Some recent languages combine the syntax of synchronous languages for programming discrete components with the syntax of equation-based languages for writing ODEs.
References
External links
International Workshop on Simulation at the System Level: Sim@SL
International Workshop on Equation-based Object-Oriented Modeling Languages and Tools: EOOLT
ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: MODELS
Systems engineering | System-level simulation | [
"Engineering"
] | 1,902 | [
"Systems engineering"
] |
48,578,373 | https://en.wikipedia.org/wiki/Dairy%20salt | Dairy salt is a culinary salt (sodium chloride) product used in the preparation of butter and cheese products that serves to add flavor and act as a food preservative. Dairy salt can vary in terms of quality and purity, with purer varieties being the most desirable for use in foods. Dairy salt has been used since at least the 1890s in England and the United States. In butter preparation, it serves to retain moisture, while in cheeses, it tends to reduce water content and slow the ripening process.
Purity
Quality dairy salts have been described as having favorable solubility properties for use in butter and cheese, and the softness and form of the salt crystals is one of the determining factors of overall quality. The purity of dairy salt is defined by the amount of sodium chloride present in the product, and those with at least 98% sodium chloride have been described as being of sound purity. Highly pure dairy salt has a pure white coloration, a uniformity in grain, and lacks any offensive odors or bitter flavor. Dairy salt products of lower purity may have a bitter flavor and poor solubility. The use of impure dairy salt can have adverse effects upon butter, spoiling its flavor, grain, and preservation. Impure dairy salt can make cheeses bitter, reducing their value. Impurities that may occur in dairy salt include calcium sulphate, calcium chloride, magnesium chloride, and to a lesser extent, sodium sulphate and magnesium sulphate.
History
In the 1890s, many brands of dairy salt were available. In England during this time, the Ashton and Higgins Eureka Salt brands were available (among others), and were used in the United States. U.S. brands of dairy salt in the 1890s included Diamond Crystal Salt and Genesee Salt, among others. Almost all of these brands were very pure, being 98–99% sodium chloride.
In 1899, it was estimated that around 82 million pounds of salt were used in the United States specifically for dairy purposes. At that time, this amount of salt was valued at US $800,000.
In 1914, it was written that American and Danish authorities were in agreement that large-sized, flat flake salt is the best type of dairy salt for use in butter products. During this time, almost all dairy salt was prepared using the process of evaporation. Some techniques included leaving brine in the sun to evaporate, or heating it with fire underneath iron pans, with the salt crystals remaining after the liquid evaporated. Additional methods included the Michigan grainer process and the vacuum pan process.
In the 1920s, calcium sulphate was one of the most common impurities in dairy salt. During this time, the United States Department of Agriculture had a 1.4% maximum allowance for calcium sulphate in dairy salt and table salt.
Uses
Butter
Dairy salt serves to retain moisture and increase the weight of butter products; it adds flavor and acts as a preservative and antimicrobial, which can prevent bacterial contamination. Dairy salt used in butter preparation is sometimes referred to as butter salt and buttersalt.
Cheese
Dairy salt has been used in the preparation of cheeses. Its use can add flavor to cheeses, and it tends to reduce the water content in cheeses, which can influence the ripening process. The use of impure dairy salt can make cheeses bitter in flavor, reducing their value. The use of too much salt in cheese can adversely affect its flavor, resulting in a product with a mealy and dry texture that is slow to ripen.
Dairy salt is used in cheddar cheese, and serves to add flavor and reduce moisture. The reduction of moisture inhibits fermentation and slows the ripening process, which can produce a higher-quality cheese in terms of flavor and consistency.
See also
List of edible salts
Butter salt – a seasoning
References
Bibliography
Further reading
External links
Dairy Salt. Cook's Info.
Butter
Edible salt
Food additives | Dairy salt | [
"Chemistry"
] | 805 | [
"Edible salt",
"Salts"
] |
48,581,239 | https://en.wikipedia.org/wiki/Holus | Holus is a 3D-image simulation product under development by H+Technology. The concept was first developed in 2013, before funding via Kickstarter meant the product could be taken to market. The purpose of Holus is to simulate holographic experiences and is technically different from typical hologram stickers found on credit cards and currency notes.
Holus has been criticized by some commentators as a revamping of Pepper's ghost, a 19th-century optical trick.
History
Holus was developed in late 2013 by a team in Vancouver, British Columbia, Canada.
Shortly before H+ Tech began looking for funding for the device, Holus won a number of awards for its design. These included the Vancouver User Experience Award in the non-profit category, for partnering with Ronald McDonald House to build the Magic Room, and the People's Choice Award for excellence in joy, elegance, and creativity.
Its first major coverage came from a review by the Massachusetts Institute of Technology in early 2015. At the time, the technology was demonstrated to bring animals to life within the 3D glass box. The product was referred to in the review as roughly the "size of a microwave". The concept went on to win two awards at the NextBC awards in Canada in early 2015.
In order to build mass-produced versions of the product, a Kickstarter campaign was launched to take the idea to market. The device uses a technology similar to the optical illusion known as Pepper's ghost, which drew criticism from some during the campaign. The Kickstarter campaign launched in June 2015 and generated twice its target of $40,000 within the first 48 hours.
The technology is similar to that used to display images of the music artists Tupac Shakur and Michael Jackson on stage. Since then the technology has advanced, with a number of startups entering the market. One of these was H+ Technology, which first began working on the technology in early 2013. The aim of the product has remained the same: to produce 3D technology that can be used in the home on a tabletop.
Research and development
Due to the technology being in its infancy, the media has covered the R&D of the product and its potential. Spatial light modulators have been mentioned as one potential development on future versions of Holus. The University of British Columbia and Simon Fraser University have both assisted with the research work of such displays.
References
Human–computer interaction
Human–machine interaction | Holus | [
"Physics",
"Technology",
"Engineering",
"Biology"
] | 496 | [
"Machines",
"Behavior",
"Physical systems",
"Human–machine interaction",
"Design",
"Human behavior",
"Human–computer interaction"
] |
48,583,145 | https://en.wikipedia.org/wiki/Northwest%20Nuclear%20Consortium | The Northwest Nuclear Consortium is an organization based in Washington state which uses a research grade ion collider to teach a class of high school students nuclear engineering principles based on the Department of Energy curriculum. They won the 1st Place at WSU Imagine Tomorrow in 2012. They also won the 1st place at the Washington State Science Fair, and the 2nd place worldwide at ISEF in 2013. In 2014 they won two 2nd place at the Central Sound Regional Science Fair at Bellevue College and they won 1st place twice in category at the Washington State Science & Engineering Fair at Bremerton. In 2015, they won 14 1st-place trophies at the Washington State Science and Engineering Fair, over $250,000 in scholarships at two different colleges and 3 of the 5 available trips to ISEF, where they won 4th place in the world against 72 countries.
References
Physics organizations
Nuclear fusion
Education in Washington (state) | Northwest Nuclear Consortium | [
"Physics",
"Chemistry"
] | 180 | [
"Nuclear fusion",
"Nuclear chemistry stubs",
"Nuclear physics"
] |
56,722,248 | https://en.wikipedia.org/wiki/Friction%20stir%20spot%20welding | Friction stir spot welding is a pressure welding process that operates below the melting point of the workpieces. It is a variant of friction stir welding.
Process description
In friction stir spot welding, individual spot welds are created by pressing a rotating tool with high force onto the top surface of two sheets that overlap each other in the lap joint. The frictional heat and the high pressure plastify the workpiece material, so that the tip of the pin plunges into the joint area between the two sheets and stirs up the oxides. The pin of the tool is plunged into the sheets until the shoulder is in contact with the surface of the top sheet. The shoulder applies a high forging pressure, which bonds the components metallurgically without melting. After a short dwell time, the tool is pulled out of the workpieces again, so that a spot weld can be made about every 5 seconds.
The tool consists of a rotating pin and a shoulder. The pin is the part of the tool that penetrates into the materials. Both the pin and the shoulder may be profiled to push the plasticized material in a particular direction and to efficiently break up and disperse the oxide skins on the adjacent surfaces. After retracting the tool, a hole remains when using one-piece tools, which have already proven very reliable in the automotive and rail vehicle industries. Often the rotating tool is surrounded by a non-rotating clamping ring with which the workpieces are pressed firmly against each other before and during welding by applying a clamping force. The clamping ring can also be used to reduce the pressing out of plasticized material and thus avoid the formation of burrs or beads, to apply inert gas, or to cool the tool via compressed air.
The most important process parameters are the rotational speed and the contact pressure; for a given workpiece material, these determine the plunge feed rate. Modern spot welding guns can be operated either via position control or force control, or via a product-specific programmed force-displacement control. Often, position control is used until a certain displacement is reached, and then the control system is switched to force control during the dwell time. Even during the force-controlled dwell time, certain position values can be specified which should not be undershot or exceeded.
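The combined role of contact pressure and rotational speed can be made concrete with a textbook-style estimate rather than a validated process calculation: for a flat circular shoulder rotating under uniform contact pressure, integrating the friction torque over the contact area gives a heating power of P = (2/3)·π·μ·p·ω·R³. All values below are invented for illustration.

```python
import math

# Rough frictional heating estimate for a rotating shoulder (toy values).
mu = 0.4        # friction coefficient (dimensionless, assumed constant)
p = 50e6        # contact pressure [Pa]
rpm = 2500.0    # spindle speed [1/min]
R = 5e-3        # shoulder radius [m]

omega = 2.0 * math.pi * rpm / 60.0               # angular velocity [rad/s]
P = (2.0 / 3.0) * math.pi * mu * p * omega * R**3

print(f"estimated heat input: {P / 1000:.1f} kW")  # on the order of 1 kW
```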
Spot welding guns
Friction stir spot welding is performed with a spot welding gun, which is mounted on a console, flanged to an articulated robot or manually operated with a balancer to the component.
Process advantages
Friction stir spot welding is characterized by a number of process advantages. Damage to the material caused by extreme heat, such as that produced by laser or arc welding, does not occur. In particular, in the case of artificially aged aluminum alloys, the strength in the weld seam and the heat-affected zone is much higher than with conventional welding methods.
Industrial use
Friction stir spot welds have a high strength, so they are even suitable for parts that are exposed to particularly high loads. In addition to automotive and rail vehicle construction, the aerospace industry is developing the process e.g. for welding cockpit doors for helicopters. In the electrical industry aluminum and copper can be friction stir spot welded. Other applications are in façade and furniture manufacture, where the low heat input, especially in anodized sheets, leads to excellent optical properties.
References
Welding
Friction
Friction stir welding | Friction stir spot welding | [
"Physics",
"Chemistry",
"Engineering"
] | 688 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Welding",
"Surface science",
"Mechanical engineering"
] |
56,731,778 | https://en.wikipedia.org/wiki/Dirac%20matter | The term Dirac matter refers to a class of condensed matter systems which can be effectively described by the Dirac equation. Even though the Dirac equation itself was formulated for fermions, the quasi-particles present within Dirac matter can be of any statistics. As a consequence, Dirac matter can be distinguished in fermionic, bosonic or anyonic Dirac matter. Prominent examples of Dirac matter are graphene and other Dirac semimetals, topological insulators, Weyl semimetals, various high-temperature superconductors with -wave pairing and liquid helium-3. The effective theory of such systems is classified by a specific choice of the Dirac mass, the Dirac velocity, the gamma matrices and the space-time curvature. The universal treatment of the class of Dirac matter in terms of an effective theory leads to a common features with respect to the density of states, the heat capacity and impurity scattering.
Definition
Members of the class of Dirac matter differ significantly in nature. However, all examples of Dirac matter are unified by similarities within the algebraic structure of an effective theory describing them.
General
The general definition of Dirac matter is a condensed matter system where the quasi-particle excitations can be described in curved spacetime by the generalised Dirac equation:

$\left[\hbar v_{\mathrm{D}}\,\gamma^{a} e_{a}{}^{\mu}\,\lambda_{\mu}(p) - m v_{\mathrm{D}}^{2}\right]\psi = 0$

In the above definition $\lambda_{\mu}(p)$ denotes a covariant vector depending on the $d$-dimensional momentum $p$ ($d+1$ space-time dimensions), $e_{a}{}^{\mu}$ is the vierbein describing the curvature of the space, $m$ the quasi-particle mass and $v_{\mathrm{D}}$ the Dirac velocity. Note that since in Dirac matter the Dirac equation gives the effective theory of the quasiparticles, the energy from the mass term is $m v_{\mathrm{D}}^{2}$, not the rest mass of a massive particle. $\gamma^{a}$ refers to a set of Dirac matrices, where the defining relation for the construction is given by the anticommutation relation,

$\left\{\gamma^{a}, \gamma^{b}\right\} = \gamma^{a}\gamma^{b} + \gamma^{b}\gamma^{a} = 2\,\eta^{ab}\,\mathbb{1}_{N}.$
$\eta^{ab}$ is the Minkowski metric with signature (+ - - -) and $\mathbb{1}_{N}$ is the $N$-dimensional unit matrix.
In all equations, implicit summation over repeated indices is used (Einstein convention). Furthermore, $\psi$ is the wavefunction. The unifying feature of all Dirac matter is the matrix structure of the equation describing the quasi-particle excitations.
In the limit where $\lambda_{\mu}(p) \rightarrow -i D_{\mu}$, i.e. the covariant derivative, conventional Dirac matter is obtained. However, this general definition allows the description of matter with higher order dispersion relations and in curved spacetime as long as the effective Hamiltonian exhibits the matrix structure specific to the Dirac equation.
Common (conventional)
The majority of experimental realisations of Dirac matter to date are in the limit $\lambda_{\mu}(p) \rightarrow -i D_{\mu}$, which therefore defines conventional Dirac matter in which the quasiparticles are described by the Dirac equation in curved space-time,

$\left(i\hbar\,\gamma^{a} e_{a}{}^{\mu} D_{\mu} - m v_{\mathrm{D}}\right)\psi = 0.$

Here, $D_{\mu}$ denotes the covariant derivative. As an example, for the flat metric, the energy of a free Dirac particle,

$E(p) = \pm\sqrt{m^{2} v_{\mathrm{D}}^{4} + p^{2} v_{\mathrm{D}}^{2}},$

differs significantly from the classical kinetic energy where energy is proportional to momentum squared:

$E(p) = \frac{p^{2}}{2m}.$

The Dirac velocity $v_{\mathrm{D}}$ gives the gradient of the dispersion at large momenta $p \gg m v_{\mathrm{D}}$, and $m$ is the mass of the particle or object. In the case of massless Dirac matter, such as the fermionic quasiparticles in graphene or Weyl semimetals, the energy-momentum relation is linear,

$E(p) = v_{\mathrm{D}}\,|p|.$
Therefore, conventional Dirac matter includes all systems that have a linear crossing or linear behavior in some region of the energy-momentum relation. They are characterised by features that resemble an 'X', sometimes tilted or skewed and sometimes with a gap between the upper and lower parts (the turning points of which become rounded if the origin of the gap is a mass term).
The general features and some specific examples of conventional Dirac matter are discussed in the following sections.
General properties of Dirac matter
Technological relevance and tuning of Dirac matter
Dirac matter, especially fermionic Dirac matter, has much potential for technological applications. For example, the 2010 Nobel Prize in Physics was awarded to Andre Geim and Konstantin Novoselov "for groundbreaking experiments regarding the material graphene". The official press release of the Royal Swedish Academy of Sciences emphasised the broad range of practical applications anticipated for graphene.
In general, the properties of massless fermionic Dirac matter can be controlled by shifting the chemical potential by means of doping or within a field effect setup. By tuning the chemical potential, it is possible to have a precise control of the number of states present, since the density of states varies in a well-defined way with energy.
Additionally, depending on the specific realization of the Dirac material, it may be possible to introduce a mass term that opens a gap in the spectrum - a band gap. In general, the mass term is the result of breaking a specific symmetry of the system. The size of the band gap can be controlled precisely by controlling the strength of the mass term.
Density of states
The density of states of $d$-dimensional Dirac matter near the Dirac point scales as $N(E) \propto |E|^{d-1}$, where $E$ is the particle energy. The vanishing density of states for quasiparticles in Dirac matter mimics semimetal physics for physical dimension $d > 1$. In two-dimensional systems such as graphene and topological insulators, the density of states gives a V shape, compared with the constant value for massive particles with dispersion $E = p^{2}/2m$.
Experimental measurement of the density of states near the Dirac point by standard techniques such as scanning tunnelling microscopy often differ from the theoretical form due to the effects of disorder and interactions.
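The power-law scaling of the density of states can be checked numerically by sampling a linear dispersion at random momenta and histogramming the energies. This is an illustrative sketch, not a materials calculation; the sample sizes and names are arbitrary.

```python
import numpy as np

# Density of states of a massless Dirac cone E = v|k| in d dimensions:
# histogramming E for uniformly sampled momenta recovers N(E) ~ |E|^(d-1).
v = 1.0
rng = np.random.default_rng(1)
for d in (2, 3):
    k = rng.uniform(-1.0, 1.0, size=(2_000_000, d))
    E = v * np.linalg.norm(k, axis=1)
    hist, edges = np.histogram(E, bins=50, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    # Fit log N(E) = a*log(E) + b; the slope a should be close to d - 1.
    slope = np.polyfit(np.log(centers), np.log(hist), 1)[0]
    print(f"d = {d}: fitted exponent {slope:.2f} (expected {d - 1})")
```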
Specific heat
Specific heat, the heat capacity per unit mass, describes the energy required to change the temperature of a sample. The low-temperature electronic specific heat of Dirac matter is $C(T) \propto T^{d}$, which is different from the linear behaviour $C(T) \propto T$ encountered for normal metals. Therefore, for systems whose physical dimension is greater than 1, the specific heat can provide a clear signature of the underlying Dirac nature of the quasiparticles.
Landau quantization
Landau quantization refers to the quantization of the cyclotron orbits of charged particles in magnetic fields. As a result, the charged particles can only occupy orbits with discrete energy values, called Landau levels. For 2-dimensional systems with a perpendicular magnetic field $B$, the energies of the Landau levels for ordinary matter described by the Schrödinger equation and for Dirac matter are given by

$E_{n}^{\text{Schrödinger}} = \hbar\omega_{c}\left(n + \tfrac{1}{2}\right), \qquad E_{n}^{\text{Dirac}} = \pm v_{\mathrm{D}}\sqrt{2 e \hbar B n}.$

Here, $\omega_{c} = eB/m$ is the cyclotron frequency, which is linearly dependent on the applied magnetic field and the charge of the particle. There are two distinct features between the Landau level quantization for 2D Schrödinger fermions (ordinary matter) and 2D Dirac fermions. First, the energy for Schrödinger fermions is linearly dependent with respect to the integer quantum number $n$, whereas it exhibits a square-root dependence for the Dirac fermions. This key difference plays an important role in the experimental verification of Dirac matter. Furthermore, for $n = 0$ there exists a zero-energy level for Dirac fermions which is independent of the cyclotron frequency and of the applied magnetic field. For example, the existence of the zeroth Landau level gives rise to a quantum Hall effect where the Hall conductance is quantized at half-integer values.
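A quick numeric comparison of the two Landau-level spectra, with all constants set to one for illustration (units with ħ = e = 1, so the cyclotron frequency is B/m):

```python
import math

# Landau levels: E_n = w*(n + 1/2) for Schroedinger fermions versus
# E_n = +/- v*sqrt(2*n*B) for Dirac fermions (units with hbar = e = 1).
B, m, v = 1.0, 1.0, 1.0
w = B / m                      # cyclotron frequency

for n in range(5):
    e_schroedinger = w * (n + 0.5)
    e_dirac = v * math.sqrt(2.0 * n * B)
    print(f"n={n}:  Schroedinger {e_schroedinger:.3f}   Dirac {e_dirac:.3f}")
# Note the field-independent zero mode E_0 = 0 in the Dirac case.
```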
Fermionic Dirac matter
In the context of Fermionic quasiparticles, the Dirac velocity is identical to the Fermi velocity; in bosonic systems, no Fermi velocity exists, so the Dirac velocity is a more general property of such systems.
Graphene
Graphene is a 2-dimensional crystalline allotrope of carbon, where the carbon atoms are arranged in a honeycomb lattice.
Each carbon atom forms $\sigma$-bonds to the three neighboring atoms that lie in the graphene plane at angles of 120°. These bonds are mediated by three of carbon's four electrons, while the fourth electron, which occupies a $p_z$ orbital, mediates an out-of-plane $\pi$-bond that leads to the electronic bands at the Fermi level. The unique transport properties and the semimetallic state of graphene are the result of the delocalized electrons occupying these $p_z$ orbitals.
The semimetallic state corresponds to a linear crossing of energy bands at the $K$ and $K'$ points of graphene's hexagonal Brillouin zone. At these two points, the electronic structure can be effectively described by the Hamiltonian

$H = \hbar v_{\mathrm{D}}\left(\tau\,k_{x}\,\sigma_{x} + k_{y}\,\sigma_{y}\right).$

Here, $\sigma_{x}$ and $\sigma_{y}$ are two of the three Pauli matrices.
The factor $\tau = \pm 1$ indicates whether the Hamiltonian describes excitations centred on the $K$ or $K'$ valley at the corner of the hexagonal Brillouin zone. For graphene, the Dirac velocity is about $10^{6}$ m/s, corresponding to $\hbar v_{\mathrm{D}} \approx 6.6$ eV Å. An energy gap in the dispersion of graphene can be obtained from a low-energy Hamiltonian of the form

$H = \hbar v_{\mathrm{D}}\left(\tau\,k_{x}\,\sigma_{x} + k_{y}\,\sigma_{y}\right) + m v_{\mathrm{D}}^{2}\,\sigma_{z},$

which now contains a mass term $m$. There are several distinct ways of introducing a mass term, and the results have different characteristics. The most practical approach for creating a gap (introducing a mass term) is to break the sublattice symmetry of the lattice where each carbon atom is slightly different to its nearest but identical to its next-nearest neighbours; an effect that may result from substrate effects.
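The effect of the mass term can be verified directly by diagonalizing the 2×2 Hamiltonian on a grid of momenta. The sketch below sets ħ = v_D = 1 and uses an invented mass value; it is illustrative, not a band-structure calculation.

```python
import numpy as np

# Gapped 2D Dirac Hamiltonian H = tau*kx*sx + ky*sy + m*sz (hbar = v = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bands(kx, ky, m=0.1, tau=+1):
    H = tau * kx * sx + ky * sy + m * sz
    return np.linalg.eigvalsh(H)   # two real eigenvalues +-sqrt(k^2 + m^2)

ks = np.linspace(-0.5, 0.5, 201)
upper = np.array([bands(k, 0.0)[1] for k in ks])
lower = np.array([bands(k, 0.0)[0] for k in ks])
print(f"band gap at k = 0: {upper.min() - lower.max():.3f}")  # equals 2*m
```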
Topological insulators
A topological insulator is a material that behaves as an insulator in its interior (bulk) but whose surface contains conducting states. This property represents a non-trivial, symmetry protected topological order. As a consequence, electrons in topological insulators can only move along the surface of the material. In the bulk of a non-interacting topological insulator, the Fermi level is positioned within the gap between the conduction and valence bands. On the surface, there are special states within the bulk energy gap which can be effectively described by a Dirac Hamiltonian:

$H = v_{\mathrm{D}}\,\hat{z}\cdot\left(\boldsymbol{\sigma}\times\mathbf{p}\right),$

where $\hat{z}$ is normal to the surface and $\boldsymbol{\sigma}$ is in the real spin basis. However, if we rotate spin by a unitary operator, $U$, we will end up with the standard notation of the Dirac Hamiltonian, $H = v_{\mathrm{D}}\,\boldsymbol{\sigma}\cdot\mathbf{p}$.
Such Dirac cones emerging on the surface of 3-dimensional crystals were observed in experiment, e.g.: bismuth selenide (Bi₂Se₃), tin telluride (SnTe) and many other materials.
Transition metal dichalcogenides (TMDCs)
The low-energy properties of some semiconducting transition metal dichalcogenide monolayers can be described by a two-dimensional massive (gapped) Dirac Hamiltonian with an additional term describing a strong spin–orbit coupling:

$H = \hbar v_{\mathrm{D}}\left(\tau\,k_{x}\,\sigma_{x} + k_{y}\,\sigma_{y}\right) + \frac{\Delta}{2}\,\sigma_{z} - \lambda\,\tau\,\frac{\sigma_{z}-1}{2}\,s_{z}.$

The spin-orbit coupling $\lambda$ provides a large spin-splitting in the valence band, and $s_{z}$ indicates the spin degree of freedom. As for graphene, $\tau = \pm 1$ gives the valley degree of freedom - whether near the $K$ or $K'$ point of the hexagonal Brillouin zone. Transition metal dichalcogenide monolayers are often discussed in reference to potential applications in valleytronics.
Weyl semimetals
Weyl semimetals, for example tantalum arsenide (TaAs) and related materials, or strontium silicide (SrSi₂), have a Hamiltonian that is very similar to that of graphene, but now includes all three Pauli matrices, and the linear crossings occur in 3D:

$H = v_{\mathrm{D}}\,\boldsymbol{\sigma}\cdot\mathbf{p} = v_{\mathrm{D}}\left(p_{x}\,\sigma_{x} + p_{y}\,\sigma_{y} + p_{z}\,\sigma_{z}\right).$
Since all three Pauli matrices are present, there is no further Pauli matrix that could open a gap in the spectrum and Weyl points are therefore topologically protected. Tilting of the linear cones so the Dirac velocity varies leads to type II Weyl semimetals.
One distinct, experimentally observable feature of Weyl semimetals is that the surface states form Fermi arcs since the Fermi surface does not form a closed loop.
While the Weyl equation was originally derived for odd spatial dimensions, the generalization of a 3D Weyl fermion state in 2D leads to a distinct topological state of matter, labeled as 2D Weyl semimetals. 2D Weyl semimetals are spin-polarized analogues of graphene that promise access to topological properties of Weyl fermions in (2+1)-dim spacetime. In 2024, an intrinsic 2D Weyl semimetal with spin-polarized Weyl cones and topological Fermi strings (1D analog of Fermi arcs) was discovered in epitaxial monolayer bismuthene.
Dirac semimetals
In crystals that are symmetric under inversion and time reversal, electronic energy bands are two-fold degenerate. This degeneracy is referred to as Kramers degeneracy. Therefore, semimetals with linear crossings of two energy bands (two-fold degeneracy) at the Fermi energy exhibit a four-fold degeneracy at the crossing point. The effective Hamiltonian for these states can be written as

$H = v_{\mathrm{D}} \begin{pmatrix} \boldsymbol{\sigma}\cdot\mathbf{p} & 0 \\ 0 & -\boldsymbol{\sigma}\cdot\mathbf{p} \end{pmatrix}.$

This has exactly the matrix structure of Dirac matter. Examples of experimentally realised Dirac semimetals are sodium bismuthide (Na₃Bi) and cadmium arsenide (Cd₃As₂).
Bosonic Dirac matter
While historic interest focussed on fermionic quasiparticles that have potential for technological applications, particularly in electronics, the mathematical structure of the Dirac equation is not restricted to the statistics of the particles. This has led to recent development of the concept of bosonic Dirac matter.
In the case of bosons, there is no Pauli exclusion principle to confine excitations close to the chemical potential (Fermi energy for fermions), so the entire Brillouin zone must be included. At low temperatures, the bosons will collect at the lowest energy point, the $\Gamma$-point of the lower band. Energy must be added to excite the quasiparticles to the vicinity of the linear crossing point.
Several examples of Dirac matter with fermionic quasi-particles occur in systems where there is a hexagonal crystal lattice; so bosonic quasiparticles on an hexagonal lattice are the natural candidates for bosonic Dirac matter. In fact, the underlying symmetry of a crystal structure strongly constrains and protects the emergence of linear band crossings. Typical bosonic quasiparticles in condensed matter are magnons, phonons, polaritons and plasmons.
Existing examples of bosonic Dirac matter include transition metal halides such as CrX₃ (X = Cl, Br, I), where the magnon spectrum exhibits linear crossings, granular superconductors in a honeycomb lattice, and hexagonal arrays of semiconductor microcavities hosting microcavity polaritons with linear crossings. Like graphene, all these systems have an hexagonal lattice structure.
Anyonic Dirac materials
Anyonic Dirac matter is a hypothetical field which is rather unexplored to date. An anyon is a type of quasiparticle that can only occur in two-dimensional systems. Considering bosons and fermions, the interchange of two particles contributes a factor of $+1$ or $-1$ to the wave function. In contrast, the operation of exchanging two identical anyons causes a global phase shift. Anyons are generally classified as abelian or non-abelian, according to whether the elementary excitations of the theory transform under an abelian representation of the braid group or a non-abelian one. Abelian anyons have been detected in connection to the fractional quantum Hall effect. The possible construction of anyonic Dirac matter relies on the symmetry protection of crossings of anyonic energy bands. In comparison to bosons and fermions the situation gets more complicated as translations in space do not necessarily commute. Additionally, for given spatial symmetries, the group structure describing the anyon strongly depends on the specific phase of the anyon interchange. For example, for bosons, a rotation of a particle about $2\pi$, i.e., 360°, will not change its wave function. For fermions, a rotation of a particle about $2\pi$ will contribute a factor of $-1$ to its wave function, whereas a $4\pi$ rotation, i.e., a rotation about 720°, will give the same wave function as before. For anyons, an even higher degree of rotation can be necessary, e.g., $6\pi$, $8\pi$, etc., to leave the wave function invariant.
See also
Dirac cone
Further reading
References
Condensed matter physics | Dirac matter | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,270 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
49,937,269 | https://en.wikipedia.org/wiki/Operations%20engineering | Operations engineering is a branch of engineering that is mainly concerned with the analysis and optimization of operational problems using scientific and mathematical methods. It most frequently has applications in the areas of broadcasting and industrial engineering, as well as in the creative and technology industries.
Operations engineering is considered to be a subdiscipline of Operations Research and Operations Management.
Associations
INFORMS
Society of Operations Engineers
industrial operation
References
See also
Operations research
Systems engineering
Enterprise engineering
Engineering management
Business engineering
Industrial engineering | Operations engineering | [
"Engineering"
] | 91 | [
"Industrial engineering"
] |
49,944,721 | https://en.wikipedia.org/wiki/DevOps%20toolchain | A DevOps toolchain is a set or combination of tools that aid in the delivery, development, and management of software applications throughout the systems development life cycle, as coordinated by an organisation that uses DevOps practices.
Generally, DevOps tools fit into one or more activities, which support specific DevOps initiatives: Plan, Create, Verify, Package, Release, Configure, Monitor, and Version Control.
Toolchains
In software, a toolchain is the set of programming tools that is used to perform a complex software development task or to create a software product, which is typically another computer program or a set of related programs. In general, the tools forming a toolchain are executed consecutively so the output or resulting environment state of each tool becomes the input or starting environment for the next one, but the term is also used when referring to a set of related tools that are not necessarily executed consecutively.
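As a minimal illustration of that consecutive execution model (a hypothetical sketch; the stage names and functions below are invented for the example and do not refer to any specific DevOps product):

```python
# Each "tool" is modeled as a function whose output becomes the
# input of the next tool, mirroring how a toolchain is executed.
from functools import reduce
from typing import Callable

def compile_source(src: str) -> str:
    return f"binary({src})"          # stand-in for a build tool

def run_tests(artifact: str) -> str:
    return f"tested({artifact})"     # stand-in for a test runner

def package(artifact: str) -> str:
    return f"package({artifact})"    # stand-in for a packaging tool

def run_toolchain(start: str, tools: list[Callable[[str], str]]) -> str:
    # Execute the tools consecutively, threading each result through.
    return reduce(lambda state, tool: tool(state), tools, start)

print(run_toolchain("main.c", [compile_source, run_tests, package]))
# -> package(tested(binary(main.c)))
```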
As DevOps is a set of practices that emphasizes the collaboration and communication of both software developers and other information technology (IT) professionals, while automating the process of software delivery and infrastructure changes, its implementation can include the definition of the series of tools used at various stages of the lifecycle; because DevOps is a cultural shift and collaboration between development and operations, there is no one product that can be considered a single DevOps tool. Instead a collection of tools, potentially from a variety of vendors, are used in one or more stages of the lifecycle.
Stages of DevOps
Plan
Plan is composed of two things: "define" and "plan". This activity refers to the business value and application requirements. Specifically "Plan" activities include:
Production metrics, objects and feedback
Requirements
Business metrics
Update release metrics
Release plan, timing and business case
Security policy and requirement
A combination of the IT personnel will be involved in these activities: business application owners, software development, software architects, continual release management, security officers and the organization responsible for managing the production of IT infrastructure.
Create
Create is composed of the building, coding, and configuring of the software development process. The specific activities are:
Design of the software and configuration
Coding including code quality and performance
Software build and build performance
Release candidate
Tools and vendors in this category often overlap with other categories. Because DevOps is about breaking down silos, this is reflected in the activities and product solutions.
Verify
Verify is directly associated with ensuring the quality of the software release, comprising activities designed to ensure that code quality is maintained and that the highest quality is deployed to production. The main activities in this are:
Acceptance testing
Regression testing
Security and vulnerability analysis
Performance
Configuration testing
Solutions for verify related activities generally fall under four main categories: Test automation, Static analysis, Test Lab, and Security.
Package
Package refers to the activities involved once the release is ready for deployment, often also referred to as staging or Preproduction / "preprod". This often includes tasks and activities such as:
Approval/preapprovals
Package configuration
Triggered releases
Release staging and holding
Release
Release-related activities include scheduling, orchestrating, provisioning, and deploying software into production and targeted environments. The specific Release activities include:
Release coordination
Deploying and promoting applications
Fallbacks and recovery
Scheduled/timed releases
Solutions that cover this aspect of the toolchain include application release automation, deployment automation and release management.
Configure
Configure activities fall under the operation side of DevOps. Once software is deployed, there may be additional IT infrastructure provisioning and configuration activities required.
Specific activities include:
Infrastructure storage, database and network provisioning and configuring
Application provision and configuration.
The main types of solutions that facilitate these activities are continuous configuration automation, configuration management, and infrastructure as code tools.
Monitor
Monitoring is an important link in a DevOps toolchain. It allows IT organizations to identify issues with specific releases and to understand the impact on end-users. A summary of Monitor related activities are:
Performance of IT infrastructure
End-user response and experience
Production metrics and statistics
Information from monitoring activities often impacts Plan activities required for changes and for new release cycles.
Version Control
Version Control is an important link in a DevOps toolchain and a component of software configuration management. Version Control is the management of changes to documents, computer programs, large web sites, and other collections of information. A summary of Version Control related activities are:
Non-linear development
Distributed development
Compatibility with existent systems and protocols
Toolkit-based design
Information from Version Control often supports Release activities required for changes and for new release cycles.
See also
Continuous delivery
Continuous integration
Agile software development
References
Software design
Software development process
Programming tools | DevOps toolchain | [
"Engineering"
] | 939 | [
"Design",
"Software design"
] |
49,946,142 | https://en.wikipedia.org/wiki/Site%20reliability%20engineering | Site Reliability Engineering (SRE) is a discipline in the field of Software Engineering and IT infrastructure support that monitors and improves the availability and performance of deployed software systems and large software services (which are expected to deliver reliable response times across events such as new software deployments, hardware failures, and cybersecurity attacks). There is typically a focus on automation and an Infrastructure as Code methodology. SRE uses elements of software engineering, IT infrastructure, web development, and operations to assist with reliability. It is similar to DevOps as they both aim to improve the reliability and availability of deployed software systems.
History
Site Reliability Engineering originated at Google with Benjamin Treynor Sloss, who founded the first SRE team in 2003. The concept expanded within the software development industry, leading various companies to employ site reliability engineers. By March 2016, Google had more than 1,000 site reliability engineers on staff. Dedicated SRE teams are common at larger web development companies. In middle-sized and smaller companies, DevOps teams sometimes perform SRE as well. Organizations that have adopted the concept include Airbnb, Dropbox, IBM, LinkedIn, Netflix, and Wikimedia.
Definition
Site reliability engineers (SREs) are responsible for a combination of system availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. SREs often have backgrounds in software engineering, systems engineering, and/or system administration. The focuses of SRE include automation, system design, and improvements to system resilience.
SRE is considered a specific implementation of DevOps, focusing specifically on building reliable systems, whereas DevOps covers a broader scope of operations. Despite having different focuses, some companies have rebranded their operations teams as SRE teams.
Principles and practices
Common definitions of the practices include (but are not limited to):
Automation of repetitive tasks for cost-effectiveness.
Defining reliability goals to prevent endless effort.
Design of systems with a goal to reduce risks to availability, latency, and efficiency.
Observability, the ability to ask arbitrary questions about a system without having to know ahead of time what to ask.
Common definitions of the principles include (but are not limited to):
Toil management, the implementation of the first principle outlined above.
Defining and measuring reliability goals—SLIs, SLOs, and error budgets (see the sketch after this list).
Non-Abstract Large Scale Systems Design (NALSD) with a focus on reliability.
Designing for and implementing observability.
Defining, testing, and running an incident management process.
Capacity planning.
Change and release management, including CI/CD.
Chaos engineering.
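To make the SLO/error-budget idea concrete, here is a minimal sketch (the numbers and function name are hypothetical, invented for illustration):

```python
# Error budget: the fraction of failure an SLO tolerates.
# For a 99.9% availability SLO, the budget is 0.1% of requests.

def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Return the fraction of the error budget still unspent."""
    allowed_failures = (1.0 - slo) * total   # failures the SLO permits
    if allowed_failures == 0:
        return 0.0
    return 1.0 - failed / allowed_failures

# Example: 1,000,000 requests this month, 400 of them failed,
# against a 99.9% SLO (which permits 1,000 failures).
print(error_budget_remaining(0.999, 1_000_000, 400))  # -> 0.6 (60% left)
```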
Deployment
SRE teams collaborate with other departments within organizations to guide the implementation of the mentioned principles. Below is an overview of common practices:
Kitchen Sink
Kitchen Sink refers to the expansive and often unbounded scope of services and workflows that SRE teams oversee. Unlike traditional roles with clearly defined boundaries, SREs are tasked with various responsibilities, including system performance optimization, incident management, and automation. This approach allows SREs to address multiple challenges, ensuring that systems run efficiently and evolve in response to changing demands and complexities.
Infrastructure
Infrastructure SRE teams focus on maintaining and improving the reliability of systems that support other teams' workflows. While they sometimes collaborate with platform engineering teams, their primary responsibility is ensuring up-time, performance, and efficiency. Platform teams, on the other hand, primarily develop the software and systems used across the organization. While reliability is a goal for both, platform teams prioritize creating and maintaining the tools and services used by internal stakeholders, whereas Infrastructure SRE teams are tasked with ensuring those systems run smoothly and meet reliability standards.
Tools
SRE teams utilize a variety of tools with the aim of measuring, maintaining, and enhancing system reliability. These tools play a role in monitoring performance, identifying issues, and facilitating proactive maintenance. For instance, Nagios Core is commonly employed for system monitoring and alerting, while Prometheus is frequently used for collecting and querying metrics in cloud-native environments.
Product or Application
SRE teams dedicated to specific products or applications are common in large organizations. These teams are responsible for ensuring the reliability, scalability, and performance of key services. In larger companies, it is typical to have multiple SRE teams, each focusing on different products or applications, ensuring that each area receives specialized attention to meet performance and availability targets.
Embedded
In an embedded model, individual SREs or small SRE pairs are integrated within software engineering teams. These SREs collaborate with developers, applying core SRE principles—such as automation, monitoring, and incident response—directly to the software development lifecycle. This approach aims to enhance reliability, performance, and collaboration between SREs and developers.
Consulting
Consulting SRE teams specialize in advising organizations on the implementation of SRE principles and practices. Typically composed of seasoned SREs with a history across various implementations, these teams provide insights and guidance for specific organizational needs. When working directly with clients, these SREs are often referred to as 'Customer Reliability Engineers.'
In large organizations that have adopted SRE, a hybrid model is common. This model includes various implementations, such as multiple Product/Application SRE teams dedicated to addressing the specific reliability needs of different products. An Infrastructure SRE team may collaborate with a Platform engineering group to achieve shared reliability goals for a unified platform that supports all products and applications.
Industry
Since 2014, the USENIX organization has hosted the annual SREcon conference, bringing together site reliability engineers from various industries. This conference is a platform for professionals to share knowledge, explore effective practices, and discuss trends in site reliability engineering.
See also
Chaos engineering
Cloud computing
Data center
Disaster recovery
High availability software
Infrastructure as code
Operations, administration and management
Operations management
Reliability engineering
System administration
Backup site
References
Further reading
External links
Awesome Site Reliability Engineering resources list
How they SRE resources list
SRE Weekly weekly newsletter devoted to SRE
SRE at Google landing page for learning more about SRE in Google
Komodor K8s Reliability learning centre with resources for SREs working with Kubernetes
SRE: What Do You Need To Know To Master This Role? resource list
2003 introductions
Google
Reliability engineering
Software engineering | Site reliability engineering | [
"Technology",
"Engineering"
] | 1,261 | [
"Systems engineering",
"Computer engineering",
"Reliability engineering",
"Software engineering",
"Information technology"
] |
42,285,920 | https://en.wikipedia.org/wiki/Laser%20microprobe%20mass%20spectrometer | A laser microprobe mass spectrometer (LMMS), also laser microprobe mass analyzer (LAMMA), laser ionization mass spectrometer (LIMS), or laser ionization mass analyzer (LIMA) is a mass spectrometer that uses a focused laser for microanalysis. It employs local ionization by a pulsed laser and subsequent mass analysis of the generated ions.
Methods
In laser microprobe mass analysis, a highly focused laser beam is pulsed on a micro sample usually with a volume of approximately 1 microliter. The resulting ions generated by this laser are then analyzed with time-of-flight mass spectrometry to give composition, concentration, and in the case of organic molecules structural information.
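The time-of-flight analysis rests on the standard relation between an ion's mass-to-charge ratio and its flight time (a textbook sketch, assuming an ion of mass $m$ and charge $q = ze$ accelerated through a potential $U$ and drifting over a field-free length $L$):

$$t = \frac{L}{\sqrt{2U}}\sqrt{\frac{m}{q}}, \qquad \text{so} \qquad \frac{m}{q} = \frac{2Ut^2}{L^2},$$

meaning heavier ions arrive later and the recorded arrival-time spectrum maps directly onto a mass spectrum.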
Unlike other methods of microprobe analysis, which involve ions or electrons, the LMMS microprobe fires an ultraviolet laser pulse in order to create ions.
Advantages
LMMS is relatively simple to operate compared to other methods. Furthermore, its strengths include its ability to analyze biological materials to detect certain compounds (such as metals or organic materials).
Sample preparation
LAMMA places particular demands on the sample used. The sample must be small and thin. Ionization of too much material results in a large microplasma whose time spread and ion energy distribution entering the mass spectrometer can result in undesired peak deformation.
See also
Matrix-assisted laser desorption/ionization
Franz Hillenkamp
References
Mass spectrometry
Scientific techniques | Laser microprobe mass spectrometer | [
"Physics",
"Chemistry"
] | 304 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
42,287,126 | https://en.wikipedia.org/wiki/Great%20duoantiprism | In geometry, the great duoantiprism is the only uniform star-duoantiprism solution in 4-dimensional geometry. It has Schläfli symbol or Coxeter diagram , constructed from 10 pentagonal antiprisms, 10 pentagrammic crossed-antiprisms, and 50 tetrahedra.
Its vertices are a subset of those of the small stellated 120-cell.
Construction
The great duoantiprism can be constructed from a nonuniform variant of the 10-10/3 duoprism (a duoprism of a decagon and a decagram) where the decagram's edge length is around 1.618 (golden ratio) times the edge length of the decagon via an alternation process. The decagonal prisms alternate into pentagonal antiprisms, the decagrammic prisms alternate into pentagrammic crossed-antiprisms with new regular tetrahedra created at the deleted vertices. This is the only uniform solution for the p-q duoantiprism aside from the regular 16-cell (as a 2-2 duoantiprism).
Images
Other names
Great duoantiprism (gudap) Jonathan Bowers
References
Regular Polytopes, H. S. M. Coxeter, Dover Publications, Inc., 1973, New York, p. 124.
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
Uniform 4-polytopes | Great duoantiprism | [
"Physics",
"Mathematics"
] | 334 | [
"Uniform polytopes",
"Uniform 4-polytopes",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
42,289,911 | https://en.wikipedia.org/wiki/Paraptosis | Paraptosis (from the Greek παρά para, "related to" and apoptosis) is a type of programmed cell death, morphologically distinct from apoptosis and necrosis. The defining features of paraptosis are cytoplasmic vacuolation, independent of caspase activation and inhibition, and lack of apoptotic morphology. Paraptosis lacks several of the hallmark characteristics of apoptosis, such as membrane blebbing, chromatin condensation, and nuclear fragmentation. Like apoptosis and other types of programmed cell death, the cell is involved in causing its own death, and gene expression is required. This is in contrast to necrosis, which is non-programmed cell death that results from injury to the cell.
Paraptosis has been found in some developmental and neurodegenerative cell deaths, as well as induced by several cancer drugs.
Paraptosis was not recognized as a form of cell death by the Nomenclature Committee on Cell Death in their 2018 review article. The use of this term was explicitly discouraged by the Committee in their 2012 revision.
History
The first reported use of the term "paraptosis" was by Sabina Sperandio et al. in 2000. The group used human insulin-like growth factor 1 receptor (IGF-1R) to stimulate cell death in 293T cells and mouse embryonic fibroblasts, observing distinct differences from other forms of cell death. They coined the term "paraptosis", derived from the Greek preposition para, meaning beside or related to, and apoptosis.
While Sperandio was the first to publish the term paraptosis, this was not the first time cell death with the properties of paraptosis was observed. Terms such as "cytoplasmic" and "type 3 cell death" had previously been used to describe these forms of cell death. These forms are very similar to paraptosis morphologically, and it is possible that some instances of cell death originally described as one of these forms are occurrences of paraptosis.
Morphology
Paraptosis is a form of type III programmed cell death with a unique combination of certain apoptotic and necrotic characteristics. Paraptosis does not demonstrate nuclear fragmentation, formation of apoptotic bodies, or definitive demonstration of chromatin condensation - all seen in apoptosis. Instead, paraptosis displays a somewhat primitive cell death path, comparable to necrosis, including characteristic cytoplasmic vacuole formation and late mitochondrial swelling and clumping. The number and size of vacuoles increases over time. Eventually, the vacuole sizes reach a point of no return and the cell cannot recover.
Similar to apoptosis, staining techniques can be used to identify paraptotic cells by highlighting the translocation of phosphatidylserine from the plasma membrane cytoplasmic (inner) leaflet to the cell surface or outer leaflet.
Paraptosis morphology changes are similar to the morphological changes undergone during the development of the nervous system.
Major structural rearrangement
Almost immediately, major structural rearrangements such as rounded cells, cytoplasmic reorganization, and vacuolation of cells undergoing paraptosis can be seen through light microscopy. There is physical enlargement of the mitochondria and endoplasmic reticulum. This swollen appearance can be attributed to intracellular ion imbalance and eventual osmotic lysis. Once ruptured, particles and substances are released, including: (1) high mobility group B-1 (HMGB1), (2) heat shock proteins, and (3) various other proteases. These substances are "danger signals" and result in inflammation.
Pathway
While certain forms of programmed cell death are known to rely on de novo protein synthesis, paraptotic cell death induced by IGFIR-IC in 293T cells is deterred by actinomycin D and cycloheximide, thus demonstrating a dependence on transcription and translation.
Induction of paraptosis has been determined to be mediated through two positive signal transduction pathways, MAPK and JNK, by using IGF-IR at the receptor level. As such, paraptosis can be prevented by inhibiting specific protein kinases of these pathways.
AIP1 interaction (via its carboxyl-terminal) with endophilins can induce intracellular vacuole formation. AIP1/Alix was determined to be "the first specific inhibitor" of paraptosis.
Paraptosis-like phenotype has also been described in human colorectal cancer cells following overactivation of the non-receptor tyrosine kinase c-Src suggesting potential involvement of Src-signalling in paraptosis.
Differences from other cell death pathways
Cell death induced by IGFIR-IC in 293T cells occurred without associated caspase activity. This is in comparison to apoptosis, in which the proapoptotic protein Bax induces caspase activation and cell death. Additionally, research found that caspase inhibitors (zVAD.fmk, p53, BAF), the X-chromosome-linked inhibitor of apoptosis (XIAP), and Bcl-xL (from the Bcl-2 family) did not prevent cell death in 293T cells when induced by IGFIR-IC. Therefore, paraptosis was concluded to differ from apoptosis (cell death type 1) in being unaffected by inhibitors of apoptosis.
In apoptosis, HMGB1, a chromatin protein, is retained within the nucleus to result in formations of apoptotic bodies, while in paraptosis HMGB1 is released.
The most defining difference observed (as of April 2014) between paraptosis and autophagic cell death (cell death type 2) is paraptosis' lack of the characteristic autophagic vacuoles seen in autophagic cell death. As expected, autophagic cell death inhibitors (for instance, 3-methyladenine) are ineffective at inhibiting paraptosis.
Comparison of cell death types
Proteome profile
Cells experience both morphologic and proteome changes when undergoing paraptosis. Changes to structural, signal transduction, and mitochondrial proteins have all been observed during paraptosis.
Structural proteins
In cells undergoing paraptosis:
α-Tubulin is more concentrated in endosomes and Golgi (light membrane) and is less abundant in the cytosol and the dark membrane (composed of mitochondria and lysosomes).
β-Tubulin overall is decreased in paraptotic cell fractions.
Tropomyosin, similarly to α - tubulin, demonstrates a higher presence in endosomes and golgi, while having a diminished abundance in the cytosol and the dark membrane.
Signal transduction proteins
PEBP, or Raf kinase inhibitor protein (RKIP), is diminished in paraptotic cells. The resultant downregulation of PEBP and/or other kinase inhibitors appears to indicate participation of the MAPK and JNK pathways, as diminished PEBP would allow the levels of MAPK and JNK activity to accumulate sufficiently to induce cell death.
Mitochondria proteins
ATP synthase is composed of multiple subunits and found in the mitochondria. When undergoing paraptosis, higher amounts of the ATP synthase β-subunit were demonstrated in P20.
Mitochondrial staining reveals that rounded paraptotic cells with elevated levels of prohibitin appear to be undergoing reorganization of the mitochondrial network.
Paraptotic cells demonstrated a 3.4-fold increase in prohibitin. Increased levels of prohibitin in conjunction with a paraptotic stimulus can result in cell death that is unable to be inhibited by caspase inhibitors.
Potential medical significance
Cancers
Many anti-cancer substances have been shown to cause paraptosis in a large range of human cancer cells. This includes several compounds derived from natural sources as well as metal complexes. Paraptosis is also an area of interest for cancer research as a way to treat apoptosis-resistant cancers.
Paclitaxel, commonly distributed under the trade name Taxol, is a cancer drug used for the treatment of breast and ovarian cancers. At high concentrations (70 μM), one study showed it to induce a paraptosis-like cell death, and could be an important mechanism for treating apoptosis-resistant cancers.
Researchers have reported finding that γ-Tocotrienol, a form of vitamin E derived from palm oil, induced paraptosis-like cell death in colon cancer cells. Along with inducing paraptosis, γ-tocotrienol also suppressed the Wnt signaling pathway, which plays a role in tumor development. The combination of these two features could provide a novel mechanism for treating colon cancer.
Steamed American ginseng extract has been reported to "potently kill colorectal cancer cells". Specifically, derivatives of protopanaxadiol, Rg3 and Rh2, are the key ginsenosides found in the extract. In the colorectal cancer cell line HCT116, cytosolic vacuolization has been induced by Rh2. Furthermore, Rh2-induced vacuolization was inhibited by the MEK1/2-specific inhibitor U0126 and by cycloheximide, thus confirming two characteristic properties of paraptosis: signaling via MAP kinase and required protein translation. Rh2 also induces increased ROS levels, which activate the NF-κB signaling pathway, while blocking ROS with NAC or catalase prevents the activation of NF-κB signaling and further enhances cell death induced by Rh2. This suggests an antioxidant-enhanced anticancer effect of Rh2.
Honokiol, a compound derived from Magnolia officinalis, can induce paraptosis in human leukemia cells. In the NB4 cell line, paraptosis was the primary method of cell death. In K562 cells, apoptosis was the primary mechanism, with paraptosis occasionally found. Researchers stated that this suggests that leukemia cell death can be induced by multiple pathways.
In one experiment a phosphine copper(I) complex caused paraptosis in colon cancer cells by inducing endoplasmic reticulum stress. Another copper complex, the A0 thioxotriazole copper (II) complex, also caused paraptosis in HT1080 fibrosarcoma cells via endoplasmic reticulum stress and cytoplasmic vacuolization. Along with cytotoxic effects such as an increase in oxidized glutathione and prevention of proteasome activity, A0 prevented the activity of caspase-3, which may inhibit apoptosis and cause the cells to die via paraptosis.
Neurodegenerative cell death
The activity of the mammalian tumor suppressor p53 depends on levels of an isoform of p53, p44. In an experiment with transgenic mice that had an over-expression of p44, hyper-activation of IGF-1R occurred, which in turn led to accelerated aging and death. The mice also experienced neuronal death in areas of the brain related to memory formation and retrieval. This IGF-1R induced neurodegeneration was caused by both paraptosis and autophagic cell death. IGF-1R is an important area of research for neurodegenerative diseases, as defects in IGF-1R signaling, including increased levels of IGF-1R, have been found in the brains of Alzheimer's patients.
Other examples
Paraptosis-like programmed cell death has been observed in both plants and protists. Apoptotic death similar to that found in animals does not occur in plants, due to the cell wall of plant cells preventing phagocytosis. In an experiment with tobacco, bleomycin was used to introduce double strand breaks in the cells' DNA. This then caused cells to undergo programmed cell death with considerable vacuolization and an absence of DNA fragmentation and caspase inhibition, similar to paraptosis. A study with the algae Dunaliella viridis demonstrated the ability of protists to undergo programmed cell death via several types, including paraptosis and apoptosis, depending on different environmental stimuli. A combination of these factors have led to speculation that paraptosis may be an ancestral form of programmed cell death, conserved across different forms of life.
See also
Apoptosis
Autophagy
Cytotoxicity
Necrosis
Parthanatos
Programmed cell death
References
Programmed cell death
Cellular senescence | Paraptosis | [
"Chemistry",
"Biology"
] | 2,629 | [
"Signal transduction",
"Senescence",
"Cellular senescence",
"Cellular processes",
"Programmed cell death"
] |
42,291,886 | https://en.wikipedia.org/wiki/Ligand%20binding%20assay | A ligand binding assay (LBA) is an assay, or an analytic procedure, which relies on the binding of ligand molecules to receptors, antibodies or other macromolecules. A detection method is used to determine the presence and amount of the ligand-receptor complexes formed, and this is usually determined electrochemically or through a fluorescence detection method. This type of analytic test can be used to test for the presence of target molecules in a sample that are known to bind to the receptor.
There are numerous types of ligand binding assays, both radioactive and non-radioactive. Some newer types are called "mix-and-measure" assays because they require fewer steps to complete, for example foregoing the removal of unbound reagents.
Ligand binding assays are used primarily in pharmacology for various demands. Specifically, despite the human body's endogenous receptors, hormones, and other neurotransmitters, pharmacologists utilize assays in order to create drugs that are selective for, or mimic, endogenously found cellular components. On the other hand, such techniques are also available to create receptor antagonists in order to prevent further cascades. Such advances provide researchers with the ability not only to quantify hormones and hormone receptors, but also to contribute important pharmacological information in drug development and treatment plans.
History
Historically, ligand binding assay techniques were used extensively to quantify hormone or hormone receptor concentrations in plasma or in tissue. The ligand-binding assay methodology quantified the concentration of the hormone in the test material by comparing the effects of the test sample to the results of varying amounts of known protein (ligand).
The foundations on which ligand binding assays have been built are a result of Karl Landsteiner, in 1945, and his work on immunization of animals through the production of antibodies for certain proteins. Landsteiner's work demonstrated that immunoassay technology allowed researchers to analyze at the molecular level. The first successful ligand binding assay was reported in 1960 by Rosalyn Sussman Yalow and Solomon Berson. They investigated the binding interaction for insulin and an insulin-specific antibody, in addition to developing the first radioimmunoassay (RIA) for insulin. These discoveries provided precious information regarding both the sensitivity and specificity of protein hormones found within blood-based fluids. Yalow received the 1977 Nobel Prize in Physiology or Medicine as a result of these advancements. Through the development of RIA technology, researchers have been able to move beyond the use of radioactivity, and instead use liquid- and solid-phase, competitive, and immunoradiometric assays. As a direct result of these monumental findings, researchers have continued the advancement of ligand binding assays in many facets of the fields of biology, chemistry, and the like. For instance, the Lois lab at Caltech is using engineered artificial ligands and receptors on neurons to trace information flow in the brain. They are specifically using ligand-induced intramembrane proteolysis to unravel the wiring of the brain in Drosophila and other models. When the artificial ligand on one neuron binds to the receptor on another, GFP expression is activated in the acceptor neuron, demonstrating the usefulness of ligand binding assays in neuroscience and biology.
Applications
Ligand binding assays provide a measure of the interactions that occur between two molecules, such as protein-bindings, as well as the degree of affinity (weak, strong, or no connection) for which the reactants bind together. Essential aspects of binding assays include, but are not limited to, the concentration level of reactants or products (see radioactive section), maintaining the equilibrium constant of reactants throughout the assay, and the reliability and validity of linked reactions. Although binding assays are simple, they fail to provide information on whether or not the compound being tested affects the target's function.
Radioligand assays
Radioligands are used to measure the ligand binding to receptors and should ideally have high affinity, low non-specific binding, high specific activity to detect low receptor densities, and receptor specificity.
Levels of radioactivity for a radioligand (per mole) are referred to as the specific activity (SA), which is measured in Ci/mmol. The actual concentration of a radioligand is determined by the specific stock mix from which the radioligand originated (from the manufacturer). The following equation determines the actual concentration:
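A dimensional sketch of that relation, assuming total radioactivity measured in Ci, the specific activity $SA$ in Ci/mmol, and the assay volume in litres (the exact working formula also depends on counter efficiency and the stock used):

$$[\text{radioligand}] \;(\text{mmol/L}) = \frac{\text{measured radioactivity}\;(\text{Ci})}{SA\;(\text{Ci/mmol}) \times \text{volume}\;(\text{L})}$$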
Saturation binding
Saturation analysis is used in various types of tissues, such as fractions of partially purified plasma from tissue homogenates, cells transfected with cloned receptors, and cells that are either in culture or isolated prior to analysis. Saturation binding analysis can determine receptor affinity and density. It requires that the concentration chosen be determined empirically for a new ligand.
There are two common strategies that are adopted for this type of experiment: Increasing the amount of radioligand added while maintaining both the constant specific activity and constant concentration of radioligand, or decreasing the specific activity of the radioligand due to the addition of an unlabeled ligand.
Scatchard plot
A Scatchard plot (Rosenthal plot) can be used to show radioligand affinity. In this type of plot, the ratio of Bound/Free radioligand is plotted against the Bound radioligand. The slope of the line is equal to the negative reciprocal of the affinity constant (K). The intercept of the line with the X axis is an estimate of Bmax. The Scatchard plot can be standardized against an appropriate reference so that there can be a direct comparison of receptor density in different studies and tissues. This sample plot indicates that the radioligand binds with a single affinity. If the ligand were to have bound to multiple sites that have differing radioligand affinities, then the Scatchard plot would have shown a concave line instead.
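In equation form, the linearisation behind the plot is the standard Scatchard relation (written here with the equilibrium dissociation constant $K_d$; conventions for the affinity constant $K$ vary between sources):

$$\frac{B}{F} = \frac{B_{\max} - B}{K_d},$$

so plotting $B/F$ against $B$ gives a line of slope $-1/K_d$ and an $x$-intercept of $B_{\max}$.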
Nonlinear curve fitting
Nonlinear curve-fitting programs, such as Equilibrium Binding Data Analysis (EBDA) and LIGAND, are used to calculate estimates of binding parameters from saturation and competition-binding experiments. EBDA performs the initial analysis, which converts measured radioactivity into molar concentrations and creates Hill slopes and Scatchard transformations from the data. The analysis made by EBDA can then be used by LIGAND to estimate a specified model for the binding.
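As an illustration of what such programs do, here is a minimal nonlinear fit of simulated saturation-binding data to the one-site model $B = B_{\max}[L]/(K_d + [L])$ (a sketch using SciPy, not the EBDA/LIGAND software itself; all numbers are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Bmax, Kd):
    # Specific binding for a single class of sites.
    return Bmax * L / (Kd + L)

# Simulated radioligand concentrations (nM) and bound counts.
L = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
rng = np.random.default_rng(0)
B = one_site(L, Bmax=500.0, Kd=2.0) + rng.normal(0, 10, L.size)

popt, pcov = curve_fit(one_site, L, B, p0=[400.0, 1.0])
print(f"Bmax ~ {popt[0]:.1f}, Kd ~ {popt[1]:.2f} nM")
```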
Competition binding
Competition binding is used to determine the presence of selectivity for a particular ligand for receptor sub-types, which allows the determination of the density and proportion of each sub-type in the tissue. Competition curves are obtained by plotting specific binding, which is the percentage of the total binding, against the log concentration of the competing ligand. A steep competition curve is usually indicative of binding to a single population of receptors, whereas a shallow curve, or a curve with clear inflection points, is indicative of multiple populations of binding sites.
Non-radioactive binding assays
Despite the different techniques used for non-radioactive assays, they require that ligands exhibit similar binding characteristics to their radioactive equivalents. Thus, results in both non-radioactive and radioactive assays will remain consistent. One of the largest differences between radioactive and non-radioactive ligand assays is in regard to dangers to human health. Radioactive assays are harmful in that they produce radioactive waste, whereas non-radioactive ligand assays use other methods to avoid producing toxic waste. These methods include, but are not limited to, fluorescence polarization (FP), fluorescence resonance energy transfer (FRET), and surface plasmon resonance (SPR). In order to measure the process of ligand-receptor binding, most non-radioactive methods require that labeling avoids interfering with molecular interactions.
Fluorescence polarization
Fluorescence polarization (FP) is synonymous with fluorescence anisotropy. This method measures the change in the rotational speed of a fluorescent-labeled ligand once it is bound to the receptor. Polarized light is used in order to excite the ligand, and the amount of light emitted is measured. Depolarization of the emitted light depends on ligand being bound (e.g., to receptor). If ligand is unbound, it will have a large depolarization (ligand is free to spin rapidly, rotating the light). If the ligand is bound, the combined larger size results in slower rotation and therefore, reduced depolarization. An advantage of this method is that it requires only one labeling step. However, this method is less precise at low nanomolar concentrations.
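The measured quantity is commonly expressed as the polarization $P$, computed from the emission intensities parallel ($I_\parallel$) and perpendicular ($I_\perp$) to the excitation plane (a standard definition, not specific to any one instrument):

$$P = \frac{I_\parallel - I_\perp}{I_\parallel + I_\perp},$$

which rises as binding slows the ligand's rotation.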
Kinetic exclusion assay
Kinetic exclusion assay (KinExA) measures free (unbound) ligand or free receptor present in a mixture of ligand, receptor, and ligand-receptor complex. The measurements allow quantitation of the active ligand concentration and the binding constants (equilibrium, on and off rates) of the interaction.
Fluorescence resonance energy transfer
Fluorescence Resonance Energy Transfer (FRET) utilizes energy transferred between the donor and the acceptor molecules that are in close proximity. FRET uses a fluorescently labeled ligand, as with FP. Energy transfer within FRET begins by exciting the donor. The dipole–dipole interaction between the donor and the acceptor molecule transfers the energy from the donor to the acceptor molecule. If the ligand is bound to the receptor-antibody complex, then the acceptor will emit light. When using FRET, it is critical that there is a distance smaller than 10 nm between the acceptor and donor, in addition to an overlapping absorption spectrum between acceptor and donor, and that the antibody does not interfere or block the ligand binding site.
Surface plasmon resonance
Surface Plasmon Resonance (SPR) does not require labeling of the ligand. Instead, it works by measuring the change in the angle at which the polarized light is reflected from a surface (refractive index). The angle is related to the change in mass or layer of thickness, such as immobilization of a ligand changing the resonance angle, which increases the reflected light. The device for which SPR is derived includes a sensor chip, a flow cell, a light source, a prism, and a fixed angle position detector.
Liquid-phase binding assays
Immunoprecipitation
The liquid-phase ligand binding assay of immunoprecipitation (IP) is a method that is used to purify or enrich a specific protein, or a group of proteins, using an antibody from a complex mixture. The extract of disrupted tissue or cells is mixed with an antibody against the antigen of interest, which produces the antigen-antibody complex. When antigen concentration is low, precipitation of the antigen-antibody complex can take hours or even days, and the small amount of precipitate formed is hard to isolate.
The enzyme-linked immunosorbent assay (ELISA) or Western blotting are two different ways that the purified antigen (or multiple antigens) can be obtained and analyzed. This method involves purifying an antigen through the aid of an attached antibody on a solid (beaded) support, such as agarose resin. The immobilized protein complex can be accomplished either in a single step or successively.
IP can also be used in conjunction with biosynthetic radioisotope labeling. Using this technique combination, one can determine if a specific antigen is synthesized by a tissue or by a cell.
Solid-phase binding assays
Multiwell plate
Multiwell plates are multiple petri dishes incorporated into one container, with the number of individual wells ranging from 6 to over 1536. Multiwell Plate Assays are convenient for handling necessary dosages and replicates. There are a wide range of plate types that have a standardized footprint, supporting equipment, and measurement systems. Electrodes can be integrated into the bottom of the plates to capture information as a result of the binding assays. The binding reagents become immobilized on the electrode surface and then can be analyzed.
The multiwell plates are manufactured to allow researchers to create and manipulate different types of assays (i.e., bioassays, immunoassays, etc.) within each multiwell plate. Due to the variability in multiwell plate formatting, it is not uncommon for artifacts to arise. Artifacts are due to the different environments found within the different wells on the plate, especially near the edges and center of the wells. Such effects are known as well effects, edge effects, and plate effects. This emphasizes the necessity of positioning assay designs correctly both within and between plates.
The use of multiwell plates are common when measuring in vitro biological assay activity, or measuring immunoreactivity through immunoassays.
Artifacts can be avoided by maintaining plate uniformity, applying the same dose of the specific medium in each well, and maintaining atmospheric pressure and temperature in order to reduce humidity.
On-bead binding
On-Bead Ligand Binding assays are isolation methods for basic proteins, DNA/RNA or other biomolecules located in undefined suspensions and can be used in multiple biochromatographic applications. Bioaffine ligands are covalently bound to silica beads with terminal negatively charged silanol groups or polystyrene beads and are used for isolation and purification of basic proteins or adsorption of biomolecules. After binding the separation is performed by centrifugation (density separation) or by magnetic field attraction (for magnetic particles only). The beads can be washed to provide purity of the isolated molecule before dissolving it by ion exchange methods. Direct analyzation methods based on enzymatic/fluorescent detection (e.g. HRP, fluorescent dye) can be used for on-bead determination or quantification of bound biomolecules.
On-column binding
Filter
Filter assays are a solid-phase ligand binding assay that uses filters to measure the affinity between two molecules. In a filter binding assay, the filters are used to trap cell membranes by sucking the medium through them. This rapid method allows filtration and recovery of the bound fraction to be achieved quickly. Washing filters with a buffer removes residual unbound ligands and any other ligands capable of being washed away from the binding sites. The receptor-ligand complexes present while the filter is being washed will not dissociate significantly because they will be completely trapped by the filters. Characteristics of the filter are important for each job being done. A thicker filter is useful to get a more complete recovery of small membrane pieces, but may require a longer wash time. It is recommended to pretreat the filters to help trap negatively charged membrane pieces. Soaking the filter in a solution that gives the filter a positive surface charge will attract the negatively charged membrane fragments.
Real-time cell-binding
In this type of assay the binding of a ligand to cells is followed over time. The obtained signal is proportional to the number of ligands bound to a target structure, often a receptor, on the cell surface. Information about the ligand-target interaction is obtained from the signal change over time and kinetic parameters such as the association rate constant ka, the dissociation rate constant kd and the affinity KD can be calculated. By measuring the interaction directly on cells, no isolation of the target protein is needed, which can otherwise be challenging, especially for some membrane proteins. To ensure that the interaction with the intended target structure is measured appropriate biological controls, such as cells not expressing the target structure, are recommended.
Real-time measurements using label-free or label-based approaches have been used to analyze biomolecular interactions on fixated or on living cells.
The advantage of measuring ligand-receptor interactions in real-time, is that binding equilibrium does not need to be reached for accurate determination of the affinity.
Binding specificity
The effects of a drug are a result of its binding selectivity with macromolecule properties of an organism, or the affinity with which different ligands bind to a substrate. More specifically, the specificity and selectivity of a ligand for its respective receptor provide researchers the opportunity to isolate and produce specific drug effects through the manipulation of ligand concentrations and receptor densities. Hormones and neurotransmitters are essential endogenous regulatory ligands that affect physiological receptors within an organism. Drugs that act upon these receptors are incredibly selective in order to produce required responses from signaling molecules.
Specific binding refers to the binding of a ligand to a receptor, and it is possible that there is more than one specific binding site for one ligand. Nonspecific binding refers to the binding of a ligand to something other than its designated receptor, such as various other receptors or different types of transporters in the cell membrane. For example, various antagonists can bind to multiple types of receptors. In the case of muscarinic antagonists, they can also bind to histamine receptors. Such binding patterns are technically considered specific, as the destination of the ligand is specific to multiple receptors. However, researchers may not be focused on such behaviors compared to other binding factors. Nevertheless, nonspecific binding behavior is very important information to acquire. These estimates are measured by examining how a ligand binds to a receptor while simultaneously reacting to a substitute agent (antagonist) that prevents specific binding from occurring.
Specific binding types to ligand and receptor interactions:
Technological advances
Technologies for ligand binding assays continue to advance, increasing speed and keeping procedures cost-effective while maintaining and improving accuracy and sensitivity. Some technological advances include new binding reagents as alternatives to antibodies, alternative dye solutions and microplate systems, and the development of a method to skip the filtration step, which is required in many ligand binding assay processes.
A prominent signaling molecule in cells is calcium (Ca2+), which can be detected with a Fluo-4 acetoxymethyl (AM) dye. The dye binds to free Ca2+ ions, which in turn slightly increases the fluorescence of Fluo-4 AM. The drawback of the Fluo-4 dye formulation is that a washing step is required to remove extracellular dye, which may produce unwanted background signals. Washing also puts additional stress on the cells and consumes time, which prevents a timely analysis.
Recently, an alternative dye solution and microplate system has been developed called FLIPR® (fluorometric imaging plate reader), which uses a Calcium 3 assay reagent that does not require a washing step. As a result, change in dye fluorescence can be viewed in real time with no delay using an excitatory laser and a charge-coupled device.
Many ligand binding assays require a filtration step to separate bound and unbound ligands before screening. A method called scintillation proximity assay (SPA) has recently been developed, which eliminates this otherwise crucial step. It works through crystal lattice beads, which are coated with ligand-coupling molecules and filled with cerium ions. These give off bursts of light when stimulated by an isotope, which can easily be measured. Ligands are radiolabeled using either 3H or 125I and released into the assay. Since only the radioligands that directly bind to the beads initiate a signal, free ligands do not interfere during the screening process.
Limitations
By nature, assays must be carried out in a controlled environment in vitro, so this method does not provide information about receptor binding in vivo. The results obtained can only verify that a specific ligand fits a receptor, but assays provide no way of knowing the distribution of ligand-binding receptors in an organism.
In vivo ligand binding and receptor distribution can be studied using Positron Emission Tomography (PET), which works by induction of a radionuclide into a ligand, which is then released into the body of a studied organism. The radiolabeled ligands are spatially located by a PET scanner to reveal areas in the organism with high concentrations of receptors.
See also
Immunoassay
References
Biochemistry detection reactions
Chemical bonding | Ligand binding assay | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 4,133 | [
"Biochemistry detection reactions",
"Biochemical reactions",
"Microbiology techniques",
"Condensed matter physics",
"nan",
"Chemical bonding"
] |
32,469,641 | https://en.wikipedia.org/wiki/Maxwell%E2%80%93Bloch%20equations | The Maxwell–Bloch equations, also called the optical Bloch equations, describe the dynamics of a two-state quantum system interacting with the electromagnetic mode of an optical resonator. They are analogous to (but not at all equivalent to) the Bloch equations which describe the motion of the nuclear magnetic moment in an electromagnetic field. The equations can be derived either semiclassically or with the field fully quantized when certain approximations are made.
Semi-classical formulation
The derivation of the semi-classical optical Bloch equations is nearly identical to solving the two-state quantum system (see the discussion there). However, usually one casts these equations into a density matrix form. The system we are dealing with can be described by the wave function:
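In the notation used below, with $|g\rangle$ and $|e\rangle$ the ground and excited states and $c_g$, $c_e$ their amplitudes (a standard two-state parametrisation):

$$|\psi\rangle = c_g|g\rangle + c_e|e\rangle.$$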
The density matrix is
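For that state, ordering the excited-state component first (one common convention; orderings vary):

$$\rho = |\psi\rangle\langle\psi| = \begin{pmatrix} c_e c_e^* & c_e c_g^* \\ c_g c_e^* & c_g c_g^* \end{pmatrix} \equiv \begin{pmatrix} \rho_{ee} & \rho_{eg} \\ \rho_{ge} & \rho_{gg} \end{pmatrix}.$$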
(other conventions are possible; this follows the derivation in Metcalf (1999)). One can now solve the Heisenberg equation of motion, or translate the results from solving the Schrödinger equation into density matrix form. One arrives at the following equations, including spontaneous emission:
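In a frame rotating at the light frequency, the resulting optical Bloch equations take the standard form (reproduced here as a sketch; sign conventions for the detuning vary between texts):

$$\begin{aligned}
\dot\rho_{gg} &= \gamma\rho_{ee} + \frac{i}{2}\left(\Omega^*\tilde\rho_{eg} - \Omega\tilde\rho_{ge}\right) \\
\dot\rho_{ee} &= -\gamma\rho_{ee} + \frac{i}{2}\left(\Omega\tilde\rho_{ge} - \Omega^*\tilde\rho_{eg}\right) \\
\dot{\tilde\rho}_{ge} &= -\left(\frac{\gamma}{2} + i\delta\right)\tilde\rho_{ge} + \frac{i}{2}\Omega^*\left(\rho_{ee} - \rho_{gg}\right) \\
\dot{\tilde\rho}_{eg} &= -\left(\frac{\gamma}{2} - i\delta\right)\tilde\rho_{eg} + \frac{i}{2}\Omega\left(\rho_{gg} - \rho_{ee}\right)
\end{aligned}$$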
In the derivation of these formulae, we define the slowly varying coherences $\tilde\rho_{ge} \equiv \rho_{ge}e^{-i\delta t}$ and $\tilde\rho_{eg} \equiv \rho_{eg}e^{i\delta t}$. It was also explicitly assumed that spontaneous emission is described by an exponential decay of the coefficient $c_e$ with decay constant $\gamma/2$. $\Omega$ is the Rabi frequency, which is

$$\Omega \equiv -\frac{\mathbf{d}_{eg}\cdot\mathbf{E}_0}{\hbar},$$

and $\delta = \omega - \omega_0$ is the detuning, which measures how far the light frequency, $\omega$, is from the transition frequency, $\omega_0$. Here, $\mathbf{d}_{eg}$ is the transition dipole moment for the $e \to g$ transition and $\mathbf{E}_0$ is the vector electric field amplitude including the polarization (in the sense $\mathbf{E}(t) = \mathbf{E}_0\cos\omega t$).
Derivation from cavity quantum electrodynamics
Beginning with the Jaynes–Cummings Hamiltonian under coherent drive
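One standard form of the driven Jaynes–Cummings Hamiltonian is (a sketch; here $\omega_c$ is the cavity frequency, $\omega_a$ the atomic frequency, $g$ the atom–cavity coupling, and $\eta$ and $\omega_l$ the drive strength and frequency; conventions vary between sources):

$$H = \hbar\omega_c a^\dagger a + \hbar\omega_a \sigma^+\sigma^- + \hbar g\left(a^\dagger\sigma^- + a\,\sigma^+\right) + \hbar\eta\left(a\,e^{i\omega_l t} + a^\dagger e^{-i\omega_l t}\right),$$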
where $a$ is the lowering operator for the cavity field, and $\sigma^- = (\sigma_x - i\sigma_y)/2$ is the atomic lowering operator written as a combination of Pauli matrices. The time dependence can be removed by transforming the wavefunction into the frame rotating at the drive frequency, according to $\psi \rightarrow e^{i\omega_l t\left(a^\dagger a + \sigma^+\sigma^-\right)}\psi$, leading to a transformed Hamiltonian
where $\Delta_c = \omega_c - \omega_l$ and $\Delta_a = \omega_a - \omega_l$ are the cavity and atomic detunings from the drive. As it stands now, the Hamiltonian has four terms. The first two are the self energy of the atom (or other two-level system) and field. The third term is an energy-conserving interaction term allowing the cavity and atom to exchange population and coherence. These three terms alone give rise to the Jaynes–Cummings ladder of dressed states, and the associated anharmonicity in the energy spectrum. The last term models coupling between the cavity mode and a classical field, i.e. a laser. The drive strength is set by the power transmitted through the empty two-sided cavity and the cavity linewidth $\kappa$. This brings to light a crucial point concerning the role of dissipation in the operation of a laser or other CQED device; dissipation is the means by which the system (coupled atom/cavity) interacts with its environment. To this end, dissipation is included by framing the problem in terms of the master equation, where the last two terms are in the Lindblad form
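A standard form of this master equation is (a sketch, with $\kappa$ the cavity decay rate and $\gamma$ the atomic spontaneous emission rate; factor-of-two conventions differ between texts):

$$\dot\rho = -\frac{i}{\hbar}\left[H,\rho\right] + \kappa\left(2a\rho a^\dagger - a^\dagger a\rho - \rho a^\dagger a\right) + \frac{\gamma}{2}\left(2\sigma^-\rho\sigma^+ - \sigma^+\sigma^-\rho - \rho\sigma^+\sigma^-\right).$$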
The equations of motion for the expectation values of the operators can be derived from the master equation by the formulas $\langle A\rangle = \mathrm{Tr}(A\rho)$ and $\frac{d}{dt}\langle A\rangle = \mathrm{Tr}(A\dot\rho)$. The equations of motion for $\langle a\rangle$, $\langle\sigma^-\rangle$, and $\langle\sigma_z\rangle$, the cavity field, atomic coherence, and atomic inversion respectively, are
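Under the rotating-frame Hamiltonian and Lindblad terms above, these read (a sketch; signs depend on the conventions chosen):

$$\begin{aligned}
\frac{d}{dt}\langle a\rangle &= -\left(\kappa + i\Delta_c\right)\langle a\rangle - ig\langle\sigma^-\rangle - i\eta \\
\frac{d}{dt}\langle\sigma^-\rangle &= -\left(\frac{\gamma}{2} + i\Delta_a\right)\langle\sigma^-\rangle + ig\langle a\,\sigma_z\rangle \\
\frac{d}{dt}\langle\sigma_z\rangle &= -\gamma\left(\langle\sigma_z\rangle + 1\right) + 2ig\left(\langle a^\dagger\sigma^-\rangle - \langle a\,\sigma^+\rangle\right)
\end{aligned}$$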
At this point, we have produced three of an infinite ladder of coupled equations. As can be seen from the third equation, higher order correlations are necessary. The differential equation for the time evolution of $\langle a\,\sigma_z\rangle$ will contain expectation values of higher order products of operators, thus leading to an infinite set of coupled equations. We heuristically make the approximation that the expectation value of a product of operators is equal to the product of expectation values of the individual operators. This is akin to assuming that the operators are uncorrelated, and is a good approximation in the classical limit. It turns out that the resulting equations give the correct qualitative behavior even in the single excitation regime. Additionally, to simplify the equations, we replace the expectation values $\langle a\rangle$, $\langle\sigma^-\rangle$, and $\langle\sigma_z\rangle$ with the c-number variables $a$, $\sigma$, and $s_z$.
And the Maxwell–Bloch equations can be written in their final form
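(a sketch under the same conventions and factorization as above):

$$\begin{aligned}
\dot a &= -\left(\kappa + i\Delta_c\right)a - ig\sigma - i\eta \\
\dot\sigma &= -\left(\frac{\gamma}{2} + i\Delta_a\right)\sigma + ig\,a\,s_z \\
\dot s_z &= -\gamma\left(s_z + 1\right) + 2ig\left(a^*\sigma - a\,\sigma^*\right)
\end{aligned}$$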
Application: atom–laser interaction
Within the dipole approximation and rotating-wave approximation, the dynamics of the atomic density matrix, when interacting with a laser field, are described by the optical Bloch equations, whose effects can be divided into two parts: the optical dipole force and the scattering force.
See also
Atomic electron transition
Lorenz system
Semiconductor Bloch equations
References
Quantum mechanics | Maxwell–Bloch equations | [
"Physics"
] | 875 | [
"Theoretical physics",
"Quantum mechanics"
] |
40,859,954 | https://en.wikipedia.org/wiki/Bitruncated%2024-cell%20honeycomb | In four-dimensional Euclidean geometry, the bitruncated 24-cell honeycomb is a uniform space-filling honeycomb. It can be seen as a bitruncation of the regular 24-cell honeycomb, constructed from truncated tesseract and bitruncated 24-cell cells.
Alternate names
Bitruncated icositetrachoric tetracomb/honeycomb
Small tetracontaoctachoric tetracomb (baticot)
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 113
o3o3x4x3o - baticot - O113
o3o3x4o3x - sricot - O112
5-polytopes
Honeycombs (geometry)
Bitruncated tilings | Bitruncated 24-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 380 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,861,214 | https://en.wikipedia.org/wiki/Biological%20computation | The concept of biological computation proposes that living organisms perform computations, and that as such, abstract ideas of information and computation may be key to understanding biology. As a field, biological computation can include the study of the systems biology computations performed by biota, the design of algorithms inspired by the computational methods of biota, the design and engineering of manufactured computational devices using synthetic biology components and computer methods for the analysis of biological data, elsewhere called computational biology or bioinformatics.
According to Dominique Chu, Mikhail Prokopenko, and J. Christian J. Ray, "the most important class of natural computers can be found in biological systems that perform computation on multiple levels. From molecular and cellular information processing networks to ecologies, economies and brains, life computes. Despite ubiquitous agreement on this fact going back as far as von Neumann automata and McCulloch–Pitts neural nets, we so far lack principles to understand rigorously how computation is done in living, or active, matter".
Logical circuits can be built with slime moulds. Distributed systems experiments have used them to approximate motorway graphs. The slime mould Physarum polycephalum is able to compute high-quality approximate solutions to the Traveling Salesman Problem, a combinatorial test with exponentially increasing complexity, in linear time. Fungi such as basidiomycetes can also be used to build logical circuits. In a proposed fungal computer, information is represented by spikes of electrical activity, a computation is implemented in a mycelium network, and an interface is realized via fruit bodies.
See also
Wetware
Biological neural network
Artificial neuron
Biological computing
Zero player game
References
Computational biology
Computational fields of study | Biological computation | [
"Chemistry",
"Technology",
"Biology"
] | 345 | [
"Computational fields of study",
"Bioinformatics stubs",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics",
"Computing and society",
"Computational biology",
"Computing stubs"
] |
40,861,730 | https://en.wikipedia.org/wiki/Replicate%20%28biology%29 | In the biological sciences, replicates are experimental units that are treated identically. Replicates are an essential component of experimental design because they provide an estimate of between-sample error. Without replicates, scientists are unable to assess whether observed treatment effects are due to the experimental manipulation or to random error. There are also analytical replicates, in which an exact copy of a sample (such as a cell, organism or molecule) is analyzed using exactly the same procedure. This is done in order to check for analytical error; in the absence of this type of error, replicates should yield the same result. However, analytical replicates are not independent and cannot be used in hypothesis tests because they are still the same sample.
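For illustration, a minimal Python sketch of how replicate measurements yield an error estimate (the readings below are invented values, not data from any study):

    import statistics

    # hypothetical optical-density readings from three biological replicates
    replicates = [0.42, 0.47, 0.44]

    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)         # between-sample variability
    sem = sd / len(replicates) ** 0.5         # standard error of the mean

    print(f"mean={mean:.3f}, sd={sd:.3f}, sem={sem:.3f}")

Without the spread across replicates, sd and sem could not be estimated at all, which is why a single measurement cannot separate treatment effects from random error.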
See also
Self-replication
Fold change
References
Biological processes
Measurement
Scientific method
Tests
Validity (statistics) | Replicate (biology) | [
"Physics",
"Mathematics",
"Biology"
] | 166 | [
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"nan"
] |
40,862,848 | https://en.wikipedia.org/wiki/Linear%20equation%20over%20a%20ring | In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring", or, typically, "Noetherian integral domain".
In the case of a single equation, the problem splits into two parts. First, the ideal membership problem, which consists, given a non-homogeneous equation

    a_1 x_1 + ... + a_k x_k = b

with the a_i and b in a given ring R, to decide if it has a solution with the x_i in R, and, if any, to provide one. This amounts to deciding whether b belongs to the ideal generated by the a_i. The simplest instance of this problem is, for k = 1 and b = 1, to decide whether a_1 is a unit in R.
The syzygy problem consists, given elements a_1, ..., a_k in R, to provide a system of generators of the module of the syzygies of (a_1, ..., a_k), that is, a system of generators of the submodule of those elements (x_1, ..., x_k) in R^k that are solutions of the homogeneous equation

    a_1 x_1 + ... + a_k x_k = 0.

The simplest case, when k = 1, amounts to finding a system of generators of the annihilator of a_1.
Given a solution of the ideal membership problem, one obtains all the solutions by adding to it the elements of the module of syzygies. In other words, all the solutions are provided by the solution of these two partial problems.
In the case of several equations, the same decomposition into subproblems occurs. The first problem becomes the submodule membership problem. The second one is also called the syzygy problem.
A ring such that there are algorithms for the arithmetic operations (addition, subtraction, multiplication) and for the above problems may be called a computable ring, or effective ring. One may also say that linear algebra on the ring is effective.
The article considers the main rings for which linear algebra is effective.
Generalities
To be able to solve the syzygy problem, it is necessary that the module of syzygies is finitely generated, because it is impossible to output an infinite list. Therefore, the problems considered here make sense only for a Noetherian ring, or at least a coherent ring. In fact, this article is restricted to Noetherian integral domains because of the following result.
Given a Noetherian integral domain, if there are algorithms to solve the ideal membership problem and the syzygies problem for a single equation, then one may deduce from them algorithms for the similar problems concerning systems of equations.
This theorem is useful to prove the existence of algorithms. However, in practice, the algorithms for the systems are designed directly.
A field is an effective ring as soon as one has algorithms for addition, subtraction, multiplication, and computation of multiplicative inverses. In fact, solving the submodule membership problem is what is commonly called solving the system, and solving the syzygy problem is the computation of the null space of the matrix of a system of linear equations. The basic algorithm for both problems is Gaussian elimination.
Properties of effective rings
Let R be an effective commutative ring.
There is an algorithm for testing if an element a is a zero divisor: this amounts to solving the linear equation ax = 0.
There is an algorithm for testing if an element a is a unit, and if it is, computing its inverse: this amounts to solving the linear equation ax = 1.
Given an ideal I generated by g_1, ..., g_k,
there is an algorithm for testing if two elements of R have the same image in R/I: testing the equality of the images of a and b amounts to solving the equation a = b + z_1 g_1 + ... + z_k g_k;
linear algebra is effective over R/I: for solving a linear system over R/I, it suffices to write it over R and to add to one side of the ith equation the term z_{i,1} g_1 + ... + z_{i,k} g_k (for i = 1, ...), where the z_{i,j} are new unknowns.
Linear algebra is effective on the polynomial ring R[x_1, ..., x_n] if and only if one has an algorithm that computes an upper bound of the degree of the polynomials that may occur when solving linear systems of equations: if one has solving algorithms, their outputs give the degrees. Conversely, if one knows an upper bound of the degrees occurring in a solution, one may write the unknown polynomials as polynomials with unknown coefficients. Then, as two polynomials are equal if and only if their coefficients are equal, the equations of the problem become linear equations in the coefficients, which can be solved over an effective ring.
Over the integers or a principal ideal domain
There are algorithms to solve all the problems addressed in this article over the integers. In other words, linear algebra is effective over the integers; see Linear Diophantine system for details.
More generally, linear algebra is effective on a principal ideal domain if there are algorithms for addition, subtraction and multiplication, and
Solving equations of the form ax = b, that is, testing whether a is a divisor of b, and, if this is the case, computing the quotient of b by a,
Computing Bézout's identity, that is, given a and b, computing s and t such that sa + tb is a greatest common divisor of a and b.
It is useful to extend to the general case the notion of a unimodular matrix by calling unimodular a square matrix whose determinant is a unit. This means that the determinant is invertible and implies that the unimodular matrices are exactly the invertible matrices such that all entries of the inverse matrix belong to the domain.
The above two algorithms imply that given a and b in the principal ideal domain, there is an algorithm computing a unimodular matrix

    [ s  t ]
    [ u  v ]

such that

    [ s  t ] [ a ]   [ gcd(a, b) ]
    [ u  v ] [ b ] = [     0     ]

(This algorithm is obtained by taking for s and t the coefficients of Bézout's identity, and for u and v the quotients of −b and a by gcd(a, b); this choice implies that the determinant of the square matrix is 1.)
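For illustration, a short Python sketch of this construction over the integers (a principal ideal domain); the function names are ours, and positive inputs are assumed for simplicity:

    def extended_gcd(a, b):
        """Return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t

    def unimodular_matrix(a, b):
        """Rows (s, t), (u, v) of a unimodular matrix sending (a, b) to (gcd(a, b), 0)."""
        g, s, t = extended_gcd(a, b)
        u, v = -b // g, a // g
        assert s * v - t * u == 1       # the determinant is a unit
        assert u * a + v * b == 0       # the second row annihilates (a, b)
        return (s, t), (u, v)

    print(unimodular_matrix(12, 42))    # ((-3, 1), (-7, 2)); gcd(12, 42) = 6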
Having such an algorithm, the Smith normal form of a matrix may be computed exactly as in the integer case, and this suffices to apply the method described in Linear Diophantine system for getting an algorithm for solving every linear system.
The main case where this is commonly used is the case of linear systems over the ring of univariate polynomials over a field. In this case, the extended Euclidean algorithm may be used for computing the above unimodular matrix; see for details.
Over polynomial rings over a field
Linear algebra is effective on a polynomial ring over a field. This was first proved in 1926 by Grete Hermann. The algorithms resulting from Hermann's results are only of historical interest, as their computational complexity is too high to allow effective computer computation.
Proofs that linear algebra is effective on polynomial rings and computer implementations are presently all based on Gröbner basis theory.
References
External links
Commutative algebra
Linear algebra
Equations | Linear equation over a ring | [
"Mathematics"
] | 1,360 | [
"Mathematical objects",
"Equations",
"Fields of abstract algebra",
"Linear algebra",
"Commutative algebra",
"Algebra"
] |
40,865,450 | https://en.wikipedia.org/wiki/Bell%20triangle | In mathematics, the Bell triangle is a triangle of numbers analogous to Pascal's triangle, whose values count partitions of a set in which a given element is the largest singleton. It is named for its close connection to the Bell numbers, which may be found on both sides of the triangle, and which are in turn named after Eric Temple Bell. The Bell triangle has been discovered independently by multiple authors, and for that reason has also been called Aitken's array or the Peirce triangle.
Values
Different sources give the same triangle in different orientations, some flipped from each other. In a format similar to that of Pascal's triangle, and in the order listed in the On-Line Encyclopedia of Integer Sequences (OEIS), its first few rows are:
1
1 2
2 3 5
5 7 10 15
15 20 27 37 52
52 67 87 114 151 203
203 255 322 409 523 674 877
Construction
The Bell triangle may be constructed by placing the number 1 in its first position. After that placement, the leftmost value in each row of the triangle is filled by copying the rightmost value in the previous row. The remaining positions in each row are filled by a rule very similar to that for Pascal's triangle: they are the sum of the two values to the left and upper left of the position.
Thus, after the initial placement of the number 1 in the top row, it is the last position in its row and is copied to the leftmost position in the next row. The third value in the triangle, 2, is the sum of the two previous values above-left and left of it. As the last value in its row, the 2 is copied into the third row, and the process continues in the same way.
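The construction translates directly into a few lines of Python (a sketch; the function name is ours):

    def bell_triangle(rows):
        """Build the first `rows` rows of the Bell triangle by the rule above."""
        triangle = [[1]]
        for _ in range(rows - 1):
            row = [triangle[-1][-1]]           # copy last value of previous row
            for value in triangle[-1]:
                row.append(row[-1] + value)    # left neighbor + upper-left value
            triangle.append(row)
        return triangle

    for row in bell_triangle(5):
        print(row)
    # [1]
    # [1, 2]
    # [2, 3, 5]
    # [5, 7, 10, 15]
    # [15, 20, 27, 37, 52]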
Combinatorial interpretation
The Bell numbers themselves, on the left and right sides of the triangle, count the number of ways of partitioning a finite set into subsets, or equivalently the number of equivalence relations on the set.
Sun and Wu provide the following combinatorial interpretation of each value in the triangle: let An,k denote the value that is k positions from the left in the nth row of the triangle, with the top of the triangle numbered as A1,1. Then An,k counts the number of partitions of the set {1, 2, ..., n + 1} in which the element k + 1 is the only element of its set and each higher-numbered element is in a set of more than one element. That is, k + 1 must be the largest singleton of the partition.
For instance, the number 3 in the middle of the third row of the triangle would be labeled, in their notation, as A3,2, and counts the number of partitions of {1, 2, 3, 4} in which 3 is the largest singleton element. There are three such partitions:
{1}, {2, 4}, {3}
{1, 4}, {2}, {3}
{1, 2, 4}, {3}.
The remaining partitions of these four elements either do not have 3 in a set by itself, or they have a larger singleton set {4}, and in either case are not counted in A3,2.
In the same notation, augment the triangle with another diagonal to the left of its other values, of the numbers
An,0 = 1, 0, 1, 1, 4, 11, 41, 162, ...
counting partitions of the same set of n + 1 items in which only the first item is a singleton. Their augmented triangle is
1
0 1
1 1 2
1 2 3 5
4 5 7 10 15
11 15 20 27 37 52
41 52 67 87 114 151 203
162 203 255 322 409 523 674 877
This triangle may be constructed similarly to the original version of Bell's triangle, but with a different rule for starting each row: the leftmost value in each row is the difference of the rightmost and leftmost values of the previous row.
An alternative but more technical interpretation of the numbers in the same augmented triangle is given by .
Diagonals and row sums
The leftmost and rightmost diagonals of the Bell triangle both contain the sequence 1, 1, 2, 5, 15, 52, ... of the Bell numbers (with the initial element missing in the case of the rightmost diagonal). The next diagonal parallel to the rightmost diagonal gives the sequence of differences of two consecutive Bell numbers, 1, 3, 10, 37, ..., and each subsequent parallel diagonal gives the sequence of differences of previous diagonals.
In this way, as observed, this triangle can be interpreted as implementing the Gregory–Newton interpolation formula, which finds the coefficients of a polynomial from the sequence of its values at consecutive integers by using successive differences. This formula closely resembles a recurrence relation that can be used to define the Bell numbers.
The sums of each row of the triangle, 1, 3, 10, 37, ..., are the same sequence of first differences appearing in the second-from-right diagonal of the triangle. The nth number in this sequence also counts the number of partitions of n elements into subsets, where one of the subsets is distinguished from the others; for instance, there are 10 ways of partitioning three items into subsets and then choosing one of the subsets.
Related constructions
A different triangle of numbers, with the Bell numbers on only one side, and with each number determined as a weighted sum of nearby numbers in the previous row, was described by .
Notes
References
Reprinted with an addendum as "The Tinkly Temple Bells", Chapter 2 of Fractal Music, Hypercards, and more ... Mathematical Recreations from Scientific American, W. H. Freeman, 1992, pp. 24–38.
The triangle is on p. 48.
External links
Triangles of numbers
Charles Sanders Peirce | Bell triangle | [
"Mathematics"
] | 1,243 | [
"Triangles of numbers",
"Combinatorics"
] |
40,865,977 | https://en.wikipedia.org/wiki/List%20of%20heaviest%20spacecraft | The most massive artificial objects to reach space include space stations, various upper stages, and discarded Space Shuttle external tanks. Spacecraft may change mass over time such as by use of propellant.
During the Shuttle–Mir program between 1994 and 1998, the complex formed by the docking of a visiting Space Shuttle with Mir would temporarily become the heaviest artificial object in orbit, with a combined mass of in a 1995 configuration.
Currently the heaviest spacecraft is the International Space Station, nearly double Shuttle–Mir's mass in orbit. Its assembly began with a first launch in 1998; however, it only attained its full mass in the 2020s, owing to its modular nature and gradual additions. Its mass can change significantly depending on which modules are added or removed.
Selected spacecraft (by mass)
The following is a list of spacecraft with a mass greater than , or the top three in any other orbit including a planetary orbit, or the top three of a specific category of vehicle, or the heaviest vehicle from a specific nation. All numbers listed below for satellites use their mass at launch, if not otherwise stated.
Spacecraft design families (by mass)
List of spacecraft families (by mass) with 3 or more flights into space and over 7,000 kg.
See also
List of large reentering space debris
Lists of spacecraft
References
Lists of spacecraft
Heaviest or most massive things | List of heaviest spacecraft | [
"Physics"
] | 268 | [
"Heaviest or most massive things",
"Mass",
"Matter"
] |
40,866,595 | https://en.wikipedia.org/wiki/Steritruncated%2016-cell%20honeycomb | In four-dimensional Euclidean geometry, the steritruncated 16-cell honeycomb is a uniform space-filling honeycomb, with runcinated 24-cell, truncated 16-cell, octahedral prism, 3-6 duoprism, and truncated tetrahedral prism cells.
Alternate names
Celliprismated icositetrachoric tetracomb (capicot)
Great prismatotetracontaoctachoric tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 121 (Wrongly named runcinated icositetrachoric honeycomb)
x3x3o4o3x - capicot - O127
5-polytopes
Honeycombs (geometry)
Truncated tilings | Steritruncated 16-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 380 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
52,400,224 | https://en.wikipedia.org/wiki/Eisengarn | Eisengarn, meaning "iron yarn" in English, is a light-reflecting, strong, waxed-cotton thread. It was invented and manufactured in Germany in the mid-19th century, but owes its modern renown to its use in cloth woven for the tubular-steel chairs designed by Marcel Breuer while he was a teacher at the Bauhaus design school.
The yarn is also known as Glanzgarn ('gloss' or 'glazed' yarn).
Manufacture
Despite the name, there is no iron in Eisengarn. The name refers to its strength and metallic shine. It is made by soaking cotton threads in a starch and paraffin wax solution. The threads are dried and then stretched and polished by steel rollers and brushes. The end result of the process is a lustrous, tear-resistant yarn which is extremely hardwearing.
History
The Eisengarn manufacturing process was invented in the mid-19th century in a factory in Barmen, now part of the city of Wuppertal, east of the river Rhine.
It was used as a sewing thread and for making lace, shoe laces, hat strings, ribbons, lining materials and in the cable industry.
The manufacture of the yarn gave a considerable boost to the textile industry of Barmen and the surrounding region. By 1875, the Wuppertal company Barthels & Feldhoff employed more than 300 people in Eisengarn production.
In 1927 the weaver and textile designer Margaretha Reichardt (1907–1984), then a student at the Bauhaus design school, experimented with and improved the quality of the thread and developed cloth and strapping material made from Eisengarn for use on Marcel Breuer's tubular steel chairs, such as the Wassily Chair. According to the Bauhaus Kooperation, the material "owes its renown" to this use.
Light-weight tubular steel seating was also used in aircraft seating in the 1930s and Reichardt's improved version of Eisengarn was used as a covering for the seats.
A more prosaic use for the strong Eisengarn was, and still is, for making colourful string shopping bags, which were popular in the former East Germany, and are now an Ostalgie item. When the bag is not in use, the nature of the Eisengarn enables it to be compressed so that it takes up very little space.
References
External links
The B5 Chair | Cooper Hewitt, Smithsonian Design Museum
Vitra Design Museum. B35 Chair Marcel Breuer
WDR digit project. Eisengarnfabrikation in Barmen. (Video - 16 min). In German, but video shows eisengarn manufacturing process.
Fibers
Yarn
Woven fabrics
Bauhaus
Textile industry of Germany
Textile engineering | Eisengarn | [
"Physics",
"Engineering"
] | 577 | [
"Applied and interdisciplinary physics",
"Textile engineering"
] |
52,401,503 | https://en.wikipedia.org/wiki/Tavarekere%20Kalliah%20Chandrashekar | Tavarekere Kalliah Chandrashekar (born 1956) is an Indian bioinorganic chemist and a former director of the National Institute for Interdisciplinary Science and Technology, a CSIR subsidiary. He was appointed the director of the National Institute of Science Education and Research, Bhubaneswar where he continues as a senior professor at the department of chemical sciences. He is known for the discovery of novel macrocyclic systems and is an elected fellow of the Indian National Science Academy, National Academy of Sciences, India and the Indian Academy of Sciences. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2001, for his contributions to chemical sciences.
Biography
T. K. Chandrashekar, born on New Year's Day 1956 in the Indian state of Karnataka, did his college studies at the University of Mysore, where he completed his undergraduate and master's courses. His doctoral studies were at the Indian Institute of Science under the guidance of V. Krishnan on bioinorganic chemistry and, after securing a PhD in 1982, he moved to the US where he did his post-doctoral studies at the laboratories of Hans Van Willigen of the University of Massachusetts, Boston (1982–84) and G. T. Babcock at Michigan State University (1984–86). Before returning to India, he did research for one year as an Alexander von Humboldt Fellow with E. Vogel at the University of Cologne. His Indian career started at the Indian Institute of Technology, Kanpur as a lecturer in 1986, where he spent 17 years before joining the National Institute for Interdisciplinary Science and Technology (NIIST), a CSIR subsidiary, as the director in 2003. Six years later, he was appointed the director of the National Institute of Science Education and Research, Bhubaneswar, where he continues as a senior professor at the department of chemical sciences.
Legacy
Chandrashekar is credited with the discovery of expanded porphyrin-based macrocyclic systems which have the ability to bind and transport anions and transition metal cations. Using physico-chemical techniques, he elucidated the electronic structure of those macrocycles. He also worked on photodynamic therapy, photosynthetic intermediates and supramolecular systems for molecular devices. He has published his research in several peer-reviewed articles; the online article repository of the Indian Academy of Sciences has listed 103 of them. He has served as the project investigator for 12 projects of scientific agencies such as the Department of Science and Technology, the Department of Atomic Energy and the Council of Scientific and Industrial Research, and has guided 26 master's and 17 doctoral scholars in their studies. As the director of the National Institute for Interdisciplinary Science and Technology, he is known to have made notable changes in the structure of the organization, creating five independent divisions and establishing high-resolution transmission electron microscopy and 500 MHz nuclear magnetic resonance facilities. He has also been involved with the administration of the Indian National Science Academy and the Indian Academy of Sciences as a member of their councils during 2009–11 and 2013–15 respectively. He also served as a secretary at the DST.
Awards and honors
Chandrashekar received the Bronze Medal of the Chemical Research Society of India in 2000; the society would honour him again with the Silver Medal in 2008. The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 2001. He received the Professor Priyadaranjan Ray Memorial Award of the Indian Chemical Society in 2002, followed by the Chemito Award the next year. Holder of the J. C. Bose National Fellowship in 2006, he was elected by the National Academy of Sciences, India as their fellow in 1996
and he became an elected fellow of the Indian Academy of Sciences in 1996 and the Indian National Science Academy in 2003.
In 2021, on the occasion of his 65th birthday, a special issue of the Journal of Porphyrins and Phthalocyanines was dedicated to honouring T. K. Chandrashekar for his outstanding contributions in the field of porphyrinoids. Sixty scholar-contributors around the world, including Karl M. Kadish and Atsuhiro Osuka, submitted their research papers to be published in this special issue.
See also
Porphyrins
Photodynamic therapy
References
Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science
1956 births
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
Living people
Scientists from Karnataka
University of Mysore alumni
Indian Institute of Science alumni
University of Massachusetts Boston alumni
Michigan State University alumni
University of Cologne alumni
Academic staff of IIT Kanpur
20th-century Indian chemists
Bioinorganic chemists | Tavarekere Kalliah Chandrashekar | [
"Chemistry"
] | 997 | [
"Bioinorganic chemistry",
"Bioinorganic chemists"
] |
52,405,825 | https://en.wikipedia.org/wiki/Graded-commutative%20ring | In algebra, a graded-commutative ring (also called a skew-commutative ring) is a graded ring that is commutative in the graded sense; that is, homogeneous elements x, y satisfy

    xy = (-1)^{|x||y|} yx,

where |x| and |y| denote the degrees of x and y.
A commutative (non-graded) ring, with trivial grading, is a basic example. For example, an exterior algebra is generally not a commutative ring but is a graded-commutative ring.
A cup product on cohomology satisfies the skew-commutative relation; hence, a cohomology ring is graded-commutative. In fact, many examples of graded-commutative rings come from algebraic topology and homological algebra.
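A standard consequence of this sign rule, worked out for illustration: if x is homogeneous of odd degree, then

    x^2 = (-1)^{|x||x|} x^2 = -x^2, \qquad\text{so}\qquad 2x^2 = 0,

hence odd-degree elements square to zero whenever 2 is invertible (or the ring has no 2-torsion), as happens for the degree-one generators of an exterior algebra.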
References
David Eisenbud, Commutative Algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol 150, Springer-Verlag, New York, 1995.
See also
DG algebra
graded-symmetric algebra
alternating algebra
supercommutative algebra
Abstract algebra | Graded-commutative ring | [
"Mathematics"
] | 223 | [
"Abstract algebra",
"Algebra"
] |
52,406,984 | https://en.wikipedia.org/wiki/Milos%20Novotny | Milos Vratislav Novotny (born 19 April 1942) is an American chemist, currently the Distinguished Professor Emeritus and Director of the Novotny Glycoscience Laboratory and the Institute for Pheromone Research at Indiana University, and also a published author. Milos Novotny received his Bachelor of Science from the University of Brno, Czechoslovakia in 1962. In 1965, Novotny received his Ph.D. at the University of Brno. Novotny also holds honorary doctorates from Uppsala University, Masaryk University and Charles University, and he has been a major figure in analytical separation methods. Novotny was recognized for the development of PAGE Polyacrylamide Gel-filled Capillaries for Capillary Electrophoresis in 1993. In his years of work dedicated to analytical chemistry he has earned a reputation for being especially innovative in the field and has contributed a great deal to several analytical separation methods. Most notably, Milos has worked a great deal with microcolumn separation techniques of liquid chromatography, supercritical fluid chromatography, and capillary electrophoresis. Additionally, he is highly acclaimed for his research in proteomics and glycoanalysis and for identifying the first mammalian pheromones.
Awards
In 1986, Novotny was given the Award in Chromatography from the American Chemical Society. Novotny received the ANACHEM award in 1992. This award is given to outstanding analytical chemists for teaching, research, administration or other activities which have advanced the field.
Novotny was also selected as the LCGC Lifetime Achievement award recipient in 2019.
Awards received in the 1980s
Chairman, Gordon Research Conference on Analytical Chemistry; James B. Himes Merit Award of the Chicago Chromatography Discussion Group; M.S. Tswett Award and Medal in Chromatography; American Chemical Society Award in Chromatography; ISCO Award in Biochemical Instrumentation; Eastern Analytical Symposium Award in Chromatography; Chemical Instrumentation Award of the American Chemical Society; Distinguished Faculty Research Lecture, Indiana University.
Awards received in the 1990s
Keene P. Dimick Award in Chromatography, Third International Symposium on Supercritical Fluid Chromatography Award for Pioneering Work in the Development of SFC; Marcel J.E. Golay Award and Medal, International Symposium on Capillary Chromatography; American Chemical Society Award in Separation Science and Technology; American Chemical Society Exceptional Achievement Award as a Capillary Gas Chromatography Short Course Instructor; R&D 100 Award for technologically significant new product: "PAGE Polyacrylamide Gel-filled Capillaries for Capillary Electrophoresis"; Jan E. Purkynje Memorial Medal of the Czech Academy of Sciences; R&D Magazine Scientist of the Year Award; M.S. Tswett Memorial Medal of the Russian Academy of Sciences; A.J.P. Martin Gold Medal of the Chromatographic Society of Great Britain; Theophilus Redwood Award, The Royal Society of Chemistry, Great Britain; Distinguished Teaching and Mentoring Award of the University Graduate School, Indiana University; Elected as a Foreign Member of the Royal Society of Sciences (Sweden); College of Arts & Sciences Distinguished Faculty Award, Indiana University.
Awards received in the 2000s
COLACRO (Congreso Latinoamericano de Cromatografia) Merit Medal; Pittsburgh Analytical Chemistry Award; Eastern Analytical Symposium Award for Outstanding Achievements in the Fields of Analytical Chemistry; Tracy M. Sonneborn Award for Outstanding Research and Teaching, Indiana University; Dal Nogare Award in Chromatography; CaSSS (California Separation Science Society) Award for Excellence in Separation Science; Honorary Member of the Slovak Pharmaceutical Society; Foreign Member of the Learned Society of the Czech Republic (Czech Academy of Sciences); American Chemical Society Award in Analytical Chemistry; Jan Weber Prize and Medal, Slovak Pharmaceutical Society, Slovakia; Ralph N. Adams Award in Bioanalytical Chemistry.
Awards received in the 2010s
Honorary Membership of the Czech Society for Mass Spectrometry; Lifetime Achievement Award in Chromatography by the LC-GC Magazine, Europe; Giorgio Nota Award, Italian Chemical Society; Heyrovsky Medal in Chemical Sciences, Prague, Czech Republic.
Faculty Positions
On the faculty of Indiana University, Bloomington, since 1971. 1978 – Professor of Chemistry. 1980 – Visiting Scientist, Department of Immunogenetics, Max Planck Institute for Biology, Tübingen, Germany. 1988 – James H. Rudy Professor of Chemistry. 1999 – Distinguished Professor of Chemistry. 1999 – Director of the Institute for Pheromone Research. 2000–2015 – Lilly Chemistry Alumni Chair. 2004 – Adjunct Professor of Medicine, Indiana University School of Medicine. 2004–2009 – Director of the National Center for Glycomics and Glycoproteomics. 2010 – Director of the Novotny Glycoscience Laboratory. 2011 – Distinguished Professor Emeritus of Chemistry.
Other Activities
1972–1975 – Associate Member, Viking Lander Science Team, NASA
1979–1982 – Member, Committee on Response Strategies to Unusual Chemical Hazards, Assembly of Life Sciences, National Research Council
1982 – U.S. Coordinator, U.S.-Japan Joint Seminar on “Microcolumn Separation Methods and their Ancillary Techniques,” Honolulu, Hawaii
1980–1984 – Member, Advisory Committee to the Analytical Chemistry Division, Oak Ridge National Laboratory
1986 – Instructor, ACS Short Course on Supercritical Fluid Chromatography
1988, 1990 – Organizing Committee, International Symposium, “Microcolumn Separation Methods,” Bloomington, IN and Aronberg, Sweden
1988, 1991 – Scientific Committee, International Symposium, “HPLC 88” and “HPLC 92”
1977–Pres. – Instructor, ACS Short Course on Capillary Gas Chromatography
1978–Pres. – ACS Lecture Tour Speaker
1990–Pres. – Scientific Committee, International Symposia on Capillary Chromatography
1994 – Scientific Committee, Glycobiology: Analytical Methods
2003 – Member of the Center for the Integrative Study of Animal Behavior, Indiana University
2004 – Member of the Indiana University Cancer Center, IU School of Medicine
Publications
Separation of amino acid homopolymers by capillary gel electrophoresis.
Retention indices for programmed-temperature capillary-column gas chromatography of polycyclic aromatic hydrocarbons.
Ultrasensitive Pheromone Detection by mammalian vomeronasal neurons.
Electrophoretic separations of proteins in capillaries with hydrolytically-stable surface structures.
Comparison of the methods for profiling glycoprotein glycans—HUPO Human Disease Glycomics/Proteome Initiative multi-institutional study.
Structural Investigations of Glycoconjugates at High Sensitivity.
References
Indiana University faculty
21st-century American chemists
Analytical chemists
Czech chemists
University of Houston faculty
Academic staff of the Karolinska Institute
1942 births
Living people | Milos Novotny | [
"Chemistry"
] | 1,442 | [
"Analytical chemists"
] |
52,408,383 | https://en.wikipedia.org/wiki/Endogenosymbiosis | Endogenosymbiosis is an evolutionary process, proposed by the evolutionary and environmental biologist Roberto Cazzolla Gatti, in which "gene carriers" (viruses, retroviruses and bacteriophages) and symbiotic prokaryotic cells (bacteria or archaea) could share parts or all of their genomes in an endogenous symbiotic relationship with their hosts.
Context
The related process of symbiogenesis or endosymbiosis was proposed by Lynn Margulis in 1967. She argued that the internal symbiosis of bacteria-like organisms had formed organelles like chloroplasts and mitochondria. She proposed that this had created the eukaryotes, and thus driven the expansion of life on Earth. She had argued that this process of symbiotic collaboration had run alongside the classical Darwinian cycle of mutation, natural selection and adaptation.
Genetic symbiosis from parasites
Roberto Cazzolla Gatti, Ph.D., associate professor at Tomsk State University (Russia), argued in his hypothesis that "the main likely cause of the evolution of sexual reproduction, the parasitism, also represents the origin of biodiversity".
In other terms, this theory suggests that sexual reproduction acts as a conservative system against the inclusion of new genetic variations into cells' DNA (supported by the DNA repair systems) and that, instead, the evolution of species can take place only when this preservative system fails to counter the inclusion, within the host genome, of exogenous pieces of DNA (and RNA) coming from obligate "parasitic" elements (viruses and phages) that establish a symbiosis with their hosts.
"As two parallel evolutionary lines – Cazzolla Gatti wrote in his original paper – sexual reproduction seems to preserve what the endogenosymbiosis moves to diversify. Following the former process, the species can adapt slowly and indefinitely to the external factors, adjusting themselves, but not 'creating' novelty. The latter process, instead, leads to the speciation due to sudden changes in genes sequences. Not only organelles can be symbiotic with other cells, as suggested Lynn Margulis, but entire pieces of genetic material coming from symbiotic parasites, can be included in the host DNA, changing the gene expression and addressing the speciation process".
This idea challenges the canonical natural selection models based on the gradualism of the mutation-adaptation pattern, providing more support for the punctuated equilibrium theory proposed by Stephen Jay Gould and Niles Eldredge.
Evidence
Two independent studies provide support for the hypothesis. Jamie E. Henzy and Welkin E. Johnson demonstrated that the complex evolutionary history of the IFIT (Interferon Induced proteins with Tetratricopeptide repeats) family of antiviral genes has been shaped by continuous interactions between mammalian hosts and their many viruses.
David Enard and colleagues estimated that viruses have driven close to 30% of all adaptive amino acid changes in the part of the human proteome conserved within mammals. Their results suggest that viruses are one of the most dominant drivers of evolutionary change across mammalian and human proteomes.
Previously, it was estimated that about 7–8% of the entire human genome consists of about 100,000 pieces of DNA that came from endogenous retroviruses. This may be an underestimate.
In 2016 the biologists Sarah R. Bordenstein and Seth R. Bordenstein reported that genes are frequently transferred between hosts and parasites. Eukaryotic genes are often co-opted by viruses, and bacterial genes are commonly found in bacteriophages. The presence of bacteriophages in symbiotic bacteria that obligately reside in eukaryotes may promote eukaryotic DNA transfers to bacteriophages.
References
Symbiosis
Mutualism (biology)
Endosymbiotic events
Evolution | Endogenosymbiosis | [
"Biology"
] | 804 | [
"Behavior",
"Symbiosis",
"Biological interactions",
"Endosymbiotic events",
"Mutualism (biology)"
] |
39,531,188 | https://en.wikipedia.org/wiki/MYRRHA | MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) is a design project for a nuclear reactor coupled to a proton accelerator. This makes it an accelerator-driven system (ADS). MYRRHA will be a lead-bismuth-cooled fast reactor with two possible configurations: sub-critical or critical.
The project is managed by SCK CEN, the Belgian Centre for Nuclear Research. Its design will be adapted as a function of the experience gained from a first research project with a small proton accelerator and a lead-bismuth eutectic target: GUINEVERE.
MYRRHA is anticipated to be fully constructed by 2036, with a first phase (a 100 MeV linac accelerator) expected to be completed in 2026 if successfully demonstrated.
Concept
In a traditional power-generating nuclear reactor, the nuclear fuel is arranged in such a way that the two or three neutrons released from a fission event will induce one other atom in the fuel to fission. This is known as criticality. To maintain this precise balance, a number of control systems are used like control rods and neutron poisons. In most such designs, a loss of control can lead to a runaway reaction, heating the fuel until it melts. Various feedback systems and active controls prevent this.
The concept behind a number of advanced reactor designs is to arrange the fuel so it is always below criticality. Under normal conditions, this would lead to it rapidly "turning off" as the neutron counts continue to fall. In order to produce power, some other source of neutrons has to be provided. In most designs, these are provided from a second much smaller reactor running on a neutron-rich fuel, like highly enriched uranium. This is the basis for the fast breeder reactor and similar designs. In order for this to work, the reactor generally has to use a coolant that has a low neutron cross-section; water would slow the neutrons down too much. Typical coolants for fast reactors are sodium or lead-bismuth.
In the accelerator-driven reactor, these extra neutrons are instead provided by a particle accelerator. This produces protons which are shot into a target, normally a heavy metal. The energy of the protons causes neutrons to be knocked off the atoms in the target, a process known as neutron spallation. These neutrons are then fed into the reactor, making up the number needed to bring the reactor back to criticality. The MYRRHA design uses the lead-bismuth coolant as the target, shooting the protons directly into the reactor core.
Components
MYRRHA is a research reactor project presently under development, aiming to demonstrate the feasibility of the ADS and lead-cooled fast reactor concepts, with various research applications from spent-fuel irradiation to material irradiation testing. A linear accelerator is under development to provide a beam of fast protons that hits a spallation target, producing neutrons. These neutrons are necessary to keep the nuclear reactor running when operated in sub-critical mode, but to increase its versatility the reactor is also designed to operate in critical mode with fast-neutron and thermal-neutron zones.
Accelerator
The accelerator will accelerate protons to an energy of 600 MeV with a beam current of up to 4 mA. In subcritical mode, if the accelerator stops, the reactor power drops immediately. To avoid thermal cycles the accelerator needs to be extremely reliable. MYRRHA aims at no more than 10 outages longer than three seconds per 100 days. A first prototype stage of the accelerator was started in 2020.
The accelerator and two targets are called Minerva, and construction was started in 2024.
ISOL@MYRRHA
The high reliability and intense beam current required for operating such a machine make the proton accelerator potentially interesting for online isotope separation. Phase I of the project therefore also includes the design and feasibility study of ISOL@MYRRHA to investigate exotic isotopes.
Spallation target
The protons collide with a liquid lead-bismuth eutectic. The high atomic number of the target leads to a large number of neutrons via spallation.
Reactor
The pool-type, or loop-type, reactor will be cooled by a lead-bismuth eutectic. Separated into a fast-neutron zone and a thermal-neutron zone, the reactor is planned to use a mixed oxide of uranium and plutonium (with 35 wt. % ).
Two operating modes are foreseen: critical and sub-critical.
In sub-critical mode, the reactor is planned to run with a criticality (effective multiplication factor k_eff) under 0.95: on average, a fission reaction will induce less than one additional fission reaction, so the reactor does not have enough fissile material to sustain a chain reaction on its own and relies on the neutrons from the spallation target. As an additional safety feature, the reactor can be passively cooled when the accelerator is switched off.
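As a back-of-the-envelope illustration (standard source-multiplication reasoning, not a MYRRHA design formula): each source neutron injected by the spallation target initiates, on average, a converging chain of 1 + k + k^2 + ... fission generations, so the neutron population is amplified by the finite factor

    M = \frac{1}{1 - k_{\mathrm{eff}}}, \qquad k_{\mathrm{eff}} = 0.95 \;\Rightarrow\; M = 20.

Because M is finite, the fission power tracks the external source: switching the accelerator off removes the source term and the power drops immediately.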
See also
ASTRID
Fast breeder reactor
Fast neutron reactor
Gas-cooled fast reactor
Generation IV reactor
Integral Fast Reactor
Sodium-cooled fast reactor
References
External links
Liquid metal fast reactors
Neutron sources
Nuclear reactors
Nuclear research reactors
Nuclear technology
Pressure vessels | MYRRHA | [
"Physics",
"Chemistry",
"Engineering"
] | 1,039 | [
"Structural engineering",
"Chemical equipment",
"Nuclear technology",
"Physical systems",
"Hydraulics",
"Nuclear physics",
"Pressure vessels"
] |
39,531,450 | https://en.wikipedia.org/wiki/MXD3 | MAX dimerization protein 3 is a protein that in humans is encoded by the MXD3 gene located on Chromosome 5.
MXD3 is a basic helix-loop-helix protein belonging to a subfamily of MAX-interacting proteins. This protein competes with MYC for binding to MAX to form a sequence-specific DNA-binding complex. MXD3 is a transcriptional repressor that is specifically expressed during S phase of the cell cycle. The protein is implicated in both normal neural development and in the development of brain cancer. In medulloblastoma cells, MXD3 binds E-box sequences, leading to increased cell proliferation at moderate MXD3 levels but increased cell death and apoptosis at higher expression levels.
References
Transcription factors | MXD3 | [
"Chemistry",
"Biology"
] | 153 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
39,536,536 | https://en.wikipedia.org/wiki/Michael%20Krivelevich | Michael Krivelevich (Hebrew: 'מיכאל קריבלביץ; born January 30, 1966) is a professor with the School of Mathematical Sciences of Tel Aviv University, Israel.
Krivelevich received his PhD from Tel Aviv University in 1997 under the supervision of Noga Alon. He has published extensively in combinatorics and adjacent fields and specializes in extremal and probabilistic combinatorics.
He serves as an editor-in-chief of the Journal of Combinatorial Theory (Series B) and is on the editorial board of several other journals in the field.
Awards and honors
In 2007, Krivelevich and Alan Frieze won the Pazy Memorial Award for research into probabilistic reasoning in combinatorics.
In 2014, Krivelevich gave an invited address in the Combinatorics section at the International Congress of Mathematicians.
He was elected as a member of the 2017 class of Fellows of the American Mathematical Society "for contributions to extremal and probabilistic combinatorics".
References
External links
Michael Krivelevich at the Mathematics Genealogy Project
20th-century Israeli mathematicians
21st-century Israeli mathematicians
Academic staff of Tel Aviv University
Combinatorialists
1966 births
Living people
Fellows of the American Mathematical Society | Michael Krivelevich | [
"Mathematics"
] | 265 | [
"Combinatorialists",
"Combinatorics"
] |
45,574,002 | https://en.wikipedia.org/wiki/Wedge%20bonding | Wedge bonding is a kind of wire bonding which relies on the application of ultrasonic power and force to form bonds. It is a popular method and is commonly used in the semiconductor industry. Wedge bonding is directional, so the bonding head rotates to accommodate the different angles for bonding. Due to this rotation, wedge bonding is slower than ball bonding. The advantage of wedge bonding is that a finer pitch is possible. Wedge bonding also accommodates the use of metal ribbon instead of wire for bonding.
Thermocompression Bonding (TCB): The technique uses heat and pressure to create a bond between a thin wire (typically aluminum or gold) and the bonding pads. As heat softens the wire, pressure is applied to form the bond.
Ultrasonic Wedge Bonding (UWB): Ultrasonic energy is used in conjunction with pressure to create a bond between the wire and the bonding pads. This method is similar to thermosonic ball bonding but uses a wedge tool instead of a ball.
Non-Ultrasonic Wedge Bonding: In this variation, wedge bonding is created without the use of ultrasonic energy.
References
An In-depth Look at Wire Bonding Options (pcb-technologies.com). 2024
Processes > Wire Bonding > Wedge Bonding (palomartechnologies.com). 2024
Semiconductor device fabrication
Packaging (microfabrication)
Articles containing video clips | Wedge bonding | [
"Materials_science"
] | 281 | [
"Semiconductor device fabrication",
"Packaging (microfabrication)",
"Microtechnology"
] |
43,723,326 | https://en.wikipedia.org/wiki/Sacubitril | Sacubitril (INN) is an antihypertensive drug used in combination with valsartan. The combination drug sacubitril/valsartan, known during trials as LCZ696 and marketed under the brand name Entresto, is a treatment for heart failure. It was approved under the FDA's priority review process for use in heart failure on July 7, 2015.
Side effects
Sacubitril increases levels of bradykinin, which is responsible for the angioedema sometimes seen in patients taking the medication. This is why the medication is not recommended for patients with a history of angioedema with the use of ACE inhibitors.
Mechanism of action
Sacubitril is a prodrug that is activated to sacubitrilat (LBQ657) by de-ethylation via esterases. Sacubitrilat inhibits the enzyme neprilysin, which is responsible for the degradation of atrial and brain natriuretic peptide, two blood pressure–lowering peptides that work mainly by reducing blood volume. In addition, neprilysin degrades a variety of peptides including bradykinin, an inflammatory mediator.
Synthesis
The large-scale synthesis of sacubitril begins with 4-bromo-1,1'-biphenyl, which is converted to its corresponding Grignard reagent; this is reacted directly with (S)-epichlorohydrin, regioselectively at the less-substituted site of the epoxide.
A Mitsunobu reaction with succinimide is performed, followed by acidic hydrolysis of the succinimide protecting group, hydrolysis of the alkyl chloride using sodium hydroxide and protection of the free amine with a tert-butoxycarbonyl (Boc) group. The primary alcohol is oxidized using bleach with TEMPO as the catalyst. This aldehyde undergoes a Wittig reaction to form the α,β-unsaturated ester, which is converted to the lithium carboxylate by hydrolysis using lithium hydroxide in aqueous ethanol. Asymmetric hydrogenation using a ruthenium catalyst and a chiral bisphosphine ligand sets the second stereocenter. The carboxylate is esterified by reaction with thionyl chloride to form the acyl chloride, which is reacted with ethanol. The acidic conditions under which the acyl chloride is generated result in removal of the Boc group, which allows for direct reaction of the amine with succinic anhydride in the presence of pyridine as a base.
See also
Omapatrilat
References
Antihypertensive agents
Biphenyls
Carboxylate esters
Ethyl esters
Prodrugs
Butyramides
Carboxylic acids | Sacubitril | [
"Chemistry"
] | 589 | [
"Chemicals in medicine",
"Carboxylic acids",
"Functional groups",
"Prodrugs"
] |
43,724,378 | https://en.wikipedia.org/wiki/Collective%20effects%20%28accelerator%20physics%29 | Charged particle beams in a particle accelerator or a storage ring undergo a variety of different processes. Typically the beam dynamics is broken down into single particle dynamics and collective effects. Sources of collective effects include single or multiple inter-particle scattering and interaction with the vacuum chamber and other surroundings, formalized in terms of impedance.
The collective effects of charged particle beams in particle accelerators share some similarity with the dynamics of plasmas. In particular, a charged particle beam may be considered as a non-neutral plasma, and one may find mathematical methods in common with the study of stability or instabilities. One may also find commonality with the field of fluid mechanics, since the density of charged particles is often sufficient for the beam to be considered as a flowing continuum.
Another important topic is the attempt to mitigate collective effects by use of single bunch or multi-bunch feedback systems.
Types of collective effects
Collective effects can include emittance growth, bunch length or energy spread growth, instabilities, or particle losses. There are also multi-bunch effects.
Formalisms for treating collective effects
The collective beam motion may be modeled in a variety of ways. One may use macroparticle models, or else a continuum model. The evolution equation in the latter case is typically called the Vlasov equation, and requires one to write down the Hamiltonian function including the external magnetic fields, and the self interaction. Stochastic effects may be added by generalizing to the Fokker–Planck equation.
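For reference, a minimal one-degree-of-freedom form of the Vlasov equation for the phase-space density ψ(q, p; t) is (the notation is assumed here, since the article does not fix one):

    \frac{\partial \psi}{\partial t} + \frac{\partial H}{\partial p}\frac{\partial \psi}{\partial q} - \frac{\partial H}{\partial q}\frac{\partial \psi}{\partial p} = 0

where the Hamiltonian H contains both the external guide fields and the collective self-fields; adding damping and diffusion terms on the right-hand side yields the Fokker–Planck generalization mentioned above.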
Software for computation of collective effects
Depending on the effects considered and the modeling formalism used, different software is available for simulation. The collective effects must typically be added in addition to the single particle dynamics, which may be modeled using a tracking code.
See article on Accelerator physics codes.
References
Accelerator physics | Collective effects (accelerator physics) | [
"Physics"
] | 357 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
53,795,039 | https://en.wikipedia.org/wiki/RBGT%2062a | The RBGT-62a was a Geiger counter manufactured in the early 1960s for the Czechoslovak People's Army. It read beta and gamma radiation and had a transistorised circuit. The dial, controls, headphone jack, and probe connector are on the front of the meter, and the battery compartment on the back. The meter body is green lacquered metal, and the probe, aluminium and plastic. They are connected by a connection cable.
Reading scale
The readout scale possesses three levels, β1, β2, and β3. Beta 3 reads from 500-2,500 decays per minute (DPM), beta 2 reads 2,500-25,000 DPM, and beta 1 reads 25,000-250,000 DPM. There is a small equation at the bottom of the scale which says that 2,500 DPM equates to 1 mr/h. There is also a small line below the Beta 3 scale, labeled K.N., standing for "kontrola napětí", i.e. voltage check. When turning the meter on, the calibration potentiometer is to be turned so that the needle rests on this line.
Another characteristic of the meter is that sections are painted in luminous paint, so they could be read at night.
Probe
The probe is made of an aluminium cylinder with a plastic handle. The Geiger–Müller tube it uses is an STS-5. The probe can be set in three different ways: five large rectangular holes, fifteen small circular holes, or unperforated. These are for beta 1, beta 2, and beta 3, respectively. A rubber protective coating was to be put on the probe when used in a dusty or wet environment. That coating, part of the device's accessory set, consisted of unlubricated condoms (five pieces).
Knobs
The meter has two different knobs, both black plastic. The first one, at the top left of the meter, has five settings (clockwise): VYP. ("Off"), K.N. ("Voltage Check"), β1, β2, and β3. This top knob is roughly arrow-shaped with a dot of luminous paint at the point. The second knob, directly below the first, is cylindrical. It is the knob of the tuning potentiometer and is labeled K.N. in luminous paint.
Other front features
There is also a headphone jack, labeled SLUCHATKA, and a probe connector. The connector looks similar to a BNC connector but is not compatible with a BNC cable. It has an aluminium cap, attached by a steel line, for protection when not in use.
Other
The device had a leather carrier bag when used in the military. This bag had a probe compartment, as well as a small sewn-in Sr-90 test source, which had to be removed when the device was decommissioned.
References
Ionising radiation detectors | RBGT 62a | [
"Technology",
"Engineering"
] | 589 | [
"Ionising radiation detectors",
"Radioactive contamination",
"Measuring instruments"
] |
53,798,296 | https://en.wikipedia.org/wiki/Cascade%20chart%20%28NDI%20interval%20reliability%29 | A cascade chart is a tool that can be used in damage tolerance analysis to determine the proper inspection interval, based on reliability analysis, considering all the contextual uncertainties. The chart is called a "cascade chart" because the scatter of data points and downward curvature resembles a waterfall or cascade. This name was first introduced by Dr. Alberto W. Mello in his work "Reliability prediction for structures under cyclic loads and recurring inspections". Materials subject to cyclic loads, as shown in the graph on the right, may form and propagate cracks over time due to fatigue. Therefore, it is essential to determine a reliable inspection interval. There are numerous factors that must be considered to determine this inspection interval. The non-destructive inspection (NDI) technique must have a high probability of detecting a crack in the material. If missed, a crack may lead the structure to a catastrophic failure before the next inspection. On the other hand, the inspection interval cannot be so frequent that the structure's maintenance is no longer profitable.
NDI methods
NDI is a process used to examine materials without causing damage to the structure. The main purpose of using NDI techniques is to comb the surface of a material for small cracks that could affect the integrity of the entire structure. Because the structure is intended to be used again, it is essential that the methods of investigating materials for cracks do not damage the structure in any way.
Some of the most common NDI methods are:
Eddy current
Ultrasound
Dye penetrant
X-ray
Visual
Some of the techniques are more accurate and can detect smaller cracks. For example, visual inspection is the least reliable method because the human eye can only resolve and identify cracks on the order of millimeters. The table below shows, for each method, the crack size parameter below which there is a 0% chance of detection (a0). This is based on the resolution of each method. This number can be used in a Weibull-like distribution to map the probability of detection as a function of crack size.
As the table shows, the minimum detectable parameter increases from the ultrasound method to the visual method and from excellent accessibility to difficult accessibility. In any case, it is important to have a maintenance plan that allows multiple opportunities to find a crack that may be small and difficult to access.
Cascade chart
A cascade chart is an alternative way from the traditional damage tolerance analysis (DTA) methodology for determining a reliable inspection interval. It uses the scatter from crack growth simulations, uncertainty in material properties, and probability of detection distribution to determine the NDI interval, given a desired cumulative probability of detection under a given confidence level.
Probability of detection
The probability of detection (POD), a function of the NDI method, accessibility, and crack size, can be modeled by a Weibull-like relation of the form
$POD(a) = 1 - \exp\left[-\left(\frac{a - a_0}{\lambda}\right)^{\alpha}\right], \quad a > a_0.$
In this equation, $a_0$ is defined as the crack size below which detection is impossible. $\alpha$ and $\lambda$, on the other hand, are parameters related to the chosen NDI method that determine the shape of the probability curve. The number of inspections of a structure is directly related to the probability of detecting a crack in that structure. The more chances that an inspector has to find the crack, the more likely he or she will be to find the crack and prevent further damage to the structure. The total probability of detecting a crack based on each individual inspection's probability is
$P_{total} = 1 - \prod_{i=1}^{n} (1 - p_i).$
The variable $p_i$ represents the probability of detection for each crack size, and the variable $n$ represents the number of inspections conducted. Due to all the factors that play a role in determining the probability of detection, there will always be a non-zero probability that a crack will be missed, no matter what NDI method is used to inspect the structure.
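A minimal sketch of these two relations, assuming the Weibull-like form above; the function names, parameter values, and crack sizes are illustrative assumptions, not values from the cited study:

import math

def pod(a: float, a0: float = 0.5, alpha: float = 1.5, lam: float = 2.0) -> float:
    # Weibull-like probability of detecting a crack of size a (zero below a0)
    if a <= a0:
        return 0.0
    return 1.0 - math.exp(-((a - a0) / lam) ** alpha)

def cumulative_pod(p_per_inspection) -> float:
    # Probability that at least one of the inspections detects the crack
    miss = 1.0
    for p in p_per_inspection:
        miss *= 1.0 - p  # probability every inspection so far missed it
    return 1.0 - miss

# Three inspections while the crack grows from 1.0 to 4.0 mm (illustrative sizes)
print(cumulative_pod(pod(a) for a in (1.0, 2.5, 4.0)))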
Creating the chart
The process for creating the cascade chart shown on the right begins with modeling the crack growth over a time interval, number of cycles, or number of flight hours.
Based on an initial crack size, $a_i$, the crack growth curve can vary significantly, causing the crack to reach its critical size in different lengths of time. This contributes to the scatter of the cascade chart. Based on the manufacturing of different materials, the example considers a typical minimum flaw in a material as about 0.127 mm (0.005"). Knowing that new structures are deeply inspected before being put in service, the example considers a maximum undetectable crack size of about 1.27 mm (0.05") for a new structure. To simulate the variation of possible initial crack sizes, the Monte Carlo simulation method was used to randomly generate values between the given limits. In addition, the method randomly generated parameters for the crack growth curve. Based on typical variation of material properties, the constants $C$ and $m$ in the Paris-type crack growth equation $\frac{da}{dN} = C\,(\Delta K)^m$ can be varied to represent different crack growth rates. Uncertainties in loads and geometric factors affecting the stress intensity factor can also be incorporated to simulate different crack growth curves.
The probability of detection distribution curve for a chosen NDI method is superimposed to the crack growth curve, and the inspection interval is systematically changed to compute the cumulative probability of detection for a crack growing from the minimum to the critical size. The simulation is repeated several times, and a distribution of inspection interval versus structural reliability can be formed. To refine the randomization of the values, the Latin Hypercube procedure was also introduced.
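A compact Monte Carlo sketch of this procedure, assuming Paris-law growth with the stress intensity range approximated as stress*sqrt(pi*a) and an illustrative Weibull-like POD curve; all constants, distributions, and function names are assumptions for illustration, not values from the cited study:

import math
import random

def pod(a, a0=0.5, alpha=1.5, lam=2.0):
    # Illustrative Weibull-like POD curve (crack sizes in mm)
    return 0.0 if a <= a0 else 1.0 - math.exp(-((a - a0) / lam) ** alpha)

def grow(a, cycles, C, m, stress=100.0, step=100):
    # Coarse Euler integration of Paris' law da/dN = C*(dK)^m, dK ~ stress*sqrt(pi*a)
    for _ in range(cycles // step):
        a += step * C * (stress * math.sqrt(math.pi * a)) ** m
        if a > 1000.0:  # far beyond any critical size; stop integrating
            break
    return a

def mean_detection(interval, life=10000, a_crit=25.4, n_runs=1000):
    # Average cumulative POD when inspecting every `interval` cycles up to `life`
    total_miss = 0.0
    for _ in range(n_runs):
        a = random.uniform(0.127, 1.27)           # Monte Carlo initial flaw size, mm
        C = 10.0 ** random.uniform(-11.0, -10.0)  # scatter in material constants
        m = random.uniform(2.7, 3.3)
        p_miss, t = 1.0, interval
        while t <= life:
            a = grow(a, interval, C, m)
            if a >= a_crit:          # crack went critical before this inspection
                break
            p_miss *= 1.0 - pod(a)   # the inspection at time t missed the crack
            t += interval
        total_miss += p_miss
    return 1.0 - total_miss / n_runs

for interval in (1000, 2000, 5000):
    print(interval, round(mean_detection(interval), 4))

Smaller intervals give more detection opportunities per simulated crack, which is the mechanism behind the reliability-versus-interval scatter plotted in the cascade chart.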
As it is clear in the chart, the scatter in NDI decreases as the intervals are reduced and reliability is increased. Several sources of uncertainties can be included in the simulations, such as variation in material properties, the machining quality, the inspection methods, and accessibility of the crack. In the cascade chart, the reliability curve is presented with scatter (i.e. not every point is well defined by the negative quadratic curve). Therefore, it is necessary to make use of a confidence interval.
Using the chart
There are two variables that play a role in the selection of the inspection interval using the cascade chart. These variables are the probability of detection over the lifetime of the structure and the confidence level for the stated probability. As one of the graph's axes is probability, it is fairly easy to find the data points that match a specific probability. In aerospace structural analysis, it is common to consider 99.9999% probability (0.0001% risk) to be improbable and 99.99999% probability (0.00001% risk) to be extremely improbable. Then, the confidence interval is used to select a point where a specified percentage of the data points lies to the right of the selected point. For example, a 95% confidence interval means that 95% of the simulated cases must fall to the right of this point. This specific point is marked, and the respective point on the x-axis represents the suggested inspection interval. Furthermore, to derive the estimated risk per flight hour, the risk percentage can be divided by the number of flight hours described in the inspection interval. Hopefully, using this process, the inspection interval will lead to a higher percentage of cracks being detected before failure, ensuring greater flight safety. A final important observation is that improving the NDI method can increase the number of flight hours needed before re-inspection while maintaining a relatively low risk level.
See also
Fracture mechanics
Nondestructive testing
Monte Carlo method
Latin hypercube sampling
References
Jr, Alberto W. S. Mello, and Daniel Ferreira V. Mattos. "Reliability Prediction for Structures under Cyclic Loads and Recurring Inspections." Journal of Aerospace Technology and Management, vol. 1, no. 2, 28 October 2009, pp. 201–209., doi:10.5028/jatm.2009.0102201209. Accessed 9 April 2017.
ASM Handbook, 1992, “Failure analysis and prevention”, 9. Ed., Materials Park, OH, (ASM International, vol. 11), pp. 15–46.
Broek, D., 1989, “The practical use of fracture mechanics”. Galena, OH. Kluwer Academic, pp. 361–390.
Gallagher J. P., 1984, “USAF damage tolerant design handbook: Guidelines for the analysis and design of damage tolerant aircraft structures”, Dayton Research Institute, Dayton, OH, pp. 1.2.5–1.2.13.
IFI, 2005, “Análise e gerencialmento de riscos nos vôos de certificação”, MPH-830, Instituto de Fomento à Indústria. Divisão de Certficação de Aviação Civil, São José dos Campos, S.P., Brasil.
Knorr, E., 1974, “Reliability of the detection of flaws and of the determination of the flaw size”, AGARDograph, Quebec, No. 176, pp. 398–412.
Lewis W. H. et al., 1978, “Reliability of non-destructive inspection”, SA-ALC/MME. 76-6-38-1, San Antonio, TX.
Manuel, L., 2002, “CE 384S – Structural reliability course: Class notes”, Department of Civil Engineering, The University of Texas at Austin, Austin, TX.
Mattos, D. F. V. et al., 2009, “F-5M DTA Program”. Journal of Aerospace Technology and Management. Vol. 1, No1, pp. 113–120.
Mello Jr, A. W. S. et al., 2009, “Geração do ciclo de tensões para análise de fadiga, Software GCTAF F-5M”, RENG ASA-I 04/09, IAE, São José dos Campos, S.P., Brasil.
Provan, J. W., 2006, “Fracture, fatigue and mechanical reliability: An introduction to mechanical reliability”, Department of Mechanical Engineering, University of Victoria, Victoria, B.C.
USAF., 1974, “Airplane damage tolerance requirement”. Military Specification. Washington, DC. (MIL-A-83444).
USAF., 1974, “Airplane strength and rigidity reliability requirements, repeated loads and fatigue”. Military Specification. Washington, DC. (MIL-A-008866).
USAF., 2005, “Aircraft structural integrity program, airplane requirements”. Military Specification. Washington, DC. (MIL-STD-1530C).
Reliability analysis
Fracture mechanics
Structural analysis
Nondestructive testing | Cascade chart (NDI interval reliability) | [
"Materials_science",
"Engineering"
] | 2,147 | [
"Structural engineering",
"Reliability analysis",
"Fracture mechanics",
"Reliability engineering",
"Structural analysis",
"Materials science",
"Nondestructive testing",
"Materials testing",
"Mechanical engineering",
"Aerospace engineering",
"Materials degradation"
] |
53,802,271 | https://en.wikipedia.org/wiki/Machine%20learning%20control | Machine learning control (MLC) is a subfield of machine learning, intelligent control, and control theory which aims to solve optimal control problems with machine learning methods. Key applications are complex nonlinear systems for which linear control theory methods are not applicable.
Types of problems and tasks
Four types of problems are commonly encountered:
Control parameter identification: MLC translates to a parameter identification if the structure of the control law is given but the parameters are unknown. One example is the genetic algorithm for optimizing coefficients of a PID controller or discrete-time optimal control (see the sketch after this list).
Control design as regression problem of the first kind: MLC approximates a general nonlinear mapping from sensor signals to actuation commands, if the sensor signals and the optimal actuation command are known for every state. One example is the computation of sensor feedback from a known full state feedback. Neural networks are commonly used for such tasks.
Control design as regression problem of the second kind: MLC may also identify arbitrary nonlinear control laws which minimize the cost function of the plant. In this case, neither a model, the control law structure, nor the optimizing actuation command needs to be known. The optimization is only based on the control performance (cost function) as measured in the plant. Genetic programming is a powerful regression technique for this purpose.
Reinforcement learning control: The control law may be continually updated over measured performance changes (rewards) using reinforcement learning.
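As a sketch of the first problem type, a simple (1+1)-style evolutionary search (a minimal stand-in for a genetic algorithm) can tune PID gains against a simulated plant; the first-order plant model, the ITAE-style cost, and all search settings below are illustrative assumptions:

import random

def simulate(kp, ki, kd, dt=0.01, steps=1000):
    # Track a unit step with a PID loop around the first-order plant dy/dt = -y + u
    y, integral, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for n in range(steps):
        err = 1.0 - y
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        y += (-y + u) * dt           # explicit Euler step of the plant
        cost += (n * dt) * abs(err)  # ITAE: time-weighted absolute error
    return cost

# (1+1)-style evolutionary search over the three controller gains
best = [random.uniform(0.0, 5.0), random.uniform(0.0, 5.0), random.uniform(0.0, 0.5)]
best_cost = simulate(*best)
for _ in range(300):
    cand = [max(0.0, g + random.gauss(0.0, 0.3)) for g in best]
    cost = simulate(*cand)
    if cost < best_cost:  # keep the mutated gains only if tracking improves
        best, best_cost = cand, cost
print(best, best_cost)

Note that the search only uses the measured cost, not a model of the plant, which is the defining property of the regression-of-the-second-kind and evolutionary MLC settings as well.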
Applications
MLC has been successfully applied to many nonlinear control problems, exploring unknown and often unexpected actuation mechanisms. Example applications include:
spacecraft attitude control,
thermal control of buildings,
feedback control of turbulence,
and remotely operated underwater vehicles.
Many more engineering MLC applications are summarized in the review article of PJ Fleming & RC Purshouse (2002).
As is the case for all general nonlinear methods, MLC does not guarantee convergence, optimality, or robustness for a range of operating conditions.
See also
Reinforcement learning
References
Further reading
Dimitris C Dracopoulos (August 1997) "Evolutionary Learning Algorithms for Neural Adaptive Control", Springer.
Thomas Duriez, Steven L. Brunton & Bernd R. Noack (November 2016) "Machine Learning Control - Taming Nonlinear Dynamics and Turbulence", Springer.
Machine learning
Control theory
Cybernetics | Machine learning control | [
"Mathematics",
"Engineering"
] | 461 | [
"Machine learning",
"Applied mathematics",
"Control theory",
"Artificial intelligence engineering",
"Dynamical systems"
] |
34,072,435 | https://en.wikipedia.org/wiki/Multiple%20layered%20plasmonics | Multiple layered plasmonics use electronically responsive media to change and manipulate the plasmonic properties of plasmons. The properties typically being manipulated include the directed scattering of light and light absorption. These "changeable" plasmonics are currently under development in the academic community, the aim being to give them multiple sets of functions that depend on how they are manipulated or excited. Under these new manipulations, such as multiple layers that respond to different resonant frequencies, the new functions are designed to accomplish multiple objectives in a single application.
Overview
This article provides an overview of current developing medical usage of multiple layered plasmonics, more specifically those developed by the Halas Group at Rice University.
In addition to the proposed bio-medical applications, several other uses are briefly described below.
Bio-medical applications
Gold shelled nanoparticles, which are spherical nanoparticles with silica cores and gold shells, are used in cancer therapy and bio imaging enhancement.
Theranostic probes – capable of detection and treatment of cancer in a single treatment – are nanoparticles that have binding sites on their shell that allow them to attach to a desired location (typically cancerous cells). They can then be imaged through dual-modality imaging (an imaging strategy that uses X-rays and radionuclide imaging) and through near-infrared fluorescence. Gold nanoparticles are used because of their vivid optical properties, which are controlled by their size, geometry, and surface plasmons. Gold nanoparticles (such as AuNPs) have the benefit of being biocompatible and the flexibility to have many different molecules and fundamental materials attached to their shell (almost anything that can normally be attached to gold can be attached to the gold nano-shell, helping in identifying and treating cancer). The treatment of cancer is possible only because of the scattering and absorption that occur in plasmonics. Under scattering, the gold-plated nanoparticles become visible to imaging processes that are tuned to the correct wavelength, which depends on the size and geometry of the particles. Under absorption, photothermal ablation occurs, which heats the nanoparticles and their immediate surroundings to temperatures capable of killing the surrounding cells. Additionally, these nanoparticles can be made to release antisense DNA oligonucleotides when under photo-activation. These oligonucleotides are used in conjunction with the photo-thermal ablation treatments to perform gene therapy. This is accomplished because nanoparticle complexes are delivered inside of cells and then undergo light-induced release of DNA from their surface. This allows for the internal manipulation of a cell and provides a means for monitoring a group of cells' return to equilibrium.
Another example of multiple layered plasmonics involves placing drugs inside of the nanoparticle and using it as a vehicle to deliver toxic drugs to cancerous sites only. This is accomplished by coating the outside of a nanoparticle with iron oxide (allowing for easy tracking with an MRI machine); once the area of the tumor is coated with the drug-filled nanoparticles, the nanoparticles can be activated using resonant light waves to release the drug.
Other applications
Active plasmonics
Multiple layered plasmonics can be coated in nanoparticles to modify or drive a reaction near a metallic surface when properly excited.
Additionally, the scattering of light from these plasmonics can be controlled and even directed based on the surface particles, geometry, and size.
Energy applications
Multiple layered plasmonics can be used in harvesting solar radiation for energy applications. This is accomplished by redirecting incident light into the waveguide and evanescent surface modes of thin film photovoltaic devices.
Using multiple layered plasmons to purify water is also being investigated.
For more information on the research behind energy applications, and the collaborations behind this research, please visit the Halas group website listed below in the external links.
References
External links
halas.rice.edu
Metamaterials
Plasmonics | Multiple layered plasmonics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 843 | [
"Plasmonics",
"Metamaterials",
"Materials science",
"Surface science",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
34,073,649 | https://en.wikipedia.org/wiki/Approximate%20entropy | In statistics, an approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. For example, consider two series of data:
Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, ...), which alternates 0 and 1.
Series B: (0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, ...), which has either a value of 0 or 1, chosen randomly, each with probability 1/2.
Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series A is perfectly regular: knowing a term has the value of 1 enables one to predict with certainty that the next term will have the value of 0. In contrast, series B is randomly valued: knowing a term has the value of 1 gives no insight into what value the next term will have.
Regularity was originally measured by exact regularity statistics, which has mainly centered on various entropy measures.
However, accurate entropy calculation requires vast amounts of data, and the results will be greatly influenced by system noise, therefore it is not practical to apply these methods to experimental data. ApEn was first proposed (under a different name) by A. Cohen and I. Procaccia,
as an approximate algorithm to compute an exact regularity statistic, Kolmogorov–Sinai entropy, and later popularized by Steve M. Pincus. ApEn was initially used to analyze chaotic dynamics and medical data, such as heart rate, and later spread its applications in finance, physiology, human factors engineering, and climate sciences.
Algorithm
A comprehensive step-by-step tutorial with an explanation of the theoretical foundations of Approximate Entropy is available. The algorithm is:
Step 1 Assume a time series of data $u(1), u(2), \ldots, u(N)$. These are $N$ raw data values from measurements equally spaced in time.
Step 2 Let $m$ be a positive integer, with $m \le N$, which represents the length of a run of data (essentially a window). Let $r$ be a positive real number, which specifies a filtering level. Let $n = N - m + 1$.
Step 3 Define $\mathbf{x}(i) = [u(i), u(i+1), \ldots, u(i+m-1)]$ for each $i$ where $1 \le i \le n$. In other words, $\mathbf{x}(i)$ is an $m$-dimensional vector that contains the run of data starting with $u(i)$. Define the distance between two vectors $\mathbf{x}(i)$ and $\mathbf{x}(j)$ as the maximum of the distances between their respective components, given by
$d[\mathbf{x}(i), \mathbf{x}(j)] = \max_{k} |u(i+k-1) - u(j+k-1)|$
for $1 \le k \le m$.
Step 4 Define a count $C_i^m(r)$ as
$C_i^m(r) = \dfrac{\text{number of } j \text{ such that } d[\mathbf{x}(i), \mathbf{x}(j)] \le r}{n}$
for each $i$ where $1 \le i \le n$. Note that since $j$ takes on all values between 1 and $n$, the match will be counted when $j = i$ (i.e. when the test subsequence, $\mathbf{x}(j)$, is matched against itself, $\mathbf{x}(i)$).
Step 5 Define
$\phi^m(r) = \dfrac{1}{n} \sum_{i=1}^{n} \ln C_i^m(r),$
where $\ln$ is the natural logarithm, for a fixed $m$, $r$, and $n$ as set in Step 2.
Step 6 Define approximate entropy ($\mathrm{ApEn}$) as
$\mathrm{ApEn}(m, r, N) = \phi^m(r) - \phi^{m+1}(r).$
Parameter selection Typically, choose $m = 2$ or $m = 3$, whereas $r$ depends greatly on the application.
An implementation on Physionet, which is based on Pincus, uses $d[\mathbf{x}(i), \mathbf{x}(j)] < r$ instead of $d[\mathbf{x}(i), \mathbf{x}(j)] \le r$ in Step 4. While a concern for artificially constructed examples, it is usually not a concern in practice.
Example
Consider a sequence of $N = 51$ samples of heart rate equally spaced in time:
$S_N = \{85, 80, 89, 85, 80, 89, \ldots\}$
Note the sequence is periodic with a period of 3. Let's choose $m = 2$ and $r = 3$ (the values of $m$ and $r$ can be varied without affecting the result).
Form a sequence of vectors:
$\mathbf{x}(1) = [85, 80]$, $\mathbf{x}(2) = [80, 89]$, $\mathbf{x}(3) = [89, 85]$, $\mathbf{x}(4) = [85, 80]$, $\ldots$
Distance is calculated repeatedly as follows. In the first calculation,
$d[\mathbf{x}(1), \mathbf{x}(1)] = \max(|85 - 85|, |80 - 80|) = 0,$
which is less than $r$.
In the second calculation, note that $|80 - 89| > |85 - 80|$, so
$d[\mathbf{x}(1), \mathbf{x}(2)] = |80 - 89| = 9,$
which is greater than $r$.
Similarly,
$d[\mathbf{x}(1), \mathbf{x}(3)] = \max(|85 - 89|, |80 - 85|) = 5 > r$ and $d[\mathbf{x}(1), \mathbf{x}(4)] = 0 < r.$
The result is a total of 17 terms $\mathbf{x}(j)$ such that $d[\mathbf{x}(1), \mathbf{x}(j)] \le r$. These include $\mathbf{x}(1), \mathbf{x}(4), \mathbf{x}(7), \ldots, \mathbf{x}(49)$. In these cases, $C_i^m(r)$ is
$C_1^2(3) = \dfrac{17}{50}.$
Note in Step 4, $1 \le i \le n$ for $\mathbf{x}(i)$. So the terms $\mathbf{x}(j)$ such that $d[\mathbf{x}(3), \mathbf{x}(j)] \le r$ include $\mathbf{x}(3), \mathbf{x}(6), \ldots, \mathbf{x}(48)$, and the total number is 16, giving $C_3^2(3) = \frac{16}{50}$.
At the end of these calculations, we have 34 vectors with $C_i^2(3) = \frac{17}{50}$ and 16 vectors with $C_i^2(3) = \frac{16}{50}$, so
$\phi^2(3) = \dfrac{1}{50} \sum_{i=1}^{50} \ln C_i^2(3) \approx -1.0982.$
Then we repeat the above steps for $m = 3$. First form a sequence of vectors:
$\mathbf{x}(1) = [85, 80, 89]$, $\mathbf{x}(2) = [80, 89, 85]$, $\mathbf{x}(3) = [89, 85, 80]$, $\ldots$
By calculating distances between vectors $\mathbf{x}(i)$ and $\mathbf{x}(j)$, we find the vectors satisfying the filtering level have the following characteristic:
$d[\mathbf{x}(i), \mathbf{x}(j)] \le r$ exactly when $i \equiv j \pmod 3$, in which case the distance is 0.
Therefore,
$C_i^3(3) = \dfrac{17}{49}$ for the 17 vectors with $i \equiv 1 \pmod 3$, and $C_i^3(3) = \dfrac{16}{49}$ for the remaining 32 vectors.
At the end of these calculations, we have
$\phi^3(3) = \dfrac{1}{49} \sum_{i=1}^{49} \ln C_i^3(3) \approx -1.0982.$
Finally,
$\mathrm{ApEn} = |\phi^2(3) - \phi^3(3)| \approx 1.0997 \times 10^{-5}.$
The value is very small, so it implies the sequence is regular and predictable, which is consistent with the observation.
Python implementation
import math
def approx_entropy(time_series, run_length, filter_level) -> float:
"""
Approximate entropy
>>> import random
>>> regularly = [85, 80, 89] * 17
>>> print(f"{approx_entropy(regularly, 2, 3):e}")
1.099654e-05
>>> randomly = [random.choice([85, 80, 89]) for _ in range(17*3)]
>>> 0.8 < approx_entropy(randomly, 2, 3) < 1
True
"""
def _maxdist(x_i, x_j):
return max(abs(ua - va) for ua, va in zip(x_i, x_j))
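    # phi(m): mean natural log of the fraction of windows within filter_level of each window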
def _phi(m):
n = time_series_length - m + 1
x = [
[time_series[j] for j in range(i, i + m - 1 + 1)]
for i in range(time_series_length - m + 1)
]
counts = [
sum(1 for x_j in x if _maxdist(x_i, x_j) <= filter_level) / n for x_i in x
]
return sum(math.log(c) for c in counts) / n
time_series_length = len(time_series)
return abs(_phi(run_length + 1) - _phi(run_length))
if __name__ == "__main__":
import doctest
doctest.testmod()
MATLAB implementation
Fast Approximate Entropy from MatLab Central
approximateEntropy
Interpretation
The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations. A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
Advantages
The advantages of ApEn include:
Lower computational demand. ApEn can be designed to work for small data samples ( points) and can be applied in real time.
Less effect from noise. If data is noisy, the ApEn measure can be compared to the noise level in the data to determine what quality of true information may be present in the data.
Limitations
The ApEn algorithm counts each sequence as matching itself to avoid the occurrence of in the calculations. This step might introduce bias in ApEn, which causes ApEn to have two poor properties in practice:
ApEn is heavily dependent on the record length and is uniformly lower than expected for short records.
It lacks relative consistency. That is, if ApEn of one data set is higher than that of another, it should, but does not, remain higher for all conditions tested.
Applications
ApEn has been applied to classify electroencephalography (EEG) in psychiatric diseases, such as schizophrenia, epilepsy, and addiction.
See also
Recurrence quantification analysis
Sample entropy
References
Time series
Entropy and information
Articles with example Python (programming language) code | Approximate entropy | [
"Physics",
"Mathematics"
] | 1,523 | [
"Dynamical systems",
"Entropy",
"Physical quantities",
"Entropy and information"
] |
34,076,003 | https://en.wikipedia.org/wiki/Hybrid%20operating%20room | A hybrid operating room is an advanced surgical theatre that is equipped with advanced medical imaging devices such as fixed C-arms, computed tomography (CT) scanners, or magnetic resonance imaging (MRI) scanners. These imaging devices enable minimally-invasive surgery, which is intended to be less traumatic for the patient: the procedure is performed through one or several small incisions rather than a large opening.
Though imaging has been a standard part of operating rooms for a long time in the form of mobile C-arms, ultrasound, and endoscopy, these minimally-invasive procedures require imaging techniques that can visualize smaller body parts, such as thin vessels in the heart muscle, and are facilitated by intraoperative 3D imaging.
Clinical applications
Hybrid operating rooms are currently used mainly in cardiac, vascular, and neurosurgery, but could be suitable for a number of other surgical disciplines.
Cardiovascular surgery
The repair of diseased heart valves and the surgical treatment of rhythm disturbances and aortic aneurysms can benefit from the imaging capabilities of a hybrid OR. Hybrid Cardiac Surgery is a widespread treatment for these diseases.
The shift toward endovascular treatment of abdominal aortic aneurysms also pushed the spread of angiographic systems in vascular operating room environments. Particularly for complex endografts, a hybrid operating theater should be a basic requirement. Also, it is well-suited for emergency treatment.
Some surgeons not only verify the placement of complex endografts intraoperatively, they also use their angiography system and the applications it offers for planning the procedure. As anatomy changes between a preoperative CT and intraoperative fluoroscopy because of patient positioning and the insertion of stiff material, more precise planning is possible if the surgeon performs an intraoperative rotational angiography, runs an automatic segmentation of the aorta, places markers for the renal arteries and other landmarks in 3D, and then overlays the contours on 2D fluoroscopy. This guidance is updated with any change in C-arm angulation/position or table position.
Neurosurgery
In neurosurgery, applications for hybrid ORs are for example spinal fusion and intracranial aneurysm coiling. In both cases, they have been rated promising to improve outcomes. For spinal fusion procedures, an integration with a navigation system can further improve the workflow. Intraoperative acquisition of a cone beam computed tomography image can also be used to reconstruct three dimensional CT-like images. This may be useful for the applications above and also for confirmation of targeting for placement of ventricular catheters, biopsies, or deep brain stimulation electrodes. Intra-operative MRI is used to guide brain tumor surgery as well as placement of deep brain stimulation electrodes and interstitial laser thermal therapy.
Thoracic surgery and endobronchial procedures
Procedures to diagnose and treat small pulmonary nodules have also recently been performed in hybrid operating rooms. Interventional image guidance thereby offers the advantage of precisely knowing the position of the nodules, particularly in small or "ground-glass" opaque tumors, metastases, and/or patients with reduced pulmonary function. This allows for precise navigation in biopsies and resection in video-assisted thoracoscopic surgery. Most importantly, using interventional imaging in video-assisted thoracoscopic surgery can substitute for the loss of tactile sensing. This approach also has the potential to spare healthy lung tissue by knowing the exact position of the nodule, which increases the quality of life for the patient after the operation.
The process for diagnosis and treatment usually comprises 3 steps:
Detection of nodules on CT or chest X-ray
Biopsy of nodule to evaluate malignancy
If necessary, treatment of nodule through surgery, radiotherapy, and chemotherapy (curative approach) or through chemoembolization and ablation (palliative approach)
A hybrid operating room supports steps 2 and 3 (if surgery is performed) of this workflow:
Biopsy
Small lung nodules identified on a thorax CT need to be examined for malignancy, thus a small portion of sample tissue is taken out in a needle procedure. The needle is advanced through the bronchial tree, or trans-thoracically, toward the position of the nodule. To make sure tissue is captured from the nodule as opposed to accidentally taking healthy lung tissue, imaging modalities such as mobile C-arms, ultrasound, or bronchoscopes are used. The yield rate of biopsies in small nodules is reported to be between 33 and 50% in tumors smaller than 3 cm.
To increase the yield rate, advanced interventional imaging with angiographic C-arms has proven to be beneficial. The advantage of intra-procedural imaging is that the patient and the diaphragm are in exactly the same position during 2D/3D imaging and the actual biopsy. Hence the accuracy is usually much higher than using pre-operative data.
Rotational angiography visualizes the bronchial tree in 3D during the procedure. The air thereby serves as a 'natural' contrast agent, thus the nodules are well visible. On this 3D image, using dedicated software, the nodules can be marked, along with a planned needle path for the biopsy (endobronchially or trans-thoracically). These images can then be overlaid on live fluoroscopy. This gives the pulmonologist improved guidance toward the nodules. Yield rates of 90% in nodules of 1–2 cm, and 100% in nodules > 2 cm have been reported with this approach.
Surgery
Video-assisted thoracoscopic surgery is a minimally-invasive technique to resect lung nodules that saves the patient the trauma of a thoracotomy. Thereby, small ports are used to access the pulmonary lobes and introduce a camera on a thoracoscope, along with the necessary instruments. While this procedure speeds up recovery and potentially reduces complications, the loss of natural vision and tactile sensing makes it difficult for the surgeon to locate the nodules, especially in cases of non-superficial, ground-glass opaque, and small lesions. The yield rate for nodules < 1 cm can be below 40% as studies show. As a consequence sometimes more healthy tissue is resected than actually necessary in order to avoid missing (parts of) the lesion. Using advanced intra-operative imaging in the operating rooms helps to precisely locate and resect the lesion in a potentially tissue-sparing and quick fashion. In order to be able to use image guidance during video-assisted thoracoscopic surgery, rotational angiography has to be performed before the introduction of ports, thus before the lobe in question deflates. This way the lesion is visible through the natural contrast of air. In a second step, hook wires, thread needles, or contrast agent (lipiodol, iopamidol) are introduced into or next to the lesion to ensure visibility on the angiogram after lung deflation. Then, the conventional part of video-assisted thoracoscopic surgery starts with the introduction of thoracoscopes. The imaging system is used in fluoroscopic mode now, where both the inserted instruments and the previously marked lesion are well visible. A precise resection is now possible. In case contrast agent has been used to mark the lesion, it will also drain into the regional lymph nodes, which then can be resected within the same procedure.
Orthopedic trauma surgery
Complex fractures like pelvis fractures, calcaneal fractures, or tibia head fractures need an exact placement of screws and other surgical implants to allow the quickest possible treatment of the patients. Minimally invasive surgical approaches result in less trauma for the patient and quicker recovery. However, the risk of malpositioning, revisions, and nerve damage cannot be underestimated (Malposition and revision rates of different imaging modalities for percutaneous iliosacral screw fixation following pelvic fractures: a systematic review and meta-analysis). An angio system with a spatial resolution of 0.1 mm, a field of view large enough to image the entire pelvis in one image, and a high kW rate gives the surgeon high-precision images while not impairing hygiene (floor-mounted systems) or access to the patient (CT). Degenerative spine surgery, traumatic spinal fractures, oncologic fractures, or scoliosis surgery are other types of surgery that can be optimized in a hybrid OR. The large field of view and the high kW rate allow optimal imaging of even obese patients. Navigation systems or the use of integrated laser guidance can support and improve the workflow.
Laparoscopic surgery
As in other minimally invasive surgery, not everybody in the surgical community initially believed in this technology. Today it is the gold standard for many types of surgery. Starting with simple appendectomy, cholecystectomy, partial kidney resections, and partial liver resections, the laparoscopic approach is expanding. The image quality, the possibility of imaging the patient in the surgical position, and the guidance of the instruments facilitate this approach (Efficacy of DynaCT for surgical navigation during complex laparoscopic surgery: an initial experience). Partial resection of the kidney, leaving as much healthy tissue, and therefore kidney function, to the patient as possible, has been described. The challenges the surgeons face are the loss of natural 3D vision and tactile sensing. Through small ports he/she has to rely on the images provided by the endoscope and is unable to feel the tissue. In a hybrid operating room the anatomy can be updated and imaged in real time. 3D images can be fused and/or overlaid on live fluoroscopy or the endoscope (Real-time image guidance in laparoscopic liver surgery: first clinical experience with a guidance system based on intraoperative CT imaging). Crucial anatomy like vessels or a tumor can be avoided and complications reduced. Further investigations are under trial at the moment (Surgical navigation in urology: European perspective).
Emergency care
For the treatment of trauma patients, every minute counts. Patients with severe bleeding after car accidents, explosions, gunshot wounds, aortic dissections, etc. need immediate care due to the life-threatening blood loss. In a hybrid operating room both open and endovascular treatment of the patient can be performed. For example, the tension in the brain due to a severe haemorrhage can be relieved and the aneurysm can be coiled. The concept of placing the emergency patient on an operating table as soon as he/she enters the hospital, performing a trauma scan in the CT if stable, or proceeding immediately in the hybrid operating room if unstable, all without having to reposition the patient, can save valuable time and reduce the risk of further injury.
Imaging techniques
Imaging techniques with a fixed C-arm
Fluoroscopy and data acquisition
Fluoroscopy is performed with continuous X-ray to guide the progression of a catheter or other devices within the body in live images. To depict even fine anatomic structures and devices, brilliant image quality is required. In particular, in cardiac interventions, imaging the moving heart requires a high frame rate (30f/s, 50 Hz) and high power output (at least 80 kW). Image quality needed for cardiac applications can only be achieved by high powered fixed angiography systems, not with mobile C-arms.
Angiographic systems provide a so-called acquisition mode, which stores the acquired images automatically on the system to be uploaded into an image archive later. While standard fluoroscopy is predominantly used to guide devices and to re-position the field of view,
data acquisition is applied for reporting or diagnostic purposes. In particular, when contrast media is injected, a data acquisition is mandatory, because the stored sequences can be replayed as often as required without re-injection of contrast media. To achieve a sufficient image quality for diagnoses and reporting, the angiographic system uses up to 10 times higher X-ray doses than standard fluoroscopy. Thus, data acquisition should be applied only when truly necessary. Data acquisition serves as a base for advanced imaging techniques such as DSA and rotational angiography.
Rotational angiography
Rotational angiography is a technique to acquire CT-like 3D images intraoperatively with a fixed C-arm. To do that, the C-arm is rotated around the patient, acquiring a series of projections that will be reconstructed to a 3D data set.
Digital subtraction angiography
Digital subtraction angiography (DSA) is a two-dimensional imaging technique for the visualization of blood vessels in the human body (Katzen, 1995).
For DSA, the same sequence of a projection is acquired without and then with contrast agent injection through the vessels under investigation. The first image is subtracted from the second to remove background structures such as bones as completely as possible and show the contrast-filled vessels more clearly. As there is a time lag between the acquisition of the first and the second image, motion correction algorithms are necessary to remove movement artifacts.
An advanced application of DSA is road mapping. From the acquired DSA sequence, the image frame with maximum vessel opacification is identified and assigned to be the so-called road-map mask. This mask is continuously subtracted from live fluoroscopy images to produce real-time subtracted fluoroscopic images overlaid on a static image of the vasculature. The clinical benefit is better visualization of small and complex vascular structures without distracting underlying tissue to support the placement of catheters and wires.
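A minimal NumPy sketch of the subtraction and road-mapping ideas described above, ignoring the motion-correction step; the array shapes, pixel values, and function names are illustrative assumptions:

import numpy as np

def dsa(mask_frame: np.ndarray, contrast_frame: np.ndarray) -> np.ndarray:
    # Log-subtraction: X-ray attenuation is multiplicative (Beer-Lambert), so
    # subtracting log images removes background structures such as bones
    eps = 1e-6  # avoid log(0) on dark pixels
    return np.log(contrast_frame + eps) - np.log(mask_frame + eps)

def roadmap(live_frame: np.ndarray, roadmap_mask: np.ndarray) -> np.ndarray:
    # Continuously subtract the stored peak-opacification mask from live
    # fluoroscopy so moving devices appear over a static image of the vessels
    return live_frame.astype(float) - roadmap_mask.astype(float)

# Toy 2x2 frames: one pixel darkens when contrast fills the vessel
mask = np.full((2, 2), 100.0)
fill = np.array([[100.0, 40.0], [100.0, 100.0]])
print(dsa(mask, fill))  # nonzero (negative) only at the vessel pixel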
2D/3D registration
Fusion imaging and 2D/3D overlay
Modern angiographic systems are not just used for imaging, but support the surgeon also during the procedure by guiding the intervention based on 3D information acquired either pre-operatively or intra-operatively. Such guidance requires that the 3D information is registered to the patient. This is done using special proprietary software algorithms.
Information flow between workstation and angiographic system
3D images are calculated from a set of projections acquired during a rotation of the C-arm around the patient. The volume reconstruction is performed on a separate workstation. The C-arm and the workstation are connected and communicate continuously. For example, when the user virtually rotates the volume on the workstation to view the anatomy from a certain perspective, the parameters of this view can be transmitted to the angio system, which then drives the C-arm to exactly the same perspective for fluoroscopy. In the same way, if the C-arm angulation is changed, this angulation can be transmitted to the workstation, which updates the volume to the same perspective as the fluoroscopic view. The software algorithm that stands behind this process is called registration and can also be applied to other DICOM images, such as CT or magnetic resonance tomography data acquired preoperatively.
Overlay of 3D information on top of 2D fluoroscopy
The 3D image itself can be overlaid colour-coded on top of the fluoroscopic image. Any change of the angulation of the C-arm will cause the workstation to re-calculate in real time the view on the 3D image to match exactly the view of the live 2D fluoroscopy image. Without additional contrast agent injection the surgeon can observe device movements simultaneously with the 3D overlay of the vessel contours in the fluoroscopy image. An alternative way to add information from the workstation to the fluoroscopic image is to overlay, after either manual or automatic segmentation of the anatomical structures of interest in the 3D image, the outline as a contour onto the fluoroscopic image. This provides additional information that is not visible in the fluoroscopic image. Some software provides landmarks automatically; more can be added manually by the surgeon or a qualified technician. One example is the placement of a fenestrated stentgraft to treat an abdominal aortic aneurysm. The ostia of the renal arteries can be circled on the 3D image and then overlaid on the live fluoroscopy. As the marking has been done in 3D, it will update with any change of the fluoroscopy angulation to match the current view.
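A minimal sketch of how such a workstation might keep the rendered view synchronized with the C-arm angulation, modeling LAO/RAO and cranial/caudal as rotations about fixed patient axes; the axis conventions and function names are assumptions for illustration, not any vendor's actual registration algorithm:

import math
import numpy as np

def rot_about_z(deg: float) -> np.ndarray:
    # LAO/RAO angulation, modeled as rotation about the patient's head-foot axis (z)
    t = math.radians(deg)
    return np.array([[math.cos(t), -math.sin(t), 0.0],
                     [math.sin(t),  math.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_about_x(deg: float) -> np.ndarray:
    # Cranial/caudal angulation, modeled as rotation about the left-right axis (x)
    t = math.radians(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, math.cos(t), -math.sin(t)],
                     [0.0, math.sin(t),  math.cos(t)]])

def view_direction(lao_rao_deg: float, cran_caud_deg: float) -> np.ndarray:
    # Start from a straight AP beam direction (assumed +y) and apply both
    # angulations; the workstation would render the registered volume along it
    ap = np.array([0.0, 1.0, 0.0])
    return rot_about_x(cran_caud_deg) @ rot_about_z(lao_rao_deg) @ ap

print(view_direction(30.0, 0.0))  # e.g. 30 degrees LAO, no cranial angulation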
Guidance during trans-aortic valve implantation
Trans-Aortic Valve Implantation requires exact positioning of the valve in the aortic root to prevent complications. A good fluoroscopic view is essential, whereby an exactly perpendicular angle to the aortic root is considered optimal for the implantation. Recently, applications have been released which support the surgeon in selecting this optimal fluoroscopy angulation or even drive the C-arm automatically into the perpendicular view to the aortic root. Some approaches are based on pre-operative CT images, which are used to segment the aorta and calculate optimal viewing angles for valve implantation. CT images must be registered with C-arm CT or fluoroscopic images to transfer the 3D volume to the actual angiographic system. Errors during the registration process might result in deviation from the optimal angulation of the C-arm and must be manually corrected. Additionally, anatomical variations between the acquisition of the pre-operative CT image and surgery are not accounted for. Patients are generally imaged with hands up in a CT scanner while surgery is performed with the arms beside the patient, which leads to substantial errors. Algorithms purely based on C-arm CT images acquired in the operating room by the angiographic system are inherently registered to the patient and show the present anatomical structures. With such an approach, the surgeon does not rely on pre-operative CT images acquired by the radiology department, which simplifies the workflow in the operating room and reduces errors in the process.
Functional imaging in the operating room
Improvements in C-arm technology nowadays also enable perfusion imaging and can visualize parenchymal blood volume in the OR. To do that, rotational angiography (3D-DSA) is combined with a modified injection protocol and a special reconstruction algorithm. The blood flow can then be visualized over time. This can be useful in the treatment of patients with ischemic stroke.
Imaging techniques with a CT
A CT system mounted on rails can be moved into and out of an operating room to support complex surgical procedures, such as brain, spine, and trauma surgery with additional information through imaging. The Johns Hopkins Bayview Medical Center in Maryland describes that their intra-operative CT usage has a positive impact on patient outcomes by improving safety, decreasing infections, and lowering the risks of complications.
Imaging techniques with magnetic resonance tomography
Magnetic resonance imaging is used in neurosurgery:
Before surgery to enable precise planning
During surgery to support decision making and accounting for brain shift
After surgery to evaluate the outcome
A magnetic resonance tomography system usually requires a lot of space both in the room and around the patient. It is not possible to perform surgery in a regular magnetic resonance tomography room. Thus for step 2, there are two ways to use magnetic resonance scanners intraoperatively. One is a moveable magnetic resonance tomography scanner that can be brought in only when imaging is needed; the other is to transport the patient to a scanner in an adjacent room during surgery.
Planning considerations
Location and organization
Not only the usage of a hybrid operating room is "hybrid"; so is its role within the hospital system. As it holds an imaging modality, the radiology department could take the lead responsibility for the room, for reasons of handling expertise, technology, maintenance, and connectivity. From a patient workflow perspective, the room could be run by the surgical department and should rather be situated next to other surgical facilities, to ensure proper patient care and fast transportation.
Room size and preparation
Installing a hybrid operating room is a challenge to standard hospital room sizes, as not only does the imaging system require some additional space, but there are also more people in the room than in a normal OR. A team of 8 to 20 people including anesthesiologists, surgeons, nurses, technicians, perfusionists, support staff from device companies, etc. can work in such an OR. Depending on the imaging system chosen, a room size of 70 square meters including a control room, but excluding a technical room and the preparation areas, is recommended. Additional necessary preparations of the room are 2–3 mm lead shielding and potentially reinforcement of the floor or ceiling to hold the additional weight of the imaging system (approximately 650–1800 kg).
Workflow
Planning a hybrid operating room requires involving a considerable number of stakeholders. To ensure a smooth workflow in the room, all parties working there need to state their requirements, which will impact the room design and determine various resources like space, medical, and imaging equipment. This may require professional project management and several iterations in the planning process with the vendor of the imaging system, as technical interdependencies are complex. The result is always an individual solution tailored to the needs and preferences of the interdisciplinary team and the hospital.
Lights, monitors, and booms
In general, two different light sources are needed in an operating room: the surgical (operating) lights used for open procedures and the ambient lighting for interventional procedures. Particular attention should be paid to the possibility of dimming the lights; this is frequently needed during fluoroscopy or endoscopy. For the surgical lights it is most important that they cover the complete area across the operating table. Moreover, they must not interfere with head heights and collision paths of other equipment. The most frequent mounting position of OR lights is centrally above the operating table. If a different position is chosen, the lights usually are swiveled in from an area outside the operating table. Because one central axis per light head is necessary, this may lead to at least two central axes and mounting points in order to ensure sufficient illumination of the surgical field. The movement range of the angiography system determines the positioning of the operating room lights. Central axes must be outside of the moving path and swivel range. This is especially important as devices have defined room height requirements that must be met. In this case, head clearance height for the OR light may be an issue. This makes lights a critical item in the planning and design process. Other aspects in the planning process of operating room lights include avoidance of glare and reflections. Modern operating room lights may have additional features, like built-in camera and video capabilities. For the illumination of the wound area, a double-arm OR light system is required. Sometimes even a third light may be required, in cases where more than one surgical activity takes place at the same time, e.g. vein stripping of the legs.
In summary, the key topics for planning the surgical light system include:
Central location above the operating table (consideration in planning with ceiling mounted systems).
Usually three light heads for optimal illumination of multiple surgical fields
Suspension accommodating unrestricted, independent movement, and stable positioning of light heads
Modular system with options for extension, e.g. video monitor and/or camera.
Imaging systems
The most common imaging modality to be used in hybrid ORs is a C-arm. Expert consensus rates the performance of mobile C-arms in hybrid ORs as insufficient, because the limited power of the tube impacts image quality, the field of view is smaller for image-intensifier systems than for flat-panel detector systems, and the cooling system of mobile C-arms can lead to overheating after just a few hours, which can be too short for lengthy surgical procedures or for the multiple procedures in a row that would be needed to recover the investment in such a room.
Fixed C-arms do not have these limitations, but require more space in the room. These systems can be mounted either on the floor, the ceiling, or both if a biplane system is chosen. The latter is the system of choice if pediatric cardiologists, electrophysiologists, or neurointerventionalists are major users of the room. It is not recommended to implement a biplane system if not clearly required by these clinical disciplines, as ceiling-mounted components may raise hygienic issues: in fact, some hospitals do not allow operating parts directly above the surgical field, because dust may fall in the wound and cause infection. Since any ceiling-mounted system includes moving parts above the surgical field and impairs the laminar airflow, such systems are not the right option for hospitals enforcing the highest hygienic standards.
There are more factors to consider when deciding between ceiling- and floor-mounted systems. Ceiling-mounted systems require substantial ceiling space and, therefore, reduce the options to install surgical lights or booms. Nonetheless, many hospitals choose ceiling-mounted systems because they cover the whole body with more flexibility and – most importantly – without moving the table. The latter is sometimes a difficult and dangerous undertaking during surgery with the many lines and catheters that must also be moved. Moving from a parking to a working position during surgery, however, is easier with a floor-mounted system, because the C-arm just turns in from the side and does not interfere with the anesthesiologist. The ceiling-mounted system, by contrast, during surgery can hardly move to a parking position at the head end without colliding with anesthesia equipment. In an overcrowded environment like the OR, biplane systems add to the complexity and interfere with anesthesia, except for neurosurgery, where anesthesia is not at the head end. Monoplane systems are therefore clearly recommended for rooms mainly used for cardiac surgery.
Operating table
The selection of the operating table depends on the primary use of the system. Interventional tables with floating table tops and tilt and cradle compete with fully integrated flexible operating tables. Identification of the right table is a compromise between interventional and surgical requirements. Surgical and interventional requirements may be mutually exclusive. Surgeons, especially orthopedic, general, and neurosurgeons usually expect a table with a segmented tabletop for flexible patient positioning. For imaging purposes, a radiolucent tabletop, allowing full body coverage, is required. Therefore, non-breakable carbon fibre tabletops are used.
Interventionalists require a floating tabletop to allow fast and precise movements during angiography. Cardiac and vascular surgeons, in general, have less complex positioning needs, but based on their interventional experience in angiography may be used to having fully motorized movements of the table and the tabletop. For positioning patients on non breakable tabletops, positioning aids are available, i.e. inflatable cushions. Truly floating tabletops are not available with conventional operating tables. As a compromise, floatable angiography tables specifically made for surgery with vertical and lateral tilt are recommended. To further accommodate typical surgical needs, side rails for mounting surgical equipment like retractors or limb holders should be available for the table.
The position of the table in the room also impacts surgical workflow. A diagonal position in the operating room may be considered in order to gain space and flexibility in the room, as well as access to the patient from all sides. Alternatively, a conventional surgery table can be combined with an imaging system if the vendor offers a corresponding integration. The operating room can then be used either with a radiotranslucent non-breakable tabletop that supports 3D imaging, or with a universal breakable tabletop that provides enhanced patient positioning, but restricts 3D imaging. The latter are particularly suited for neurosurgery or orthopedic surgery, and these integrated solutions recently also became commercially available. If it is planned to share the room for hybrid and open conventional procedures, these are sometimes preferred. They provide greater workflow flexibility because the tabletops are dockable and can be easily exchanged, but require some compromises with interventional imaging.
In summary, important aspects to be considered are the position in the room, radiolucency (carbon fiber tabletop), compatibility, and integration of imaging devices with the operating table. Further aspects include table load, adjustable table height, and horizontal mobility (floating) including vertical and lateral tilt. It is important to also have proper accessories available, such as rails for mounting special surgical equipment (retractors, camera holders). Free-floating angiography tables with tilt and cradle capabilities are best suited for cardiovascular hybrid operating rooms.
Radiation dose
X-ray radiation is ionizing radiation, thus exposure is potentially harmful. Compared to a mobile C-arm, which is classically used in surgery, CT scanners and fixed C-arms work on a much higher energy level, which induces a higher dose. Therefore, it is very important to monitor the radiation dose applied in a hybrid operating room, both for the patient and the medical staff.
There are a few simple measures to protect people in the operating room from scattered radiation, thus lowering their dose. Awareness is one critical issue; otherwise the available protection tools might be neglected. Among these tools is protective clothing in the form of a protective apron for the trunk, a protective thyroid shield around the neck, and protective glasses. The latter may be replaced by a ceiling-suspended lead glass panel. Additional lead curtains can be installed at the table side to protect the lower body region. Even more restrictive rules apply to pregnant staff members.
A very effective protective measure, for both the staff and the patient, is of course applying less radiation. There is always a trade-off between radiation dose and image quality: a higher X-ray dose leads to a clearer picture. Modern software technology can improve image quality during post-processing, such that the same image quality is reached with a lower dose. Image quality thereby is described by contrast, noise, resolution, and artifacts. In general, the ALARA principle (as low as reasonably achievable) should be followed. Dose should be as low as possible, but image quality can only be reduced to the level that the diagnostic benefit of the examination is still higher than the potential harm to the patient.
There are both technical measures taken by X-ray equipment manufacturers to reduce dose constantly, and handling options for the staff to reduce dose depending on the clinical application. Among the former is beam hardening. Among the latter are frame rate settings, pulsed fluoroscopy, and collimation.
Beam hardening: X-ray radiation consists of hard and soft particles, i.e. particles with a lot of energy and particles with little energy. Unnecessary exposure is mostly caused by soft particles, as they are too weak to pass through the body and instead interact with (are absorbed by) it. Hard particles, by contrast, pass through the patient. A filter in front of the X-ray tube can catch the soft particles, thus hardening the beam. This decreases dose without impacting image quality.
Frame rate: High frame rates (images acquired per second) are needed to visualize fast motion without stroboscopic effects. However, the higher the frame rate, the higher the radiation dose. Therefore, the frame rate should be chosen according to the clinical need and be as low as reasonably possible. For example, in pediatric cardiology, frame rates of 60 pulses per second are required compared to 0.5 p/s for slowly moving objects. A reduction to half pulse rate reduces dose by about half. The reduction from 30 p/s to 7.5 p/s reduces dose to about 25%.
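The proportionality behind these figures, assuming a fixed dose per pulse, can be stated in one line; the function name and reference rate are illustrative:

def relative_dose(pulse_rate: float, reference_rate: float = 30.0) -> float:
    # At a fixed dose per pulse, dose per second scales with pulses per second
    return pulse_rate / reference_rate

print(relative_dose(15.0))  # 0.5  -> about half the dose
print(relative_dose(7.5))   # 0.25 -> about a quarter of the dose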
When using pulsed fluoroscopy, radiation dose is only applied in prespecified intervals of time, thus less dose is used to produce the same image sequence. For the time in between, the last image stored is displayed.
Another tool for decreasing dose is collimation. It may be that, of the field of view provided by the detector, only a small part is of interest for the intervention. A collimator can shield the X-ray tube in the parts that do not need to be visible, thus only sending dose to the detector for the body parts in question. Modern C-arms enable navigation on acquired images without constant fluoroscopy.
References
External links
Video of a hybrid Operating Room in Brazil
A reference about neurosurgical hybrid operating rooms on NeuroNews
Medical equipment
Surgical procedures and techniques | Hybrid operating room | [
"Biology"
] | 6,628 | [
"Medical equipment",
"Medical technology"
] |
34,079,692 | https://en.wikipedia.org/wiki/Zero%20carbon%20housing | Zero-carbon housing is housing that does not emit greenhouse gasses (GHGs) into the atmosphere, either directly (Scope 1), or indirectly due to consumption of electricity produced using fossil fuels (Scope 2). Most commonly zero-carbon housing is taken to mean zero emissions of carbon dioxide, which is the main climate pollutant from homes, although fugitive methane may also be emitted from natural gas pipes and appliances.
Definition
There are nevertheless a number of definitions of zero carbon housing, particularly concerning the scope of emissions in the housing lifecycle (eg construction vs operation or refurb), and whether it is acceptable to count off-site emissions reduction (eg due to renewable energy export) or other external reductions against any residual emissions from the house to make it a Net Zero Home. The Chancery Lane legal climate project gives 6 definitions of zero carbon housing or buildings, of which 2 explicitly allow for the inclusion of off-site emissions reductions, via off-site renewables or other carbon offsets, and one is a net zero definition, allowing for net renewable energy export to be included. Some definitions are at odds with the apparent meaning of zero carbon, with the UK government at one point proposing to define a zero carbon home as one with "70 per cent reduction in carbon emissions against 2006 standards" - ie by definition not literally zero, as it allows up to 30% of conventional emissions.
Construction vs operation: Some scopes cover operation only, some give the option of including construction too. For the purposes of present day policy to reduce emissions, it is most useful to include construction and operation in the scope of new buildings, and refurbishment and operational emissions in the scope for existing buildings (as their construction impacts cannot be changed in retrospect). For a refurbishment to be genuinely zero-carbon, the embedded carbon needs to be "paid back" by the emissions saved by the house within a timescale relevant for action on climate change (normally within a few years), and well within the lifetime of the equipment concerned. Where a new zero carbon house is constructed, the embedded carbon of the whole building must be considered and paid back. As there is substantial embedded carbon in conventional building materials such as brick and concrete, a new zero carbon home is a bigger challenge than a retrofit and is likely to need more novel materials.
Another way in which a home can become zero carbon in operation is simply that it is powered, heated and cooled purely by a zero carbon electricity grid. While these are currently (2024) few (eg Iceland, Nepal), a significant number of countries are targeting zero carbon electricity grids by 2035, including Austria, Belgium, Canada, France, Germany, Luxembourg, the Netherlands, Switzerland and the UK.
Retrofitting existing conventional homes to become zero carbon in use
The following main changes are required:
Eliminate direct greenhouse gas emissions
Most conventional houses in countries where space heating is required use fossil fuels or wood for space heating, hot water and cooking. In order to become zero carbon, these heating systems need to be replaced with zero emission heating methods. The main options are:
Heat pumps — powered by electricity, heat pumps deliver high efficiency by drawing on the heat energy in ambient air/ground, and can achieve an apparent efficiency of 400% or more, i.e. heat delivered is 4x the electrical energy in. They may use air, water or ground as the heat source, and can deliver heat as warm air (air-to-air heat pump) or via a wet system using radiators or under-floor heating (air-to-water heat pump). Most buildings in Norway already use heat pumps, and they are being rolled out with some government support in many countries in Western Europe, while heat pump installations now exceed gas furnace installations in the USA. As heat pumps often use lower flow temperatures than traditional fossil fuelled systems, homes may need improvements in insulation and larger emitters (e.g. radiators) when they install a heat pump. One study estimates the embodied carbon in a UK installed heat pump at 1,563 kg. For average UK heat demand of 10,000 kWh per year and a sCOP (seasonal efficiency) of 4, this would use 2,500 kWh of electricity, emitting 2,500 kWh x 156 g/kWh = 390 kg. A gas boiler at 85% efficiency would have emitted (10,000/0.85) kWh x 181 g/kWh = 2,129 kg. Therefore about 1,739 kg is saved per year, and the heat pump carbon payback is around 11 months (see the sketch after this list). The payback would be longer in a country with higher grid carbon intensity.
Direct (resistive) electric heating — Already widely used, but less efficient than heat pumps (max output 100% of electricity in), and therefore significantly more expensive for space and water heating. In the form of induction heating, is a replacement for fossil fuelled cooking.
Hydrogen fired boilers/furnaces — a hydrogen boiler emits only water vapour, so is zero emission locally. However, it will only be zero emission overall if the hydrogen is from zero carbon sources, eg green hydrogen from electrolysis powered by renewable or nuclear energy. Also requires significant distribution infrastructure. Hydrogen boilers have been widely demonstrated but have no adoption at scale. They face competition from heat pumps, which make much more efficient use of available renewable electricity.
Passive solar heating — uses large window areas, appropriately orientated to the sun, to absorb solar energy directly for space heating. In a retrofit situation this approach may need significant building remodelling to enlarge or reorient windows, although a conservatory or sunspace may be an easier add-on alternative. Passive solar heating does not work at night and is likely to only provide a part of the home's heating demand. It may also cause overheating in summer if not appropriately controlled, eg with shades, shutters or blinds.
Solar thermal heating of domestic hot water — uses roof panels with fluid circulated through them to heat domestic hot water directly. Increasingly, consumers choose solar PV panels instead, which can also heat water through a diverter or heat pump, but supply electricity for other uses too.
Biomass — Under some circumstances use of wood burning in a stove or biomass boiler may be considered zero carbon, if the source of the wood is known and it can be confirmed that carbon equivalent to that produced from burning has been captured or will be within a short timescale. However this information is rarely available in practice and biomass has become highly contentious as a zero carbon solution. Additionally, biomass systems normally produce significant local air pollution due to wood smoke.
In some locations fossil fuelled generators would need to be replaced by solar PV/battery systems.
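The carbon-payback arithmetic in the heat pump item above can be reproduced with a minimal sketch (the constants are the figures quoted in the text, not authoritative values):

# Carbon payback of a heat pump vs a gas boiler, using the figures above.
ANNUAL_HEAT_DEMAND_KWH = 10_000   # average UK home
SCOP = 4.0                        # seasonal efficiency of the heat pump
GRID_INTENSITY_G_PER_KWH = 156    # UK grid electricity
GAS_INTENSITY_G_PER_KWH = 181     # natural gas, per kWh of fuel burned
BOILER_EFFICIENCY = 0.85
EMBODIED_CARBON_KG = 1563         # manufacture and installation of the heat pump

hp_kg = ANNUAL_HEAT_DEMAND_KWH / SCOP * GRID_INTENSITY_G_PER_KWH / 1000
gas_kg = ANNUAL_HEAT_DEMAND_KWH / BOILER_EFFICIENCY * GAS_INTENSITY_G_PER_KWH / 1000
saving_kg = gas_kg - hp_kg                         # ~1,739 kg CO2 per year
payback_months = EMBODIED_CARBON_KG / saving_kg * 12
print(f"saving {saving_kg:.0f} kg CO2/yr, payback {payback_months:.0f} months")
# saving 1739 kg CO2/yr, payback 11 months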
The cost of these measures to householders is naturally a critical factor. Because conventional systems benefit from economies of scale and installation skills are widely available, new zero carbon technologies may have a higher capital cost, although this may be offset by lower operating costs or efficiency savings, depending on the relative costs of electricity and fossil fuels. For this reason some governments provide householders with grants or subsidies towards the cost of the shift, for example the Boiler Upgrade Scheme in the UK, which helps to fund heat pump installations.
Ensure that the house generates more electricity than it consumes from the grid
In almost every case the renewable source of choice for dwellings is solar photovoltaic (PV) power. Use of solar PV power is now becoming routine worldwide, as solar power costs have fallen to become the cheapest source of electricity. Solar panels are typically placed on roofs, outhouses, or on the ground near the home, and it is practical for almost all scales of dwelling and most parts of the world. The only exception may be flats / apartments in dense urban areas, which may lack a roof or even any exposure to the sun.
To deliver a zero carbon house, the size /generation capacity of the PV array must match the annual consumption of the house. This is often straightforward, even if the home is using electricity for heating, directly or via a heat pump, or for cooling. In the case of cooling the solar energy availability will match the cooling demand quite well, but this is not the case with winter heating in higher latitudes. In this situation the house will typically import electricity for heating and other purposes in the winter, and export excess solar power in summer. To be net zero the export must exceed the import.
Home batteries are widely used with solar power, to provide electricity at night or in dull conditions, and for cost advantage where export rates are low. In this situation it may make financial sense to store electricity rather than re-import it.
Other forms of renewable power are possible in domestic situations, including micro hydro and wind turbines, but the larger size of this equipment restricts it to larger farms or estates, or to communal facilities, eg a wind turbine on an apartment block.
Maximise energy efficiency
Energy efficiency is not strictly necessary to achieve zero carbon housing, so long as the house is able to cover its electricity demand with renewable energy generated on site. However, greater energy efficiency reduces the scale of renewable generation required, and the cost of electricity imported, and may increase comfort by reducing temperature variations. At a national / economy level greater domestic energy efficiency reduces the need for large scale grid generation and transmission infrastructure, and electricity imports. The main energy efficiency approaches are:
Building fabric insulation to reduce space heating and cooling needs: Existing buildings can have their energy consumption cut significantly by insulating walls, floors and roofs, and by related measures such as draughtproofing. While some measures, eg loft/attic insulation using rockwool, are cheap and simple, others such as external wall insulation are more disruptive and expensive. Householders have to make careful analysis of the costs and benefits in terms of energy costs saved. In some countries there is state support for some home insulation measures.
Efficient appliances and lighting: these enable a cut in energy consumption without any change in occupant behaviour. For example, modern LED lighting uses 75% less electricity than traditional incandescent bulbs. Almost all appliances, including white goods, computers, TVs and refrigerators, have been developed to use less electricity: even since 1995, when they were already a mature product, refrigerators in the EU are estimated to have had their power consumption cut by 60%. But more efficient appliances can be more expensive, and consumers find it hard to know or calculate whether the more efficient products are worthwhile. For this reason certification schemes, including the Energy Label in the EU and Energy Star in the United States, have been developed to help consumers.
Efficient behaviour: home occupants have a large influence on the energy consumption of a home. Typical behaviours include:
whether lights and appliances are switched off when not in use
frequency of use of washing appliances such as clothes washers, dishwashers and tumble driers, and energy intensity of programs selected (eg washing temperature)
whether high energy but optional equipment like hot tubs, tumble driers and electric power showers are installed and used
preferences regarding internal temperature settings for space heating and cooling.
Fabric first?
A major topic of debate in housing circles is whether retrofit should focus on "fabric first": i.e. maximising energy efficiency before updating energy supply approaches to eliminate fossil fuel use and add renewable generation. Proponents suggest that this approach is necessary to avoid over sizing energy supply systems such as heat pumps and to minimise overall energy demand in the economy. Opponents of fabric first suggest that major building upgrades such as wall and floor insulation and new windows are expensive and disruptive, and may deter residents from taking any action at all to move their homes towards zero carbon. By comparison, they say, energy supply equipment such as heat pumps and solar PV panels are cheaper and deliver larger reductions in carbon emissions and bills.
Design Considerations for new Zero Carbon Housing
There are two main areas to consider in designing and building zero carbon housing:
Design for maximum energy efficiency and zero carbon energy supply in operation;
Minimising embedded carbon in the building fabric, so that any carbon payback time is short.
Design for maximum energy efficiency and zero carbon energy supply in operation
The same approaches as set out in the above section are required, and it is normally cheaper to design these features into a house from the start, than to build a conventional house and retrofit it later. Key design approaches include:
Orientation of the home: In cooler climates the home should be orientated to take full advantage of active (eg PV) and passive solar heating. This involves making roofs face south (in the northern hemisphere) to maximise solar power, and specifying large south facing windows to maximise passive solar heating. Measures must also be taken to minimise overheating in summer, such as blinds, shutters and shading. In hotter climates a house can be orientated North-South to minimise insolation in the middle of the day and reduce overheating and cooling demand, although having a south facing roof for PV is still an advantage.
Attention should also be paid to the layout of multiple houses and surrounding features such as trees, so home solar panels are not shaded by trees or other houses. Tree felling to stop shading should be avoided as this is counterproductive in carbon terms. Joining houses as terraces or semi-detached housing is also advantageous as these houses insulate each other and reduce heat loss. In hotter climates trees should be retained or planted so that they can provide shading to homes and streets and reduce cooling needs.
High insulation and air tightness: this applies to all elements of a building envelope, i.e. floors, roofs, walls, windows and doors. Building codes and standards in many countries specify levels of insulation required by law in new buildings. For discussion of building insulation codes and technologies worldwide see building insulation. Modern building codes, if complied with, may be adequate to achieve zero carbon in operation if linked with an appropriate energy supply. They may specify either or both of materials performance, normally in terms of the U-value of a material or combination of materials, measured in W/m2K, and/or overall building performance in kWh/m2/year (the heat-loss arithmetic behind U-values is sketched after this list). For example, UK regulations specify that walls should be below 0.18 W/m2K. Building energy consumption rates vary enormously: the UK's older houses use 259 kWh/m2, while new houses use 100 kWh/m2. However there are indications that better performance is possible, with achievement of 50 kWh/m2/yr relatively straightforward through retrofit. Meanwhile the demanding Passivhaus standard requires no more than 15 kWh/m2 (for space heating only), which is achievable, though currently considered specialist and high end.
Air tightness refers to minimising air leakage or draughts into and out of a building. If cold air leaks in and/or warm air leaks out, this increases heating requirements (or cooling, in hot climates). Air tightness is measured in air changes per hour (AC/H). An example of a high standard of air tightness is the Passivhaus standard, which requires less than 0.6 AC/H. There is also a need for a minimum air change level, so that damp and stale air does not build up, with negative health impacts for occupants. In order to achieve both requirements a mechanical ventilation with heat recovery (MVHR) system is often specified, though this increases costs.
Renewable energy supply integrated into the building: Solar PV panels can be integrated into a roof rather than mounted above conventional roofing materials like tiles. This enables saving on roofing materials and may improve appearance. A house can also be designed for heat pump heating, by specifying underfloor heating which is the best heat emitter for a heat pump: it allows lower flow temperatures which increase heat pump efficiency.
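The heat-loss arithmetic behind U-values, referenced in the insulation item above, follows the steady-state relation Q = U x A x ΔT. A minimal sketch (the wall area, temperatures and the uninsulated U-value are made-up illustrative inputs):

# Steady-state conductive heat loss through a building element: Q = U * A * dT.
def heat_loss_watts(u_value_w_per_m2k, area_m2, delta_t_kelvin):
    return u_value_w_per_m2k * area_m2 * delta_t_kelvin

# 100 m2 of wall, 20 C inside, 0 C outside.
for u in (1.5, 0.18):  # rough uninsulated solid wall vs the UK new-build limit
    print(f"U = {u:.2f} W/m2K -> {heat_loss_watts(u, 100, 20):.0f} W")
# U = 1.50 W/m2K -> 3000 W
# U = 0.18 W/m2K -> 360 W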
Minimising embedded carbon in the building fabric
See Green Building.
Additional Benefits of Zero Carbon Housing
Health
Zero carbon houses offer much cleaner indoor air because they avoid fossil fuel combustion, which releases volatile gases and pollutants. Appliances such as gas stoves, heaters, dryers, and ovens that rely on burning fuel inside the home worsen indoor air quality and can lead to respiratory issues for the occupants. Not only is indoor air quality affected, but so is outdoor air quality: pollution from residential buildings is estimated to be responsible for about 15,500 deaths per year in the United States alone. Replacing appliances that run on fossil fuels can improve indoor air quality and reduce asthma symptoms in children by up to 42%, as well as decrease fire hazards in homes.
Costs
As previously mentioned, energy efficient homes can save the occupant on their utility bills by both replacing their appliances with energy efficient appliances as well as updating their insulation and building envelope. For every $1 invested in improvements towards creating a zero carbon home, approximately $2 are saved in electricity generation and utility costs.
Success with Zero Carbon Housing
It is now routinely possible to achieve net zero carbon housing, even without significant energy efficiency retrofit, by combining heat pump and solar PV technologies. For example, in the UK the average house uses 12,000 kWh per year for heating, and 2,900 kWh per year for electrical appliances. Using a heat pump to supply this amount of heat will require about 3,000 kWh (assuming a sCOP of 4). This gives a total electrical demand of 5,900 kWh per year, which can be supplied by a solar array of about 6.3 kW (figures derived from the Energy Saving Trust calculator in 2024), which is about 16 panels. This approach relies on the grid to supply energy in winter and receive it back in summer, as batteries cannot provide seasonal energy storage. Additional insulation would reduce the heat demand and therefore the solar array size needed.
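A minimal sketch of the sizing arithmetic in this example (the annual yield of roughly 930 kWh per kW of installed panels and the 0.4 kW panel rating are assumptions consistent with the figures quoted above, not values taken from the Energy Saving Trust calculator):

# Sizing a solar array to cover a heat pump plus appliance load over a year.
HEAT_DEMAND_KWH = 12_000
APPLIANCES_KWH = 2_900
SCOP = 4.0
YIELD_KWH_PER_KW = 930   # assumed annual UK yield per kW of panels
PANEL_KW = 0.4           # assumed rating of a single panel

demand_kwh = HEAT_DEMAND_KWH / SCOP + APPLIANCES_KWH   # 5,900 kWh per year
array_kw = demand_kwh / YIELD_KWH_PER_KW               # ~6.3 kW
print(f"array: {array_kw:.1f} kW (~{array_kw / PANEL_KW:.0f} panels)")
# array: 6.3 kW (~16 panels)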
See also
Green building
Net energy gain
Zero heating building
Zero-energy building
References
Sustainable architecture | Zero carbon housing | [
"Engineering",
"Environmental_science"
] | 3,619 | [
"Sustainable architecture",
"Environmental social science",
"Architecture"
] |
34,080,241 | https://en.wikipedia.org/wiki/McKay%20graph | In mathematics, the McKay graph of a finite-dimensional representation $V$ of a finite group $G$ is a weighted quiver encoding the structure of the representation theory of $G$. Each node represents an irreducible representation of $G$. If $\chi_i, \chi_j$ are irreducible representations of $G$, then there is an arrow from $\chi_i$ to $\chi_j$ if and only if $\chi_j$ is a constituent of the tensor product $V \otimes \chi_i$. Then the weight $n_{ij}$ of the arrow is the number of times this constituent appears in $V \otimes \chi_i$. For finite subgroups $H$ of $\mathrm{GL}(2, \mathbb{C})$, the McKay graph of $H$ is the McKay graph of the defining 2-dimensional representation of $H$.
If $G$ has $r$ irreducible characters, then the Cartan matrix $c_V$ of the representation $V$ of dimension $d$ is defined by $c_V = (d\,\delta_{ij} - n_{ij})_{ij}$, where $\delta_{ij}$ is the Kronecker delta. A result by Robert Steinberg states that if $g$ is a representative of a conjugacy class of $G$, then the vectors $(\chi_1(g), \ldots, \chi_r(g))$ are the eigenvectors of $c_V$ to the eigenvalues $d - \chi_V(g)$, where $\chi_V$ is the character of the representation $V$.
The McKay correspondence, named after John McKay, states that there is a one-to-one correspondence between the McKay graphs of the finite subgroups of $\mathrm{SL}(2, \mathbb{C})$ and the extended Dynkin diagrams, which appear in the ADE classification of the simple Lie algebras.
Definition
Let $G$ be a finite group, $V$ be a representation of $G$ and $\chi$ be its character. Let $\chi_1, \ldots, \chi_r$ be the irreducible representations of $G$. If
$$V \otimes \chi_i = \sum_j n_{ij}\,\chi_j,$$
then define the McKay graph $\Gamma_G$ of $G$, relative to $V$, as follows:
Each irreducible representation $\chi_i$ of $G$ corresponds to a node in $\Gamma_G$.
If $n_{ij} > 0$, there is an arrow from $\chi_i$ to $\chi_j$ of weight $n_{ij}$, written as $\chi_i \xrightarrow{n_{ij}} \chi_j$, or sometimes as $n_{ij}$ unlabeled arrows.
If $n_{ij} = n_{ji}$, we denote the two opposite arrows between $\chi_i$ and $\chi_j$ as an undirected edge of weight $n_{ij}$. Moreover, if $n_{ij} = 1$, we omit the weight label.
We can calculate the value of $n_{ij}$ using the inner product $\langle \cdot, \cdot \rangle$ on characters:
$$n_{ij} = \langle V \otimes \chi_i,\ \chi_j \rangle = \frac{1}{|G|} \sum_{g \in G} \chi(g)\,\chi_i(g)\,\overline{\chi_j(g)}.$$
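This formula can be evaluated class by class from a character table. A minimal sketch in Python (the helper name mckay_weights is made up; the example is the cyclic group Z/4 embedded in SL(2,C) as the powers of diag(i, -i), whose McKay graph is a 4-cycle, the extended Dynkin diagram of type A3):

import numpy as np

# n_ij = <V (x) chi_i, chi_j> = (1/|G|) sum_g chi_V(g) chi_i(g) conj(chi_j(g)),
# evaluated class by class using the class sizes.
def mckay_weights(class_sizes, chi_V, chars):
    s = np.asarray(class_sizes, dtype=complex)
    v = np.asarray(chi_V, dtype=complex)
    X = np.asarray(chars, dtype=complex)      # rows: irreducible characters
    n = (s * v * X) @ X.conj().T / s.real.sum()
    return np.rint(n.real).astype(int)

# Z/4 in SL(2,C): generator diag(i, -i); irreducible characters chi_k(g^m) = i**(k*m);
# the canonical 2-dimensional character is chi_1 + chi_3.
chars = [[1j ** (k * m) for m in range(4)] for k in range(4)]
chi_V = [2, 0, -2, 0]
print(mckay_weights([1, 1, 1, 1], chi_V, chars))
# [[0 1 0 1]
#  [1 0 1 0]
#  [0 1 0 1]
#  [1 0 1 0]]  -- a 4-cycle: the extended Dynkin diagram of type A_3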
The McKay graph of a finite subgroup of $\mathrm{GL}(2, \mathbb{C})$ is defined to be the McKay graph of its canonical representation.
For finite subgroups of $\mathrm{SL}(2, \mathbb{C})$, the canonical representation on $\mathbb{C}^2$ is self-dual, so $n_{ij} = n_{ji}$ for all $i, j$. Thus, the McKay graph of finite subgroups of $\mathrm{SL}(2, \mathbb{C})$ is undirected.
In fact, by the McKay correspondence, there is a one-to-one correspondence between the finite subgroups of $\mathrm{SL}(2, \mathbb{C})$ and the extended Coxeter–Dynkin diagrams of type A-D-E.
We define the Cartan matrix $c_V$ of $V$ as follows:
$$c_V = (d\,\delta_{ij} - n_{ij})_{ij},$$
where $d$ is the dimension of $V$ and $\delta_{ij}$ is the Kronecker delta.
Some results
If the representation $V$ of a finite group $G$ is faithful, then every irreducible representation is contained in some tensor power $V^{\otimes k}$, and the McKay graph of $V$ is connected.
The McKay graph of a finite subgroup of $\mathrm{SL}(2, \mathbb{C})$ has no self-loops, that is, $n_{ii} = 0$ for all $i$.
The arrows of the McKay graph of a finite subgroup of $\mathrm{SL}(2, \mathbb{C})$ are all of weight one.
Examples
Suppose $G = A \times B$, and there are canonical irreducible representations $c_A$ and $c_B$ of $A$ and $B$ respectively. If $\chi_i^A$, $i = 1, \ldots, k$, are the irreducible representations of $A$ and $\chi_j^B$, $j = 1, \ldots, \ell$, are the irreducible representations of $B$, then
$$\chi_i^A \times \chi_j^B, \qquad 1 \le i \le k,\ 1 \le j \le \ell,$$
are the irreducible representations of $A \times B$, where $(\chi_i^A \times \chi_j^B)(a, b) = \chi_i^A(a)\,\chi_j^B(b)$. In this case, we have
$$\langle (c_A \times c_B) \otimes (\chi_i^A \times \chi_j^B),\ \chi_m^A \times \chi_n^B \rangle = \langle c_A \otimes \chi_i^A,\ \chi_m^A \rangle \cdot \langle c_B \otimes \chi_j^B,\ \chi_n^B \rangle.$$
Therefore, there is an arrow in the McKay graph of $G$ between $\chi_i^A \times \chi_j^B$ and $\chi_m^A \times \chi_n^B$ if and only if there is an arrow in the McKay graph of $A$ between $\chi_i^A$ and $\chi_m^A$ and there is an arrow in the McKay graph of $B$ between $\chi_j^B$ and $\chi_n^B$. In this case, the weight on the arrow in the McKay graph of $G$ is the product of the weights of the two corresponding arrows in the McKay graphs of $A$ and $B$.
Felix Klein proved that the finite subgroups of $\mathrm{SL}(2, \mathbb{C})$ are the binary polyhedral groups; all are conjugate to subgroups of $\mathrm{SU}(2)$. The McKay correspondence states that there is a one-to-one correspondence between the McKay graphs of these binary polyhedral groups and the extended Dynkin diagrams. For example, the binary tetrahedral group $\overline{T}$ is generated by the matrices:
$$S = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \qquad V = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \qquad U = \frac{1}{\sqrt{2}}\begin{pmatrix} \varepsilon & \varepsilon^3 \\ \varepsilon & \varepsilon^7 \end{pmatrix},$$
where $\varepsilon$ is a primitive eighth root of unity. In fact, we have
$$\overline{T} = \{\, U^k,\ SU^k,\ VU^k,\ SVU^k \mid k = 0, \ldots, 5 \,\}.$$
The conjugacy classes of $\overline{T}$ are $C_1 = \{I\}$, $C_2 = \{-I\} = \{U^3\}$, $C_3 = \{\pm S, \pm V, \pm SV\}$ (the six elements of order 4), two classes $C_4, C_5$ of four elements of order 6 (represented by $U$ and $U^5$), and two classes $C_6, C_7$ of four elements of order 3 (represented by $U^2$ and $U^4$).
The character table of $\overline{T}$ is
$$\begin{array}{c|ccccccc}
 & I & -I & \pm S, \pm V, \pm SV & U & U^5 & U^2 & U^4 \\
\text{class size} & 1 & 1 & 6 & 4 & 4 & 4 & 4 \\ \hline
\chi_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\chi_2 & 1 & 1 & 1 & \omega & \omega^2 & \omega^2 & \omega \\
\chi_3 & 1 & 1 & 1 & \omega^2 & \omega & \omega & \omega^2 \\
c & 2 & -2 & 0 & 1 & 1 & -1 & -1 \\
c \otimes \chi_2 & 2 & -2 & 0 & \omega & \omega^2 & -\omega^2 & -\omega \\
c \otimes \chi_3 & 2 & -2 & 0 & \omega^2 & \omega & -\omega & -\omega^2 \\
\theta & 3 & 3 & -1 & 0 & 0 & 0 & 0
\end{array}$$
Here $\omega = e^{2\pi i/3}$. The canonical representation is here denoted by $c$ and the 3-dimensional irreducible representation by $\theta$. Using the inner product, we find that the McKay graph of $\overline{T}$ is the extended Coxeter–Dynkin diagram of type $\tilde{E}_6$.
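As a spot check of the inner-product formula on this table (a worked step not spelled out in the original, using the class sizes $1, 1, 6, 4, 4, 4, 4$), the weight of the edge between $c$ and $\theta$ is
$$n_{c\theta} = \langle c \otimes c,\ \theta \rangle = \frac{1}{24}\Big(1 \cdot 2 \cdot 2 \cdot 3 \;+\; 1 \cdot (-2) \cdot (-2) \cdot 3 \;+\; 6 \cdot 0 \cdot 0 \cdot (-1) \;+\; 0 + 0 + 0 + 0\Big) = \frac{24}{24} = 1.$$
Repeating the computation for every pair of irreducible characters yields three arms $\chi_1 - c - \theta$, $\chi_2 - (c \otimes \chi_2) - \theta$ and $\chi_3 - (c \otimes \chi_3) - \theta$ meeting at $\theta$, which is exactly the $\tilde{E}_6$ diagram.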
See also
ADE classification
Binary tetrahedral group
References
Further reading
Representation theory | McKay graph | [
"Mathematics"
] | 786 | [
"Representation theory",
"Fields of abstract algebra"
] |
46,897,030 | https://en.wikipedia.org/wiki/Bianconi%E2%80%93Barab%C3%A1si%20model | The Bianconi–Barabási model is a model in network science that explains the growth of complex evolving networks. This model can explain that nodes with different characteristics acquire links at different rates. It predicts that a node's growth depends on its fitness and can calculate the degree distribution. The Bianconi–Barabási model is named after its inventors Ginestra Bianconi and Albert-László Barabási. This model is a variant of the Barabási–Albert model. The model can be mapped to a Bose gas and this mapping can predict a topological phase transition between a "rich-get-richer" phase and a "winner-takes-all" phase.
Concepts
The Barabási–Albert (BA) model uses two concepts: growth and preferential attachment. Here, growth indicates the increase in the number of nodes in the network with time, and preferential attachment means that more connected nodes receive more links. The Bianconi–Barabási model, on top of these two concepts, uses another new concept called the fitness. This model makes use of an analogy with evolutionary models. It assigns an intrinsic fitness value to each node, which embodies all the properties other than the degree. The higher the fitness, the higher the probability of attracting new edges. Fitness can be defined as the ability to attract new links – "a quantitative measure of a node's ability to stay in front of the competition".
While the Barabási–Albert (BA) model explains the "first mover advantage" phenomenon, the Bianconi–Barabási model explains how latecomers also can win. In a network where fitness is an attribute, a node with higher fitness will acquire links at a higher rate than less fit nodes. This model explains that age is not the best predictor of a node's success, rather latecomers also have the chance to attract links to become a hub.
The Bianconi–Barabási model can reproduce the degree correlations of the Internet Autonomous Systems. This model can also show condensation phase transitions in the evolution of complex networks.
The BB model can predict the topological properties of the Internet.
Algorithm
The fitness network begins with a fixed number of interconnected nodes. Each node $i$ has a different fitness, described by a fitness parameter $\eta_i$ chosen from a fitness distribution $\rho(\eta)$.
Growth
The assumption here is that a node’s fitness is independent of time, and is fixed.
A new node $j$ with $m$ links and a fitness $\eta_j$ is added at each time-step.
Preferential attachment
The probability $\Pi_i$ that a new node connects one of its $m$ links to an existing node $i$ depends on the degree $k_i$ and on the fitness $\eta_i$ of node $i$, such that
$$\Pi_i = \frac{\eta_i k_i}{\sum_j \eta_j k_j}.$$
Each node's degree evolution with time can be predicted using the continuum theory. If the initial number of nodes is $m_0$, then the degree of node $i$ changes at the rate
$$\frac{\partial k_i}{\partial t} = m\,\frac{\eta_i k_i}{\sum_j \eta_j k_j}.$$
The evolution of $k_i$ follows a power law with a fitness-dependent exponent $\beta(\eta_i)$,
$$k_{\eta_i}(t, t_i) = m\left(\frac{t}{t_i}\right)^{\beta(\eta_i)},$$
where $t_i$ is the time of the creation of node $i$. Here,
$$\beta(\eta) = \frac{\eta}{C}, \qquad C = \int \rho(\eta)\,\frac{\eta}{1 - \beta(\eta)}\,d\eta.$$
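A minimal simulation sketch of this growth rule (the function name and parameter choices are made up; a uniform fitness distribution on [0, 1] is assumed purely for illustration):

import random

# Grow a Bianconi-Barabasi network: each new node attaches m links to existing
# nodes chosen with probability proportional to fitness * degree.
def bianconi_barabasi(n_nodes, m=2, seed=0):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_nodes)]   # eta_i ~ rho(eta), uniform here
    degree = [0] * n_nodes
    # start from a small fully connected core of m + 1 nodes
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n_nodes):
        weights = [fitness[i] * degree[i] for i in range(new)]  # eta_i * k_i
        targets = set()
        while len(targets) < m:                 # m distinct attachment targets
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            degree[new] += 1
            degree[t] += 1
    return fitness, degree

fitness, degree = bianconi_barabasi(5_000)
hub = max(range(5_000), key=degree.__getitem__)
print(f"hub: node {hub}, fitness {fitness[hub]:.2f}, degree {degree[hub]}")
# high-fitness early nodes typically become hubs; a late node with very high
# fitness can still overtake older, less fit nodes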
Properties
Equal fitnesses
If all fitnesses in a fitness network are equal, the Bianconi–Barabási model reduces to the Barabási–Albert model; when the degree is not considered, the model reduces to the fitness model (network theory).
When fitnesses are equal, the probability $\Pi_i$ that the new node is connected to node $i$ with degree $k_i$ is
$$\Pi_i = \frac{k_i}{\sum_j k_j}.$$
Degree distribution
The degree distribution of the Bianconi–Barabási model depends on the fitness distribution $\rho(\eta)$. Two scenarios can occur, depending on the distribution. If the fitness distribution has a finite domain, then the degree distribution will have a power law, just as in the BA model. In the second case, if the fitness distribution has an infinite domain, then the node with the highest fitness value will attract a large number of nodes and show a winner-takes-all scenario.
Measuring node fitnesses from empirical network data
There are various statistical methods to measure node fitnesses in the Bianconi–Barabási model from real-world network data. From the measurement, one can investigate the fitness distribution or compare the Bianconi–Barabási model with various competing network models in that particular network.
Variations of the Bianconi–Barabási model
The Bianconi–Barabási model has been extended to weighted networks displaying linear and superlinear scaling of the strength with the degree of the nodes as observed in real network data. This weighted model can lead to condensation of the weights of the network when few links acquire a finite fraction of the weight of the entire network.
Recently it has been shown that the Bianconi–Barabási model can be interpreted as a limit case of the model for emergent hyperbolic network geometry called Network Geometry with Flavor.
The Bianconi–Barabási model can be also modified to study static networks where the number of nodes is fixed.
Bose-Einstein condensation
Bose–Einstein condensation in networks is a phase transition observed in complex networks that can be described by the Bianconi–Barabási model. This phase transition predicts a "winner-takes-all" phenomena in complex networks and can be mathematically mapped to the mathematical model explaining Bose–Einstein condensation in physics.
Background
In physics, a Bose–Einstein condensate is a state of matter that occurs in certain gases at very low temperatures. Any elementary particle, atom, or molecule, can be classified as one of two types: a boson or a fermion. For example, an electron is a fermion, while a photon or a helium atom is a boson. In quantum mechanics, the energy of a (bound) particle is limited to a set of discrete values, called energy levels. An important characteristic of a fermion is that it obeys the Pauli exclusion principle, which states that no two fermions may occupy the same state. Bosons, on the other hand, do not obey the exclusion principle, and any number can exist in the same state. As a result, at very low energies (or temperatures), a great majority of the bosons in a Bose gas can be crowded into the lowest energy state, creating a Bose–Einstein condensate.
Bose and Einstein have established that the statistical properties of a Bose gas are governed by the Bose–Einstein statistics. In Bose–Einstein statistics, any number of identical bosons can be in the same state. In particular, given an energy state $\varepsilon$, the number of non-interacting bosons in thermal equilibrium at temperature $T$ is given by the Bose occupation number
$$n(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1},$$
where the constant $\mu$ (the chemical potential) is determined by an equation describing the conservation of the number of particles
$$N = \int d\varepsilon\; g(\varepsilon)\, n(\varepsilon),$$
with $g(\varepsilon)$ being the density of states of the system.
This last equation may lack a solution at low enough temperatures when $g(\varepsilon) \to 0$ for $\varepsilon \to 0$. In this case a critical temperature $T_c$ is found such that for $T < T_c$ the system is in a Bose-Einstein condensed phase and a finite fraction of the bosons are in the ground state.
The density of states $g(\varepsilon)$ depends on the dimensionality $d$ of the space. In particular $g(\varepsilon) \propto \varepsilon^{d/2 - 1}$, therefore $g(\varepsilon) \to 0$ for $\varepsilon \to 0$ only in dimensions $d > 2$. Therefore, a Bose-Einstein condensation of an ideal Bose gas can only occur for dimensions $d \geq 3$.
The concept
The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. The evolution of these networks is captured by the Bianconi–Barabási model, which includes two main characteristics of growing networks: their constant growth by the addition of new nodes and links, and the heterogeneous ability of each node to acquire new links, described by the node fitness. Therefore the model is also known as the fitness model.
Despite their irreversible and nonequilibrium nature, these networks follow the Bose statistics and can be mapped to a Bose gas.
In this mapping, each node is mapped to an energy state determined by its fitness and each new link attached to a given node is mapped to a Bose particle occupying the corresponding energy state. This mapping predicts that the Bianconi–Barabási model can undergo a topological phase transition in correspondence to the Bose–Einstein condensation of the Bose gas. This phase transition is therefore called Bose-Einstein condensation in complex networks.
Consequently, addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage," "fit-get-rich (FGR)," and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.
The mathematical mapping of the network evolution to the Bose gas
Starting from the Bianconi-Barabási model, the mapping of a Bose gas to a network can be done by assigning an energy $\varepsilon_i$ to each node, determined by its fitness through the relation
$$\varepsilon_i = -\frac{1}{\beta}\ln \eta_i,$$
where $\beta = 1/T$ plays the role of an inverse temperature. In particular, for $\beta = 0$ all the nodes have equal fitness, while for $\beta \gg 0$ nodes with different "energy" have very different fitness. We assume that the network evolves through a modified preferential attachment mechanism. At each time $t$ a new node $i$ with energy $\varepsilon_i$ drawn from a probability distribution $p(\varepsilon)$ enters in the network and attaches a new link to a node $j$ chosen with probability:
$$\Pi_j = \frac{e^{-\beta\varepsilon_j} k_j}{\sum_r e^{-\beta\varepsilon_r} k_r}.$$
In the mapping to a Bose gas, we assign to every new link attached by preferential attachment to node $j$ a particle in the energy state $\varepsilon_j$.
The continuum theory predicts that the rate at which links accumulate on node $i$ with "energy" $\varepsilon_i$ is given by
$$\frac{\partial k_i(\varepsilon_i, t, t_i)}{\partial t} = m\,\frac{e^{-\beta\varepsilon_i}\, k_i(\varepsilon_i, t, t_i)}{Z_t},$$
where $k_i(\varepsilon_i, t, t_i)$ indicates the number of links attached to the node $i$ that was added to the network at the time step $t_i$. $Z_t$ is the partition function, defined as:
$$Z_t = \sum_j e^{-\beta\varepsilon_j}\, k_j(\varepsilon_j, t, t_j).$$
The solution of this differential equation is:
$$k_i(\varepsilon_i, t, t_i) = m\left(\frac{t}{t_i}\right)^{f(\varepsilon_i)},$$
where the dynamic exponent $f(\varepsilon)$ satisfies $f(\varepsilon) = e^{-\beta(\varepsilon - \mu)}$, $\mu$ plays the role of the chemical potential, satisfying the equation
$$\int d\varepsilon\; p(\varepsilon)\,\frac{1}{e^{\beta(\varepsilon - \mu)} - 1} = 1,$$
where $p(\varepsilon)$ is the probability that a node has "energy" $\varepsilon$ and "fitness" $\eta = e^{-\beta\varepsilon}$. In the limit $t \to \infty$, the occupation number, giving the number of links linked to nodes with "energy" $\varepsilon$, follows the familiar Bose statistics
$$n(\varepsilon) = \frac{1}{e^{\beta(\varepsilon - \mu)} - 1}.$$
The definition of the constant $\mu$ in the network models is surprisingly similar to the definition of the chemical potential in a Bose gas. In particular, for probabilities $p(\varepsilon)$ such that $p(\varepsilon) \to 0$ for $\varepsilon \to 0$, at high enough values of $\beta$ we have a condensation phase transition in the network model. When this occurs, one node, the one with the highest fitness, acquires a finite fraction of all the links. The Bose–Einstein condensation in complex networks is, therefore, a topological phase transition after which the network has a star-like dominant structure.
Bose–Einstein phase transition in complex networks
The mapping of a Bose gas predicts the existence of two distinct phases as a function of the energy distribution. In the fit-get-rich phase, describing the case of uniform fitness, the fitter nodes acquire edges at a higher rate than older but less fit nodes. In the end the fittest node will have the most edges, but the richest node is not the absolute winner, since its share of the edges (i.e. the ratio of its edges to the total number of edges in the system) reduces to zero in the limit of large system sizes (Fig.2(b)). The unexpected outcome of this mapping is the possibility of Bose–Einstein condensation for , when the fittest node acquires a finite fraction of the edges and maintains this share of edges over time (Fig.2(c)).
A representative fitness distribution that leads to condensation is given by
$$\rho(\eta) = (\lambda + 1)(1 - \eta)^\lambda,$$
where $\lambda > 1$.
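One way to see the $\lambda > 1$ condition (a short check not spelled out in the original): substituting $\eta = e^{-\beta\varepsilon}$, under which $p(\varepsilon)\,d\varepsilon = \rho(\eta)\,d\eta$ and $e^{\beta\varepsilon} = 1/\eta$, the largest value the chemical-potential integral can attain (at $\mu = 0$) is
$$\int_0^\infty \frac{p(\varepsilon)\,d\varepsilon}{e^{\beta\varepsilon} - 1} = \int_0^1 \rho(\eta)\,\frac{\eta}{1 - \eta}\,d\eta = (\lambda + 1)\int_0^1 \eta\,(1 - \eta)^{\lambda - 1}\,d\eta = \frac{1}{\lambda}.$$
For $\lambda > 1$ this is below 1, so no chemical potential can satisfy the constraint and the excess links condense on the fittest node; note that $\beta$ has dropped out of the criterion entirely.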
However, the existence of the Bose–Einstein condensation or of the fit-get-rich phase does not depend on the temperature or $\beta$ of the system, but depends only on the functional form of the fitness distribution $\rho(\eta)$. In the end, $\beta$ falls out of all topologically important quantities. In fact, it can be shown that Bose–Einstein condensation exists in the fitness model even without mapping to a Bose gas. A similar gelation can be seen in models with superlinear preferential attachment; however, it is not clear whether this is an accident or whether a deeper connection lies between this and the fitness model.
See also
Barabási–Albert model
References
External links
Networks: A Very Short Introduction
Advance Network Dynamics
Social network analysis
Graph algorithms
Random graphs | Bianconi–Barabási model | [
"Mathematics"
] | 2,487 | [
"Mathematical relations",
"Graph theory",
"Random graphs"
] |
46,899,448 | https://en.wikipedia.org/wiki/Oil%20content%20meter | An oil content meter (OCM) is an integral part of all oily water separator (OWS) systems. Oil content meters are also sometimes referred to as oil content monitors, bilge alarms, or bilge monitors.
OCM technology
The OCM continuously monitors the oil content of the water pumped out through the discharge line of the OWS system, and will not allow the oil concentration of the exiting water to exceed the MARPOL standard of 15 ppm. This standard was first adopted in 1977 with Resolution A.393(X), published by the IMO. The standards have been updated various times; the most recent resolution is MEPC 108(49). The oil content meter will sound an alarm if the liquid leaving the system contains an unacceptable amount of oil. If the mixture is still above the standard, the bilge water is recirculated into the system until it meets the required criteria. The OCM passes light beams through the discharge water and gauges the oil concentration from the measured light intensity. Modern oil content meters also have a data logging system that can store oil concentration measurements for more than 18 months.
If the OCM persistently reads a very high oil content, the sensor may be fouled and need to be flushed out. Running clean water through the OCM sensor cell is one way it can be cleaned; scrubbing the sensor area with a bottle brush is another effective method. The MEPC 107(49) regulations set out stringent requirements: the OCM must be tamper-proof and must sound an alarm whenever it is being cleaned. When the alarm goes off, the OCM's functionality is checked by crew members.
An OCM is one part of what is called the oil discharge monitoring and control system. The first part is the oil content meter. The second is a flow meter, which measures the flow rate of the water at the discharge pipe. Third is a computing unit, which calculates how much oil has actually been discharged, along with the day and time of the discharge. Lastly, the overboard valve control system is essentially a valve that can stop the discharge from flowing out at the appropriate time.
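The computing unit's core calculation can be illustrated with a minimal sketch (the function name, sampling scheme and numbers are made up for illustration, not taken from any particular OCM):

# Total oil discharged is the integral of (oil concentration x flow rate) over time.
def total_oil_litres(ppm_samples, flow_lpm_samples, dt_minutes=1.0):
    """ppm_samples and flow_lpm_samples are aligned time series."""
    total = 0.0
    for ppm, flow_lpm in zip(ppm_samples, flow_lpm_samples):
        total += (ppm / 1e6) * flow_lpm * dt_minutes   # litres of oil this interval
    return total

ppm = [5, 12, 14, 16, 9]           # one reading per minute
flow = [200, 200, 180, 180, 200]   # discharge flow in litres per minute
print(f"oil discharged: {total_oil_litres(ppm, flow):.4f} L")
for p in ppm:
    if p > 15:
        print(f"alarm: {p} ppm exceeds the 15 ppm limit")
# oil discharged: 0.0106 L
# alarm: 16 ppm exceeds the 15 ppm limit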
Oil content meters measure how effectively the oily water separators on a ship are functioning. If the OCM computes that the oily discharge is above the 15 ppm standard, the oily water separator needs to be checked by the crew.
There are three types of oil that the oil content meter needs to check for: fuel oil, diesel, and emulsions.
See also
Marpol Annex I
Magic pipe
Port reception facilities
MARPOL 73/78
International Maritime Organization
References
Measuring instruments
Waste treatment technology | Oil content meter | [
"Chemistry",
"Technology",
"Engineering"
] | 585 | [
"Water treatment",
"Waste treatment technology",
"Measuring instruments",
"Environmental engineering"
] |
46,901,473 | https://en.wikipedia.org/wiki/Metocean | In offshore and coastal engineering, metocean refers to the syllabic abbreviation of meteorology and (physical) oceanography.
Metocean study
In various stages of an offshore or coastal engineering project a metocean study will be undertaken, in order to estimate the environmental conditions that directly influence the choices to be made during the project phase at hand, and to arrive at an effective and efficient solution to the stated problems and goals. In later phases of a project, more detailed and thorough metocean studies may be needed, depending on whether additional gains are expected with respect to the successful and efficient completion of the project.
Metocean conditions
Metocean conditions refer to the combined wind, wave and climate (etc.) conditions as found on a certain location. They are most often presented as statistics, including seasonal variations, scatter tables, wind roses and probability of exceedance. The metocean conditions may include, depending on the project and its location, statistics on:
Meteorology
wind speed, direction, gustiness, wind rose and wind spectrum
air temperature
humidity
occurrence and strength of typhoons, hurricanes and (other) cyclones
Physical oceanography
water level fluctuations
historical, expected and seasonal sea level changes
storm surges
tides
tsunamis
seiches
wind waves – wind seas and swells – characterised by statistics like: significant wave heights and periods, propagation directions and (directional) spectra
bathymetry
salinity, temperature and other constituents
stratification, density-driven currents and internal waves
ice occurrence, extent, thickness, strength and seabed gouging
Metocean data
The metocean conditions are preferably based on metocean data, which can come from measuring instruments deployed in or near the project area, global (re-analysis) models and remote sensing (often by satellites). For estimating probabilities of exceedance – for the relevant physical quantities – data spanning more than one year, including extreme events, are needed.
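A minimal sketch of an empirical probability-of-exceedance estimate from a measured series (the significant wave height values below are made-up illustrative data, not measurements):

# Empirical probability of exceedance for significant wave height Hs.
def prob_exceedance(samples, threshold):
    """Fraction of observations strictly above the threshold."""
    return sum(1 for x in samples if x > threshold) / len(samples)

hs_metres = [0.8, 1.1, 0.6, 2.3, 1.7, 3.4, 0.9, 1.2, 2.8, 1.5]  # hourly Hs record
for h in (1.0, 2.0, 3.0):
    print(f"P(Hs > {h:.1f} m) = {prob_exceedance(hs_metres, h):.2f}")
# P(Hs > 1.0 m) = 0.70
# P(Hs > 2.0 m) = 0.30
# P(Hs > 3.0 m) = 0.10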
By use of validated numerical models, the availability of metocean data can be extended. For instance, consider the case of a coastal location where no wave measurements are available. If there is long-term wave data available in a nearby offshore location (e.g. from satellites), a wind wave model can be employed to transform the offshore wave statistics to the nearshore location (provided the bathymetry is known).
Often, long-term local measurements of wave conditions during extreme events (e.g. hurricanes) are missing. By using estimates of the wind fields during past extreme events, the corresponding wave conditions can be computed through wave hindcasts.
Notes
References
Offshore engineering
Coastal engineering
Physical oceanography
Climate and weather statistics | Metocean | [
"Physics",
"Engineering"
] | 546 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Offshore engineering",
"Weather",
"Coastal engineering",
"Climate and weather statistics",
"Construction",
"Civil engineering",
"Physical oceanography"
] |