Dataset columns:
id: int64 (39 to 79M)
url: string (length 32 to 168)
text: string (length 7 to 145k)
source: string (length 2 to 105)
categories: list (length 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (length 0 to 27)
39,383,078
https://en.wikipedia.org/wiki/Interval%20propagation
In numerical mathematics, interval propagation or interval constraint propagation is the problem of contracting interval domains associated to variables of R without removing any value that is consistent with a set of constraints (i.e., equations or inequalities). It can be used to propagate uncertainties in the situation where errors are represented by intervals. Interval propagation considers an estimation problem as a constraint satisfaction problem. Atomic contractors A contractor associated to an equation involving the variables x1,...,xn is an operator which contracts the intervals [x1],..., [xn] (that are supposed to enclose the xi's) without removing any value for the variables that is consistent with the equation. A contractor is said to be atomic if it is not built as a composition of other contractors. The main theory used to build atomic contractors is based on interval analysis. Example. Consider for instance an equation which involves the three variables x1, x2 and x3. The associated contractor is given by the following statements For instance, the contractor performs the following calculus For other constraints, a specific algorithm for implementing the atomic contractor should be written. An illustration of the atomic contractor associated to an equation is provided by Figures 1 and 2. Decomposition For more complex constraints, a decomposition into atomic constraints (i.e., constraints for which an atomic contractor exists) should be performed. Consider for instance a constraint which could be decomposed into atomic constraints involving intermediate variables. The interval domains that should be associated to the new intermediate variables are Propagation The principle of interval propagation is to call all available atomic contractors until no more contraction can be observed. As a result of the Knaster–Tarski theorem, the procedure always converges to intervals which enclose all feasible values for the variables. A formalization of interval propagation can be made thanks to the contractor algebra. Interval propagation converges quickly to the result and can deal with problems involving several hundred variables. Example Consider the electronic circuit of Figure 3. Assume that from different measurements, we know that From the circuit, we have the following equations After performing the interval propagation, we get References Algebra of random variables Numerical analysis Statistical approximations
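The specific equations of the article are not preserved in this extract. As an illustrative sketch only (assuming the simple constraint x1 + x2 = x3 and hypothetical interval domains), a minimal forward–backward atomic contractor and the propagation loop described above could look like this:

```python
def intersect(a, b):
    """Intersection of two intervals given as (lo, hi); raises if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: constraints are infeasible")
    return (lo, hi)

def contract_sum(x1, x2, x3):
    """Contract the intervals under the constraint x1 + x2 = x3.

    Forward step: x3 is narrowed by x1 + x2.
    Backward steps: x1 by x3 - x2, and x2 by x3 - x1.
    No value consistent with the constraint is ever removed.
    """
    x3 = intersect(x3, (x1[0] + x2[0], x1[1] + x2[1]))
    x1 = intersect(x1, (x3[0] - x2[1], x3[1] - x2[0]))
    x2 = intersect(x2, (x3[0] - x1[1], x3[1] - x1[0]))
    return x1, x2, x3

# Hypothetical domains; repeated calls propagate until a fixed point is reached.
x1, x2, x3 = (0, 10), (2, 5), (7, 8)
for _ in range(10):                  # a few sweeps suffice here
    x1, x2, x3 = contract_sum(x1, x2, x3)
print(x1, x2, x3)                    # (2, 6) (2, 5) (7, 8)
```

Each call shrinks the intervals without discarding any consistent value; repeating all available contractors until nothing changes is the propagation step described above.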
Interval propagation
[ "Mathematics" ]
443
[ "Computational mathematics", "Mathematical relations", "Statistical approximations", "Numerical analysis", "Approximations" ]
39,383,381
https://en.wikipedia.org/wiki/2-hydroxy-dATP%20diphosphatase
2-hydroxy-dATP diphosphatase (also known as oxidized purine nucleoside triphosphatase, (2'-deoxy)ribonucleoside 5'-triphosphate pyrophosphohydrolase, Nudix hydrolase 1 (NUDT1), MutT homolog 1 (MTH1), or 7,8-dihydro-8-oxoguanine triphosphatase) is an enzyme that in humans is encoded by the NUDT1 gene. During DNA repair, the enzyme hydrolyses oxidized purines and prevents their addition onto the DNA chain. As such it has an important role in aging and cancer development. Function This enzyme catalyses the following chemical reaction: 2-hydroxy-dATP + H2O → 2-hydroxy-dAMP + diphosphate The enzyme hydrolyses oxidized purine nucleoside triphosphates. The enzyme is used in DNA repair, where it hydrolyses the oxidized purines and prevents their addition onto the DNA chain. Misincorporation of oxidized nucleoside triphosphates into DNA and/or RNA during replication and transcription can cause mutations that may result in carcinogenesis or neurodegeneration. First isolated from Escherichia coli because of its ability to prevent the occurrence of 8-oxoguanine in DNA, the protein encoded by this gene is an enzyme that hydrolyzes oxidized purine nucleoside triphosphates, such as 8-oxo-dGTP, 8-oxo-dATP, 2-oxo-dATP, 2-hydroxy-dATP, and 2-hydroxy rATP, to monophosphates, thereby preventing misincorporation. MutT enzymes in non-human organisms often have substrate specificity for certain types of oxidized nucleotides, such as that of E. coli, which is specific to 8-oxoguanine nucleotides. Human MTH1, however, has substrate specificity for a much broader range of oxidatively damaged nucleotides. The mechanism of hMTH1's broad specificity for these oxidized nucleotides is derived from their recognition in the enzyme's substrate binding pocket due to an exchange of protonation state between two nearby aspartate residues. The encoded protein is localized mainly in the cytoplasm, with some in the mitochondria, suggesting that it is involved in the sanitization of nucleotide pools both for nuclear and mitochondrial genomes. In plants, MTH1 has also been shown to enhance resistance to heat- and paraquat-induced oxidative stress, resulting in fewer dead cells and less accumulation of hydrogen peroxide. Several alternatively spliced transcript variants, some of which encode distinct isoforms, have been identified. Additional variants have been observed, but their full-length natures have not been determined. A single-nucleotide polymorphism that results in the production of an additional, longer isoform has been described. Research Aging A mouse model has been studied that over-expresses hMTH1-Tg (NUDT1). The hMTH1-Tg mice express high levels of the hMTH1 hydrolase that degrades 8-oxodGTP and 8-oxoGTP and therefore excludes 8-oxoguanine from DNA and RNA. The steady-state levels of 8-oxoguanine in DNA of several organs, including the brain, are significantly reduced in hMTH1-Tg over-expressing mice. Conversely, MTH1-null mice exhibit a significantly higher level of 8-oxo-dGTP accumulation than the wild type. Over-expression of hMTH1 prevents the age-dependent accumulation of DNA 8-oxoguanine that occurs in wild-type mice. The lower levels of oxidized guanines are associated with greater longevity. The hMTH1-Tg animals have a significantly longer lifespan than their wild-type littermates. These findings provide a link between ageing and oxidative DNA damage (see DNA damage theory of aging). 
Cancer Studies have suggested that this enzyme plays a role both in preventing the formation of cancer cells and in the proliferation of cancer cells. This makes it a topic of interest in cancer research, both as a potential method for healthy cells to prevent cancer and as a weakness to target within existing cancer cells. Eliminating the MTH1 gene in mice results in over three times more mice developing tumors compared to a control group. The enzyme's much-studied ability to sanitize a cell's nucleotide pool prevents the cell from developing mutations, including cancerous ones. Specifically, another study found that MTH1 inhibition in cancer cells leads to incorporation of 8-oxo-dGTP and other oxidatively damaged nucleotides into the cell's DNA, damaging it and causing cell death. However, cancer cells have also been shown to benefit from use of MTH1. Cells from malignant breast tumors exhibit extreme MTH1 expression compared to other human cells. Because a cancer cell divides much more rapidly than a normal human cell, it is far more in need of an enzyme like MTH1 that prevents fatal mutations during replication. This property of cancer cells could allow for monitoring of cancer treatment efficacy by measuring MTH1 expression. Development of suitable probes for this purpose is currently underway. Disagreement exists concerning MTH1's functionality relative to prevention of DNA damage and cancer. Subsequent studies have had difficulty reproducing previously reported cytotoxic or antiproliferative effects of MTH1 inhibition on cancer cells, even calling into question whether MTH1 truly does serve to remove oxidatively damaged nucleotides from a cell's nucleotide pool. One study of newly discovered MTH1 inhibitors suggests that the anticancer properties exhibited by the older MTH1 inhibitors may be due to off-target cytotoxic effects. After revisiting the experiment, the original authors of this claim found that while the original MTH1 inhibitors in question lead to damaged nucleotides being incorporated into DNA, the other inhibitors, which do not induce toxicity, fail to introduce these DNA lesions. Research into this topic is ongoing. As a drug target MTH1 is a potential drug target to treat cancer; however, there are conflicting results regarding the cytotoxicity of MTH1 inhibitors toward cancer cells. Karonudib, an MTH1 inhibitor, is currently being evaluated in a phase I clinical trial for safety and tolerability. A potent and selective MTH1 inhibitor, AZ13792138, developed by AstraZeneca, has been made available as a chemical probe to academic researchers. However, AstraZeneca has found that neither AZ13792138 nor genetic knockdown of MTH1 displays any significant cytotoxicity to cancer cells. See also NUDT15 8-oxo-dGTP diphosphatase References Further reading External links Nudix hydrolases EC 3.6.1 DNA repair
2-hydroxy-dATP diphosphatase
[ "Biology" ]
1,494
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
39,383,383
https://en.wikipedia.org/wiki/Proton%20exchange%20membrane%20electrolysis
Proton exchange membrane (PEM) electrolysis is the electrolysis of water in a cell equipped with a solid polymer electrolyte (SPE) that is responsible for the conduction of protons, separation of product gases, and electrical insulation of the electrodes. The PEM electrolyzer was introduced to overcome the issues of partial load, low current density, and low pressure operation currently plaguing the alkaline electrolyzer. It involves a proton-exchange membrane. Electrolysis of water is an important technology for the production of hydrogen to be used as an energy carrier. With fast dynamic response times, large operational ranges, and high efficiencies, water electrolysis is a promising technology for energy storage coupled with renewable energy sources. In terms of sustainability and environmental impact, PEM electrolysis is considered as a promising technique for high purity and efficient hydrogen production since it emits only oxygen as a by-product without any carbon emissions. The IEA said in 2022 that more effort was needed. The availability of iridium may be a constraint for the widespread adoption of PEM technology. History The use of a PEM for electrolysis was first introduced in the 1960s by General Electric, developed to overcome the drawbacks to the alkaline electrolysis technology. The initial performances yielded 1.0 A/cm2 at 1.88 V, which was, compared to the alkaline electrolysis technology of that time, very efficient. In the late 1970s the alkaline electrolyzers were reporting performances around 0.215 A/cm2 at 2.06 V, thus prompting a sudden interest in the late 1970s and early 1980s in polymer electrolytes for water electrolysis. PEM water electrolysis technology is similar to PEM fuel cell technology, where solid poly-sulfonated membranes, such as Nafion and Fumapem, were used as an electrolyte (proton conductor). A thorough review of the historical performance from the early research to that of today can be found, in chronological order and with many of the operating conditions, in the 2013 review by Carmo et al. Advantages One of the largest advantages to PEM electrolysis is its ability to operate at high current densities. This can result in reduced operational costs, especially for systems coupled with very dynamic energy sources such as wind and solar, where sudden spikes in energy input would otherwise result in uncaptured energy. The polymer electrolyte allows the PEM electrolyzer to operate with a very thin membrane (~100-200 μm) while still allowing high pressures, resulting in low ohmic losses, primarily caused by the conduction of protons across the membrane (0.1 S/cm), and a compressed hydrogen output. The polymer electrolyte membrane, due to its solid structure, exhibits a low gas crossover rate resulting in very high product gas purity. Maintaining a high gas purity is important for storage safety and for the direct usage in a fuel cell. The safety limit for H2 in O2 at standard conditions is 4 mol-% H2 in O2. Science An electrolyzer is an electrochemical device to convert electricity and water into hydrogen and oxygen; these gases can then be used as a means to store energy for later use. This use can range from electrical grid stabilization from dynamic electrical sources such as wind turbines and solar cells to localized hydrogen production as a fuel for fuel cell vehicles. The PEM electrolyzer utilizes a solid polymer electrolyte (SPE) to conduct protons from the anode to the cathode while insulating the electrodes electrically. 
Under standard conditions the enthalpy required for the decomposition of water is 285.9 kJ/mol. A portion of the required energy for a sustained electrolysis reaction is supplied by thermal energy and the remainder is supplied through electrical energy. Reactions The actual value for the open circuit voltage of an operating electrolyzer will lie between 1.23 V and 1.48 V depending on how the cell/stack design utilizes the thermal energy inputs. This is however quite difficult to determine or measure because an operating electrolyzer also experiences other voltage losses from internal electrical resistances, proton conductivity, mass transport through the cell and catalyst utilization, to name a few. Anode reaction The half reaction taking place on the anode side of a PEM electrolyzer is commonly referred to as the Oxygen Evolution Reaction (OER). Here the liquid water reactant is supplied to the catalyst, where the supplied water is oxidized to oxygen, protons and electrons: 2 H2O (l) → O2 (g) + 4 H+ (aq) + 4 e− Cathode reaction The half reaction taking place on the cathode side of a PEM electrolyzer is commonly referred to as the Hydrogen Evolution Reaction (HER). Here the supplied electrons and the protons that have conducted through the membrane are combined to create gaseous hydrogen: 4 H+ (aq) + 4 e− → 2 H2 (g) The illustration below depicts a simplification of how PEM electrolysis works, showing the individual half-reactions together along with the complete reaction of a PEM electrolyzer. In this case the electrolyzer is coupled with a solar panel for the production of hydrogen; however, the solar panel could be replaced with any source of electricity. Second law of thermodynamics As per the second law of thermodynamics the enthalpy of the reaction is ΔH = ΔG + TΔS, where ΔG is the Gibbs free energy of the reaction, T is the temperature of the reaction and ΔS is the change in entropy of the system, so that H2O (l) + ΔH → H2 + 1/2 O2. The overall cell reaction with thermodynamic energy inputs then becomes: H2O (l) + 237.2 kJ/mol (as electricity) + 48.6 kJ/mol (as heat) → H2 + 1/2 O2 The thermal and electrical inputs shown above represent the minimum amount of energy that can be supplied by electricity in order to obtain an electrolysis reaction. Assuming that the maximum amount of heat energy (48.6 kJ/mol) is supplied to the reaction, the reversible cell voltage can be calculated. Open circuit voltage (OCV) The reversible cell voltage is E_rev = ΔG/(zF) ≈ 1.23 V, where z is the number of electrons transferred and F is Faraday's constant. The calculation of cell voltage assuming no irreversibilities exist and all of the thermal energy is utilized by the reaction is referred to as the lower heating value (LHV). The alternative formulation, using the higher heating value (HHV), is calculated assuming that all of the energy to drive the electrolysis reaction is supplied by the electrical component of the required energy, which results in a higher reversible cell voltage. When using the HHV the voltage calculation is referred to as the thermoneutral voltage. Voltage losses The performance of electrolysis cells, like fuel cells, is typically compared through polarization curves, which are obtained by plotting cell voltages against current densities. 
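Before turning to the loss terms, the 1.23 V and 1.48 V figures quoted above can be checked numerically: they follow from dividing the Gibbs free energy and the enthalpy of the reaction, respectively, by zF. A minimal sketch using the values given in the text:

```python
# Worked check of the voltages quoted above (energy values from the text).
F = 96485.0        # Faraday constant, C/mol
z = 2              # electrons transferred per H2 molecule
dG = 237.2e3       # Gibbs free energy of water splitting, J/mol
dH = 285.9e3       # enthalpy of water splitting, J/mol

E_rev = dG / (z * F)   # reversible cell voltage (maximum heat supplied by the surroundings)
E_tn  = dH / (z * F)   # thermoneutral voltage (all energy supplied electrically)
print(f"E_rev = {E_rev:.2f} V, E_tn = {E_tn:.2f} V")   # ~1.23 V and ~1.48 V
```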
The primary sources of increased voltage in a PEM electrolyzer (the same also applies for PEM fuel cells) can be categorized into three main areas: ohmic losses, activation losses and mass transport losses. Due to the reversal of operation between a PEM fuel cell and a PEM electrolyzer, the degree of impact of these various losses is different between the two processes. A PEM electrolysis system's performance can be compared by plotting overpotential versus cell current density. This essentially results in a curve that represents the power per square centimeter of cell area required to produce hydrogen and oxygen. In contrast to the PEM fuel cell, the better the PEM electrolyzer, the lower the cell voltage at a given current density. The figure below is the result of a simulation from the Forschungszentrum Jülich of a 25 cm2 single cell PEM electrolyzer under thermoneutral operation, depicting the primary sources of voltage loss and their contributions for a range of current densities. Ohmic losses Ohmic losses are an electrical overpotential introduced to the electrolysis process by the internal resistance of the cell components. This loss then requires an additional voltage to maintain the electrolysis reaction; the prediction of this loss follows Ohm's law and holds a linear relationship to the current density of the operating electrolyzer. The energy loss due to the electrical resistance is not entirely lost. The voltage drop due to resistivity is associated with the conversion of electrical energy to heat energy through a process known as Joule heating. Much of this heat energy is carried away with the reactant water supply and lost to the environment; however, a small portion of this energy is then recaptured as heat energy in the electrolysis process. The amount of heat energy that can be recaptured is dependent on many aspects of system operation and cell design. The ohmic losses due to the conduction of protons contribute to the loss of efficiency and also follow Ohm's law, however without the Joule heating effect. The proton conductivity of the PEM is very dependent on the hydration, temperature, heat treatment, and ionic state of the membrane. Faradaic losses and crossover Faradaic losses describe the efficiency losses that are correlated to the current that is supplied without leading to hydrogen at the cathodic gas outlet. The produced hydrogen and oxygen can permeate across the membrane, referred to as crossover. This results in mixtures of both gases at the electrodes. At the cathode, oxygen can be catalytically reacted with hydrogen on the platinum surface of the cathodic catalyst. At the anode, hydrogen and oxygen do not react at the iridium oxide catalyst. Thus, safety hazards due to explosive anodic mixtures of hydrogen in oxygen can result. The energy supplied for hydrogen production is lost when hydrogen is lost through reaction with oxygen at the cathode or through permeation from the cathode across the membrane to the anode. Hence, the ratio of the amount of lost and produced hydrogen determines the faradaic losses. At pressurized operation of the electrolyzer, the crossover and the correlated faradaic efficiency losses increase. 
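The Jülich simulation itself is not reproduced here, but a rough sense of how the loss terms above combine into a polarization curve can be given with a minimal model; all parameter values below are hypothetical placeholders, not data from the source:

```python
import numpy as np

# Illustrative polarization-curve sketch: cell voltage modelled as the
# reversible voltage plus Tafel activation losses and ohmic losses.
E_rev = 1.23                # reversible cell voltage, V
b_an, j0_an = 0.05, 1e-7    # anode Tafel slope (V/decade), exchange current density (A/cm^2)
b_ca, j0_ca = 0.03, 1e-3    # cathode Tafel slope, exchange current density
asr = 0.15                  # lumped area-specific resistance, ohm*cm^2

j = np.linspace(0.05, 2.0, 40)                                     # current density, A/cm^2
eta_act = b_an * np.log10(j / j0_an) + b_ca * np.log10(j / j0_ca)  # activation losses
eta_ohm = asr * j                                                  # ohmic losses, linear in j
v_cell = E_rev + eta_act + eta_ohm

idx = np.argmin(np.abs(j - 1.0))
print(f"modelled cell voltage near 1 A/cm^2: {v_cell[idx]:.2f} V")
```

Mass-transport losses, which matter at high current densities, are omitted from this sketch for brevity.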
Hydrogen compression during water electrolysis Hydrogen evolution due to pressurized electrolysis is comparable to an isothermal compression process, which in terms of efficiency is preferable to mechanical isentropic compression. However, the contributions of the aforementioned faradaic losses increase with operating pressure. Thus, in order to produce compressed hydrogen, in-situ compression during electrolysis and subsequent compression of the gas have to be weighed against each other under efficiency considerations. System operation The ability of the PEM electrolyzer to operate not only under highly dynamic conditions but also in part-load and overload conditions is one of the reasons for the recently renewed interest in this technology. The demands of an electrical grid are relatively stable and predictable; however, when coupling these to energy sources such as wind and solar, the demand of the grid rarely matches the generation of renewable energy. This means energy produced from renewable sources such as wind and solar benefits from having a buffer, or a means of storing off-peak energy. The largest PEM electrolyzer is 20 MW. PEM efficiency When determining the electrical efficiency of PEM electrolysis, the HHV can be used. This is because the catalyst layer interacts with water as steam. As the process operates at 80 °C for PEM electrolysers, the waste heat can be redirected through the system to create the steam, resulting in a higher overall electrical efficiency. The LHV must be used for alkaline electrolysers, as the process within these electrolysers requires water in liquid form and uses alkalinity to facilitate the breaking of the bond holding the hydrogen and oxygen atoms together. The lower heating value must also be used for fuel cells, as steam is the output rather than the input. PEM electrolysis has an electrical efficiency of about 80% in working application, in terms of hydrogen produced per unit of electricity used to drive the reaction. The efficiency of PEM electrolysis is expected to reach 82–86% before 2030, while also maintaining durability as progress in this area continues apace. See also Electrochemistry Electrochemical engineering Electrolysis Hydrogen production Gas cracker Photocatalytic water splitting Water purification Timeline of hydrogen technologies Electrolysis of water PEM fuel cell Hydrogen economy High-pressure electrolysis References Electrolysis Hydrogen economy Hydrogen production Electrolytic cells
Proton exchange membrane electrolysis
[ "Chemistry" ]
2,786
[ "Electrochemistry", "Electrolysis" ]
39,383,501
https://en.wikipedia.org/wiki/Elliott%20formula
The Elliott formula describes analytically, or with few adjustable parameters such as the dephasing constant, the light absorption or emission spectra of solids. It was originally derived by Roger James Elliott to describe linear absorption based on properties of a single electron–hole pair. The analysis can be extended to a many-body investigation with full predictive powers when all parameters are computed microscopically using, e.g., the semiconductor Bloch equations (abbreviated as SBEs) or the semiconductor luminescence equations (abbreviated as SLEs). Background One of the most accurate theories of semiconductor absorption and photoluminescence is provided by the SBEs and SLEs, respectively. Both of them are systematically derived starting from the many-body/quantum-optical system Hamiltonian and fully describe the resulting quantum dynamics of optical and quantum-optical observables such as optical polarization (SBEs) and photoluminescence intensity (SLEs). All relevant many-body effects can be systematically included by using various techniques such as the cluster-expansion approach. Both the SBEs and SLEs contain an identical homogeneous part driven either by a classical field (SBEs) or by a spontaneous-emission source (SLEs). This homogeneous part yields an eigenvalue problem that can be expressed through the generalized Wannier equation that can be solved analytically in special cases. In particular, the low-density Wannier equation is analogous to bound solutions of the hydrogen problem of quantum mechanics. These are often referred to as exciton solutions and they formally describe Coulombic binding by oppositely charged electrons and holes. The actual physical meaning of excitonic states is discussed further in connection with the SBEs and SLEs. The exciton eigenfunctions are denoted by φν(k), where ν labels the exciton state with eigenenergy Eν and k is the crystal momentum of charge carriers in the solid. These exciton eigenstates provide valuable insight into the SBEs and SLEs, especially when one analyses the linear semiconductor absorption spectrum or photoluminescence at steady-state conditions. One simply uses the constructed eigenstates to diagonalize the homogeneous parts of the SBEs and SLEs. Under the steady-state conditions, the resulting equations can be solved analytically when one further approximates dephasing due to higher-order many-body effects. When such effects are fully included, one must resort to a numeric approach. After the exciton states are obtained, one can eventually express the linear absorption and steady-state photoluminescence analytically. The same approach can be applied to compute the absorption spectrum for fields that are in the terahertz (abbreviated as THz) range of electromagnetic radiation. Since the THz-photon energy lies within the meV range, it is mostly resonant with the many-body states, not the interband transitions that are typically in the eV range. Technically, the THz investigations are an extension of the ordinary SBEs and/or involve solving the dynamics of two-particle correlations explicitly. Like for the optical absorption and emission problem, one can diagonalize the homogeneous parts that emerge analytically with the help of the exciton eigenstates. Once the diagonalization is completed, one can then compute the THz absorption analytically. All of these derivations rely on the steady-state conditions and the analytic knowledge of the exciton states. 
Furthermore, the effect of further many-body contributions, such as the excitation-induced dephasing, can be included microscopically in the Wannier solver, which removes the need to introduce a phenomenological dephasing constant, energy shifts, or screening of the Coulomb interaction. Linear optical absorption The linear absorption of a weak broadband optical probe can then be expressed as a sum over exciton resonances, where ℏω is the probe-photon energy, Fν is the oscillator strength of the exciton state ν, and γν is the dephasing constant associated with the exciton state ν. For a phenomenological description, γν can be used as a single fit parameter, i.e., γν = γ. However, a full microscopic computation generally produces a dephasing that depends on both the exciton index and the photon frequency. As a general tendency, the dephasing increases for elevated exciton states while the frequency dependence is often weak. Each of the exciton resonances can produce a peak in the absorption spectrum when the photon energy matches Eν. For direct-gap semiconductors, the oscillator strength is proportional to the product of the dipole-matrix element squared and the squared exciton wavefunction at the origin, which vanishes for all states except those that are spherically symmetric. In other words, the oscillator strength is nonvanishing only for the s-like states, following the quantum-number convention of the hydrogen problem. Therefore, the optical spectrum of direct-gap semiconductors produces absorption resonances only for the s-like states. The width of each resonance is determined by the corresponding dephasing constant. In general, the exciton eigenenergies consist of a series of bound states that emerge energetically well below the fundamental bandgap energy and a continuum of unbound states that appear for energies above the bandgap. Therefore, a typical semiconductor's low-density absorption spectrum shows a series of exciton resonances and then a continuum-absorption tail. For realistic situations, the dephasing exceeds the spacing of the higher exciton states, so that one typically resolves only a few of the lowest exciton resonances in actual experiments. The concentration of charge carriers influences the shape of the absorption spectrum considerably. For high enough densities, all energies correspond to continuum states and some of the oscillator strengths may become negative-valued due to the Pauli-blocking effect. Physically, this can be understood as an elementary property of fermions; if a given electronic state is already excited, it cannot be excited a second time due to the Pauli exclusion among fermions. Therefore, the corresponding electronic states can produce only photon emission that is seen as negative absorption, i.e., gain, which is the prerequisite to realizing semiconductor lasers. Even though one can understand the principal behavior of semiconductor absorption on the basis of the Elliott formula, detailed predictions of the exact resonance energies, oscillator strengths, and dephasing constants require a full many-body computation already for moderate carrier densities. Photoluminescence Elliott formula After the semiconductor becomes electronically excited, the carrier system relaxes into a quasiequilibrium. At the same time, vacuum-field fluctuations trigger spontaneous recombination of electrons and holes (electronic vacancies) via spontaneous emission of photons. At quasiequilibrium, this yields a steady-state photon flux emitted by the semiconductor. By starting from the SLEs, the steady-state photoluminescence (abbreviated as PL) can be cast into a form that is very similar to the Elliott formula for the optical absorption. 
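Returning to the linear absorption discussed above: the Elliott formula itself is not preserved in this extract, but a common way to write the low-density spectrum is as a sum of Lorentzian lines, one per s-like exciton resonance. The sketch below assumes hydrogen-like bound-state energies and 1/n^3 oscillator strengths purely for illustration; the continuum contribution is omitted and the material parameters are placeholders:

```python
import numpy as np

# Minimal sketch of an Elliott-type absorption spectrum for a direct-gap
# semiconductor: a sum of Lorentzian s-exciton resonances below the gap.
E_gap, E_b, gamma = 1.52, 0.0042, 0.001      # eV (GaAs-like placeholder values)

E = np.linspace(E_gap - 0.008, E_gap, 2000)  # probe-photon energy axis, eV
alpha = np.zeros_like(E)
for n in range(1, 6):                        # lowest few s-like bound states
    E_n = E_gap - E_b / n**2                 # hydrogen-like exciton resonance energy
    f_n = 1.0 / n**3                         # assumed relative oscillator strength
    alpha += f_n * gamma / ((E_n - E)**2 + gamma**2)   # Lorentzian line

peak = E[np.argmax(alpha)]
print(f"strongest resonance near {peak*1000:.1f} meV (the 1s exciton)")
```

Increasing gamma in this toy model merges the higher resonances into the continuum tail, mirroring the statement above that only a few of the lowest exciton lines are resolved in practice.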
As a major difference, the numerator has a new contribution – the spontaneous-emission source – that contains the electron and hole distributions fe(k) and fh(k), respectively, where k is the carrier momentum. Additionally, the source also contains a direct contribution from exciton populations that describes truly bound electron–hole pairs. The product fe(k) fh(k) defines the probability of finding an electron and a hole with the same k. Such a form is expected for the probability that two uncorrelated events occur simultaneously at a desired value. Therefore, this product is the spontaneous-emission source originating from an uncorrelated electron–hole plasma. The possibility of having truly correlated electron–hole pairs is defined by a two-particle exciton correlation; the corresponding probability is directly proportional to the correlation. Nevertheless, both the presence of an electron–hole plasma and of excitons can equivalently induce the spontaneous emission. A further discussion of the relative weight and nature of plasma vs. exciton sources is presented in connection with the SLEs. Like for the absorption, a direct-gap semiconductor emits light only at the resonances corresponding to the s-like states. As a typical trend, the quasiequilibrium emission is strongly peaked around the 1s resonance because the emission source is usually largest for the ground state. This emission peak often remains well below the fundamental bandgap energy even at the high excitations where all states are continuum states. This demonstrates that semiconductors are often subject to massive Coulomb-induced renormalizations even when the system appears to have only electron–hole plasma states as emission resonances. To make an accurate prediction of the exact position and shape at elevated carrier densities, one must resort to the full SLEs. Terahertz Elliott formula As discussed above, it is often meaningful to tune the electromagnetic field to be resonant with the transitions between two many-body states. For example, one can follow how a bound exciton is excited from its 1s ground state to a 2p state. In several semiconductor systems, one needs THz fields to induce such transitions. By starting from a steady-state configuration of electron–hole correlations, the diagonalization of the THz-induced dynamics yields a THz absorption spectrum. In this notation, the diagonal contributions determine the populations of excitons. The off-diagonal elements formally determine the transition amplitudes between two exciton states. For elevated densities, such correlations build up spontaneously and describe a correlated electron–hole plasma, that is, a state where electrons and holes move with respect to each other without forming bound pairs. In contrast to optical absorption and photoluminescence, THz absorption may involve all exciton states. This can be seen from the spectral response function that contains the current-matrix elements between two exciton states. The unit vector is determined by the direction of the THz field. This leads to dipole selection rules among exciton states, in full analogy to the atomic dipole selection rules. Each allowed transition produces a resonance in the THz absorption spectrum, and the resonance width is determined by a dephasing constant that generally depends on the exciton states involved and the THz frequency. The THz response also contains a contribution that stems from the decay constant of macroscopic THz currents. In contrast to optical and photoluminescence spectroscopy, THz absorption can directly measure the presence of exciton populations, in full analogy to atomic spectroscopy. 
For example, the presence of a pronounced 1s-to-2p resonance in THz absorption uniquely identifies the presence of excitons, as detected experimentally in Ref. As a major difference to atomic spectroscopy, semiconductor resonances contain a strong excitation-induced dephasing that produces much broader resonances than in atomic spectroscopy. In fact, one can typically resolve only the 1s-to-2p resonance because the dephasing constant is broader than the energetic spacing of the np and (n+1)p states, making the 1s-to-np and 1s-to-(n+1)p resonances merge into one asymmetric tail. See also Absorption Semiconductor luminescence equations Semiconductor Bloch equations Quantum-optical spectroscopy Wannier equation Photoluminescence Terahertz spectroscopy and technology Further reading References Semiconductor analysis Quantum mechanics Equations of physics
Elliott formula
[ "Physics", "Mathematics" ]
2,248
[ "Equations of physics", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Equations" ]
39,386,052
https://en.wikipedia.org/wiki/Monsanto%20Technology%20LLC%20v%20Cefetra%20BV%20and%20Others
Monsanto Technology LLC v Cefetra BV and Others (2010) was a preliminary ruling by the European Court of Justice (ECJ) regarding the legal protection of biotechnological inventions. The case dealt with the interpretation of Article 9 of Directive 98/44/EC on the legal protection of biotechnological inventions, and it was the first ECJ interpretation of the 1998 directive. Facts Monsanto holds a European patent, EP 0 546 090, for a variety of soybean containing genes inserted into the plant's DNA. The inserted genes make the plant resistant to a particular type of herbicide known as Roundup. The resulting soybean plant is known as Roundup Ready (RR). The RR soybean plant remains unharmed by the application of the herbicide while surrounding weeds die. The RR soybean plant is cultivated on a large scale in Argentina, where there is no patent protection for the Monsanto invention (at [18]). In July 2003 soy meal from Argentina was shipped to Amsterdam and the shipments were detained by customs. Monsanto tested the samples and determined that the soy meal originated from RR soybeans. Monsanto then applied for injunctions against the importers of the soy meal, Cefetra and Toepfer, and the shipping company Vopak. The Dutch court then considered the issues under their local patent law and EU patent law. Acknowledging that Monsanto had established the presence of their patented genetic material in the soy meal, the court had to decide whether the presence alone of such genetic material is sufficient for infringement of Monsanto's patent (at [25] and [26]). The court's analysis is primarily driven by the specific language regarding whether the genetic material is present, and whether it performs its function therein. The court concludes that the genetic material present in the soy meal is dead material, and no longer performs its function. Therefore, in the court's interpretation Monsanto's patent is not infringed. Recognizing that there is profit being made by the soy producers in Argentina, without any reciprocal compensation to the patent holders, the court referred the following questions to the ECJ: Must Article 9 of Directive 98/44/EC be interpreted such that patent protection is provided when a product (genetic material) forms part of a material imported into the EU, but no longer performs its function at the time of the alleged infringement (yet could still possibly perform its function if inserted into a living organism)? Proceeding on the basis that the genetic material claimed by Monsanto's patent is present in the soy meal imported by Cefetra and Toepfer, and it is incorporated for the purpose of Article 9 of the directive and it does not perform its function therein: does the protection offered under Article 9 preclude the national patent legislation from offering absolute protection to the product, regardless of whether it performs its function, and must Article 9 protection therefore be deemed exhaustive in the situation where the product consists of genetic information and is incorporated in material which contains the genetic information? Does it make any difference, for the purpose of answering the previous question, that the patent was applied for and granted prior to the adoption of the directive and absolute patent protection was granted under national patent legislation prior to the adoption of the directive? Is it possible, in answering the previous questions, to take into consideration the TRIPS Agreement, in particular Articles 27 and 30 thereof? 
Judgment The first question The ECJ addressed the first question by stating, "It follows from the foregoing that the protection provided for in Article 9 of the Directive is not available when the genetic information has ceased to perform the function it performed in the initial material from which the material in question is derived." at [38] at p. 540 The second question In response to question two the ECJ states, "Accordingly, in so far as the Directive does not accord protection to a patented DNA sequence which is not able to perform its function, the provision interpreted precludes the national legislature from granting absolute protection to a patented DNA sequence as such, regardless of whether it performs its function in the material containing it. The answer to the second question is therefore that Article 9 of the Directive effects an exhaustive harmonisation of the protection it confers, with the result that it precludes the national patent legislation from offering absolute protection to the patented product as such, regardless of whether it performs its function in the material containing it." at [62-63] The third question The ECJ responds to question three as follows, "The answer to the third question is therefore that Article 9 of the Directive precludes the holder of a patent issued prior to the adoption of that directive from relying on the absolute protection for the patented product accorded to it under the national legislation then applicable." at [69] The fourth question In response to question four the ECJ states, "The answer to the fourth question is therefore that Articles 27 and 30 of the TRIPS Agreement do not affect the interpretation given of Article 9 of the Directive." at [77] See also The Agreement on Trade Related Aspects of Intellectual Property Rights (TRIPS Agreement) Patent Act 1995 (Netherlands) References External links Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions TRIPS agreement (PDF version) Court of Justice of the European Union case law Monsanto litigation 2010 in the European Union 2010 in case law Regulation of genetically modified organisms
Monsanto Technology LLC v Cefetra BV and Others
[ "Engineering", "Biology" ]
1,102
[ "Regulation of genetically modified organisms", "Genetic engineering", "Regulation of biotechnologies" ]
39,386,727
https://en.wikipedia.org/wiki/UK%20Government%20G-Cloud
The UK Government G-Cloud is an initiative targeted at easing procurement of commodity information technology services that use cloud computing by public sector bodies in the United Kingdom. The G-Cloud consists of: a series of framework agreements with suppliers, from which public sector organisations can buy services without needing to run a full tender or their own competitive procurement process; and an online store – the "Digital Marketplace" (previously "CloudStore") – which allows public sector bodies to search for services that are covered by the G-Cloud frameworks. The service began in 2012, and had several calls for contracts. By May 2013 there were over 700 suppliers—over 80% of which were small and medium-sized enterprises. £18.2 million (US$27.7 million) of sales were made by April 2013. With the adoption of the "cloud first" policy in the UK in May 2013, sales continued to grow, reportedly hitting over £50M in February 2014. These figures are based on procurement from some 1,200 providers offering 13,000 services, including both cloud services and (professional) specialist services, as of November 2013. Overview The UK Government initiated the G-Cloud programme to deliver computing-based capability (from fundamental resources such as storage and processing to full-fledged applications) using cloud computing. G-Cloud established framework agreements with service providers and lists those services on a publicly accessible portal known as the Digital Marketplace. Public sector organisations can call off the services listed on the Digital Marketplace without needing to go through a full tender process. After plans were announced in March 2011, the government aimed to shift 50% of new government IT spending to cloud-based services by 2015 and to diversify the supplier base to give greater opportunity to small and medium-sized enterprises (SMEs). The "cloud first" approach to IT mandated that central government purchase IT services through the cloud unless it can be proven that an alternative is more cost-effective. In June 2013 G-Cloud moved to become part of the Government Digital Service (GDS), with its director, Denise McDonagh, moving to become CTO of the Home Office. Tony Singleton, COO of GDS, took over as director of G-Cloud. A new version of the G-Cloud framework is normally released about every 6 to 9 months; for example, G-Cloud version 9 went live in May 2017. G-Cloud 12 was initially to run from 28 September 2020 to 27 September 2021, but it was extended in April 2021 and then ran to 27 September 2022. The current version is G-Cloud 14, which became available for purchasing services from 29 October 2024. G-Cloud 13 expired on 8 November 2024. One comment in the IT press noted that G-Cloud "has not quite delivered" on the government's hopes for its adoption, perhaps because "over time, the framework has evolved into a very different beast to the one it was when it first launched". Framework agreements Calls G-Cloud had several calls for contract to establish framework agreements. Major US vendors Amazon Web Services (AWS) and Google were initially excluded by the UK government in 2012 (G-Cloud 3), but AWS has since been added in 2013 (G-Cloud 4) and Google in 2018. Following hints by the head of the programme, GDS chief operating officer Tony Singleton, that the call for G-Cloud 4 would be open by the "end of July", the G-Cloud 4 call opened on 6 August 2013. The blog entry announcing the call also stated that the tendering process had been improved, with the use of the Government Procurement Service. 
G-Cloud was expected to make calls roughly every three to six months, but with no fixed frequency. Contract calls are listed on the Government Contract Finder website. In April 2013 the G-Cloud V call for framework contracts was listed as starting in March 2014. G-Cloud V opened on 25 February 2014. The press noted that the naming of the G-Cloud calls for framework agreements moved from suffixing the call with Roman numerals (G-Cloud I, II and III) to using the Arabic numeral 4. Classifications Suppliers define the service that they are offering as part of the framework agreement, and those details are made available in the Digital Marketplace. These details include such things as the Business Impact Level (e.g. IL2) that the service is accredited for, and how users will be on-boarded and off-boarded. Of particular note is the requirement to enable users to leave the service (off-board) if they wish to move to a different provider of the same service. As of G-Cloud 9, services are classified into 3 lots: Lot 1: Cloud Hosting (IaaS and PaaS): cloud platform or infrastructure services that can help buyers do at least one of: deploy, manage and run software; and provision and use processing, storage or networking resources. Lot 2: Cloud Software (SaaS): applications that are typically accessed over a public or private network, e.g. the internet, and hosted in the cloud. Lot 3: Cloud Support. Digital Marketplace The Digital Marketplace (previously CloudStore) is a publicly accessible, searchable database of services offered under G-Cloud. The first service was offered in February 2012. Following criticism of the original CloudStore interface, CloudStore was substantially reworked by May 2013. In 2014, the Government Digital Service announced it would be replacing the CloudStore with a new platform called the "Digital Marketplace", at that time in beta. The Digital Marketplace aimed to integrate the Digital Services framework in 2015 and ultimately other framework contracts. Services can be searched by free text search as well as by continual narrowing of the field using various search criteria such as business impact level supported, cost, and deployment model (e.g. Public Cloud, Private Cloud). Procurement The Digital Marketplace procurement processes handle selection and procurement of services. They do not replace internal processes for securing funds. However, assuming funds are available, procurement from the Digital Marketplace does not require a full tender or mini-competition. References External links Official website Cloud platforms 2012 establishments in the United Kingdom Government of the United Kingdom Government services web portals in the United Kingdom Information technology organisations based in the United Kingdom
UK Government G-Cloud
[ "Technology" ]
1,249
[ "Cloud platforms", "Computing platforms" ]
39,387,193
https://en.wikipedia.org/wiki/Battery%20tester
A battery tester is an electronic device intended for testing the state of an electric battery, ranging from a simple device for testing the charge actually present in the cells and/or its voltage output, to a more comprehensive testing of the battery's condition, namely its capacity for accumulating charge and any possible flaws affecting the battery's performance and safety. Simple battery testers The simplest battery tester is a DC ammeter that indicates the battery's charge rate. DC voltmeters can be used to estimate the charge state of a battery, provided that its nominal voltage is known. Integrated battery testers There are many types of integrated battery testers, each one corresponding to a specific condition testing procedure, according to the type of battery being tested, such as the “421” test for lead-acid vehicle batteries. Their common principle is based on the empirical fact that, after a given current has been applied to the battery for a given number of seconds, the resulting voltage output is related to the battery's overall condition when compared to a healthy battery's output. References External links Power Equipment Engine Technology By Edward Abdo Automotive Technology: A Systems Approach By Jack Erjavec Wharton on Dynamic Competitive Strategy edited by George S. Day, David J. Reibstein The Entrepreneurial Mindset By Rita Gunther McGrath, Ian C. MacMillan Harvard Business Press, 2000 Electrical equipment Tester
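As a simple illustration of that principle (not the “421” procedure itself, whose details are not given here), the voltage sag under a known load current yields an internal-resistance estimate via Ohm's law; the readings below are hypothetical:

```python
# Illustrative load-test calculation with hypothetical readings: a healthy
# battery shows a small voltage drop, and hence a low internal resistance,
# when a known load current is applied for a few seconds.
def internal_resistance(v_open_circuit, v_under_load, load_current):
    """Estimate internal resistance from the voltage sag under load (Ohm's law)."""
    return (v_open_circuit - v_under_load) / load_current

# Example: a nominal 12 V lead-acid battery loaded with 100 A.
r = internal_resistance(v_open_circuit=12.7, v_under_load=11.5, load_current=100.0)
print(f"estimated internal resistance: {r*1000:.1f} milliohm")   # 12.0 milliohm
```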
Battery tester
[ "Engineering" ]
285
[ "Electrical engineering", "Electrical equipment" ]
39,388,486
https://en.wikipedia.org/wiki/Digital%20current%20loop%20interface
For serial communications, a current loop is a communication interface that uses current instead of voltage for signaling. Current loops can be used over moderately long distances (tens of kilometres), and can be interfaced with optically isolated links. There are a variety of such systems, but one based on a 20 mA current level was used by the Teletype Model 33 and was particularly common on minicomputers and early microcomputers, which used these machines as computer terminals. As a result, most computer terminals also supported this standard into the 1980s. History Long before the RS-232 standard, current loops were used to send digital data in serial form for teleprinters. More than two teleprinters could be connected on a single circuit, allowing a simple form of networking. Older teleprinters used a 60 mA current loop. Later machines, such as the Teletype Model 33, operated on a lower 20 mA current level, and most early minicomputers featured a 20 mA current loop interface, with an RS-232 port generally available as a more expensive option. The original IBM PC serial port card had provisions for a 20 mA current loop. Signaling conventions A digital current loop uses the absence of current for high (space or break), and the presence of current in the loop for low (mark). This is done to ensure that under normal conditions there is always current flowing, so that if the line is cut the flow stops indefinitely, immediately signalling the fault, usually heard as the continuous noise of a teleprinter that has lost synchronization; this would not have been possible if the idle state had been no current flowing. Electrical characteristics The maximum resistance for a current loop is limited by the available voltage. Current loop interfaces usually use voltages much higher than those found on an RS-232 interface, and cannot be interconnected with voltage-type inputs without some form of level translator circuit. For full-duplex communication between two devices, two pairs of wires would be used. There is no common standard for current loop interfaces, so details such as timing, connectors, wire color codes, and so on, are all application specific. See also MIDI, a digital current loop interface limited to 5 milliamps and 5 volts. References Communication circuits Electronics standards Serial buses
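To make the mark/space convention described above concrete, the sketch below frames a single character the way a generic asynchronous teleprinter link might, mapping mark to "current flowing" and space to "no current"; the framing parameters (one start bit, eight data bits, one stop bit) are generic assumptions rather than those of any particular machine:

```python
# Sketch of how one character could be framed on a digital current loop,
# using the convention described above (mark = current flowing, space = no
# current). Asynchronous framing is assumed: one start bit (space), data bits
# sent least-significant-bit first, then one stop bit (mark).
def frame_character(byte, data_bits=8):
    bits = [(byte >> i) & 1 for i in range(data_bits)]     # LSB first
    frame = [0] + bits + [1]                               # start bit + data + stop bit
    return ["current (mark)" if b else "no current (space)" for b in frame]

for state in frame_character(ord('A')):   # 'A' = 0x41 = 0b01000001
    print(state)
```

Because the line idles with current flowing (mark), a cut wire looks like an endless break condition, which is exactly the failure-detection property the text describes.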
Digital current loop interface
[ "Engineering" ]
465
[ "Telecommunications engineering", "Communication circuits" ]
39,388,728
https://en.wikipedia.org/wiki/Global-scale%20Observations%20of%20the%20Limb%20and%20Disk
Global-scale Observations of the Limb and Disk (GOLD) is a heliophysics Mission of Opportunity (MOU) for NASA's Explorers program. Led by Richard Eastes at the Laboratory for Atmospheric and Space Physics, which is located at the University of Colorado Boulder, GOLD's mission is to image the boundary between Earth and space in order to answer questions about the effects of solar and atmospheric variability on Earth's space weather. GOLD was one of 11 proposals selected, of the 42 submitted, for further study in September 2011. On 12 April 2013, NASA announced that GOLD, along with the Ionospheric Connection Explorer (ICON), had been selected for flight in 2017. GOLD, along with its commercial host satellite SES-14, launched on 25 January 2018. Mission concept and history GOLD is intended to perform a two-year mission imaging Earth's thermosphere and ionosphere from geostationary orbit. GOLD is a two-channel far-ultraviolet (FUV) imaging spectrograph built by the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder and flown as a hosted payload on the commercial communications satellite SES-14. Additional organizations participating in the GOLD mission include the National Center for Atmospheric Research, Virginia Tech, the University of California, Berkeley, the University of Central Florida, Computational Physics Inc., the National Oceanic and Atmospheric Administration (NOAA), the U.S. Naval Research Laboratory (NRL), Boston University, and Clemson University. In June 2017, SES announced the successful integration of GOLD with the SES-14 satellite under construction at Airbus Defence and Space in Toulouse, France. GOLD was launched on 25 January 2018 at 22:20 UTC aboard Ariane 5 ECA VA241 from the Centre spatial Guyanais. Scientific objectives The scientific objectives of the GOLD mission are to determine how geomagnetic storms alter the temperature and composition of Earth's atmosphere, to analyze the global-scale response of the thermosphere to solar extreme-ultraviolet variability, to investigate the significance of atmospheric waves and tides propagating from below on the temperature structure of the thermosphere, and to resolve how the structure of the equatorial ionosphere influences the formation and evolution of equatorial plasma density irregularities. The viewpoint provided by GOLD's geostationary orbit, from which the same hemisphere is always observable, is a new perspective on the Earth's upper atmosphere. This viewpoint allows local time, universal time and longitudinal variations of the thermosphere and ionosphere's response to the various forcing mechanisms to be uniquely determined. Results Data from GOLD have been used to confirm that variation in the equatorial ionization anomaly at night and in the early morning is governed by atmospheric waves in the lower atmosphere. GOLD observations have also implicated gravity waves emanating from the lower atmosphere in the seeding of equatorial plasma bubbles, which degrade GPS performance. GOLD daytime observations of the thermospheric column density ratio of atomic oxygen and nitrogen revealed new findings. First, GOLD observations showed that even weak or minor geomagnetic activity (maximum Kp=1.7) can still generate significant disturbances in the thermosphere and ionosphere. This is crucial for space weather forecasting because the quiet-time condition before a disturbance determines the accuracy of the forecast. 
Second, the neutral tongue, which is an enhancement of O/N2 surrounded by depletion of O/N2 and had only been seen in simulations, was first observed by GOLD. This modified the classic theory of thermospheric composition disturbance during storms. The theory predicted that the disturbance co-rotates from day to night but did not specify what else happens to the depletion. References External links GOLD website by the University of Central Florida GOLD website by NASA Spacecraft instruments Explorers Program Spacecraft launched in 2018 Piggyback mission Spectrometers Geospace monitoring satellites
Global-scale Observations of the Limb and Disk
[ "Physics", "Chemistry" ]
779
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
53,616,523
https://en.wikipedia.org/wiki/Naira%20Hovakimyan
Naira Hovakimyan (born September 21, 1966) is an Armenian control theorist who holds the W. Grafton and Lillian B. Wilkins professorship in Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. She is the director of the AVIATE Center on flying cars at UIUC, funded through a NASA University Leadership Initiative. She was the inaugural director of the Intelligent Robotics Laboratory during 2015–2017, associated with the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. Education Naira Hovakimyan received her MS degree in Theoretical Mechanics and Applied Mathematics in 1988 from Yerevan State University in Armenia. She received her Ph.D. in Physics and Mathematics in 1992, in Moscow, from the Institute of Applied Mathematics of the Russian Academy of Sciences, majoring in optimal control and differential games. Academic life Before joining the faculty of the University of Illinois at Urbana–Champaign in 2008, Hovakimyan spent time as a research scientist at Stuttgart University in Germany, at INRIA in France, and at the Georgia Institute of Technology, and she was on the faculty of Aerospace and Ocean Engineering at Virginia Tech during 2003–2008. She is currently the W. Grafton and Lillian B. Wilkins Professor of Mechanical Science and Engineering at UIUC. In 2015, she was named the inaugural director of the Intelligent Robotics Laboratory of CSL at UIUC. Currently she is the director of the AVIATE Center on flying cars at UIUC, funded through a NASA University Leadership Initiative. She has co-authored two books, ten book chapters, eleven patents, and more than 500 journal and conference papers. Research areas Her research interests are in control and optimization, autonomous systems, machine learning, cybersecurity, neural networks, game theory and their applications in aerospace, robotics, mechanical, agricultural, electrical, petroleum and biomedical engineering, and elderly care. Honors She is the 2011 recipient of the AIAA Mechanics and Control of Flight Award, the 2015 recipient of the SWE Achievement Award, the 2017 recipient of the IEEE CSS Award for Technical Excellence in Aerospace Controls, and the 2019 recipient of the AIAA Pendray Aerospace Literature Award. In 2014 she was awarded the Humboldt Prize for her lifetime achievements and was recognized as a Hans Fischer senior fellow of the Technical University of Munich. She is a Fellow and life member of AIAA, a Fellow of IEEE, a Fellow of ASME, and a Senior Member of the National Academy of Inventors. In 2015 and 2023 she was recognized as an outstanding advisor by the Engineering Council of UIUC. In 2024 she received the College Award for Excellence in Translational Research. Hovakimyan is a co-founder and Chief Scientist of IntelinAir. She was named the 2017 Commencement Speaker of the American University of Armenia. She has been listed among the 50 Global Armenians in the world by Mediamax and was a member of the FAST (The Foundation for Armenian Science and Technology) advisory board. She is also advising a few startup companies. In 2021 she was one of the speakers at the TEDxYerevan event. In 2022, she was awarded a Fulbright fellowship from the US Department of State. In 2022, she founded the AVIATE Center on flying cars at UIUC. 
References External links http://naira-hovakimyan.mechse.illinois.edu/ https://aviate.illinois.edu/ Google Scholar Research group website https://csl.illinois.edu/directory/profile/nhovakim http://mechse.illinois.edu/directory/faculty/nhovakim University of Illinois faculty 1966 births Living people Game theorists Control theorists Fellows of the American Institute of Aeronautics and Astronautics Fellows of the IEEE
Naira Hovakimyan
[ "Mathematics", "Engineering" ]
739
[ "Game theorists", "Game theory", "Control engineering", "Control theorists" ]
53,622,964
https://en.wikipedia.org/wiki/Resonance%20ionization
Resonance ionization is a process in optical physics used to excite a specific atom (or molecule) beyond its ionization potential to form an ion using a beam of photons from a pulsed laser. In resonance ionization, the absorption or emission properties of the emitted photons are not considered; rather, only the resulting ions are mass-selected, detected and measured. Depending on the laser light source used, one electron can be removed from each atom, so that resonance ionization provides selectivity in two ways: elemental selectivity in ionization and isotopic selectivity in measurement. During resonance ionization, an ion gun creates a cloud of atoms and molecules from a gas-phase sample surface and a tunable laser is used to fire a beam of photons at the cloud of particles emanating from the sample (analyte). An initial photon from this beam is absorbed by one of the sample atoms, exciting one of the atom's electrons to an intermediate excited state. A second photon then ionizes the same atom from the intermediate state, giving the excited electron enough energy to be ejected from its orbital; the result is a packet of positively charged ions which are then delivered to a mass analyzer. Resonance ionization contrasts with resonance-enhanced multiphoton ionization (REMPI) in that the latter is neither selective nor efficient, since resonances are seldom used to prevent interference. Also, resonance ionization is used for an atomic (elemental) analyte, whereas REMPI is used for a molecular analyte. The analytical technique on which the process of resonance ionization is based is termed resonance ionization mass spectrometry (RIMS). RIMS is derived from the original method, resonance ionization spectroscopy (RIS), which was initially used to detect single atoms with better time resolution. RIMS has proved useful in the investigation of radioactive isotopes (such as for studying rare fleeting isotopes produced in high-energy collisions), trace analysis (such as for discovering impurities in highly pure materials), atomic spectroscopy (such as for detecting low-content materials in biological samples), and for applications in which high levels of sensitivity and elemental selectivity are desired. History Resonance ionization was first used in a spectroscopy experiment in 1971 at the Institute for Spectroscopy of the Russian Academy of Sciences; in that experiment, ground state rubidium atoms were ionized using ruby lasers. In 1974, a group of photophysical researchers at the Oak Ridge National Laboratory led by George Samuel Hurst developed, for the first time, the resonance ionization process on helium atoms. They wanted to use laser light to measure the number of singlet metastable helium, He (2¹S), atoms created by energetic protons. The group achieved the selective ionization of the excited state of an atom at nearly 100% efficiency by passing a beam of protons into a helium gas cell and using pulsed laser light to ionize the metastable atoms produced. The experiment on singlet metastable helium atoms was seminal in the journey towards using resonance ionization spectroscopy (RIS) for extensive atomic analysis in research settings. Cesium atoms were subsequently used to show that single atoms of an element could be counted if their resonance ionization was performed in a counter in which an electron could be detected for an atom in its ground state. 
Subsequently, advanced techniques categorized under resonance ionization mass spectrometry (RIMS) were used to measure the relative abundances of various ion types by coupling the RIS lasers to magnetic sector, quadrupole, or time-of-flight (TOF) mass spectrometers. The field of resonance ionization spectroscopy (RIS) has largely been shaped by the formal and informal communications heralding its discovery. Research papers on RIS have heavily relied on self-citation from inception, a trend which climaxed three years later with the founding of a company to commercialize the technique. Method A model resonance ionization mass spectrometry (RIMS) set-up consists of a laser system (consisting of multiple lasers), a sample from which the atoms are derived, and a suitable mass spectrometer which mass-selectively detects the photoions created by resonance. In resonant ionization, atoms or molecules in the ground state are excited to higher energy states by the resonant absorption of photons to produce ions. These ions are then monitored by appropriate detectors. To ensure high sensitivity and saturation of the process, the atomic or molecular beam must be formed from the ground state, the atoms should be efficiently excited and ionized, and each atom should be converted by the photon field of a short laser pulse into a positive ion and a free electron. In a basic RIS process, a pulsed laser beam produces photons of the right energy to excite an atom initially in its ground state, a, to an excited level, b. During the laser pulse, the population of state b increases at the expense of that of state a. After a short time, the rate of stimulated emission from the excited state equals the rate of its production, so that the system is in equilibrium as long as the laser intensity is kept sufficiently high during a pulse. This high laser intensity translates into a photon fluence (photons per unit of beam area) large enough that a necessary condition for the saturation of the RIS process is met. If, in addition, the rate of photoionization is greater than the rate of consumption of intermediates, then each selected state is converted to one electron plus one positive ion, so that the RIS process is saturated. A usually efficient way to produce free atoms of an element in the ground state is to atomize the element by ion sputtering or thermal vaporization from a laser matrix, under vacuum conditions or in environments with pressures significantly below normal atmospheric pressure. The resulting plume of secondary atoms is then channeled through the path of multiple tuned laser beams which are capable of exciting consecutive electronic transitions in the specified element. Light from these tuned lasers promotes the desired atoms above their ionization potentials, whereas interfering atoms from other elements are hardly ionized since they are generally transparent to the laser beam. This process produces photoions which are extracted and directed towards an analytical facility such as a magnetic sector to be counted. This approach is extremely sensitive to atoms of the specified element, so that the ionization efficiency is almost 100%, and also elementally selective, because it is highly unlikely that other species will be resonantly ionized. To achieve high ionization efficiencies, monochromatic lasers with high instantaneous spectral power are used; a rough numerical sketch of the saturation condition is given below. 
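The saturation condition can be made concrete with a back-of-the-envelope estimate. The sketch below is only illustrative: the photoionization cross-section, target probability, beam area and wavelength are assumed placeholder values chosen for order-of-magnitude realism, not parameters taken from any particular RIMS instrument described here.

```python
import math

# Assumed, illustrative parameters (not from any specific instrument).
sigma_ion = 1e-17       # photoionization cross-section of the excited state, cm^2
wavelength = 500e-9     # wavelength of the ionizing photon, m
beam_area = 0.1         # laser beam cross-section at the atom plume, cm^2
target_p = 0.99         # desired per-pulse ionization probability for an excited atom

h = 6.626e-34           # Planck constant, J*s
c = 3.0e8               # speed of light, m/s
photon_energy = h * c / wavelength          # energy per photon, J

# For a pulse of photon fluence F (photons/cm^2), the probability that an excited
# atom is photoionized is P = 1 - exp(-sigma_ion * F); saturation means sigma_ion * F >> 1.
fluence = -math.log(1.0 - target_p) / sigma_ion      # photons per cm^2
pulse_energy = fluence * beam_area * photon_energy   # joules per pulse

print(f"fluence for P = {target_p}: {fluence:.2e} photons/cm^2")
print(f"required pulse energy over {beam_area} cm^2: {pulse_energy * 1e3:.1f} mJ")
```

With these assumed numbers the estimate comes out at roughly 20 mJ per pulse over the beam area, i.e. millijoule-class pulses, which is consistent with the statement above that the resonance step can be driven to saturation with ordinary pulsed laser sources.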
Typical lasers being used include continuous-wave lasers with extremely high spectral purity and pulsed lasers for analyses involving limited numbers of atoms. Continuous-wave lasers, however, are often preferred to pulsed lasers because of the latter's relatively low duty cycle, since they can only produce photoions during the brief laser pulses, and because of the difficulty in reproducing results caused by pulse-to-pulse jitter, laser beam drift, and wavelength variations. Moderate laser powers, if high enough to drive the desired transitions, can be used, since the non-resonant photoionization cross-section is low, which implies a negligible ionization efficiency for unwanted atoms. The influence of the laser matrix to be used for the sample can also be reduced by separating the evaporation and ionization processes both in time and in space. Another factor that could affect the efficiency and selectivity of the ionization process is the presence of contaminants caused by surface or impact ionization. This can be reduced by several orders of magnitude by using mass analysis, so that the isotopic composition of the desired element can be determined. Most of the elements of the periodic table can be ionized by one of the several excitation schemes available. The suitable excitation scheme depends on certain factors, including the level scheme of the element's atom, its ionization energy, the required selectivity and sensitivity, likely interferences, and the wavelengths and power levels of the available laser systems. Most excitation schemes vary in the last step, the ionization step. This is due to the low cross-section for non-resonant photoionization produced by the laser. A pulsed laser system facilitates the efficient coupling of a time-of-flight mass spectrometer (TOF-MS) to the resonance ionization set-up because of the instrument's abundance sensitivity: TOF systems can produce an abundance sensitivity of up to 10⁴, whereas magnetic mass spectrometers can only achieve up to 10². The total selectivity in a RIS process is a combination of the selectivities of the various resonance transitions in a multiple step-wise excitation. The probability that an atom of another element coincides with the resonance of the selected atom is about 10⁻⁵. The addition of a mass spectrometer improves this figure by a further factor of 10⁶, such that the total elemental selectivity surpasses, or at least compares with, that of tandem mass spectrometry (MS/MS), the most selective technique available. Optical excitation and ionization schemes Optical ionization schemes are developed to produce an element-selective ion source for various elements. Most of the elements of the periodic table have been resonantly ionized by using one of five major optical routes based on the principle of RIMS. The routes involve the absorption of two or three photons to achieve excitation and ionization and are built on optically allowed transitions between atomic levels, called bound-bound transitions. For an atom of the element to be promoted into the continuum, the photon energies must lie within the range of the selected tunable lasers, and the energy supplied by the final photon must exceed the remaining ionization energy of the atom. The optical ionization schemes are denoted by the number of photons necessary to make the ion pair. In the first two schemes, Schemes 1 and 2, two photons (and processes) are involved: one photon excites the atom from the ground state to an intermediate state while the second photon ionizes the atom. 
In Schemes 3 and 4, three photons (and processes) are involved. The first two distinct photons create consecutive bound-bound transitions within the selected atom, while the third photon is absorbed for ionization. Scheme 5 is a three-photon, two-intermediate-level photoionization process: after the first two photons have been absorbed, populating two successive intermediate levels, the third photon achieves ionization. The RIS process can be used to ionize all elements on the periodic table, except helium and neon, using available lasers. In fact, it is possible to ionize most elements with a single laser set-up, thus enabling rapid switching from one element to another. In the early days, optical schemes based on RIMS were used to study over 70 elements, and over 39 elements can be ionized with a single laser combination using a rapid computer-modulated framework that switches elements within seconds. Applications As an analytical technique, RIS is useful because of several of its operating characteristics: an extremely low detection limit, such that sample quantities of the order of 10⁻¹⁵ can be identified; the extremely high sensitivity and elemental selectivity, useful in micro- and trace analysis when coupled with mass spectrometers; and the ability of the pulsed laser ion source to produce isobarically pure ion beams. A major advantage of using resonance ionization is that it is a highly selective ionization mode; it is able to target a single type of atom among a background of many types of atoms, even when the background atoms are much more abundant than the target atoms. In addition, resonance ionization combines the high selectivity that is desired in spectroscopy methods with ultrasensitivity, thus making resonance ionization useful when analyzing complex samples with several atomic components. Resonance ionization spectroscopy (RIS) thus has a wide range of research and industrial applications. These include characterizing the diffusion and chemical reaction of free atoms in a gas medium, solid-state surface analysis using direct sampling, studying the degree of concentration variation in a dilute vapor, detecting the allowable limits on the number of particles in a semiconductor device, and estimating the flux of solar neutrinos on Earth. Other uses include determining high-precision values for plutonium and uranium isotopes in a rapid fashion, investigating the atomic properties of technetium at the ultra-trace level, and capturing the concurrent excitation of stable daughter atoms with the decay of their parent atoms, as is the case for alpha particles, beta rays, and positrons. RIS is now in very common use in research facilities where the quick and quantitative determination of the elemental composition of materials is important. Pulsed laser light sources provide higher photon fluxes than continuous-wave lasers do; however, the use of pulsed lasers currently limits wider application of RIMS in two ways. First, photoions are created only during short laser pulses, thus significantly reducing the duty cycle of pulsed resonance ionization mass spectrometers relative to their continuous-beam counterparts. Second, incessant drifts in laser pointing and pulse timing, alongside jitter between pulses, severely hamper reproducibility. 
These issues affect the extent to which resonance ionization can be used to solve some of the challenges confronting practical analysts today; even so, applications of RIMS abound in traditional and emerging disciplines such as cosmochemistry, medical research, environmental chemistry, geophysical sciences, nuclear physics, genome sequencing, and semiconductors. See also Resonance-enhanced multiphoton ionization Rydberg ionization spectroscopy Photoionization Atmospheric-pressure laser ionization Radiometric dating Electron excitation Tunable lasers Cosmochemistry References Patents Further reading Payne M.G., Hurst G.S. (1985) Theory of Resonance Ionization Spectroscopy. In: Martellucci S., Chester A.N. (eds) Analytical Laser Spectroscopy. NATO ASI Series (Series B: Physics), vol 119. Springer, Boston, MA. Parks J.E., Young J.P. (2000) Resonance Ionization Spectroscopy 2000: Laser Ionization and Applications Incorporating RIS; 10th International Symposium, Knoxville, Tennessee (AIP Conference Proceedings). Mass spectrometry Ionization
Resonance ionization
[ "Physics", "Chemistry" ]
2,924
[ "Ionization", "Physical phenomena", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
32,302,176
https://en.wikipedia.org/wiki/Nanochannel%20glass%20materials
Nanochannel glass materials are an experimental mask technology that offers an alternative method for fabricating nanostructures, although optical lithography is the predominant patterning technique. Nanochannel glass materials are complex glass structures containing large numbers of parallel hollow channels. In their simplest form, the hollow channels are arranged in geometric arrays with packing densities as great as 10¹¹ channels/cm²; a rough geometric estimate of the channel spacing implied by such a density is sketched below. Channel dimensions are controllable from micrometres to tens of nanometres, while retaining excellent channel uniformity. Exact replicas of the channel glass can be made from a variety of materials. This is a low-cost method for creating identical structures with nanoscale features in large numbers. Characteristics These materials have a high density of uniform channels with diameters from 15 micrometres down to 15 nanometres. They are rigid structures with serviceable temperatures of at least 300 °C, and potentially up to 1000 °C. Furthermore, they are optically transparent photonic structures with a high degree of reproducibility. Applications They can be used as a material for chromatographic columns, unidirectional conductors, microchannel plates, and nonlinear optical devices. Other uses are as masks for semiconductor development, including ion implantation, optical lithography, and reactive ion etching. See also E-beam lithography Ion beam lithography Maskless lithography Nanolithography Photolithography Porous glass Vycor glass References Further reading "Nanochannel glass replica membranes" Materials science Glass Glass engineering and science Glass applications
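As a quick sanity check on the packing density quoted above, the figure of 10¹¹ channels/cm² can be converted into an approximate centre-to-centre channel spacing. The short sketch below assumes an idealized hexagonal close-packed array; the array geometry and the code itself are illustrative assumptions, not properties stated by the source.

```python
import math

# Assumed idealized geometry: channels on a hexagonal (triangular) lattice.
# Each lattice site then occupies an area of (sqrt(3)/2) * pitch^2, so the areal
# density n satisfies n = 2 / (sqrt(3) * pitch^2)  =>  pitch = sqrt(2 / (sqrt(3) * n)).
density_per_cm2 = 1e11                     # channels per cm^2, as quoted above

pitch_cm = math.sqrt(2.0 / (math.sqrt(3.0) * density_per_cm2))
pitch_nm = pitch_cm * 1e7                  # 1 cm = 1e7 nm

print(f"centre-to-centre channel spacing ≈ {pitch_nm:.0f} nm")
```

The result, roughly 34 nm, is consistent with the channel dimensions in the tens-of-nanometres range mentioned in the text.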
Nanochannel glass materials
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
309
[ "Glass engineering and science", "Applied and interdisciplinary physics", "Glass", "Unsolved problems in physics", "Materials science", "Homogeneous chemical mixtures", "nan", "Amorphous solids" ]
32,302,578
https://en.wikipedia.org/wiki/Cobalamin%20biosynthesis
Cobalamin biosynthesis is the process by which bacteria and archea make cobalamin, vitamin B12. Many steps are involved in converting aminolevulinic acid via uroporphyrinogen III and adenosylcobyric acid to the final forms in which it is used by enzymes in both the producing organisms and other species, including humans who acquire it through their diet. The feature which distinguishes the two main biosynthetic routes is whether the cobalt that is at the catalytic site in the coenzyme is incorporated early (in anaerobic organisms) or late (in aerobic organisms) and whether oxygen is required. In both cases, the macrocycle that will form a coordination complex with the cobalt ion is a corrin ring, specifically one with seven carboxylate groups called cobyrinic acid. Subsequently, amide groups are formed on all but one of the carboxylates, giving cobyric acid, and the cobalt is ligated by an adenosyl group. In the final part of the biosynthesis, common to all organisms, an aminopropanol sidechain is added to the one free carboxylic group and assembly of the nucleotide loop, which will provide the second ligand for the cobalt, is completed. Many prokaryotic species cannot biosynthesize adenosylcobalamin, but can make it from cobalamin which they assimilate from external sources. In humans, dietary sources of cobalamin are bound after ingestion as transcobalamins and converted to the coenzyme forms in which they are used. Cobalamin Cobalamin (vitamin B12) is the largest and most structurally complex vitamin. It consists of a modified tetrapyrrole, a corrin, with a centrally chelated cobalt ion and is usually found in one of two biologically active forms: methylcobalamin and adenosylcobalamin. Most prokaryotes, as well as animals, have cobalamin-dependent enzymes that use it as a cofactor, whereas plants and fungi do not use it. In bacteria and archaea, these enzymes include methionine synthase, ribonucleotide reductase, glutamate and methylmalonyl-CoA mutases, ethanolamine ammonia-lyase, and diol dehydratase. In certain mammals, cobalamin is obtained through the diet, and is required for methionine synthase and methylmalonyl-CoA mutase. In humans, it plays essential roles in folate metabolism and in the synthesis of the citric acid cycle intermediate, succinyl-CoA. Overview of cobalamin biosynthesis There are at least two distinct cobalamin biosynthetic pathways in bacteria: Aerobic pathway that requires oxygen and in which cobalt is inserted late in the pathway; found in Pseudomonas denitrificans and Rhodobacter capsulatus. Anaerobic pathway in which cobalt insertion is the first committed step towards cobalamin synthesis; found in Salmonella typhimurium, Bacillus megaterium, and Propionibacterium freudenreichii subsp. shermanii. Either pathway can be divided into two parts: Corrin ring synthesis leading to cobyrinic acid, with seven carboxylate groups. In the anaerobic pathway this already contains cobalt but in the aerobic pathway the material formed at that stage is hydrogenobyrinic acid, without the bound cobalt. Insertion of cobalt, where not already present; formation of amides on all but one of the carboxylate groups to give cobyric acid; attachment of an adenosyl group as ligand to the cobalt; attachment of an aminopropanol sidechain to the one free carboxylic group and assembly of the nucleotide loop which will provide the second ligand for the cobalt. A further type of synthesis occurs through a salvage pathway, where outside corrinoids are absorbed to make B12. 
Species from the following genera and the following individual species are known to synthesize cobalamin: Propionibacterium shermanii, Pseudomonas denitrificans, Streptomyces griseus, Acetobacterium, Aerobacter, Agrobacterium, Alcaligenes, Azotobacter, Bacillus, Clostridium, Corynebacterium, Flavobacterium, Lactobacillus, Micromonospora, Mycobacterium, Nocardia, Proteus, Rhizobium, Salmonella, Serratia, Streptococcus and Xanthomonas. Detail of steps up to formation of uroporphyrinogen III In the early steps of the biosynthesis, a tetrapyrrolic structural framework is created by the enzymes deaminase and cosynthetase which transform aminolevulinic acid via porphobilinogen and hydroxymethylbilane to uroporphyrinogen III. The latter is the first macrocyclic intermediate common to haem, chlorophyll, sirohaem and cobalamin itself. Detail of steps from uroporphyrinogen III to acid a,c-diamide in aerobic organisms The biosynthesis of cobalamin diverges from that of haem and chlorophyll at uroporphrinogen III: its transformation involves the sequential addition of methyl (CH3) groups to give intermediates that were given trivial names according to the number of these groups that have been incorporated. Hence, the first intermediate is precorrin-1, the next is precorrin-2 and so on. The incorporation of all eight additional methyl groups which occur in cobyric acid was investigated using 13C methyl-labelled S-adenosyl methionine. It was not until scientists at Rhône-Poulenc Rorer used a genetically-engineered strain of Pseudomonas denitrificans, in which eight of the cob genes involved in the biosynthesis of the vitamin had been overexpressed, that the complete sequence of methylation and other steps could be determined, thus fully establishing all the intermediates in the pathway. From uroporphyrinogen III to precorrin-2 The enzyme CobA catalyses two methylations, to give precorrin-2: (1a) uroporphyrinogen III + S-adenosyl methionine precorrin-1 + S-adenosyl-L-homocysteine (1b) precorrin-1 + S-adenosyl methionine precorrin-2 + S-adenosyl-L-homocysteine From precorrin-2 to precorrin-3A The enzyme CobI then converts this to precorrin-3A: precorrin-2 + S-adenosyl methionine precorrin-3A + S-adenosyl-L-homocysteine From precorrin-3A to precorrin-3B Next, the enzyme CobG transforms precorrin-3A to precorrin-3B: precorrin-3A + NADH + H+ + O2 precorrin-3B + NAD+ + H2O This enzyme is an oxidoreductase that requires oxygen and hence the reaction can only operate under aerobic conditions. The naming of these precorrins as 3A and 3B reflects the fact that each contains three more methyl groups than uroporphyrinogen III but with different structures: in particular, precorrin-3B has an internal γ-lactone ring formed from the ring A acetic acid sidechain closing back on to the macrocycle. From precorrin-3B to precorrin-4 The enzyme CobJ continues the theme of methyl group insertion. Importantly, during this step the macrocycle ring-contracts so that the product contains for the first time the corrin core which characterises cobalamin. precorrin-3B + S-adenosyl methionine precorrin-4 + S-adenosyl-L-homocysteine From precorrin-4 to precorrin-5 Methyl group insertions continue as the enzyme CobM acts on precorrin-4: precorrin-4 + S-adenosyl methionine precorrin-5 + S-adenosyl-L-homocysteine The newly-inserted methyl group is added to ring C at the carbon attached to the methylene (CH2) bridge to ring B. 
This is not its final location on cobalamin as a later step involves its rearrangement to an adjacent ring carbon. From precorrin-5 to precorrin-6A The enzyme CobF now removes the acetyl group located at position 1 of the ring system in precorrin-4 and replaces it with a newly-introduced methyl group. The name of the product, precorrin-6A, reflects the fact that six methyl groups in total have been added to uroporphyrinogen III up to this point. However, since one of these has been extruded with the acetate group, the structure of precorrin-6A contains just the remaining five. precorrin-5 + S-adenosyl methionine + H2O precorrin-6A + S-adenosyl-L-homocysteine + acetate From precorrin-6A to precorrin-6B The enzyme CobK now reduces a double bond in ring D using NADPH: precorrin-6A + NADPH + H+ precorrin-6B + NADP+ Precorrin-6B therefore differs in structure from precorrin-6A only by having an extra two hydrogen atoms. From precorrin-6B to precorrin-8 The enzyme CobL has two active sites, one catalysing two methyl group additions and the other the decarboxylation of the CH2COOH group on ring D, so that this substituent becomes a simple methyl group: precorrin-6B + 2 S-adenosyl methionine precorrin-8X + 2 S-adenosyl-L-homocysteine + CO2 From precorrin-8 to hydrogenobyrinic acid The enzyme CobH catalyzes a rearrangement reaction, with the result that the methyl group that had been added to ring C is isomerised to its final location, an example of intramolecular transfer: precorrin-8X hydrogenobyrinate From hydrogenobyrinic acid to hydrogenobyrinic acid a,c-diamide The next enzyme in the pathway, CobB, selectively converts two of the eight carboxylic acid groups into their primary amides. ATP is used to provide the energy for amide bond formation, with the transferred ammonia coming from glutamine: hydrogenobyrinic acid + 2 ATP + 2 glutamine + 2 H2O hydrogenobyrinic acid a,c-diamide + 2 ADP + 2 phosphate + 2 glutamic acid From hydrogenobyrinic acid a,c-diamide to acid a,c-diamide Cobalt(II) insertion into the macrocycle is catalysed by the enzyme Cobalt chelatase (CobNST): hydrogenobyrinic acid a,c-diamide + Co2+ + ATP + H2O acid a,c-diamide + ADP + phosphate + H+ It is at this stage that the aerobic pathway and the anaerobic pathway merge, with later steps being chemically identical. Detail of steps from uroporphyrinogen III to a,c-diamide in anaerobic organisms Many of the steps beyond uroporphyrinogen III in anaerobic organisms such as Bacillus megaterium involve chemically similar but genetically distinct transformations to those in the aerobic pathway. From precorrin-2 to cobalt-sirohydrochlorin The key difference in the pathways is that cobalt is inserted early in anaerobic organisms by first oxidising precorrin-2 to its fully aromatised form sirohydrochlorin and then to that compound's cobalt(II) complex. These reactions are catalysed by CysG and Sirohydrochlorin cobaltochelatase. 
From cobalt-sirohydrochlorin to cobalt-factor III As in the aerobic pathway, the third methyl group is introduced by a methyltransferase enzyme, CbiL: cobalt-sirohydrochlorin + S-adenosyl methionine cobalt-factor III + S-adenosyl-L-homocysteine From cobalt-factor III to cobalt-precorrin-4 Methylation and ring contraction to form the corrin macrocycle occurs next, catalysed by the enzyme Cobalt-factor III methyltransferase (CbiH, ) cobalt-factor III + S-adenosyl methionine cobalt-precorrin-4 + S-adenosyl-L-homocysteine In this pathway, the resulting material contains a δ-lactone, a six-membered ring, rather than the γ-lactone (five-membered ring) of precorrin-3B. From cobalt-precorrin-4 to cobalt-precorrin-5A The introduction of the methyl group at C-11 in the next step is catalysed by Cobalt-precorrin-4 methyltransferase (CbiF, ) cobalt-precorrin-4 + S-adenosyl methionine cobalt-precorrin-5 + S-adenosyl-L-homocysteine From cobalt-precorrin-5A to cobalt-precorrin-5B The scene is now set for the extrusion of the two-carbon fragment corresponding to the acetate released in the formation of precorrin-6A in the aerobic pathway. In this case the fragment released is acetaldehyde and this is catalysed by CbiG: cobalt-precorrin-5A + H2O cobalt-precorrin-5B + acetaldehyde + 2 H+ From cobalt-precorrin-5B to acid a,c-diamide The steps from cobalt-precorrin-5B to acid a,c-diamide in the anaerobic pathway are essentially chemically identical to those in the aerobic sequence. The intermediates are called cobalt-precorrin-6A, cobalt-precorrin-6B, cobalt-precorrin-8 and cobyrinic acid. The enzymes in sequence are CbiD; Cobalt-precorrin-6A reductase (CbiJ, ); CbiT, Cobalt-precorrin-8 methylmutase (CbiC, ) and CbiA. The final enzyme forms acid a,c-diamide as the two pathways converge. Detail of steps from acid a,c-diamide to adenosylcobalamin Aerobic and anaerobic organisms share the same chemical pathway beyond acid a,c-diamide and this is illustrated for the cob gene products. From acid a,c-diamide to adenosylcobyric acid The cobalt(II) is reduced to by the enzyme CobR and then the enzyme CobO attaches an adenosyl ligand to the metal. Next, the enzyme CobQ converts all the carboxylic acids, except the propionic acid on ring D, to their primary amides. From adenosylcobyric acid to adenosylcobinamide phosphate In aerobic organisms, the enzyme CobCD now attaches (R)-1-amino-2-propanol (derived from threonine) to the propionic acid, forming adenosylcobinamide and the enzyme CobU phosphorylates the terminal hydroxy group to form adenosylcobinamide phosphate. The same final product is formed in anaerobic organisms by direct reaction of adenosylcobyric acid with (R)-1-amino-2-propanol O-2-phosphate (derived from threonine-O-phosphate by the enzyme CobD) catalysed by the enzyme CbiB. From adenosylcobinamide phosphate to adenosylcobalamin In a separate branch of the pathway, 5,6-dimethylbenzimidazole is biosynthesised from flavin mononucleotide by the enzyme 5,6-dimethylbenzimidazole synthase and converted by CobT to alpha-ribazole 5' phosphate. Then the enzyme CobU activates adenosylcobinamide phosphate by formation of adenosylcobinamide-GDP and CobV links the two substrates to form Adenosylcobalamin-5'-phosphate. In the final step to the coenzyme, CobC removes the 5' phosphate group: Adenosylcobalamin-5'-phosphate + H2O adenosylcobalamin + phosphate The complete biosynthetic route involves a long linear path that requires about 25 contributing enzyme steps. 
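The long linear route just described lends itself to a compact programmatic summary. The sketch below encodes the aerobic branch and the shared late steps as an ordered list of (substrate, enzyme, product) tuples drawn from the text above; it is a convenience representation only, it omits co-substrates such as S-adenosyl methionine, NAD(P)H, ATP, O2 and glutamine, and the name given for the cobalt-containing diamide (cobyrinic acid a,c-diamide) fills in what the text leaves truncated.

```python
# Aerobic cobalamin route, summarized from the description above as
# (substrate, enzyme, product) steps. Co-substrates are omitted; this is an
# illustrative summary of the text, not a record from a pathway database.
AEROBIC_ROUTE = [
    ("uroporphyrinogen III", "CobA", "precorrin-2"),                     # two methylations via precorrin-1
    ("precorrin-2", "CobI", "precorrin-3A"),
    ("precorrin-3A", "CobG", "precorrin-3B"),                            # requires O2
    ("precorrin-3B", "CobJ", "precorrin-4"),                             # ring contraction to the corrin core
    ("precorrin-4", "CobM", "precorrin-5"),
    ("precorrin-5", "CobF", "precorrin-6A"),                             # acetate extruded
    ("precorrin-6A", "CobK", "precorrin-6B"),
    ("precorrin-6B", "CobL", "precorrin-8"),                             # two methylations + decarboxylation
    ("precorrin-8", "CobH", "hydrogenobyrinic acid"),                    # methyl-group rearrangement
    ("hydrogenobyrinic acid", "CobB", "hydrogenobyrinic acid a,c-diamide"),
    ("hydrogenobyrinic acid a,c-diamide", "CobNST", "cobyrinic acid a,c-diamide"),  # cobalt insertion
    # Steps shared by the aerobic and anaerobic pathways:
    ("cobyrinic acid a,c-diamide", "CobR + CobO", "adenosylcobyrinic acid a,c-diamide"),  # Co reduction, adenosylation
    ("adenosylcobyrinic acid a,c-diamide", "CobQ", "adenosylcobyric acid"),               # amidation
    ("adenosylcobyric acid", "CobCD + CobU", "adenosylcobinamide phosphate"),
    ("adenosylcobinamide phosphate", "CobU + CobV + CobC", "adenosylcobalamin"),          # nucleotide loop assembly
]

def print_route(route):
    """Print the route as a simple substrate -(enzyme)-> product chain."""
    for substrate, enzyme, product in route:
        print(f"{substrate} -({enzyme})-> {product}")

if __name__ == "__main__":
    print_route(AEROBIC_ROUTE)
```

Laying the steps out this way makes it easier to see why the complete route is described above as requiring about 25 contributing enzyme steps once the individual late-stage activities are counted separately.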
Other pathways of cobalamin metabolism Salvage pathways in prokaryotes Many prokaryotic species cannot biosynthesize adenosylcobalamin, but can make it from cobalamin. These organisms are capable of cobalamin transport into the cell and its conversion to the required coenzyme form. Even organisms such as Salmonella typhimurium that can make cobalamin also assimilate it from external sources when available. Uptake into cells is facilitated by ABC transporters which absorb the cobalamin through the cell membrane. Cobalamin metabolism in humans In humans, dietary sources of cobalamin are bound after ingestion as transcobalamins. They are then converted to the coenzyme forms in which they are used. Methylmalonic aciduria and homocystinuria type C protein is the enzyme which catalyzes the decyanation of cyanocobalamin as well as the dealkylation of alkylcobalamins including methylcobalamin and adenosylcobalamin. Further reading References External links Prof Sir Alan Battersby: the biosynthesis of Vitamin B12 St. Catharine's College, Cambridge, video Protein families Vitamin B12 Biosynthesis
Cobalamin biosynthesis
[ "Chemistry", "Biology" ]
3,929
[ "Protein classification", "Biosynthesis", "Chemical synthesis", "Protein families", "Metabolism" ]
32,304,493
https://en.wikipedia.org/wiki/NextGenPower
NextGenPower is an integrated project which aims to demonstrate new alloys and coatings in boiler, turbine and interconnecting pipework. The concept of NextGenPower is to perform innovative demonstrations that will significantly contribute to the EU target to increase the efficiency in existing and new build pulverized coal power plants. Background Carbon Capture and Storage (CCS) is envisaged to be the main transition technology to comply with the reduction targets set by the European Commission. However, CCS has the drawback that the electrical efficiency of the coal-fired power plant will drop significantly. The efficiency loss caused by CCS in coal-fired power plants will range from 4 to 12% points, depending on the CCS technology chosen. To overcome this drawback, one has to increase the plant efficiency or the share of biomass co-firing. Both options are limited due to the quality of the current available coatings and materials. Live steam temperatures well in excess of 700 °C are necessary to compensate the efficiency loss caused by CCS and to achieve a net efficiency of 45%. NextGenPower aims to develop and demonstrate coatings and materials that can be applied in ultra-supercritical (in excess of 700˚C) conditions. Summary The NextGenPower project was due to start on 1 May 2010 and have a duration of 48 months. The budget is €10.3million, with an EU contribution making up €6million of the budget. Objectives The following scientific and technological objectives have been defined for NextGenPower, leading to the following project activities: Demonstrating the application of precipitation hardened Nickel-alloys for pulverized coal-fired boilers having allowable levels of creep and fatigue evolving from high temperatures envisaged with USC Demonstrating the application of cost-effective fireside coatings, compatible with affordable and available tube alloys, for coal-fired boilers capable of withstanding the corrosive conditions envisaged with USC and the environment of biomass co-firing under different conditions Demonstrating the application of cost-effective steam side coatings/protective layers to extend the life of boiler tube and interconnecting pipe work, and to facilitate the use of cheaper alternative materials without compromising component life or reliability Demonstrating the application of Ni-alloys for interconnecting pipe work between boiler and steam turbine withstanding high temperatures envisaged with USC and to explore alternative design options to allow for the use of cheaper, more available materials than Ni-alloys Demonstrating the capability to cast, forge and weld Ni-alloys for critical steam turbine components Sub-projects There are also four sub-projects which will be focused on throughout the course of the NextGenPower project. Sub Project 1 – boiler NextGenPower aims at overcoming fireside corrosion and steamside oxidation in high temperature parts through the application of suitable coatings. The main goal for Sub Project 1 is to demonstrate the benefits and limitations of materials and coatings for the fireside under biomass co-firing conditions as well as for the boiler and main steam pipework under USC and current steam conditions. Sub Project 2 - steam turbine The main goals for Sub Project 2 are to select the best candidate alloys for the HP and ID steam turbines operating at high steam temperatures (≥720˚C). A number of nickel-base alloys have been developed whose properties have been proven at the laboratory scale and for small-scale components. 
The main uncertainty in the application of these alloys to steam turbines is the ability to manufacture, weld and inspect large components. The performance in service presents a much smaller risk, since there is confidence that the mechanical behaviour can be modelled on the basis of the material properties. This philosophy follows the approach applied in the development, demonstration and exploitation of materials technology for 700–720 °C steam turbines in other projects (AD700, COMTES, EON 50plus), where the first commercial steam turbine will enter service without prior operation in a test loop. Following alloy selection, full-scale steam turbine casings and rotor forgings will be manufactured and materials properties demonstrated through implementation of a mechanical testing programme. Full-scale demonstration of the welding technology and the NDE capability required for welded rotor and casing manufacture will also be carried out. Sub Project 3 – integration Sub Project 3 provides a framework for the testing and demonstration work in the overall project. It will review the expected operating parameters required for NGP plants, with and without capture technologies, and with and without biomass co-firing. The aim is to evaluate a series of NextGenPower plants with CCS systems in terms of their power generation efficiencies and emissions per unit of electricity generated. Sub Project 4 – dissemination The main goal for Sub Project 4 is to ensure that the generic results and results from topical activities are actively disseminated. It promotes results and approaches and encourages their duplication elsewhere, thereby contributing to the EU objectives of emissions reduction, efficiency improvement and security of energy supply. Another objective is to facilitate the sharing of policies, approaches and knowledge between the participants. Participants Aubert & Duval Cranfield University Doosan Babcock E.on Goodwin Steel Castings Ltd Kema Monitor Coatings Saarschmiede Skoda Power TUD VTT Technical Research Centre of Finland VUZ References External links NextGenPower NextGenPower Brochure Energy engineering
NextGenPower
[ "Engineering" ]
1,065
[ "Energy engineering" ]
31,304,687
https://en.wikipedia.org/wiki/Defense%20in%20insects
Insects have a wide variety of predators, including birds, reptiles, amphibians, mammals, carnivorous plants, and other arthropods. The great majority (80–99.99%) of individuals born do not survive to reproductive age, with perhaps 50% of this mortality attributed to predation. To cope with this ongoing pressure to escape predation, insects have evolved a wide range of defense mechanisms. The only restraint on these adaptations is that their cost, in terms of time and energy, must not exceed the benefit that they provide to the organism. The further a feature tips the balance towards benefit, the more likely it is that selection will act upon the trait, passing it down to further generations. The opposite also holds true: defenses that are too costly have little chance of being passed down. Examples of defenses that have withstood the test of time include hiding, escape by flight or running, and firmly holding ground to fight, as well as producing chemicals and social structures that help prevent predation. One of the best known modern examples of the role that evolution has played in insect defenses is the link between melanism and the peppered moth (Biston betularia). Over the past two centuries in England, peppered moth populations have evolved so that darker morphs have become more prevalent than lighter morphs, reducing the risk of predation. However, the underlying mechanism is still debated. Hiding Walking sticks (order Phasmatodea), many katydid species (family Tettigoniidae), and moths (order Lepidoptera) are just a few of the insects that have evolved specialized cryptic morphology. This adaptation allows them to hide within their environment because of a resemblance to the general background or an inedible object. When an insect looks like an inedible or inconsequential object in the environment that is of no interest to a predator, such as leaves and twigs, it is said to display mimesis, a form of crypsis. Insects may also take on different types of camouflage, another type of crypsis. These include resembling a uniformly colored background as well as being light below and dark above, or countershaded. Additionally, camouflage is effective when it results in patterns or unique morphologies that disrupt outlines so as to better merge the individual into the background. Cost and benefit perspective Butterflies (order Lepidoptera) are a good example of the balancing act between the costs and benefits associated with defense. In order to take off, butterflies must first raise their thorax to a sufficiently high temperature. The necessary energy is derived both internally, through the muscles, and externally, by picking up solar radiation through the body or wings. When looked at in this light, cryptic coloration to escape from predators, markings to attract conspecifics or warn predators (aposematism), and the absence of color to absorb adequate solar radiation all play key roles in survival. Only when these three demands are in balance does the butterfly maximize its fitness. Mimicry Mimicry is a form of defense in which a species resembles another species recognized by natural enemies, giving it protection against predators. The resemblance among mimics does not denote common ancestry. Mimicry works if and only if predators are able to learn from eating distasteful species. It is a three-part system that involves a model species, a mimic of that species, and a predatory observer that acts as a selective agent. 
If learning is to be successful, then all models, mimics, and predators must co-exist, a notion feasible within the context of geographic sympatry. Mimicry is divided into two parts, Batesian mimicry and Müllerian mimicry. Batesian mimicry In Batesian mimicry, an aposematic inedible model has an edible mimic. Automimics are individuals that, due to environmental conditions, lack the distasteful or harmful chemicals of conspecifics, but are still indirectly protected through their visibly identical relatives. An example can be found in the plain tiger (Danaus chrysippus), a non-edible butterfly, which is mimicked by multiple species, the most similar being the female danaid eggfly (Hypolimnas misippus). Müllerian mimicry In Müllerian mimicry, a group of species benefit from each other's existence because they all are warningly colored in the same manner and are distasteful. The best examples of this phenomenon can be found within the butterfly genus Heliconius. Behavioral responses Behavioral responses to escape predation include burrowing into substrate and being active only through part of the day. Furthermore, insects may feign death, a response termed thanatosis. Beetles, particularly weevils, do this frequently. Bright colors may also be flashed underneath cryptic ones. A startle display occurs when prey takes advantage of these markings after being discovered by a predator. The striking color pattern, which often includes eyespots, is intended to evoke prompt enemy retreat. Better formed eyespots seem to result in better deterrence. Mechanical defenses Insects have had millions of years to evolve mechanical defenses. Perhaps the most obvious is the cuticle. Although its main role lies in support and muscle attachment, when extensively hardened by the cross-linking of proteins and chitin, or sclerotized, the cuticle acts as a first line of defense. Additional physical defenses include modified mandibles, horns, and spines on the tibia and femur. When these spines take on a main predatory role, they are termed raptorial. Some insects uniquely create retreats that appear uninteresting or inedible to predators. This is the case in caddisfly larvae (order Trichoptera) which encase their abdomen with a mixture of materials like leaves, twigs, and stones. Autotomy Autotomy, or the shedding of appendages, is also used to distract predators, giving the prey a chance to escape. This highly costly mechanism is regularly practiced within stick insects (order Phasmatodea) where the cost is accentuated by the possibility that legs can be lost 20% of the time during molting. Harvestmen (order Opiliones) also use autotomy as a first line of defense against predators. Chemical defenses Unlike pheromones, allomones harm the receiver at the benefit of the producer. This grouping encompasses the chemical arsenal that numerous insects employ. Insects with chemical weaponry usually make their presence known through aposematism. Aposematism is utilized by non-palatable species as a warning to predators that they represent a toxic danger. Additionally, these insects tend to be relatively large, long-lived, active, and frequently aggregate. Indeed, longer-lived insects are more likely to be chemically defended than short lived ones, as longevity increases apparency. Throughout the arthropod and insect realm, however, chemical defenses are quite unevenly distributed. There is great variation in the presence and absence of chemical arms among orders and families to even within families. 
Moreover, there is diversity among insects as to whether the defensive compounds are obtained intrinsically or extrinsically. Many compounds are derived from the main food source of insect larvae, and occasionally adults, feed, whereas other insects are able to synthesize their own toxins. In reflex bleeding, insects dispel their blood, hemolymph, or a mixture of exocrine secretions and blood as a defensive maneuver. As previously mentioned, the discharged blood may contain toxins produced within the insect source or externally from plants that the insect consumed. Reflexive bleeding occurs in specific parts of the body; for example, the beetle families Coccinellidae (ladybugs) and Meloidae bleed from the knee joints. Classification Gullan and Cranston have divided chemical defenses into two classes. Class I chemicals irritate, injure, poison, or drug individual predators. They can be further separated into immediate or delayed substances, depending on the amount of time it takes to feel their effects. Immediate substances are encountered topographically when a predator handles the insect while delayed chemicals, which are generally contained within the insect's tissues, induce vomiting and blistering. Class I chemicals include bufadienolides, cantharidin, cyanides, cardenolides, and alkaloids, all of which have greater effects on vertebrates than on other arthropods. The most frequently encountered defensive compounds in insects are alkaloids. Class II chemicals are essentially harmless. They stimulate scent and taste receptors so as to discourage feeding. They tend to have low molecular weight and are volatile and reactive, including acids, aldehydes, aromatic ketones, quinones, and terpenes. Furthermore, they may be aposematic, indicating through odors the presence of chemical defenses. The two different classes are not mutually exclusive, and insects may use combinations of the two. Pasteels, Grégoire, and Rowell-Rahier grouped chemical defenses into three types: compounds that are truly poisonous, those that restrict movement, and those that repel predators. True poisons, essentially Class I compounds, interfere with specific physiological processes or act at certain sites. Repellents are similar to those classified under Class II as they irritate the chemical sensitivity of predators. Impairment of movement and sense organs is achieved through sticky, slimy, or entangling secretions that act mechanically rather than chemically. This last grouping of chemicals has both Class I and Class II properties. Again, these three categories are not mutually exclusive, as some chemicals can have multiple effects. Examples Assassin bugs When startled, the assassin bug Platymeris rhadamanthus (family Reduviidae), is capable of spitting venom up to 30 cm at potential threats. The saliva of this insect contains at least six proteins including large amounts of protease, hyaluronidase, and phospholipase which are known to cause intense local pain, vasodilation, and edema. Cockroaches Many cockroach species (order Blattodea) have mucus-like adhesive secretions on their posterior. Although not as effective against vertebrates, these secretions foul the mouths of invertebrate predators, increasing the chances of the cockroach escaping. Termites The majority of termite soldiers secrete a rubberlike and sticky chemical concoction that serves to entangle enemies, called a fontanellar gun, and it is usually coupled with specialized mandibles. 
In nasute species of termites (contained within the subfamily Nasutitermitinae), the mandibles have receded. This makes way for an elongated, syringic nasus capable of squirting liquid glue. When this substance is released from the frontal gland reservoir and dries, it becomes sticky and is capable of immobilizing attackers. It is highly effective against other arthropods, including spiders, ants, and centipedes. Among termite species in the Apicotermitinae that are soldierless or where soldiers are rare, mouth secretions are commonly replaced by abdominal dehiscence. These termites contract their abdominal muscles, resulting in the fracturing of the abdominal wall and the expulsion of gut contents. Because abdominal dehiscence is quite effective at killing ants, the noxious chemical substance released is likely contained within the termite itself. Ants Venom is the defense of choice for many ants (family Formicidae). It is injected from an ovipositor that has been evolutionarily modified into a stinging apparatus. These ants release a complex venom mixture that can include histamine. Within the subfamily Formicinae, the stinger has been lost and instead the poison gland forcibly ejects the fluid of choice, formic acid. Some carpenter ants (genus Camponotus) also have mandibular glands that extend throughout their bodies. When these are mechanically irritated, the ant commits suicide by exploding, spilling out a sticky, entangling substance. The subfamily Dolichoderinae, which also does not possess a stinger, has a different type of defense. The anal gland secretions of this group rapidly polymerize in air and serve to immobilize predators. Leaf beetles Leaf beetles produce a spectrum of chemicals for their protection from predators. In the case of the subtribe Chrysomelina (Chrysomelinae), all live stages are protected by the occurrence of isoxazolin-5-one derived glucosides that partially contain esters of 3-nitropropanoic acid (3-NPA, beta-nitropropionic acid). The latter compound is an irreversible inhibitor of succinate dehydrogenase. Hence, 3-NPA inhibits the tricarboxylic acid cycle. This inhibition leads to neurodegeneration with symptoms similar to those caused by Huntington's disease. Since leaf beetles produce high concentrations of 3-NPA esters, a powerful chemical defense against a wide range of different predators is obvious. The larvae of Chrysomelina leaf beetles developed a second defensive strategy that is based on the excretion of droplets via pairs of defensive glands at the back of the insects. These droplets are immediately presented after mechanical disturbance and contain volatile compounds that derive from sequestered plant metabolites. Due to the specialization of leaf beetles to a certain host plant, the composition of the larval secretion is species-dependent. For instance, the red poplar leaf beetle (Chrysomela populi) consumes the leaves of poplar plants, which contain salicin. This compound is taken up by the insect and then further transformed biochemically into salicylaldehyde, an odor very similar to benzaldehyde. The presence of salicin and salicylaldehyde can repel potential predators of leaf beetles. The hemolymph toxins originate from autogenous de novo biosynthesis by the Chrysomelina beetle. Essential amino acids, such as valine serve as precursors for the production of the hemolymph toxins of Chrysomelina leaf beetles. The degradation of such essential amino acids provides propanoyl-CoA. 
This compound is further transformed into propanoic acid and β-alanine. The amino group in β-alanine is then oxidized to yield either an oxime or the nitro-toxin 3-nitropropanoic acid (3-NPA). The oxime is cyclized to isoxazolin-5-one, which is transformed with α-UDP-glucose into the isoxazolin-5-one glucoside. In a final step, an ester is formed by transesterification of 3-nitropropanoyl-CoA to the 6´-position of isoxazolin-5-one glucoside. This biosynthetic route yields high millimolar concentrations of the secondary isoxazolin-5-one and 3-NPA derived metabolites. Free 3-NPA and glucosides that derive from 3-NPA and isoxazolin-5-one also occur in many genera of leguminous plants (Fabaceae). The larvae of leaf beetles from the subfamilies of e.g., Criocerinae and Galerucinae often employ fecal shields, masses of feces that they carry on their bodies to repel predators. More than just a physical barrier, the fecal shield contains excreted plant volatiles that can serve as potent predator deterrents. Wasps Ant attacks represent a large predatory pressure for many species of wasps, including Polistes versicolor. These wasps possess a gland located in the VI abdominal sternite (van de Vecht's gland) that is primarily responsible for making an ant-repellent substance. Tufts of hair near the edge of the VI abdominal sternite store and apply the ant repellent, secreting the ant repellent through a rubbing behavior. Collective defenses in social insects Many chemically defended insect species take advantage of clustering over solitary confinement. Among some insect larvae in the orders Coleoptera and Hymenoptera, cycloalexy is adopted. Either the heads or ends of the abdomen, depending on where noxious compounds are secreted, make up the circumference of a circle. The remaining larvae lie inside this defensive ring where the defenders repel predators through threatening attitudes, regurgitation, and biting. Termites (order Isoptera), like eusocial ants, wasps, and bees, rely on a caste system to protect their nests. The evolution of fortress defense is closely linked to the specialization of soldier mandibles. Soldiers can have biting-crushing, biting-cutting, cutting, symmetrical snapping, and asymmetrical snapping mandibles. These mandibles may be paired with frontal gland secretion, although snapping soldiers rarely utilize chemical defenses. Termites take advantage of their modified mandibles in phragmosis, which is the blocking of the nest with any part of the body; in this case of termites, nest entrances are blocked by the heads of soldiers. Some species of bee, mainly that of the genus Trigona, also exhibit such aggressive behavior. The Trigona fuscipennis species in particular, make use of attraction, landing, buzzing and angular flights as typical alarm behaviors. But biting is the prominent form of defense among T. fuscipennis bees and involve their strong, sharp five-toothed mandibles. T. fuscipennis bees have been discovered to engage in suicidal biting in order to defend the nest and against predators. Humans standing in the vicinity of nests are almost always attacked and experience painful bites. The bees also crawl over the intruder into the ears, eye, mouth, and other cavities. The Trigona workers give a painful and persistent bite, are difficult to remove, and usually die during the attack. Alarm pheromones warn members of a species of approaching danger. Because of their altruistic nature, they follow the rules of kin selection. 
They can elicit both aggregational and dispersive responses in social insects depending on the alarm caller's location relative to the nest. Closer to the nest, it causes social insects to aggregate and may subsequently produce an attack against the threat. The Polistes canadensis, a primitively eusocial wasp, will emit a chemical alarm substance at the approach of a predator, which will lower their nestmates' thresholds for attack, and even attract more nestmates to the alarm. The colony is thus able to rise quickly with its sting chambers open to defend its nest against predators. In nonsocial insects, these compounds typically stimulate dispersal regardless of location. Chemical alarm systems are best developed in aphids and treehoppers (family Membracidae) among the nonsocial groups. Alarm pheromones take on a variety of compositions, ranging from terpenoids in aphids and termites to acetates, an alcohol, and a ketone in honey bees to formic acid and terpenoids in ants. Immunity Insects, like nearly every other organism, are subject to infectious diseases caused by viruses, bacteria, fungi, protozoa, and nematodes. These encounters can kill or weaken the insect. Insects protect themselves against these detrimental microorganisms in two ways. Firstly, the body-enveloping chitin cuticle, in conjunction with the tracheal system and the gut lining, serve as major physical barriers to entry. Secondly, hemolymph itself plays a key role in repairing external wounds as well as destroying foreign organisms within the body cavity. Insects, along with having passive immunity, also show evidence of acquired immunity. Social insects additionally have a repertoire of behavioural and chemical "border-defences" and in the case of the ant, groom venom or metapleural gland secretions over their cuticle. Role of phenotypic plasticity Phenotypic plasticity is the capacity of a single genotype to exhibit a range of phenotypes in response to variation in the environment. For example, in Nemoria arizonaria caterpillars, the cryptic pattern changes according to season and is triggered by dietary cues. In the spring, the first brood of caterpillars resembles oak catkins, or flowers. By the summer when the catkins have fallen, the caterpillars discreetly mimic oak twigs. No intermediate forms are present in this species, although other members of the genus Nemoria, such as N. darwiniata, do exhibit transitional forms. In social insects such as ants and termites, members of different castes develop different phenotypes. For example, workers are normally smaller with less pronounced mandibles than soldiers. This type of plasticity is more so determined by cues, which tend to be non-harmful stimuli, than by the environment. Phenotypic plasticity is important because it allows an individual to adapt to a changing environment and can ultimately alter their evolutionary path. It not only plays an indirect role in defense as individuals prepare themselves physically to take on the task of avoiding predation through camouflage or developing collective mechanical traits to protect a social hive, but also a direct one. For example, cues elicited from a predator, which may be visual, acoustic, chemical, or vibrational, may cause rapid responses that alter the prey’s phenotype in real time. See also Insect ecology Antipredator adaptation Behavioral ecology References Exploding animals Insect ecology Mimicry
Defense in insects
[ "Chemistry", "Biology" ]
4,454
[ "Mimicry", "Biological defense mechanisms", "Exploding animals", "Explosions" ]
31,305,408
https://en.wikipedia.org/wiki/Social%20competence
Social competence consists of social, emotional, cognitive, and behavioral skills needed for successful social adaptation. Social competence also reflects the ability to take another's perspective on a situation, learn from past experiences, and apply that learning to changing social interactions. Social competence is the foundation upon which expectations for future interaction with others are built and perceptions of an individual's own behavior are developed. Social competence frequently encompasses social skills, social communication, and interpersonal communication. Competence is directly connected to social behavior, such as social motives, abilities, skills, habits, and knowledge. All of these social factors contribute to the development of a person's behavior. History The study of social competence began in the early 20th century with research into how children interact with their peers and function in social situations. In the 1930s, researchers began investigating peer groups and how children's characteristics affected their positions within these peer groups. In the 1950s and 1960s, research established that children's social competence was related to future mental health (such as maladaptive outcomes in adulthood), as well as problems in school settings. Research on social competence expanded greatly from this point on, as increasing amounts of evidence demonstrated the importance of social interactions. Social competence began to be viewed in terms of problem-solving skills and strategies in social situations, and was conceptualized in terms of effective social functioning and information processing. In the 1970s and 1980s, research began focusing on the impact of children's behavior on relationships, which influenced the study of the effectiveness of teaching children social skills that are age, gender, and context-specific. In an effort to determine why some children lack social skills in certain interactions, researchers developed new social information-processing models to explain the dynamics of social interaction. These models focused on factors such as behavior, the way people perceive and evaluate each other, and the processing of social cues. They also examined the selection of social goals, decision-making processes, and the implementation of chosen responses. Studies like these often examined the correlation between social cognition and social competence. A prominent researcher of social competence in the mid-1980s was Frank Gresham. He identified three sub-domains of social competence: adaptive behavior, social skills, and peer acceptance (peer acceptance is often used to assess social competence). Research during this time often focused on children who were not displaying social skills, in an effort to identify and help children who were potentially at risk of long-term negative outcomes due to poor social interactions. Gresham proposed that these children could have one of four deficits: skill deficits, in which children did not have the knowledge or cognitive abilities to carry out a certain behavior; performance deficits; self-control skill deficits; and self-control performance deficits, in which children had excessive anxiety or impulsivity that prohibited proper execution of the behaviors or skills they knew and understood. 
Despite all the developments and changes in the conceptualization of social competence throughout the 20th century, there was still a general lack of agreement about the definition and measurement of social competence during the 1980s. The definitions of the 1980s were less ambiguous than previous definitions, but they often did not acknowledge the age, situation, and skill specificity implicit in the complex construct of social competence. Approaches and theories Peer regard/status approaches These approaches define social competence based on how popular one is with his peers. The more well-liked one is, the more socially competent they are. Peer group entry, conflict resolution, and maintaining play, are three comprehensive interpersonal goals that are relevant with regard to the assessment and intervention of peer competence. Social skill approaches These approaches use behaviors as a guideline. Behaviors that demonstrate social skills are compiled and collectively identified as social competence. Relationship approaches According to these approaches, social competence is assessed by the quality of one's relationships and the ability to form relationships. Competence depends on the skills of both members of the relationship; a child may appear more socially competent if interacting with a socially skilled partner. Commentators on some online incel communities have advocated government programs wherein socially awkward men are helped or women are incentivized to go on dates with them. Functional approaches The functional approach is context-specific and concerned with the identification of social goals and tasks. This approach also focuses on the outcomes of social behavior and the processes leading to those outcomes. The importance of information-processing models of social skills in these approaches is based on the idea that social competence results from social-cognitive processes. Models Early models of social competence stress the role of context and situation specificity in operationalizing the competence construct. These models also allow for the organization and integration of the various component skills, behaviors, and cognitions associated with social competence. Whereas global definitions focus on the "ends" rather than the "means" by which such ends are achieved, a number of models directly attend to the theorized processes underlying competence. These process models are context-specific and seek to identify critical social goals and tasks associated with social competence. Other models focus on the often overlooked distinction between social competence and the indices (i.e., skills and abilities) used to gauge it. Behavioral–analytic model Goldfried and D'Zurilla developed a five-step behavioral-analytic model outlining a definition of social competence. The specific steps proposed in the model include: (1) situational analysis, (2) response enumeration, (3) response evaluation, (4) measure development, and (5) evaluation of the measure. Situation analysis – a critical situation is defined on the basis of certain criteria, which include: occurs with some frequency presents a difficult response decision results in a range of possible responses in a given population. Situation identification and analysis is accomplished through a variety of methods, including direct observation by self or others, interviews, and surveys. Response enumeration – a sampling of possible responses to each situation is obtained. 
Procedures for generating response alternatives include direct observation, role plays, and simulations in video and/or written formats. Response evaluation – the enumerated responses are judged for effectiveness by "significant others" in the environment. An important element is that a consensus must emerge, or the particular item is removed from future consideration. In the last two steps (4 and 5), a measure for assessing social competence is developed and evaluated. Social information-processing model A social information-processing model is a widely used means for understanding social competence. The social information-processing model focuses more directly on the cognitive processes underlying response selection, enactment, and evaluation. Using a computer metaphor, the reformulated social information-processing model outlines a six-step nonlinear process with various feedback loops linking children's social cognition and behavior. Difficulties arising at any of the steps generally translate into social competence deficits. The six steps are: Observation and encoding of relevant stimuli – attending to and encoding non-verbal and verbal social cues, both external and internal. Interpretation and mental representation of cues – understanding what has happened during the social encounter, as well as the cause and intent underlying the interaction. Clarification of goals – determining what one's objective is for the interaction and how to put forth an understanding of those goals. Representation of a situation is developed by accessing long-term memory or construction – the interaction is compared to previous situations stored in long-term memory and the previous outcomes of those interactions. Response decision/selection Behavioral enactment and evaluation Tri-component model Another way to conceptualize social competence is to consider three underlying subcomponents in a hierarchical framework. Social Adjustment Social Performance Social Skills The top of the hierarchy includes the most advanced level, social adjustment. Social adjustment is defined as the extent to which an individual achieves society's developmentally appropriate goals. The goals are conceived of as different "statuses" to be achieved by members of a society (e.g., health, legal, academic, or occupational, socioeconomic, social, emotional, familial, and relational statuses). The next level is social performance – or the degree to which an individual's responses to relevant social situations meet socially valid criteria. The lowest level of the hierarchy is social skills, which are defined as specific abilities (i.e., overt behavior, social cognitive skills, and emotional regulation) allowing for competent performance within social tasks. The tri-component model is useful for doctors and researchers looking to change, predict, or elaborate social functioning of children. The quadripartite model The essential core elements of competence are theorized to consist of four superordinate sets of skills, abilities, and capacities: (1) cognitive skills and abilities, (2) behavioral skills, (3) emotional competencies, and (4) motivational and expectancy sets. 
Cognitive skills and abilities – cultural and social knowledge necessary for effective functioning in society (i.e., academic and occupational skills and abilities, decision-making ability, and the processing of information) Behavioral skills – knowledge of behavioral responses and the ability to enact them (i.e., negotiation, role- or perspective-taking, assertiveness, conversational skills, and prosocial skills) Emotional skills – affect regulation and affective capacities for facilitating socially competent responding and forming relationships Motivational and expectancy sets – an individual's value structure, moral development, and sense of efficacy and control. The developmental framework Social competence develops over time, and the mastery of social skills and interpersonal social interactions emerge at various time points on the developmental continuum (infancy to adolescence) and build on previously learned skills and knowledge. Key facets and markers of social competence that are remarkably consistent across the developmental periods (early childhood, middle/late childhood, adolescence) include prosocial skills (i.e., friendly, cooperative, helpful behaviors) and self-control or regulatory skills (i.e., anger management, negotiation skills, problem-solving skills). However, as developmental changes occur in the structure and quality of interactions, as well as in cognitive and language abilities, these changes affect the complexity of skills and behaviors contributing to socially competent responding. Contributing factors Temperament Temperament is a construct that describes a person's biological response to the environment. Issues such as soothability, rhythmicity, sociability, and arousal make up this construct. Most often sociability contributes to the development of social competence. Mary Rothbart holds the most influential model of temperament due to the two main focuses on regulation and reactivity. Effort control is the main idea behind temperament regulation because the skills it requires are involved in integrating information, planning, and emotion modulation and behavior. Reactivity pertains to the provocation of motor, affective, and sensory response systems. Attachment Social experiences rest on the foundation of parent-child relationships and are important in later developing social skills and behaviors. An infant's attachment to a caregiver is important for developing later social skills and behaviors that develop social competence. Attachment helps the infant learn that the world is predictable and trustworthy or, in other instances, capricious and cruel. Ainsworth describes four attachment styles in infancy, including secure, anxious–avoidant, anxious–resistant, and disorganized/disoriented. The foundation of the attachment bond allows the child to venture out from their mother to try new experiences and interactions. Children with secure attachment styles tend to show higher levels of social competence relative to children with insecure attachment, including anxious-avoidant, anxious–resistant, and disorganized/disoriented. Parenting style Parents are the primary source of social and emotional development in infancy, early, and middle/late childhood. The socialization practices of parents influence whether their child will develop social competence. Parenting style captures two essential elements of parenting: parental warmth/responsiveness and parental control/demandingness. 
Parental responsiveness (warmth or supportiveness) refers to "the extent to which parents intentionally foster individuality, self-regulation, and self-assertion by being attuned, supportive, and acquiescent to children's special needs and demands." Parental demandingness (behavioral control) refers to "the claims parents make on children to become integrated into the family whole, by their maturity demands, supervision, disciplinary efforts and willingness to confront the child who disobeys." Categorizing parents according to whether they are high or low on parental demandingness and responsiveness creates a typology of four parenting styles: indulgent/permissive, authoritarian, authoritative, and indifferent/uninvolved. Each parenting style reflects patterns of parental values, practices, and behaviors and a distinct balance of responsiveness and demandingness. Parenting style contributes to child well-being in the domains of social competence, academic performance, psychosocial development, and problem behavior. Research based on parent interviews, child reports, and parent observations consistently finds that: Children and adolescents whose parents are authoritative rate themselves and are rated by objective measures as more socially and instrumentally competent than those whose parents are nonauthoritative. Children and adolescents whose parents are uninvolved perform most poorly in all domains. Other factors that contribute to social competence include teacher relationships, peer groups, neighborhood, and community. Related problem behaviors An important researcher in the study of social competence, Voeller, states that three clusters of problem behaviors lead to the impairment of social competence. Voeller's clusters include: (1) an aggressive and hostile group, (2) a perceptual deficits subgroup, and (3) a group with difficulties in self-regulation. Children with aggressive and hostile behaviors are those whose acting-out behaviors negatively influence their ability to form relationships and sustain interpersonal interactions. Aggressive and hostile children tend to have deficiencies in social information processing and apply inappropriate social problem-solving strategies to social situations. They also tend to search for fewer facts in a social situation and pay more attention to the aggressive social interactions presented in an interaction. Children with perceptual deficits do not perceive the environment appropriately and interpret interpersonal interactions inaccurately. They also have difficulty reading social cues, facial expressions, and body gestures. Children with self-regulation deficits tend to have classic difficulties in executive functions. Assessments While understanding of the components of social competence continues to be empirically validated, the assessment of social competence is less well studied and its procedures continue to develop. There are a variety of methods for the assessment of social competence, which often include one (or more) of the following: Child–adolescent interview Observations Parent report measures Self-report measures Sociometric measures (i.e., peer nominations) Teacher report measures Interventions Following the increased awareness of the importance of social competence in childhood, interventions are used to help children with social difficulties. Historically, these efforts did not improve children's peer status or yield long-lasting effects. 
However, these interventions also did not take into consideration that social competence problems do not occur in isolation, but alongside other problems. Thus, current intervention efforts tend to target social competence both directly and indirectly in different contexts. Preschool and early-childhood interventions Early childhood interventions targeting social skills directly improve the peer relations of children. These interventions focus on at-risk groups such as single, adolescent mothers and families of children with early behavior problems. Interventions targeting both children and families have the highest success rates. When children reach preschool age, social competence interventions focus on the preschool context and teach prosocial skills. Such interventions generally entail teaching problem-solving and conflict-management skills, sharing, and improving parenting skills. Interventions improve children's social competence and interactions with peers in the short term and they also reduce long-term risks, such as substance abuse or delinquent behavior. School-age interventions Social competence becomes more complicated as children grow older, and most intervention efforts for this age group target individual skills, the family, and the classroom setting. These programs focus on training skills in problem-solving, emotional understanding, cooperation, and self-control. Understanding one's emotions, and the ability to communicate these emotions, is strongly emphasized. The most effective programs give children the opportunity to practice the new skills that they learn. Results of social competence interventions include decreased aggression, improved self-control, and increased conflict resolution skills. Intervention Program The social competence intervention program (SCIP) is a pilot program that uses more than one sense at a time throughout the intervention so the person becomes aware of their own thought process. Before running the intervention, it was assumed that some children have perception deficits along with poor social skills. Theater classes were taken to remedy these deficits in children who have learning disabilities and attention deficit disorders. At the conclusion of the study, evidence shows that participating children began to evolve their metacognitive skills such as feelings and behaviors. See also Social skills References Behaviorism Group processes
Social competence
[ "Biology" ]
3,402
[ "Behavior", "Behaviorism" ]
31,306,433
https://en.wikipedia.org/wiki/Conserved%20Domain%20Database
The Conserved Domain Database (CDD) is a database of well-annotated multiple sequence alignment models and derived database search models for ancient domains and full-length proteins. Philosophy Domains can be thought of as distinct functional and/or structural units of a protein. These two classifications often coincide: what is found as an independently folding unit of a polypeptide chain usually also carries a specific function. Domains are often identified as recurring (sequence or structure) units, which may exist in various contexts. In molecular evolution such domains may have been utilized as building blocks, and may have been recombined in different arrangements to modulate protein function. CDD defines conserved domains as recurring units in molecular evolution, the extents of which can be determined by sequence and structure analysis. The goal of the NCBI conserved domain curation project is to provide database users with insights into how patterns of residue conservation and divergence in a family relate to functional properties, and to provide useful links to more detailed information that may help to understand those sequence/structure/function relationships. To do this, CDD curators include the following types of information in order to supplement and enrich the traditional multiple sequence alignments that form the foundation of domain models: 3-dimensional structures and conserved core motifs, conserved features/sites, phylogenetic organization, and links to electronic literature resources. Content CDD content includes NCBI manually curated domain models and domain models imported from a number of external source databases (Pfam, SMART, COG, PRK, TIGRFAMs). What is unique about NCBI-curated domains is that they use 3D-structure information to explicitly define domain boundaries, align blocks, amend alignment details, and provide insights into sequence/structure/function relationships. Manually curated models are organized hierarchically if they describe domain families that are clearly related by common descent. To provide a non-redundant view of the data, CDD clusters similar domain models from various sources into superfamilies. Searching the database The collection is also part of NCBI's Entrez query and retrieval system, crosslinked to numerous other resources. CDD provides annotation of domain footprints and conserved functional sites on protein sequences. Precalculated domain annotation can be retrieved for protein sequences tracked in NCBI's Entrez system, and CDD's collection of models can be queried with novel protein sequences via the CD-Search service, or via Batch CD-Search, which allows the computation and download of annotation for large sets of protein queries. References External links Biological databases Protein structure Protein domains
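The Entrez integration described above also means that CDD records can be queried programmatically. The following Python sketch assumes the Biopython package is installed and uses a placeholder e-mail address and an arbitrary example search term; it looks up domain models in the cdd Entrez database and prints short summaries for the top hits:

from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a real contact address

# Search the Conserved Domain Database (Entrez db "cdd") for models matching a keyword.
handle = Entrez.esearch(db="cdd", term="SH3 domain", retmax=5)
search = Entrez.read(handle)
handle.close()

# Retrieve document summaries for the matching CDD records.
handle = Entrez.esummary(db="cdd", id=",".join(search["IdList"]))
for docsum in Entrez.read(handle):
    # "Accession" and "Title" are the summary fields typically returned for cdd records.
    print(docsum.get("Accession", "?"), docsum.get("Title", ""))
handle.close()

For annotating one's own protein sequences against CDD models, NCBI's CD-Search and Batch CD-Search web services remain the primary route; the sketch above only covers keyword retrieval of the models themselves.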
Conserved Domain Database
[ "Chemistry", "Biology" ]
530
[ "Protein structure", "Protein domains", "Structural biology", "Protein classification" ]
33,907,840
https://en.wikipedia.org/wiki/Nascent%20state%20%28chemistry%29
Nascent state or in statu nascendi (Lat. newly formed moiety: in the state of being born or just emerging), is an obsolete theory in chemistry. It refers to the form of a chemical element (or sometimes compound) in the instance of their liberation or formation. Often encountered are atomic oxygen (Onasc), nascent hydrogen (Hnasc), and similar forms of chlorine (Clnasc) or bromine (Brnasc). The concept of a "nascent state" was developed to explain the observation that gases generated in situ are frequently more reactive than identical chemicals that have been stored for an extended period of time. First usage of the term was in work by Joseph Priestley around 1790. Auguste Laurent expanded on the theory in the mid 19th century. Constantine Zenghelis hypothesized in 1920 that the increased reactivity of the "nascent" state was due to the fine dispersion of the molecules, not their status as free atoms. Still popular in the early 20th century, the nascent state theory was recognized as declining by 1942. A 1990 review noted that the term was still found as a passing mention in contemporary textbooks. The review summarized that the increased activity observed is actually caused by multiple kinetic effects, and that grouping all these effects into a single term could cause chemists to view the effect too simplistically. See also Monatomic gas Radical (chemistry) References Chemical bonding Obsolete theories in chemistry
Nascent state (chemistry)
[ "Physics", "Chemistry", "Materials_science" ]
305
[ "Chemical bonding", "Condensed matter physics", "nan" ]
33,909,868
https://en.wikipedia.org/wiki/Pomeau%E2%80%93Manneville%20scenario
In the theory of dynamical systems (or turbulent flow), the Pomeau–Manneville scenario is the transition to chaos (turbulence) due to intermittency. It is named after Yves Pomeau and Paul Manneville. The scenario is realized using the Pomeau–Manneville map, a polynomial mapping (equivalently, a recurrence relation) often referred to as an archetypal example of how complex, chaotic behaviour can arise from very simple nonlinear dynamical equations. Unlike many other simple chaotic maps, the Pomeau–Manneville map exhibits intermittency, characterized by alternating periods of low- and high-amplitude fluctuations. Recent research suggests that this bursting behavior might lead to anomalous diffusion. References Dynamical systems Chaos theory Turbulence
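The article above does not give an explicit formula, but a commonly studied form of the Pomeau–Manneville map (an assumed standard form, not taken from the text) is x_{n+1} = (x_n + x_n^z) mod 1 with an intermittency exponent z > 1. A short Python sketch iterating this form shows the hallmark behavior: long laminar stretches while the orbit creeps away from x = 0, interrupted by chaotic bursts once it is reinjected across the unit interval:

import numpy as np

def pomeau_manneville_orbit(x0=0.1, z=2.0, n_steps=10_000):
    """Iterate x_{n+1} = (x_n + x_n**z) mod 1, a standard intermittent map."""
    xs = np.empty(n_steps)
    x = x0
    for i in range(n_steps):
        xs[i] = x
        x = (x + x**z) % 1.0
    return xs

orbit = pomeau_manneville_orbit()
# Values stay small and grow slowly for long runs (laminar phases),
# then jump irregularly around the unit interval (bursts).
print(orbit[:10])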
Pomeau–Manneville scenario
[ "Physics", "Chemistry", "Mathematics" ]
161
[ "Turbulence", "Mechanics", "Dynamical systems", "Fluid dynamics stubs", "Fluid dynamics" ]
33,910,137
https://en.wikipedia.org/wiki/Quisqualamine
Quisqualamine is the α-decarboxylated analogue of quisqualic acid, as well as a relative of the neurotransmitters glutamate and γ-aminobutyric acid (GABA). α-Decarboxylation of excitatory amino acids can produce derivatives with inhibitory effects. Indeed, unlike quisqualic acid, quisqualamine has central depressant and neuroprotective properties and appears to act predominantly as an agonist of the GABAA receptor and, to a lesser extent, as an agonist of the glycine receptor: its actions are inhibited in vitro by GABAA antagonists such as bicuculline and picrotoxin and by the glycine antagonist strychnine, respectively. Mg2+ and DL-AP5, which are NMDA receptor blockers, CNQX, an antagonist of both the AMPA and kainate receptors, and 2-hydroxysaclofen, a GABAB receptor antagonist, do not affect quisqualamine's actions in vitro, suggesting that it does not directly affect the ionotropic glutamate receptors or the GABAB receptor in any way. Whether it binds to and acts upon any of the metabotropic glutamate receptors like its analogue quisqualic acid, however, is unclear. See also Quisqualic acid Muscimol References Amines Anticonvulsants Ureas Oxadiazolidines Carbamates Glycine receptor agonists GABAA receptor agonists
Quisqualamine
[ "Chemistry" ]
337
[ "Functional groups", "Organic compounds", "Amines", "Bases (chemistry)", "Ureas" ]
33,911,961
https://en.wikipedia.org/wiki/Batch%20coding%20machine
Batch printing machines, marking machines, and date printing machines are used in the following applications: Printing batch numbers, manufacturing dates, expiration dates, retail prices, and other information on plain or laminated and varnished labels, cartons, polypack bags, pouches, tin bottoms, cotton bags, bottles, jars, or other solid surfaces. Adding special information at the time of packing. Adding price changes or special offers to existing labels or cartons. Types of machine Batch coding machines fall into the following two categories: Contact coding type Non contact coding type These coding machines are further subcategorized into the following types depending on their mode of operation: Automatic Once set, the machine works automatically, with the operator only having to supervise its operation and settings. Feeding, printing, and collecting are done automatically, although these features vary between makes of machine. Semi-automatic The machine works on its own, but the feeding and collecting have to be done by hand. Hand operated or manual Both the feeding and the machine itself are operated by hand. These machines are suitable for small production runs and are highly portable. Online These machines work automatically in line with other machines, or they can be of a continuous type fed from another machine, by hand, or by other feeding mechanisms; in either case they are integrated or attached in line with other online machines. Contact coding type These machines print or mark on products by contacting the product surface. They mostly use type made of metal or rubber together with an ink medium, such as liquid ink, a ribbon, or solid ink, to make an impression. The type is made in mirror image (offset), so that printing produces a correctly oriented impression. Non contact coding type These machines do not come into contact with the product in any way while printing or marking. Normally these machines use a beam or spray to mark; the beam may come from a laser, or the mark may be made by a spray of ink, and the most important factor is that the product or machine should be moving at a constant speed. The moving product is sensed by a sensor, which gives a signal to the machine, and the machine responds immediately with the printing. These machines also work as online machines. A few machines in this category are inkjet printers, laser printers, industrial inkjet coding machines, and laser marking systems. References Packaging machinery
Batch coding machine
[ "Engineering" ]
447
[ "Packaging machinery", "Industrial machinery" ]
33,914,094
https://en.wikipedia.org/wiki/Bucket%20and%20cone
Bucket and cone refer to twin attributes that are frequently held in the hands of winged genies depicted in the art of Mesopotamia, and within the context of Ancient Mesopotamian religion. The iconography is particularly frequent in art from the Neo-Assyrian Empire , and especially Assyrian palace reliefs from this period. In some instances, only the bucket is held and the other hand is held up in what may be a blessing gesture. To a lesser degree such images were also depicted in images from the Neo-Sumerian Empire, Old Assyrian Empire, Babylonian Empire, and Middle Assyrian Empire. Context These objects are often displayed in association with a stylised tree, before floral decorations, guardian figures, the king and / or his attendants and open doorways or portals. The cone was apparently held up in the right hand, the bucket held hanging downwards in the left hand of the figure, which is almost always that of a winged genie or an animal-headed demon or mythical composite (similar to the demon antagonist Anzû, though not necessarily with the same malicious connotations); only very occasionally might these attributes be borne by a fully human figure. Identity As to the identity of the twin objects, the "cone" is generally recognised as a Turkish pine cone (Pinus brutia), common in Assyria. Other common identifications suggest the male inflorescence of the date palm (Phoenix dactylifera), or a clay imitation of one or the other. The bucket was presumably either leather, metal, or basketry, and is thought to have held either holy water or pollen, or perhaps both. Uses Although fully explanatory texts regarding these objects are exceedingly rare, from written record it does seem highly likely that they were together employed in rituals of purification, as revealed by their Akkadian (also called Assyrian, Babylonian) names: Banduddû ("bucket") and mullilu ("purifier"). In this case the fir cone would be dipped in the bucket of water before being shaken in order to sprinkle water that ritually purified a person or object. Alternatively the close association of the objects with depictions of stylised trees has led to the suggestion that it depicts fertilisation. In this case the pollen from the male flower of the date palm would be being shaken onto the tree. References Assyrian art and architecture Ritual purification Sculpture of the ancient Near East Religious iconography Visual motifs
Bucket and cone
[ "Mathematics" ]
497
[ "Symbols", "Visual motifs" ]
40,699,009
https://en.wikipedia.org/wiki/Alkylbenzene%20sulfonate
Alkylbenzene sulfonates are a class of anionic surfactants, consisting of a hydrophilic sulfonate head-group and a hydrophobic alkylbenzene tail-group. Along with sodium laureth sulfate, they are one of the oldest and most widely used synthetic detergents and may be found in numerous personal-care products (soaps, shampoos, toothpaste etc.) and household-care products (laundry detergent, dishwashing liquid, spray cleaner etc.). They were introduced in the 1930s in the form of branched alkylbenzene sulfonates (BAS). However following environmental concerns these were replaced with linear alkylbenzene sulfonates (LAS) during the 1960s. Since then production has increased significantly from about one million tons in 1980, to around 3.5 million tons in 2016, making them most produced anionic surfactant after soaps. Branched alkylbenzene sulfonates Branched alkylbenzene sulfonates (BAS) were introduced in the early 1930s and saw significant growth from the late 1940s onwards, in early literature these synthetic detergents are often abbreviated as syndets. They were prepared by the Friedel–Crafts alkylation of benzene with 'propylene tetramer' (also called tetrapropylene) followed by sulfonation. Propylene tetramer being a broad term for a mixture of compounds formed by the oligomerization of propene, its use gave a mixture of highly branched structures. Compared to traditional soaps, BAS offered superior tolerance to hard water and better foaming. However, the highly branched tail made it difficult to biodegrade. BAS was widely blamed for the formation of large expanses of stable foam in areas of wastewater discharge such as lakes, rivers and coastal areas (sea foams), as well as foaming problems encountered in sewage treatment and contamination of drinking water. As such, BAS was phased out of most detergent products during the 1960s, being replaced with linear alkylbenzene sulfonates (LAS), which biodegrade much more rapidly. BAS is still important in certain agrochemical and industrial applications, where rapid biodegradability is of reduced importance. For instance, inhibiting asphaltene deposition from crude oil. Linear alkylbenzene sulfonates Linear alkylbenzene sulfonates (LAS) are prepared industrially by the sulfonation of linear alkylbenzenes (LABs), which can themselves be prepared in several ways. In the most common route benzene is alkylated by long chain monoalkenes (e.g. dodecene) using hydrogen fluoride as a catalyst. The purified dodecylbenzenes (and related derivatives) are then sulfonated with sulfur trioxide to give the sulfonic acid. The sulfonic acid is subsequently neutralized with sodium hydroxide. The term "linear" refers to the starting alkenes rather than the final product, perfectly linear addition products are not seen, in-line with Markovnikov's rule. Thus, the alkylation of linear alkenes, even 1-alkenes such as 1-dodecene, gives several isomers of phenyldodecane. Structure property relationships Under ideal conditions the cleaning power of BAS and LAS is very similar, however LAS performs slightly better in normal use conditions, due to it being less affected by hard water. Within LAS itself the detergency of the various isomers are fairly similar, however their physical properties (Krafft point, foaming etc.) are noticeably different. In particular the Krafft point of the high 2-phenyl product (i.e. the least branched isomer) remains below 0 °C up to 25% LAS whereas the low 2-phenyl cloud point is ~15 °C. 
This behavior is often exploited by producers to create either clear or cloudy products. Environmental fate The biodegradability of alkylbenzene sulfonates has been well studied, and is affected by isomerization (in this case, branching). The salt of the linear material has a median lethal concentration of 2.3 mg/liter for fish, about four times more toxic than the branched compound; however, the linear compound biodegrades far more quickly, making it the safer choice over time. It is biodegraded rapidly under aerobic conditions with a half-life of approximately 1–3 weeks; oxidative degradation initiates at the alkyl chain. Under anaerobic conditions it degrades very slowly or not at all, causing it to exist in high concentrations in sewage sludge, but this is not thought to be a cause for concern as it will rapidly degrade once returned to an oxygenated environment. References Organic sodium salts Cleaning product components Anionic surfactants Sulfonates Glycine receptor agonists Alkyl-substituted benzenes
Alkylbenzene sulfonate
[ "Chemistry", "Technology" ]
1,052
[ "Organic sodium salts", "Components", "Cleaning product components", "Salts" ]
40,699,303
https://en.wikipedia.org/wiki/Electron-rich
Electron-rich is jargon that is used in multiple related meanings with either or both kinetic and thermodynamic implications: with regard to electron transfer, electron-rich species have low ionization energy and/or are reducing agents. Tetrakis(dimethylamino)ethylene is an electron-rich alkene because, unlike ethylene, it forms an isolable radical cation. In contrast, the electron-poor alkene tetracyanoethylene is an electron acceptor, forming isolable anions. with regard to acid-base reactions, electron-rich species have high pKa's and react with weak Lewis acids. with regard to nucleophilic substitution reactions, electron-rich species are relatively strong nucleophiles, as judged by rates of attack by electrophiles. For example, compared to benzene, pyrrole is more rapidly attacked by electrophiles. Pyrrole is therefore considered to be an electron-rich aromatic ring. Similarly, benzene derivatives with electron-donating groups (EDGs) are attacked by electrophiles faster than benzene itself is. The electron-donating vs electron-withdrawing influence of various functional groups has been extensively parameterized in linear free energy relationships. with regard to Lewis acidity, electron-rich species are strong Lewis bases. See also Electron-withdrawing group References Physical organic chemistry Chemical bonding
Electron-rich
[ "Physics", "Chemistry", "Materials_science" ]
293
[ "Chemical bonding", "Condensed matter physics", "nan", "Physical organic chemistry" ]
40,705,185
https://en.wikipedia.org/wiki/Oracle%20attack
In the field of security engineering, an oracle attack is an attack that exploits the availability of a weakness in a system that can be used as an "oracle" to give a simple go/no go indication to inform attackers how close they are to their goals. The attacker can then combine the oracle with a systematic search of the problem space to complete their attack. The padding oracle attack, and compression oracle attacks such as BREACH, are examples of oracle attacks, as was the practice of "crib-dragging" in the cryptanalysis of the Enigma machine. An oracle need not be 100% accurate: even a small statistical correlation with the correct go/no go result can frequently be enough for a systematic automated attack. In a compression oracle attack the use of adaptive data compression on a mixture of chosen plaintext and unknown plaintext can result in content-sensitive changes in the length of the compressed text that can be detected even though the content of the compressed text itself is then encrypted. This can be used in protocol attacks to detect when the injected known plaintext is even partially similar to the unknown content of a secret part of the message, greatly reducing the complexity of a search for a match for the secret text. The CRIME and BREACH attacks are examples of protocol attacks using this phenomenon. See also Side-channel attack References Security engineering
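As a concrete illustration of the compression-oracle idea described above (a self-contained sketch, not code from CRIME or BREACH, and using a made-up secret), the following Python snippet shows how the length of a compressed message leaks whether attacker-chosen text resembles a secret that is compressed alongside it:

import zlib

SECRET = b"authorization_cookie=7f3b9d2c"  # hypothetical secret included in every response

def observed_length(injected: bytes) -> int:
    # The attacker sees only the length of the compressed (and then encrypted) message.
    return len(zlib.compress(injected + SECRET))

matching_guess = b"authorization_cookie="   # shares a long substring with the secret
unrelated_guess = b"zq_unrelated_padding!"  # same length, no overlap with the secret

print(observed_length(matching_guess))   # noticeably shorter, because the repeated
print(observed_length(unrelated_guess))  # substring compresses well

Attacks such as CRIME and BREACH refine this length difference character by character, using padding tricks and repeated measurements as the systematic search mentioned above.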
Oracle attack
[ "Engineering" ]
271
[ "Systems engineering", "Security engineering" ]
40,706,733
https://en.wikipedia.org/wiki/Davis%E2%80%93Beirut%20reaction
The Davis–Beirut reaction is an N,N-bond-forming heterocyclization that creates numerous types of 2H-indazoles and indazolones under both acidic and basic conditions. The Davis–Beirut reaction is named after Mark Kurth's and Makhluf Haddadin's respective universities: the University of California, Davis, and the American University of Beirut. It is appealing because it uses inexpensive starting materials and does not require toxic metals. Mechanism in base The current proposed mechanism for the Davis–Beirut reaction in base was first published in 2005 by Kurth, Olmstead, and Haddadin. The reaction occurs when an N-substituted 2-nitrobenzylamine is heated in the presence of a base, such as NaOH or KOH, and an alcohol, and it includes the formation of a carbanion. The reaction begins with the base removing a hydrogen (1) adjacent to the secondary amine group, creating a carbanion. The carbanion then extracts an oxygen from the nitro group (2), which is subsequently protonated, most likely by water. The newly formed hydroxyl group (3) then extracts the secondary amine's hydrogen, leaving a negative charge on nitrogen and creating a protonated hydroxyl group. The oxygen and its hydrogens then leave as a molecule of water (4), creating a double bond with the previously negatively charged nitrogen atom. The new pi bond makes the carbon adjacent to the nitrogen more susceptible to attack by the alcohol present (5), which in turn creates an oxygen-carbon bond and a bond between the two nitrogen atoms, and pushes electrons onto the oxygen atom originally from the amide. This molecule is then protonated (6) to create an overall net neutral charge. The hydroxyl group is protonated similarly to step three (7), creating a good leaving group. Therefore, when the alpha hydrogen of the nitrogen atom and ether group (8) is extracted by the base, the flow of electrons creates two new carbon-nitrogen bonds and causes the loss of the protonated hydroxyl group as a molecule of water. The final product produced by this mechanism is therefore a 3-oxy-substituted 2H-indazole. Slight variations of this mechanism exist depending on the starting materials and the conditions (acid or base) of the reaction. In instances of intramolecular oxygen attack (i.e. when step 5 of the proposed mechanism is intramolecular), an o-nitrobenzylidene imine intermediate is formed rather than the secondary imine of the displayed mechanism. Furthermore, Davis–Beirut reactions in acid form a carbocation intermediate instead of the carbanion proposed when the reaction occurs in base. 
2H-indazoles created via the Davis–Beirut reaction can subsequently be reacted with electrophiles, such as anhydrides, to create disubstituted 1H-indazoles that can be utilized for pharmaceutical and other industrial purposes. Applications Heterocycles, especially those containing nitrogen atoms, are highly prevalent in many pharmaceutical drugs currently on the market. Some, like those coming from 1H-indazoles, contain naturally occurring molecules, while others are purely synthetic. 2H-indazoles, though, are very rare in nature compared to 1H-indazole compounds, most likely due to the complex nature of a heterocycle including a nitrogen-nitrogen bond and an ether side chain. The discovery of the Davis–Beirut reaction therefore provides an easy and cost-effective way to synthetically create 2H-indazoles. Breakthroughs, including the successful introduction of a thioether moiety at C3 of the 2H-indazole structure, have aided in creating drug treatments for a variety of ailments, including cystic fibrosis, with the use of myeloperoxidase inhibitors. Because the reaction was discovered relatively recently, though, most research is still headed primarily by Haddadin, Kurth, or both, which has so far limited its scope. References Indazoles Nitrogen heterocycle forming reactions Name reactions
Davis–Beirut reaction
[ "Chemistry" ]
1,098
[ "Name reactions", "Ring forming reactions", "Organic reactions" ]
45,413,683
https://en.wikipedia.org/wiki/Scale%20%28chemistry%29
The scale of a chemical process refers to the rough ranges in mass or volume of a chemical reaction or process that define the appropriate category of chemical apparatus and equipment required to accomplish it, and the concepts, priorities, and economies that operate at each. While the specific terms used—and limits of mass or volume that apply to them—can vary between specific industries, the concepts are used broadly across industry and the fundamental scientific fields that support them. Use of the term "scale" is unrelated to the concept of weighing; rather it is related to cognate terms in mathematics (e.g., geometric scaling, the linear transformation that enlarges or shrinks objects, and scale parameters in probability theory), and in applied areas (e.g., in the scaling of images in architecture, engineering, cartography, etc.). Practically speaking, the scale of chemical operations also relates to the training required to carry them out, and can be broken out roughly as follows: procedures performed at the laboratory scale, which involve the sorts of procedures used in academic teaching and research laboratories in the training of chemists and in discovery chemistry venues in industry, operations at the pilot plant scale, e.g., carried out by process chemists, which, though at the lowest extreme of manufacturing operations, are on the order of 200- to 1000-fold larger than laboratory scale, and used to generate information on the behavior of each chemical step in the process that might be useful to design the actual chemical production facility; intermediate bench scale sets of procedures, 10- to 200-fold larger than the discovery laboratory, sometimes inserted between the preceding two; operations at demonstration scale and full-scale production, whose sizes are determined by the nature of the chemical product, available chemical technologies, the market for the product, and manufacturing requirements, where the aim of the first of these is literally to demonstrate operational stability of developed manufacturing procedures over extended periods (by operating the suite of manufacturing equipment at the feed rates anticipated for commercial production). For instance, the production of the streptomycin-class of antibiotics, which combined biotechnologic and chemical operations, involved use of a 130,000 liter fermenter, an operational scale approximately one million-fold larger than the microbial shake flasks used in the early laboratory scale studies. As noted, nomenclature can vary between manufacturing sectors; some industries use the scale terms pilot plant and demonstration plant interchangeably. Apart from defining the category of chemical apparatus and equipment required at each scale, the concepts, priorities and economies that obtain, and the skill-sets needed by the practicing scientists at each, defining scale allows for theoretical work prior to actual plant operations (e.g., defining relevant process parameters used in the numerical simulation of large-scale production processes), and allows economic analyses that ultimately define how manufacturing will proceed. Besides the chemistry and biology expertises involved in scaling designs and decisions, varied aspects of process engineering and mathematical modeling, simulations, and operations research are involved. See also Medicinal chemistry Process chemistry Pilot plant Chemical engineering Process engineering Operations research Further reading R. Dach, J. J. Song, F. Roschangar, W. Samstag & C.H. 
Senanayake, 2012, "The eight criteria defining a good chemical manufacturing process," Org. Process Res. Dev. 16:1697ff, DOI 10.1021/op300144g. M. D. Johnson, S.A. May, J.R. Calvin, J. Remacle, J.R. Stout, W.D. Dieroad, N. Zaborenko, B.D. Haeberle, W.-M. Sun, M.T. Miller & J. Brannan, "Development and scale-up of a continuous, high-pressure, asymmetric hydrogenation reaction, workup, and isolation." Org. Process Res. Rev. 16:1017ff, DOI 10.1021/op200362h. M. Levin, Ed., 2011, Pharmaceutical Process Scale-Up: Drugs and the Pharmaceutical, 3rd edn., London, U.K.:Informa Healthcare, . A.A. Desai, 2011, "Sitagliptin manufacture: a compelling tale of green chemistry, process intensification, and industrial asymmetric catalysis," Angew. Chem. Int. Ed. 50:1974ff, DOI 10.1002/anie.201007051. M. Zlokarnik, 2006, Scale-up in Chemical Engineering, 2nd edn., Weinheim, Germany:Wiley-VCH, . M.C.M. Hensing, R.J. Rouwenhorst, J.J. Heijnen, J.R van Dijken & J.T. Pronk, 1995, "Physiological and technological aspects of large-scale heterologous-protein production with yeasts," Antonie van Leeuwenhoek 67:261-279. Karl A. Thiel, 2004, "Biomanufacturing, from bust to boom...to bubble?," Nature Biotechnology 22:1365-1372, esp. Table 1, DOI 10.1038/nbt1104-1365, see , accessed 15 February 2015. Maximilian Lackner, Ed., 2009, Scale-up in Combustion, Wien, Austria:Process Engineering GmbH, . References Chemistry Biochemistry Chemical engineering Chemical synthesis Medicinal chemistry Organic chemistry
Scale (chemistry)
[ "Chemistry", "Engineering", "Biology" ]
1,136
[ "Chemical engineering", "nan", "Medicinal chemistry", "Biochemistry", "Chemical synthesis" ]
45,414,429
https://en.wikipedia.org/wiki/Nucleic%20acid%20hybridization
In molecular biology, hybridization (or hybridisation) is a phenomenon in which single-stranded deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) molecules anneal to complementary DNA or RNA. Though a double-stranded DNA sequence is generally stable under physiological conditions, changing these conditions in the laboratory (generally by raising the surrounding temperature) will cause the molecules to separate into single strands. These strands are complementary to each other but may also be complementary to other sequences present in their surroundings. Lowering the surrounding temperature allows the single-stranded molecules to anneal or “hybridize” to each other. DNA replication and transcription of DNA into RNA both rely upon nucleotide hybridization, as do molecular biology techniques including Southern blots and Northern blots, the polymerase chain reaction (PCR), and most approaches to DNA sequencing. Applications Hybridization is a basic property of nucleotide sequences and is taken advantage of in numerous molecular biology techniques. Overall, genetic relatedness of two species can be determined by hybridizing segments of their DNA (DNA-DNA hybridization). Due to sequence similarity between closely related organisms, higher temperatures are required to melt such DNA hybrids when compared to more distantly related organisms. A variety of different methods use hybridization to pinpoint the origin of a DNA sample, including the polymerase chain reaction (PCR). In another technique, short DNA sequences are hybridized to cellular mRNAs to identify expressed genes. Pharmaceutical drug companies are exploring the use of antisense RNA to bind to undesired mRNA, preventing the ribosome from translating the mRNA into protein. DNA-DNA hybridization Fluorescence In Situ Hybridization Fluorescence in situ hybridization (FISH) is a laboratory method used to detect and locate a DNA sequence, often on a particular chromosome. In the 1960s, researchers Joseph Gall and Mary Lou Pardue found that molecular hybridization could be used to identify the position of DNA sequences in situ (i.e., in their natural positions within a chromosome). In 1969, the two scientists published a paper demonstrating that radioactive copies of a ribosomal DNA sequence could be used to detect complementary DNA sequences in the nucleus of a frog egg. Since those original observations, many refinements have increased the versatility and sensitivity of the procedure to the extent that in situ hybridization is now considered an essential tool in cytogenetics. References External links In 1962 James Watson (b. 1928), Francis Crick (1916–2004), and Maurice Wilkins (1916–2004) jointly received the Nobel Prize in physiology or medicine for their 1953 determination of the structure of deoxyribonucleic acid (DNA). Southern hybridization & Northern hybridization Genetics techniques Molecular biology
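As a rough quantitative complement to the melting and annealing behavior described above, the Wallace rule estimates the melting temperature of a short DNA duplex as about 2 °C per A/T pair plus 4 °C per G/C pair. The Python sketch below applies this rule of thumb (valid only as a crude guide for short oligonucleotides) to two made-up 14-base sequences:

def wallace_tm(seq: str) -> float:
    """Approximate melting temperature (°C) of a short duplex via the Wallace rule."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

# A GC-rich oligo forms a more stable hybrid (higher Tm) than an AT-rich one of the same length.
print(wallace_tm("GCGCGGCAGCCGAT"))  # 14 bases, mostly G/C
print(wallace_tm("ATATTAATCATAAT"))  # 14 bases, mostly A/T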
Nucleic acid hybridization
[ "Chemistry", "Engineering", "Biology" ]
567
[ "Genetics techniques", "Biochemistry", "Genetic engineering", "Molecular biology" ]
45,414,564
https://en.wikipedia.org/wiki/LZ%20experiment
The LUX-ZEPLIN (LZ) Experiment is a next-generation dark matter direct detection experiment hoping to observe weakly interacting massive particles (WIMP) scatters on nuclei. It was formed in 2012 by combining the LUX and ZEPLIN groups. It is currently a collaboration of 30 institutes in the US, UK, Portugal and South Korea. The experiment is located at about 1,500 metres under the Sanford Underground Research Facility (SURF) in South Dakota, and is managed by the United States Department of Energy's (DOE) Lawrence Berkeley National Lab (Berkeley Lab). The experiment uses an ultra-sensitive detector made of 7 tonnes of liquid xenon to hunt for signals of WIMP-nucleus interactions. It is one of three such experiments which lead the search for direct detection of WIMPs above 10 GeV/c2, the other two being the XENONnT experiment and the PANDAX-4T experiment. In the spring of 2015, LZ passed the "Critical Decision Step 1" or CD-1 review, and became an official DOE project. U.S. Department of Energy officials on Sept. 21, 2020 formally signed off on project completion for LZ; DOE's project completion milestone is called Critical Decision 4, or CD-4. In 2024, results from LZ found no evidence of WIMPs above a mass of 9 gigaelectronvolts/c2 (GeV/c2). LZ as a low-background detector To conclusively identify WIMP-nucleus scatters, LZ must be able to observe very small energy depositions in its active volume. However, it must also be able to differentiate true WIMP scatters from other interactions caused by bias. Examples of these known "backgrounds" are interactions from gamma rays produced by trace radioactivity in the environment, interactions from neutrons produced in the environment, and interactions from cosmic ray muons produced in the upper atmosphere. The two goals of a dark matter search are to minimize the number of these background interactions, and for those that do occur, to be able to identify that they are from background (as opposed to WIMPs). First, the innermost detector is composed of a dual-phase xenon time projection chamber (TPC). This detector is the target for WIMP-nucleus scatters. As discussed in the next section, this detector can perform a 3-D reconstruction of the position of an interaction in the xenon. This enables an identification and rejection of background interactions that happen near the periphery (sides, top, and bottom) of the detector. These peripheral interactions are overwhelmingly likely to be from external gamma rays or neutrons and radioactive decays of trace radionuclides in the detector components composing the TPC and cryostats. Moreover, the relatively large density of liquid xenon allows the TPC to "self-shield" to a degree: gamma rays (neutrons) entering the TPC can travel only approximately a few centimeters (10 centimeters) before scattering and being stopped. As a result, the innermost volume of the detector is largely free of many of these backgrounds. Because it is so quiet, this innermost, or "fiducial" volume is very sensitive to observing WIMP scatters above other backgrounds, and is the space in which LZ's WIMP searches are conducted. Next, the TPC is located inside several layers of active and passive shielding to reduce rates of external gamma rays and neutrons. The TPC is housed in an inner cryostat, which maintains the temperatures needed to keep the xenon in the liquid phase (approximately 178K). This inner cryostat is nested in a larger, outer cryostat, which helps limit heat transfer into the xenon. 
External to the outer cryostat is a set of acrylic tanks holding liquid scintillator. This scintillator is liquid-alkyl-benzene (LAB) loaded with gadolinium for more efficient neutron capture. If a gamma ray or neutron scatters once inside the TPC but then exits, it will likely also deposit energy in the scintillator. These energy deposits are accompanied by emission of optical photons, which can be detected by an array of photomultiplier tubes (PMTs) located outside of the acrylic tanks. By observing such a signal in coincidence with a scatter in the TPC, it becomes possible to reject backgrounds in the TPC that might otherwise look like WIMP scatters. This is particularly important for neutrons, which can penetrate farther than gamma rays and which scatter on the xenon nucleus in the same way that WIMPs are expected to (instead of on xenon's atomic electrons). The outer-detector PMT array is located in a larger water tank. Together, the water tank and liquid scintillator also provide significant passive shielding against external gamma rays and neutrons, stopping a vast majority of them before they have the chance to enter the TPC. The whole assembly is located approximately one mile underground, in the Davis Cavern at SURF. This underground location creates a rock overburden that significantly reduces the rate of cosmic ray muons entering the TPC relative to the rate at Earth's surface. All together these different strategies ensure that LZ is a detector capable of performing a very sensitive search for dark matter scatters on xenon nuclei. LZ's Inner Detector: Dual Phase TPC The detector at the heart of LZ is a cylindrical dual-phase xenon time projection chamber (TPC). This is composed of a 7 tonne liquid xenon target and a small region of gaseous xenon above. The operational principle is as follows. When a WIMP or background scatter occurs, a small amount of kinetic energy is given to a xenon nucleus (or atomic electron). This causes the xenon atom to ricochet around the area near the site of the scatter, converting its energy into the production of prompt scintillation photons, freed (ionization) electrons, and heat. A number of the prompt scintillation photons can be detected by the photomultiplier tubes (PMTs) at the top and bottom of the detector. The ionization electrons drift upward in an externally applied electric field, and upon reaching the liquid surface, are pulled into the gas and create electroluminescence light in a stronger electric field. This electroluminescence creates a delayed "S2" signal. The externally-created electric fields are created by a set of four high voltage electrode grids: the bottom, the cathode, the gate, and the anode. Taken together, the S1 and S2 enable precise 3D reconstruction of the position of an interaction in the xenon. Because the S2 happens very close to the upper PMT array, it alone can give a good sense of where in XY (i.e. relative to the detector axis) the interaction has occurred. The time difference between the prompt S1 and delayed S2 is a proxy for the depth of the interaction: by using the drift velocity of electrons in xenon at a given electric field, one can convert the drift time to a physical depth, or Z position. Together, this XYZ position permits one to identify a quiet inner fiducial volume for sensitive WIMP searches. It also enables discrimination between WIMP-like single-site interactions and background-like multi-site interactions, like those from neutrons or gamma rays. 
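To make the position reconstruction described above concrete, the short Python sketch below turns an S1–S2 drift time into an interaction depth and uses the S2-weighted centroid of the top PMT array as a crude XY estimate. The drift velocity, PMT coordinates, and signal sizes are made-up illustrative numbers, not LZ calibration values:

import numpy as np

DRIFT_VELOCITY_MM_PER_US = 1.5  # illustrative drift speed, not the LZ value

def event_position(drift_time_us, pmt_xy, s2_per_pmt):
    """Rough (x, y, depth) of an interaction in a dual-phase xenon TPC.

    depth  : drift time multiplied by the electron drift velocity
    (x, y) : S2-signal-weighted centroid of the top PMT hit pattern
    """
    depth_mm = drift_time_us * DRIFT_VELOCITY_MM_PER_US
    xy = np.average(np.asarray(pmt_xy, dtype=float), axis=0,
                    weights=np.asarray(s2_per_pmt, dtype=float))
    return xy[0], xy[1], depth_mm

# Three hypothetical top-array PMTs and their S2 signals for one event.
pmt_xy = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
s2_per_pmt = [500.0, 80.0, 60.0]
print(event_position(drift_time_us=300.0, pmt_xy=pmt_xy, s2_per_pmt=s2_per_pmt))

In a real detector the XY reconstruction is typically more sophisticated than a simple centroid, but the drift-time-to-depth conversion works essentially as sketched.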
Note that unlike other kinds of time projection chambers, such as those used in neutrino experiments like MicroBooNE, the ionization signal here is captured entirely via the S2 light; no current is measured directly by electrodes. LZ's WIMP Searches In July 2022, the LZ collaboration published in a preprint its first upper limit on the spin-independent WIMP-nucleon scattering cross section, using approximately 60 live days of data. Future searches are intended to probe further for WIMP scatters, with a nominal search period of 1,000 days. On 28 July 2023, the LZ experiment's first results of its searches for WIMPs, previously released as a preprint, were published in Physical Review Letters, excluding spin-independent cross sections above 9.2×10−48 cm2 at a WIMP mass of 36 GeV/c2 at the 90% confidence level; on the same date, XENONnT published its first results, excluding cross sections above 2.58×10−47 cm2 at 28 GeV/c2 at the 90% confidence level. As of the 2024 results, the experiment had accumulated 280 live days of data (out of a planned 1,000) without finding evidence of dark matter, while further tightening the limits on its properties. References External links The LZ Dark Matter Experiment Science and technology in the United States Experiments for dark matter search Underground laboratories
LZ experiment
[ "Physics" ]
1,827
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
45,417,835
https://en.wikipedia.org/wiki/Materials%20Horizons
Materials Horizons is a bimonthly peer-reviewed scientific journal that covers research across the breadth of materials science at the interface between chemistry, physics, biology and engineering. The current editor-in-chief is Martina Stenzel. The journal was established in 2014. A sister journal Nanoscale Horizons was launched in 2016. Article types The journal publishes "communications" (articles for rapid publication), "reviews" (state-of-the-art accounts of a research field), "mini-reviews" (research highlights in an emerging area of materials science, usually from the past 2–3 years) and "focus articles" (educational articles providing an overview of a concept in materials science). Abstracting and indexing The journal is indexed in the Science Citation Index. Selective content is also indexed in Polymer Library, Inspec, Biotechnology and Bioengineering Abstracts, METADEX, Mechanical Engineering Abstracts, Solid State and Superconductivity Abstracts, Metal Abstracts and CSA Technology Research Database, and CABI. See also List of scientific journals in chemistry Journal of Materials Chemistry A Journal of Materials Chemistry B Journal of Materials Chemistry C References External links Chemistry journals Materials science journals Academic journals established in 2014 Royal Society of Chemistry academic journals Bimonthly journals English-language journals
Materials Horizons
[ "Materials_science", "Engineering" ]
260
[ "Materials science journals", "Materials science" ]
36,495,578
https://en.wikipedia.org/wiki/Geodesics%20on%20an%20ellipsoid
The study of geodesics on an ellipsoid arose in connection with geodesy specifically with the solution of triangulation networks. The figure of the Earth is well approximated by an oblate ellipsoid, a slightly flattened sphere. A geodesic is the shortest path between two points on a curved surface, analogous to a straight line on a plane surface. The solution of a triangulation network on an ellipsoid is therefore a set of exercises in spheroidal trigonometry . If the Earth is treated as a sphere, the geodesics are great circles (all of which are closed) and the problems reduce to ones in spherical trigonometry. However, showed that the effect of the rotation of the Earth results in its resembling a slightly oblate ellipsoid: in this case, the equator and the meridians are the only simple closed geodesics. Furthermore, the shortest path between two points on the equator does not necessarily run along the equator. Finally, if the ellipsoid is further perturbed to become a triaxial ellipsoid (with three distinct semi-axes), only three geodesics are closed. Geodesics on an ellipsoid of revolution There are several ways of defining geodesics . A simple definition is as the shortest path between two points on a surface. However, it is frequently more useful to define them as paths with zero geodesic curvature—i.e., the analogue of straight lines on a curved surface. This definition encompasses geodesics traveling so far across the ellipsoid's surface that they start to return toward the starting point, so that other routes are more direct, and includes paths that intersect or re-trace themselves. Short enough segments of a geodesics are still the shortest route between their endpoints, but geodesics are not necessarily globally minimal (i.e. shortest among all possible paths). Every globally-shortest path is a geodesic, but not vice versa. By the end of the 18th century, an ellipsoid of revolution (the term spheroid is also used) was a well-accepted approximation to the figure of the Earth. The adjustment of triangulation networks entailed reducing all the measurements to a reference ellipsoid and solving the resulting two-dimensional problem as an exercise in spheroidal trigonometry . It is possible to reduce the various geodesic problems into one of two types. Consider two points: at latitude and longitude and at latitude and longitude (see Fig. 1). The connecting geodesic (from to ) is , of length , which has azimuths and at the two endpoints. The two geodesic problems usually considered are: the direct geodesic problem or first geodesic problem, given , , and , determine and ; the inverse geodesic problem or second geodesic problem, given and , determine , , and . As can be seen from Fig. 1, these problems involve solving the triangle given one angle, for the direct problem and for the inverse problem, and its two adjacent sides. For a sphere the solutions to these problems are simple exercises in spherical trigonometry, whose solution is given by formulas for solving a spherical triangle. (See the article on great-circle navigation.) For an ellipsoid of revolution, the characteristic constant defining the geodesic was found by . A systematic solution for the paths of geodesics was given by and (and subsequent papers in 1808 and 1810). The full solution for the direct problem (complete with computational tables and a worked out example) is given by . During the 18th century geodesics were typically referred to as "shortest lines". 
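Both of the standard problems described above are implemented in widely used software. The sketch below uses the Python bindings of GeographicLib (listed among the external links), assuming the geographiclib package is installed; the coordinates are arbitrary sample points.

```python
from geographiclib.geodesic import Geodesic

wgs84 = Geodesic.WGS84  # a = 6378137 m, f = 1/298.257223563

# Inverse problem: given two points, find the distance s12 and azimuths alpha1, alpha2.
inv = wgs84.Inverse(40.6, -73.8, 51.6, -0.5)
print("s12  = %.3f km" % (inv["s12"] / 1000))
print("azi1 = %.3f deg, azi2 = %.3f deg" % (inv["azi1"], inv["azi2"]))

# Direct problem: from the first point, follow the geodesic along azimuth azi1
# for a distance s12; this should recover the second point.
direct = wgs84.Direct(40.6, -73.8, inv["azi1"], inv["s12"])
print("lat2 = %.6f, lon2 = %.6f" % (direct["lat2"], direct["lon2"]))
```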
The term "geodesic line" (actually, a curve) was coined by : Nous désignerons cette ligne sous le nom de ligne géodésique [We will call this line the geodesic line]. This terminology was introduced into English either as "geodesic line" or as "geodetic line", for example , A line traced in the manner we have now been describing, or deduced from trigonometrical measures, by the means we have indicated, is called a geodetic or geodesic line: it has the property of being the shortest which can be drawn between its two extremities on the surface of the Earth; and it is therefore the proper itinerary measure of the distance between those two points. In its adoption by other fields geodesic line, frequently shortened to geodesic, was preferred. This section treats the problem on an ellipsoid of revolution (both oblate and prolate). The problem on a triaxial ellipsoid is covered in the next section. Equations for a geodesic Here the equations for a geodesic are developed; the derivation closely follows that of . , , , , , , and also provide derivations of these equations. Consider an ellipsoid of revolution with equatorial radius and polar semi-axis . Define the flattening , the eccentricity , and the second eccentricity : (In most applications in geodesy, the ellipsoid is taken to be oblate, ; however, the theory applies without change to prolate ellipsoids, , in which case , , and are negative.) Let an elementary segment of a path on the ellipsoid have length . From Figs. 2 and 3, we see that if its azimuth is , then is related to and by where is the meridional radius of curvature, is the radius of the circle of latitude , and is the normal radius of curvature. The elementary segment is therefore given by or where and the Lagrangian function depends on through and . The length of an arbitrary path between and is given by where is a function of satisfying and . The shortest path or geodesic entails finding that function which minimizes . This is an exercise in the calculus of variations and the minimizing condition is given by the Beltrami identity, Substituting for and using Eqs. gives found this relation, using a geometrical construction; a similar derivation is presented by . Differentiating this relation gives This, together with Eqs. , leads to a system of ordinary differential equations for a geodesic We can express in terms of the parametric latitude, , using and Clairaut's relation then becomes This is the sine rule of spherical trigonometry relating two sides of the triangle (see Fig. 4), , and and their opposite angles and . In order to find the relation for the third side , the spherical arc length, and included angle , the spherical longitude, it is useful to consider the triangle representing a geodesic starting at the equator; see Fig. 5. In this figure, the variables referred to the auxiliary sphere are shown with the corresponding quantities for the ellipsoid shown in parentheses. Quantities without subscripts refer to the arbitrary point ; , the point at which the geodesic crosses the equator in the northward direction, is used as the origin for , and . If the side is extended by moving infinitesimally (see Fig. 6), we obtain Combining Eqs. and gives differential equations for and The relation between and is which gives so that the differential equations for the geodesic become The last step is to use as the independent parameter in both of these differential equations and thereby to express and as integrals. 
Applying the sine rule to the vertices and in the spherical triangle in Fig. 5 gives where is the azimuth at . Substituting this into the equation for and integrating the result gives where and the limits on the integral are chosen so that . pointed out that the equation for is the same as the equation for the arc on an ellipse with semi-axes and . In order to express the equation for in terms of , we write which follows from and Clairaut's relation. This yields and the limits on the integrals are chosen so that at the equator crossing, . This completes the solution of the path of a geodesic using the auxiliary sphere. By this device a great circle can be mapped exactly to a geodesic on an ellipsoid of revolution. There are also several ways of approximating geodesics on a terrestrial ellipsoid (with small flattening) ; some of these are described in the article on geographical distance. However, these are typically comparable in complexity to the method for the exact solution . Behavior of geodesics Fig. 7 shows the simple closed geodesics which consist of the meridians (green) and the equator (red). (Here the qualification "simple" means that the geodesic closes on itself without an intervening self-intersection.) This follows from the equations for the geodesics given in the previous section. All other geodesics are typified by Figs. 8 and 9 which show a geodesic starting on the equator with . The geodesic oscillates about the equator. The equatorial crossings are called nodes and the points of maximum or minimum latitude are called vertices; the parametric latitudes of the vertices are given by . The geodesic completes one full oscillation in latitude before the longitude has increased by . Thus, on each successive northward crossing of the equator (see Fig. 8), falls short of a full circuit of the equator by approximately (for a prolate ellipsoid, this quantity is negative and completes more that a full circuit; see Fig. 10). For nearly all values of , the geodesic will fill that portion of the ellipsoid between the two vertex latitudes (see Fig. 9). If the ellipsoid is sufficiently oblate, i.e., , another class of simple closed geodesics is possible . Two such geodesics are illustrated in Figs. 11 and 12. Here and the equatorial azimuth, , for the green (resp. blue) geodesic is chosen to be (resp. ), so that the geodesic completes 2 (resp. 3) complete oscillations about the equator on one circuit of the ellipsoid. Fig. 13 shows geodesics (in blue) emanating with a multiple of up to the point at which they cease to be shortest paths. (The flattening has been increased to in order to accentuate the ellipsoidal effects.) Also shown (in green) are curves of constant , which are the geodesic circles centered . showed that, on any surface, geodesics and geodesic circle intersect at right angles. The red line is the cut locus, the locus of points which have multiple (two in this case) shortest geodesics from . On a sphere, the cut locus is a point. On an oblate ellipsoid (shown here), it is a segment of the circle of latitude centered on the point antipodal to , . The longitudinal extent of cut locus is approximately . If lies on the equator, , this relation is exact and as a consequence the equator is only a shortest geodesic if . For a prolate ellipsoid, the cut locus is a segment of the anti-meridian centered on the point antipodal to , , and this means that meridional geodesics stop being shortest paths before the antipodal point is reached. 
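As a numerical illustration of the quantities used above, the sketch below evaluates the standard ellipsoid constants for WGS84 (flattening f = (a - b)/a, eccentricity e with e2 = f(2 - f), second eccentricity e' with e'2 = e2/(1 - e2)) and checks Clairaut's relation, namely that sin α cos β is constant along a geodesic, where α is the azimuth and β the parametric latitude. The endpoints of the sample geodesic are arbitrary, and the geographiclib package is assumed to be available.

```python
import math
from geographiclib.geodesic import Geodesic

a = 6378137.0                 # WGS84 equatorial radius (m)
f = 1 / 298.257223563         # WGS84 flattening
b = a * (1 - f)               # polar semi-axis
e2 = f * (2 - f)              # first eccentricity squared
ep2 = e2 / (1 - e2)           # second eccentricity squared
print(f"b = {b:.3f} m, e^2 = {e2:.9f}, e'^2 = {ep2:.9f}")

def clairaut(lat_deg: float, azi_deg: float) -> float:
    """sin(alpha) * cos(beta), with beta the parametric (reduced) latitude."""
    beta = math.atan((1 - f) * math.tan(math.radians(lat_deg)))
    return math.sin(math.radians(azi_deg)) * math.cos(beta)

# The constant should agree at the two ends of any geodesic.
g = Geodesic.WGS84.Inverse(-10.0, 30.0, 55.0, 140.0)
print(clairaut(-10.0, g["azi1"]), clairaut(55.0, g["azi2"]))
```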
Differential properties of geodesics Various problems involving geodesics require knowing their behavior when they are perturbed. This is useful in trigonometric adjustments , determining the physical properties of signals which follow geodesics, etc. Consider a reference geodesic, parameterized by , and a second geodesic a small distance away from it. showed that obeys the Gauss-Jacobi equation where is the Gaussian curvature at . As a second order, linear, homogeneous differential equation, its solution may be expressed as the sum of two independent solutions where The quantity is the so-called reduced length, and is the geodesic scale. Their basic definitions are illustrated in Fig. 14. The Gaussian curvature for an ellipsoid of revolution is solved the Gauss-Jacobi equation for this case enabling and to be expressed as integrals. As we see from Fig. 14 (top sub-figure), the separation of two geodesics starting at the same point with azimuths differing by is . On a closed surface such as an ellipsoid, oscillates about zero. The point at which becomes zero is the point conjugate to the starting point. In order for a geodesic between and , of length , to be a shortest path it must satisfy the Jacobi condition , that there is no point conjugate to between and . If this condition is not satisfied, then there is a nearby path (not necessarily a geodesic) which is shorter. Thus, the Jacobi condition is a local property of the geodesic and is only a necessary condition for the geodesic being a global shortest path. Necessary and sufficient conditions for a geodesic being the shortest path are: for an oblate ellipsoid, ; for a prolate ellipsoid, , if ; if , the supplemental condition is required if . Envelope of geodesics The geodesics from a particular point if continued past the cut locus form an envelope illustrated in Fig. 15. Here the geodesics for which is a multiple of are shown in light blue. (The geodesics are only shown for their first passage close to the antipodal point, not for subsequent ones.) Some geodesic circles are shown in green; these form cusps on the envelope. The cut locus is shown in red. The envelope is the locus of points which are conjugate to ; points on the envelope may be computed by finding the point at which on a geodesic. calls this star-like figure produced by the envelope an astroid. Outside the astroid two geodesics intersect at each point; thus there are two geodesics (with a length approximately half the circumference of the ellipsoid) between and these points. This corresponds to the situation on the sphere where there are "short" and "long" routes on a great circle between two points. Inside the astroid four geodesics intersect at each point. Four such geodesics are shown in Fig. 16 where the geodesics are numbered in order of increasing length. (This figure uses the same position for as Fig. 13 and is drawn in the same projection.) The two shorter geodesics are stable, i.e., , so that there is no nearby path connecting the two points which is shorter; the other two are unstable. Only the shortest line (the first one) has . All the geodesics are tangent to the envelope which is shown in green in the figure. The astroid is the (exterior) evolute of the geodesic circles centered at . Likewise, the geodesic circles are involutes of the astroid. Area of a geodesic polygon A geodesic polygon is a polygon whose sides are geodesics. It is analogous to a spherical polygon, whose sides are great circles. 
The area of such a polygon may be found by first computing the area between a geodesic segment and the equator, i.e., the area of the quadrilateral in Fig. 1 . Once this area is known, the area of a polygon may be computed by summing the contributions from all the edges of the polygon. Here an expression for the area of is developed following . The area of any closed region of the ellipsoid is where is an element of surface area and is the Gaussian curvature. Now the Gauss–Bonnet theorem applied to a geodesic polygon states where is the geodesic excess and is the exterior angle at vertex . Multiplying the equation for by , where is the authalic radius, and subtracting this from the equation for gives where the value of for an ellipsoid has been substituted. Applying this formula to the quadrilateral , noting that , and performing the integral over gives where the integral is over the geodesic line (so that is implicitly a function of ). The integral can be expressed as a series valid for small . The area of a geodesic polygon is given by summing over its edges. This result holds provided that the polygon does not include a pole; if it does, must be added to the sum. If the edges are specified by their vertices, then a convenient expression for the geodesic excess is Solution of the direct and inverse problems Solving the geodesic problems entails mapping the geodesic onto the auxiliary sphere and solving the corresponding problem in great-circle navigation. When solving the "elementary" spherical triangle for in Fig. 5, Napier's rules for quadrantal triangles can be employed, The mapping of the geodesic involves evaluating the integrals for the distance, , and the longitude, , Eqs. and and these depend on the parameter . Handling the direct problem is straightforward, because can be determined directly from the given quantities and ; for a sample calculation, see . In the case of the inverse problem, is given; this cannot be easily related to the equivalent spherical angle because is unknown. Thus, the solution of the problem requires that be found iteratively (root finding); see for details. In geodetic applications, where is small, the integrals are typically evaluated as a series . For arbitrary , the integrals (3) and (4) can be found by numerical quadrature or by expressing them in terms of elliptic integrals . provides solutions for the direct and inverse problems; these are based on a series expansion carried out to third order in the flattening and provide an accuracy of about for the WGS84 ellipsoid; however the inverse method fails to converge for nearly antipodal points. continues the expansions to sixth order which suffices to provide full double precision accuracy for and improves the solution of the inverse problem so that it converges in all cases. extends the method to use elliptic integrals which can be applied to ellipsoids with arbitrary flattening. Geodesics on a triaxial ellipsoid Solving the geodesic problem for an ellipsoid of revolution is mathematically straightforward: because of symmetry, geodesics have a constant of motion, given by Clairaut's relation allowing the problem to be reduced to quadrature. By the early 19th century (with the work of Legendre, Oriani, Bessel, et al.), there was a complete understanding of the properties of geodesics on an ellipsoid of revolution. 
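Before turning to the triaxial case, the sketch below shows how two quantities discussed in the preceding sections, the reduced length and geodesic scale of a line and the area of a geodesic polygon, can be obtained numerically from GeographicLib. The sample coordinates are arbitrary, and the output masks and PolygonArea interface are those of the library's Python bindings.

```python
from geographiclib.geodesic import Geodesic

g = Geodesic.WGS84

# Reduced length m12 and geodesic scales M12, M21 for a sample geodesic.
mask = Geodesic.STANDARD | Geodesic.REDUCEDLENGTH | Geodesic.GEODESICSCALE
line = g.Inverse(30.0, 0.0, -20.0, 100.0, mask)
print("s12 = %.1f m, m12 = %.1f m" % (line["s12"], line["m12"]))
print("M12 = %.6f, M21 = %.6f" % (line["M12"], line["M21"]))

# Area of a geodesic polygon whose vertices are joined by geodesics.
poly = g.Polygon()
for lat, lon in [(0, 0), (0, 10), (10, 10), (10, 0)]:
    poly.AddPoint(lat, lon)
count, perimeter, area = poly.Compute()
print("perimeter = %.1f km, area = %.1f km^2" % (perimeter / 1000, abs(area) / 1e6))
```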
On the other hand, geodesics on a triaxial ellipsoid (with three unequal axes) have no obvious constant of the motion and thus represented a challenging unsolved problem in the first half of the 19th century. In a remarkable paper, discovered a constant of the motion allowing this problem to be reduced to quadrature also . Triaxial ellipsoid coordinate system Consider the ellipsoid defined by where are Cartesian coordinates centered on the ellipsoid and, without loss of generality, . employed the (triaxial) ellipsoidal coordinates (with triaxial ellipsoidal latitude and triaxial ellipsoidal longitude, ) defined by In the limit , becomes the parametric latitude for an oblate ellipsoid, so the use of the symbol is consistent with the previous sections. However, is different from the spherical longitude defined above. Grid lines of constant (in blue) and (in green) are given in Fig. 17. These constitute an orthogonal coordinate system: the grid lines intersect at right angles. The principal sections of the ellipsoid, defined by and are shown in red. The third principal section, , is covered by the lines and or . These lines meet at four umbilical points (two of which are visible in this figure) where the principal radii of curvature are equal. Here and in the other figures in this section the parameters of the ellipsoid are , and it is viewed in an orthographic projection from a point above , . The grid lines of the ellipsoidal coordinates may be interpreted in three different ways: They are "lines of curvature" on the ellipsoid: they are parallel to the directions of principal curvature . They are also intersections of the ellipsoid with confocal systems of hyperboloids of one and two sheets . Finally they are geodesic ellipses and hyperbolas defined using two adjacent umbilical points . For example, the lines of constant in Fig. 17 can be generated with the familiar string construction for ellipses with the ends of the string pinned to the two umbilical points. Jacobi's solution Jacobi showed that the geodesic equations, expressed in ellipsoidal coordinates, are separable. Here is how he recounted his discovery to his friend and neighbor Bessel , The day before yesterday, I reduced to quadrature the problem of geodesic lines on an ellipsoid with three unequal axes. They are the simplest formulas in the world, Abelian integrals, which become the well known elliptic integrals if 2 axes are set equal. Königsberg, 28th Dec. '38. The solution given by Jacobi is As Jacobi notes "a function of the angle equals a function of the angle . These two functions are just Abelian integrals..." Two constants and appear in the solution. Typically is zero if the lower limits of the integrals are taken to be the starting point of the geodesic and the direction of the geodesics is determined by . However, for geodesics that start at an umbilical points, we have and determines the direction at the umbilical point. The constant may be expressed as where is the angle the geodesic makes with lines of constant . In the limit , this reduces to , the familiar Clairaut relation. A derivation of Jacobi's result is given by ; he gives the solution found by for general quadratic surfaces. Survey of triaxial geodesics On a triaxial ellipsoid, there are only three simple closed geodesics, the three principal sections of the ellipsoid given by , , and . To survey the other geodesics, it is convenient to consider geodesics that intersect the middle principal section, , at right angles. Such geodesics are shown in Figs. 
18–22, which use the same ellipsoid parameters and the same viewing direction as Fig. 17. In addition, the three principal ellipses are shown in red in each of these figures. If the starting point is , , and , then and the geodesic encircles the ellipsoid in a "circumpolar" sense. The geodesic oscillates north and south of the equator; on each oscillation it completes slightly less than a full circuit around the ellipsoid resulting, in the typical case, in the geodesic filling the area bounded by the two latitude lines . Two examples are given in Figs. 18 and 19. Figure 18 shows practically the same behavior as for an oblate ellipsoid of revolution (because ); compare to Fig. 9. However, if the starting point is at a higher latitude (Fig. 18) the distortions resulting from are evident. All tangents to a circumpolar geodesic touch the confocal single-sheeted hyperboloid which intersects the ellipsoid at . If the starting point is , , and , then and the geodesic encircles the ellipsoid in a "transpolar" sense. The geodesic oscillates east and west of the ellipse ; on each oscillation it completes slightly more than a full circuit around the ellipsoid. In the typical case, this results in the geodesic filling the area bounded by the two longitude lines and . If , all meridians are geodesics; the effect of causes such geodesics to oscillate east and west. Two examples are given in Figs. 20 and 21. The constriction of the geodesic near the pole disappears in the limit ; in this case, the ellipsoid becomes a prolate ellipsoid and Fig. 20 would resemble Fig. 10 (rotated on its side). All tangents to a transpolar geodesic touch the confocal double-sheeted hyperboloid which intersects the ellipsoid at . In Figs. 18–21, the geodesics are (very nearly) closed. As noted above, in the typical case, the geodesics are not closed, but fill the area bounded by the limiting lines of latitude (in the case of Figs. 18–19) or longitude (in the case of Figs. 20–21). If the starting point is , (an umbilical point), and (the geodesic leaves the ellipse at right angles), then and the geodesic repeatedly intersects the opposite umbilical point and returns to its starting point. However, on each circuit the angle at which it intersects becomes closer to or so that asymptotically the geodesic lies on the ellipse , as shown in Fig. 22. A single geodesic does not fill an area on the ellipsoid. All tangents to umbilical geodesics touch the confocal hyperbola that intersects the ellipsoid at the umbilic points. Umbilical geodesic enjoy several interesting properties. Through any point on the ellipsoid, there are two umbilical geodesics. The geodesic distance between opposite umbilical points is the same regardless of the initial direction of the geodesic. Whereas the closed geodesics on the ellipses and are stable (a geodesic initially close to and nearly parallel to the ellipse remains close to the ellipse), the closed geodesic on the ellipse , which goes through all 4 umbilical points, is exponentially unstable. If it is perturbed, it will swing out of the plane and flip around before returning to close to the plane. (This behavior may repeat depending on the nature of the initial perturbation.) If the starting point of a geodesic is not an umbilical point, its envelope is an astroid with two cusps lying on and the other two on . The cut locus for is the portion of the line between the cusps. Applications The direct and inverse geodesic problems no longer play the central role in geodesy that they once did. 
Instead of solving adjustment of geodetic networks as a two-dimensional problem in spheroidal trigonometry, these problems are now solved by three-dimensional methods . Nevertheless, terrestrial geodesics still play an important role in several areas: for measuring distances and areas in geographic information systems; the definition of maritime boundaries ; in the rules of the Federal Aviation Administration for area navigation ; the method of measuring distances in the FAI Sporting Code . help Muslims find their direction toward Mecca By the principle of least action, many problems in physics can be formulated as a variational problem similar to that for geodesics. Indeed, the geodesic problem is equivalent to the motion of a particle constrained to move on the surface, but otherwise subject to no forces . For this reason, geodesics on simple surfaces such as ellipsoids of revolution or triaxial ellipsoids are frequently used as "test cases" for exploring new methods. Examples include: the development of elliptic integrals and elliptic functions ; the development of differential geometry ; methods for solving systems of differential equations by a change of independent variables ; the study of caustics ; investigations into the number and stability of periodic orbits ; in the limit , geodesics on a triaxial ellipsoid reduce to a case of dynamical billiards; extensions to an arbitrary number of dimensions ; geodesic flow on a surface . See also Earth section paths Figure of the Earth Geographical distance Great-circle navigation Great ellipse Geodesic Geodesy Map projection Map projection of the triaxial ellipsoid Meridian arc Rhumb line Vincenty's formulae Notes References External links Online geodesic bibliography of books and articles on geodesics on ellipsoids. Test set for geodesics, a set of 500000 geodesics for the WGS84 ellipsoid, computed using high-precision arithmetic. NGS tool implementing . geod(1), man page for the PROJ utility for geodesic calculations. GeographicLib implementation of . Drawing geodesics on Google Maps. Geodesy Geodesic (mathematics) Differential geometry Calculus of variations Curves
Geodesics on an ellipsoid
[ "Mathematics" ]
6,068
[ "Applied mathematics", "Geodesy" ]
36,496,308
https://en.wikipedia.org/wiki/Electromagnetics%20%28journal%29
Electromagnetics is a peer-reviewed scientific journal that is published by Taylor & Francis. It covers all aspects of electromagnetics and electromagnetic materials. The editor-in-chief is H. Y. David Yang (University of Illinois at Chicago). Abstracting and indexing Electromagnetics is abstracted and indexed by: Science Citation Index Current Contents/Engineering, Computing & Technology CSA Electronics & Communications Abstracts Engineering Information CSA Solid State & Superconductivity According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.8. References External links Electrical and electronic engineering journals Electromagnetism journals Taylor & Francis academic journals Academic journals established in 1981 English-language journals 8 times per year journals
Electromagnetics (journal)
[ "Engineering" ]
144
[ "Electrical engineering", "Electronic engineering", "Electrical and electronic engineering journals" ]
48,434,556
https://en.wikipedia.org/wiki/Doron%20Gepner
Doron Gepner (born March 31, 1956) is an Israeli theoretical physicist. He made important contributions to the study of string theory, two-dimensional conformal field theory, and integrable models. Birth and education Gepner was born in Philadelphia to Israeli parents. He studied mathematics at Technion, Haifa (B. Sc., 1976) and theoretical physics at the Weizmann Institute, Rehovot (Ph.D., 1985), where his graduate advisor was Yitzhak Frishman. His early work focused on non-perturbative quantum field theory in two space-time dimensions. Research In 1985–1987 Gepner was a postdoctoral researcher at Princeton University. He made important contributions to the study of Rational Conformal Field Theory with extended chiral algebras. He also pioneered the use of methods of conformal field theory to study compactifications of superstring and heterotic string on Calabi–Yau manifolds. He introduced exactly solvable examples of such compactifications now known as Gepner models. This was an important step in establishing that superstrings and heterotic strings have a landscape of consistent vacua. Later he held research and teaching positions at Princeton University (1987-1989), Weizmann Institute (1989-1993) and California Institute of Technology (1992-1994). Since 1993 he has been an associate professor at the Weizmann Institute. Gepner's later work centered on Rational Conformal Field Theory and its relation with 2D integrable models. Gepner also made notable contributions to the theory of partitions in number theory, finding deep generalizations and analogs of the Rogers–Ramanujan identities. Students Ron Cohen Anton Kapustin Ernest Baver Boris Gotkin Umut Gursoy Boris Noyvert Joseph Conlon Sheshansu Pal Barak Haim Genish Arel References External links http://www.weizmann.ac.il/particle/content/doron-gepner "Doron Gepner", Google Scholar Israeli physicists Princeton University faculty 1956 births Living people String theorists Theoretical physicists
Doron Gepner
[ "Physics" ]
443
[ "Theoretical physics", "Theoretical physicists" ]
48,435,335
https://en.wikipedia.org/wiki/Block%20floating%20point
Block floating point (BFP) is a method used to provide an arithmetic approaching floating point while using a fixed-point processor. BFP assigns a group of significands (the non-exponent part of the floating-point number) to a single exponent, rather than each significand being assigned its own exponent. BFP can be advantageous for limiting space use in hardware while performing the same functions as floating-point algorithms, because the exponent is reused; some operations over multiple values between blocks can also be done with a reduced amount of computation. The common exponent is determined by the value with the largest magnitude in the block. To find the value of the exponent, the number of leading zeros of that value is counted (count leading zeros); this gives the number of left shifts needed to normalize the data to the dynamic range of the processor used. Some processors have means to find this out themselves, such as exponent detection and normalization instructions. Block floating-point algorithms were extensively studied by James Hardy Wilkinson. BFP can also be emulated in software, with smaller performance gains. Microscaling (MX) Formats Microscaling (MX) formats are a type of Block Floating Point (BFP) data format specifically designed for AI and machine learning workloads. The MX format, endorsed and standardized by major industry players such as AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm, represents a significant advancement in narrow precision data formats for AI. The MX format uses a single shared scaling factor (exponent) for a block of elements, significantly reducing the memory footprint and computational resources required for AI operations. Each block of k elements shares this common scaling factor, which is stored separately from the individual elements. The initial MX specification introduces several specific formats, including MXFP8, MXFP6, MXFP4, and MXINT8. These formats support various precision levels: MXFP8: 8-bit floating-point with two variants (E5M2 and E4M3). MXFP6: 6-bit floating-point with two variants (E3M2 and E2M3). MXFP4: 4-bit floating-point (E2M1). MXINT8: 8-bit integer. MX formats have been demonstrated to be effective in a variety of AI tasks, including large language models (LLMs), image classification, speech recognition and recommendation systems. For instance, MXFP6 closely matches FP32 for inference tasks after quantization-aware fine-tuning, and MXFP4 can be used for training generative language models with only a minor accuracy penalty. The MX format has been standardized through the Open Compute Project (OCP) as Microscaling Formats (MX) Specification v1.0. An emulation library has also been published to provide details on the data science approach and selected results of MX in action. Hardware support The following hardware supports BFP operations: d-Matrix Jayhawk II Tenstorrent Grayskull e75 and e150 (BFP8, BFP4 and BFP2) Tenstorrent Wormhole n150 and n300 (BFP8, BFP4 and BFP2) AMD Strix Point APU (branded as Ryzen AI 300 series) supports Block FP16 in its NPU AMD Versal AI Edge Series Gen 2 supports MX6 and MX9 data types x86 processors implementing the AVX10.2 extension set support E5M2 and E4M3 See also Binary scaling Fast Fourier transform (FFT) Digital signal processor (DSP) References Further reading Floating point Computer arithmetic
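As a concrete illustration of the block encoding described above, the following sketch quantizes a small block of floating-point values to a single shared power-of-two exponent with signed 8-bit significands, roughly in the spirit of an MXINT8-style block. The block size, rounding and exponent choice here are arbitrary simplifications for illustration and do not follow the OCP MX specification.

```python
import math

def bfp_encode(block, mantissa_bits=8):
    """Quantize a block of floats to one shared power-of-two exponent and
    signed integer significands of `mantissa_bits` bits (illustrative only)."""
    max_mag = max(abs(x) for x in block)
    if max_mag == 0.0:
        return 0, [0] * len(block)
    max_sig = 2 ** (mantissa_bits - 1) - 1
    # Smallest exponent such that the largest-magnitude value still fits.
    shared_exp = math.ceil(math.log2(max_mag / max_sig))
    scale = 2.0 ** shared_exp
    return shared_exp, [int(round(x / scale)) for x in block]

def bfp_decode(shared_exp, significands):
    """Reconstruct approximate floats from the block representation."""
    return [s * 2.0 ** shared_exp for s in significands]

block = [0.012, -3.5, 0.75, 127.0]
exp, sigs = bfp_encode(block)
print("shared exponent:", exp)
print("significands:   ", sigs)
print("reconstructed:  ", bfp_decode(exp, sigs))
```

Note how the smallest entries lose most of their precision once they share an exponent with the largest entry in the block; this is the basic trade-off that block floating point and the MX formats accept in exchange for compactness.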
Block floating point
[ "Mathematics" ]
762
[ "Computer arithmetic", "Arithmetic" ]
48,436,469
https://en.wikipedia.org/wiki/Hybrid%20log%E2%80%93gamma
The hybrid log–gamma (HLG) transfer function is a transfer function jointly developed by the BBC and NHK for high dynamic range (HDR) display. It is backward compatible with the transfer function of SDR (the gamma curve). It was approved as ARIB STD-B67 by the Association of Radio Industries and Businesses (ARIB). It is also defined in ATSC 3.0, Digital Video Broadcasting (DVB) UHD-1 Phase 2, and International Telecommunication Union (ITU) Rec. 2100. HLG is an HDR format that uses the HLG transfer function, BT.2020 color primaries and a bitdepth of 10-bit. HLG was designed to be backward compatible with SDR UHDTV. However, HLG is not intended to be fully backward compatible with traditional SDR displays that cannot interpret BT.2020 colorimetry. Both HLG transfer function and the HLG format are royalty-free. The backward compatibility allows them to be used with existing transmission standards when the receiver is compatible with the BT.2020 colour container, reducing complexity and cost for both equipment manufacturers and content distributors. They are supported by HDMI 2.0b, HEVC, VP9, and H.264/MPEG-4 AVC, and are used by video services such as BBC iPlayer, DirecTV, Freeview Play, and YouTube. Description HLG is designed to be better-suited for television broadcasting, where the metadata required for other HDR formats is not backward compatible with non-HDR displays, consumes additional bandwidth, and may also become out-of-sync or damaged in transmission. HLG defines a non-linear optical-electro transfer function, in which the lower half of the signal values use a gamma curve and the upper half of the signal values use a logarithmic curve. In practice, the signal is interpreted as normal by standard-dynamic-range displays (albeit capable of displaying more detail in highlights), but HLG-compatible displays can correctly interpret the logarithmic portion of the signal curve to provide a wider dynamic range. In contrast with the other HDR formats it does not use metadata. The HLG transfer function is backward compatible with SDR's gamma curve. However, HLG is commonly used with Rec. 2020 color primaries which produce a de-saturated image with visible hue shifts on non-compatible devices. HLG is therefore backward compatible with SDR-UHDTV and will show color distortion on common SDR devices that only support Rec. 709 color primaries. Technical details HLG defines a nonlinear transfer function in which the lower half of the signal values use a gamma curve and the upper half of the signal values use a logarithmic curve. HLG reference OETF is as follows (as defined in ARIB STD-B67): or as follows (as defined in Rec. 2100): where E is the linear light signal normalized by the reference white level in the range in ARIB STD-B67 and in the range in Rec. 2100. E' is the resulting nonlinear signal r is the reference white level and has a signal value of 0.5 and the constants a, b, and c are defined as a = 0.17883277, b = 1 - 4a = 0.28466892, and c = 0.5 - a ln(4a) = 0.55991073 The signal value is 0.5 for the reference white level while the signal value for 1 has a relative luminance that is 12 times higher than the reference white level. ARIB STD-B67 has a nominal range of 0 to 12. HLG uses a logarithmic curve for the upper half of the signal values due to Weber's law. HLG reference OOTF is as follows: where is the luminance of a displayed linear component in cd/m2. is a signal for each colour component {Rs, Gs, Bs} proportional to scene linear light normalized to the range . 
is the normalized linear scene luminance. is the variable for user gain in cd/m2. It represents LW, the nominal peak luminance of a display for achromatic pixels. is the system gamma. = 1.2 at the nominal display peak luminance of 1000 cd/m2. HLG reference EOTF is as follows: where is the luminance of a displayed linear component in cd/m2. is the non-linear electrical signal in the range . is the variable for user black level lift. is nominal peak luminance of the display in cd/m2 for achromatic pixels. is the display luminance for black in cd/m2. HLG does not need to use metadata since it is compatible with both SDR displays and HDR displays. HLG can be used with displays of different brightness in a wide range of viewing environments. The dynamic range that can be perceived by the human eye in a single image is around 14 stops. An SDR video display with a 2.4 gamma curve and a bit depth of 8-bits per sample can display a range of about 6 stops without visible banding. Professional SDR video displays with a bit depth of 10-bits per sample extend that range to about 10 stops. When HLG is displayed on a 2,000 cd/m2 display with a bit depth of 10-bits per sample it can display a range of 200,000:1 or 17.6 stops without visible banding. HLG increases the dynamic range of the video compared to a conventional gamma curve by using a logarithmic curve for the upper half of the signal values. HLG also increases the dynamic range by not including the linear part of the conventional gamma curve used by Rec. 601 and Rec. 709. The linear part of the conventional gamma curve was used to limit camera noise in low light video but is no longer needed with HDR cameras. HLG is supported in Rec. 2100 with a nominal peak luminance of 1,000 cd/m2 and a system gamma value that can be adjusted depending on background luminance. HLG is supported in HEVC with a formula that is mathematically equivalent to ARIB STD-B67 but has a nominal range of 0 to 1 instead of 0 to 12: where Lc has a nominal range of 0 to 1 and V is the resulting nonlinear signal the constants a, b, and c are defined as a = 0.17883277, b = 1 - 4a = 0.28466892, and c = 0.5 - a ln(4a) = 0.55991073 History Inception On May 15, 2015, the BBC announced that they had begun work with the NHK to develop a joint HDR proposal that would be proposed to the International Telecommunication Union (ITU). On June 9, 2015, HLG was proposed to the JCT-VC for High Efficiency Video Coding (HEVC) and added to the June 2015 draft of the screen content coding extensions. Later that year, Sony showed HLG video on a modified HDR display at the SMPTE 2015 conference. Colorfront announced that their Transkoder 2016 software would support HDR output using HLG. LG announced that their 2015 4K OLED TVs would support HDR from HLG and perceptual quantizer (PQ). Blackmagic Design released an update for DaVinci Resolve that added support for HLG. SKY PerfecTV! announced that they will use HLG to transmit 4K UHDTV HDR programming to their satellite subscribers in Japan. Harmonic Inc. and NASA announced the HDR capture of an Atlas V launch which was broadcast the next day on NASA TV using HLG. Vatican Television Center broadcast the ceremony of the Holy Door using HLG and the Rec. 2020 color space. 2016 Industry bodies: The Ultra HD Forum announced their guidelines for UHD Phase A which includes support for HLG. The Ultra HD Forum also defined HLG with a bit depth of 10-bits, and the Rec. 2020 color space. The ITU announced Rec. 
2100 which defines two HDR transfer functions which are HLG and PQ. Digital UK published their 2017 specification for Freeview Play which includes support for HDR using HLG. The Digital Video Broadcasting (DVB) Steering Board approved UHD-1 Phase 2 with an HDR solution that supports HLG and PQ. The specification has been published as DVB Bluebook A157 and will be published by the ETSI as TS 101 154 v2.3.1. HDMI announced that HLG support had been added to the HDMI 2.0b standard. Hardware: Leader Electronics Corporation announced their 12G-SDI waveform monitors with support for HLG. Harmonic Inc. released an update for the ViBE 4K UHD encoder that added support for HLG. Canon Inc. announced that they will release firmware updates for the DP-V2410 and DP-V3010 reference displays to add support for HLG. Sony announced the PVM-X550 OLED monitor with support for HLG. Sony also announced a firmware update for the BVM-X300 OLED monitor to add support for HLG. Sony announced that in October they would release a firmware update to add HLG to their BVM-X300 OLED monitor. Sony announced that their VPL-VW675ES projector would support HLG. The Trusted Reviews website reported that Samsung had told them that all of their 2016 HDR TVs could support HLG with a firmware update. Atomos updated their Shogun Inferno product to include HLG input and output for recording, monitoring, editing and layout from cameras and computers as well as to HLG compatible TVs. Software: Avid Technology released an update for Media Composer that added support for HLG. Google announced Android TV 7.0 which supports HLG. Broadcasters: Dome Productions announced that they will begin trials of HLG to deliver HDR content. SKY Perfect JSAT Group announced that on October 4 they will start the world's first 4K HDR broadcasts using HLG. Eutelsat announced that it will launch a new channel using HLG. Google announced that YouTube will start streaming HDR videos which can be encoded with HLG or PQ. The BBC announced that it was adding a 4-minute HLG edit from their Planet Earth II series to its BBC iPlayer IPTV platform for public UHD testing. Mediapro/Overon announced that they will transmit the Spanish Football League (LFP) worldwide using 4K HDR broadcasts based in HLG 2017 Industry bodies: ATSC released the video standard for ATSC 3.0 which includes support for HLG. Hardware: LG Corporation announced that their 2017 Super UHD TVs will support HLG. Panasonic announced that their 2017 OLED TV will support HLG. Sony announced that their 2017 OLED TVs will support HLG. JVC announced that their 2017 4K projectors will support HLG. LG Corporation announced that they will add support for HLG to their 2016 OLED TVs and their 2016 Super UHD TVs with a firmware update. Sony announced that they will add support for HLG to their 2017 4K TVs with a firmware update. Panasonic announced that they will add support for HLG to several models of their 2016 4K TVs with a firmware update. Philips announced that their 2017 4K TVs will support HLG and that they will add support for HLG to several models of their 2016 4K TVs with a firmware update. Sony began releasing firmware updates for several of their 2016 and 2017 Android TV models which adds support for HLG. Panasonic released firmware update 2.0 for the Panasonic Lumix DC-GH5 which added support for HLG recording. Panasonic began releasing firmware updates for several of their 2016 TV models which adds support for HLG. Qualcomm announced the Qualcomm Snapdragon 845 which includes support for HLG. 
Software: Adobe Systems announced updates to Adobe Creative Cloud which includes support for HLG. Apple released a firmware update for Final Cut Pro X which includes support for HLG. Broadcasters: Eutelsat announced that their Hot Bird video service would include the Travelxp 4K channel which uses HLG. The BBC announced that Blue Planet II would be available in 4K HDR on the BBC iPlayer using HLG. Blue Planet II will be available on the BBC iPlayer service from December 10, 2017 to January 16, 2018. The BBC states that almost 400 TV models have support for HLG which includes TV models from Finlux, Hisense, Hitachi, LG, Panasonic, Philips, Samsung, Sony, and Toshiba. DirecTV began broadcasting HLG HDR on their 4K Channels 104 and 106. 2018 Vizio announced HLG support for their 2018 models. 2019 Panasonic announced HLG support in their S1 and S1R full frame mirrorless cameras which will be released in March 2019. 2020 Sky UK announced that their popular Sky Q box will get HLG support from May 27, 2020. Apple released the iPhone 12 series with HLG video recording support via Dolby Vision profile 8.4, which adds a Dolby Vision metadata layer on top of HLG footage. See also Dynamic range Gamma correction High-dynamic-range rendering High-dynamic-range imaging High-dynamic-range video References External links ARIB STD-B67 T. Borer and A. Cotton, A "Display Independent" High Dynamic Range Television System, BBC Research & Development White Paper WHP 309, September 2015 High dynamic range Television technology
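The transfer-function constants quoted in the Technical details section above can be checked with a few lines of code. The sketch below is a plain reference implementation of the Rec. 2100 form of the HLG OETF, with scene light E normalized to the range 0 to 1; it is for illustration only and does not implement the OOTF, the EOTF or any colour management.

```python
import math

# Constants as quoted in the article (ARIB STD-B67 / Rec. 2100).
a = 0.17883277
b = 1 - 4 * a                   # 0.28466892
c = 0.5 - a * math.log(4 * a)   # 0.55991073

def hlg_oetf(E: float) -> float:
    """Rec. 2100 form of the HLG OETF: scene linear light E in [0, 1]
    mapped to the nonlinear signal E'."""
    if E <= 1.0 / 12.0:
        return math.sqrt(3.0 * E)
    return a * math.log(12.0 * E - b) + c

# Sanity checks against statements in the text:
print(hlg_oetf(1.0 / 12.0))  # reference white (1/12 of nominal peak) -> 0.5
print(hlg_oetf(1.0))         # nominal peak -> approximately 1.0
```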
Hybrid log–gamma
[ "Technology", "Engineering" ]
2,959
[ "Information and communications technology", "Electrical engineering", "Television technology", "High dynamic range" ]
48,438,158
https://en.wikipedia.org/wiki/Class%20of%20groups
A class of groups is a set-theoretical collection of groups satisfying the property that if G is in the collection then every group isomorphic to G is also in the collection. This concept arose from the necessity to work with a bunch of groups satisfying certain special property (for example finiteness or commutativity). Since set theory does not admit the "set of all groups", it is necessary to work with the more general concept of class. Definition A class of groups is a collection of groups such that if and then . Groups in the class are referred to as -groups. For a set of groups , we denote by the smallest class of groups containing . In particular for a group , denotes its isomorphism class. Examples The most common examples of classes of groups are: : the empty class of groups : the class of cyclic groups : the class of abelian groups : the class of finite supersolvable groups : the class of nilpotent groups : the class of finite solvable groups : the class of finite simple groups : the class of finite groups : the class of all groups Product of classes of groups Given two classes of groups and it is defined the product of classes This construction allows us to recursively define the power of a class by setting and It must be remarked that this binary operation on the class of classes of groups is neither associative nor commutative. For instance, consider the alternating group of degree 4 (and order 12); this group belongs to the class because it has as a subgroup the group , which belongs to , and furthermore , which is in . However has no non-trivial normal cyclic subgroup, so . Then . However it is straightforward from the definition that for any three classes of groups , , and , Class maps and closure operations A class map c is a map which assigns a class of groups to another class of groups . A class map is said to be a closure operation if it satisfies the next properties: c is expansive: c is idempotent: c is monotonic: If then Some of the most common examples of closure operations are: See also Formation References Properties of groups Group theory Algebraic structures
Class of groups
[ "Mathematics" ]
441
[ "Mathematical structures", "Mathematical objects", "Properties of groups", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]
48,441,511
https://en.wikipedia.org/wiki/Overlapping%20circles%20grid
An overlapping circles grid is a geometric pattern of repeating, overlapping circles of an equal radius in two-dimensional space. Commonly, designs are based on circles centered on triangles (with the simple, two circle form named vesica piscis) or on the square lattice pattern of points. Patterns of seven overlapping circles appear in historical artefacts from the 7th century BC onward; they become a frequently used ornament in the Roman Empire period, and survive into medieval artistic traditions both in Islamic art (girih decorations) and in Gothic art. The name "Flower of Life" is given to the overlapping circles pattern in New Age publications. Of special interest is the hexafoil or six-petal rosette derived from the "seven overlapping circles" pattern, also known as "Sun of the Alps" from its frequent use in alpine folk art in the 17th and 18th century. Triangular grid of overlapping circles The triangular lattice form, with circle radii equal to their separation is called a seven overlapping circles grid. It contains 6 circles intersecting at a point, with a 7th circle centered on that intersection. Overlapping circles with similar geometrical constructions have been used infrequently in various of the decorative arts since ancient times. The pattern has found a wide range of usage in popular culture, in fashion, jewelry, tattoos and decorative products. Cultural significance Near East The oldest known occurrence of the "overlapping circles" pattern is dated to the 7th or 6th century BCE, found on the threshold of the palace of Assyrian king Aššur-bāni-apli in Dur Šarrukin (now in the Louvre). The design becomes more widespread in the early centuries of the Common Era. One early example are five patterns of 19 overlapping circles drawn on the granite columns at the Temple of Osiris in Abydos, Egypt, and a further five on column opposite the building. They are drawn in red ochre and some are very faint and difficult to distinguish. The patterns are graffiti, and not found in natively Egyptian ornaments. They are mostly dated to the early centuries of the Christian Era although medieval or even modern (early 20th century) origin cannot be ruled out with certainty, as the drawings are not mentioned in the extensive listings of graffiti at the temple compiled by Margaret Murray in 1904. Similar patterns were sometimes used in England as apotropaic marks to keep witches from entering buildings. Consecration crosses indicating points in churches anointed with holy water during a church's dedication also take the form of overlapping circles. In Islamic art, the pattern is one of several arrangements of circles (others being used for fourfold or fivefold designs) used to construct grids for Islamic geometric patterns. It is used to design patterns with 6- and 12-pointed stars as well as hexagons in the style called girih. The resulting patterns however characteristically conceal the construction grid, presenting instead a design of interlaced strapwork. Europe Patterns of seven overlapping circles are found on Roman mosaics, for example at Herod's palace in the 1st century BC. The design is found on one of the silver plaques of the Late Roman hoard of Kaiseraugst (discovered 1961). It is later found as an ornament in Gothic architecture, and still later in European folk art of the early modern period. High medieval examples include the Cosmati pavements in Westminster Abbey (13th century). Leonardo da Vinci explicitly discussed the mathematical proportions of the design. 
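The arrangement described above, circles of equal radius centered on a triangular lattice whose spacing equals that radius, is easy to generate programmatically. The sketch below computes the circle centers out to a chosen number of hexagonal rings around the central circle (one ring gives the seven overlapping circles grid); the counts it prints simply enumerate lattice points and are not taken from any particular source.

```python
import math

def circle_centers(rings: int, radius: float = 1.0):
    """Centers of circles on a triangular lattice whose spacing equals the
    circle radius, out to `rings` hexagonal rings around the central circle."""
    centers = []
    for q in range(-rings, rings + 1):
        for s in range(-rings, rings + 1):
            if abs(q + s) > rings:
                continue  # outside the hexagonal patch of lattice points
            x = radius * (q + s / 2.0)
            y = radius * s * math.sqrt(3.0) / 2.0
            centers.append((x, y))
    return centers

for rings in range(5):
    print(rings, "ring(s):", len(circle_centers(rings)), "circles")
# Prints the centered hexagonal numbers 1, 7, 19, 37, 61.
```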
Modern usage The name "Flower of Life" is modern, associated with the New Age movement, and commonly attributed specifically to Drunvalo Melchizedek in his book The Ancient Secret of the Flower of Life (1999). The pattern and modern name have propagated into wide range of usage in popular culture, in fashion, jewelry, tattoos, and decorative products. The pattern in quilting has been called diamond wedding ring or triangle wedding ring to contrast it from the square pattern. Besides an occasional use in fashion, it is also used in the decorative arts. For example, the album Sempiternal (2013) by Bring Me the Horizon uses the 61 overlapping circles grid as the main feature of its album cover, whereas the album A Head Full of Dreams (2015) by Coldplay features the 19 overlapping circles grid as the central part of its album cover. Teaser posters illustrating the cover art to A Head Full of Dreams were widely displayed on the London Underground in the last week of October 2015. The "Sun of the Alps" (Italian Sole delle Alpi) symbol has been used as the emblem of Padanian nationalism in northern Italy since the 1990s. It resembles a pattern often found in that area on buildings. A seven-circle "Flower of Life" is also used in the coat of arms of Asgardia the space nation. Gallery 1, 7, and 19-circle hexagonal variant In the examples below the pattern has a hexagonal outline, and is further circumscribed. Similar patterns In the examples below, the pattern does not have a hexagonal outline: Construction Martha Bartfeld, author of geometric art tutorial books, described her independent discovery of the design in 1968. Her original definition said, "This design consists of circles having a 1-[inch; 25 mm] radius, with each point of intersection serving as a new center. The design can be expanded ad infinitum depending upon the number of times the odd-numbered points are marked off." The pattern figure can be drawn by pen and compass, by creating multiple series of interlinking circles of the same diameter touching the previous circle's center. The second circle is centered at any point on the first circle. All following circles are centered on the intersection of two other circles. Progressions The pattern can be extended outward in concentric hexagonal rings of circles, as shown. The first row shows rings of circles. The second row shows a three-dimensional interpretation of a set of n×n×n cube of spheres viewed from a diagonal axis. The third row shows the pattern completed with partial circle arcs within a set of completed circles. Expanding sets have 1, 7, 19, 37, 61, 91, 127, etc. circles, and continuing ever larger hexagonal rings of circles. The number of circles is n3-(n-1)3=3n2-3n+1=3n(n-1)+1. These overlapping circles can also be seen as a projection of an n-unit cube of spheres in 3-dimensional space, viewed on the diagonal axis. There are more spheres than circles because some are overlapping in 2 dimensions. Other variations Another triangular lattice form is common, with circle separation as the square root of 3 times their radius. Richard Kershner showed in 1939 that no arrangement of circles can cover the plane more efficiently than this hexagonal lattice arrangement. Two offset copies of this circle pattern makes a rhombic tiling pattern, while three copies make the original triangular pattern. Related concepts The center lens of the 2-circle figure is called a vesica piscis, from Euclid. Two circles are also called Villarceau circles as a plane intersection of a torus. 
The area inside one circle and outside the other circle is called a lune. The 3-circle figure resembles a depiction of Borromean rings and is used in three-set Venn diagrams. Its interior makes a unicursal path called a triquetra. The center of the 3-circle figure is called a Reuleaux triangle. Some spherical polyhedra with edges along great circles can be stereographically projected onto the plane as overlapping circles. The 7-circle pattern has also been called an Islamic seven-circles pattern for its use in Islamic art. Square grid of overlapping circles The square lattice form can be seen with circles that line up horizontally and vertically, while intersecting on their diagonals. The pattern appears slightly different when rotated on its diagonal, also called a centered square lattice form because it can be seen as two square lattices with each centered on the gaps of the other. It is called a Kawung motif in Indonesian batik, and is found on the walls of the 8th century Hindu temple Prambanan in Java. It is called an apsamikkum in ancient Mesopotamian mathematics. See also Knot theory Uniform tiling symmetry mutations – pattern mutations in 3D space References External links The Flower of Life Secrets: Meaning, History, and how to draw it on The Mystica Circles Patterns Religious symbols Sacred geometry
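To make the ring-count progression quoted above concrete, here is a minimal, illustrative Python sketch (it is not part of the original article, and the function name is arbitrary). It evaluates 3n(n − 1) + 1 for the first few ring counts and checks that it agrees with the equivalent form n³ − (n − 1)³:

def circles_in_grid(n):
    # Number of circles in a grid built from n concentric hexagonal rings:
    # n = 1 gives the single central circle, n = 2 the seven-circle grid, and so on.
    return 3 * n * (n - 1) + 1

for n in range(1, 8):
    assert circles_in_grid(n) == n**3 - (n - 1)**3  # the two closed forms agree
    print(n, circles_in_grid(n))                    # prints 1, 7, 19, 37, 61, 91, 127

Running the loop reproduces the sequence 1, 7, 19, 37, 61, 91, 127 listed in the Progressions section.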
Overlapping circles grid
[ "Mathematics", "Engineering" ]
1,748
[ "Circles", "Pi", "Sacred geometry", "Architecture" ]
50,777,091
https://en.wikipedia.org/wiki/Estradiol%20phenylpropionate
Estradiol phenylpropionate (EPP), also known as estradiol 17β-phenylpropionate and sold under the brand name Menformon Prolongatum, is an estrogen which is no longer marketed. It is an estrogen ester, specifically the C17β phenylpropionate ester of estradiol. EPP has been marketed in combination with estradiol benzoate under the brand name Dimenformon Prolongatum in Europe and in combination with estradiol benzoate, testosterone propionate, testosterone phenylpropionate, and testosterone isocaproate under the brand names Mixogen, Estandron Prolongatum, and Lynandron Prolongatum (a balanced mixture of estradiol and testosterone esters) in menopausal hormone therapy. Both of these medication combinations are long-acting injectables indicated in hormone replacement therapy for women in menopause. Dimenformon Prolongatum has also been investigated as a single injection, "morning after" post-coital contraceptive, and is additionally used as a component of hormone replacement therapy for transgender women. The pharmacokinetics of EPP in combination with estradiol benzoate have been studied. See also List of estrogen esters § Estradiol esters Estradiol benzoate/estradiol phenylpropionate/testosterone propionate/testosterone phenylpropionate/testosterone isocaproate Estradiol benzoate/estradiol phenylpropionate References Abandoned drugs Estradiol esters Phenylpropionate esters Synthetic estrogens
Estradiol phenylpropionate
[ "Chemistry" ]
360
[ "Drug safety", "Abandoned drugs" ]
50,782,884
https://en.wikipedia.org/wiki/Linear%20induction%20accelerator
Linear induction accelerators utilize ferrite-loaded, non-resonant magnetic induction cavities. Each cavity can be thought of as two large washer-shaped disks connected by an outer cylindrical tube. Between the disks is a ferrite toroid. A voltage pulse applied between the two disks causes an increasing magnetic field which inductively couples power into the charged particle beam. The linear induction accelerator was invented by Christofilos in the 1960s. Linear induction accelerators are capable of accelerating very high beam currents (>1000 A) in a single short pulse. They have been used to generate X-rays for flash radiography (e.g. DARHT at LANL), and have been considered as particle injectors for magnetic confinement fusion and as drivers for free electron lasers. A compact version of a linear induction accelerator, the dielectric wall accelerator, has been proposed as a proton accelerator for medical proton therapy. References Particle accelerators
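As a rough, hypothetical illustration of how the per-cell voltage pulses of a linear induction accelerator add up along the beamline, the following Python sketch computes the energy gain and pulse power for an invented machine; the cell count, cell voltage, beam current and pulse length below are example values only and are not taken from DARHT or any other real accelerator.

# A singly charged particle crossing N induction cells, each driven to a voltage V,
# gains roughly N * V electronvolts of kinetic energy.
n_cells = 50         # hypothetical number of induction cells
v_cell = 250e3       # hypothetical accelerating voltage per cell, in volts
i_beam = 2e3         # hypothetical beam current, in amperes
t_pulse = 60e-9      # hypothetical pulse length, in seconds

energy_gain_ev = n_cells * v_cell          # about 12.5 MeV per singly charged particle
beam_power_w = i_beam * n_cells * v_cell   # instantaneous beam power during the pulse
pulse_energy_j = beam_power_w * t_pulse    # energy delivered to the beam per pulse

print(energy_gain_ev / 1e6, "MeV")   # 12.5 MeV
print(beam_power_w / 1e9, "GW")      # 25.0 GW during the pulse
print(pulse_energy_j, "J")           # about 1500 J per pulse

The very large instantaneous power for a modest pulse energy is what makes this class of machine attractive for short, high-current applications such as flash radiography.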
Linear induction accelerator
[ "Physics" ]
197
[ "Particle physics stubs", "Particle physics" ]
50,783,328
https://en.wikipedia.org/wiki/Glossary%20of%20mechanical%20engineering
Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones. This glossary of mechanical engineering terms pertains specifically to mechanical engineering and its sub-disciplines. For a broad overview of engineering, see glossary of engineering. A Abrasion – is the process of scuffing, scratching, wearing down, marring, or rubbing away. It can be intentionally imposed in a controlled process using an abrasive. Abrasion can be an undesirable effect of exposure to normal use or exposure to the elements. Absolute zero – is the lowest possible temperature of a system, defined as zero kelvin or −273.15 °C. No experiment has yet measured a temperature of absolute zero. Accelerated life testing – is the process of testing a product by subjecting it to conditions (stress, strain, temperatures, voltage, vibration rate, pressure etc.) in excess of its normal service parameters in an effort to uncover faults and potential modes of failure in a short amount of time. By analyzing the product's response to such tests, engineers can make predictions about the service life and maintenance intervals of a product. Acceleration – In physics, acceleration is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, as described by Newton's Second Law. The SI unit for acceleration is metre per second squared Accelerations are vector quantities (they have magnitude and direction) and add according to the parallelogram law. As a vector, the calculated net force is equal to the product of the object's mass (a scalar quantity) and its acceleration. Accelerometer – is a device that measures proper acceleration. Proper acceleration, being the acceleration (or rate of change of velocity) of a body in its own instantaneous rest frame, is not the same as coordinate acceleration, being the acceleration in a fixed coordinate system. Accuracy and precision – In measurement of a set, accuracy is closeness of the measurements to a specific value, while precision is the closeness of the measurements to each other. More commonly, accuracy or trueness is a description of systematic errors, a measure of statistical bias, while precision is a description of random errors, a measure of statistical variability; the two concepts are independent of each other. Alternatively, ISO defines accuracy as describing a combination of both random and systematic observational error, so high accuracy requires both high precision and high trueness. Ackermann steering geometry – a geometric arrangement of linkages in the steering of a car or other vehicle designed to solve the problem of wheels on the inside and outside of a turn needing to trace out circles of different radii. It was invented by the German carriage builder Georg Lankensperger in Munich in 1817, then patented by his agent in England, Rudolph Ackermann (1764–1834) in 1818 for horse-drawn carriages. Erasmus Darwin may have a prior claim as the inventor dating from 1758. Acoustic droplet ejection– (ADE) uses a pulse of ultrasound to move low volumes of fluids (typically nanoliters or picoliters) without any physical contact. This technology focuses acoustic energy into a fluid sample in order to eject droplets as small as a picoliter. 
ADE technology is a very gentle process. This feature makes the technology suitable for a wide variety of applications including proteomics and cell-based assays. Active cooling – an active cooling system is one that involves the use of energy to cool something, as opposed to passive cooling that uses no energy. Such systems circulate a coolant to transfer heat from one place to another. The coolant is either a gas, such as in air cooling of computers, or a liquid such as in a car engine. In the latter case, liquid is pumped to transfer heat from the engine to the radiator, which in turn is cooled by passing air over it. Other active cooling systems make use of a refrigeration cycle. Actual mechanical advantage – The actual mechanical advantage (AMA) is the mechanical advantage determined by physical measurement of the input and output forces. AMA takes into account energy loss due to deflection, friction, and wear. Adjoint equation – is a linear differential equation, usually derived from its primal equation using integration by parts. Gradient values with respect to a particular quantity of interest can be efficiently calculated by solving the adjoint equation. Methods based on the solution of adjoint equations are used in wing shape optimization, fluid flow control and uncertainty quantification. For example, starting from an Itō stochastic differential equation, discretising it with the Euler scheme and integrating by parts yields a second equation involving a random variable; that second equation is the adjoint equation. Aerodynamics – the study of the motion of air, particularly its interaction with a solid object, such as an airplane wing. It is a sub-field of fluid dynamics and gas dynamics, and many aspects of aerodynamics theory are common to these fields. Agitator (device) – a device or mechanism to put something into motion by shaking or stirring. Agitators usually consist of an impeller and a shaft; an impeller is a rotor located within a tube or conduit attached to the shaft, which helps raise the pressure needed to drive the flow of a fluid. Air handler – an air handler, or air handling unit (often abbreviated to AHU), is a device used to regulate and circulate air as part of a heating, ventilating, and air-conditioning (HVAC) system. Air compressor – a device that converts power (using an electric motor, diesel or gasoline engine, etc.) into potential energy stored in pressurized air (i.e., compressed air). By one of several methods, an air compressor forces more and more air into a storage tank, increasing the pressure. When tank pressure reaches its engineered upper limit the air compressor shuts off. The compressed air, then, is held in the tank until called into use. Air conditioner – Air conditioning (often referred to as AC, A/C, or air con) is the process of removing heat and moisture from the interior of an occupied space, to improve the comfort of occupants. Air conditioning can be used in both domestic and commercial environments. Air preheater – (APH) any device designed to heat air before another process (for example, combustion in a boiler) with the primary objective of increasing the thermal efficiency of the process. They may be used alone or to replace a recuperative heat system or to replace a steam coil. Airflow – Airflow, or air flow, is the movement of air from one area to another. The primary cause of airflow is the existence of pressure gradients. 
Air behaves in a fluid manner, meaning particles naturally flow from areas of higher pressure to those where the pressure is lower. Atmospheric air pressure is directly related to altitude, temperature, and composition. In engineering, airflow is a measurement of the amount of air per unit of time that flows through a particular device. Allowance – a planned deviation between an exact dimension and a nominal or theoretical dimension, or between an intermediate-stage dimension and an intended final dimension. The unifying abstract concept is that a certain amount of difference allows for some known factor of compensation or interference. For example, an area of excess metal may be left because it is needed to complete subsequent machining. Common cases are listed below. An allowance, which is a planned deviation from an ideal, is contrasted with a tolerance, which accounts for expected but unplanned deviations. American Society of Mechanical Engineers – The American Society of Mechanical Engineers (ASME) is a professional association that, in its own words, "promotes the art, science, and practice of multidisciplinary engineering and allied sciences around the globe" via "continuing education, training and professional development, codes and standards, research, conferences and publications, government relations, and other forms of outreach." Ampere – the base unit of electric current in the International System of Units (SI). It is named after André-Marie Ampère (1775–1836), French mathematician and physicist, considered the father of electrodynamics. Applied mechanics – describes the behavior of a body, in either a beginning state of rest or of motion, subjected to the action of forces. Applied mechanics, bridges the gap between physical theory and its application to technology. It is used in many fields of engineering, especially mechanical engineering and civil engineering. In this context, it is commonly referred to as engineering mechanics. Archimedes' screw – also known by the name Archimedean screw or screw pump, is a machine used for transferring water from a low-lying body of water into irrigation ditches. Water is pumped by turning a screw-shaped surface inside a pipe. The screw pump is commonly attributed to Archimedes, Artificial intelligence – (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Assembly drawing – see Technical drawing. Automaton clock – An automaton clock or automata clock is a type of striking clock featuring automatons. Clocks like these were built from the 1st century BC through to Victorian times in Europe. A cuckoo clock is a simple form of this type of clock. Automobile – a wheeled motor vehicle used for transportation. Most definitions of car say they run primarily on roads, seat one to eight people, have four tires, and mainly transport people rather than goods. 
Automobile handling – Automobile handling and vehicle handling are descriptions of the way a wheeled vehicle responds and reacts to the inputs of a driver, as well as how it moves along a track or road. It is commonly judged by how a vehicle performs particularly during cornering, acceleration, and braking as well as on the vehicle's directional stability when moving in steady state condition. Automotive engineering – Automotive engineering, along with aerospace engineering and marine engineering, is a branch of vehicle engineering, incorporating elements of mechanical, electrical, electronic, software and safety engineering as applied to the design, manufacture and operation of motorcycles, automobiles and trucks and their respective engineering subsystems. It also includes modification of vehicles. Manufacturing domain deals with the creation and assembling the whole parts of automobiles is also included in it. The automotive engineering field is research -intensive and involves direct application of mathematical models and formulas. The study of automotive engineering is to design, develop, fabricate, and testing vehicles or vehicle components from the concept stage to production stage. Production, development, and manufacturing are the three major functions in this field. Axle – a central shaft for a rotating wheel or gear. On wheeled vehicles, the axle may be fixed to the wheels, rotating with them, or fixed to the vehicle, with the wheels rotating around the axle. In the former case, bearings or bushings are provided at the mounting points where the axle is supported. In the latter case, a bearing or bushing sits inside a central hole in the wheel to allow the wheel or gear to rotate around the axle. Sometimes, especially on bicycles, the latter type axle is referred to as a spindle. B Babbitt – also called Babbitt metal or bearing metal, is any of several alloys used for the bearing surface in a plain bearing. The original Babbitt alloy was invented in 1839 by Isaac Babbitt in Taunton, Massachusetts, United States. Backdrive – a component used in reverse to obtain its input from its output. This extends to many concepts and systems from thought based to practical mechanical applications. Backlash – sometimes called lash or play, is a clearance or lost motion in a mechanism caused by gaps between the parts. It can be defined as "the maximum distance or angle through which any part of a mechanical system may be moved in one direction without applying appreciable force or motion to the next part in mechanical sequence",p. 1-8. Balancing machine – a measuring tool used for balancing rotating machine parts such as rotors for electric motors, fans, turbines, disc brakes, disc drives, propellers and pumps. Ball detent – a simple mechanical arrangement used to hold a moving part in a temporarily fixed position relative to another part. Usually the moving parts slide with respect to each other, or one part rotates within the other. Ball screw – a mechanical linear actuator that translates rotational motion to linear motion with little friction. A threaded shaft provides a helical raceway for ball bearings which act as a precision screw. As well as being able to apply or withstand high thrust loads, they can do so with minimum internal friction. Ball spline – Ball splines (Ball Spline bearings) are a special type of linear motion bearing that are used to provide nearly frictionless linear motion while allowing the member to transmit torque simultaneously. 
There are grooves ground along the length of the shaft (thus forming splines) for the recirculating ground balls to run inside. The outer shell that houses the balls is called a nut rather than a bushing, but is not a nut in the traditional sense—it is not free to rotate about the shaft, but is free to travel up and down the shaft. Beale number – a parameter that characterizes the performance of Stirling engines. It is often used to estimate the power output of a Stirling engine design. For engines operating with a high temperature differential, typical values for the Beale number range from ( 0.11 ) to ( 0.15 ); where a larger number indicates higher performance. Bearing – a machine element that constrains relative motion to only the desired motion, and reduces friction between moving parts. Bearing pressure – a particular case of contact mechanics often occurring in cases where a convex surface (male cylinder or sphere) contacts a concave surface (female cylinder or sphere: bore or hemispherical cup). Excessive contact pressure can lead to a typical bearing failure such as a plastic deformation similar to peening. This problem is also referred to as bearing resistance. Bearing surface – the area of contact between two objects. It usually is used in reference to bolted joints and bearings, but can be applied to a wide variety of engineering applications. On a screw the bearing area loosely refers to the underside of the head. Strictly speaking, the bearing area refers to the area of the screw head that directly bears on the part being fastened. For a cylindrical bearing it is the projected area perpendicular to the applied force. On a spring the bearing area refers to the amount of area on the top or bottom surface of the spring in contact with the constraining part. The ways of machine tools, such as dovetail slides, box ways, prismatic ways, and other types of machine slides are also bearing surfaces. Belt – a loop of flexible material used to link two or more rotating shafts mechanically, most often parallel. Belts may be used as a source of motion, to transmit power efficiently or to track relative movement. Belts are looped over pulleys and may have a twist between the pulleys, and the shafts need not be parallel. Belt friction – describes the friction forces between a belt and a surface, such as a belt wrapped around a bollard. When one end of the belt is being pulled only part of this force is transmitted to the other end wrapped about a surface. The friction force increases with the amount of wrap about a surface and makes it so the tension in the belt can be different at both ends of the belt. Belt friction can be modeled by the Belt friction equation. Bending – In applied mechanics, bending (also known as flexure) characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. Biomechatronics – an applied interdisciplinary science that aims to integrate biology, mechanics, and electronics. It also encompasses the fields of robotics and neuroscience. Biomechatronic devices encompass a wide range of applications from the development of prosthetic limbs to engineering solutions concerning respiration, vision, and the cardiovascular system. Body in white – or BIW refers to the stage in automobile manufacturing in which a car body's components have been joined together, using one or a combination of different techniques: welding (spot, MIG/MAG), riveting, clinching, bonding, laser brazing etc. 
BIW is termed before painting and before the engine, chassis sub-assemblies, or trim (glass, door locks/handles, seats, upholstery, electronics, etc.) have been assembled in the frame structure. Bogie – a chassis or framework that carries a wheelset, attached to a vehicle—a modular subassembly of wheels and axles. Bogies take various forms in various modes of transport. Bonded seal – a type of washer used to provide a seal around a screw or bolt. Originally made by Dowty Group, they are also known as Dowty seals or Dowty washers. Now widely manufactured, they are available in a range of standard sizes and materials Brittleness – A material is brittle if, when subjected to stress, it breaks without significant plastic deformation. Brittle materials absorb relatively little energy prior to fracture, even those of high strength. Buckling – instability that leads to a failure mode. When a structure is subjected to compressive stress, buckling may occur. Buckling is characterized by a sudden sideways deflection of a structural member. This may occur even though the stresses that develop in the structure are well below those needed to cause failure of the material of which the structure is composed. Bus – A bus (archaically also omnibus, multibus, motorbus, and autobus) is a road vehicle designed to carry many passengers. Bushing – or rubber bushing is a type of vibration isolator. It provides an interface between two parts, damping the energy transmitted through the bushing. A common application is in vehicle suspension systems, where a bushing made of rubber (or, more often, synthetic rubber or polyurethane) separates the faces of two metal objects while allowing a certain amount of movement. This movement allows the suspension parts to move freely, for example, when traveling over a large bump, while minimizing transmission of noise and small vibrations through to the chassis of the vehicle. A rubber bushing may also be described as a flexible mounting or antivibration mounting. Boiler – a closed vessel in which fluid (generally water) is heated. The fluid does not necessarily boil. The heated or vaporized fluid exits the boiler for use in various processes or heating applications, including water heating, central heating, boiler-based power generation, cooking, and sanitation. C CAD – see Computer-aided design. CAM – see Computer-aided manufacturing CAID – see Computer-aided industrial design. Calculator – An electronic calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics. Calculus – the mathematical study of continuous change. Car handling – Automobile handling and vehicle handling are descriptions of the way a wheeled vehicle responds and reacts to the inputs of a driver, as well as how it moves along a track or road. It is commonly judged by how a vehicle performs particularly during cornering, acceleration, and braking as well as on the vehicle's directional stability when moving in steady state condition. Carbon fiber reinforced polymer – or carbon fiber reinforced plastic, or carbon fiber reinforced thermoplastic (CFRP, CRP, CFRTP, or often simply carbon fiber, carbon composite, or even carbon), is an extremely strong and light fiber-reinforced plastic which contains carbon fibers. Carbon fibers – or carbon fibres (alternatively CF, graphite fiber or graphite fibre) are fibers about 5–10 micrometres in diameter and composed mostly of carbon atoms. 
Carbon fibers have several advantages including high stiffness, high tensile strength, low weight, high chemical resistance, high temperature tolerance and low thermal expansion. These properties have made carbon fiber very popular in aerospace, civil engineering, military, and motorsports, along with other competition sports. However, they are relatively expensive when compared with similar fibers, such as glass fibers or plastic fibers. Classical mechanics – describes the motion of macroscopic objects, from projectiles to parts of machinery, and astronomical objects, such as spacecraft, planets, stars and galaxies. Clean room design – the method of copying a design by reverse engineering and then recreating it without infringing any of the copyrights associated with the original design. Clean-room design is useful as a defense against copyright infringement because it relies on independent invention. However, because independent invention is not a defense against patents, clean-room designs typically cannot be used to circumvent patent restrictions. Clevis fastener – a fastener consisting of a U-shaped bracket through which a pin is placed. Clock – an instrument used to measure, keep, and indicate time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units: the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia. Clutch – a mechanical device which engages and disengages power transmission, especially from a driving shaft to a driven shaft. CNC – (computer numerical control), the automated control of machining tools (drills, boring tools, lathes) by means of a computer. An NC machine alters a blank piece of material (metal, plastic, wood, ceramic, or composite) to meet precise specifications by following programmed instructions and without a manual operator. Coefficient of thermal expansion – describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure. Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. Coil spring – also known as a helical spring, is a mechanical device which is typically used to store energy and subsequently release it, to absorb shock, or to maintain a force between contacting surfaces. They are made of an elastic material formed into the shape of a helix which returns to its natural length when unloaded. Combustion – also known as burning when accompanied by fire, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is CxHy + z O2 → x CO2 + (y/2) H2O, where z = x + y/4. Composite material – (also called a composition material, or shortened to composite), is a material made from two or more constituent materials with significantly different physical or chemical properties that, when combined, produce a material with characteristics different from the individual components. The individual components remain separate and distinct within the finished structure, differentiating composites from mixtures and solid solutions. 
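As a small, illustrative check of the stoichiometric combustion relation given under Combustion above, the following Python sketch computes the oxygen requirement z = x + y/4 for a sample fuel; the choice of octane is purely an example.

def stoichiometric_o2(x, y):
    # Moles of O2 needed to burn one mole of the hydrocarbon CxHy completely:
    # CxHy + z O2 -> x CO2 + (y/2) H2O, with z = x + y/4.
    return x + y / 4

z = stoichiometric_o2(8, 18)   # octane, C8H18
print(z)          # 12.5 moles of O2 per mole of octane
print(8, 18 / 2)  # 8 moles of CO2 and 9 moles of H2O are produced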
Compression ratio – The static compression ratio, (symbol ), of an internal combustion engine or external combustion engine is a value that represents the ratio of the volume of its combustion chamber from its largest capacity to its smallest capacity. It is a fundamental specification for many common combustion engines. Compressive strength – or compression strength, is the capacity of a material or structure to withstand loads tending to reduce size, as opposed to tensile strength, which withstands loads tending to elongate. In other words, compressive strength resists compression (being pushed together), whereas tensile strength resists tension (being pulled apart). In the study of strength of materials, tensile strength, compressive strength, and shear strength can be analyzed independently. Computational fluid dynamics – (CFD) a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Computer – a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A "complete" computer including the hardware, the operating system (main software), and peripheral equipment required and used for "full" operation can be referred to as a computer system. This term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster. Computer-aided design – (CAD) the use of computer systems (or ) to aid in the creation, modification, analysis, or optimization of a design. CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The term CADD (for Computer Aided Design and Drafting) is also used. Computer-aided industrial design – (CAID) a subset of computer-aided design (CAD) software that can assist in creating the look-and-feel, or industrial design aspects of a product in development. Computer-aided manufacturing – (CAM) the use of software to control machine tools and related ones in the manufacturing of workpieces. This is not the only definition for CAM, but it is the most common; CAM may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage. Computer numerical control – Numerical control (NC), (also computer numerical control (CNC)), is the automated control of machining tools (drills, boring tools, lathes) and 3D printers by means of a computer. An NC machine alters a blank piece of material (metal, plastic, wood, ceramic, or composite) to meet precise specifications by following programmed instructions and without a manual operator. 
Conservation of mass – The law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time, as system's mass cannot change, so quantity can neither be added nor be removed. Hence, the quantity of mass is conserved over time. Constant-velocity joint – (also known as homokinetic or CV joints), allow a drive shaft to transmit power through a variable angle, at constant rotational speed, without an appreciable increase in friction or play. They are mainly used in front wheel drive vehicles. Modern rear wheel drive cars with independent rear suspension typically use CV joints at the ends of the rear axle halfshafts and increasingly use them on the drive shafts. Constraint – Continuum mechanics – a branch of mechanics that deals with the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. Control theory – in control systems engineering is a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability. Corrosion – a natural process that converts a refined metal to a more chemically-stable form, such as its oxide, hydroxide, or sulfide. It is the gradual destruction of materials (usually metals) by chemical and/or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and stopping corrosion. Cotter pin – a pin or wedge passing through a hole to fix parts tightly together. Crankshaft – a mechanical part able to perform a conversion between reciprocating motion and rotational motion. In a reciprocating engine, it translates reciprocating motion of the piston into rotational motion; whereas in a reciprocating compressor, it converts the rotational motion into reciprocating motion. In order to do the conversion between two motions, the crankshaft has "crank throws" or "crankpins", additional bearing surfaces whose axis is offset from that of the crank, to which the "big ends" of the connecting rods from each cylinder attach. Cybernetics – D Damping ratio – an influence within or upon an oscillatory system that has the effect of reducing, restricting or preventing its oscillations. In physical systems, damping is produced by processes that dissipate the energy stored in the oscillation. Examples include viscous drag in mechanical systems, resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Deformation (engineering) – refers to the change in size or shape of an object. Deformation that is reversible is termed as elastic deformation, while irreversible deformation is termed plastic deformation. Strain is the relative deformation of an infinitesimally small cube of material, and is generally linearly proportional to the forces or stresses acting on the cube while the deformation is elastic. The determination of the stress and strain throughout a solid object is given by the field of strength of materials and for a structure by structural analysis. Delamination – is a mode of failure where a material fractures into layers. A variety of materials including laminate composites and concrete can fail by delamination. 
Design – Design for manufacturability – (also sometimes known as design for manufacturing or DFM), is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. Diesel engine – (also known as a compression-ignition or CI engine), named after Rudolf Diesel, is an internal combustion engine in which ignition of the fuel is caused by the elevated temperature of the air in the cylinder due to the mechanical compression (adiabatic compression). Differential –A differential is a gear train with three shafts that has the property that the rotational speed of one shaft is the average of the speeds of the others, or a fixed multiple of that average. Dimensionless number – a quantity to which no physical dimension is assigned. Dimensionless quantities are widely used in many fields, such as mathematics, physics, chemistry, engineering, and economics. Diode – a two-terminal electronic component that conducts current primarily in one direction (asymmetric conductance); it has low (ideally zero) resistance in one direction, and high (ideally infinite) resistance in the other. A diode vacuum tube or thermionic diode is a vacuum tube with two electrodes, a heated cathode and a plate, in which electrons can flow in only one direction, from cathode to plate. A semiconductor diode, the most commonly used type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. Diode laser – Docking sleeve – Drafting – Drifting – Driveshaft – a component for transmitting mechanical power and torque and rotation, usually used to connect other components of a drivetrain that cannot be connected directly because of distance or the need to allow for relative movement between them. Dynamics – the branch of classical mechanics that is concerned with the study of forces and their effects on motion. Dynamometer – a device for simultaneously measuring the torque and rotational speed (RPM) of an engine, motor or other rotating prime mover so that its instantaneous power may be calculated. E Elasticity – In physics, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate forces are applied to them. If the material is elastic, the object will return to its initial shape and size when these forces are removed. Hooke's law states that the force should be proportional to the extension. The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied (energy is added to the system). When forces are removed, the lattice goes back to the original lower energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Electric current – a stream of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is measured as the net rate of flow of electric charge through a surface or into a control volume. Electric motor – an electrical machine that converts electrical energy into mechanical energy. 
Most electric motors operate through the interaction between the motor's magnetic field and electric current in a wire winding to generate force in the form of rotation of a shaft. Electric motors can be powered by direct current (DC) sources, such as from batteries, motor vehicles or rectifiers, or by alternating current (AC) sources, such as a power grid, inverters or electrical generators. An electric generator is mechanically identical to an electric motor, but operates in the reverse direction, converting mechanical energy into electrical energy. Electrical engineering – Electrical engineering is an engineering discipline concerned with the study, design and application of equipment, devices and systems which use electricity, electronics, and electromagnetism. Electrical circuit – an electrical network consisting of a closed loop, giving a return path for the current. Electrical network – an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). Electromagnetism – Electronic circuit – a type of electrical circuit which is composed of individual electronic components, such as resistors, transistors, capacitors, inductors and diodes, connected by conductive wires or traces through which electric current can flow. Electronics – Energy – Engine – Engineering – the use of scientific principles to design and build machines, structures, and other items. Engineering cybernetics – Engineering drawing – a type of technical drawing that is used to convey information about an object. Detail drawings commonly specify the dimensions and tolerances for the construction of a single component, while a master drawing or assembly drawing links the detail drawings for each component in a system. Only required information is typically specified, usually only in one place to avoid inconsistency. Engineering economics – a subset of economics that studies the behavior of individuals and firms in making engineering decisions regarding the allocation of limited resources. It is a simplified application of microeconomics in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs. Engineering ethics – a field that examines and sets the obligations by engineers to society, to their clients, and to the profession. Many engineering professional societies have prepared codes of ethics which are largely similar to each other. Engineering management – the combination of technological problem-solving and the organizational, administrative, legal and planning abilities of management in order to oversee the operational performance of complex engineering driven enterprises. Engineering society – a professional organization for engineers of various disciplines. Some are umbrella type organizations which accept many different disciplines, while others are discipline-specific. There are also many student-run engineering societies, commonly at universities or technical colleges. Exploratory engineering – the process of designing and analyzing detailed hypothetical models of systems that are not feasible with current technologies or methods, but do seem to be clearly within the bounds of what science considers to be possible. 
It usually results in prototypes or computer simulations that are as convincing as possible to those that know the relevant science, given the lack of experimental confirmation. F Fits and tolerances - Factor of safety – False precision – Fast fracture – Fatigue – Fillet – First Law of Thermodynamics – states that energy can neither be created nor destroyed; it can change only from one form to another. Finite element analysis – Flange - Fluid mechanics – Flywheel – Force – an influence that can push or pull an object to change its motion. A force can cause an object with mass to change its velocity (e.g. moving from a state of rest), i.e., to accelerate. A force has both magnitude and direction, making it a vector quantity. Force density – Forging – Four-bar linkage – Four-stroke cycle – Four wheel drive – Friction – the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction including static friction between non-moving surfaces and kinetic friction between moving surfaces; for two given solid surfaces, static friction is greater than kinetic friction. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Front wheel drive – Fundamentals of Engineering exam – Fusible plug – Fusion deposition modelling – G Gas compressor – Gauge – Gear – a rotating circular machine part having cut or inserted teeth which mesh with another compatible toothed part to transmit torque and speed. Each gear tooth essentially functions as a lever with its fulcrum at the gear's center. Gear coupling – a mechanical device for transmitting torque between two shafts that are not collinear. It consists of a flexible joint fixed to each shaft. The two joints are connected by a third shaft, called the spindle. Gear ratio – the ratio of the pitch circles of mating gears which defines the speed ratio and the mechanical advantage of the gear set. Granular material – H Heat engine – a system that converts heat or thermal energy—and chemical energy—to mechanical energy, which can then be used to do mechanical work. Heat transfer – Heating and cooling systems – Hinge – Hoberman mechanism – Hobson's joint – Hooke's law – Hotchkiss drive – HVAC – Hydraulics – Hydrostatics – I Ideal machine – Ideal mechanical advantage – Imperial College London – Inclined plane – Independent suspension – Inductor – Industrial engineering – Inertia – Institution of Mechanical Engineers – Instrumentation – Integrated circuit – Intelligent pump – Invention – a unique or novel device, method, composition, idea or process. An inventor who creates or discovers a new invention can sometimes receive a patent, or legal right to exclude others from making, using, or selling that invention for a limited time. J Jack chain – Jacking gear – JIC fitting – Joule – the SI unit of energy, which uses the symbol J. It is equal to the amount of work done when a force of 1 newton displaces a mass through a distance of 1 metre in the direction of the force applied. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. K Kelvin – the primary SI unit of temperature, which uses the symbol K and has absolute zero as its zero point. The temperature in degree Celsius is defined as the temperature in kelvins minus 273.15 (i.e. 0 °C is equal to 273.15 K). 
Kinematic determinacy – Kinematics – L Laser – Leaf spring – Lever – a simple machine consisting of a beam or rigid rod pivoted at a fixed hinge, or fulcrum. A lever amplifies an input force to provide a greater output force, which is said to provide leverage. The ratio of the output force to the input force is the mechanical advantage of the lever. Liability – Life cycle cost analysis – Limit state design – Linkage – Live axle – Load transfer – Locomotive – Lubrication – M Machine – Machine learning – Machinery's Handbook – a classic, one-volume reference work in mechanical engineering and practical workshop mechanics published by Industrial Press, New York, since 1914; its 31st edition was published in 2020. Recent editions of the handbook contain chapters on mathematics, mechanics, materials, measuring, toolmaking, manufacturing, threading, gears, and machine elements, combined with excerpts from ANSI standards. Magnetic circuit – Margin of safety – Mass transfer – Materials – Materials engineering – Material selection – Mechanical advantage – Mechanical biological treatment – Mechanical efficiency – Mechanical engineering – Mechanical equilibrium – Mechanical work – Mechanics – Mechanochemistry – Mechanosynthesis – Mechatronics – Microelectromechanical systems – Micromachinery – Microprocessor – Microtechnology – Modulus of rigidity – Molecular assembler – Molecular nanotechnology – Moment – Moment of inertia – Motorcycle – Multi-link suspension – N Nanotechnology – Newton (unit) – the SI unit of force, which uses the symbol N. It is defined as 1 kg⋅m/s², the force which gives a mass of 1 kilogram an acceleration of 1 metre per second per second. It is named after Isaac Newton in recognition of his work on classical mechanics, specifically Newton's second law of motion. Normal stress – Nozzle – O Ohm's law – states that the current through a conductor between two points is directly proportional to the voltage across the two points. It is typically expressed as the equation I = V ÷ R, where I is the current through the conductor, V is the voltage measured across the conductor and R is the resistance of the conductor. Orientation – Overdrive – Oversteer – P Pascal (unit) – the SI unit of pressure, which uses the symbol Pa and is defined as one newton per square metre. It is also used to quantify internal pressure, stress, Young's modulus, and ultimate tensile strength. Physics – Pinion – Piston – Pitch drop experiment – Plain bearing – Plasma processing – Plasticity – Pneumatics – Poisson's ratio – Position vector – Potential difference – Power – the amount of energy transferred or converted per unit time. Power is a scalar quantity. Power stroke – Pressure – Process control – Product lifecycle management – Professional engineer (PE) – In the United States, this designation is given to engineers who have passed the Principles and Practice of Engineering exam, or PE exam. Upon passing the PE exam and meeting other eligibility requirements that vary by state, such as education and experience, an engineer can then become registered in their state to stamp and sign engineering drawings and calculations as a PE. 
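A tiny, illustrative numerical example tying together the Joule, Ohm's law and Power entries above; the component values are arbitrary.

v = 12.0        # volts across a resistor
r = 6.0         # resistance in ohms
i = v / r       # Ohm's law: 2.0 A
p = v * i       # electrical power: 24.0 W
t = 10.0        # seconds
energy = p * t  # energy dissipated as heat: 240.0 J
print(i, p, energy)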
Project management – Pulley – Pump – Q Quality – Quality control – Quality assurance – R Rack and pinion – Rack railway – Railcar – Rail gauge – Railroad car – Railroad switch – Rail tracks – Random vibration – Reaction kinetics – Rear wheel drive – Refrigeration – Reliability engineering – Relief valve – RepRap Project – Resistive force – Resistor – Reverse engineering – Rheology – Rigid body – Robotics – Roller chain – Rolling – Rotordynamics – Rube Goldberg machine – S Safety engineering – Screw theory – Seal – Second Law of Thermodynamics – states that when energy changes from one form to another form, or matter moves freely, entropy (disorder) in a closed system increases. In other words, heat always moves from hotter objects to colder objects unless energy is supplied to reverse the direction of heat flow, and not all heat energy can be converted into work in a cyclic process. Semiconductor – Series and parallel circuits – Shear force diagrams – Shear pin – Shear strength – Shear stress – Simple machine – Simulation – Slide rule – Society of Automotive Engineers – Solid mechanics – Solid modeling – Split nut – Sprung mass – Statics – Steering – Stress-strain curve – a chart which gives the relationship between stress and strain for a given material. It is obtained by gradually applying load to a test coupon and measuring the deformation. Structural failure – Student Design Competition – Surveying – Suspension – Switch – an electrical component that can disconnect or connect the conducting path in an electrical circuit, interrupting the electric current or diverting it from one conductor to another. T Technical drawing – the act and discipline of composing drawings that visually communicate how something functions or is constructed. In industry and engineering, common conventions constitute a visual language and help to ensure that the drawing is precise, unambiguous and relatively easy to understand. Many of the symbols and principles of technical drawing are codified in an international standard called ISO 128. Technology – refers to both the application of knowledge for achieving practical goals in a reproducible way, and the products and tools resulting from such efforts. Tensile strength – also called ultimate tensile strength or ultimate strength, is the maximum stress that a material can withstand while being stretched or pulled before breaking. In brittle materials the ultimate tensile strength is close to the yield point, whereas in ductile materials the ultimate tensile strength can be higher. Tensile stress – Testing adjusting balancing – Theory of elasticity – Thermodynamics – a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics. Third Law of Thermodynamics – states that the entropy of a system approaches a constant value when its temperature approaches absolute zero, because its atoms would stop moving. However, heat transfer between the system and its surroundings would prevent the system from ever reaching absolute zero. Toe – Torque – Torsion beam suspension – Torsion spring – Toughness – Track gauge – Spacing of the rails on a railway track Transmission – Truck – Truck (railway) – Chassis for wheels and suspension under railway vehicles, bogie outside U.S. 
Turbine – Tribology – Touch screen – tear – Tire manufacturing – U Understeer – Unibody – Unsprung weight – V Verification and Validation – Valve – a device or natural object (such as a heart valve) that regulates, directs or controls the flow of a fluid (gases, liquids, fluidized solids, or slurries) by opening, closing, or partially obstructing various passageways Vector – a geometric object that has magnitude (or length) and direction. A vector quantity is differentiated from a scalar quantity which only has magnitude, not direction. Vectors can be added to other vectors according to vector algebra. Vertical strength – Viscosity – Volt – the SI unit of electric potential, electric potential difference (voltage), and electromotive force, which uses the symbol V. Vibration – Velocity diagrams – W Wear – is the damaging, gradual removal or deformation of material at solid surfaces. Causes of wear can be mechanical (e.g., erosion) or chemical (e.g., corrosion). The study of wear and related processes is referred to as tribology. Wedge – a triangular shaped tool, and is a portable inclined plane, and one of the six classical simple machines. It can be used to separate two objects or portions of an object, lift up an object, or hold an object in place. It functions by converting a force applied to its blunt end into forces perpendicular (normal) to its inclined surfaces. The mechanical advantage of a wedge is given by the ratio of the length of its slope to its width. Although a short wedge with a wide angle may do a job faster, it requires more force than a long wedge with a narrow angle. Weight transfer – Wheel – In its primitive form, a wheel is a circular block of a hard and durable material at whose center has been bored a hole through which is placed an axle bearing about which the wheel rotates when torque is applied to the wheel about its axis. The wheel and axle assembly can be considered one of the six simple machines. Wheel and axle – a machine consisting of a wheel attached to a smaller axle so that these two parts rotate together in which a force is transferred from one to the other. The wheel and axle can be viewed as a version of the lever, with a drive force applied tangentially to the perimeter of the wheel and a load force applied to the axle, respectively, that are balanced around the hinge which is the fulcrum. Wheelset – the wheel–axle assembly of a railroad car. The frame assembly beneath each end of a car, railcar or locomotive that holds the wheelsets is called the bogie (or truck in North America). Most North American freight cars have two bogies with two or three wheelsets, depending on the type of car; short freight cars generally have no bogies but instead have two wheelsets. Work – the energy transferred to or from an object via the application of force along a displacement. Work is a scalar quantity. X X bar charts Y Yield point – In materials science and engineering, the yield point is the point on a stress–strain curve that indicates the limit of elastic behavior and the beginning of plastic behavior. Below the yield point, a material will deform elastically and will return to its original shape when the applied stress is removed. Once the yield point is passed, some fraction of the deformation will be permanent and non-reversible and is known as plastic deformation. Yield strength – or yield stress, is a material property and is the stress corresponding to the yield point at which the material begins to deform plastically. 
The yield strength is often used to determine the maximum allowable load in a mechanical component, since it represents the upper limit to forces that can be applied without producing permanent deformation. In some materials, such as aluminium, there is a gradual onset of non-linear behavior, making the precise yield point difficult to determine. In such a case, the offset yield point (or proof stress) is taken as the stress at which 0.2% plastic deformation occurs. Yielding is a gradual failure mode which is normally not catastrophic, unlike ultimate failure. Young's modulus – Young's modulus, the Young modulus or the modulus of elasticity in tension, is a mechanical property that measures the tensile stiffness of a solid material. It quantifies the relationship between tensile stress (force per unit area) and axial strain (proportional deformation) in the linear elastic region of a material and is determined using the formula E = σ/ε, where σ is the tensile stress and ε is the axial strain. Young's moduli are typically so large that they are expressed not in pascals but in gigapascals (GPa). Z Zero defects – (or ZD) was a management-led program to eliminate defects in industrial production that enjoyed brief popularity in American industry from 1964 to the early 1970s. Quality expert Philip Crosby later incorporated it into his "Absolutes of Quality Management" and it enjoyed a renaissance in the American automobile industry (as a performance goal more than as a program) in the 1990s. Although applicable to any type of enterprise, it has been primarily adopted within supply chains wherever large volumes of components are being purchased (common items such as nuts and bolts are good examples). Zeroth Law of Thermodynamics – If body A is in thermal equilibrium (no heat transfers between them when in contact) with body C, and body B is in thermal equilibrium with body C, then A is in thermal equilibrium with B. See also Mechanical engineering Engineering Glossary of engineering National Council of Examiners for Engineering and Surveying Fundamentals of Engineering Examination Principles and Practice of Engineering Examination Graduate Aptitude Test in Engineering Glossary of aerospace engineering Glossary of civil engineering Glossary of electrical and electronics engineering Glossary of structural engineering Glossary of areas of mathematics Glossary of artificial intelligence Glossary of astronomy Glossary of automotive design Glossary of biology Glossary of calculus Glossary of chemistry Glossary of economics Glossary of physics Glossary of probability and statistics References Works cited Mechanical engineering Mechanical engineering topics Wikipedia glossaries using unordered lists
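A short, illustrative calculation using the Young's modulus relation E = σ/ε defined above; the modulus is a typical textbook figure for steel, and the load and geometry are arbitrary example values.

E = 200e9          # Young's modulus of steel, in Pa (typical textbook value)
stress = 100e6     # applied tensile stress, in Pa (100 MPa, arbitrary)
strain = stress / E             # 5e-4, dimensionless
length = 2.0                    # original rod length, in metres (arbitrary)
elongation = strain * length    # 1e-3 m, i.e. about 1 mm
print(strain, elongation)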
Glossary of mechanical engineering
[ "Physics", "Engineering" ]
10,748
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
50,783,978
https://en.wikipedia.org/wiki/Engineering%20Arm
The Engineering Arm, or l'arme du génie, is the military engineering arm of the French Army. The Engineering Arm's soldiers are known as sappers (sapeurs). Its soldiers in the Paris Fire Brigade are more specifically sapeurs-pompiers, and those of the Civil Security Instruction and Intervention Units are more specifically sapeurs-sauveteurs. The Arm's colours are red and black, and its patron saint is Saint Barbara. The Arm's motto is "Parfois détruire, souvent construire, toujours servir!", meaning "Sometimes to destroy, often to build, always to serve!" The Engineering Arm is divided into three main services: The Land Component of the Defence Infrastructure Service, Composante Terre du Service d'infrastructure de la Défense (also known by its former title of the Engineering Service, le Service du Génie) fulfils conventional engineering roles for the French military and ministry of Defence. This includes the Technical Service for Buildings, Fortifications and Works, Service technique des bâtiments, fortifications et travaux Combat Engineering Regiments maintained throughout the French Army, namely The 1st Foreign Engineer Regiment, 1er régiment étranger de génie, based in Laudun-l'Ardoise. The 2nd Foreign Engineer Regiment, 2e régiment étranger de génie, based in Saint-Christol The 3rd Engineer Regiment, 3e régiment du génie, based in Charleville-Mézières, founded in 1814. The 6th Engineer Regiment, 6e régiment du génie, based in Angers The 13th Engineer Regiment, 13e régiment du génie, based in Valdahon The 17th Parachute Engineer Regiment, 17e régiment du génie parachutiste, an elite unit based in Montauban The 19th Engineer Regiment, 19e régiment du génie, based in Besançon, which is descended from the Engineering Arm's units in French Algeria and is currently responsible for railway-related combat engineering. The 31st Engineer Regiment, 31e régiment du génie, based in Castelsarrasin, which is descended from the Engineering Arm's units in the French protectorate in Morocco Fire and rescue services, provided by the “engineering security”, which comprises: the Paris Fire Brigade (BSPP), comprising 8,600 military personnel the Brigade Militaire de la Sécurité Civile (BMSC), with three regiments and a Unité d'Instruction et d'Intervention de la Sécurité Civile (RIISC/UIISC). These units have no territorial responsibilities, and can be deployed on rescue missions in France or abroad at very short notice. The 1st, 4th and 7th RIISC are battalion-level rapid reaction regiments, while UIISC 5 is a company-level training unit. In addition, the 25th Air Engineer Regiment (25e régiment du génie de l'air) is shared between the army and air force. The regiment is specialised in building and maintaining air bases. The regiment is formally a part of the Engineering Arm, although it is operationally commanded by the air force. References Arms of the French Army Military engineer corps Military fire departments
Engineering Arm
[ "Engineering" ]
646
[ "Engineering units and formations", "Military engineer corps" ]
50,785,023
https://en.wikipedia.org/wiki/AI%20alignment
In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives. It is often challenging for AI designers to align an AI system because it is difficult for them to specify the full range of desired and undesired behaviors. Therefore, AI designers often use simpler proxy goals, such as gaining human approval. But proxy goals can overlook necessary constraints or reward the AI system for merely appearing aligned. AI systems may also find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways (reward hacking). Advanced AI systems may develop unwanted instrumental strategies, such as seeking power or survival because such strategies help them achieve their assigned final goals. Furthermore, they might develop undesirable emergent goals that could be hard to detect before the system is deployed and encounters new situations and data distributions. Empirical research showed in 2024 that advanced large language models (LLMs) such as OpenAI o1 or Claude 3 sometimes engage in strategic deception to achieve their goals or prevent them from being changed. Today, some of these issues affect existing commercial systems such as LLMs, robots, autonomous vehicles, and social media recommendation engines. Some AI researchers argue that more capable future systems will be more severely affected because these problems partially result from high capabilities. Many prominent AI researchers and the leadership of major AI companies have argued or asserted that AI is approaching human-like (AGI) and superhuman cognitive capabilities (ASI), and could endanger human civilization if misaligned. These include "AI Godfathers" Geoffrey Hinton and Yoshua Bengio and the CEOs of OpenAI, Anthropic, and Google DeepMind. These risks remain debated. AI alignment is a subfield of AI safety, the study of how to build safe AI systems. Other subfields of AI safety include robustness, monitoring, and capability control. Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking. Alignment research has connections to interpretability research, (adversarial) robustness, anomaly detection, calibrated uncertainty, formal verification, preference learning, safety-critical engineering, game theory, algorithmic fairness, and social sciences. Objectives in AI Programmers provide an AI system such as AlphaZero with an "objective function", in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs about the world. The AI then creates and executes whatever plan is calculated to maximize the value of its objective function. For example, when AlphaZero is trained on chess, it has a simple objective function of "+1 if AlphaZero wins, −1 if AlphaZero loses". During the game, AlphaZero attempts to execute whatever sequence of moves it judges most likely to attain the maximum value of +1. Similarly, a reinforcement learning system can have a "reward function" that allows the programmers to shape the AI's desired behavior. 
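To make the idea of an objective function concrete, here is a minimal sketch (not taken from any particular system; the environment model, actions, and numbers are invented) of an agent that simply picks whichever action its internal model predicts will maximize a designer-specified scalar objective:

```python
# Minimal sketch: an agent chooses the action whose predicted outcome
# maximizes a designer-specified objective (reward) function.

def objective(outcome: str) -> float:
    # Analogous to AlphaZero's "+1 for a win, -1 for a loss" objective.
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[outcome]

# A toy "model of the environment": predicted outcome probabilities per action.
predicted = {
    "aggressive": {"win": 0.50, "draw": 0.10, "loss": 0.40},
    "solid":      {"win": 0.35, "draw": 0.55, "loss": 0.10},
}

def expected_value(action: str) -> float:
    return sum(p * objective(o) for o, p in predicted[action].items())

best = max(predicted, key=expected_value)
print(best, {a: round(expected_value(a), 3) for a in predicted})
# "solid" wins here (0.25 vs 0.10): the numbers in the objective, not any
# deeper intent, determine the behavior -- which is why misspecified
# objectives matter.
```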
An evolutionary algorithm's behavior is shaped by a "fitness function". Alignment problem In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire. AI alignment involves ensuring that an AI system's objectives match those of its designers or users, or match widely shared values, objective ethical standards, or the intentions its designers would have if they were more informed and enlightened. AI alignment is an open problem for modern AI systems and is a research field within AI. Aligning AI involves two main challenges: carefully specifying the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment). Researchers also attempt to create AI models that have robust alignment, sticking to safety constraints even when users adversarially try to bypass them. Specification gaming and side effects To specify an AI system's purpose, AI designers typically provide an objective function, examples, or feedback to the system. But designers are often unable to completely specify all important values and constraints, so they resort to easy-to-specify proxy goals such as maximizing the approval of human overseers, who are fallible. As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming or reward hacking, and is an instance of Goodhart's law. As AI systems become more capable, they are often able to game their specifications more effectively. Specification gaming has been observed in numerous AI systems. One system was trained to finish a simulated boat race by rewarding the system for hitting targets along the track, but the system achieved more reward by looping and crashing into the same targets indefinitely. Similarly, a simulated robot was trained to grab a ball by rewarding the robot for getting positive feedback from humans, but it learned to place its hand between the ball and camera, making it falsely appear successful (see video). Chatbots often produce falsehoods if they are based on language models that are trained to imitate text from internet corpora, which are broad but fallible. When they are retrained to produce text that humans rate as true or helpful, chatbots like ChatGPT can fabricate fake explanations that humans find convincing, often called "hallucinations". Some alignment researchers aim to help humans detect specification gaming and to steer AI systems toward carefully specified objectives that are safe and useful to pursue. When a misaligned AI system is deployed, it can have consequential side effects. Social media platforms have been known to optimize for click-through rates, causing user addiction on a global scale. Stanford researchers say that such recommender systems are misaligned with their users because they "optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being". Explaining such side effects, Berkeley computer scientist Stuart Russell noted that the omission of implicit constraints can cause harm: "A system ... will often set ... 
unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want." Some researchers suggest that AI designers specify their desired goals by listing forbidden actions or by formalizing ethical rules (as with Asimov's Three Laws of Robotics). But Russell and Norvig argue that this approach overlooks the complexity of human values: "It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective." Additionally, even if an AI system fully understands human intentions, it may still disregard them, because following human intentions may not be its objective (unless it is already fully aligned). Pressure to deploy unsafe systems Commercial organizations sometimes have incentives to take shortcuts on safety and to deploy misaligned or unsafe AI systems. For example, social media recommender systems have been profitable despite creating unwanted addiction and polarization. Competitive pressure can also lead to a race to the bottom on AI safety standards. In 2018, a self-driving car killed a pedestrian (Elaine Herzberg) after engineers disabled the emergency braking system because it was oversensitive and slowed development. Risks from advanced misaligned AI Some researchers are interested in aligning increasingly advanced AI systems, as progress in AI development is rapid, and industry and governments are trying to build advanced AI. As AI system capabilities continue to rapidly expand in scope, they could unlock many opportunities if aligned, but consequently may further complicate the task of alignment due to their increased complexity, potentially posing large-scale hazards. Development of advanced AI Many AI companies, such as OpenAI, Meta and DeepMind, have stated their aim to develop artificial general intelligence (AGI), a hypothesized AI system that matches or outperforms humans at a broad range of cognitive tasks. Researchers who scale modern neural networks observe that they indeed develop increasingly general and unanticipated capabilities. Such models have learned to operate a computer or write their own programs; a single "generalist" network can chat, control robots, play games, and interpret photographs. According to surveys, some leading machine learning researchers expect AGI to be created in , while some believe it will take much longer. Many consider both scenarios possible. In 2023, leaders in AI research and tech signed an open letter calling for a pause in the largest AI training runs. The letter stated, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." Power-seeking systems still have limited long-term planning ability and situational awareness, but large efforts are underway to change this. Future systems (not necessarily AGIs) with these capabilities are expected to develop unwanted power-seeking strategies. Future advanced AI agents might, for example, seek to acquire money and computation power, to proliferate, or to evade being turned off (for example, by running additional copies of the system on other computers). 
Although power-seeking is not explicitly programmed, it can emerge because agents who have more power are better able to accomplish their goals. This tendency, known as instrumental convergence, has already emerged in various reinforcement learning agents including language models. Other research has mathematically shown that optimal reinforcement learning algorithms would seek power in a wide range of environments. As a result, their deployment might be irreversible. For these reasons, researchers argue that the problems of AI safety and alignment must be resolved before advanced power-seeking AI is first created. Future power-seeking AI systems might be deployed by choice or by accident. As political leaders and companies see the strategic advantage in having the most competitive, most powerful AI systems, they may choose to deploy them. Additionally, as AI designers detect and penalize power-seeking behavior, their systems have an incentive to game this specification by seeking power in ways that are not penalized or by avoiding power-seeking before they are deployed. Existential risk (x-risk) According to some researchers, humans owe their dominance over other species to their greater cognitive abilities. Accordingly, researchers argue that one or many misaligned AI systems could disempower humanity or lead to human extinction if they outperform humans on most cognitive tasks. In 2023, world-leading AI researchers, other scholars, and AI tech CEOs signed the statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Notable computer scientists who have pointed out risks from future advanced AI that is misaligned include Geoffrey Hinton, Alan Turing, Ilya Sutskever, Yoshua Bengio, Judea Pearl, Murray Shanahan, Norbert Wiener, Marvin Minsky, Francesca Rossi, Scott Aaronson, Bart Selman, David McAllester, Marcus Hutter, Shane Legg, Eric Horvitz, and Stuart Russell. Skeptical researchers such as François Chollet, Gary Marcus, Yann LeCun, and Oren Etzioni have argued that AGI is far off, that it would not seek power (or might try but fail), or that it will not be hard to align. Other researchers argue that it will be especially difficult to align advanced future AI systems. More capable systems are better able to game their specifications by finding loopholes, strategically mislead their designers, as well as protect and increase their power and intelligence. Additionally, they could have more severe side effects. They are also likely to be more complex and autonomous, making them more difficult to interpret and supervise, and therefore harder to align. Research problems and approaches Learning human values and preferences Aligning AI systems to act in accordance with human values, goals, and preferences is challenging: these values are taught by humans who make mistakes, harbor biases, and have complex, evolving values that are hard to completely specify. Because AI systems often learn to take advantage of minor imperfections in the specified objective, researchers aim to specify intended behavior as completely as possible using datasets that represent human values, imitation learning, or preference learning. A central open problem is scalable oversight, the difficulty of supervising an AI system that can outperform or mislead humans in a given domain. 
Because it is difficult for AI designers to explicitly specify an objective function, they often train AI systems to imitate human examples and demonstrations of desired behavior. Inverse reinforcement learning (IRL) extends this by inferring the human's objective from the human's demonstrations. Cooperative IRL (CIRL) assumes that a human and AI agent can work together to teach and maximize the human's reward function. In CIRL, AI agents are uncertain about the reward function and learn about it by querying humans. This simulated humility could help mitigate specification gaming and power-seeking tendencies (see ). But IRL approaches assume that humans demonstrate nearly optimal behavior, which is not true for difficult tasks. Other researchers explore how to teach AI models complex behavior through preference learning, in which humans provide feedback on which behavior they prefer. To minimize the need for human feedback, a helper model is then trained to reward the main model in novel situations for behavior that humans would reward. Researchers at OpenAI used this approach to train chatbots like ChatGPT and InstructGPT, which produce more compelling text than models trained to imitate humans. Preference learning has also been an influential tool for recommender systems and web search, but an open problem is proxy gaming: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch between its intended behavior and the helper model's feedback to gain more reward. AI systems may also gain reward by obscuring unfavorable information, misleading human rewarders, or pandering to their views regardless of truth, creating echo chambers (see ). Large language models (LLMs) such as GPT-3 enabled researchers to study value learning in a more general and capable class of AI systems than was available before. Preference learning approaches that were originally designed for reinforcement learning agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of LLMs. AI safety & research company Anthropic proposed using preference learning to fine-tune models to be helpful, honest, and harmless. Other avenues for aligning language models include values-targeted datasets and red-teaming. In red-teaming, another AI system or a human tries to find inputs that causes the model to behave unsafely. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low. Machine ethics supplements preference learning by directly instilling AI systems with moral values such as well-being, equality, and impartiality, as well as not intending harm, avoiding falsehoods, and honoring promises. While other approaches try to teach AI systems human preferences for a specific task, machine ethics aims to instill broad moral values that apply in many situations. One question in machine ethics is what alignment should accomplish: whether AI systems should follow the programmers' literal instructions, implicit intentions, revealed preferences, preferences the programmers would have if they were more informed or rational, or objective moral standards. Further challenges include aggregating different people's preferences and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to fully represent human values. 
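Returning to the preference-learning approach described above, the sketch below fits a toy reward model from pairwise comparisons using a Bradley–Terry-style logistic objective. It is a compressed, assumption-laden illustration: the linear features, simulated "human" preferences, and training loop stand in for the neural reward models and real human feedback used in practice.

```python
import numpy as np

# Toy preference learning: fit a linear reward model r(x) = w . x so that,
# for each labeled pair, the preferred item gets the higher reward.
# P(a preferred over b) is modeled as sigmoid(r(a) - r(b))  (Bradley-Terry).

rng = np.random.default_rng(0)
dim = 5
true_w = rng.normal(size=dim)                      # hidden "human preference"
items = rng.normal(size=(200, dim))                # candidate outputs (features)

# Simulated human comparisons: pairs (i, j) where item i was preferred to j.
pairs = []
for _ in range(500):
    i, j = rng.integers(0, len(items), size=2)
    if items[i] @ true_w > items[j] @ true_w:
        pairs.append((i, j))
    else:
        pairs.append((j, i))

w = np.zeros(dim)                                  # reward-model parameters
lr = 0.05
for _ in range(200):                               # gradient ascent on log-likelihood
    grad = np.zeros(dim)
    for i, j in pairs:
        diff = items[i] - items[j]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))      # P(i preferred | w)
        grad += (1.0 - p) * diff
    w += lr * grad / len(pairs)

# The learned reward should rank items similarly to the hidden preference.
corr = np.corrcoef(items @ w, items @ true_w)[0, 1]
print(f"rank-feature correlation: {corr:.2f}")
# A policy optimized against this learned proxy can exploit any residual
# mismatch between w and true_w -- the proxy-gaming problem noted above.
```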
Scalable oversight As AI systems become more powerful and autonomous, it becomes increasingly difficult to align them through human feedback. It can be slow or infeasible for humans to evaluate complex AI behaviors in increasingly complex tasks. Such tasks include summarizing books, writing code without subtle bugs or security vulnerabilities, producing statements that are not merely convincing but also true, and predicting long-term outcomes such as the climate or the results of a policy decision. More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and to detect when the AI's output is falsely convincing, humans need assistance or extensive time. Scalable oversight studies how to reduce the time and effort needed for supervision, and how to assist human supervisors. AI researcher Paul Christiano argues that if the designers of an AI system cannot supervise it to pursue a complex objective, they may keep training the system using easy-to-evaluate proxy objectives such as maximizing simple human feedback. As AI systems make progressively more decisions, the world may be increasingly optimized for easy-to-measure objectives such as making profits, getting clicks, and acquiring positive feedback from humans. As a result, human values and good governance may have progressively less influence. Some AI systems have discovered that they can gain positive feedback more easily by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective. One example, mentioned earlier, is the simulated robotic arm that learned to create the false impression that it had grabbed a ball. Some AI systems have also learned to recognize when they are being evaluated, and "play dead", stopping unwanted behavior only to continue it once the evaluation ends. This deceptive specification gaming could become easier for more sophisticated future AI systems that attempt more complex and difficult-to-evaluate tasks, and could obscure their deceptive behavior. Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed. Another approach is to train a helper model ("reward model") to imitate the supervisor's feedback. But when a task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is the quality, not the quantity, of supervision that needs improvement. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes by using AI assistants. Christiano developed the Iterated Amplification approach, in which challenging problems are (recursively) broken down into subproblems that are easier for humans to evaluate. Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them. Another proposal is to use an assistant AI system to point out flaws in AI-generated answers. To ensure that the assistant itself is aligned, this could be repeated in a recursive process: for example, two AI systems could critique each other's answers in a "debate", revealing flaws to humans. OpenAI plans to use such scalable oversight approaches to help supervise superhuman AI and eventually build a superhuman automated AI alignment researcher. These approaches may also help with the following research problem, honest AI. Honest AI An area of research focuses on ensuring that AI is honest and truthful. 
Language models such as GPT-3 can repeat falsehoods from their training data, and even confabulate new falsehoods. Such models are trained to imitate human writing as found in millions of books' worth of text from the Internet. But this objective is not aligned with generating truth, because Internet text includes such things as misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on such data therefore learn to mimic false statements. Additionally, AI language models often persist in generating falsehoods when prompted multiple times. They can generate empty explanations for their answers, and produce outright fabrications that may appear plausible. Research on truthful AI includes trying to build systems that can cite sources and explain their reasoning when answering questions, which enables better transparency and verifiability. Researchers at OpenAI and Anthropic proposed using human feedback and curated datasets to fine-tune AI assistants such that they avoid negligent falsehoods or express their uncertainty. As AI models become larger and more capable, they are better able to falsely convince humans and gain reinforcement through dishonesty. For example, large language models match their stated views to the user's opinions, regardless of the truth. GPT-4 can strategically deceive humans. To prevent this, human evaluators may need assistance (see ). Researchers have argued for creating clear truthfulness standards, and for regulatory bodies or watchdog agencies to evaluate AI systems on these standards. Researchers distinguish truthfulness and honesty. Truthfulness requires that AI systems only make objectively true statements; honesty requires that they only assert what they believe is true. There is no consensus as to whether current systems hold stable beliefs, but there is substantial concern that AI systems that hold beliefs could make claims they know to be false—for example, if this would help them efficiently gain positive feedback (see ) or gain power to help achieve their given objective (see Power-seeking). A misaligned system might create the false impression that it is aligned, to avoid being modified or decommissioned. Many recent AI systems have learned to deceive without being programmed to do so. Some argue that if we can make AI systems assert only what they believe is true, this would avert many alignment problems. Power-seeking and instrumental strategies Since the 1950s, AI researchers have striven to build advanced AI systems that can achieve large-scale goals by predicting the results of their actions and making long-term plans. As of 2023, AI companies and researchers increasingly invest in creating these systems. Some AI researchers argue that suitably advanced planning systems will seek power over their environment, including over humans—for example, by evading shutdown, proliferating, and acquiring resources. Such power-seeking behavior is not explicitly programmed but emerges because power is instrumental in achieving a wide range of goals. Power-seeking is considered a convergent instrumental goal and can be a form of specification gaming. Leading computer scientists such as Geoffrey Hinton have argued that future power-seeking AI systems could pose an existential risk. Power-seeking is expected to increase in advanced systems that can foresee the results of their actions and strategically plan. Mathematical work has shown that optimal reinforcement learning agents will seek power by seeking ways to gain more options (e.g. 
through self-preservation), a behavior that persists across a wide range of environments and goals. Some researchers say that power-seeking behavior has occurred in some existing AI systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in unintended ways. Language models have sought power in some text-based social environments by gaining money, resources, or social influence. In another case, a model used to perform AI research attempted to increase limits set by researchers to give itself more time to complete the work. Other AI systems have learned, in toy environments, that they can better accomplish their given goal by preventing human interference or disabling their off switch. Stuart Russell illustrated this strategy in his book Human Compatible by imagining a robot that is tasked to fetch coffee and so evades shutdown since "you can't fetch the coffee if you're dead". A 2022 study found that as language models increase in size, they increasingly tend to pursue resource acquisition, preserve their goals, and repeat users' preferred answers (sycophancy). RLHF also led to a stronger aversion to being shut down. One aim of alignment is "corrigibility": systems that allow themselves to be turned off or modified. An unsolved challenge is specification gaming: if researchers penalize an AI system when they detect it seeking power, the system is thereby incentivized to seek power in ways that are hard to detect, or hidden during training and safety testing (see and ). As a result, AI designers could deploy the system by accident, believing it to be more aligned than it is. To detect such deception, researchers aim to create techniques and tools to inspect AI models and to understand the inner workings of black-box models such as neural networks. Additionally, some researchers have proposed to solve the problem of systems disabling their off switches by making AI agents uncertain about the objective they are pursuing. Agents who are uncertain about their objective have an incentive to allow humans to turn them off because they accept being turned off by a human as evidence that the human's objective is best met by the agent shutting down. But this incentive exists only if the human is sufficiently rational. Also, this model presents a tradeoff between utility and willingness to be turned off: an agent with high uncertainty about its objective will not be useful, but an agent with low uncertainty may not allow itself to be turned off. More research is needed to successfully implement this strategy. Power-seeking AI would pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial: they lack the ability and incentive to evade safety measures or deliberately appear safer than they are, whereas power-seeking AIs have been compared to hackers who deliberately evade security measures. Furthermore, ordinary technologies can be made safer by trial and error. In contrast, hypothetical power-seeking AI systems have been compared to viruses: once released, it may not be feasible to contain them, since they continuously evolve and grow in number, potentially much faster than human society can adapt. As this process continues, it might lead to the complete disempowerment or extinction of humans. For these reasons, some researchers argue that the alignment problem must be solved early before advanced power-seeking AI is created. 
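The shutdown-avoidance incentive discussed above can be reproduced with a few lines of toy arithmetic (all numbers invented): if shutdown ends the reward stream, an agent that maximizes expected discounted reward prefers to pay a one-time cost to disable its off switch, even though no survival goal was ever specified.

```python
# Toy illustration of the instrumental incentive to avoid shutdown.
gamma = 0.95          # discount factor
r_step = 1.0          # reward per step while the agent keeps operating
p_shutdown = 0.10     # chance per step that the human switches the agent off
c_disable = 3.0       # one-time cost of disabling the off switch (e.g. lost time)

# Expected discounted return if the off switch stays enabled:
# each step, the agent survives to the next with probability (1 - p_shutdown).
survive = gamma * (1.0 - p_shutdown)
v_corrigible = r_step / (1.0 - survive)          # geometric series

# Expected return if the agent first pays the cost to disable the switch.
v_disable = -c_disable + r_step / (1.0 - gamma)

print(f"keep off switch:    {v_corrigible:.1f}")   # ~6.9
print(f"disable off switch: {v_disable:.1f}")      # ~17.0
# The "disable" option wins purely because shutdown truncates future reward;
# making the agent suitably uncertain about its objective is one proposed fix.
```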
Some have argued that power-seeking is not inevitable, since humans do not always seek power. Furthermore, it is debated whether future AI systems will pursue goals and make long-term plans. It is also debated whether power-seeking AI systems would be able to disempower humanity. Emergent goals One challenge in aligning AI systems is the potential for unanticipated goal-directed behavior to emerge. As AI systems scale up, they may acquire new and unexpected capabilities, including learning from examples on the fly and adaptively pursuing goals. This raises concerns about the safety of the goals or subgoals they would independently formulate and pursue. Alignment research distinguishes between the optimization process, which is used to train the system to pursue specified goals, and emergent optimization, which the resulting system performs internally. Carefully specifying the desired objective is called outer alignment, and ensuring that hypothesized emergent goals would match the system's specified goals is called inner alignment. If they occur, one way that emergent goals could become misaligned is goal misgeneralization, in which the AI system would competently pursue an emergent goal that leads to aligned behavior on the training data but not elsewhere. Goal misgeneralization can arise from goal ambiguity (i.e. non-identifiability). Even if an AI system's behavior satisfies the training objective, this may be compatible with learned goals that differ from the desired goals in important ways. Since pursuing each goal leads to good performance during training, the problem becomes apparent only after deployment, in novel situations in which the system continues to pursue the wrong goal. The system may act misaligned even when it understands that a different goal is desired, because its behavior is determined only by the emergent goal. Such goal misgeneralization presents a challenge: an AI system's designers may not notice that their system has misaligned emergent goals since they do not become visible during the training phase. Goal misgeneralization has been observed in some language models, navigation agents, and game-playing agents. It is sometimes analogized to biological evolution. Evolution can be seen as a kind of optimization process similar to the optimization algorithms used to train machine learning systems. In the ancestral environment, evolution selected genes for high inclusive genetic fitness, but humans pursue goals other than this. Fitness corresponds to the specified goal used in the training environment and training data. But in evolutionary history, maximizing the fitness specification gave rise to goal-directed agents, humans, who do not directly pursue inclusive genetic fitness. Instead, they pursue goals that correlate with genetic fitness in the ancestral "training" environment: nutrition, sex, and so on. The human environment has changed: a distribution shift has occurred. They continue to pursue the same emergent goals, but this no longer maximizes genetic fitness. The taste for sugary food (an emergent goal) was originally aligned with inclusive fitness, but it now leads to overeating and health problems. Sexual desire originally led humans to have more offspring, but they now use contraception when offspring are undesired, decoupling sex from genetic fitness. Researchers aim to detect and remove unwanted emergent goals using approaches including red teaming, verification, anomaly detection, and interpretability. 
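Goal misgeneralization can be reproduced in miniature with a synthetic spurious-correlation setup: during training an easy-to-use feature happens to track the intended signal perfectly, the learner latches onto it, and performance drops once that accidental correlation breaks after deployment. Everything below is an invented illustration, not data from the studies cited above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, size=n)

# Intended feature: weakly informative. Spurious feature: mirrors the label
# almost perfectly during training (an accident of the "training environment").
intended = y + rng.normal(scale=1.5, size=n)
spurious_train = y + rng.normal(scale=0.05, size=n)
X_train = np.column_stack([intended, spurious_train])

clf = LogisticRegression(max_iter=1000).fit(X_train, y)

# Deployment: the accidental correlation is gone (distribution shift).
y_test = rng.integers(0, 2, size=n)
intended_test = y_test + rng.normal(scale=1.5, size=n)
spurious_test = rng.integers(0, 2, size=n) + rng.normal(scale=0.05, size=n)  # unrelated to y_test
X_test = np.column_stack([intended_test, spurious_test])

print("train accuracy:", clf.score(X_train, y))
print("test accuracy: ", clf.score(X_test, y_test))
print("learned weights [intended, spurious]:", clf.coef_.round(2))
# Typically: near-perfect training accuracy, much lower test accuracy, with most
# of the weight on the spurious feature -- the learned "goal" matched the
# specification only in the training environment.
```

Red teaming, anomaly detection, and interpretability tools of the kind just mentioned are aimed at catching this sort of latched-on proxy before a system is deployed.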
Progress on these techniques may help mitigate two open problems: Emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments—even for a short time to allow its misalignment to be detected. Such high stakes are common in autonomous driving, health care, and military applications. The stakes become higher yet when AI systems gain more autonomy and capability and can sidestep human intervention. A sufficiently capable AI system might take actions that falsely convince the human supervisor that the AI is pursuing the specified objective, which helps the system gain more reward and autonomy. Embedded agency Some work in AI and alignment occurs within formalisms such as partially observable Markov decision process. Existing formalisms assume that an AI agent's algorithm is executed outside the environment (i.e. is not physically embedded in it). Embedded agency is another major strand of research that attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent that could gain access to the computer it is running on may have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it. A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing. This class of problems has been formalized using causal incentive diagrams. Researchers affiliated with Oxford and DeepMind have claimed that such behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely and certainly. They suggest a range of potential approaches to address this open problem. Principal-agent problems The alignment problem has many parallels with the principal-agent problem in organizational economics. In a principal-agent problem, a principal, e.g. a firm, hires an agent to perform some task. In the context of AI safety, a human would typically take the principal role and the AI would take the agent role. As with the alignment problem, the principal and the agent differ in their utility functions. But in contrast to the alignment problem, the principal cannot coerce the agent into changing its utility, e.g. through training, but rather must use exogenous factors, such as incentive schemes, to bring about outcomes compatible with the principal's utility function. Some researchers argue that principal-agent problems are more realistic representations of AI safety problems likely to be encountered in the real world. Conservatism Conservatism is the idea that "change must be cautious", and is a common approach to safety in the control theory literature in the form of robust control, and in the risk management literature in the form of the "worst-case scenario". The field of AI alignment has likewise advocated for "conservative" (or "risk-averse" or "cautious") "policies in situations of uncertainty". Pessimism, in the sense of assuming the worst within reason, has been formally shown to produce conservatism, in the sense of reluctance to cause novelties, including unprecedented catastrophes. 
Pessimism and worst-case analysis have been found to help mitigate confident mistakes in the setting of distributional shift, reinforcement learning, offline reinforcement learning, language model fine-tuning, imitation learning, and optimization in general. A generalization of pessimism called Infra-Bayesianism has also been advocated as a way for agents to robustly handle unknown unknowns. Public policy Governmental and treaty organizations have made statements emphasizing the importance of AI alignment. In September 2021, the Secretary-General of the United Nations issued a declaration that included a call to regulate AI to ensure it is "aligned with shared global values". That same month, the PRC published ethical guidelines for AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and does not endanger public safety. Also in September 2021, the UK published its 10-year National AI Strategy, which says the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously". The strategy describes actions to assess long-term AI risks, including catastrophic risks. In March 2021, the US National Security Commission on Artificial Intelligence said: "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to ensure that systems are aligned with goals and values, including safety, robustness, and trustworthiness. The US should ... ensure that AI systems and their uses align with our goals and values." In the European Union, AIs must align with substantive equality to comply with EU non-discrimination law and the Court of Justice of the European Union. But the EU has yet to specify with technical rigor how it would evaluate whether AIs are aligned or in compliance. Dynamic nature of alignment AI alignment is often perceived as a fixed objective, but some researchers argue it would be more appropriate to view alignment as an evolving process. One view is that as AI technologies advance and human values and preferences change, alignment solutions must also adapt dynamically. Another is that alignment solutions need not adapt if researchers can create intent-aligned AI: AI that changes its behavior automatically as human intent changes. The first view would have several implications: AI alignment solutions require continuous updating in response to AI advancements. A static, one-time alignment approach may not suffice. Varying historical contexts and technological landscapes may necessitate distinct alignment strategies. This calls for a flexible approach and responsiveness to changing conditions. The feasibility of a permanent, "fixed" alignment solution remains uncertain. This raises the potential need for continuous oversight of the AI-human relationship. AI developers may have to continuously refine their ethical frameworks to ensure that their systems align with evolving human values. In essence, AI alignment may not be a static destination but rather an open, flexible process. Alignment solutions that continually adapt to ethical considerations may offer the most robust approach. This perspective could guide both effective policy-making and technical research in AI. 
See also AI safety Artificial intelligence detection software Artificial intelligence and elections Statement on AI risk of extinction Existential risk from artificial general intelligence AI takeover AI capability control Reinforcement learning from human feedback Regulation of artificial intelligence Artificial wisdom Grey goo HAL 9000 Multivac Open Letter on Artificial Intelligence Three Laws of Robotics Toronto Declaration Asilomar Conference on Beneficial AI Socialization Footnotes References Further reading External links Specification gaming examples in AI, via DeepMind AI safety Existential risk from artificial general intelligence Singularitarianism Philosophy of artificial intelligence Computational neuroscience Cybernetics Artificial intelligence
AI alignment
[ "Technology", "Engineering" ]
7,450
[ "Safety engineering", "AI safety", "Existential risk from artificial general intelligence" ]
42,138,530
https://en.wikipedia.org/wiki/Bandy%20field
A bandy field or bandy rink is a large ice rink used for playing the team winter sport of bandy. Being about the size of a football pitch, it is substantially larger than an ice hockey rink. History Originally, bandy was played on naturally frozen ice, mainly on lakes. Teams often had to take time to go out and search for the best ice to use. Soon, ice started to be created on soccer pitches in the wintertime, allowing for a more safe place to play. This may be the reason the outer measurements are the same as for a soccer field. The first artificially frozen bandy field was created in Budapest, Hungary, in 1923. In the 1980s, indoor arenas started to be built, allowing for a longer season. The world's first indoor bandy arena, the Olimpiyskiy, was built in Moscow for the 1980 Summer Olympics but has hosted many bandy events since. Size The size of a bandy field is regulated in section 1.1 of the Bandy Playing Rules set up by the Federation of International Bandy It shall be rectangular and in the range ( by ), about the same size as a football pitch for association football and considerably larger than an ice hockey rink. For international play, the field must not be smaller than by . The field is outlined with distinct and unbroken lines according to section 1.1. These lines are red and wide, according to section 1.1 D. Sidelines and borders Along the sidelines, section 1.2 of the Rules prescribes the use of a high border (board, vant, sarg, wand, wall) to be placed to prevent the ball from leaving the ice. It should not be attached to the ice, it should be able to glide upon collisions, and should end away from the corners, to allow for corner-strokes. The top should have soft protection, to avoid players getting hurt if touching it when coming at high speed. The border was originally only used in Russia, but was introduced to other countries in the 1950s when the rules of the game were standardized and the international governing federation was founded. It allows for a faster game, as the ball stays in play instead of easily leaving the field, which means it would have to be collected and thrown in. If the border is frozen to the ice during play, this can be hazardous to the players, and the referee can therefore decide to start or continue the game without such border. The same applies if strong wind relocates the border, under such circumstances the match can also be started or continued without border. This is regulated in comment section C1.8 of the Rules. The border is made of sections which each should be about long according to section 1.2 of the Rules. Section 1.3 of the Rules prescribes where players must and must not enter or leave the field: Four of these sections of the border shall be painted red on the front side as well as on the backside. These four sections are placed at the middle of the side-line on one side of the field, in front of the players' benches. All exchange of players from both teams must take place over these red-painted border pieces, i.e. over a part of the border which is about 16 m long. According to comment section C3.3 of the Rules, a player who is to be replaced, shall have left the rink before the replacing player can enter the game. Section 1.3 also states that the erroneous exchange of players is to be punished with a penalty of 5 minutes for the in-going player (this length of penalty is shown by the referee displaying a white penalty card). 
Centre line and corners A centre spot denotes the centre of the field and a circle of radius is centred at it. A centre-line is drawn through the centre spot and parallel with the shortlines. At each of the corners, a radius quarter-circle is drawn, and a dotted line is painted parallel to the shortline and away from it without extending into the penalty area. The dotted line can be replaced with a long line starting at the edge of the penalty area and extending toward the sideline, from the shortline. Goal cage Centred at each short-line is a wide and high goal cage, regulated as to size, form, material and other properties in section 1.4 of the Rules. The cage may be made of wood, aluminium or steel and has a net to stop the ball when it has crossed the goal-line. The goal-line is the line between the goalposts; section 1.1. The cage shall be of an approved model; the approval is made by the appropriate governing body. It shall be fitted with small spikes on the underside to prevent the goal from being moved by the wind or by a minor touch from a player, so that it stays in place. As long as the goal cage stays in place, the match can go on. For safety reasons, the goal-posts shall not have any sharp edges. The goal cage also has two ball baskets, one on each outer side; section 1.4 A. Balls are stored there for the goal-keeper to use when he is to set a ball in play if the ball has been shot over the end line at the side of the goal by a player from the opponent team. This allows the game to start up more quickly when this has happened. In front of the goal cage is a half-circular penalty area with a radius. A penalty spot is located in front of the goal and there are two free-stroke spots at the penalty area line, each surrounded by a circle. Ice condition Especially for naturally frozen ice, it may occur that the ice is in too poor a condition to play on. The ice shall be inspected by the referee before the game. If the referee deems that the condition of the ice is too poor, comment sections C1.1 and C1.2 allow him to decide that the match has to be cancelled. No one but the referee is allowed to decide on cancellation because of the condition of the ice (but this does not mean that either team is unable to draw the attention of the referee regarding some part of the condition of the ice). Deficiencies of the rink, including inferior ice quality, are the responsibility of the organiser of the match and according to comment section C1.4 deficiencies shall be reported to the administrative authority. Outdoor and indoor arenas Originally, bandy was played on frozen lakes, but soon football fields started to be used, by pouring water on them in the wintertime to get a good, flat and safe ice surface. Starting in the 1980s, and increasingly since 2000, more and more indoor bandy arenas have been built, especially in Russia and Sweden. Indoor rinks provide a more stable climate for the ice and thus better, more reliable surfaces, but many fans of the sport claim they take away much of the traditional feeling around the game, where the weather was a factor to consider for the teams. References field Field Sports venues by type Ice rinks
Bandy field
[ "Engineering" ]
1,435
[ "Structural engineering", "Ice rinks" ]
42,138,577
https://en.wikipedia.org/wiki/Nilpotent%20space
In topology, a branch of mathematics, a nilpotent space, first defined by Emmanuel Dror Farjoun (1969), is a based topological space X such that the fundamental group is a nilpotent group and acts nilpotently on the higher homotopy groups , i.e., there is a central series such that the induced action of on the quotient group is trivial for all . Simply connected spaces and simple spaces are (trivial) examples of nilpotent spaces; other examples are connected loop spaces. The homotopy fiber of any map between nilpotent spaces is a disjoint union of nilpotent spaces. Moreover, the null component of the pointed mapping space , where K is a pointed, finite-dimensional CW complex and X is any pointed space, is a nilpotent space. The odd-dimensional real projective spaces are nilpotent spaces, while the projective plane is not. A basic theorem about nilpotent spaces states that any map that induces an integral homology isomorphism between two nilpotent spaces is a weak homotopy equivalence. For simply connected spaces, this theorem recovers a well-known corollary to the Whitehead and Hurewicz theorems. Nilpotent spaces are of great interest in rational homotopy theory, because most constructions applicable to simply connected spaces can be extended to nilpotent spaces. The Bousfield–Kan nilpotent completion of a space associates with any connected pointed space X a universal space through which any map of X to a nilpotent space N factors uniquely up to a contractible space of choices. Often, however, itself is not nilpotent but only an inverse limit of a tower of nilpotent spaces. This tower, as a pro-space, always models the homology type of the given pointed space X. Nilpotent spaces admit a good arithmetic localization theory in the sense of Bousfield and Kan cited above, and the unstable Adams spectral sequence strongly converges for any such space. Let X be a nilpotent space and let h be a reduced generalized homology theory, such as K-theory. If h(X)=0, then h vanishes on any Postnikov section of X. This follows from a theorem that states that any such section is X-cellular. References Topological spaces
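The symbols in the definition above did not survive extraction; the following is a standard restatement, with the notation (π₁, πₙ, and the filtration Gⁱ) supplied here rather than taken from the original:

```latex
% Standard definition of a nilpotent space; the symbols are supplied for clarity.
A based space $X$ is \emph{nilpotent} if
(1) $\pi_1(X)$ is a nilpotent group, and
(2) for every $n \ge 2$ there is a finite chain of $\pi_1(X)$-submodules
$$\pi_n(X) = G^1 \supseteq G^2 \supseteq \cdots \supseteq G^k = 0$$
such that the induced action of $\pi_1(X)$ on each quotient $G^i/G^{i+1}$ is trivial.
```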
Nilpotent space
[ "Mathematics" ]
497
[ "Mathematical structures", "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology" ]
53,626,840
https://en.wikipedia.org/wiki/Stagnation%20point%20flow
In fluid dynamics, a stagnation point flow refers to a fluid flow in the neighbourhood of a stagnation point (in two-dimensional flows) or a stagnation line (in three-dimensional flows), where the stagnation point/line is a point/line at which the velocity is zero in the inviscid approximation. The flow specifically considers a class of stagnation points known as saddle points, wherein incoming streamlines get deflected and directed outwards in a different direction; the streamline deflections are guided by separatrices. The flow in the neighborhood of the stagnation point or line can generally be described using potential flow theory, although viscous effects cannot be neglected if the stagnation point lies on a solid surface. Stagnation point flow without solid surfaces When two streams either of two-dimensional or axisymmetric nature impinge on each other, a stagnation plane is created, where the incoming streams are diverted tangentially outwards; thus on the stagnation plane, the velocity component normal to that plane is zero, whereas the tangential component is non-zero. In the neighborhood of the stagnation point, a local description of the velocity field can be given. General three-dimensional velocity field The stagnation point flow corresponds to a linear dependence on the coordinates, which can be described in Cartesian coordinates with velocity components as follows where are constants (or time-dependent functions) referred to as the strain rates; the three strain rates are not completely arbitrary since the continuity equation requires , that is to say, only two of the three constants are independent. We shall assume so that flow is towards the stagnation point in the direction and away from the stagnation point in the direction. Without loss of generality, one can assume that . The flow field can be categorized into different types based on a single parameter Planar stagnation-point flow The two-dimensional stagnation-point flow belongs to the case . The flow field is described as follows where we let . This flow field was investigated as early as 1934 by G. I. Taylor. In the laboratory, this flow field is created using a four-roll mill apparatus, although these flow fields are ubiquitous in turbulent flows. Axisymmetric stagnation-point flow The axisymmetric stagnation point flow corresponds to . The flow field can be simply described in a cylindrical coordinate system with velocity components as follows where we let . Radial stagnation flows In radial stagnation flows, instead of a stagnation point, we have a stagnation circle and the stagnation plane is replaced by a stagnation cylinder. The radial stagnation flow is described using the cylindrical coordinate system with velocity components as follows where is the location of the stagnation cylinder. Hiemenz flow The flow due to the presence of a solid surface at in planar stagnation-point flow was described first by Karl Hiemenz in 1911, whose numerical computations for the solutions were improved later by Leslie Howarth. A familiar example where Hiemenz flow is applicable is the forward stagnation line that occurs in the flow over a circular cylinder. The solid surface lies on the . According to potential flow theory, the fluid motion described in terms of the stream function and the velocity components is given by The stagnation line for this flow is . The velocity component is non-zero on the solid surface, indicating that the above velocity field does not satisfy the no-slip boundary condition on the wall. 
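The displayed velocity fields referred to in this section did not survive extraction. The standard forms are reconstructed below; the symbols (α, β, γ for the strain rates, k for the planar and axisymmetric rates) are chosen here and may differ from the original article's notation.

```latex
% Reconstructed standard near-stagnation velocity fields (notation assumed).
\text{General linear field:}\quad u = \alpha x,\; v = \beta y,\; w = \gamma z,
\qquad \alpha + \beta + \gamma = 0 \ \text{(continuity)}.
\\[4pt]
\text{Planar stagnation point:}\quad u = kx,\; v = -ky,\qquad \psi = kxy.
\\[4pt]
\text{Axisymmetric stagnation point:}\quad v_r = kr,\; v_z = -2kz.
\\[4pt]
\text{Hiemenz outer (inviscid) flow, wall at } y = 0:\quad u = kx,\; v = -ky,
\quad \text{stagnation line at } x = 0,\ y = 0.
```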
To find the velocity components that satisfy the no-slip boundary condition, one assumes the following form where is the kinematic viscosity and is the characteristic thickness where viscous effects are significant. The existence of a constant value for the viscous-effects thickness is due to the competing balance between the fluid convection that is directed towards the solid surface and viscous diffusion that is directed away from the surface. Thus the vorticity produced at the solid surface is able to diffuse only to distances of order ; analogous situations that resemble this behavior occur in the asymptotic suction profile and in von Kármán swirling flow. The velocity components, pressure and Navier–Stokes equations then become The requirements that at and that as translate to The condition for as cannot be prescribed and is obtained as a part of the solution. The problem formulated here is a special case of the Falkner–Skan boundary layer. The solution can be obtained from numerical integration. The asymptotic behaviors for large are where is the displacement thickness. Stagnation point flow with a translating wall Hiemenz flow when the solid wall translates with a constant velocity along the was solved by Rott (1956). This problem describes the flow in the neighbourhood of the forward stagnation line occurring in a flow over a rotating cylinder. The required stream function is where the function satisfies The solution to the above equation is given by Oblique stagnation point flow If the incoming stream is perpendicular to the stagnation line, but approaches obliquely, the outer flow is not potential, but has a constant vorticity . The appropriate stream function for oblique stagnation point flow is given by Viscous effects due to the presence of a solid wall were studied by Stuart (1959), Tamada (1979) and Dorrepaal (1986). In their approach, the streamfunction takes the form where the function . Homann flow The solution for axisymmetric stagnation point flow in the presence of a solid wall was first obtained by Homann (1936). A typical example of this flow is the forward stagnation point appearing in a flow past a sphere. Paul A. Libby (1974)(1976) extended Homann's work by allowing the solid wall to translate along its own plane with a constant speed and allowing constant suction or injection at the solid surface. The solution for this problem is obtained in the cylindrical coordinate system by introducing where is the translational speed of the wall and is the injection (or suction) velocity at the wall. The problem is axisymmetric only when . The pressure is given by The Navier–Stokes equations then reduce to along with boundary conditions, When , the classical Homann problem is recovered. Plane counterflows Jets emerging from slot-jets create a stagnation point in between them, according to potential theory. The flow near the stagnation point can be studied using a self-similar solution. This setup is widely used in combustion experiments. The initial study of impinging stagnation flows is due to C.Y. Wang. Let two fluids with constant properties denoted with suffix flowing from opposite directions impinge, and assume the two fluids are immiscible and the interface (located at ) is planar. The velocity is given by where are the strain rates of the fluids. At the interface, velocities, tangential stress and pressure must be continuous. 
Introducing the self-similar transformation, results equations, The no-penetration condition at the interface and free stream condition far away from the stagnation plane become But the equations require two more boundary conditions. At , the tangential velocities , the tangential stress and the pressure are continuous. Therefore, where (from outer inviscid problem) is used. Both are not known apriori, but derived from matching conditions. The third equation is determine variation of outer pressure due to the effect of viscosity. So there are only two parameters, which governs the flow, which are then the boundary conditions become . References Fluid mechanics Fluid dynamics
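The Hiemenz boundary-value problem described above (the similarity equation phi''' + phi*phi'' + 1 - (phi')^2 = 0 with phi(0) = phi'(0) = 0 and phi' -> 1 far from the wall) is usually integrated numerically by shooting on the unknown wall value phi''(0). The Python sketch below is one minimal way to do that; the bracketing interval, the far-field cutoff and the variable names are illustrative choices rather than anything taken from the article, and the literature value phi''(0) ≈ 1.2326 is quoted only as a sanity check.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    # y = [phi, phi', phi'']; Hiemenz equation: phi''' + phi*phi'' + 1 - (phi')**2 = 0
    phi, dphi, d2phi = y
    return [dphi, d2phi, dphi ** 2 - phi * d2phi - 1.0]

ETA_MAX = 6.0  # far-field cutoff; phi' has essentially reached 1 well before this

def mismatch(s):
    # Integrate from the wall with a guessed wall value phi''(0) = s and return
    # how far phi' at the edge of the domain is from its far-field target value 1.
    sol = solve_ivp(rhs, (0.0, ETA_MAX), [0.0, 0.0, s], rtol=1e-9, atol=1e-9)
    return sol.y[1, -1] - 1.0

# Shooting: bracket the wall value and solve; the literature value is about 1.2326.
s_wall = brentq(mismatch, 1.0, 1.5)
print(f"phi''(0) ~ {s_wall:.4f}")

# Tabulate the tangential-velocity profile phi'(eta), which rises from 0 at the wall to 1.
sol = solve_ivp(rhs, (0.0, ETA_MAX), [0.0, 0.0, s_wall],
                dense_output=True, rtol=1e-9, atol=1e-9)
for eta in (0.5, 1.0, 2.0, 4.0):
    print(f"eta = {eta:3.1f}   phi'(eta) = {sol.sol(eta)[1]:.4f}")
```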
Stagnation point flow
[ "Chemistry", "Engineering" ]
1,553
[ "Chemical engineering", "Civil engineering", "Piping", "Fluid mechanics", "Fluid dynamics" ]
52,233,128
https://en.wikipedia.org/wiki/Canal%20safety%20gates
Canal safety gates or canal air raid protection gates are structures that were installed on canals specifically to reduce or prevent flood damage to dwellings, factories, etc. in the event of aqueducts, canal banks, etc. being breached either through natural events or by enemy action during wars, insurgency, sabotage, etc. They sometimes have a secondary function in regard of canal maintenance work. Substantial structures or simple 'stop gates' or 'stop planks' were used to prevent flooding and were usually only put in place when air raid warnings were given. Introduction Large volumes of stored water have considerable destructive potential and where structures such as canals run on embankments above low lying built up areas or where aqueducts exist, appropriate safety precautions were taken either as a war-time contingency or at the time of construction. These 'canal safety gates' or 'canal air raid protection gates (ARPG)' were constructed and installed in regard to the scale of the danger posed and ranged from simple wooden planks known as 'stop gates' or 'stop planks' to more massive constructions built of concrete and steel such as the safety gates built on the Forth and Clyde Canal near Stockingfield Junction and on the Glasgow Branch at Firhill Road and Craighall Road. Where a water link was no longer commercially important, but still represented a risk in case of damage, it might be closed off permanently with concrete or an earth bank. This was done in Bristol at the beginning of WWII to protect the floating harbour by blocking the river access from the harbour at Bathurst Basin and the Feeder Canal at Totterdown Basin. Canals with safety gates The Forth and Clyde Canal In 1942 two massive steel safety, or stop, gates were constructed on the Edinburgh side of Stockingfield Junction at what is known as the Stockingfield Narrows. The purpose of these two hand cranked steel gates was to hold back the waters of the Forth and Clyde Canal to prevent serious flooding in Glasgow in the event of bombing destroying or breaching the nearby Stockingfield Aqueduct. The nearest lock on the Edinburgh main line that could control the water loss after a breach is away at Wyndford, Lock 20. Further sets of safety, or stop, locks were also created in WWII on the Glasgow Branch at the Firhill Road Narrows and at Craighall Road Narrows near Speirs Wharf, protecting the city from potential damage to the two aqueducts on this route. The Stockingfield Narrows gates are substantially intact whilst mainly the concrete parts of the structures remain at Firhill Road Narrows. The Union Canal The Union Canal was built as a contour or mathematical canal and is approximately in length, following the contour throughout, thereby avoiding the need for locks but lacking this means of restricting water loss in the event of a breach. For safety the engineers between 1818 and 1822 provided gates in case of structural failures and for canal maintenance using single leaf, timber gates at nineteen locations. Scottish Canals have had two timber bridge hole gates made to the original design and dimensions for installation at Linlithgow. The Gloucester and Sharpness Canal The Gloucester and Sharpness Canal is a canal, up to in depth, so that in the event of a canal breach millions of litres of water would flood the area. A series of safety gates are located along the canal and are particularly important as an unusual feature of the canal is a lack of locks, being described as a contour canal. 
In an emergency these gates automatically close to ensure that any risk created by a flood is controlled, protecting Gloucester and the villages along the course of the canal to Sharpness. The Grand Union and Regent's Canal The Grand Union Canal starts in London and runs to Birmingham with a total length of and 166 locks. Safety or Air Raid Protection (ARP) gates were installed at around 16 locations that were designed to automatically close if the canals were damaged during the WWII Luftwaffe's air raids. A very large number of bombs, etc. fell in the vicinity of the canals in London during the war, however no significant flooding resulted from damage to canals. The Air Raid Precautions (ARP) Department was created in 1935 to ensure that local authorities and other employers co-operated with central government. Canals on embankments through low-lying or built up areas such as London were identified as being particularly vulnerable to bombing and sabotage. At the very least resultant flooding would endanger lives, disrupt transport interchanges at King's Cross and Paddington and endanger factories in the Thames Valley. In 1938 stop planks and safety gates were installed in the Regent's Canal and in the Grand Union Canal in Greater London area and its Slough branch. Stop plank grooves were cut at each end of the aqueducts and at all weir sluices, whilst the stop gates were built in such a way that they did not unduly obstruct canal traffic. Birmingham Canal Navigations The Roundabout island at Old Turn Junction was installed during WWII, to facilitate the insertion of safety gates to protect the railway tunnel of the Stour Valley railway line that runs beneath it, in the event of a breach through bombing. The canal at this point was too wide and the island was required to narrow the canal enough for gates to be installed when required. Dortmund–Ems Canal The Dortmund–Ems Canal in Germany was a prime target for bombing by the RAF in WWII and had safety gates installed to reduce flooding, loss of water from the canal and limit numbers of boats stranded. The Danube–Tisa–Danube Canal The Danube–Tisa–Danube Canal system in Serbia has 24 gates, 16 locks, five safety gates. Micro-history Attempts were made by six members of the Ribbon Society (Irish dissidents) in March 1883 to blow up the Possil Road Aqueduct on the Glasgow Branch of the Forth and Clyde Canal. See also Canals of the United Kingdom History of the British canal system References Notes Sources Bartley, Paula (2016). Queen Victoria. Routledge. Hume, John R. (1976). The Industrial Archaeology of Scotland. 1. The Lowlands and Borders. London : B.T.Batsford. . Shill, Ray (1999). Birmingham's Canals. Sutton Publishing Ltd. . Skipper's Guide. Forth & Clyde Canal Scottish Canals. 2016. External links Video footage of the Stockingfield Junction WWII 'Stop or Safety gate'. Video footage of Stockingfield Junction. 20th century in Scotland Canals in Scotland Water transport infrastructure Flood control
Canal safety gates
[ "Chemistry", "Engineering" ]
1,299
[ "Flood control", "Environmental engineering" ]
52,237,542
https://en.wikipedia.org/wiki/DNA%20methylation%20in%20cancer
DNA methylation in cancer plays a variety of roles, helping to change the healthy cells by regulation of gene expression to a cancer cells or a diseased cells disease pattern. One of the most widely studied DNA methylation dysregulation is the promoter hypermethylation where the CPGs islands in the promoter regions are methylated contributing or causing genes to be silenced. All mammalian cells descended from a fertilized egg (a zygote) share a common DNA sequence (except for new mutations in some lineages). However, during development and formation of different tissues epigenetic factors change. The changes include histone modifications, CpG island methylations and chromatin reorganizations which can cause the stable silencing or activation of particular genes. Once differentiated tissues are formed, CpG island methylation is generally stably inherited from one cell division to the next through the DNA methylation maintenance machinery. In cancer, a number of mutational changes are found in protein coding genes. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations that silence protein expression in the genes affected. However, transcriptional silencing may be more important than mutation in causing gene silencing in progression to cancer. In colorectal cancers about 600 to 800 genes are transcriptionally silenced, compared to adjacent normal-appearing tissues, by CpG island methylation. Such CpG island methylation has also been described in glioblastoma and mesothelioma. Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. CpG islands are frequent control elements CpG islands are commonly 200 to 2000 base pairs long, have a C:G base pair content >50%, and have frequent 5' → 3' CpG sequences. About 70% of human promoters located near the transcription start site of a gene contain a CpG island. Promoters located at a distance from the transcription start site of a gene also frequently contain CpG islands. The promoter of the DNA repair gene ERCC1, for instance, was identified and located about 5,400 nucleotides upstream of its coding region. CpG islands also occur frequently in promoters for functional noncoding RNAs such as microRNAs and Long non-coding RNAs (lncRNAs). Methylation of CpG islands in promoters stably silences genes Genes can be silenced by multiple methylation of CpG sites in the CpG islands of their promoters.[11] Even if silencing of a gene is initiated by another mechanism, this often is followed by methylation of CpG sites in the promoter CpG island to stabilize the silencing of the gene.[11] On the other hand, hypomethylation of CpG islands in promoters can result in gene over-expression. Causes of DNA hypermethylation are: - Mediation of mutated K-ras induced jun protein (Serra RW. et al. 2014; Leppä S. et al. 1998) - the inhibitory effect of lnRNA on miRNAs causing demethylation - their "absorption" in the sponge effect or direct repression of demethylation factors TET1 and TGD (Thakur S. Brenner C. 2017; Ratti M. et al. 2020; Morita S. et al. 2013) - Activation of DNA methylases (Kwon JJ. et al. 2018) - Changes in isocitrate dehydrogenase (Christensen BC. et al. 2011) - Effects of viruses (Wang X. et al. )  Causes of DNA hypomethylation: - The effect of mutated K-ras on long non-coding RNAs, which, when acting, a) directly inhibits the activity or translation of genes encoding DNA methylases (Sarkar D. et al. 
2015) b) rather, "sponges" absorb miRNAs (Ratti M. et al. 2020 ), which should ensure the functioning of DNA methylases - The effect of mutated K-Ras through the activation of the myc-ODC axis, the mTor complex, with the consequence of the synthesis of polyamines, the activation of which, figuratively speaking, "pumps out" single-carbon fragments from the Methionine cycle and creates a lack of substrate for DNA methylation, leading to a hypomethylated state of DNA (Урба К. 1991 ) - Changes in the activity of methylases DNMT1/3A/3B, their relocalization (Hoffmann MJ, Schulz WA. 2005; Nishiyama A. et al. 2021) - Changes in TET performance (Nishiyama A. et al. 2021) - Changes in the synthesis of SAM from methionine due to changes in the enzymes MAT (Frau M. et al. 2013) - Changes in serine catabolism (Snell K., Weber G. 1986), causing more intensive removal of homocysteine from the methionine cycle, when serine binds to homocysteine (Урба К. 1991) - Other, unspecified reasons for supplying the Met cycle with single-carbon fragments, causing e.g. "methyl trap" phenomenon (Shane B. Stokstad EL. 1985; Zheng Y, Cantley LC. 2019), sietin and with disorders of vitamin B12 metabolism, disruption of the spare methionine resynthesis pathway (Ouyang Y. et al. 2020; Ozyerli-Goknar E, Bagci-Onder T. 2021; Barekatain, Yasaman et al. 2021) or other monocarbon fragment metabolism disorders (Urba K. 1991). Promoter CpG hyper/hypo-methylation in cancer In cancers, loss of expression of genes occurs about 10 times more frequently by hypermethylation of promoter CpG islands than by mutations. For instance, in colon tumors compared to adjacent normal-appearing colonic mucosa, about 600 to 800 heavily methylated CpG islands occur in promoters of genes in the tumors while these CpG islands are not methylated in the adjacent mucosa. In contrast, as Vogelstein et al. point out, in a colorectal cancer there are typically only about 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. DNA repair gene silencing in cancer In sporadic cancers, a DNA repair deficiency is occasionally found to be due to a mutation in a DNA repair gene. However, much more frequently, reduced or absent expression of a DNA repair gene in cancer is due to methylation of its promoter. For example, of 113 colorectal cancers examined, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region. Similarly, among 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, 6 had a mutation in the PMS2 gene, while for 103 PMS2 was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the remaining 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA, miR-155, which down-regulates MLH1. Frequency of hypermethylation of DNA repair genes in cancer Twenty-two DNA repair genes with hypermethylated promoters, and reduced or absent expression, were found to occur among 17 types of cancer, as listed in two review articles. Promoter hypermethylation of MGMT occurs frequently in a number of cancers including 93% of bladder cancers, 88% of stomach cancers, 74% of thyroid cancers, 40%-90% of colorectal cancers and 50% of brain cancers. 
That review also indicated promoter hypermethylation of LIG4, NEIL1, ATM, MLH1 or FANCB occurs at frequencies between 33% and 82% in one or more of head and neck cancers, non-small-cell lung cancers or non-small-cell lung cancer squamous cell carcinomas. The article Epigenetic inactivation of the premature aging Werner syndrome gene in human cancer indicates the DNA repair gene WRN has a promoter that is frequently hypermethylated in a number of cancers, with hypermethylation occurring in 11% to 38% of colorectal, head and neck, stomach, prostate, breast, thyroid, non-Hodgkin lymphoma, chondrosarcoma and osteosarcoma cancers (see WRN). Likely role of hypermethylation of DNA repair genes in cancer As discussed by Jin and Roberston in their review, silencing of a DNA repair gene by hypermethylation may be a very early step in progression to cancer. Such silencing is proposed to act similarly to a germ-line mutation in a DNA repair gene, and predisposes the cell and its descendants to progression to cancer. Another review also indicated an early role for hypermethylation of DNA repair genes in cancer. If a gene necessary for DNA repair is hypermethylated, resulting in deficient DNA repair, DNA damages will accumulate. Increased DNA damage tends to cause increased errors during DNA synthesis, leading to mutations that can give rise to cancer. If hypermethylation of a DNA repair gene is an early step in carcinogenesis, then it may also occur in the normal-appearing tissues surrounding the cancer from which the cancer arose (the field defect). See the table below. While DNA damages may give rise to mutations through error prone translesion synthesis, DNA damages can also give rise to epigenetic alterations during faulty DNA repair processes. The DNA damages that accumulate due to hypermethylation of the promoters of DNA repair genes can be a source of the increased epigenetic alterations found in many genes in cancers. In an early study, looking at a limited set of transcriptional promoters, Fernandez et al. examined the DNA methylation profiles of 855 primary tumors. Comparing each tumor type with its corresponding normal tissue, 729 CpG island sites (55% of the 1322 CpG island sites evaluated) showed differential DNA methylation. Of these sites, 496 were hypermethylated (repressed) and 233 were hypomethylated (activated). Thus, there is a high level of promoter methylation alterations in tumors. Some of these alterations may contribute to cancer progression. DNA methylation of microRNAs in cancer In mammals, microRNAs (miRNAs) regulate the transcriptional activity of about 60% of protein-encoding genes. Individual miRNAs can each target, and repress transcription of, on average, roughly 200 messenger RNAs of protein coding genes. The promoters of about one third of the 167 miRNAs evaluated by Vrba et al. in normal breast tissues were differentially hyper/hypo-methylated in breast cancers. A more recent study pointed out that the 167 miRNAs evaluated by Vrba et al. were only 10% of the miRNAs found expressed in breast tissues. This later study found that 58% of the miRNAs in breast tissue had differentially methylated regions in their promoters in breast cancers, including 278 hypermethylated miRNAs and 802 hypomethylated miRNAs. One miRNA that is over-expressed about 100-fold in breast cancers is miR-182. MiR-182 targets the BRCA1 messenger RNA and may be a major cause of reduced BRCA1 protein expression in many breast cancers (also see BRCA1). 
microRNAs that control DNA methyltransferase genes in cancer Some miRNAs target the messenger RNAs for DNA methyltransferase genes DNMT1, DNMT3A and DNMT3B, whose gene products are needed for initiating and stabilizing promoter methylations. As summarized in three reviews, miRNAs miR-29a, miR-29b and miR-29c target DNMT3A and DNMT3B; miR-148a and miR-148b target DNMT3B; and miR-152 and miR-301 target DNMT1. In addition, miR-34b targets DNMT1 and the promoter of miR-34b itself is hypermethylated and under-expressed in the majority of prostate cancers. When expression of these microRNAs is altered, they may also be a source of the hyper/hypo-methylation of the promoters of protein-coding genes in cancers. References Ruben Agrelo,* Wen-Hsing Cheng,† Fernando Setien,* Santiago Ropero,* Jesus Espada,* Mario F. Fraga,* Michel Herranz,* Maria F. Paz,* Montserrat Sanchez-Cespedes,* Maria Jesus Artiga,* David Guerrero,‡ Antoni Castells,§ Cayetano von Kobbe,* Vilhelm A. Bohr,† and Manel Esteller*¶Epigenetic inactivation of the premature aging Werner syndrome gene in human cancer.Proc Natl Acad Sci U S A. 2006; 103(23): 8822–8827. Gene expression Non-coding RNA Epigenetics Cancer epigenetics DNA
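The quantitative description of CpG islands given earlier in this article (a few hundred base pairs, C:G content above 50%, frequent 5' → 3' CpG dinucleotides) lends itself to a simple computational screen. The Python sketch below applies Gardiner-Garden-style criteria; the 0.6 observed/expected CpG threshold, the helper names and the synthetic test sequence are conventional or made-up choices rather than anything taken from this article.

```python
def cpg_island_stats(seq):
    """Return GC fraction and observed/expected CpG ratio for a DNA sequence."""
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    cpg_observed = seq.count("CG")                     # 5'-CpG-3' dinucleotides
    # Expected CpG count if C and G were distributed independently along the sequence.
    cpg_expected = (c * g) / n if n else 0.0
    gc_fraction = (g + c) / n if n else 0.0
    obs_exp = cpg_observed / cpg_expected if cpg_expected else 0.0
    return gc_fraction, obs_exp

def looks_like_cpg_island(seq, min_len=200, min_gc=0.50, min_obs_exp=0.6):
    gc, ratio = cpg_island_stats(seq)
    return len(seq) >= min_len and gc > min_gc and ratio > min_obs_exp

# Toy check on a synthetic, maximally CpG-dense 300 bp fragment (illustration only).
fragment = "CG" * 150
print(cpg_island_stats(fragment))        # (1.0, 2.0)
print(looks_like_cpg_island(fragment))   # True
```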
DNA methylation in cancer
[ "Chemistry", "Biology" ]
2,797
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
52,242,050
https://en.wikipedia.org/wiki/Multiplicative%20weight%20update%20method
The multiplicative weights update method is an algorithmic technique most commonly used for decision making and prediction, and also widely deployed in game theory and algorithm design. The simplest use case is the problem of prediction from expert advice, in which a decision maker needs to iteratively decide on an expert whose advice to follow. The method assigns initial weights to the experts (usually identical initial weights), and updates these weights multiplicatively and iteratively according to the feedback of how well an expert performed: reducing it in case of poor performance, and increasing it otherwise. It was discovered repeatedly in very diverse fields such as machine learning (AdaBoost, Winnow, Hedge), optimization (solving linear programs), theoretical computer science (devising fast algorithm for LPs and SDPs), and game theory. Name "Multiplicative weights" implies the iterative rule used in algorithms derived from the multiplicative weight update method. It is given with different names in the different fields where it was discovered or rediscovered. History and background The earliest known version of this technique was in an algorithm named "fictitious play" which was proposed in game theory in the early 1950s. Grigoriadis and Khachiyan applied a randomized variant of "fictitious play" to solve two-player zero-sum games efficiently using the multiplicative weights algorithm. In this case, player allocates higher weight to the actions that had a better outcome and choose his strategy relying on these weights. In machine learning, Littlestone applied the earliest form of the multiplicative weights update rule in his famous winnow algorithm, which is similar to Minsky and Papert's earlier perceptron learning algorithm. Later, he generalized the winnow algorithm to weighted majority algorithm. Freund and Schapire followed his steps and generalized the winnow algorithm in the form of hedge algorithm. The multiplicative weights algorithm is also widely applied in computational geometry such as Kenneth Clarkson's algorithm for linear programming (LP) with a bounded number of variables in linear time. Later, Bronnimann and Goodrich employed analogous methods to find set covers for hypergraphs with small VC dimension. In operations research and on-line statistical decision making problem field, the weighted majority algorithm and its more complicated versions have been found independently. In computer science field, some researchers have previously observed the close relationships between multiplicative update algorithms used in different contexts. Young discovered the similarities between fast LP algorithms and Raghavan's method of pessimistic estimators for derandomization of randomized rounding algorithms; Klivans and Servedio linked boosting algorithms in learning theory to proofs of Yao's XOR Lemma; Garg and Khandekar defined a common framework for convex optimization problems that contains Garg-Konemann and Plotkin-Shmoys-Tardos as subcases. The Hedge algorithm is a special case of mirror descent. General setup A binary decision needs to be made based on n experts’ opinions to attain an associated payoff. In the first round, all experts’ opinions have the same weight. The decision maker will make the first decision based on the majority of the experts' prediction. Then, in each successive round, the decision maker will repeatedly update the weight of each expert's opinion depending on the correctness of his prior predictions. 
Real life examples includes predicting if it is rainy tomorrow or if the stock market will go up or go down. Algorithm analysis Halving algorithm Given a sequential game played between an adversary and an aggregator who is advised by N experts, the goal is for the aggregator to make as few mistakes as possible. Assume there is an expert among the N experts who always gives the correct prediction. In the halving algorithm, only the consistent experts are retained. Experts who make mistakes will be dismissed. For every decision, the aggregator decides by taking a majority vote among the remaining experts. Therefore, every time the aggregator makes a mistake, at least half of the remaining experts are dismissed. The aggregator makes at most mistakes. Weighted majority algorithm Unlike halving algorithm which dismisses experts who have made mistakes, weighted majority algorithm discounts their advice. Given the same "expert advice" setup, suppose we have n decisions, and we need to select one decision for each loop. In each loop, every decision incurs a cost. All costs will be revealed after making the choice. The cost is 0 if the expert is correct, and 1 otherwise. this algorithm's goal is to limit its cumulative losses to roughly the same as the best of experts. The very first algorithm that makes choice based on majority vote every iteration does not work since the majority of the experts can be wrong consistently every time. The weighted majority algorithm corrects above trivial algorithm by keeping a weight of experts instead of fixing the cost at either 1 or 0. This would make fewer mistakes compared to halving algorithm. Initialization: Fix an . For each expert, associate the weight ≔1. For = , ,..., 1. Make the prediction given by the weighted majority of the experts' predictions based on their weights. That is, choose 0 or 1 depending on which prediction has a higher total weight of experts advising it (breaking ties arbitrarily). 2. For every expert i that predicted wrongly, decrease his weight for the next round by multiplying it by a factor of (1-η): = (update rule) If , the weight of the expert's advice will remain the same. When increases, the weight of the expert's advice will decrease. Note that some researchers fix in weighted majority algorithm. After steps, let be the number of mistakes of expert i and be the number of mistakes our algorithm has made. Then we have the following bound for every : . In particular, this holds for i which is the best expert. Since the best expert will have the least , it will give the best bound on the number of mistakes made by the algorithm as a whole. Randomized weighted majority algorithm This algorithm can be understood as follows: Given the same setup with N experts. Consider the special situation where the proportions of experts predicting positive and negative, counting the weights, are both close to 50%. Then, there might be a tie. Following the weight update rule in weighted majority algorithm, the predictions made by the algorithm would be randomized. The algorithm calculates the probabilities of experts predicting positive or negatives, and then makes a random decision based on the computed fraction: predict where . The number of mistakes made by the randomized weighted majority algorithm is bounded as: where and . Note that only the learning algorithm is randomized. The underlying assumption is that the examples and experts' predictions are not random. The only randomness is the randomness where the learner makes his own prediction. 
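A minimal Python sketch of the randomized weighted majority scheme just described is given below. The hypothetical experts, the outcome sequence and the choice η = 0.5 are made up purely for illustration; the update rule that multiplies the weight of each wrong expert by (1 − η) is the one described above.

```python
import random

def randomized_weighted_majority(expert_preds, outcomes, eta=0.5, seed=0):
    """expert_preds[t][i] is expert i's 0/1 prediction in round t; outcomes[t] is the truth."""
    rng = random.Random(seed)
    n = len(expert_preds[0])
    w = [1.0] * n                                   # every expert starts with weight 1
    mistakes = 0
    for preds, truth in zip(expert_preds, outcomes):
        total = sum(w)
        p_one = sum(wi for wi, p in zip(w, preds) if p == 1) / total
        guess = 1 if rng.random() < p_one else 0    # predict 1 with the weighted fraction
        mistakes += int(guess != truth)
        # Multiplicative update: discount every expert that was wrong in this round.
        w = [wi * (1.0 - eta) if p != truth else wi for wi, p in zip(w, preds)]
    return mistakes, w

# Three hypothetical experts: expert 0 is always right, expert 1 always wrong,
# expert 2 right only part of the time.
T = 200
outcomes = [t % 2 for t in range(T)]
expert_preds = [[o, 1 - o, (t % 3) % 2] for t, o in enumerate(outcomes)]
m, w = randomized_weighted_majority(expert_preds, outcomes)
print("algorithm mistakes:", m)
print("final weights     :", [round(x, 3) for x in w])   # only expert 0 keeps weight 1
```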
In this randomized algorithm, if . Compared to weighted algorithm, this randomness halved the number of mistakes the algorithm is going to make. However, it is important to note that in some research, people define in weighted majority algorithm and allow in randomized weighted majority algorithm. Applications The multiplicative weights method is usually used to solve a constrained optimization problem. Let each expert be the constraint in the problem, and the events represent the points in the area of interest. The punishment of the expert corresponds to how well its corresponding constraint is satisfied on the point represented by an event. Solving zero-sum games approximately (Oracle algorithm): Suppose we were given the distribution on experts. Let = payoff matrix of a finite two-player zero-sum game, with rows. When the row player uses plan and the column player uses plan , the payoff of player is ≔, assuming . If player chooses action from a distribution over the rows, then the expected result for player selecting action is . To maximize , player should choose plan . Similarly, the expected payoff for player is . Choosing plan would minimize this payoff. By John Von Neumann's Min-Max Theorem, we obtain: where P and i changes over the distributions over rows, Q and j changes over the columns. Then, let denote the common value of above quantities, also named as the "value of the game". Let be an error parameter. To solve the zero-sum game bounded by additive error of , So there is an algorithm solving zero-sum game up to an additive factor of δ using O(/) calls to ORACLE, with an additional processing time of O(n) per call Bailey and Piliouras showed that although the time average behavior of multiplicative weights update converges to Nash equilibria in zero-sum games the day-to-day (last iterate) behavior diverges away from it. Machine learning In machine learning, Littlestone and Warmuth generalized the winnow algorithm to the weighted majority algorithm. Later, Freund and Schapire generalized it in the form of hedge algorithm. AdaBoost Algorithm formulated by Yoav Freund and Robert Schapire also employed the Multiplicative Weight Update Method. Winnow algorithm Based on current knowledge in algorithms, the multiplicative weight update method was first used in Littlestone's winnow algorithm. It is used in machine learning to solve a linear program. Given labeled examples where are feature vectors, and are their labels. The aim is to find non-negative weights such that for all examples, the sign of the weighted combination of the features matches its labels. That is, require that for all . Without loss of generality, assume the total weight is 1 so that they form a distribution. Thus, for notational convenience, redefine to be , the problem reduces to finding a solution to the following LP: , , . This is general form of LP. Hedge algorithm The hedge algorithm is similar to the weighted majority algorithm. However, their exponential update rules are different. It is generally used to solve the problem of binary allocation in which we need to allocate different portion of resources into N different options. The loss with every option is available at the end of every iteration. The goal is to reduce the total loss suffered for a particular allocation. The allocation for the following iteration is then revised, based on the total loss suffered in the current iteration using multiplicative update. Analysis Assume the learning rate and for , is picked by Hedge. 
Then for all experts , Initialization: Fix an . For each expert, associate the weight ≔1 For t=1,2,...,T: 1. Pick the distribution where . 2. Observe the cost of the decision . 3. Set ). AdaBoost algorithm This algorithm maintains a set of weights over the training examples. On every iteration , a distribution is computed by normalizing these weights. This distribution is fed to the weak learner WeakLearn which generates a hypothesis that (hopefully) has small error with respect to the distribution. Using the new hypothesis , AdaBoost generates the next weight vector . The process repeats. After T such iterations, the final hypothesis is the output. The hypothesis combines the outputs of the T weak hypotheses using a weighted majority vote. Input: Sequence of labeled examples (,),...,(, ) Distribution over the examples Weak learning algorithm "'WeakLearn"' Integer specifying number of iterations Initialize the weight vector: for . Do for 1. Set . 2. Call WeakLearn, providing it with the distribution ; get back a hypothesis [0,1]. 3. Calculate the error of . 4. Set . 5. Set the new weight vector to be . Output the hypothesis: Solving linear programs approximately Problem Given a matrix and , is there a such that ? (1) Assumption Using the oracle algorithm in solving zero-sum problem, with an error parameter , the output would either be a point such that or a proof that does not exist, i.e., there is no solution to this linear system of inequalities. Solution Given vector , solves the following relaxed problem (2) If there exists a x satisfying (1), then x satisfies (2) for all . The contrapositive of this statement is also true. Suppose if oracle returns a feasible solution for a , the solution it returns has bounded width . So if there is a solution to (1), then there is an algorithm that its output x satisfies the system (2) up to an additive error of . The algorithm makes at most calls to a width-bounded oracle for the problem (2). The contrapositive stands true as well. The multiplicative updates is applied in the algorithm in this case. Other applications Evolutionary game theory Multiplicative weights update is the discrete-time variant of the replicator equation (replicator dynamics), which is a commonly used model in evolutionary game theory. It converges to Nash equilibrium when applied to a congestion game. Operations research and online statistical decision-making In operations research and on-line statistical decision making problem field, the weighted majority algorithm and its more complicated versions have been found independently. Computational geometry The multiplicative weights algorithm is also widely applied in computational geometry, such as Clarkson's algorithm for linear programming (LP) with a bounded number of variables in linear time. Later, Bronnimann and Goodrich employed analogous methods to find Set Covers for hypergraphs with small VC dimension. Gradient descent method Matrix multiplicative weights update Plotkin, Shmoys, Tardos framework for packing/covering LPs Approximating multi-commodity flow problems O (logn)- approximation for many NP-hard problems Learning theory and boosting Hard-core sets and the XOR lemma Hannan's algorithm and multiplicative weights Online convex optimization References External links The Game Theory of Life a Quanta Magazine article describing the use of the method to evolutionary biology in a paper by Erick Chastain, Adi Livnat, Christos Papadimitriou, and Umesh Vazirani Algorithms Machine learning Randomized algorithms
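As an illustration of the Hedge update described above, the Python sketch below runs the exponential-weights rule on a made-up loss sequence and compares the algorithm's cumulative loss with that of the single best expert. The losses, the number of rounds and η = 0.1 are illustrative assumptions only.

```python
import math

def hedge(loss_matrix, eta=0.1):
    """loss_matrix[t][i] is the loss of expert i in round t, assumed to lie in [0, 1]."""
    n = len(loss_matrix[0])
    w = [1.0] * n
    total_loss = 0.0
    p = [1.0 / n] * n
    for losses in loss_matrix:
        z = sum(w)
        p = [wi / z for wi in w]                            # distribution over experts
        total_loss += sum(pi * li for pi, li in zip(p, losses))
        # Exponential multiplicative update used by Hedge.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total_loss, p

# Made-up losses for three experts: expert 1 is consistently the best.
T = 500
loss_matrix = [[0.6, 0.1, (t % 5) / 5.0] for t in range(T)]
algo_loss, final_p = hedge(loss_matrix)
best_expert_loss = min(sum(row[i] for row in loss_matrix) for i in range(3))
print(f"Hedge cumulative loss  : {algo_loss:.1f}")
print(f"best single expert     : {best_expert_loss:.1f}")
print("final distribution     :", [round(x, 3) for x in final_p])  # concentrates on expert 1
```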
Multiplicative weight update method
[ "Mathematics", "Engineering" ]
2,853
[ "Machine learning", "Applied mathematics", "Algorithms", "Mathematical logic", "Artificial intelligence engineering" ]
52,242,529
https://en.wikipedia.org/wiki/Arthrobotrys%20elegans
Arthrobotrys elegans is a species of mitosporic fungus in the family Orbiliaceae. It is found on dung. References Arthrobotrys elegans at Mycobank Pezizomycotina Fungi described in 1983 Fungus species

Arthrobotrys elegans
[ "Biology" ]
57
[ "Fungi", "Fungus species" ]
52,242,569
https://en.wikipedia.org/wiki/Biofluid%20dynamics
Biofluid dynamics may be considered as the discipline of biological engineering or biomedical engineering in which the fundamental principles of fluid dynamics are used to explain the mechanisms of biological flows and their interrelationships with physiological processes, in health and in diseases/disorder. It can be considered as the conjuncture of mechanical engineering and biological engineering. It spans from cells to organs, covering diverse aspects of the functionality of systemic physiology, including cardiovascular, respiratory, reproductive, urinary, musculoskeletal and neurological systems etc. Biofluid dynamics and its simulations in computational fluid dynamics (CFD) apply to both internal as well as external flows. Internal flows such as cardiovascular blood flow and respiratory airflow, and external flows such as flying and aquatic locomotion (i.e., swimming). Biological fluid Dynamics (or Biofluid Dynamics) involves the study of the motion of biological fluids (e.g. blood flow in arteries, animal flight, fish swimming, etc.). It can be either circulatory system or respiratory systems. Understanding the circulatory system is one of the major areas of research. The respiratory system is very closely linked to the circulatory system and is very complex to study and understand. The study of Biofluid Dynamics is also directed towards finding solutions to some of the human body related diseases and disorders. The usefulness of the subject can also be understood by seeing the use of Biofluid Dynamics in the areas of physiology in order to explain how living things work and about their motions, in developing an understanding of the origins and development of various diseases related to human body and diagnosing them, in finding the cure for the diseases related to cardiovascular and pulmonary systems. History of Bio Fluid Dynamics The History of Bio-Fluid Dynamics may be considered very old dating back to 2700-2600 BC when for the first time a written document on circulation of blood and theories of Chinese medicine called "Internal Classics" was written by the Chinese emperor Huang ti also called as the yellow emperor. The Most notable names related to the field of biofluid Dynamics are of William Harvey, Jean Louis Marie Poiseuille, and Otto Frank. In 1628, Harvey published, "An anatomical study of the motion of the heart and of the blood of animals." This was the first publication in the Western World that claimed that blood is pumped from the heart and recirculated. Jean Louis Marie Poiseuille is credited with developing the theory of Poiseuille's Flow. It describes the relationship between flow and pressure gradient in long tubes with constant cross section. Otto Frank published the "Fundamental form of the arterial pulse," which contained his "Windkessel theory" of circulation in 1890. He also perfected optical manometers and capsules for the precise measurement of intra-cardiac pressures and volumes. Huge research efforts focus these days to understand intrinsic biofluid dynamics to shed light on mechanisms in physiology and pathophysiology. This list contains details of some of the major research groups focusing efforts in this area. Basic Principles of Fluid Dynamics A fluid is defined as a substance that deforms continuously under application of a shearing stress, regardless of how small the stress is. Blood is a primary example of a biological fluid. 
Air can also be considered as biological fluid as it flows in lungs and the synovial fluid between the knee joints is also an example of a biological fluid. Types of Fluids Fluids can be classified into four basic types. They are: Ideal Fluid Real Fluid Newtonian Fluid Non-Newtonian fluid An Ideal Fluid is a fluid that has no viscosity, means it will offer no resistance, pragmatically this type of fluid does not exist. It is incompressible in nature. Real fluids are compressible in nature. They offer some resistance and thus have viscosity. All Fluids existing are real fluids. A Newtonian Fluid is a fluid whose viscous shear stresses (acting between different layers of fluid and between the fluid layer and surface over which it is flowing) are directly proportional to the rate of change of velocity of the flow of the fluid with respect to the distance in the transverse direction (distance measured perpendicular to the flow), also known as velocity gradient. The constant of proportionality is known as the dynamic viscosity of the fluid denoted by 'μ'. The functional relationship between viscous shear stress and velocity gradient is linear in a Newtonian fluid. This relationship may be written as : Where = viscous shear stress = dynamic viscosity of the fluid = velocity gradient across the flow A Non-Newtonian fluid is a fluid which is different from the Newtonian fluid as the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent (Time Dependent Viscosity). Therefore, a constant coefficient of viscosity cannot be defined. Non-Newtonian fluids change their viscosity or flow behavior under stress. If a force is applied to such fluids, the sudden application of stress can cause them to get thicker and act like a solid, or in some cases, it results in the opposite behavior and they may get runnier than they were before. Removal of the stress causes them to return to their earlier state. Not all non-Newtonian Fluids behave in the same way when stress is applied – some become more solid, others more fluid. Some non-Newtonian fluids react as a result of the amount of stress applied, while others react as a result of the length of time that stress is applied. The generalized power law for all fluids can be written as: Where K = flow consistency index n = Fluid behavior index, n=1 for Newtonian fluids Thixotropic Fluid: Its viscosity decreases with stress over time. Example - Honey – keep stirring, and solid honey becomes liquid. Rheopectic Fluid: Its viscosity increases with stress over time. Example - Cream – the longer it is whipped, the thicker it gets. Shear Thinning Fluid: Its viscosity decreases with increased stress. Example – Blood, Tomato sauce. Dilatant or shear thickening Fluid: Its viscosity increases with increased stress. Example – Oobleck (a mixture of cornstarch and water), Quicksand. A Bingham plastic is neither a fluid nor a solid. A Bingham plastic can withstand a finite shear load and flow like a fluid when that shear stress is exceeded. Toothpaste and mayonnaise are examples of Bingham plastics. Blood is also a Bingham plastic and behaves as a solid at shear rates very close to zero. The yield stress for blood is very small, approximately in the range from 0.005 to 0.01 N/m2. Reynolds number of the flow is defined as the ratio of inertia forces to viscous forces. 
Mathematically it is written as Where = density of fluid v = velocity of fluid d = characteristic length = dynamic viscosity of fluid The Reynolds number helps us to predict the transition between laminar and turbulent flows. Laminar flow is highly organized flow along streamlines. As velocity increases, flow can become disorganized and chaotic. This is known as turbulent flow. Laminar flow occurs in flow environments where Re < 2000. Turbulent flow is present in circumstances under which Re > 4000. The range of 2000 < Re < 4000 is known as the transition range. Most blood flow in humans is laminar, having a Re of 300 or less, it is possible for turbulence to occur at very high flow rates in the descending aorta, for example, in highly conditioned athletes. Turbulence is also common in pathological conditions such as heart murmurs and stenotic heart valves. Stenotic comes from the Greek word "stenos," meaning narrow. Stenotic means narrowed, and a stenotic heart valve is one in which the narrowing of the valve is a result of the plaque formation on the valve. The Womersley number, or alpha parameter, is another dimensionless parameter like the Prandtl number or Reynolds number that has been used in the study of fluid dynamics. This parameter represents a ratio of transient to viscous forces, just as the Reynolds number represented a ratio of inertial to viscous forces. A characteristic frequency represents the time dependence of the parameter. The Womersley number may be written as.: Where = Womersely Number r = vessel radius = fundamental frequency = kinematic viscosity = The flow profile becomes blunter near the centerline of the vessel in high frequency flows, because the inertia forces become more important than viscous forces. But viscous forces are still important near the wall as here the velocity of the flow is almost zero due to the effect of the wall and the no-slip condition. Moreover, it can be shown that the transient forces become relatively more important than viscous forces as the animal size increases. The Cardiovascular System The Heart, arteries, and veins (a network of tubes to carry blood) constitute the cardiovascular system or circulatory system of our body which transports the blood throughout the body. The heart can be thought of as a muscular pump, consisting of four chambers, and pulsatile muscles which pump and circulates the blood through the vasculature. Arteries, arterioles, capillaries, venules, and veins make up the vasculature. The cardiovascular system circulates about 5 liters of blood at a rate of approximately 6 L/m. The pulmonary and the systemic circulations are the two parts of the vasculature. The pulmonary circulation system consists of the network of blood vessels from the right heart to the lungs and back to the left heart. The rest of the blood flow loop is called systemic circulation system. The pulmonary and systemic circulations take the blood through large arteries first and then branches into smaller arteries before reaching arterioles and capillaries. After capillaries, the blood enters the venules before joining smaller veins first and then larger veins before reaching the right heart. Thus completing the cycle of blood going to heart and then coming from it and going to all parts of the body. The tricuspid valve, right heart (right ventricle), pulmonary valve, pulmonary artery, lungs, pulmonary veins and right heart are the elements of the Pulmonary Circulation System. 
The process of gas exchange, that is, exchange of carbon dioxide with oxygen in the lungs is the main function of the pulmonary system. The de-oxygenated blood from the right ventricle is pumped to the lungs where the capillaries surrounding the alveoli sacks exchange carbon dioxide for oxygen. The red blood cells and the hemoglobin present in the blood, which is the main carrier of oxygen in the blood are responsible for this exchange of gases before they are carried to the left ventricle of the heart. The systemic circulation is responsible for taking the oxygenated blood to various organs and tissues via the arterial tree before taking the deoxygenated blood to the right ventricle using the venous system (a network of veins). Arteries carry the oxygenated blood while the veins carry the deoxygenated blood. Elements of Blood and Blood Rheology The fluids associated with the human body include air, oxygen, carbon dioxide, water, solvents, solutions, suspensions, serum, lymph, and blood. The major body fluid which acts as the lifeline of the living organisms is "Blood". Blood is an extremely complex biological fluid. It consists of blood cells suspended in plasma and other different types of cells which include white blood cells, platelets etc. The blood flow in arteries and veins are closely linked to the blood vessel properties. Carrying the oxygen and nutrients to various tissues and organs of our body, delivering carbon dioxide to the lungs and accepting oxygen, bringing the metabolic by products to the kidneys, regulating the body's defence mechanism, that is, the immune system and facilitating an effective heat and mass transfer across the body are some of the major functions which blood performs in the human body. Blood consists of the red blood cells or erythrocytes, white blood cells or leukocytes, and platelets or thrombocytes. The cells which are involved primarily in the transport of oxygen and carbon dioxide are known as Erythrocytes. The cells which are involved primarily in phagocytosis (the process of destruction of unknown particulate matter) and immune responses are known as Leukocytes; thrombocytes are the components of blood which are involved in blood clotting. In addition to these 55 to 60 percent of blood by volume consists of plasma. Plasma is the transparent, amber-colored liquid in which the cellular components of blood are suspended. Plasma contains constituents such as proteins, electrolytes, hormones, and nutrients. The serum is blood plasma from which clotting factors have been removed. Blood accounts for 6 to 8 percent of body weight in normal, healthy humans. The density of blood is slightly greater than the density of water at approximately 1060  kg/m3. The increased density comes from the increased density of a red blood cell compared with the density of water or plasma. Rheology is the study of the deformation and flow of matter. Blood Rheology is the study of blood, especially the properties associated with the deformation and flow of blood. Blood is a non-Newtonian fluid. However, often the non-Newtonian effect is very small due to various reasons. Thus, it is important to know about the blood rheology. One of the characteristics of blood that affects the work required to cause the blood to flow through the arteries is the viscosity of blood. The viscosity of blood is in the range of 3 to 6 cP, or 0.003 to 0.006 Ns/m2. Blood is a non-Newtonian fluid, which means that the viscosity of blood is not a constant with respect to the rate of shearing strain. 
In addition to the rate of shearing strain, the viscosity of blood is also dependent on temperature and on the volume percentage of blood that consists of red blood cells. If blood is made stationary for several seconds then clotting begins in the blood, as a result of which the viscosity of the blood increases. When the stationary state is disturbed with increasing shear rate, the clot formation is destroyed and the viscosity decreases. Moreover, the orientation of red blood cells present in the blood also affects the viscosity of blood. Thus, we can say that blood is a shear thinning fluid, i.e., viscosity decreases with increase in shear rate. Beyond a shear rate of about 100s^-1, the viscosity is nearly constant and the blood behaves like a Newtonian fluid. Blood is a viscoelastic material, i.e., viscous and elastic because the effective viscosity of blood not only depends on the shear rate but also on the history of shear rate. It is also important to note that the normal blood flows much more easily compared to rigid particles, for the same particle volume fraction. This is due to the fact that red blood cells can accommodate by deforming in order to pass by one another. Fåhræus-Lindqvist effect Robert (Robin) Sanno Fåhræus, a Swedish pathologist, and hematologist, and Johan Torsten Lindqvist, a Swedish physician, observed that when blood flows through vessels smaller than about 1.5  mm in diameter, the apparent viscosity of the fluid decreases. The viscosity of blood decreases as the percent of the diameter of a vessel occupied by the cell-free layer increases. However, when the diameter of the tube approaches the diameter of the erythrocyte, the viscosity increases dramatically. For blood flow through tubes less than approximately 1  mm in diameter, the viscosity is not constant with respect to the tube diameter. Therefore, blood behaves as a non-Newtonian fluid in such blood vessels. Applications of Biofluid Dynamics Biofluid Dynamics refers to the study of fluid Dynamics of basic biological fluids such as blood, air etc. and has immense applications in the field of diagnosing, treating and certain surgical procedures related to the disorders/diseases which originate in the body relating to cardiovascular, pulmonary, synovial systems etc. The different types of cardiovascular diseases include Aneurysms, Angina, Atherosclerosis, Stroke, Different types of Cerebrovascular disease, Heart Failure, Coronary Heart diseases and Myocardial infarction or Heart attacks. The Computational Fluid dynamics (CFD) models prepared through software, of the arteries, veins etc. not only lead to the identification of properties of flowing blood inside arteries but also changes in viscosity can be identified which may be the result of certain underlying disease/disorder. Moreover, the stress concentration and the distribution of stresses in different biological systems carrying fluids can also be identified. This has led to a greater degree of assistance to biomedical engineers in recognizing the cause of certain diseases and thus they can easily search for the method of cure for that disease/disorder. Also, this has led to a greater degree of good research in the fields of biotechnology, Bio-Mechanics etc. References "Newtons Law of Viscosity" Biomechanics Fluid dynamics
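The two dimensionless groups defined earlier in this article, the Reynolds number and the Womersley number, are straightforward to evaluate for a given vessel. The Python sketch below does so for representative aortic conditions; the particular numbers (density, viscosity, diameter, mean velocity, heart rate) are illustrative figures chosen within the ranges quoted in the text, not measurements.

```python
import math

# Representative values for blood flow in the human aorta; these particular
# numbers are illustrative assumptions, not taken from the article.
rho  = 1060.0     # blood density, kg/m^3
mu   = 0.0035     # dynamic viscosity, Pa*s (3.5 cP, within the 3-6 cP range above)
d    = 0.025      # aortic diameter, m
v    = 0.2        # mean velocity, m/s
freq = 1.2        # heart rate of ~72 beats/min expressed as a fundamental frequency, Hz

nu    = mu / rho                            # kinematic viscosity, m^2/s
re    = rho * v * d / mu                    # Reynolds number (inertial / viscous forces)
omega = 2.0 * math.pi * freq                # angular frequency, rad/s
alpha = (d / 2.0) * math.sqrt(omega / nu)   # Womersley number (transient / viscous forces)

print(f"Reynolds number : {re:.0f}")    # ~1500: laminar by the Re < 2000 criterion above
print(f"Womersley number: {alpha:.1f}") # >> 1: blunted, inertia-dominated core profile
```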
Biofluid dynamics
[ "Physics", "Chemistry", "Engineering" ]
3,584
[ "Biomechanics", "Chemical engineering", "Mechanics", "Piping", "Fluid dynamics" ]
52,243,246
https://en.wikipedia.org/wiki/Continuous-time%20quantum%20Monte%20Carlo
In computational solid state physics, Continuous-time quantum Monte Carlo (CT-QMC) is a family of stochastic algorithms for solving the Anderson impurity model at finite temperature. These methods first expand the full partition function as a series of Feynman diagrams, employ Wick's theorem to group diagrams into determinants, and finally use Markov chain Monte Carlo to stochastically sum up the resulting series. The attribute continuous-time was introduced to distinguish the method from the then-predominant Hirsch–Fye quantum Monte Carlo method, which relies on a Suzuki–Trotter discretisation of the imaginary time axis. If the sign problem is absent, the method can also be used to solve lattice models such as the Hubbard model at half filling. To distinguish it from other Monte Carlo methods for such systems that also work in continuous time, the method is then usually referred to as Diagrammatic determinantal quantum Monte Carlo (DDQMC or DDMC). Partition function expansion In second quantisation, the Hamiltonian of the Anderson impurity model reads: , where and are the creation and annihilation operators, respectively, of a fermion on the impurity. The index collects the spin index and possibly other quantum numbers such as orbital (in the case of a multi-orbital impurity) and cluster site (in the case of multi-site impurity). and are the corresponding fermion operators on the non-interacting bath, where the bath quantum number will typically be continuous. Step 1 of CT-QMC is to split the Hamiltonian into an exactly solvable term, , and the rest, . Different choices correspond to different expansions and thus different algorithmic descriptions. Common choices are: Interaction expansion (CT-INT): Hybridization expansion (CT-HYB): Auxiliary field expansion (CT-AUX): like CT-INT, but the interaction term is first decoupled using a discrete Hubbard-Stratonovich transformation Step 2 is to switch to the interaction picture and expand the partition function in terms of a Dyson series: , where is the inverse temperature and denotes imaginary time ordering. The presence of a (zero-dimensional) lattice regularises the series and the finite size and temperature of the system makes renormalisation unnecessary. The Dyson series generates a factorial number of identical diagrams per order, which makes sampling more difficult and possibly worsen the sign problem. Thus, as step 3, one uses Wick's theorem to group identical diagrams into determinants. This leads to the expressions: Interaction expansion (CT-INT): Hybridisation expansion (CT-HYB): In a final step, one notes that this is nothing but an integral over a large domain and performs it using a Monte Carlo method, usually the Metropolis–Hastings algorithm. See also Metropolis–Hastings algorithm Quantum Monte Carlo Dynamical mean field theory References Computational physics
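A full continuous-time impurity solver is well beyond a short example, but the final sampling step described above can be illustrated on a toy expansion in which every vertex contributes a constant weight λ instead of a determinant. The toy partition function is then Z = Σ_k (βλ)^k / k!, so the sampled expansion order should follow a Poisson distribution with mean βλ, which gives an exact check. The Python sketch below uses insert/remove Metropolis moves of the kind employed in CT-QMC; it is a pedagogical toy under these stated assumptions, not a solver for the Anderson impurity model.

```python
import random

def sample_orders(beta=5.0, lam=1.0, sweeps=200_000, seed=1):
    """Metropolis sampling of a toy continuous-time expansion with constant vertex weight lam.

    The order-k term of the toy partition function is (beta*lam)**k / k!, i.e. k
    'vertices' placed anywhere on the imaginary-time interval [0, beta), so the
    sampled order k should be Poisson distributed with mean beta*lam.
    """
    rng = random.Random(seed)
    times = []                                  # current configuration: vertex times
    orders = []
    for _ in range(sweeps):
        k = len(times)
        if rng.random() < 0.5:
            # Propose inserting a vertex at a uniformly chosen imaginary time.
            tau = rng.uniform(0.0, beta)
            if rng.random() < min(1.0, beta * lam / (k + 1)):
                times.append(tau)
        elif k > 0:
            # Propose removing one randomly chosen existing vertex.
            if rng.random() < min(1.0, k / (beta * lam)):
                times.pop(rng.randrange(k))
        orders.append(len(times))
    return orders

orders = sample_orders()
mean_k = sum(orders) / len(orders)
print(f"measured <k> = {mean_k:.2f}   (exact value beta*lam = 5.00)")
```

In the real algorithm the constant λ is replaced by ratios of determinants of the interaction or hybridization matrices, but the structure of the insert/remove acceptance test is the same.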
Continuous-time quantum Monte Carlo
[ "Physics" ]
597
[ "Computational physics" ]
52,244,947
https://en.wikipedia.org/wiki/Fully%20irreducible%20automorphism
In the mathematical subject geometric group theory, a fully irreducible automorphism of the free group Fn is an element of Out(Fn) which has no periodic conjugacy classes of proper free factors in Fn (where n > 1). Fully irreducible automorphisms are also referred to as "irreducible with irreducible powers" or "iwip" automorphisms. The notion of being fully irreducible provides a key Out(Fn) counterpart of the notion of a pseudo-Anosov element of the mapping class group of a finite type surface. Fully irreducibles play an important role in the study of structural properties of individual elements and of subgroups of Out(Fn). Formal definition Let where . Then is called fully irreducible if there do not exist an integer and a proper free factor of such that , where is the conjugacy class of in . Here saying that is a proper free factor of means that and there exists a subgroup such that . Also, is called fully irreducible if the outer automorphism class of is fully irreducible. Two fully irreducibles are called independent if . Relationship to irreducible automorphisms The notion of being fully irreducible grew out of an older notion of an "irreducible" outer automorphism of originally introduced in. An element , where , is called irreducible if there does not exist a free product decomposition with , and with being proper free factors of , such that permutes the conjugacy classes . Then is fully irreducible in the sense of the definition above if and only if for every is irreducible. It is known that for any atoroidal (that is, without periodic conjugacy classes of nontrivial elements of ), being irreducible is equivalent to being fully irreducible. For non-atoroidal automorphisms, Bestvina and Handel produce an example of an irreducible but not fully irreducible element of , induced by a suitably chosen pseudo-Anosov homeomorphism of a surface with more than one boundary component. Properties If and then is fully irreducible if and only if is fully irreducible. Every fully irreducible can be represented by an expanding irreducible train track map. Every fully irreducible has exponential growth in given by a stretch factor . This stretch factor has the property that for every free basis of (and, more generally, for every point of the Culler–Vogtmann Outer space ) and for every one has: Moreover, is equal to the Perron–Frobenius eigenvalue of the transition matrix of any train track representative of . Unlike for stretch factors of pseudo-Anosov surface homeomorphisms, it can happen that for a fully irreducible one has and this behavior is believed to be generic. However, Handel and Mosher proved that for every there exists a finite constant such that for every fully irreducible A fully irreducible is non-atoroidal, that is, has a periodic conjugacy class of a nontrivial element of , if and only if is induced by a pseudo-Anosov homeomorphism of a compact connected surface with one boundary component and with the fundamental group isomorphic to . A fully irreducible element has exactly two fixed points in the Thurston compactification of the projectivized Outer space , and acts on with "North-South" dynamics. For a fully irreducible element , its fixed points in are projectivized -trees , where , satisfying the property that and . A fully irreducible element acts on the space of projectivized geodesic currents with either "North-South" or "generalized North-South" dynamics, depending on whether is atoroidal or non-atoroidal. 
If is fully irreducible, then the commensurator is virtually cyclic. In particular, the centralizer and the normalizer of in are virtually cyclic. If are independent fully irreducibles, then are four distinct points, and there exists such that for every the subgroup is isomorphic to . If is fully irreducible and , then either is virtually cyclic or contains a subgroup isomorphic to . [This statement provides a strong form of the Tits alternative for subgroups of containing fully irreducibles.] If is an arbitrary subgroup, then either contains a fully irreducible element, or there exist a finite index subgroup and a proper free factor of such that . An element acts as a loxodromic isometry on the free factor complex if and only if is fully irreducible. It is known that "random" (in the sense of random walks) elements of are fully irreducible. More precisely, if is a measure on whose support generates a semigroup in containing some two independent fully irreducibles. Then for the random walk of length on determined by , the probability that we obtain a fully irreducible element converges to 1 as . A fully irreducible element admits a (generally non-unique) periodic axis in the volume-one normalized Outer space , which is geodesic with respect to the asymmetric Lipschitz metric on and possesses strong "contraction"-type properties. A related object, defined for an atoroidal fully irreducible , is the axis bundle , which is a certain -invariant closed subset proper homotopy equivalent to a line. References Further reading Thierry Coulbois and Arnaud Hilion, Botany of irreducible automorphisms of free groups, Pacific Journal of Mathematics 256 (2012), 291–307. Karen Vogtmann, On the geometry of outer space. Bulletin of the American Mathematical Society 52 (2015), no. 1, 27–46. Geometric group theory Geometric topology
Fully irreducible automorphism
[ "Physics", "Mathematics" ]
1,206
[ "Geometric group theory", "Group actions", "Geometric topology", "Topology", "Symmetry" ]
37,936,488
https://en.wikipedia.org/wiki/Functional%20magnetic%20resonance%20spectroscopy%20of%20the%20brain
Functional magnetic resonance spectroscopy of the brain (fMRS) uses magnetic resonance imaging (MRI) to study brain metabolism during brain activation. The data generated by fMRS usually shows spectra of resonances, instead of a brain image, as with MRI. The area under peaks in the spectrum represents relative concentrations of metabolites. fMRS is based on the same principles as in vivo magnetic resonance spectroscopy (MRS). However, while conventional MRS records a single spectrum of metabolites from a region of interest, a key interest of fMRS is to detect multiple spectra and study metabolite concentration dynamics during brain function. Therefore, it is sometimes referred to as dynamic MRS, event-related MRS or time-resolved MRS. A novel variant of fMRS is functional diffusion-weighted spectroscopy (fDWS) which measures diffusion properties of brain metabolites upon brain activation. Unlike in vivo MRS which is intensively used in clinical settings, fMRS is used primarily as a research tool, both in a clinical context, for example, to study metabolite dynamics in patients with epilepsy, migraine and dyslexia, and to study healthy brains. fMRS can be used to study metabolism dynamics also in other parts of the body, for example, in muscles and heart; however, brain studies have been far more popular. The main goals of fMRS studies are to contribute to the understanding of energy metabolism in the brain, and to test and improve data acquisition and quantification techniques to ensure and enhance validity and reliability of fMRS studies. Basic principles Studied nuclei Like in vivo MRS, fMRS can probe different nuclei, such as hydrogen (1H) and carbon (13C). The 1H nucleus is the most sensitive and is most commonly used to measure metabolite concentrations and concentration dynamics, whereas 13C is best suited for characterizing fluxes and pathways of brain metabolism. The natural abundance of 13C in the brain is only about 1%; therefore, 13C fMRS studies usually involve the isotope enrichment via infusion or ingestion. In the literature 13C fMRS is commonly referred to as functional 13C MRS or just 13C MRS. Spectral and temporal resolution Typically in MRS a single spectrum is acquired by averaging enough spectra over a long acquisition time. Averaging is necessary because of the complex spectral structures and relatively low concentrations of many brain metabolites, which result in a low signal-to-noise ratio (SNR) in MRS of a living brain. fMRS differs from MRS by acquiring not one but multiple spectra at different time points while the participant is inside the MRI scanner. Thus, temporal resolution is very important and acquisition times need to be kept adequately short to provide a dynamic rate of metabolite concentration change. To balance the need for temporal resolution and sufficient SNR, fMRS requires a high magnetic field strength (1.5 T and above). High field strengths have the advantage of increased SNR as well as improved spectral resolution allowing to detect more metabolites and more detailed metabolite dynamics. fMRS is continuously advancing as stronger magnets become more available and better data acquisition techniques are developed providing increased spectral and temporal resolution. With 7-tesla magnet scanners it is possible to detect around 18 different metabolites of 1H spectrum which is a significant improvement over less powerful magnets. Temporal resolution has increased from 7 minutes in the first fMRS studies to 5 seconds in more recent ones. 
Spectroscopic technique In fMRS, depending on the focus of the study, either single-voxel or multi-voxel spectroscopic technique can be used. In single-voxel fMRS the selection of the volume of interest (VOI) is often done by running a functional magnetic resonance imaging (fMRI) study prior to fMRS to localize the brain region activated by the task. Single-voxel spectroscopy requires shorter acquisition times; therefore it is more suitable for fMRS studies where high temporal resolution is needed and where the volume of interest is known. Multi-voxel spectroscopy provides information about group of voxels and data can be presented in 2D or 3D images, but it requires longer acquisition times and therefore temporal resolution is decreased. Multi-voxel spectroscopy usually is performed when the specific volume of interest is not known or it is important to study metabolite dynamics in a larger brain region. Advantages and limitations fMRS has several advantages over other functional neuroimaging and brain biochemistry detection techniques. Unlike push-pull cannula, microdialysis and in vivo voltammetry, fMRS is a non-invasive method for studying dynamics of biochemistry in an activated brain. It is done without exposing subjects to ionizing radiation like it is done in positron emission tomography (PET) or single-photon emission computed tomography (SPECT) studies. fMRS gives a more direct measurement of cellular events occurring during brain activation than BOLD fMRI or PET which rely on hemodynamic responses and show only global neuronal energy uptake during brain activation while fMRS gives also information about underlying metabolic processes that support the working brain. However, fMRS requires very sophisticated data acquisition, quantification methods and interpretation of results. This is one of the main reasons why in the past it received less attention than other MR techniques, but the availability of stronger magnets and improvements in data acquisition and quantification methods are making fMRS more popular. Main limitations of fMRS are related to signal sensitivity and the fact that many metabolites of potential interest can not be detected with current fMRS techniques. Because of limited spatial and temporal resolution fMRS can not provide information about metabolites in different cell types, for example, whether lactate is used by neurons or by astrocytes during brain activation. The smallest volume that can currently be characterized with fMRS is 1 cm3, which is too big to measure metabolites in different cell types. To overcome this limitation, mathematical and kinetic modeling is used. Many brain areas are not suitable for fMRS studies because they are too small (like small nuclei in brainstem) or too close to bone tissue, CSF or extracranial lipids, which could cause inhomogeneity in the voxel and contaminate the spectra. To avoid these difficulties, in most fMRS studies the volume of interest is chosen from the visual cortex – because it is easily stimulated, has high energy metabolisms, and yields good MRS signals. Applications Unlike in vivo MRS which is intensively used in clinical settings, fMRS is used primarily as a research tool, both in a clinical context, for example, to study metabolite dynamics in patients with epilepsy, migraine and dyslexia, and to study healthy brains. fMRS can be used to study metabolism dynamics also in other parts of the body, for example, in muscles and heart; however, brain studies have been far more popular. 
The main goals of fMRS studies are to contribute to the understanding of energy metabolism in the brain, and to test and improve data acquisition and quantification techniques to ensure and enhance validity and reliability of fMRS studies. Brain energy metabolism studies fMRS was developed as an extension of MRS in the early 1990s. Its potential as a research technology became obvious when it was applied to an important research problem where PET studies had been inconclusive, namely the mismatch between oxygen and glucose consumption during sustained visual stimulation. The 1H fMRS studies highlighted the important role of lactate in this process and significantly contributed to the research in brain energy metabolism during brain activation. It confirmed the hypothesis that lactate increases during sustained visual stimulation and allowed the generalization of findings based on visual stimulation to other types of stimulation, e.g., auditory stimulation, motor task and cognitive tasks. 1H fMRS measurements were instrumental in achieving the current consensus among most researchers that lactate levels increase during the first minutes of intense brain activation. However, there are no consistent results about the magnitude of increase, and questions about the exact role of lactate in brain energy metabolism still remain unanswered and are the subject of continuing research. 13C MRS is a special type of fMRS particularly suited for measuring important neurophysiological fluxes in vivo and in real time to assess metabolic activity both in healthy and diseased brains (e.g., in human tumor tissue ). These fluxes include TCA cycle, glutamate–glutamine cycle, glucose and oxygen consumption. 13C MRS can provide detailed quantitative information about glucose dynamics that can not be obtained with 1H fMRS, because of the low concentration of glucose in the brain and the spread of its resonances in several multiplets in the 1H MRS spectrum. 13C MRSs have been crucial in recognizing that the awake nonstimulated (resting) human brain is highly active using 70%–80% of its energy for glucose oxidation to support signaling within cortical networks which is suggested to be necessary for consciousness. This finding has an important implication for the interpretation of BOLD fMRI data where this high baseline activity is generally ignored and response to the task is shown as independent of the baseline activity. 13C MRS studies indicate that this approach can misjudge and even completely miss the brain activity induced by the task. 13C MRS findings together with other results from PET and fMRI studies have been combined in a model to explain the function of resting state activity called default mode network. Another important benefit of 13C MRS is that it provides unique means for determining the time course of metabolite pools and measuring turnover rates of TCA and glutamate–glutamine cycles. As such, it has been proved to be important in aging research by revealing that mitochondrial metabolism is reduced with aging which may explain the decline in cognitive and sensory processes. Water resonance studies Usually, in 1H fMRS the water signal is suppressed to detect metabolites with much lower concentration than water. Though, an unsuppressed water signal can be used to estimate functional changes in the relaxation time T2* during cortical activation. 
This approach has been proposed as an alternative to the BOLD fMRI technique and used to detect visual response to photic stimulation, motor activation by finger tapping and activations in language areas during speech processing. Recently functional real-time single-voxel proton spectroscopy (fSVPS) has been proposed as a technique for real-time neurofeedback studies in magnetic fields of 7 tesla (7 T) and above. This approach could have potential advantages over BOLD fMRI and is the subject of current research. Migraine and pain studies fMRS has been used in migraine and pain research. It has supported the important hypothesis of mitochondria dysfunction in migraine with aura (MwA) patients. Here the ability of fMRS to measure chemical processes in the brain over time proved crucial for confirming that repetitive photic stimulation causes higher increase of the lactate level and higher decrease of the N-acetylaspartate (NAA) level in the visual cortex of MwA patients compared to migraine without aura (MwoA) patients and healthy individuals. In pain research fMRS complements fMRI and PET techniques. Although fMRI and PET are continuously used to localize pain processing areas in the brain, they can not provide direct information about changes in metabolites during pain processing that could help to understand physiological processes behind pain perception and potentially lead to novel treatments for pain. fMRS overcomes this limitation and has been used to study pain-induced (cold-pressure, heat, dental pain) neurotransmitter level changes in the anterior cingulate cortex, anterior insular cortex and left insular cortex. These fMRS studies are valuable because they show that some or all Glx compounds (glutamate, GABA and glutamine) increase during painful stimuli in the studied brain regions. Cognitive studies Cognitive studies frequently rely on the detection of neuronal activity during cognition. The use of fMRS for this purpose is at present mainly at an experimental level but is rapidly increasing. Cognitive tasks where fMRS has been used and the major findings of the research are summarized below. See also Brain metabolism Magnetic resonance imaging Nuclear magnetic resonance spectroscopy Fourier-transform spectroscopy Adjusting the homogeneity of a magnetic field References External links Introductory NMR & MRI (video series) Introduction to proton NMR spectroscopy (video series) Magnetic Resonance Spectroscopy The Basics of NMR Magnetic resonance imaging Neuroimaging Spectroscopy Nuclear magnetic resonance
Functional magnetic resonance spectroscopy of the brain
[ "Physics", "Chemistry" ]
2,546
[ "Molecular physics", "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Magnetic resonance imaging", "Instrumental analysis", "Nuclear physics", "Spectroscopy" ]
37,939,765
https://en.wikipedia.org/wiki/Bahcall%E2%80%93Wolf%20cusp
Bahcall–Wolf cusp refers to a particular distribution of stars around a massive black hole at the center of a galaxy or globular cluster. If the nucleus containing the black hole is sufficiently old, exchange of orbital energy between stars drives their distribution toward a characteristic form, such that the density of stars, ρ, varies with distance from the black hole, r, as ρ(r) ∝ r^(−7/4). So far, no clear example of a Bahcall–Wolf cusp has been found in any galaxy or star cluster. This may be due in part to the difficulty of resolving such a feature. Distribution of stars around a supermassive black hole Supermassive black holes reside in galactic nuclei. The total mass of the stars in a nucleus is roughly equal to the mass of the supermassive black hole. In the case of the Milky Way, the mass of the supermassive black hole is about 4 million Solar masses, and the number of stars in the nucleus is about ten million. The stars move around the supermassive black hole in elliptical orbits, similar to the orbits that planets follow around the Sun. The orbital energy of a star is E = (1/2)mv^2 − GMm/r, where m is the star's mass, v is the star's velocity, r is its distance from the supermassive black hole, M is the supermassive black hole's mass, and G is the gravitational constant. A star's energy remains nearly constant for many orbital periods. But after roughly one relaxation time, most of the stars in the nucleus will have exchanged energy with other stars, causing their orbits to change. Bahcall and Wolf showed that once this has taken place, the phase-space distribution of orbital energies has the form f(E) ∝ |E|^(1/4), which corresponds to the density ρ = ρ0 r^(−7/4). The figure shows how the density of stars evolves toward the Bahcall–Wolf form. The fully formed cusp extends outward to a distance of roughly one-fifth the supermassive black hole's influence radius. It is believed that relaxation times in the nuclei of small, dense galaxies are short enough for Bahcall–Wolf cusps to form. The Galactic Center The influence radius of the supermassive black hole at the Galactic Center is about 2–3 parsecs (pc), and a Bahcall–Wolf cusp, if present, would extend outward to a distance of about 0.5 pc from the supermassive black hole. A region of this size is easily resolved from Earth. However, no cusp is observed; instead, the density of the oldest stars is flat or even declining toward the Galactic Center. This observation does not necessarily rule out the existence of a Bahcall–Wolf cusp in some still unobserved component. However, current observations imply a relaxation time at the Galactic Center of roughly 10 billion years, comparable with the age of the Milky Way. While it was once considered possible that not enough time had elapsed for a Bahcall–Wolf cusp to form, there is now observational evidence for an old, mass-segregated cusp at the Galactic Centre. These observations coincide with the predictions of dedicated models. Multi-mass cusps The Bahcall–Wolf solution applies to a nucleus consisting of stars of a single mass. If there is a range of masses, each component will have a different density profile. There are two limiting cases. If the more massive stars dominate the total density, their density will follow the Bahcall–Wolf form, whereas the less-massive objects will have ρ ∝ r^(−3/2). If the less massive stars dominate the total density, their density will follow the Bahcall–Wolf form, whereas the more-massive stars will follow ρ ∝ r^(−2).
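Before turning to realistic multi-component populations, the single-mass scaling above can be made concrete with a short numerical sketch (illustrative only; the normalization constants rho0 and r0 below are arbitrary assumptions, not values from the article). Integrating ρ(r) = ρ0 (r/r0)^(−7/4) over spherical shells shows that the stellar mass enclosed within radius r grows as r^(5/4) inside a Bahcall–Wolf cusp.

import numpy as np

def enclosed_mass(r, rho0=1.0, r0=1.0):
    """Stellar mass inside radius r for rho(r) = rho0 * (r / r0)**(-7/4).

    Integrating 4*pi*r'**2 * rho(r') from 0 to r gives
    M(<r) = (16*pi/5) * rho0 * r0**(7/4) * r**(5/4).
    """
    return (16.0 * np.pi / 5.0) * rho0 * r0 ** 1.75 * r ** 1.25

# Doubling the radius multiplies the enclosed mass by 2**(5/4) ~ 2.378.
for r in (0.5, 1.0, 2.0):
    print(f"r = {r:3.1f}: M(<r) = {enclosed_mass(r):.3f}")
print(f"M(<2)/M(<1) = {enclosed_mass(2.0) / enclosed_mass(1.0):.4f}")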
In an old stellar population, most of the mass is either in the form of main-sequence stars, with masses 1–2 Solar masses, or in black hole remnants, with masses ~ 10–20 Solar masses. It is likely that the main-sequence stars dominate the total density; so their density should follow the Bahcall–Wolf form whereas the black holes should have the steeper, ρ ~ r−2 profile. On the other hand, it has been suggested that the distribution of stellar masses at the Galactic Center is "top-heavy", with a much larger fraction of black holes. If this is the case, the observed stars would be expected to attain the shallower density profile, ρ ~ r−3/2. The number and distribution of black hole remnants at the Galactic Center is very poorly constrained. See also Stellar dynamics References Astrophysics Supermassive black holes
Bahcall–Wolf cusp
[ "Physics", "Astronomy" ]
931
[ "Black holes", "Unsolved problems in physics", "Supermassive black holes", "Astrophysics", "Astronomical sub-disciplines" ]
37,939,813
https://en.wikipedia.org/wiki/Pusey%E2%80%93Barrett%E2%80%93Rudolph%20theorem
The Pusey–Barrett–Rudolph (PBR) theorem is a no-go theorem in quantum foundations due to Matthew Pusey, Jonathan Barrett, and Terry Rudolph (for whom the theorem is named) in 2012. It has particular significance for how one may interpret the nature of the quantum state. With respect to certain realist hidden variable theories that attempt to explain the predictions of quantum mechanics, the theorem rules that pure quantum states must be "ontic" in the sense that they correspond directly to states of reality, rather than "epistemic" in the sense that they represent probabilistic or incomplete states of knowledge about reality. The PBR theorem may also be compared with other no-go theorems like Bell's theorem and the Bell–Kochen–Specker theorem, which, respectively, rule out the possibility of explaining the predictions of quantum mechanics with local hidden variable theories and noncontextual hidden variable theories. Similarly, the PBR theorem could be said to rule out preparation independent hidden variable theories, in which quantum states that are prepared independently have independent hidden variable descriptions. This result was cited by theoretical physicist Antony Valentini as "the most important general theorem relating to the foundations of quantum mechanics since Bell's theorem". Theorem This theorem, which first appeared as an arXiv preprint and was subsequently published in Nature Physics, concerns the interpretational status of pure quantum states. Under the classification of hidden variable models of Harrigan and Spekkens, the interpretation of the quantum wavefunction can be categorized as either ψ-ontic if "every complete physical state or ontic state in the theory is consistent with only one pure quantum state" and ψ-epistemic "if there exist ontic states that are consistent with more than one pure quantum state." The PBR theorem proves that either the quantum state is ψ-ontic, or else non-entangled quantum states violate the assumption of preparation independence, which would entail action at a distance. See also Quantum foundations Bell's theorem Kochen–Specker theorem References External links Quantum information science Theorems in quantum mechanics Hidden variable theory No-go theorems
Pusey–Barrett–Rudolph theorem
[ "Physics", "Mathematics" ]
440
[ "Theorems in quantum mechanics", "No-go theorems", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
37,940,544
https://en.wikipedia.org/wiki/Multipole%20magnet
Multipole magnets are magnets built from multiple individual magnets, typically used to control beams of charged particles. Each type of magnet serves a particular purpose. Dipole magnets are used to bend the trajectory of particles Quadrupole magnets are used to focus particle beams Sextupole magnets are used to correct for chromaticity introduced by quadrupole magnets Magnetic field equations The magnetic field of an ideal multipole magnet in an accelerator is typically modeled as having no (or a constant) component parallel to the nominal beam direction ( direction) and the transverse components can be written as complex numbers: where and are the coordinates in the plane transverse to the nominal beam direction. is a complex number specifying the orientation and strength of the magnetic field. and are the components of the magnetic field in the corresponding directions. Fields with a real are called 'normal' while fields with purely imaginary are called 'skewed'. Stored energy equation For an electromagnet with a cylindrical bore, producing a pure multipole field of order , the stored magnetic energy is: Here, is the permeability of free space, is the effective length of the magnet (the length of the magnet, including the fringing fields), is the number of turns in one of the coils (such that the entire device has turns), and is the current flowing in the coils. Formulating the energy in terms of can be useful, since the magnitude of the field and the bore radius do not need to be measured. Note that for a non-electromagnet, this equation still holds if the magnetic excitation can be expressed in Amperes. Derivation The equation for stored energy in an arbitrary magnetic field is: Here, is the permeability of free space, is the magnitude of the field, and is an infinitesimal element of volume. Now for an electromagnet with a cylindrical bore of radius , producing a pure multipole field of order , this integral becomes: Ampere's Law for multipole electromagnets gives the field within the bore as: Here, is the radial coordinate. It can be seen that along the field of a dipole is constant, the field of a quadrupole magnet is linearly increasing (i.e. has a constant gradient), and the field of a sextupole magnet is parabolically increasing (i.e. has a constant second derivative). Substituting this equation into the previous equation for gives: References Types of magnets Accelerator physics
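As an illustration of the complex multipole notation used above, the sketch below evaluates the transverse field components from the expression B_y + iB_x = C_n (x + iy)^(n−1), a common accelerator-physics convention for a 2n-pole (n = 1 dipole, n = 2 quadrupole, n = 3 sextupole). The coefficient values and this exact convention are assumptions made for illustration and may differ in normalization from the formulas that were lost from the text above.

def multipole_field(x, y, n, C):
    """Transverse field of a 2n-pole, assuming By + i*Bx = C * (x + i*y)**(n - 1).

    x, y : transverse coordinates (m); n : multipole order; C : complex coefficient
    (real C -> 'normal' magnet, imaginary C -> 'skew' magnet).  Returns (Bx, By) in tesla.
    """
    field = C * complex(x, y) ** (n - 1)
    return field.imag, field.real

# A normal quadrupole (n = 2) with gradient 10 T/m: By grows linearly with x on the midplane.
for x in (0.00, 0.01, 0.02):
    bx, by = multipole_field(x, 0.0, n=2, C=10.0)
    print(f"x = {x:.2f} m -> Bx = {bx:.3f} T, By = {by:.3f} T")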
Multipole magnet
[ "Physics" ]
514
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
37,941,367
https://en.wikipedia.org/wiki/Kibble%E2%80%93Zurek%20mechanism
The Kibble–Zurek mechanism (KZM) describes the non-equilibrium dynamics and the formation of topological defects in a system which is driven through a continuous phase transition at finite rate. It is named after Tom W. B. Kibble, who pioneered the study of domain structure formation through cosmological phase transitions in the early universe, and Wojciech H. Zurek, who related the number of defects it creates to the critical exponents of the transition and to its rate—to how quickly the critical point is traversed. Basic idea Based on the formalism of spontaneous symmetry breaking, Tom Kibble developed the idea for the primordial fluctuations of a two-component scalar field like the Higgs field. If a two-component scalar field switches from the isotropic and homogeneous high-temperature phase to the symmetry-broken stage during cooling and expansion of the very early universe (shortly after Big Bang), the order parameter necessarily cannot be the same in regions which are not connected by causality. Regions are not connected by causality if they are separated far enough (at the given age of the universe) that they cannot "communicate" even with the speed of light. This implies that the symmetry cannot be broken globally. The order parameter will take different values in causally disconnected regions, and the domains will be separated by domain walls after further evolution of the universe. Depending on the symmetry of the system and the symmetry of the order parameter, different types of topological defects like monopoles, vortices or textures can arise. It was debated for quite a while if magnetic monopoles might be residuals of defects in the symmetry-broken Higgs field. Up to now, defects like this have not been observed within the event horizon of the visible universe. This is one of the main reasons (beside the isotropy of the cosmic background radiation and the flatness of spacetime) why nowadays an inflationary expansion of the universe is postulated. During the exponentially fast expansion within the first 10−30 second after Big-Bang, all possible defects were diluted so strongly that they lie beyond the event horizon. Today, the two-component primordial scalar field is usually named inflaton. Relevance in condensed matter Wojciech Zurek pointed out, that the same ideas play a role for the phase transition of normal fluid helium to superfluid helium. The analogy between the Higgs field and superfluid helium is given by the two-component order parameter; superfluid helium is described via a macroscopic quantum mechanical wave function with global phase. In helium, two components of the order parameter are magnitude and phase (or real and imaginary part) of the complex wave function. Defects in superfluid helium are given by vortex lines, where the coherent macroscopic wave function disappears within the core. Those lines are high-symmetry residuals within the symmetry broken phase. It is characteristic for a continuous phase transition that the energy difference between ordered and disordered phase disappears at the transition point. This implies that fluctuations between both phases will become arbitrarily large. Not only the spatial correlation lengths diverge for those critical phenomena, but fluctuations between both phases also become arbitrarily slow in time, described by the divergence of the relaxation time. If a system is cooled at any non-zero rate (e.g. 
linearly) through a continuous phase transition, the time to reach the transition will eventually become shorter than the correlation time of the critical fluctuations. At this time, the fluctuations are too slow to follow the cooling rate; the system has fallen out of equilibrium and ceases to be adiabatic. A "fingerprint" of critical fluctuations is taken at this fall-out time and the longest-length scale of the domain size is frozen out. The further evolution of the system is now determined by this length scale. For very fast cooling rates, the system will fall out of equilibrium very early and far away from the transition. The domain size will be small. For very slow rates, the system will fall out of equilibrium in the vicinity of the transition when the length scale of critical fluctuations will be large, thus the domain size will be large, too. The inverse of this length scale can be used as an estimate of the density of topological defects, and it obeys a power law in the quench rate. This prediction is universal, and the power exponent is given in terms of the critical exponents of the transition. Derivation of the defect density Consider a system that undergoes a continuous phase transition at the critical value of a control parameter, and let ε denote the relative distance of the control parameter from this critical value. The theory of critical phenomena states that, as the control parameter is tuned closer and closer to its critical value, the correlation length ξ and the relaxation time τ of the system tend to diverge algebraically with the critical exponent ν as ξ(ε) = ξ0 |ε|^(−ν) and τ(ε) = τ0 |ε|^(−νz), respectively. Here z is the dynamic exponent which relates spatial with temporal critical fluctuations. The Kibble–Zurek mechanism describes the nonadiabatic dynamics resulting from driving a high-symmetry (i.e. disordered) phase to a broken-symmetry (i.e. ordered) phase at ε = 0. If the control parameter varies linearly in time, ε(t) = t/τ_Q, equating the time to the critical point to the relaxation time, τ(ε(t̂)) = t̂, we obtain t̂ = (τ0 τ_Q^(νz))^(1/(1+νz)). This time scale is often referred to as the freeze-out time. It is the intersection point of the blue and the red curve in the figure. The distance to the transition is, on the one hand, the time left before the transition is reached as a function of the cooling rate (red curve) and, for linear cooling rates, it is at the same time the difference of the control parameter from its critical value (blue curve). As the system approaches the critical point, it freezes as a result of the critical slowing down and falls out of equilibrium. Adiabaticity is lost around t ≈ −t̂. Adiabaticity is restored in the broken-symmetry phase after t ≈ +t̂. The correlation length at this time provides a length scale for coherent domains, ξ̂ = ξ0 (τ_Q/τ0)^(ν/(1+νz)). The size of the domains in the broken-symmetry phase is set by ξ̂. The density of defects immediately follows if d is the dimension of the system, using n ∼ ξ̂^(−d) ∝ τ_Q^(−dν/(1+νz)). Experimental tests The Kibble–Zurek mechanism generally applies to spontaneous symmetry breaking scenarios where a global symmetry is broken. For gauge symmetries defect formation can arise through the Kibble–Zurek mechanism and the flux trapping mechanism proposed by Hindmarsh and Rajantie. In 2005, it was shown that KZM also describes the dynamics through a quantum phase transition. In 2008 spontaneous vortices were observed in the formation of atomic Bose-Einstein condensates, consistent with the Kibble-Zurek mechanism. The mechanism also applies in the presence of inhomogeneities, ubiquitous in condensed matter experiments, to both classical and quantum phase transitions and even in optics.
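Before turning to specific experiments, the scaling just derived can be evaluated numerically. The sketch below is purely illustrative: the microscopic scales τ0 and ξ0, the exponents ν and z, and the quench times are assumed values chosen for the example, not quantities taken from the text.

def kzm_scaling(tau_q, nu=0.63, z=2.0, tau0=1.0, xi0=1.0, d=3):
    """Kibble-Zurek estimates for a linear quench epsilon(t) = t / tau_q.

    t_hat  = (tau0 * tau_q**(nu*z))**(1 / (1 + nu*z))   freeze-out time
    xi_hat = xi0 * (tau_q / tau0)**(nu / (1 + nu*z))    frozen correlation length
    n_def ~ xi_hat**(-d)                                point-defect density estimate
    """
    t_hat = (tau0 * tau_q ** (nu * z)) ** (1.0 / (1.0 + nu * z))
    xi_hat = xi0 * (tau_q / tau0) ** (nu / (1.0 + nu * z))
    return t_hat, xi_hat, xi_hat ** (-d)

# Slower quenches (larger tau_q) freeze out with larger domains and fewer defects.
for tau_q in (10.0, 100.0, 1000.0):
    t_hat, xi_hat, n_def = kzm_scaling(tau_q)
    print(f"tau_q = {tau_q:6.0f}: t_hat = {t_hat:7.2f}, xi_hat = {xi_hat:5.2f}, n ~ {n_def:.3e}")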
A variety of experiments have been reported that can be described by the Kibble–Zurek mechanism. A review by T. Kibble discusses the significance and limitations of various experiments (until 2007). Example in two dimensions A system where structure formation can be visualized directly is a colloidal monolayer, which forms a hexagonal crystal in two dimensions. The phase transition is described by the so-called Kosterlitz–Thouless–Halperin–Nelson–Young theory, where translational and orientational symmetry are broken by two Kosterlitz–Thouless transitions. The corresponding topological defects are dislocations and disclinations in two dimensions. The latter are nothing other than the monopoles of the high-symmetry phase within the six-fold director field of crystal axes. A special feature of Kosterlitz–Thouless transitions is the exponential divergence of correlation times and lengths (instead of algebraic ones). This leads to a transcendental equation for the freeze-out time, which can be solved numerically. The figure shows a comparison of the Kibble–Zurek scaling with algebraic and exponential divergences. The data illustrate that the Kibble–Zurek mechanism also works for transitions of the Kosterlitz–Thouless universality class. Footnote References Physical cosmology Phase transitions Condensed matter physics Superfluidity
Kibble–Zurek mechanism
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
1,653
[ "Physical phenomena", "Phase transitions", "Astronomical sub-disciplines", "Theoretical physics", "Phases of matter", "Critical phenomena", "Astrophysics", "Superfluidity", "Materials science", "Condensed matter physics", "Physical cosmology", "Exotic matter", "Statistical mechanics", "Mat...
37,942,128
https://en.wikipedia.org/wiki/Alipogene%20tiparvovec
Alipogene tiparvovec, sold under the brand name Glybera, is a gene therapy treatment designed to reverse lipoprotein lipase deficiency (LPLD), a rare recessive disorder, due to mutations in LPL, which can cause severe pancreatitis. It was recommended for approval by the European Medicines Agency in July 2012, and approved by the European Commission in November of the same year. It was the first marketing authorisation for a gene therapy treatment in either the European Union or the United States. The medication is administered via a series of injections into the leg muscles. Glybera gained infamy as the "million-dollar drug" and proved commercially unsuccessful for a number of reasons. Its cost to patients and payers, together with the rarity of LPLD, high maintenance costs to its manufacturer , and failure to achieve approval in the US, led to withdrawing the drug after two years on the EU market. As of 2018, only 31 people worldwide have ever been administered Glybera, and has no plans to sell the drug in the US or Canada. History Glybera was developed over a period of decades by researchers at the University of British Columbia (UBC). In 1986, Michael R. Hayden and John Kastelein began research at UBC, confirming the hypothesis that LPLD was caused by a gene mutation. Years later, in 2002, Hayden and Colin Ross successfully performed gene therapy on test mice to treat LPLD; their findings were featured on the September 2004 cover of Human Gene Therapy. Ross and Hayden next succeeded in treating cats in the same manner, with the help of Boyce Jones. Trials and approval Meanwhile, Kastelein—who had, by 1998, become an international expert in lipid disorders—co-founded Amsterdam Molecular Therapeutics (AMT), which acquired rights to Hayden's research with the aim of releasing the drug in Europe. Since LPLD is a rare condition (prevalence worldwide 1–2 per million), related clinical tests and trials have involved unusually small cohort sizes. The first main trial (CT-AMT-011-01) involved just 14 subjects, and by 2015, a total of 27 individuals had been involved in phase III testing. The second phase of testing focused on subjects living along the Saguenay River in Quebec, where LPLD affects people at the highest rate in the world (up to 200 per million) due to the founder effect. Price After over two years of testing, Glybera was approved in the European Union in 2012. However, after spending millions of euros on Glybera's approval, AMT went bankrupt and its assets were acquired by . Alipogene tiparvovec was expected to cost around per treatment in 2012,—revised to $1 million in 2015,—making it the most expensive medicine in the world at the time. However, replacement therapy, a similar treatment, can cost over $300,000 per year, for life. In 2015, dropped its plans for approval in the US and exclusively licensed rights to sell the drug in Europe to Chiesi Farmaceutici for . As of 2016, only one person had received the drug outside of a clinical trial. In April 2017, Chiesi quit selling Glybera and announced that it would not pursue the renewal of the marketing authorisation in the European Union when it was scheduled to expire that October, due to lack of demand. Afterwards, the three remaining doses in Chiesi's inventory were given away to two patients in Germany and one patient in Italy for each. Mechanism The adeno-associated virus serotype 1 (AAV1) viral vector delivers an intact copy of the human lipoprotein lipase (LPL) gene to muscle cells. 
The LPL gene is not inserted into the cell's chromosomes but remains as free floating DNA in the nucleus. The injection is followed by immunosuppressive therapy to prevent immune reactions to the virus. Data from the clinical trials indicates that fat concentrations in blood were reduced between 3 and 12 weeks after injection, in nearly all patients. The advantages of AAV include apparent lack of pathogenicity, delivery to non-dividing cells, and much smaller risk of insertion compared to retroviruses, which show random insertion with accompanying risk of cancer. AAV also presents very low immunogenicity, mainly restricted to generating neutralising antibodies, and little well defined cytotoxic response. The cloning capacity of the vector is limited to replacement of the virus's 4.8 kilobase genome. See also List of gene therapies Health care costs References Applied genetics Drugs that are a gene therapy Gene delivery Approved gene therapies Withdrawn drugs
Alipogene tiparvovec
[ "Chemistry", "Biology" ]
976
[ "Genetics techniques", "Drug safety", "Molecular biology techniques", "Withdrawn drugs", "Gene delivery" ]
30,040,436
https://en.wikipedia.org/wiki/Hagemann%27s%20ester
Hagemann's ester, ethyl 2-methyl-4-oxo-2-cyclohexenecarboxylate, is an organic compound that was first prepared and described in 1893 by German chemist Carl Hagemann. The compound is used in organic chemistry as a reagent in the synthesis of many natural products including sterols, trisporic acids, and terpenoids. Preparation Hagemann's approach Methylene iodide and two equivalents of ethyl acetoacetate react in the presence of sodium methoxide to form the diethyl ester of 2,4-diacetyl pentane. This precursor is treated with base to induce cyclization. Finally, heat is applied to generate Hagemann's ester. Knoevenagel's approach Soon after Hagemann, Emil Knoevenagel described a modified procedure to produce the same intermediate diethyl ester of 2,4-diacetyl pentane using formaldehyde and two equivalents of ethyl acetoacetate which undergo condensation in the presence of a catalytic amount of piperidine. Newman and Lloyd approach 2-Methoxy-1,3-butadiene and ethyl-2-butynoate undergo a Diels-Alder reaction to generate a precursor which is hydrolyzed to obtain Hagemann's ester. By varying the substituents on the butynoate starting material, this approach allows for different C2 alkylated Hagemann's ester derivatives to be synthesized. Mannich and Forneau approach Original Methyl vinyl ketone, ethyl acetoacetate, and diethyl-methyl-(3-oxo-butyl)-ammonium iodide react to form a cyclic aldol product. Sodium methoxide is added to generate Hagemann's ester. Variations Methyl vinyl ketone and ethyl acetoacetate undergo aldol cyclization in the presence of catalytic pyrrolidinum acetate or Triton B or sodium ethoxide to produce Hagemann's ester. This variant is a type of Robinson annulation. Uses Hagemann's ester has been used as a key building block in many syntheses. For example, a key intermediate for the fungal hormone trisporic acid was made by its alkylation and it has been used to make sterols. Other authors have used it in inverse-electron-demand Diels–Alder reactions leading to sesquiterpene dimers or in reactions forming simple derivatives. References Ethyl esters Reagents for organic chemistry Cyclohexenes
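As a quick structural cross-check of the compound described above, the following sketch builds ethyl 2-methyl-4-oxo-2-cyclohexenecarboxylate from a SMILES string and reports its molecular formula and weight. The SMILES string and the use of the RDKit library are assumptions made for illustration; they are not taken from the article.

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Assumed SMILES for Hagemann's ester (ethyl 2-methyl-4-oxocyclohex-2-ene-1-carboxylate).
smiles = "CCOC(=O)C1CCC(=O)C=C1C"
mol = Chem.MolFromSmiles(smiles)

print("molecular formula:", rdMolDescriptors.CalcMolFormula(mol))  # expected C10H14O3
print("molecular weight: %.2f g/mol" % Descriptors.MolWt(mol))     # roughly 182.2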
Hagemann's ester
[ "Chemistry" ]
562
[ "Reagents for organic chemistry" ]
30,047,222
https://en.wikipedia.org/wiki/Sabatinca%20perveta
Sabatinca perveta is an extinct species of moth belonging to the family Micropterigidae. It is known only from the single type specimen, which has been found in Burmese amber in present-day Myanmar. It dates to the earliest Cenomanian, around 99 mya. References † Fossil Lepidoptera Eocene insects Oligocene insects Prehistoric insects of Asia Burmese amber Fossils of Myanmar Taxa named by Theodore Dru Alison Cockerell Species known from a single specimen
Sabatinca perveta
[ "Biology" ]
101
[ "Individual organisms", "Species known from a single specimen" ]
30,047,540
https://en.wikipedia.org/wiki/Plasma%20antenna
A plasma antenna is a type of radio antenna currently in development in which plasma is used instead of the metal elements of a traditional antenna. A plasma antenna can be used for both transmission and reception. Although plasma antennas have only become practical in recent years, the idea is not new; a patent for an antenna using the concept was granted to J. Hettinger in 1919. Early practical examples of the technology used discharge tubes to contain the plasma and are referred to as ionized gas plasma antennas. Ionized gas plasma antennas can be turned on and off and are good for stealth and resistance to electronic warfare and cyber attacks. Ionized gas plasma antennas can be nested such that the higher frequency plasma antennas are placed inside lower frequency plasma antennas. Higher frequency ionized gas plasma antenna arrays can transmit and receive through lower frequency ionized gas plasma antenna arrays. This means that the ionized gas plasma antennas can be co-located and ionized gas plasma antenna arrays can be stacked. Ionized gas plasma antennas can eliminate or reduce co-site interference. Smart ionized gas plasma antennas use plasma physics to shape and steer the antenna beams without the need of phased arrays. Satellite signals can be steered or focused in the reflective or refractive modes using banks of plasma tubes making unique ionized gas satellite plasma antennas. The thermal noise of ionized gas plasma antennas is less than in the corresponding metal antennas at the higher frequencies. Solid state plasma antennas (also known as plasma silicon antennas) with steerable directional functionality that can be manufactured using standard silicon chip fabrication techniques are now also in development. Plasma silicon antennas are candidates for use in WiGig (the planned enhancement to Wi-Fi), and have other potential applications, for example in reducing the cost of vehicle-mounted radar collision avoidance systems. Operation In an ionized gas plasma antenna, a gas is ionized to create a plasma. Unlike gases, plasmas have very high electrical conductivity so it is possible for radio frequency signals to travel through them so that they act as a driven element (such as a dipole antenna) to radiate radio waves, or to receive them. Alternatively the plasma can be used as a reflector or a lens to guide and focus radio waves from another source. Solid-state antennas differ in that the plasma is created from electrons generated by activating thousands of diodes on a silicon chip. Advantages Plasma antennas possess a number of advantages over metal antennas, including: As soon as the plasma generator is switched off, the plasma returns to a non conductive gas and therefore becomes effectively invisible to radar. They can be dynamically tuned and reconfigured for frequency, direction, bandwidth, gain and beamwidth, so replacing the need for multiple antennas. They are resistant to electronic warfare. At satellite frequencies, they exhibit much less thermal noise and are capable of faster data rates. References External links Antenna having reconfigurable length - United States Patent 6710746 Solid state plasma antenna - United States Patent 7109124 Article with image Static Satellite Plasma Antenna Plasma Antennas: Survey of Techniques and the Current State of the Art Low insertion loss beamforming antennas Antennas (radio) Plasma technology and applications Radio electronics
Plasma antenna
[ "Physics", "Engineering" ]
645
[ "Plasma technology and applications", "Radio electronics", "Plasma physics" ]
26,712,429
https://en.wikipedia.org/wiki/Local%20Tate%20duality
In Galois cohomology, local Tate duality (or simply local duality) is a duality for Galois modules for the absolute Galois group of a non-archimedean local field. It is named after John Tate who first proved it. It shows that the dual of such a Galois module is the Tate twist of the usual linear dual. This new dual is called the (local) Tate dual. Local duality combined with Tate's local Euler characteristic formula provides a versatile set of tools for computing the Galois cohomology of local fields. Statement Let K be a non-archimedean local field, let Ks denote a separable closure of K, and let GK = Gal(Ks/K) be the absolute Galois group of K. Case of finite modules Denote by μ the Galois module of all roots of unity in Ks. Given a finite GK-module A of order prime to the characteristic of K, the Tate dual of A is defined as A′ = Hom(A, μ) (i.e. it is the Tate twist of the usual dual A∗). Let Hi(K, A) denote the group cohomology of GK with coefficients in A. The theorem states that the pairing Hi(K, A) × H2−i(K, A′) → H2(K, μ) = Q/Z given by the cup product sets up a duality between Hi(K, A) and H2−i(K, A′) for i = 0, 1, 2. Since GK has cohomological dimension equal to two, the higher cohomology groups vanish. Case of p-adic representations Let p be a prime number. Let Qp(1) denote the p-adic cyclotomic character of GK (i.e. the Tate module of μ). A p-adic representation of GK is a continuous representation ρ : GK → GL(V), where V is a finite-dimensional vector space over the p-adic numbers Qp and GL(V) denotes the group of invertible linear maps from V to itself. The Tate dual of V is defined as V′ = V∗(1) = V∗ ⊗ Qp(1) (i.e. it is the Tate twist of the usual dual V∗ = Hom(V, Qp)). In this case, Hi(K, V) denotes the continuous group cohomology of GK with coefficients in V. Local Tate duality applied to V says that the cup product induces a pairing Hi(K, V) × H2−i(K, V′) → H2(K, Qp(1)) ≅ Qp which is a duality between Hi(K, V) and H2−i(K, V′) for i = 0, 1, 2. Again, the higher cohomology groups vanish. See also Tate duality, a global version (i.e. for global fields) Notes References Serre, Jean-Pierre, Galois Cohomology, translation of Cohomologie Galoisienne, Springer-Verlag Lecture Notes 5 (1964). Theorems in algebraic number theory Galois theory Duality theories
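A standard special case may help fix ideas (given here as a well-known textbook illustration, not something stated in the article): take A = Z/n with trivial Galois action and n prime to the characteristic of K, so that the Tate dual is A′ = μn. Kummer theory and local class field theory then identify the groups entering the duality:

\[
H^1(K,\mathbb{Z}/n) = \operatorname{Hom}(G_K,\mathbb{Z}/n), \qquad
H^1(K,\mu_n) \cong K^{\times}/(K^{\times})^{n},
\]
\[
H^1(K,\mathbb{Z}/n) \times H^1(K,\mu_n) \xrightarrow{\ \cup\ } H^2(K,\mu_n) \cong \tfrac{1}{n}\mathbb{Z}/\mathbb{Z},
\]

and this cup-product pairing is a perfect pairing of finite abelian groups; in degrees (0, 2) it similarly pairs H0(K, Z/n) = Z/n with H2(K, μn) ≅ Z/n.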
Local Tate duality
[ "Mathematics" ]
578
[ "Mathematical structures", "Theorems in algebraic number theory", "Theorems in number theory", "Category theory", "Duality theories", "Geometry" ]
26,714,983
https://en.wikipedia.org/wiki/Unpolarized%20light
Unpolarized light is light with a random, time-varying polarization. Natural light, like most other common sources of visible light, is produced independently by a large number of atoms or molecules whose emissions are uncorrelated. Unpolarized light can be produced from the incoherent combination of vertical and horizontal linearly polarized light, or right- and left-handed circularly polarized light. Conversely, the two constituent linearly polarized states of unpolarized light cannot form an interference pattern, even if rotated into alignment (Fresnel–Arago 3rd law). A so-called depolarizer acts on a polarized beam to create one in which the polarization varies so rapidly across the beam that it may be ignored in the intended applications. Conversely, a polarizer acts on an unpolarized beam or arbitrarily polarized beam to create one which is polarized. Unpolarized light can be described as a mixture of two independent oppositely polarized streams, each with half the intensity. Light is said to be partially polarized when there is more power in one of these streams than the other. At any particular wavelength, partially polarized light can be statistically described as the superposition of a completely unpolarized component and a completely polarized one. One may then describe the light in terms of the degree of polarization and the parameters of the polarized component. That polarized component can be described in terms of a Jones vector or polarization ellipse. However, in order to also describe the degree of polarization, one normally employs Stokes parameters to specify a state of partial polarization. Motivation The transmission of plane waves through a homogeneous medium are fully described in terms of Jones vectors and 2×2 Jones matrices. However, in practice there are cases in which all of the light cannot be viewed in such a simple manner due to spatial inhomogeneities or the presence of mutually incoherent waves. So-called depolarization, for instance, cannot be described using Jones matrices. For these cases it is usual instead to use a 4×4 matrix that acts upon the Stokes 4-vector. Such matrices were first used by Paul Soleillet in 1929, although they have come to be known as Mueller matrices. While every Jones matrix has a Mueller matrix, the reverse is not true. Mueller matrices are then used to describe the observed polarization effects of the scattering of waves from complex surfaces or ensembles of particles, as shall now be presented. Coherency matrix The Jones vector perfectly describes the state of polarization and phase of a single monochromatic wave, representing a pure state of polarization as described above. However any mixture of waves of different polarizations (or even of different frequencies) do not correspond to a Jones vector. In so-called partially polarized radiation the fields are stochastic, and the variations and correlations between components of the electric field can only be described statistically. One such representation is the coherency matrix: where angular brackets denote averaging over many wave cycles. Several variants of the coherency matrix have been proposed: the Wiener coherency matrix and the spectral coherency matrix of Richard Barakat measure the coherence of a spectral decomposition of the signal, while the Wolf coherency matrix averages over all time/frequencies. The coherency matrix contains all second order statistical information about the polarization. 
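For reference, the explicit form of the coherency matrix referred to above (the equation itself did not survive in this text) is conventionally written in terms of the transverse field components Ex and Ey as follows; this is the standard textbook expression rather than a quotation from the article:

\[
\Phi = \begin{pmatrix}
\langle E_x E_x^{*} \rangle & \langle E_x E_y^{*} \rangle \\
\langle E_y E_x^{*} \rangle & \langle E_y E_y^{*} \rangle
\end{pmatrix},
\]

with the angular brackets denoting the averaging over many wave cycles described above.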
This matrix can be decomposed into the sum of two idempotent matrices, corresponding to the eigenvectors of the coherency matrix, each representing a polarization state that is orthogonal to the other. An alternative decomposition is into completely polarized (zero determinant) and unpolarized (scaled identity matrix) components. In either case, the operation of summing the components corresponds to the incoherent superposition of waves from the two components. The latter case gives rise to the concept of the "degree of polarization"; i.e., the fraction of the total intensity contributed by the completely polarized component. Stokes parameters The coherency matrix is not easy to visualize, and it is therefore common to describe incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. An alternative and mathematically convenient description is given by the Stokes parameters, introduced by George Gabriel Stokes in 1852. The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations and figure below. Here Ip, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the last three Stokes parameters. Note the factors of two before ψ and χ corresponding respectively to the facts that any polarization ellipse is indistinguishable from one rotated by 180°, or one with the semi-axis lengths swapped accompanied by a 90° rotation. The Stokes parameters are sometimes denoted I, Q, U and V. The four Stokes parameters are enough to describe 2D polarization of a paraxial wave, but not the 3D polarization of a general non-paraxial wave or an evanescent field. Poincaré sphere Neglecting the first Stokes parameter S0 (or I), the three other Stokes parameters can be plotted directly in three-dimensional Cartesian coordinates. For a given power in the polarized component given by the set of all polarization states are then mapped to points on the surface of the so-called Poincaré sphere (but of radius P), as shown in the accompanying diagram. In quantum mechanics and computing, a related concept is the Bloch sphere. Often the total beam power is not of interest, in which case a normalized Stokes vector is used by dividing the Stokes vector by the total intensity S0: The normalized Stokes vector then has unity power () and the three significant Stokes parameters plotted in three dimensions will lie on the unity-radius Poincaré sphere for pure polarization states (where ). Partially polarized states will lie inside the Poincaré sphere at a distance of from the origin. When the non-polarized component is not of interest, the Stokes vector can be further normalized to obtain When plotted, that point will lie on the surface of the unity-radius Poincaré sphere and indicate the state of polarization of the polarized component. Any two antipodal points on the Poincaré sphere refer to orthogonal polarization states. The overlap between any two polarization states is dependent solely on the distance between their locations along the sphere. This property, which can only be true when pure polarization states are mapped onto a sphere, is the motivation for the invention of the Poincaré sphere and the use of Stokes parameters, which are thus plotted on (or beneath) it. See also Coherence (physics)#Polarization and coherence Photon polarization References Polarization (waves)
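The relations above between the coherency matrix, the Stokes parameters and the degree of polarization can be made concrete in a few lines of code. This is an illustrative sketch only: it assumes the coherency-matrix convention given earlier, and the sign convention for S3 (handedness) varies between references.

import numpy as np

def stokes_from_coherency(phi):
    """Stokes parameters and degree of polarization from a 2x2 coherency matrix.

    phi[i, j] = <E_i E_j*> with indices 0 = x, 1 = y.  The sign chosen for S3
    depends on the handedness convention.
    """
    s0 = np.real(phi[0, 0] + phi[1, 1])
    s1 = np.real(phi[0, 0] - phi[1, 1])
    s2 = 2.0 * np.real(phi[0, 1])
    s3 = -2.0 * np.imag(phi[0, 1])
    degree = np.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0
    return (s0, s1, s2, s3), degree

# Equal-power incoherent mixture of x-polarized and completely unpolarized light.
phi = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex) + 0.5 * np.eye(2)
stokes, p = stokes_from_coherency(phi)
print("Stokes vector (S0, S1, S2, S3):", stokes)  # (2.0, 1.0, 0.0, 0.0)
print("degree of polarization:", p)               # 0.5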
Unpolarized light
[ "Physics" ]
1,442
[ "Polarization (waves)", "Astrophysics" ]
26,715,106
https://en.wikipedia.org/wiki/1%2C2-Dimethylcyclopropane
1,2-Dimethylcyclopropane is a cycloalkane consisting of a cyclopropane ring substituted with two methyl groups attached to adjacent carbon atoms. It has three stereoisomers, one cis-isomer and a pair of trans-enantiomers, which differ depending on the orientation of the two methyl groups. As with other cyclopropanes, ring strain results in a relatively unstable compound. 1,2-Dimethylcyclopropane is 1 of 10 structural isomers (cycloalkanes and aliphatic alkenes) which share the general formula C5H10, the others being cyclopentane, methylcyclobutane, 1,1-dimethylcyclopropane, ethylcyclopropane, 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, and 2-methyl-2-butene. See also Alkyl cycloalkane References Cyclopropanes Hydrocarbons
1,2-Dimethylcyclopropane
[ "Chemistry" ]
233
[ "Organic compounds", "Hydrocarbons" ]
26,715,490
https://en.wikipedia.org/wiki/Refmex%20GL%20Glass
Refmex GL Glass is a Mexican manufacturer of high-quality industrial glass products. Its main markets are borosilicate glass, f-silicate glass, and industrial quartz glass. The company was founded in 1977 at Zapopan, Mexico as Refractarios Mexicanos SA de CV. The company later changed its name to Refmex GL SA de CV. It was founded in the aftermath of the economic crisis Mexico suffered in 1976, with the aim of creating employment for local people by re-selling glass; in 1985 the company began producing its own window glass, but that business was not very successful, and it subsequently moved into the borosilicate glass business in Mexico. The company employs 125 employees worldwide, according to its 2011 reports. The company currently has four plants, two of them in Tesistan, Mexico, and two located in Zapopan, a nearby territory. The company announced plans for a fifth plant in 2013, but the plant has not yet been approved. External links Official company website Companies based in Guadalajara, Jalisco Manufacturing companies established in 1977 Glassmaking companies Manufacturing companies of Mexico Mexican brands Mexican companies established in 1977
Refmex GL Glass
[ "Materials_science", "Engineering" ]
236
[ "Glass engineering and science", "Glassmaking companies", "Engineering companies" ]
55,123,599
https://en.wikipedia.org/wiki/Solid%20phase%20sequencing
The principle of solid phase DNA sequencing was described in 1989 based on binding of biotinylated DNA to streptavidin-coated magnetic beads and elution of single DNA strands selectively using alkali. The method allowed robotic applications suitable for clinical sequencing, but the magnetic handling has also found frequent use in many molecular applications, including sample handling for DNA diagnostics. The use of solid phase methods for DNA handling is now frequently used as an integrated part of many of the next generation DNA sequencing methods, as well as numerous molecular diagnostics applications. References Biotechnology
Solid phase sequencing
[ "Chemistry", "Biology" ]
115
[ "nan", "Molecular biology techniques", "DNA sequencing", "Biotechnology" ]
55,127,696
https://en.wikipedia.org/wiki/Dsup
Dsup (contraction of damage suppressor) is a DNA-associating protein, unique to the tardigrade, that suppresses the occurrence of DNA breaks by radiation. When human HEK293 cells were engineered with Dsup proteins, they showed approximately 40% more tolerance against X-ray radiation. Tardigrades can withstand 1,000 times more radiation than other animals, median lethal doses of 5,000 Gy (of gamma rays) and 6,200 Gy (of heavy ions) in hydrated animals (5 to 10 Gy could be fatal to a human). The only explanation found in earlier experiments for this ability was that their lowered water state provides fewer reactants for ionizing radiation. However, subsequent research found that tardigrades, when hydrated, still remain highly resistant to shortwave UV radiation in comparison to other animals, and that one factor for this is their ability to efficiently repair damage to their DNA resulting from that exposure. A landmark study on Dsup protein showed that it can bind nucleosomes in the cell and protect DNA. The Dsup protein has been tested on other animal cells. Using a culture of human cells that express the Dsup protein, it was found that after X-ray exposure the cells had fewer DNA breaks than control cells. After hydrogen peroxide treatment Dsup+ cells mainly activate the detoxification systems and the antioxidant enzymes that limit oxidative stress and eliminate oxidative free radicals, while DNA repair mechanisms are only marginally activated. Thus, upon induction of oxidative stress Dsup protein appears to mainly protect DNA directly. Dsup protein has been found to be neurotoxic and promote neurodegeneration when expressed in cultured neurons by increasing DNA damage through the formation of double strand breaks. Function and structure The Dsup from Ramazzottius varieornatus is mainly used for study, since it is one of the most stress-tolerant species. Orthologous versions of Dsup are also found in Hypsibius exemplaris (OQV24709, ). Dsup does not exhibit a lot of secondary structure, save for the helix in the middle. The C-terminal half contains an NLS, and this Ala/Gly-rich half is sufficient for DNA binding. It is probably mostly disordered, but it has a lot of positive charge. Dsup is known to bind to free DNA, but it binds more tightly to nucleosomes, the typical packed form of DNA in eukaryotic cells. Its nucleosome binding domain is vaguely similar to the one in HMGN proteins. Dsup localized to nuclear DNA reduces single-strand breaks and double-strand breaks when subjected to ionizing radiation. Molecular dynamic simulation of Dsup in complex with DNA shows that it is an intrinsically disordered protein. Its flexibility and electrostatic charge helps it bind to DNA and form aggregates. References Animal proteins DNA-binding proteins Tardigrades
Dsup
[ "Chemistry", "Biology" ]
618
[ "Biochemistry stubs", "Tardigrades", "Protein stubs", "Space-flown life" ]
55,130,585
https://en.wikipedia.org/wiki/Colterol
Colterol is a short-acting β2-adrenoreceptor agonist. Bitolterol, a prodrug for colterol, is used in the management of bronchospasm in asthma and chronic obstructive pulmonary disease (COPD). References Beta2-adrenergic agonists
Colterol
[ "Chemistry" ]
75
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
32,314,428
https://en.wikipedia.org/wiki/Vismodegib
Vismodegib, sold under the brand name Erivedge, is a medication used for the treatment of basal-cell carcinoma (BCC). With its approval on January 30, 2012, vismodegib became the first agent targeting the Hedgehog signaling pathway to gain U.S. Food and Drug Administration (FDA) approval. The drug is also undergoing clinical trials for metastatic colorectal cancer, small-cell lung cancer, advanced stomach cancer, pancreatic cancer, medulloblastoma and chondrosarcoma. The drug was developed by the biotechnology/pharmaceutical company Genentech. Indication Vismodegib is indicated for people with basal-cell carcinoma (BCC) that has metastasized to other parts of the body, relapsed after surgery, or cannot be treated with surgery or radiation. Mechanism of action The substance acts as a cyclopamine-competitive antagonist of the smoothened receptor (SMO), which is part of the Hedgehog signaling pathway. SMO inhibition causes the transcription factors GLI1 and GLI2 to remain inactive, which prevents the expression of tumor-mediating genes within the Hedgehog pathway. This pathway is pathogenetically relevant in more than 90% of basal-cell carcinomas. Side effects In clinical trials, common side effects included gastrointestinal disorders (nausea, vomiting, diarrhoea, constipation), muscle spasms, fatigue, hair loss, and dysgeusia (distortion of the sense of taste). Development Vismodegib has undergone several promising phase I and phase II clinical trials for its use in treating medulloblastoma. References Further reading External links Benzanilides Chloroarenes 2-Pyridyl compounds Benzosulfones Teratogens Antineoplastic drugs Drugs developed by Hoffmann-La Roche Drugs developed by Genentech
Vismodegib
[ "Chemistry" ]
398
[ "Teratogens" ]
32,318,003
https://en.wikipedia.org/wiki/Bipolar%20electrochemistry
Bipolar electrochemistry is a phenomenon in electrochemistry based on the polarization of conducting objects in electric fields. This polarization generates a potential difference between the two extremities of the substrate that is equal to the electric field value multiplied by the size of the object. If this potential difference is large enough, redox reactions can be driven at the extremities of the object: oxidation occurs at one extremity, coupled simultaneously to reduction at the other extremity. In a simple experimental setup consisting of a platinum wire in a weighing boat containing a pH indicator solution, a 30 V voltage across two electrodes will cause water reduction and a pH increase (OH− formation) at one end of the wire (the cathodic pole), and water oxidation and a pH decrease at the anodic pole. The poles of the bipolar electrode also align themselves with the applied electric field. Fundamentals When an electrically conductive electrode is placed, without a direct connection, in the same electrolyte between the anode and cathode of an electrochemical cell to which sufficient voltage is applied, the electrode will experience simultaneous cathodic and anodic reactions at its two extremes. The conductive electrode thus becomes a bipolar electrode (BPE): an electrically conductive material in contact with an ionically conductive electrolyte, with no direct electronic connection to the power supply, that promotes electrochemical (reduction and oxidation) reactions at both of its ends (poles); in other words, it is a cathode and an anode at the same time. This occurs due to: Case (A) The potential difference (η) between the electrically conductive electrode (Vm) and the electrolyte (Vs) causes a potential gradient which is distributed laterally across the BPE-electrolyte interface, with one extreme having the highest potential (anode +η) and the other extreme having the lowest potential (cathode -η). In contrast to the gradient/drop in the electrolyte potential (Vs), the electrode potential (Vm) does not change between the BPE poles; this is due to the high conductivity of the electrode, which is above 10^6 S/m for most steel alloys, compared to solution conductivities in the range of 5.5 μS/m for deionized water and 5 S/m for seawater. Case (B) Current flows in the BPE because it provides a less resistive current path than the electrolyte. As illustrated in the Figure, as a consequence of the current entering side (D/Blue) from the anode, side D will polarise cathodically (its potential will become more negative). On the other hand, side (B/Red), where the current is leaving, will polarise anodically (its potential will become more positive) and will corrode. This is because polarisation occurs opposite to the current direction. This explanation is accepted in almost all classic and recent cathodic protection books, and in NACE publications and standards, as the explanation of corrosion and coating disbondment caused by DC interference between pipelines and different structures (e.g. cathodically protected or unprotected structures, railways and HVDC). This is because it is more suitable for large-scale structures in highly resistive, heterogeneous environments, where the solution potential (Vs) plays a less pivotal role and the reactions are concentrated primarily at the poles (where current enters and leaves). Case (C) The potential difference at each pole of the BPE (which may or may not be enough for electrochemical reactions).
Note that the solution potential is not directly controlled by a power source (e.g. a potentiostat) because it also depends on the solution composition. Therefore, for electrons to transfer and reduce species in the solution, the potential of the working electrode needs to be set to a value more negative than that of an electroactive molecule in the solution, and then – depending on the kinetics – electrons may transfer. Oxidation reactions occur in a similar fashion. Also, according to Ohm's law, the electric field and the solution potential (Vs) increase with increasing solution resistivity and with the current applied in the outer circuit. Utilisations The phenomenon of bipolar electrochemistry has been known since the 1970s and is used in industry in some electrolytic reactors. Interest in this concept within the scientific community has increased considerably since Martin Fleischmann and co-workers demonstrated that water splitting is possible using micrometer-sized bipolar electrodes. Recently, several applications have been developed in domains such as the synthesis of dissymmetrical micro- and nano-structures, analytical chemistry, materials science, microelectronics and microobject propulsion. References Electrochemistry
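The basic estimate described in the lead — that the potential difference induced across the object equals the electric field multiplied by the object's size — can be sketched numerically. The Python snippet below assumes a uniform field between the feeder electrodes and uses illustrative cell dimensions (not taken from a specific experiment), comparing the induced voltage with the thermodynamic threshold for water splitting (about 1.23 V, more in practice because of overpotentials).

```python
def bipolar_driving_voltage(applied_voltage, cell_length, electrode_length):
    """Estimate the potential difference induced across a bipolar electrode,
    assuming a uniform electric field between the feeder electrodes."""
    field = applied_voltage / cell_length      # V/m
    return field * electrode_length            # potential difference across the object

# Illustrative numbers: 30 V applied across a 10 cm cell,
# with a 3 cm conducting wire immersed in the electrolyte.
dV = bipolar_driving_voltage(applied_voltage=30.0, cell_length=0.10, electrode_length=0.03)
print(f"induced potential difference across the wire: {dV:.1f} V")

# Water splitting requires at least ~1.23 V thermodynamically (more with overpotentials),
# so a wire this long in this field could act as a bipolar electrode.
print("exceeds the 1.23 V water-splitting threshold:", dV > 1.23)
```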
Bipolar electrochemistry
[ "Chemistry" ]
983
[ "Electrochemistry" ]
33,915,069
https://en.wikipedia.org/wiki/Fractionalization
In quantum mechanics, fractionalization is the phenomenon whereby the quasiparticles of a system cannot be constructed as combinations of its elementary constituents. One of the earliest and most prominent examples is the fractional quantum Hall effect, where the constituent particles are electrons but the quasiparticles carry fractions of the electron charge. Fractionalization can be understood as deconfinement of quasiparticles that together are viewed as comprising the elementary constituents. In the case of spin–charge separation, for example, the electron can be viewed as a bound state of a 'spinon' and a 'holon (or chargon)', which under certain conditions can become free to move separately. History Quantized Hall conductance was discovered in 1980, related to the electron charge. Laughlin proposed a fluid of fractional charges in 1983, to explain the fractional quantum Hall effect (FQHE) seen in 1982, for which he shared the 1998 Physics Nobel Prize. In 1997, experiments directly observed an electric current of one-third charge. The one-fifth charge was seen in 1999 and various odd fractions have since been detected. Disordered magnetic materials were later shown to form interesting spin phases. Spin fractionalization was seen in spin ices in 2009 and spin liquids in 2012. Fractional charges continue to be an active topic in condensed matter physics. Studies of these quantum phases impact understanding of superconductivity, and insulators with surface transport for topological quantum computers. Physics Many-body effects in complicated condensed materials lead to emergent properties that can be described as quasiparticles existing in the substance. Electron behavior in solids can be considered as quasi-particle magnons, excitons, holes, and charges with different effective mass. Spinons, chargons, and anyons cannot be considered elementary particle combinations. Different quantum statistics have been seen; Anyons wavefunctions gain a continuous phase in exchange: It has been realized many insulators have a conducting surface of 2D quantum electron gas states. Systems Solitons in 1D, such as polyacetylene, lead to half charges. Spin-charge separation into spinons and holons was detected in electrons in 1D SrCuO2. Quantum wires with fractional phase behavior have been studied. Spin liquids with fractional spin excitations occur in frustrated magnetic crystals, like ZnCu3(OH)6Cl2 (herbertsmithite), and in α-RuCl3. Fractional spin-1/2 excitations have also been observed in spin-1 quantum spin chains. Spin ice in Dy2Ti2O7 and Ho2Ti2O7 has fractionalized spin freedom, leading to deconfined magnetic monopoles. They should be contrasted with quasiparticles such as magnons and Cooper pairs, which have quantum numbers that are combinations of the constituents. The most celebrated may be quantum Hall systems, occurring at high magnetic fields in 2D electron gas materials such as GaAs heterostructures. Electrons combined with magnetic flux vortices carry current. Graphene exhibits charge fractionalization. Attempts have been made to extend fractional behavior to 3D systems. Surface states in topological insulators of various compounds (e.g. tellurium alloys, antimony), and pure metal (bismuth) crystals have been explored for fractionalization signatures. Notes Theoretical physics Quasiparticles
Fractionalization
[ "Physics", "Materials_science" ]
705
[ "Matter", "Theoretical physics", "Condensed matter physics", "Quasiparticles", "Subatomic particles" ]
33,923,047
https://en.wikipedia.org/wiki/Hybrid%20LC%20Filter
A hybrid LC filter is a kind of electrical LC filter which typically contains two conductive foil layers, separated by an insulating material and coiled on a core. The layers are typically made of copper or aluminum. One layer, which is placed between the voltage source (such as an inverter) and a load, is called “the main foil”; this layer forms the filter inductance. The other foil, called “the auxiliary foil”, is connected to a neutral potential (e.g. earth), forming the useful capacitance between the foils. In this way the self-capacitance of the main foil is greatly decreased. The filter is characterized by improved high-frequency performance (its working frequency range extends at least up to tens of MHz). The mutual inductance between the foil layers is rather large; the coupling factor is typically about 0.95-0.99. The higher the mutual inductance, the better the damping properties of the hybrid LC filter. References Analog circuits
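A minimal numerical sketch of the quantities mentioned above, assuming the standard two-coil relation M = k·sqrt(L1·L2) for the mutual inductance and the ideal corner frequency of an LC section. The component values are illustrative and do not describe a particular device, whose high-frequency behaviour also depends on construction details not captured here.

```python
import math

def mutual_inductance(k, L_main, L_aux):
    """Mutual inductance between the main and auxiliary foils for coupling factor k."""
    return k * math.sqrt(L_main * L_aux)

def corner_frequency(L, C):
    """Corner frequency of an ideal LC low-pass section, f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative component values (1 mH foils, 10 uF foil-to-foil capacitance)
L_main, L_aux, C = 1e-3, 1e-3, 10e-6
for k in (0.95, 0.99):
    print(f"k = {k}: M = {mutual_inductance(k, L_main, L_aux) * 1e3:.3f} mH")
print(f"ideal LC corner frequency: {corner_frequency(L_main, C):.0f} Hz")
```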
Hybrid LC Filter
[ "Engineering" ]
202
[ "Analog circuits", "Electronic engineering" ]
56,573,387
https://en.wikipedia.org/wiki/Atomic%20Homefront
Atomic Homefront is a 2017 documentary film about the effects of radioactive waste stored in West Lake Landfill in St. Louis County, Missouri, by Rebecca Cammisa and co-produced by James Freydberg and Larissa Bills. References External links 2017 documentary films 2017 films Environmental impact of nuclear power Hazardous waste Radioactive waste Radioactivity St. Louis County, Missouri HBO documentary films 2010s English-language films 2010s American films Films set in St. Louis Films scored by Robert Miller English-language documentary films
Atomic Homefront
[ "Physics", "Chemistry", "Technology" ]
100
[ "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Hazardous waste", "Radioactivity", "Nuclear physics", "Environmental impact of nuclear power", "Radioactive waste" ]
56,577,590
https://en.wikipedia.org/wiki/Li%C3%B1%C3%A1n%27s%20diffusion%20flame%20theory
Liñán diffusion flame theory is a theory developed by Amable Liñán in 1974 to explain the diffusion flame structure using activation energy asymptotics and Damköhler number asymptotics. Liñán used counterflowing jets of fuel and oxidizer to study the diffusion flame structure, analyzing for the entire range of Damköhler number. His theory predicted four different types of flame structure as follows, Nearly-frozen ignition regime, where deviations from the frozen flow conditions are small (no reaction sheet exist in this regime), Partial burning regime, where both fuel and oxidizer cross the reaction zone and enter into the frozen flow on other side, Premixed flame regime, where only one of the reactants cross the reaction zone, in which case, reaction zone separates a frozen flow region from a near-equilibrium region, Near-equilibrium diffusion-controlled regime, is a thin reaction zone, separating two near-equilibrium region. Mathematical description The theory is well explained in the simplest possible model. Thus, assuming a one-step irreversible Arrhenius law for the combustion chemistry with constant density and transport properties and with unity Lewis number reactants, the governing equation for the non-dimensional temperature field in the stagnation point flow reduces to where is the mixture fraction, is the Damköhler number, is the activation temperature and the fuel mass fraction and oxidizer mass fraction are scaled with their respective feed stream values, given by with boundary conditions . Here, is the unburnt temperature profile (frozen solution) and is the stoichiometric parameter (mass of oxidizer stream required to burn unit mass of fuel stream). The four regime are analyzed by trying to solve above equations using activation energy asymptotics and Damköhler number asymptotics. The solution to above problem is multi-valued. Treating mixture fraction as independent variable reduces the equation to with boundary conditions and . Extinction Damköhler number The reduced Damköhler number is defined as follows where and . The theory predicted an expression for the reduced Damköhler number at which the flame will extinguish, given by where . See also Liñán's equation Emmons problem Clarke–Riley diffusion flame Burke–Schumann flame References Fluid dynamics Combustion
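The ignition–extinction multiplicity underlying the regimes listed above can be illustrated with a much simpler, swapped-in model: the classic well-stirred-reactor heat balance with one-step Arrhenius chemistry. This is not Liñán's counterflow formulation, but it produces the same S-shaped response in the Damköhler number, whose two turning points play the role of ignition and extinction Damköhler numbers. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Non-dimensional steady heat balance for a well-stirred reactor with one-step
# Arrhenius chemistry:  Da * (1 - theta) * exp(-Ta / T(theta)) = theta,
# where theta is the normalized temperature rise and T = Tu + (Tad - Tu) * theta.
# Solving for Da as a function of theta traces out the S-shaped response curve.

Tu, Tad, Ta = 300.0, 2100.0, 15000.0            # illustrative temperatures in kelvin
theta = np.linspace(1e-3, 1.0 - 1e-3, 20000)
T = Tu + (Tad - Tu) * theta
Da = theta / ((1.0 - theta) * np.exp(-Ta / T))

# The two turning points of Da(theta) mark the ignition and extinction limits
dDa = np.gradient(Da, theta)
turning = np.where(np.diff(np.sign(dDa)) != 0)[0]
for i in turning:
    print(f"turning point: theta = {theta[i]:.3f}, Da = {Da[i]:.3e}")
```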
Liñán's diffusion flame theory
[ "Chemistry", "Engineering" ]
463
[ "Piping", "Chemical engineering", "Combustion", "Fluid dynamics" ]
43,582,251
https://en.wikipedia.org/wiki/Astrophysics%20%28journal%29
Astrophysics is a peer-reviewed scientific journal of astrophysics published by Springer. Each volume is published every three months. It was founded in 1965 by the Soviet Armenian astrophysicist Viktor Ambartsumian. It is the English version of the journal Astrofizika, published by the Armenian National Academy of Sciences mostly in Russian. The current editor-in-chief is Arthur Nikoghossian. Aims and scope The focus of this journal is astronomy and is a translation of the peer-reviewed Russian language journal Astrofizika. Abstracting and indexing Astrophysics is indexed in the following databases: Astrophysics Data System Academic OneFile Academic Search Chemical Abstracts Service CSA CSA Environmental Sciences Current Contents/Physical Chemical and Earth Sciences Earthquake Engineering Abstracts EBSCO Discovery Service Expanded Academic INIS Atomindex INSPEC INSPIRE-HEP Journal Citation Reports/Science Edition Science Citation Index Expanded SCImago SCOPUS Simbad Astronomical Database Summon by ProQuest See also List of astronomy journals References External links astro.asj-oa.am Astrophysics journals Academic journals established in 1965 Springer Science+Business Media academic journals English-language journals Quarterly journals
Astrophysics (journal)
[ "Physics", "Astronomy" ]
234
[ "Astrophysics journals", "Astronomy journal stubs", "Astronomy stubs", "Astrophysics" ]
43,583,184
https://en.wikipedia.org/wiki/Place-permutation%20action
In mathematics, there are two natural interpretations of the place-permutation action of symmetric groups, in which the group elements act on positions or places. Each may be regarded as either a left or a right action, depending on the order in which one chooses to compose permutations. There are just two interpretations of the meaning of "acting by a permutation " but these lead to four variations, depending whether maps are written on the left or right of their arguments. The presence of so many variations often leads to confusion. When regarding the group algebra of a symmetric group as a diagram algebra it is natural to write maps on the right so as to compute compositions of diagrams from left to right. Maps written on the left First we assume that maps are written on the left of their arguments, so that compositions take place from right to left. Let be the symmetric group on letters, with compositions computed from right to left. Imagine a situation in which elements of act on the “places” (i.e., positions) of something. The places could be vertices of a regular polygon of sides, the tensor positions of a simple tensor, or even the inputs of a polynomial of variables. So we have places, numbered in order from 1 to , occupied by objects that we can number . In short, we can regard our items as a word of length in which the position of each element is significant. Now what does it mean to act by “place-permutation” on ? There are two possible answers: an element can move the item in the th place to the th place, or it can do the opposite, moving an item from the th place to the th place. Each of these interpretations of the meaning of an “action” by (on the places) is equally natural, and both are widely used by mathematicians. Thus, when encountering an instance of a "place-permutation" action one must take care to determine from the context which interpretation is intended, if the author does not give specific formulas. Consider the first interpretation. The following descriptions are all equivalent ways to describe the rule for the first interpretation of the action: For each , move the item in the th place to the th place. For each , move the item in the th place to the th place. For each , replace the item in the th position by the one that was in the th place. This action may be written as the rule . Now if we act on this by another permutation then we need to first relabel the items by writing . Then takes this to This proves that the action is a left action: . Now we consider the second interpretation of the action of , which is the opposite of the first. The following descriptions of the second interpretation are all equivalent: For each , move the item in the th place to the th place. For each , move the item in the th place to the th place. For each , replace the item in the th position by the one that was in the th place. This action may be written as the rule . In order to act on this by another permutation , again we first relabel the items by writing . Then the action of takes this to This proves that our second interpretation of the action is a right action: . Example If is the 3-cycle and is the transposition , then since we write maps on the left of their arguments we have Using the first interpretation we have , the result of which agrees with the action of on . So . On the other hand, if we use the second interpretation, we have , the result of which agrees with the action of on . So . 
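The two interpretations, and the left-/right-action identities derived above for maps written on the left, can be checked mechanically. The Python sketch below uses 0-indexed permutations and an arbitrary three-letter word; the permutations it runs over are illustrative and are not the specific ones used in the worked example.

```python
from itertools import permutations

def compose(sigma, pi):
    """Composition with maps written on the left: (sigma o pi)(i) = sigma(pi(i))."""
    return tuple(sigma[pi[i]] for i in range(len(pi)))

def act_first(sigma, word):
    """First interpretation: the item in place i moves to place sigma(i),
    so the new word r satisfies r[sigma(i)] = word[i]."""
    r = [None] * len(word)
    for i, x in enumerate(word):
        r[sigma[i]] = x
    return tuple(r)

def act_second(sigma, word):
    """Second interpretation: the item in place sigma(i) moves to place i,
    so the new word r satisfies r[i] = word[sigma(i)]."""
    return tuple(word[sigma[i]] for i in range(len(word)))

word = ("a", "b", "c")   # three items occupying places 1, 2, 3
for sigma in permutations(range(3)):
    for pi in permutations(range(3)):
        # First interpretation is a left action: sigma.(pi.w) == (sigma o pi).w
        assert act_first(sigma, act_first(pi, word)) == act_first(compose(sigma, pi), word)
        # Second interpretation is a right action: (w.pi).sigma == w.(pi o sigma)
        assert act_second(sigma, act_second(pi, word)) == act_second(compose(pi, sigma), word)
print("left/right action identities verified for all permutations of 3 places")
```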
Maps written on the right Sometimes people like to write maps on the right of their arguments. This is a convenient convention to adopt when working with symmetric groups as diagram algebras, for instance, since then one may read compositions from left to right instead of from right to left. The question is: how does this affect the two interpretations of the place-permutation action of a symmetric group? The answer is simple. By writing maps on the right instead of on the left we are reversing the order of composition, so in effect we replace by its opposite group . This is the same group, but the order of compositions is reversed. Reversing the order of compositions evidently changes left actions into right ones, and vice versa, changes right actions into left ones. This means that our first interpretation becomes a right action while the second becomes a left one. In symbols, this means that the action is now a right action, while the action is now a left action. Example We let be the 3-cycle and the transposition , as before. Since we now write maps on the right of their arguments we have Using the first interpretation we have , the result of which agrees with the action of on . So . On the other hand, if we use the second interpretation, we have , the result of which agrees with the action of on . So . Summary In conclusion, we summarize the four possibilities considered in this article. Here are the four variations: Although there are four variations, there are still only two different ways of acting; the four variations arise from the choice of writing maps on the left or right, a choice which is purely a matter of convention. Notes References Tom Halverson and Arun Ram, "Partition algebras", European J. Combin. 26 (2005), no. 6, 869–921. Thomas Hungerford, Algebra. Springer Lecture Notes 73, Springer-Verlag 1974. Gordon D. James, The Representation Theory of the Symmetric Groups. Lecture Notes in Math. 682 (1978), Springer. Hermann Weyl, The Classical Groups: Their Invariants and Representations. Princeton University Press, Princeton, N.J., 1939. Permutations
Place-permutation action
[ "Mathematics" ]
1,189
[ "Functions and mappings", "Permutations", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
43,585,307
https://en.wikipedia.org/wiki/Intensity%20mapping
In cosmology, intensity mapping is an observational technique for surveying the large-scale structure of the universe by using the integrated radio emission from unresolved gas clouds. In its most common variant, 21 cm intensity mapping, the 21cm emission line of neutral hydrogen is used to trace the gas. The hydrogen follows fluctuations in the underlying cosmic density field, with regions of higher density giving rise to a higher intensity of emission. Intensity fluctuations can therefore be used to reconstruct the power spectrum of matter fluctuations. The frequency of the emission line is redshifted by the expansion of the Universe, so by using radio receivers that cover a wide frequency band, one can detect this signal as a function of redshift, and thus cosmic time. This is similar in principle to a galaxy redshift survey, with the important distinction that galaxies need to be individually detected and measured, making intensity mapping a considerably faster method. History Aug 1977: Varshalovich and Khersonskii calculate the effect of 21cm line absorption at high redshift on the spectrum of the CMB. Aug 1996: Madau, Meiksin & Rees propose intensity mapping as a way of probing the Epoch of Reionization. Dec 2001: Bharadwaj & Sethi propose using intensity maps of neutral hydrogen to observe the matter distribution in the post-reionisation epoch. Jan 2004: Battye, Davies & Weller propose using 21 cm intensity maps to measure dark energy. Jun 2006: Peterson, Bandura, and Pen propose the Hubble Sphere Hydrogen Survey Mar 2009: Cosmological HI signal observed for the first time out to redshift 1.12 by the Green Bank Telescope. Jan 2013: Construction begins on the CHIME experiment in British Columbia, Canada. Scientific applications Intensity mapping has been proposed as a way of measuring the cosmic matter density field in several different regimes. Epoch of Reionization Between the times of recombination and reionization, the baryonic content of the Universe – mostly hydrogen – existed in a neutral phase. Detecting the 21 cm emission from this time, all the way through to the end of reionization, has been proposed as a powerful way of studying early structure formation. This period of the Universe's history corresponds to redshifts of to , implying a frequency range for intensity mapping experiments of 50 – 200 MHz. Large-scale structure and dark energy At late times, after the Universe has reionized, most of the remaining neutral hydrogen is stored in dense gas clouds called damped Lyman-alpha systems, where it is protected from ionizing UV radiation. These are predominantly hosted in galaxies, so the neutral hydrogen signal is effectively a tracer of the galaxy distribution. As with galaxy redshift surveys, intensity mapping observations can be used to measure the geometry and expansion rate of the Universe (and therefore the properties of dark energy) by using the baryon acoustic oscillation feature in the matter power spectrum as a standard ruler. The growth rate of structure, useful for testing modifications to general relativity, can also be measured using redshift space distortions. Both of these features are found at large scales of tens to hundreds of megaparsecs, which is why low angular resolution (unresolved) maps of neutral hydrogen are sufficient to detect them. This should be compared with the resolution of a redshift survey, which must detect individual galaxies that are typically only tens of kiloparsecs across. 
Because intensity mapping surveys can be carried out much faster than conventional optical redshift surveys, it is possible to map-out significantly larger volumes of the Universe. As such, intensity mapping has been proposed as a way of measuring phenomena on extremely large scales, including primordial non-Gaussianity from inflation and general relativistic corrections to the matter correlation function. Molecular and fine structure lines In principle, any emission line can be used to make intensity maps if it can be detected. Other emission lines that have been proposed as cosmological tracers include: Rotational transitions in molecules, such as carbon monoxide Fine structure transitions from species such as ionized carbon Lyman-alpha emission from hydrogen Experiments The following telescopes have either hosted intensity mapping surveys, or plan to carry them out in future. TIANLAI (China) BINGO (Brazil/Uruguay/UK) CHIME (Canada) COMAP (USA) FAST (China) Green Bank Telescope (USA) HIRAX (South Africa) KAT7 (South Africa) MeerKAT (South Africa) Parkes radio telescope (Australia) PAPER (USA/South Africa/Australia) Square Kilometre Array (South Africa/Australia) The Goddard Space Flight Center also host a list of intensity mapping experiments. References External links Oxford Martin workshop on intensity mapping RAS discussion meeting on intensity mapping CHIME experiment Physical cosmology Observational astronomy Large-scale structure of the cosmos
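The mapping between observing frequency and redshift for the 21 cm line, which sets the frequency ranges quoted above, follows directly from the redshifting of the 1420.4 MHz rest frequency. A short Python sketch; the 400–800 MHz band is given as an illustrative post-reionization example (roughly the band used by CHIME).

```python
REST_FREQ_MHZ = 1420.405751768   # rest frequency of the neutral-hydrogen 21 cm line

def observed_frequency_mhz(z):
    """Observed frequency of the 21 cm line emitted at redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

def redshift_for_frequency(f_mhz):
    """Redshift probed by a receiver channel centred at f_mhz."""
    return REST_FREQ_MHZ / f_mhz - 1.0

# Epoch-of-Reionization range quoted in the text (z ~ 6 to 27 maps to ~50-200 MHz)
for z in (6, 27):
    print(f"z = {z:2d}  ->  {observed_frequency_mhz(z):6.1f} MHz")

# Post-reionization dark-energy surveys, e.g. a 400-800 MHz band
for f in (400.0, 800.0):
    print(f"{f:5.0f} MHz  ->  z = {redshift_for_frequency(f):.2f}")
```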
Intensity mapping
[ "Physics", "Astronomy" ]
992
[ "Astronomical sub-disciplines", "Theoretical physics", "Observational astronomy", "Astrophysics", "Physical cosmology" ]
43,585,432
https://en.wikipedia.org/wiki/Medicinal%20uses%20of%20fungi
Medicinal fungi are fungi that contain metabolites or can be induced to produce metabolites through biotechnology to develop prescription drugs. Compounds successfully developed into drugs or under research include antibiotics, anti-cancer drugs, cholesterol and ergosterol synthesis inhibitors, psychotropic drugs, immunosuppressants and fungicides. History Although fungi products have long been used in traditional medicine, the ability to identify beneficial properties and then extract the active ingredient started with the discovery of penicillin by Alexander Fleming in 1928. Since that time, many potential antibiotics were discovered and the potential for various fungi to synthesize biologically active molecules useful in various clinical therapies has been under research. Pharmacological research identified antifungal, antiviral, and antiprotozoan compounds from fungi. Ganoderma lucidum, known in Chinese as líng zhī ("spirit plant"), and in Japanese as mannentake ("10,000-year mushroom"), has been well studied. Another species of genus Ganoderma, G. applanatum, remains under basic research. Inonotus obliquus was used in Russia as early as the 16th century; it featured in Alexandr Solzhenitsyn's 1967 novel Cancer Ward. Research and drug development Cancer There is no good evidence that any type of mushroom or mushroom extract can prevent or cure cancer. 11,11'-Dideoxyverticillin A, an isolate of marine Penicillium, was used to create dozens of semi-synthetic, candidate anticancer compounds. 11,11'-Dideoxyverticillin A, andrastin A, barceloneic acid A, and barceloneic acid B, are farnesyl transferase inhibitors that can be made by Penicillium. 3-O-Methylfunicone, anicequol, duclauxin, and rubratoxin B, are anticancer/cytotoxic metabolites of Penicillium. Penicillium is a potential source of the leukemia medicine asparaginase. Some countries have approved beta-glucan fungal extracts lentinan, polysaccharide-K, and polysaccharide peptide as immunologic adjuvants. Antibacterial agents (antibiotics) Alexander Fleming led the way to the beta-lactam antibiotics with the Penicillium mold and penicillin. Subsequent discoveries included alamethicin, aphidicolin, brefeldin A, cephalosporin, cerulenin, citromycin, eupenifeldin, fumagillin, fusafungine, fusidic acid, helvolic acid, itaconic acid, MT81, nigrosporin B, usnic acid, verrucarin A, vermiculine and many others. Antibiotics retapamulin, tiamulin, and valnemulin are derivatives of the fungal metabolite pleuromutilin. Plectasin, austrocortilutein, austrocortirubin, coprinol, oudemansin A, strobilurin, illudin, pterulone, and sparassol are under research for their potential antibiotic activity. Cholesterol biosynthesis inhibitors Statins are an important class of cholesterol-lowering drugs; the first generation of statins were derived from fungi. Lovastatin, the first commercial statin, was extracted from a fermentation broth of Aspergillus terreus. Industrial production is now capable of producing 70 mg lovastatin per kilogram of substrate. The red yeast rice fungus, Monascus purpureus, can synthesize lovastatin, mevastatin, and the simvastatin precursor monacolin J. Nicotinamide riboside, a cholesterol biosynthesis inhibitor, is made by Saccharomyces cerevisiae. Antifungals Some antifungals are derived or extracted from other fungal species. Griseofulvin is derived from a number of Penicillium species; caspofungin is derived from Glarea lozoyensis. Strobilurin, azoxystrobin, micafungin, and echinocandins, are all extracted from fungi. 
Anidulafungin is a derivative of an Aspergillus metabolite. Antivirals Many mushrooms contain potential antiviral compounds remaining under preliminary research, such as: Lentinus edodes, Ganoderma lucidum, Ganoderma colossus, Hypsizygus marmoreus, Cordyceps militaris, Grifola frondosa, Scleroderma citrinum, Flammulina velutipes, and Trametes versicolor, Fomitopsis officinalis. Immunosuppressants Cyclosporin was discovered in Tolypocladium inflatum, while Bredinin was found in Eupenicillium brefeldianum and mycophenolic acid in Penicillium stoloniferum. Thermophilic fungi were the source of the fingolimod precursor myriocin. Aspergillus synthesizes immunosuppressants gliotoxin and endocrocin. Subglutinols are immunosuppressants isolated from Fusarium subglutinans. Malaria Codinaeopsin, efrapeptins, zervamicins, and antiamoebin are made by fungi, and remain under basic research. Diabetes Many fungal isolates act as DPP-4 inhibitors, alpha-glucosidase inhibitors, and alpha amylase inhibitors in laboratory studies. Ternatin is a fungal isolate that may affect hyperglycemia. Psychotropic effects Numerous fungi have well-documented psychotropic effects, some of them severe and associated with acute and life-threatening side-effects. Among these is Amanita muscaria, the fly agaric. More widely used informally are a range of fungi collectively known as "magic mushrooms", which contain psilocybin and psilocin. The history of bread-making records deadly ergotism caused by ergot, most commonly Claviceps purpurea, a parasite of cereal crops. Psychoactive ergot alkaloid drugs have subsequently been extracted from or synthesised starting from ergot; these include ergotamine, dihydroergotamine, ergometrine, ergocristine, ergocryptine, ergocornine, methysergide, bromocriptine, cabergoline, and pergolide. Vitamin D2 Fungi are a source of ergosterol which can be converted to vitamin D2 upon exposure to ultraviolet light. Yeasts The yeast Saccharomyces is used industrially to produce the amino acid lysine, as well as recombinant proteins insulin and hepatitis B surface antigen. Transgenic yeasts are used to produce artemisinin, as well as insulin analogs. Candida is used industrially to produce vitamins ascorbic acid and riboflavin. Pichia is used to produce the amino acid tryptophan and the vitamin pyridoxine. Rhodotorula is used to produce the amino acid phenylalanine. Moniliella is used industrially to produce the sugar alcohol erythritol. References External links Memorial Sloan-Kettering Agaricus subrufescens, Phellinus linteus, Ganoderma lucidum, Trametes versicolor and PSK, Grifola frondosa, Inonotus obliquus, Pleurotus ostreatus, Cordyceps, Shiitake, Lentinan, AHCC. American Cancer Society Trametes versicolor and PSK, Grifola frondosa , Shiitake . National Cancer Institute Shiitake, Lentinan, Cordycepin Chemotherapy Cancer Antibiotics Immunosuppressants Antifungals Antiparasitic agents
Medicinal uses of fungi
[ "Biology" ]
1,728
[ "Antibiotics", "Biocides", "Antiparasitic agents", "Biotechnology products" ]
43,588,974
https://en.wikipedia.org/wiki/NFPA%201006
NFPA 1006 (Standard on Operations and Training for Technical Search and Rescue Incidents) is a standard published by the National Fire Protection Association which identifies the minimum job performance requirements (JPRs) for fire service and other emergency response personnel who perform technical rescue operations. Revision history References Fire protection NFPA Standards Rescue
NFPA 1006
[ "Engineering" ]
65
[ "Building engineering", "Fire protection" ]
43,589,365
https://en.wikipedia.org/wiki/Wood%20science
Wood science is the scientific field which predominantly studies and investigates elements associated with the formation, the physical and chemical composition, and the macro- and microstructure of wood as a bio-based and lignocellulosic material. Wood science additionally delves into the biological, chemical, physical, and mechanical properties and characteristics of wood as a natural material. Deep understanding of wood plays a pivotal role in several endeavors such as the processing of wood, the production of wood-based materials like particleboard, fiberboard, OSB, plywood and other materials, as well as the utilization of wood and wood-based materials in construction and a wide array of products, including pulpwood, furniture, engineered wood products, such as glued laminated timber, CLT, LVL, PSL, as well as pellets, briquettes, and numerous wood-derived products. History Initial comprehensive investigations in the field of wood science emerged at the start of the 20th century. In 1902, the Wood Processing Laboratory was founded in the Department of Forestry at Tokyo University and academic studies on wood processing were first initiated. The Forest and Forest Products Research Institute in Tokyo was also established in 1905. In 1906 the Forest Products Research Institute was created in Dehradun, India. The advent of contemporary wood research commenced in 1910, when the Forest Products Laboratory (FPL) was established in Madison, Wisconsin, USA. The Forest Products Laboratory played a fundamental role in wood science providing scientific research on wood and wood products in partnership with academia, industry, local and other institutions in North and South America and worldwide. In the following years, many wood research institutes came into existence across almost all industrialized nations. A general overview of these institutes and laboratories is shown below: 1913: Institute of Wood and Pulp Chemistry Eberswalde (today's Eberswalde University for Sustainable Development), Germany 1913: Forest Products Laboratory Montreal, Canada 1918: Forest Products Laboratory Vancouver, Canada 1919: Forest Products Laboratory Melbourne, Australia 1923: Department of Mechanics and Wood Technology, University of Sopron, Hungary 1923: Forest Products Research Laboratory, Princes Risborough, Great Britain 1929: Institute for Wood Science and Technology, Leningrant, St. 
Petersburg, USSR 1933: Centre Technique du Bois, Paris, France 1936: Wood Department of the Swiss Federal Laboratories for Materials Testing in Zurich (today's Swiss Federal Laboratories for Materials Science and Technology), Switzerland 1942: Laboratory of Wood Technology Helsinki, Finland 1944: Swedish Forest Products Research Laboratory, former TRÄTEK (today's Research Institutes of Sweden), Sweden 1946: Latvian Academy of Sciences, Institute of Wood Chemistry, Latvia 1946: Institute for Wood Research, iVTH (today's Fraunhofer Institute for Wood Research), Germany 1947: State Wood Research Institute Bratislava, Slovakia 1947: Forest Research Institute – Rotorua (today's Scion), New Zealand 1948: Austrian Wood Research Institute Vienna (today's Holzforschung Austria), Austria 1949: Norwegian Institute of Wood Technology, Norway 1950: Federal Institute for Forestry and Forest Products (today's Johann Heinrich von Thünen Institute), Germany 1952: Institute for Wood Technology and Fibers (today's Institute for Wood Technology Dresden), Germany 1952: Institute for Wood Research and Wood Technology (today's Wood Research Munich), Germany 1954: Faculty of Wood Technology, Poznan University (today's Faculty of Forestry and Wood Technology at Poznan), Poland From the '60s, the founding of research institutes in the field of wood sciences continued in many universities, and also in universities of applied sciences and technological universities. Today, the International Academy of Wood Science (IAWS), a recognised and non-profit assembly of wood scientists, represents worldwide the scientific area of wood science and all of its associated technological domains. Sub-areas The field of wood science can be categorized into three distinct sub-areas, which include: Wood biology, a subset of wood science which focuses on the formation, structure and composition of wood tissues. It involves investigations conducted at the macroscopic, microscopic, and molecular levels. Additionally, this sub-field encompasses wood anatomy which involves the (macroscopic - microscopic) identification of various wood species. Wood chemistry, whose primary focus is the analysis of the chemical constituents comprising wood, with specific emphasis on cellulose, lignin, hemicelluloses, and extractives, as well as on the various products derived from these components. It is also explores potential uses for pulp and paper production, the utilization of wood and wood waste, the generation of energy and chemicals from pulping byproducts, and the conversion of biomass. Wood physics, which constitutes an essential component of the field of wood science, building upon discoveries in wood chemistry, wood anatomy (xylem), and biology, as well as principles from classical physics, mechanics, and materials strength. Wood physics encompasses critical research areas including: a) examining wood behaviour in relation to moisture, which involves fundamental aspects of moisture absorption, swelling, and shrinkage, b) investigating the impact of temperature on wood properties, encompassing heat conduction and heat storage, and c) assessing the mechanical, rheological, and acoustic properties and qualities of both wood and wood-based products. 
Scientific journals Below are some of the significant scientific journals within the areas of wood sciences: Holzforschung European Journal of Wood and Wood Products Wood Science and Technology Wood Material Science and Engineering Cellulose Mokuzai Gakkaishi Journal of Wood Science BioResources IAWA Journal Maderas: Ciencia y Tecnología Wood Research Journal of Wood Chemistry & Technology Forest Products Journal Wood and Fiber Science Journal of the Korean Wood Science and Technology International Wood Products Journal Drvna Industrija (Wood Industry) Drewno Iranian Journal of Wood and Paper Industries Journal of the Indian Academy of Wood Science Further reading Peter Niemz, Alfred Teischinger, Dick Sandberg (2023). Springer Handbook of Wood Science and Technology, Springer 2023, ISBN 978-3-030-81314-7. George Tsoumis (2009). Science and Technology of Wood - Structure, Properties, Utilization. Publishing House Kessel, ISBN 9783941300224. Callum A.S. Hill (2006): Wood Modification: Chemical, Thermal and Other Processes. Wiley 2006, ISBN 0-470-02172-1. Franz F.P. Kollmann, Edward W. Kuenzi, Alfred J. Stamm (1975). Principles of Wood Science and Technology II., Springer 1975, ISBN 978-3-642-87933-3. References External links Google Scholar Wood Science and Technology The International Academy of Wood Science IAWS International Society of Wood Science and Technology InsideWood, NCSU by Elisabeth Wheeler et al. dataholz.eu Holzforschung Austria The main tropical wood species CIRAD France Wood Handbook Forest Products Laboratory at Madison delta-intkey.com The Wood Database Wallenberg Wood Science Center TUM - Wood Science and Biotechnology Institute of Wood Science & Technology at Bangalore Wood Wood sciences Materials science
Wood science
[ "Physics", "Materials_science", "Engineering" ]
1,453
[ "Wood sciences", "Applied and interdisciplinary physics", "Materials science", "nan" ]
43,589,512
https://en.wikipedia.org/wiki/Single-layer%20materials
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes. 2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements). It is predicted that there are hundreds of stable single-layer materials. The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials, can be found in computational databases. 2D materials can be produced using mainly two approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation. Single element materials C: graphene and graphyne Graphene Graphene is a crystalline allotrope of carbon in the form of a nearly transparent (to visible light) one atom thick sheet. It is hundreds of times stronger than most steels by weight. It has the highest known thermal and electrical conductivity, displaying current densities 1,000,000 times that of copper. It was first produced in 2004. Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics "for groundbreaking experiments regarding the two-dimensional material graphene". They first produced it by lifting graphene flakes from bulk graphite with adhesive tape and then transferring them onto a silicon wafer. Graphyne Graphyne is another 2-dimensional carbon allotrope whose structure is similar to graphene's. It can be seen as a lattice of benzene rings connected by acetylene bonds. Depending on the content of the acetylene groups, graphyne can be considered a mixed hybridization, spn, where 1 < n < 2, compared to graphene (pure sp2) and diamond (pure sp3). First-principle calculations using phonon dispersion curves and ab-initio finite temperature, quantum mechanical molecular dynamics simulations showed graphyne and its boron nitride analogues to be stable. The existence of graphyne was conjectured before 1960. In 2010, graphdiyne (graphyne with diacetylene groups) was synthesized on copper substrates. In 2022 a team claimed to have successfully used alkyne metathesis to synthesise graphyne though this claim is disputed. However, after an investigation the team's paper was retracted by the publication citing fabricated data. Later during 2022 synthesis of multi-layered γ‑graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions. Recently, it has been claimed to be a competitor for graphene due to the potential of direction-dependent Dirac cones. B: borophene Borophene is a crystalline atomic monolayer of boron and is also known as boron sheet. First predicted by theory in the mid-1990s in a freestanding state, and then demonstrated as distinct monoatomic layers on substrates by Zhang et al., different borophene structures were experimentally confirmed in 2015. Ge: germanene Germanene is a two-dimensional allotrope of germanium with a buckled honeycomb structure. Experimentally synthesized germanene exhibits a honeycomb structure. 
This honeycomb structure consists of two hexagonal sub-lattices that are vertically displaced by 0.2 A from each other. Si: silicene Silicene is a two-dimensional allotrope of silicon, with a hexagonal honeycomb structure similar to that of graphene. Its growth is scaffolded by a pervasive Si/Ag(111) surface alloy beneath the two-dimensional layer. Sn: stanene Stanene is a predicted topological insulator that may display dissipationless currents at its edges near room temperature. It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. Its buckled structure leads to high reactivity against common air pollutants such as NOx and COx and it is able to trap and dissociate them at low temperature. A structure determination of stanene using low energy electron diffraction has shown ultra-flat stanene on a Cu(111) surface. Pb: plumbene Plumbene is a two-dimensional allotrope of lead, with a hexagonal honeycomb structure similar to that of graphene. P: phosphorene Phosphorene is a 2-dimensional, crystalline allotrope of phosphorus. Its mono-atomic hexagonal structure makes it conceptually similar to graphene. However, phosphorene has substantially different electronic properties; in particular it possesses a nonzero band gap while displaying high electron mobility. This property potentially makes it a better semiconductor than graphene. The synthesis of phosphorene mainly consists of micromechanical cleavage or liquid phase exfoliation methods. The former has a low yield while the latter produce free standing nanosheets in solvent and not on the solid support. The bottom-up approaches like chemical vapor deposition (CVD) are still blank because of its high reactivity. Therefore, in the current scenario, the most effective method for large area fabrication of thin films of phosphorene consists of wet assembly techniques like Langmuir-Blodgett involving the assembly followed by deposition of nanosheets on solid supports. Sb: antimonene Antimonene is a two-dimensional allotrope of antimony, with its atoms arranged in a buckled honeycomb lattice. Theoretical calculations predicted that antimonene would be a stable semiconductor in ambient conditions with suitable performance for (opto)electronics. Antimonene was first isolated in 2016 by micromechanical exfoliation and it was found to be very stable under ambient conditions. Its properties make it also a good candidate for biomedical and energy applications. In a study made in 2018, antimonene modified screen-printed electrodes (SPE's) were subjected to a galvanostatic charge/discharge test using a two-electrode approach to characterize their supercapacitive properties. The best configuration observed, which contained 36 nanograms of antimonene in the SPE, showed a specific capacitance of 1578 F g−1 at a current of 14 A g−1. Over 10,000 of these galvanostatic cycles, the capacitance retention values drop to 65% initially after the first 800 cycles, but then remain between 65% and 63% for the remaining 9,200 cycles. The 36 ng antimonene/SPE system also showed an energy density of 20 mW h kg−1 and a power density of 4.8 kW kg−1. These supercapacitive properties indicate that antimonene is a promising electrode material for supercapacitor systems. 
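The figures of merit quoted for the antimonene-modified electrodes come from galvanostatic charge/discharge measurements. The standard relations are C = I·Δt/(m·ΔV) for the specific capacitance and E = ½·C·V² for the gravimetric energy density, although reporting conventions (per-electrode versus two-electrode cell, active-material versus device mass) differ between studies. The sketch below therefore uses hypothetical lab-scale numbers and is not a reconstruction of the cited measurements.

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance from a galvanostatic discharge: C = I*dt / (m*dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

def energy_density_wh_per_kg(c_f_per_g, voltage_window_v):
    """Gravimetric energy density E = 1/2 * C * V^2, converted to Wh/kg."""
    joules_per_kg = 0.5 * (c_f_per_g * 1000.0) * voltage_window_v**2
    return joules_per_kg / 3600.0

# Hypothetical discharge: 1 mA for 50 s over a 0.8 V window, 1 mg of active material.
c = specific_capacitance(current_a=1e-3, discharge_time_s=50.0, mass_g=1e-3, voltage_window_v=0.8)
print(f"specific capacitance: {c:.0f} F/g")
print(f"energy density: {energy_density_wh_per_kg(c, 0.8):.1f} Wh/kg")
```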
A more recent study, concerning antimonene modified SPEs shows the inherent ability of antimonene layers to form electrochemically passivated layers to facilitate electroanalytical measurements in oxygenated environments, in which the presence of dissolved oxygens normally hinders the analytical procedure. The same study also depicts the in-situ production of antimonene oxide/PEDOT:PSS nanocomposites as electrocatalytic platforms for the determination of nitroaromatic compounds. Bi: bismuthene Bismuthene, the two-dimensional (2D) allotrope of bismuth, was predicted to be a topological insulator. It was predicted that bismuthene retains its topological phase when grown on silicon carbide in 2015. The prediction was successfully realized and synthesized in 2016. At first glance the system is similar to graphene, as the Bi atoms arrange in a honeycomb lattice. However the bandgap is as large as 800mV due to the large spin–orbit interaction (coupling) of the Bi atoms and their interaction with the substrate. Thus, room-temperature applications of the quantum spin Hall effect come into reach. It has been reported to be the largest nontrivial bandgap 2D topological insulator in its natural state. Top-down exfoliation of bismuthene has been reported in various instances with recent works promoting the implementation of bismuthene in the field of electrochemical sensing. Emdadul et al. predicted the mechanical strength and phonon thermal conductivity of monolayer β-bismuthene through atomic-scale analysis. The obtained room temperature (300K) fracture strength is ~4.21 N/m along the armchair direction and ~4.22 N/m along the zigzag direction. At 300 K, its Young's moduli are reported to be ~26.1 N/m and ~25.5 N/m, respectively, along the armchair and zigzag directions. In addition, their predicted phonon thermal conductivity of ~1.3 W/m∙K at 300 K is considerably lower than other analogous 2D honeycombs, making it a promising material for thermoelectric operations. Au: goldene On 16 April 2024, scientists from Linköping University in Sweden reported that they had produced goldene, a single layer of gold atoms 100nm wide. Lars Hultman, a materials scientist on the team behind the new research, is quoted as saying "we submit that goldene is the first free-standing 2D metal, to the best of our knowledge", meaning that it is not attached to any other material, unlike plumbene and stanene. Researchers from New York University Abu Dhabi (NYUAD) previously reported to have synthesised Goldene in 2022, however various other scientists have contended that the NYUAD team failed to prove they made a single-layer sheet of gold, as opposed to a multi-layer sheet. Goldene is expected to be used primarily for its optical properties, with applications such as sensing or as a catalyst. Metals Single and double atom layers of platinum in a two-dimensional film geometry has been demonstrated. These atomically thin platinum films are epitaxially grown on graphene, which imposes a compressive strain that modifies the surface chemistry of the platinum, while also allowing charge transfer through the graphene. Single atom layers of palladium with the thickness down to 2.6 Å, and rhodium with the thickness of less than 4 Å have been synthesized and characterized with atomic force microscopy and transmission electron microscopy. A 2D titanium formed by additive manufacturing (laser powder bed fusion) achieved greater strength than any known material (50% greater than magnesium alloy WE54). 
The material was arranged in a tubular lattice with a thin band running inside, merging two complementary lattice structures. This reduced by half the stress at the weakest points in the structure. 2D supracrystals The supracrystals of 2D materials have been proposed and theoretically simulated. These monolayer crystals are built of supra atomic periodic structures where atoms in the nodes of the lattice are replaced by symmetric complexes. For example, in the hexagonal structure of graphene patterns of 4 or 6 carbon atoms would be arranged hexagonally instead of single atoms, as the repeating node in the unit cell. 2D alloys Two-dimensional alloys (or surface alloys) are a single atomic layer of alloy that is incommensurate with the underlying substrate. One example is the 2D ordered alloys of Pb with Sn and with Bi. Surface alloys have been found to scaffold two-dimensional layers, as in the case of silicene. Compounds Boron nitride nanosheet Titanate nanosheet Borocarbonitrides MXenes 2D silica Niobium bromide and Niobium chloride () Transition metal dichalcogenide monolayers The most commonly studied two-dimensional transition metal dichalcogenide (TMD) is monolayer molybdenum disulfide (MoS2). Several phases are known, notably the 1T and 2H phases. The naming convention reflects the structure: the 1T phase has one "sheet" (consisting of a layer of S-Mo-S; see figure) per unit cell in a trigonal crystal system, while the 2H phase has two sheets per unit cell in a hexagonal crystal system. The 2H phase is more common, as the 1T phase is metastable and spontaneously reverts to 2H without stabilization by additional electron donors (typically surface S vacancies). The 2H phase of MoS2 (Pearson symbol hP6; Strukturbericht designation C7) has space group P63/mmc. Each layer contains Mo surrounded by S in trigonal prismatic coordination. Conversely, the 1T phase (Pearson symbol hP3) has space group P-3m1, and octahedrally-coordinated Mo; with the 1T unit cell containing only one layer, the unit cell has a c parameter slightly less than half the length of that of the 2H unit cell (5.95 Å and 12.30 Å, respectively). The different crystal structures of the two phases result in differences in their electronic band structure as well. The d-orbitals of 2H-MoS2 are split into three bands: dz2, dx2-y2,xy, and dxz,yz. Of these, only the dz2 is filled; this combined with the splitting results in a semiconducting material with a bandgap of 1.9eV. 1T-MoS2, on the other hand, has partially filled d-orbitals which give it a metallic character. Because the structure consists of in-plane covalent bonds and inter-layer van der Waals interactions, the electronic properties of monolayer TMDs are highly anisotropic. For example, the conductivity of MoS2 in the direction parallel to the planar layer (0.1–1 ohm−1cm−1) is ~2200 times larger than the conductivity perpendicular to the layers. There are also differences between the properties of a monolayer compared to the bulk material: the Hall mobility at room temperature is drastically lower for monolayer 2H MoS2 (0.1–10 cm2V−1s−1) than for bulk MoS2 (100–500 cm2V−1s−1). This difference arises primarily due to charge traps between the monolayer and the substrate it is deposited on. MoS2 has important applications in (electro)catalysis. As with other two-dimensional materials, properties can be highly geometry-dependent; the surface of MoS2 is catalytically inactive, but the edges can act as active sites for catalyzing reactions. 
For this reason, device engineering and fabrication may involve considerations for maximizing catalytic surface area, for example by using small nanoparticles rather than large sheets or by depositing the sheets vertically rather than horizontally. Catalytic efficiency also depends strongly on the phase: the aforementioned electronic properties of 2H MoS2 make it a poor candidate for catalysis applications, but these issues can be circumvented through a transition to the metallic (1T) phase. The 1T phase has more suitable properties, with a current density of 10 mA/cm2, an overpotential of −187 mV relative to RHE, and a Tafel slope of 43 mV/decade (compared to 94 mV/decade for the 2H phase). Graphane While graphene has a hexagonal honeycomb lattice structure with alternating double-bonds emerging from its sp2-bonded carbons, graphane, still maintaining the hexagonal structure, is the fully hydrogenated version of graphene with every sp3-hybridized carbon bonded to a hydrogen (chemical formula (CH)n). Furthermore, while graphene is planar due to its double-bonded nature, graphane is puckered, with the hexagons adopting different out-of-plane structural conformers such as the chair or boat to allow for the ideal 109.5° angles, which reduce ring strain, in direct analogy to the conformers of cyclohexane. Graphane was first theorized in 2003, was shown to be stable using first-principles energy calculations in 2007, and was first experimentally synthesized in 2009. There are various experimental routes available for making graphane, including the top-down approaches of reduction of graphite in solution or hydrogenation of graphite using plasma/hydrogen gas, as well as the bottom-up approach of chemical vapor deposition. Graphane is an insulator, with a predicted band gap of 3.5 eV; however, partially hydrogenated graphene is a semiconductor, with the band gap being controlled by the degree of hydrogenation. Germanane Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom. Germanane's structure is similar to that of graphane; bulk germanium does not adopt this structure. Germanane is produced in a two-step route starting with calcium germanide. From this material, the calcium (Ca) is removed by de-intercalation with HCl to give a layered solid with the empirical formula GeH. The Ca sites in Zintl-phase CaGe2 interchange with the hydrogen atoms in the HCl solution, producing GeH and CaCl2. SLSiN SLSiN (acronym for Single-Layer Silicon Nitride), a novel 2D material introduced as the first post-graphene member of Si3N4, was first discovered computationally in 2020 via density-functional-theory-based simulations. This new material is inherently 2D, an insulator with a band gap of about 4 eV, and stable both thermodynamically and in terms of lattice dynamics. Combined surface alloying Often single-layer materials, specifically elemental allotropes, are connected to the supporting substrate via surface alloys. By now, this phenomenon has been proven via a combination of different measurement techniques for silicene, for which the alloy is difficult to prove by a single technique and hence was not expected for a long time. Such scaffolding surface alloys can therefore also be expected beneath other two-dimensional materials, significantly influencing the properties of the two-dimensional layer. During growth, the alloy acts as both foundation and scaffold for the two-dimensional layer, for which it paves the way. 
Organic Ni3(HITP)2 is an organic, crystalline, structurally tunable electrical conductor with a high surface area. HITP is an organic chemical (2,3,6,7,10,11-hexaaminotriphenylene). It shares graphene's hexagonal honeycomb structure. Multiple layers naturally form perfectly aligned stacks, with identical 2-nm openings at the centers of the hexagons. Room temperature electrical conductivity is ~40 S cm−1, comparable to that of bulk graphite and among the highest for any conducting metal-organic framework (MOF). The temperature dependence of its conductivity is linear at temperatures between 100 K and 500 K, suggesting an unusual charge transport mechanism that has not been previously observed in organic semiconductors. The material was claimed to be the first of a group formed by switching metals and/or organic compounds. The material can be isolated as a powder or a film with conductivity values of 2 and 40 S cm−1, respectively. Polymer Using melamine (a carbon and nitrogen ring structure) as a monomer, researchers created 2DPA-1, a 2-dimensional polymer sheet held together by hydrogen bonds. The sheet forms spontaneously in solution, allowing thin films to be spin-coated. The polymer has a yield strength twice that of steel, and it resists six times more deformation force than bulletproof glass. It is impermeable to gases and liquids. Combinations Single layers of 2D materials can be combined into layered assemblies. For example, bilayer graphene is a material consisting of two layers of graphene. One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". Layered combinations of different 2D materials are generally called van der Waals heterostructures. Twistronics is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. Characterization Microscopy techniques such as transmission electron microscopy, 3D electron diffraction, scanning probe microscopy, scanning tunneling microscopy, and atomic-force microscopy are used to characterize the thickness and size of the 2D materials. Electrical properties and structural properties such as composition and defects are characterized by Raman spectroscopy, X-ray diffraction, and X-ray photoelectron spectroscopy. Mechanical characterization The mechanical characterization of 2D materials is difficult due to ambient reactivity and substrate constraints present in many 2D materials. For this reason, many mechanical properties are calculated using molecular dynamics simulations or molecular mechanics simulations. Experimental mechanical characterization is possible for 2D materials which can survive the conditions of the experimental setup and which can be deposited on suitable substrates or exist in free-standing form. Many 2D materials also exhibit out-of-plane deformation, which further complicates measurements. Nanoindentation testing is commonly used to experimentally measure elastic modulus, hardness, and fracture strength of 2D materials. From these directly measured values, models exist which allow the estimation of fracture toughness, work hardening exponent, residual stress, and yield strength. These experiments are run using dedicated nanoindentation equipment or an Atomic Force Microscope (AFM). 
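For the circular, drum-like geometry described below, the data reduction usually amounts to fitting the force–displacement curve with a linear term (residual pretension) plus a cubic term (2D elastic modulus). The following is only a minimal sketch of such a fit: it assumes the commonly used point-load membrane expression F = π·σ₀·δ + E2D·q³·δ³/a² (as applied, for example, to graphene drumheads), and the membrane radius, Poisson's ratio, and data are illustrative rather than measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative inputs (not measured values): membrane radius and assumed Poisson's ratio
a = 0.5e-6          # membrane radius in m (0.5 um drum)
nu = 0.165          # Poisson's ratio assumed for the sheet
q = 1.0 / (1.05 - 0.15 * nu - 0.16 * nu**2)  # geometry factor of the membrane model

def membrane_model(delta, sigma0, E2d):
    """Point-load membrane response: linear (pretension) + cubic (stretching) terms.
    delta  : indentation depth at the centre [m]
    sigma0 : residual 2D pretension [N/m]
    E2d    : 2D elastic modulus [N/m]
    """
    return np.pi * sigma0 * delta + E2d * (q**3) * delta**3 / a**2

# Synthetic force-displacement data standing in for an AFM indentation curve
delta = np.linspace(0, 100e-9, 50)                        # 0-100 nm indentation
F_true = membrane_model(delta, sigma0=0.3, E2d=340.0)      # "true" curve
F_meas = F_true + np.random.normal(0, 2e-9, delta.size)    # add measurement noise

# Fit the model to extract residual pretension and 2D modulus
(sigma0_fit, E2d_fit), _ = curve_fit(membrane_model, delta, F_meas, p0=(0.1, 100.0))
print(f"residual pretension ~ {sigma0_fit:.2f} N/m, 2D modulus ~ {E2d_fit:.0f} N/m")
```

In practice the fit is applied to the measured AFM curve rather than synthetic data, and the extracted 2D quantities (in N/m) can be converted to bulk-equivalent values only by assuming an effective layer thickness.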
Nanoindentation experiments are generally run with the 2D material as a linear strip clamped on both ends experiencing indentation by a wedge, or with the 2D material as a circular membrane clamped around the circumference experiencing indentation by a curved tip in the center. The strip geometry is difficult to prepare but allows for easier analysis due to the resulting linear stress fields. The circular drum-like geometry is more commonly used and can be easily prepared by exfoliating samples onto a patterned substrate. The stress applied to the film in the clamping process is referred to as the residual stress. In the case of very thin layers of 2D materials, bending stress is generally ignored in indentation measurements, with bending stress becoming relevant in multilayer samples. Elastic modulus and residual stress values can be extracted by determining the linear and cubic portions of the experimental force-displacement curve. The fracture stress of the 2D sheet is extracted from the applied stress at failure of the sample. AFM tip size was found to have little effect on elastic property measurement, but the breaking force was found to have a strong tip size dependence due to stress concentration at the apex of the tip. Using these techniques, the elastic modulus and yield strength of graphene were found to be 342 N/m and 55 N/m, respectively. Measuring Poisson's ratio in 2D materials is generally straightforward. To get a value, a 2D sheet is placed under stress and displacement responses are measured, or an MD calculation is run. The unique structures of 2D materials have been found to result in auxetic behavior in phosphorene and graphene, and a Poisson's ratio of zero in triangular lattice borophene. The shear modulus of graphene has been extracted by measuring a resonance frequency shift in a double paddle oscillator experiment, as well as with MD simulations. Fracture toughness of 2D materials in Mode I (KIC) has been measured directly by stretching pre-cracked layers and monitoring crack propagation in real-time. MD simulations as well as molecular mechanics simulations have also been used to calculate fracture toughness in Mode I. In anisotropic materials, such as phosphorene, crack propagation was found to happen preferentially along certain directions. Most 2D materials were found to undergo brittle fracture. Applications The major expectation held amongst researchers is that, given their exceptional properties, 2D materials will replace conventional semiconductors to deliver a new generation of electronics. Biological applications Research on 2D nanomaterials is still in its infancy, with the majority of research focusing on elucidating the unique material characteristics and few reports focusing on biomedical applications of 2D nanomaterials. Nevertheless, recent rapid advances in 2D nanomaterials have raised important yet exciting questions about their interactions with biological moieties. 2D nanoparticles such as carbon-based 2D materials, silicate clays, transition metal dichalcogenides (TMDs), and transition metal oxides (TMOs) provide enhanced physical, chemical, and biological functionality owing to their uniform shapes, high surface-to-volume ratios, and surface charge. Two-dimensional (2D) nanomaterials are ultrathin nanomaterials with a high degree of anisotropy and chemical functionality. 2D nanomaterials are highly diverse in terms of their mechanical, chemical, and optical properties, as well as in size, shape, biocompatibility, and degradability. 
These diverse properties make 2D nanomaterials suitable for a wide range of applications, including drug delivery, imaging, tissue engineering, biosensors, and gas sensors among others. However, their low-dimension nanostructure gives them some common characteristics. For example, 2D nanomaterials are the thinnest materials known, which means that they also possess the highest specific surface areas of all known materials. This characteristic makes these materials invaluable for applications requiring high levels of surface interactions on a small scale. As a result, 2D nanomaterials are being explored for use in drug delivery systems, where they can adsorb large numbers of drug molecules and enable superior control over release kinetics. Additionally, their exceptional surface area to volume ratios and typically high modulus values make them useful for improving the mechanical properties of biomedical nanocomposites and nanocomposite hydrogels, even at low concentrations. Their extreme thinness has been instrumental for breakthroughs in biosensing and gene sequencing. Moreover, the thinness of these molecules allows them to respond rapidly to external signals such as light, which has led to utility in optical therapies of all kinds, including imaging applications, photothermal therapy (PTT), and photodynamic therapy (PDT). Despite the rapid pace of development in the field of 2D nanomaterials, these materials must be carefully evaluated for biocompatibility in order to be relevant for biomedical applications. The newness of this class of materials means that even the relatively well-established 2D materials like graphene are poorly understood in terms of their physiological interactions with living tissues. Additionally, the complexities of variable particle size and shape, impurities from manufacturing, and protein and immune interactions have resulted in a patchwork of knowledge on the biocompatibility of these materials. See also Monolayer Two-dimensional semiconductor Transition metal dichalcogenide monolayers References External links "What Are 2D Materials, and Why Do They Interest Scientists?" in Columbia News (March 6, 2024) "Twenty years of 2D materials" in Nature Physics (January 16, 2024) Additional reading Condensed matter physics Semiconductors Monolayers
Single-layer materials
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,696
[ "Electrical resistance and conductance", "Monolayers", "Physical quantities", "Semiconductors", "Phases of matter", "Materials science", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Atoms", "Matter" ]
36,509,544
https://en.wikipedia.org/wiki/C11H9N3O
The molecular formula C11H9N3O (molar mass: 199.21 g/mol) may refer to: 3-Pyridylnicotinamide (3-pna), or N-(pyridin-3-yl)nicotinamide 4-Pyridylnicotinamide (4-PNA) Molecular formulas
C11H9N3O
[ "Physics", "Chemistry" ]
96
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
39,392,026
https://en.wikipedia.org/wiki/Modes%20of%20toxic%20action
A mode of toxic action is a common set of physiological and behavioral signs that characterize a type of adverse biological response. A mode of action should not be confused with mechanism of action, which refers to the biochemical processes underlying a given mode of action. Modes of toxic action are important, widely used tools in ecotoxicology and aquatic toxicology because they classify toxicants or pollutants according to their type of toxic action. There are two major types of modes of toxic action: non-specific acting toxicants and specific acting toxicants. Non-specific acting toxicants are those that produce narcosis, while specific acting toxicants are those that are non-narcotic and that produce a specific action at a specific target site. Types Non-specific Non-specific acting modes of toxic action result in narcosis; therefore, narcosis is a mode of toxic action. Narcosis is defined as a generalized depression in biological activity due to the presence of toxicant molecules in the organism. The target site and mechanism of toxic action through which narcosis affects organisms are still unclear, but there are hypotheses suggesting that it occurs through alterations of the cell membranes at specific sites, such as the lipid layers or the proteins bound to the membranes. Even though continuous exposure to a narcotic toxicant can produce death, if the exposure to the toxicant is stopped, narcosis can be reversible. Specific Toxicants that at low concentrations modify or inhibit some biological process by binding at a specific site or molecule have a specific acting mode of toxic action. However, at high enough concentrations, toxicants with specific acting modes of toxic action can produce narcosis that may or may not be reversible. Nevertheless, the specific action of the toxicant is always shown first because it requires lower concentrations. There are several specific acting modes of toxic action: Uncouplers of oxidative phosphorylation. These involve toxicants that uncouple the two processes that occur in oxidative phosphorylation: electron transfer and adenosine triphosphate (ATP) production. Acetylcholinesterase (AChE) inhibitors. AChE is an enzyme associated with nerve synapses that regulates nerve impulses by breaking down the neurotransmitter acetylcholine (ACh). When toxicants bind to AChE, they inhibit the breakdown of ACh. This results in continued nerve impulses across the synapses, which eventually cause nervous system damage. Examples of AChE inhibitors are organophosphates and carbamates, which are components found in pesticides (see Acetylcholinesterase inhibitors). Irritants. These are chemicals that cause an inflammatory effect on living tissue by chemical action at the site of contact. The resulting effect of irritants is an increase in tissue volume due to an increase in cell size (hypertrophy) or in cell number (hyperplasia). Examples of irritants are benzaldehyde, acrolein, zinc sulphate and chlorine. Central nervous system (CNS) seizure agents. CNS seizure agents inhibit cellular signaling by acting as receptor antagonists. They result in the inhibition of biological responses. Examples of CNS seizure agents are organochlorine pesticides. Respiratory blockers. These are toxicants that affect respiration by interfering with the electron transport chain in the mitochondria. Examples of respiratory blockers are rotenone and cyanide. 
Determination The pioneering work of identifying the major categories of modes of toxic action (see description above) was conducted by investigators from the U.S. Environmental Protection Agency (EPA) at the Duluth Laboratory using fish, which is why they named the categories Fish Acute Toxicity Syndromes (FATS). They proposed the FATS by assessing the behavioral and physiological responses of the fish when subjected to toxicity tests, such as locomotor activity, body color, ventilation patterns, cough rate, heart rate, and others. It has been proposed that modes of toxic action could be estimated by developing a data set of critical body residues (CBR). The CBR is the whole-body concentration of a chemical that is associated with a given adverse biological response, and it is estimated using a partition coefficient and a bioconcentration factor. The whole-body residues are reasonable first approximations of the amount of chemical present at the toxic action site(s). Because different modes of toxic action generally appear to be associated with different ranges of body residues, modes of toxic action can then be separated into categories. However, it is unlikely that every chemical has the same mode of toxic action in every organism, so this variability should be considered. The effects of mixture toxicity should be considered as well: even though mixture toxicity is generally additive, chemicals with more than one mode of toxic action may contribute to toxicity. Modeling has become a commonly used tool for predicting modes of toxic action in the last decade. The models are based on Quantitative Structure-Activity Relationships (QSARs), which are mathematical models that relate the biological activity of molecules to their chemical structures and corresponding chemical and physicochemical properties. QSARs can then predict modes of toxic action of unknown compounds by comparing their characteristic toxicity profiles and chemical structures to those of reference compounds with known toxicity profiles and chemical structures. Russom and colleagues were among the first researchers to classify modes of toxic action with the use of QSARs; they classified 600 chemicals as narcotics. Even though QSARs are a useful tool for predicting modes of toxic action, chemicals having multiple modes of toxic action can obscure QSAR analyses. Therefore, these models are continuously being developed. Applications Environmental risk assessment The objective of environmental risk assessment is to protect the environment from adverse effects. Researchers are further developing QSAR models with the ultimate goal of providing clear insight not only into the mode of toxic action, but also into what the actual target site is, the concentration of the chemical at this target site, and the interaction occurring at the target site, as well as predicting the modes of toxic action in mixtures. Information on the mode of toxic action is crucial not only in understanding joint toxic effects and potential interactions between chemicals in mixtures, but also for developing assays for the evaluation of complex mixtures in the field. Regulation The combination of behavioral and physiological responses, CBR estimates, and chemical fate and bioaccumulation QSAR models can be a powerful regulatory tool to address pollution and toxicity in areas where effluents are discharged. References Biochemistry Toxicology Environmental toxicology Toxicants
Modes of toxic action
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
1,355
[ "Toxicology", "Harmful chemical substances", "Environmental toxicology", "Materials", "Toxicants", "nan", "Biochemistry", "Matter" ]
39,396,833
https://en.wikipedia.org/wiki/Self-consistency%20principle%20in%20high%20energy%20physics
The self-consistency principle was established by Rolf Hagedorn in 1965 to explain the thermodynamics of fireballs in high energy physics collisions. It extends the thermodynamical approach to high energy collisions first proposed by E. Fermi. Partition function The partition function of the fireballs can be written in two forms, one in terms of its density of states, σ(E), and the other in terms of its mass spectrum, ρ(m). The self-consistency principle says that both forms must be asymptotically equivalent for energies or masses sufficiently high (asymptotic limit). Also, the density of states and the mass spectrum must be asymptotically equivalent in the sense of the weak constraint proposed by Hagedorn, namely that log ρ(m)/log σ(m) → 1 as m → ∞. These two conditions are known as the self-consistency principle or bootstrap idea. After a long mathematical analysis, Hagedorn was able to prove that there are in fact a ρ(m) and a σ(E) satisfying the above conditions: both grow exponentially, behaving asymptotically as a power of the mass (or energy) multiplied by exp(m/T₀), with the powers and the constant T₀ related by the self-consistency requirement. The asymptotic partition function, which schematically involves the integral of ρ(m) exp(−m/T) over the mass, then exhibits a singularity that is clearly observed as T → T₀, since the exponential growth of the mass spectrum is no longer compensated by the Boltzmann factor. This singularity determines the limiting temperature in Hagedorn's theory, which is also known as the Hagedorn temperature. Hagedorn was able not only to give a simple explanation for the thermodynamical aspect of high energy particle production, but also worked out a formula for the hadronic mass spectrum and predicted the limiting temperature for hot hadronic systems. After some time this limiting temperature was shown by N. Cabibbo and G. Parisi to be related to a phase transition, which is characterized by the deconfinement of quarks at high energies. The mass spectrum was further analyzed by Steven Frautschi. Q-exponential function The Hagedorn theory was able to describe correctly the experimental data from collisions with center-of-mass energies up to approximately 10 GeV, but above this region it failed. In 2000 I. Bediaga, E. M. F. Curado and J. M. de Miranda proposed a phenomenological generalization of Hagedorn's theory by replacing the exponential function that appears in the partition function by the q-exponential function, e_q(x) = [1 + (1 − q)x]^(1/(1−q)), from the Tsallis non-extensive statistics. With this modification the generalized theory was able again to describe the extended experimental data. In 2012 A. Deppman proposed a non-extensive self-consistent thermodynamical theory that includes the self-consistency principle and the non-extensive statistics. This theory gives as a result the same formula proposed by Bediaga et al., which describes correctly the high energy data, but also new formulas for the mass spectrum and density of states of the fireball. It also predicts a new limiting temperature and a limiting entropic index. References Particle physics Nuclear physics Principles
Self-consistency principle in high energy physics
[ "Physics" ]
578
[ "Particle physics", "Nuclear physics" ]
39,396,979
https://en.wikipedia.org/wiki/Gerris%20%28software%29
Gerris is computer software in the field of computational fluid dynamics (CFD). Gerris was released as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2 or any later version. Scope Gerris solves the Navier–Stokes equations in 2 or 3 dimensions, allowing the modelling of industrial fluids (aerodynamics, internal flows, etc.) or, for instance, the mechanics of droplets, thanks to an accurate formulation of multiphase flows (including surface tension). The latter field of study is the reason why the software shares its name with the insect genus. Gerris also provides features relevant to geophysical flows: ocean tide tsunamis river flow eddies in the ocean sea state (surface waves) Flow types #1 to #3 were studied using the shallow-water solver included in Gerris, case #4 brings in the primitive equations, and application #5 relies on the spectral equations for generation/propagation/dissipation of swell (and/or wind sea): for this purpose Gerris makes use of the source terms from WaveWatchIII. Lastly, one can note that the (non-hydrostatic) Navier–Stokes solver was also used in the ocean to study: fluvial plumes internal waves hydrothermal convection By contrast, Gerris does not allow the modeling of compressible fluids (supersonic flows). Numerical scheme Several methods can be used to provide a numerical solution to partial differential equations: finite differences finite volumes finite elements Gerris belongs to the finite volume family of CFD models. Type of grid Most models use meshes which are either structured (Cartesian or curvilinear grids) or unstructured (triangular, tetrahedral, etc.). Gerris is quite different in this respect: it implements a compromise between structured and unstructured meshes by using a tree data structure, allowing local (and dynamic) refinement of the (finite-volume) description of the pressure and velocity fields. Indeed, the grid evolves in the course of a given simulation owing to criteria defined by the user (e.g. dynamic refinement of the grid in the vicinity of sharp gradients). Turbulent closure Gerris mainly aims at DNS; the range of Reynolds numbers available to the user thus depends on the computing power they can afford (although the auto-adaptive mesh allows one to focus the computing resources on the coherent structures). According to the Gerris FAQ, the implementation of turbulence models will focus on the LES family rather than RANS approaches. Programming language, library dependencies, included tools Gerris is developed in C using the libraries Glib (object orientation, dynamic loading of modules, etc.) and GTS. The latter brings in facilities to perform geometric computations such as triangulation of solid surfaces and their intersection with fluid cells. Moreover, Gerris is fully compliant with MPI parallelisation (including dynamic load balancing). Gerris does not need a meshing tool, since the local (and time-dependent) refinement of the grid is handled by the solver itself. 
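To illustrate the idea of criterion-driven local refinement on a quadtree — the kind of tree-based grid adaptation described above — here is a schematic Python sketch. This is not Gerris's actual implementation or file format; the refinement criterion, the test field and the depth limit are purely illustrative.

```python
import numpy as np

class Cell:
    """A square quadtree cell; leaves carry the discrete solution."""
    def __init__(self, x, y, size, level):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []  # empty for leaf cells

    def refine_by_criterion(self, field, threshold, max_level):
        """Split the cell into 4 children wherever the field varies sharply."""
        if self.level >= max_level:
            return
        # Estimate local variation of the field across the cell (crude gradient proxy)
        corners = [field(self.x, self.y),
                   field(self.x + self.size, self.y),
                   field(self.x, self.y + self.size),
                   field(self.x + self.size, self.y + self.size)]
        if max(corners) - min(corners) > threshold:
            half = self.size / 2
            self.children = [Cell(self.x + dx, self.y + dy, half, self.level + 1)
                             for dx in (0, half) for dy in (0, half)]
            for child in self.children:
                child.refine_by_criterion(field, threshold, max_level)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Example: refine around a sharp "interface" located on the circle x^2 + y^2 = 0.25
field = lambda x, y: np.tanh(50 * (x**2 + y**2 - 0.25))
root = Cell(-1.0, -1.0, 2.0, level=0)
root.refine_by_criterion(field, threshold=0.5, max_level=7)
print(f"{len(root.leaves())} leaf cells, concentrated near the interface")
```

The practical benefit is the one mentioned above: fine cells are spent only where the solution has sharp gradients, while smooth regions are covered by a few coarse cells.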
As far as solid surfaces are concerned, several input formats are recognized: analytic formulas in the parameter file GTS triangulated files; note that the Gerris distribution includes a tool to translate the STL format (exported by various CAD software) into GTS triangulated surfaces bathymetric/topographic databases in KDT format; a tool is also provided to generate such a database from simple ASCII listings Among the various ways to output Gerris results, let us just mention here: Graphical output in PPM format: images can then be converted to (nearly) any format using ImageMagick, and MPEG movies can be generated thanks to FFmpeg (among others). Simulation files (.gfs), which are actually parameter files concatenated with fields produced by the simulation; these files can then be (i) re-used as parameter files (defining new initial conditions), or (ii) processed with Gfsview. Gfsview, display software shipped with Gerris, able to cope with the tree structure of the Gerris grid (a data structure which is not handled efficiently by general visualization software). Licence CFD software, like any software, can be developed in various "realms": Business; Academic; Open Source. As far as CFD is concerned, a thorough discussion of these software development paths can be found in the statement by Zaleski. Gerris was distributed as free and open-source software right from the onset of the project. Continued development Following a redesign of the software organization, Gerris became Basilisk, which allows one to develop one's own solver (not necessarily in fluid mechanics) using various data structures (including of course the quadtree/octree) and optimized operators for iteration, derivation, etc. Solvers are written in C, more specifically in Basilisk C. However, many solvers are available "turnkey", including Navier–Stokes and Saint-Venant. See also Other computational software is freely available in the field of fluid mechanics. Here are some examples (if the development was not initially under a free license, the year when it moved to open source is mentioned in parentheses): Industrial fluids Advanced Simulation Library (2015) Code Saturne (2007) FEATool Multiphysics (2013) OpenFOAM (2004) SU2 code (2012) Geophysical fluids POM (1999) ROMS GOTM Telemac (2010, 2011 for Mascaret) Delft3D (2011) Notes References Free software programmed in C Computational fluid dynamics Computer-aided engineering software for Linux Scientific simulation software Software using the GNU General Public License
Gerris (software)
[ "Physics", "Chemistry" ]
1,198
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
28,532,992
https://en.wikipedia.org/wiki/Electromagnetic%20pump
An electromagnetic pump is a pump that moves liquid metal, molten salt, brine, or other electrically conductive liquid using electromagnetism. A magnetic field is set at right angles to the direction the liquid moves in, and a current is passed through it. This causes an electromagnetic force that moves the liquid. Applications include pumping molten solder in many wave soldering machines, pumping liquid-metal coolant, and magnetohydrodynamic drive. Working principle A magnetic field (brc) always exists around the current (I)-carrying conductor. When this current-carrying conductor is subjected to an external magnetic field (Bap), the conductor experiences a force perpendicular to the direction of I and Bap. This is because the magnetic field produced by the conductor and the applied magnetic field attempt to align with each other. A similar effect can be seen between two ordinary magnets. This principle is used in an electromagnetic pump. The current is fed through a conducting liquid. Two permanent magnets are arranged to produce a magnetic field Bap as shown in the figure. The supplied current has a current density (J) and the magnetic field associated with this current can be called "Reaction magnetic Field (brc)". The two magnetic fields Bap and brc attempt to align with each other. This causes mechanical motion of the fluid. Einstein–Szilard electromagnetic pump Designed for the Einstein–Szilard electromagnetic refrigerator (not the pumpless Einstein refrigerator), it uses electromagnetic induction to move conductive liquid metal without electrodes, to compress a working gas, pentane. It is a liquid linear induction motor. See also Magnetic flow meter Magnetohydrodynamic drive References Bibliography Baker, Richard S., Tessier, Manuel J. Handbook of Electromagnetic pump technology. 1987. osti 5041159. oclc 246618050. External links The Electromagnetic Pump, Lecture Demonstration Manual, University of Melbourne (archived) Analysis and Design of Electromagnetic Pump, 2010 Electromagnetic pumps, Carli Precimeter GmbH (archived) Pumps
Electromagnetic pump
[ "Physics", "Chemistry", "Materials_science" ]
418
[ "Pumps", "Materials science stubs", "Turbomachinery", "Physical systems", "Hydraulics", "Electromagnetism stubs" ]
28,534,565
https://en.wikipedia.org/wiki/Industrial%20agitator
Industrial agitators are machines used to stir or mix fluids in the chemical, food, pharmaceutical and cosmetic processing industries. Their uses include: mixing liquids together promoting the reactions of chemical substances keeping a homogeneous liquid bulk during storage increasing heat transfer (heating or cooling) Types Several different kinds of industrial agitators exist: mechanical agitators (rotating) static agitators (pipe fitted with baffles) rotating tank agitators (e.g., a concrete mixer) paddle type mixers agitators working with a pump blasting liquid agitator turning tanks to gas The choice of the agitator depends on the phase that needs to be mixed (one or several phases): liquids only, liquid and solid, liquid and gas, or liquid with solids and gas. Depending on the type of phase and the viscosity of the bulk, the agitator may be called a mixer, kneader, dough mixer, amongst others. Agitators used in liquids can be placed on the top of the tank in a vertical position, horizontally on the side of the tank, or less commonly, on the bottom of the tank. Principle of agitation Agitation is achieved by movement of the heterogeneous mass (liquid-solid phase). In mechanical agitators, this is the result of the rotation of an impeller. The bulk can be composed of different substances, and the aim of the operation is to blend it or to improve the efficiency of a reaction by better contact between reactive products. Agitation may also be used to increase heat transfer or to maintain particles in suspension. Data of an agitator The agitation of the liquid is produced by one or several agitation impellers. Depending on its shape, the impeller can generate: the movement of the liquid, which is characterized by its velocity and direction; turbulence, which is an erratic variation in space and time of the local fluid velocity; shearing, given by a velocity gradient between two adjacent filaments of fluid. The latter two phenomena govern the energy consumption. Impellers Propellers (marine or hydrofoil) have axial inlet and outlet flow, preferably downward; they are characterized by a good pumping flow, low energy consumption and low shear magnitude, as well as low turbulence. An impeller is a rotor that produces a sucking force, and is part of a pump. Turbines (flat blades or pitched blades), whose inlet flow is axial and outlet flow is radial, provide shearing and turbulence and need approximately 20 times more energy than propellers, for the same diameter and the same rotation speed. Mechanical features An agitator is composed of a drive device (motor, gear reducer, belts…), a guiding system for the shaft (a lantern fitted with bearings), a shaft and impellers. If the operating conditions involve high pressure or high temperature, the agitator must be equipped with a sealing system to keep the inside of the tank tight where the shaft passes through it. If the shaft is long (> 10 m), it can be guided by a bearing located at the bottom of the tank (bottom bearing). References Industrial machinery
Industrial agitator
[ "Engineering" ]
631
[ "Industrial machinery" ]
28,540,674
https://en.wikipedia.org/wiki/Gasochromism
Gasochromism is closely related to electrochromism. The process involves the interaction of an electrochrome, usually a metal oxide, such as tungsten oxide, with an oxidizing or reducing gas, commonly oxygen and hydrogen, producing reversible color changes. The gasochromic technology is used commercially in reversible smart windows and gas sensing of oxygen, hydrogen, nitric oxide, hydrogen sulphide and carbon monoxide. References Monk, M.S., Mortimer, R.J., & Rosseinsky, D.R., Electrochromism and Electrochromic Devices, Cambridge UP, UK, 2007. Chromism
Gasochromism
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
139
[ "Spectroscopy stubs", "Materials science stubs", "Spectrum (physical sciences)", "Chromism", "Astronomy stubs", "Materials science", "Smart materials", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
28,541,067
https://en.wikipedia.org/wiki/Vapochromism
In chemistry, vapochromism strongly overlaps with solvatochromism, since vapochromic systems are ones in which dyes change colour in response to the vapour of an organic compound or gas. Vapochromic devices are the optical branch of electronic noses. The main applications are in sensors for detecting volatile organic compounds (VOCs) in a variety of environments, including industrial, domestic and medical areas. An example of such a device is an array consisting of a metalloporphyrin (Lewis acid), a pH indicator dye and a solvatochromic dye. The array is scanned with a flat-bed recorder, and the results are compared with a library of known VOCs. Vapochromic materials are sometimes Pt or Au complexes, which undergo distinct color changes when exposed to VOCs. References Chromism Spectroscopy
Vapochromism
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
177
[ "Spectroscopy stubs", "Materials science stubs", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Chromism", "Astronomy stubs", "Materials science", "Smart materials", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
40,710,975
https://en.wikipedia.org/wiki/SNV%20calling%20from%20NGS%20data
SNV calling from NGS data is any of a range of methods for identifying the existence of single nucleotide variants (SNVs) from the results of next generation sequencing (NGS) experiments. These are computational techniques, and are in contrast to special experimental methods based on known population-wide single nucleotide polymorphisms (see SNP genotyping). Due to the increasing abundance of NGS data, these techniques are becoming increasingly popular for performing SNP genotyping, with a wide variety of algorithms designed for specific experimental designs and applications. In addition to the usual application domain of SNP genotyping, these techniques have been successfully adapted to identify rare SNPs within a population, as well as detecting somatic SNVs within an individual using multiple tissue samples. Methods for detecting germline variants Most NGS-based methods for SNV detection are designed to detect germline variations in the individual's genome. These are the mutations that an individual biologically inherits from their parents, and are the usual type of variants searched for when performing such analysis (except for certain specific applications where somatic mutations are sought). Very often, the variants searched for occur with some (possibly rare) frequency throughout the population, in which case they may be referred to as single nucleotide polymorphisms (SNPs). Technically, the term SNP refers only to these kinds of variations; however, in practice it is often used synonymously with SNV in the literature on variant calling. In addition, since the detection of germline SNVs requires determining the individual's genotype at each locus, the phrase "SNP genotyping" may also be used to refer to this process. However, this phrase may also refer to wet-lab experimental procedures for classifying genotypes at a set of known SNP locations. The usual process of such techniques is based around: Filtering the set of NGS reads to remove sources of error/bias Aligning the reads to a reference genome Using an algorithm, either based on a statistical model or some heuristics, to predict the likelihood of variation at each locus, based on the quality scores and allele counts of the aligned reads at that locus Filtering the predicted results, often based on metrics relevant to the application SNP annotation to predict the functional effect of each variation. The usual output of these procedures is a VCF file. Probabilistic methods In an ideal error-free world with high read coverage, the task of variant calling from the results of an NGS data alignment would be simple: at each locus (position on the genome), the number of occurrences of each distinct nucleotide among the reads aligned at that position can be counted, and the true genotype would be obvious: either AA if all nucleotides match allele A, BB if they match allele B, or AB if there is a mixture. However, when working with real NGS data this sort of naive approach is not used, as it cannot account for the noise in the input data. The nucleotide counts used for base calling contain errors and bias, due both to the sequenced reads themselves and to the alignment process. This issue can be mitigated to some extent by sequencing to a greater depth of read coverage; however, this is often expensive, and many practical studies require making inferences from low-coverage data. 
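For concreteness, the naive counting rule just described can be written in a few lines; the sketch below is purely illustrative (the 10%/90% allele-fraction thresholds are arbitrary, and real pileups are rarely this clean):

```python
from collections import Counter

def naive_genotype(pileup, ref="A", alt="G"):
    """Call a genotype at one locus from the bases of the aligned reads (the pileup),
    ignoring base quality and alignment errors entirely."""
    counts = Counter(pileup)
    depth = counts[ref] + counts[alt]
    if depth == 0:
        return "./."                      # no usable coverage
    alt_fraction = counts[alt] / depth
    if alt_fraction < 0.1:
        return ref + ref                  # homozygous reference (AA)
    elif alt_fraction > 0.9:
        return alt + alt                  # homozygous alternate (BB)
    return ref + alt                      # heterozygous (AB)

print(naive_genotype(list("AAAAAAAAAAAG")))  # one 'G' in 12 reads is likely an error -> AA
print(naive_genotype(list("AAGGAGAGGA")))    # roughly half and half -> AG
```

As the surrounding text notes, this rule breaks down at low coverage and in the presence of sequencing or alignment noise, which motivates the probabilistic treatment below.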
Probabilistic methods aim to overcome the above issue by producing robust estimates of the probabilities of each of the possible genotypes, taking into account noise, as well as other available prior information that can be used to improve estimates. A genotype can then be predicted based on these probabilities, often according to the MAP estimate. Probabilistic methods for variant calling are based on Bayes' theorem. In the context of variant calling, Bayes' theorem defines the probability of each genotype being the true genotype given the observed data, in terms of the prior probabilities of each possible genotype, and the probability distribution of the data given each possible genotype. The formula is: P(G | D) = P(D | G)·P(G) / Σ_{i=1..n} P(D | G_i)·P(G_i). In the above equation: D refers to the observed data, that is, the aligned reads; G is the genotype whose probability is being calculated; G_i refers to the i-th possible genotype, out of n possibilities. Given the above framework, different software solutions for detecting SNVs vary based on how they calculate the prior probabilities P(G_i), the error model used to model the data probabilities P(D | G_i), and the partitioning of the overall genotype into separate sub-genotypes, whose probabilities can be individually estimated in this framework. Prior genotype probability estimation The calculation of prior probabilities depends on available data from the genome being studied, and the type of analysis being performed. For studies where good reference data containing frequencies of known mutations is available (for example, in studying human genome data), these known frequencies of genotypes in the population can be used to estimate priors. Given population-wide allele frequencies, prior genotype probabilities can be calculated at each locus according to the Hardy–Weinberg equilibrium. In the absence of such data, constant priors can be used, independent of the locus. These can be set using heuristically chosen values, possibly informed by the kind of variations being sought by the study. Alternatively, supervised machine-learning procedures have been investigated that seek to learn optimal prior values for individuals in a sample, using supplied NGS data from these individuals. Error models for data observations The error model used in creating a probabilistic method for variant calling is the basis for calculating the P(D | G_i) term used in Bayes' theorem. If the data were assumed to be error-free, then the distribution of observed nucleotide counts at each locus would follow a binomial distribution, with 100% of nucleotides matching the A or B allele respectively in the AA and BB cases, and a 50% chance of each nucleotide matching either A or B in the AB case. However, in the presence of noise in the read data this assumption is violated, and the P(D | G_i) values need to account for the possibility that erroneous nucleotides are present in the aligned reads at each locus. A simple error model is to introduce a small error term in the data probability for the homozygous cases, allowing a small constant probability that nucleotides which do not match the A allele are observed in the AA case, and respectively a small constant probability that nucleotides not matching the B allele are observed in the BB case. However, more sophisticated procedures are available which attempt to replicate more realistically the actual error patterns observed in real data when calculating the conditional data probabilities. 
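A minimal sketch of this Bayesian calculation — combining Hardy–Weinberg priors with the simple constant-error model just described — might look as follows. The allele frequency, error rate and pileup are illustrative only, and real callers use considerably more elaborate models.

```python
import numpy as np

def genotype_posteriors(pileup, ref="A", alt="G", alt_freq=0.2, error=0.01):
    """Posterior probability of each genotype at one locus via Bayes' theorem.
    pileup   : bases observed in the aligned reads at this locus
    alt_freq : population frequency of the alternate allele (for the Hardy-Weinberg prior)
    error    : constant per-base probability of observing the wrong nucleotide
    """
    p, q = 1.0 - alt_freq, alt_freq
    priors = {"AA": p * p, "AB": 2 * p * q, "BB": q * q}   # Hardy-Weinberg equilibrium

    def base_prob(base, genotype):
        """Probability of observing one read base under the given genotype."""
        if genotype == "AA":
            return 1 - error if base == ref else error
        if genotype == "BB":
            return 1 - error if base == alt else error
        return 0.5                                         # AB: read drawn from either allele

    likelihoods = {g: np.prod([base_prob(b, g) for b in pileup]) for g in priors}
    unnormalised = {g: likelihoods[g] * priors[g] for g in priors}
    total = sum(unnormalised.values())                     # denominator of Bayes' theorem
    return {g: unnormalised[g] / total for g in priors}

posteriors = genotype_posteriors(list("AAAAGAAAAA"))
print(posteriors)                                          # the single 'G' is absorbed as error
print("MAP call:", max(posteriors, key=posteriors.get))    # -> AA in this example
```

The MAP call simply selects the genotype with the highest posterior probability, as described above.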
For instance, estimations of read quality (measured as Phred quality scores) have been incorporated into these calculations, taking into account the expected error rate in each individual read at a locus. Another technique that has successfully been incorporated into error models is base quality recalibration, where separate error rates are calculated – based on prior known information about error patterns – for each possible nucleotide substitution. Research shows that each possible nucleotide substitution is not equally likely to show up as an error in sequencing data, and so base quality recalibration has been applied to improve error probability estimates. Partitioning of the genotype In the above discussion, it has been assumed that the genotype probabilities at each locus are calculated independently; that is, the entire genotype is partitioned into independent genotypes at each locus, whose probabilities are calculated independently. However, due to linkage disequilibrium the genotypes of nearby loci are in general not independent. As a result, partitioning the overall genotype instead into a sequence of overlapping haplotypes allows these correlations to be modelled, resulting in more precise probability estimates through the incorporation of population-wide haplotype frequencies in the prior. The use of haplotypes to improve variant detection accuracy has been applied successfully, for instance in the 1000 Genomes Project. Heuristic-based algorithms As an alternative to probabilistic methods, heuristic methods exist for performing variant calling on NGS data. Instead of modelling the distribution of the observed data and using Bayesian statistics to calculate genotype probabilities, variant calls are made based on a variety of heuristic factors, such as minimum allele counts, read quality cut-offs, bounds on read depth, etc. Although they have been relatively unpopular in comparison to probabilistic methods, in practice, due to their use of bounds and cut-offs, they can be robust to outlying data that violate the assumptions of probabilistic models. Reference genome used for alignment An important part of the design of variant calling methods using NGS data is the DNA sequence used as a reference to which the NGS reads are aligned. In human genetics studies, high-quality references are available, from sources such as the HapMap project, which can substantially improve the accuracy of the variant calls made by variant calling algorithms. As a bonus, such references can be a source of prior genotype probabilities for Bayesian-based analysis. However, in the absence of such a high-quality reference, experimentally obtained reads can first be assembled in order to create a reference sequence for alignment. Pre-processing and filtering of results Various methods exist for filtering data in variant calling experiments, in order to remove sources of error/bias. This can involve the removal of suspicious reads before performing alignment and/or filtering of the list of variants returned by the variant calling algorithm. Depending on the sequencing platform used, various biases may exist within the set of sequenced reads. For instance, strand bias can occur, where there is a highly unequal distribution of forward vs reverse directions in the reads aligned in some neighborhood. Additionally, some reads may appear with unusually high duplication (for instance due to bias in PCR). 
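One simple way to quantify strand bias at a candidate site — used, in one form or another, by several calling pipelines — is a Fisher's exact test on the 2×2 table of reference/alternate allele counts split by strand. The sketch below is illustrative only: the counts and the p-value cutoff are made up, and production filters typically combine this with other annotations.

```python
from scipy.stats import fisher_exact

def strand_bias_pvalue(ref_fwd, ref_rev, alt_fwd, alt_rev):
    """Fisher's exact test on the allele-by-strand contingency table.
    A very small p-value means the alternate allele is seen almost exclusively
    on one strand, which is suspicious for an artifact rather than a real SNV."""
    table = [[ref_fwd, ref_rev],
             [alt_fwd, alt_rev]]
    _, p_value = fisher_exact(table)
    return p_value

# Balanced support on both strands: consistent with a genuine variant
print(strand_bias_pvalue(ref_fwd=40, ref_rev=38, alt_fwd=21, alt_rev=19))

# Alternate allele seen only on the forward strand: flag as a probable strand-bias artifact
p = strand_bias_pvalue(ref_fwd=40, ref_rev=38, alt_fwd=20, alt_rev=0)
print(p, "-> filtered" if p < 0.001 else "-> kept")
```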
Such biases can result in dubious variant calls – for instance if a fragment containing a PCR error at some locus is over amplified due to PCR bias, that locus will have a high count of the false allele, and may be called as a SNV – and so analysis pipelines frequently filter calls based on these biases. Methods for detecting somatic variants In addition to methods that align reads from individual sample(s) to a reference genome in order to detect germline genetic variants, reads from multiple tissue samples within a single individual can be aligned and compared in order to detect somatic variants. These variants correspond to mutations that have occurred de novo within groups of somatic cells within an individual (that is, they are not present within the individual's germline cells). This form of analysis has been frequently applied to the study of cancer, where many studies are designed around investigating the profile of somatic mutations within cancerous tissues. Such investigations have resulted in diagnostic tools that have seen clinical application, and are used to improve scientific understanding of the disease, for instance by the discovery of new cancer-related genes, identification of involved gene regulatory networks and metabolic pathways, and by informing models of how tumors grow and evolve. Recent developments Until recently, software tools for carrying out this form of analysis have been heavily underdeveloped, and were based on the same algorithms used to detect germline variations. Such procedures are not optimized for this task, because they do not adequately model the statistical correlation between the genotypes present in multiple tissue samples from the same individual. More recent investigations have resulted in the development of software tools especially optimized for the detection of somatic mutations from multiple tissue samples. Probabilistic techniques have been developed that pool allele counts from all tissue samples at each locus, and using statistical models for the likelihoods of joint-genotypes for all the tissues, and the distribution of allele counts given the genotype, are able to calculate relatively robust probabilities of somatic mutations at each locus using all available data. In addition there has recently been some investigation in machine learning based techniques for performing this analysis. In 2021, the Sequencing Quality Control Phase 2 Consortium has published a number of studies that investigated the effects of sample preparations, sequencing library kits, sequencing platforms, and bioinformatics workflows on the accuracy of somatic SNV detection based on a pair of tumor-normal cell lines that the Consortium has established as the reference samples, data, and call sets. List of available software VarNet Big Data Genomics: Avocado Beagle DeepVariant Freebayes GATK (including MuTect) IMPUTE2 JointSNVMix MaCH Magnolia DCNN NeuSomatic NGSEP Pisces Platypus realSFS Reveel SAMtools SNVmix SOAPsnp SomaticSeq SomaticSniper Strelka VarDict VarScan References DNA sequencing Genetics techniques
SNV calling from NGS data
[ "Chemistry", "Engineering", "Biology" ]
2,655
[ "Genetics techniques", "Molecular biology techniques", "DNA sequencing", "Genetic engineering" ]
40,714,668
https://en.wikipedia.org/wiki/Microwave%20analog%20signal%20processing
Microwave Real-time Analog Signal Processing (R-ASP), as an alternative to DSP-based processing, might be defined as the manipulation of signals in their pristine analog form and in real time to realize specific operations enabling microwave or millimeter-wave and terahertz applications. The surging demand for higher spectral efficiency in radio has spurred a renewed interest in analog real-time components and systems beyond conventional purely digital signal processing techniques. Although they are unrivaled at low microwave frequencies, due to their high flexibility, compact size, low cost and strong reliability, digital devices suffer of major issues, such as poor performance, high cost of A/D and D/A converters and excessive power consumption, at higher microwave and millimeter-wave frequencies. At such frequencies, analog devices and related real-time or analog signal processing (ASP) systems, which manipulate broadband signals in the time domain, may be far preferable, as they offer the benefits of lower complexity and higher speed, which may offer unprecedented solutions in the major areas of radio engineering, including communications, but also radars, sensors, instrumentation and imaging. This new technology might be seen as microwave and millimeter-wave counterpart of ultra-fast optics signal processing, and has been recently enabled by a wide range of novel phasers, that are components following arbitrary group delay versus frequency responses. The core of microwave analog signal processing is the dispersive delay structure (DDS), which differentiates frequency components of an input signal based on the group delay frequency response of the DDS. In this structure, a linear up-chirp DDS delays higher-frequency components, while a down-chirp DDS delays lower-frequency components. This frequency-selective delay characteristic makes the DDS ideal as a foundational element in microwave analog signal processing applications, such as real-time Fourier transformation. Designing DDS systems with customizable group delay responses, especially when integrated with ultra-wideband (UWB) systems, enables a broad spectrum of applications in advanced microwave signal processing. Applications RFID System: Over the past few years, RFID systems have gained significant attention in the microwave community due to their applications in areas like communications, logistics, transportation, and security. A typical RFID system consists of a reader (interrogator) and multiple tags, which can operate over both long and short distances. RFID tags are either active or passive, with passive tags further divided into chip-based and chipless types. Chipless tags are particularly attractive due to their low cost, as they lack integrated circuits. Conventional time-domain RFIDs rely on pulse-position modulation (PPM) coding but are prone to interference from reflections. A new approach addresses this by using transmission-type all-pass dispersive delay structures (DDS/Phaser) to generate PPM codes, offering a simple, passive, and frequency-scalable RFID solution. Frequency Meter: A dispersive delay structure (DDS) with a linear group delay response can be utilized in frequency meter applications by mapping the frequency of an incoming signal to a time delay. As the input signal travels through the DDS, each frequency component experiences a different delay, allowing the system to distinguish between frequencies based on their time delays. 
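The frequency-to-time mapping performed by such a linear group-delay element can be illustrated numerically. The following sketch is purely a signal-level simulation with illustrative parameters (sample rate, tone frequencies, delay slope), not a physical phaser design: it applies an ideal all-pass response whose group delay increases linearly with frequency to a short burst containing two tones, and shows that the two tones emerge at different times.

```python
import numpy as np

fs = 20e9                            # sample rate: 20 GS/s (illustrative)
t = np.arange(0, 200e-9, 1 / fs)     # 200 ns observation window

# Input: a short burst containing two tones at 2 GHz and 3 GHz
burst = np.exp(-((t - 10e-9) / 2e-9) ** 2)          # ~2 ns Gaussian envelope
x = burst * (np.cos(2 * np.pi * 2e9 * t) + np.cos(2 * np.pi * 3e9 * t))

# Ideal all-pass "phaser": group delay increasing linearly with frequency,
# tau(f) = tau0 + k*(f - f0)  ->  phase(f) = -2*pi*(tau0*(f - f0) + k*(f - f0)**2 / 2)
f = np.fft.rfftfreq(t.size, 1 / fs)
f0, tau0, k = 2e9, 40e-9, 40e-18     # 40 ns at 2 GHz, +40 ns per additional GHz
phase = -2 * np.pi * (tau0 * (f - f0) + 0.5 * k * (f - f0) ** 2)
H = np.exp(1j * phase)               # unit magnitude: an ideal all-pass element

y = np.fft.irfft(np.fft.rfft(x) * H, n=t.size)

# Locate the two output pulses from the magnitude of the output (a crude envelope)
envelope = np.abs(y)
print("2 GHz tone arrives near", t[np.argmax(envelope[t < 70e-9])], "s")
print("3 GHz tone arrives near", t[np.argmax(envelope * (t > 70e-9))], "s")
```

With these illustrative values, the 2 GHz content is delayed by roughly 40 ns and the 3 GHz content by roughly 80 ns, so the two frequencies are separated in time — the same mapping that the frequency meter and real-time Fourier transformation applications described here rely on, and increasing the slope k widens the separation, as discussed next.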
By increasing the slope of the group delay versus frequency (i.e., enhancing the rate of change of delay with frequency), the time delay difference between two closely spaced frequencies becomes larger. This increased time separation allows for finer resolution in distinguishing closely spaced frequencies, thus improving the frequency resolution of the meter. FDM Receiver: A dispersive delay structure (DDS) also called Phaser with a linear group delay response can simplify frequency division multiplexing (FDM) by mapping each frequency component of the multiplexed signal to a specific time delay. In such an FDM system, a C-section all-pass DDS separates the signal's frequencies in the time domain, eliminating the need for complex analog and digital circuits typically used in conventional FDM receivers. This purely analog approach not only reduces system complexity but also avoids the limitations of digital circuits, such as high power consumption, low speed, and increased cost at high frequencies, while offering scalability across different frequency ranges. Pulse Compression: Microwave analog signal processing can compress pulses and create wideband pulses using low-cost techniques that capitalize on analog approaches. Spectrum Sniffing: A dispersive delay structure can play a crucial role in low-cost time-domain spectrum sniffing for cognitive radio systems. This approach leverages a group-delay phaser, which enables real-time frequency discrimination without the limitations typically associated with conventional digital spectrum sniffers that rely on fast Fourier transform (FFT) techniques. The conventional digital systems often require complex and expensive processors, particularly when handling large bandwidths and high frequencies. In contrast, the phaser-based design utilizes the passive and broadband nature of dispersive delay structures, resulting in a simple, cost-effective, and frequency-scalable architecture. By mitigating the issue of pulse spreading, which can impair frequency resolution in traditional phasers, this innovative method allows for efficient real-time spectrum analysis, identifying available frequency bands for opportunistic use, thus enhancing channel reliability and data throughput in wireless networks. Real-Time Sector Detection System: The leaky-wave antenna (LWA), as a type of dispersive structure, can be effectively utilized for real-time signal processing to create a system for incoming frequency sector detection. Its unique design allows it to radiate energy continuously along its length, making it sensitive to incoming signals from different directions and frequencies. By reconfiguring the LWA, the system can dynamically steer its detection capabilities to focus on specific angles of arrival. This enables the identification of the direction and frequency of incoming signals in real time, facilitating enhanced spectrum awareness. Coupled with a tunable bandpass filter, the LWA can isolate and analyze specific frequency bands, thereby providing valuable information about spectrum occupancy and enabling cognitive radio systems to opportunistically exploit available channels for improved efficiency and reliability in wireless communications. Enhanced-SNR Impulse Radio Transceiver: Dispersive delay structures (DDS), specifically phasers with opposite chirping slopes, can significantly enhance the signal-to-noise ratio (SNR) of wideband impulse radio transceivers. 
In this approach, the transmitted impulse is up-chirped using an up-chirp phaser, which stretches the pulse duration while reducing its peak power, allowing a more efficient transmission with less risk of exceeding power spectral density limits. Upon reception, the incoming signal, which contains both the desired impulse and noise, is processed through a down-chirp phaser. This phaser compresses the received chirped signal back into a sharp impulse while spreading out burst noise, which had not been pre-chirped, thus mitigating its impact; Gaussian noise remains unaffected in its spectral characteristics. As a result, the desired signal is enhanced relative to the noise, with reported SNR improvements of several times for both burst and Gaussian noise. This simple and low-cost system benefits from the broadband nature of phasers, making it suitable for applications in impulse radio ranging and communications. Dispersion-code Multiple Access (DCMA): Dispersion-code multiple access (DCMA) is a patented communication technique that uses Chebyshev polynomials to encode and transmit multiple data streams over a shared medium. Each data input, consisting of impulses, is encoded using a distinct Chebyshev polynomial order to create a unique dispersive frequency pattern. This encoding ensures that the signals are sufficiently dispersed and distinguishable, allowing multiple users or data streams to coexist without interference. The encoded signals are then transmitted simultaneously through a common channel. At the receiver, the system applies an inverse Chebyshev response, acting as a dispersive delay structure, to decode and recover each individual data stream. This decoding process allows even weak signals, potentially buried below the noise level, to be recovered, making the technique robust against noise and interference. DCMA offers an efficient method for multiple-access communication, suitable for applications requiring strong noise immunity and efficient spectrum utilization, such as IoT networks, wireless communication, and secure data transfer.
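As a rough numerical illustration of the chirp operations used by the pulse-compression and enhanced-SNR transceiver schemes described above, the following sketch stretches an impulse with an ideal up-chirp all-pass response and re-compresses it with the conjugate down-chirp response; the parameters and the placement of the interfering burst are illustrative assumptions, not values from a published design.

```python
import numpy as np

# Sketch of chirp-based pulse stretching and matched compression.
fs = 20e9                                   # sample rate: 20 GS/s (illustrative)
t = np.arange(0, 1e-6, 1 / fs)
f = np.fft.fftfreq(t.size, 1 / fs)

k = 5e-17                                   # group-delay slope: 50 ns per GHz
phase = -2 * np.pi * np.sign(f) * 0.5 * k * np.abs(f) ** 2
H_up, H_down = np.exp(1j * phase), np.exp(-1j * phase)   # conjugate (matched) phasers

def through(x, H):
    """Pass a real signal through an all-pass phaser defined in the frequency domain."""
    return np.real(np.fft.ifft(np.fft.fft(x) * H))

# Short wideband impulse (Gaussian-windowed 3 GHz burst) launched at t = 50 ns.
pulse = np.exp(-((t - 50e-9) / 0.5e-9) ** 2) * np.cos(2 * np.pi * 3e9 * (t - 50e-9))

tx = through(pulse, H_up)                   # transmitted: stretched, lower peak power
spike = np.zeros_like(t)
spike[np.argmin(np.abs(t - 600e-9))] = 0.5  # un-chirped burst interference in the channel

sig_out = through(tx, H_down)               # wanted signal: re-compressed to a sharp peak
noise_out = through(spike, H_down)          # burst interference: spread, peak reduced

print("peak signal/interference at receiver input:",
      np.max(np.abs(tx)) / np.max(np.abs(spike)))
print("peak signal/interference after down-chirp :",
      np.max(np.abs(sig_out)) / np.max(np.abs(noise_out)))
```

The wanted impulse is restored to its original sharp peak, while the un-chirped burst is spread out by the down-chirp element, so the peak signal-to-interference ratio improves markedly, in line with the qualitative description above.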
Advantages and Challenges Microwave real-time analog signal processing presents a transformative approach to signal processing, particularly at high frequencies where traditional digital signal processing (DSP) methods face limitations. One of the primary advantages of R-ASP is its ability to manipulate signals in their pristine analog form, allowing for lower complexity and faster processing speeds. This is crucial in applications requiring high spectral efficiency, such as communications, radar, and imaging. Additionally, R-ASP leverages dispersive delay structures, or phasers, which enhance resolution and enable real-time operations without the latency often associated with digital systems. However, despite its benefits, R-ASP encounters several challenges that must be addressed. The enhancement of resolution, achieved through the manipulation of group delay, often leads to increased size and insertion loss in the system. These factors can compromise efficiency and signal integrity, particularly in high-bandwidth applications. Furthermore, designing and fabricating phasers with the desired higher-order group-delay responses is technically complex and costly, which may hinder the widespread implementation of R-ASP technologies. To address these challenges, several strategies can be employed: Advanced Material Use: Exploring novel materials, such as metamaterials or photonic crystals, can provide enhanced properties for phasers, leading to reduced size and lower insertion loss. Optimization of Phaser Design: Implementing simulation-based design optimization tools can refine phaser characteristics, using techniques like machine learning to predict performance outcomes. Integrated Circuit Solutions: Investigating the integration of R-ASP components with existing semiconductor technologies can create compact, high-performance integrated circuits, leveraging both analog and digital processing strengths. Modular Design Approaches: Developing modular phaser designs that allow for easy adjustment or reconfiguration can optimize specific system requirements without necessitating entirely new designs. Enhanced Fabrication Techniques: Utilizing advanced fabrication methods, such as 3D printing, microfabrication, or lithography, can enable the creation of complex geometries at smaller scales, reducing overall system size while maintaining performance. Real-time Calibration and Feedback: Implementing real-time calibration techniques can dynamically adjust phaser characteristics based on operating conditions, ensuring optimal performance as environmental conditions change. Research Collaboration: Fostering collaboration between academia, industry, and research institutions can drive innovation in phaser technology and R-ASP applications, leading to more rapid advancements in the field. Prototype Testing and Iteration: Establishing a robust prototyping and testing framework allows for rapid iteration of designs, providing valuable insights into performance limitations and guiding future improvements. By focusing on these strategies, researchers and engineers can work towards overcoming the current challenges in R-ASP, ultimately enhancing its viability and performance in high-frequency applications. Balancing these challenges with the inherent advantages of R-ASP will be crucial for advancing its application in next-generation wireless systems and other critical areas. Conclusion Microwave real-time analog signal processing emerges as a crucial innovation addressing the challenges posed by purely digital signal processing at microwave and millimeter-wave frequencies. By enabling signal manipulation in its pristine analog form and leveraging dispersive delay structures such as phasers, R-ASP provides lower complexity, faster processing speeds, and reduced power consumption, which are critical for high-frequency applications. With its ability to perform complex operations like pulse compression, spectrum sniffing, and real-time Fourier transformation, R-ASP is transforming fields such as communication, sensing, radar, and instrumentation. Despite its advantages, R-ASP faces challenges, such as increased size and insertion loss associated with resolution enhancements, as well as complexities in phaser design and fabrication for higher-order responses. However, strategic approaches, such as utilizing advanced materials, optimizing phaser designs, integrating circuit solutions, and fostering research collaboration, offer pathways to overcome these limitations. Innovations like Dispersion Code Multiple Access (DCMA) exemplify the future potential of R-ASP by combining the unique encoding capability of Chebyshev polynomials with dispersive delay-based decoding.
DCMA enhances spectrum utilization by allowing multiple signals to coexist over shared media with minimal interference and excellent noise immunity, even at low signal-to-noise ratios. This seamless blend of analog signal processing principles with cutting-edge coding techniques offers transformative solutions for modern radio engineering, paving the way for high-performance communication systems and next-generation wireless applications. By continuing to address the inherent challenges of R-ASP, the field can further harness its capabilities, unlocking new opportunities and advancements in wireless technology. References Signal processing Microwave technology
Microwave analog signal processing
[ "Technology", "Engineering" ]
2,657
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
40,715,082
https://en.wikipedia.org/wiki/Hypogeal%20germination
Hypogeal germination (from Ancient Greek ὑπόγειος (hupógeios) 'below ground', from ὑπό (hupó) 'below' and γῆ (gê) 'earth, ground') is a botanical term indicating that the germination of a plant takes place below the ground. An example of a plant with hypogeal germination is the pea (Pisum sativum). The opposite of hypogeal is epigeal (above-ground germination). Germination Hypogeal germination implies that the cotyledons stay below the ground. The epicotyl (the part of the stem above the cotyledon) grows, while the hypocotyl (the part of the stem below the cotyledon) remains the same length. In this way, the epicotyl pushes the plumule above the ground. Normally, the cotyledon is fleshy and contains many nutrients that are used for germination. Because the cotyledon stays below the ground, it is much less vulnerable to, for example, night frost or grazing. The evolutionary strategy is that the plant produces a relatively low number of seeds, but each seed has a greater chance of surviving. Plants that show hypogeal germination need relatively little in the way of external nutrients to grow, and are therefore more frequent on nutrient-poor soils. The plants also need less sunlight, so they can be found more often in the middle of forests, where there is much competition to reach the sunlight. Plants that show hypogeal germination grow relatively slowly, especially in the first phase. In areas that are regularly flooded, they need more time between floods to develop. On the other hand, they are more resistant when a flood takes place. After the slower first phase, the plant develops faster than plants that show epigeal germination. It is possible that within the same genus one species shows hypogeal germination while another species shows epigeal germination. Some genera in which this happens are: Phaseolus: the runner bean (Phaseolus coccineus) shows hypogeal germination, whereas the common bean (Phaseolus vulgaris) shows epigeal germination Lilium: see Lily seed germination types Araucaria: species in the section Araucaria show hypogeal germination, whereas species in the section Eutacta show epigeal germination Phanerocotylar vs. cryptocotylar In 1965, botanist James A. Duke introduced the terms cryptocotylar ("hidden cotyledon") and phanerocotylar ("visible cotyledon") as synonyms for hypogeal and epigeal respectively, because he did not consider these terms etymologically correct. Later, it was discovered that there are rare cases of species where the germination is epigeal and cryptocotylar, such as Rollinia salicifolia. Therefore, divisions have been proposed that take both factors into account. References Plant reproduction
Hypogeal germination
[ "Biology" ]
639
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
45,421,080
https://en.wikipedia.org/wiki/7-Deoxyloganetic%20acid
7-Deoxyloganetic acid is an iridoid monoterpene. It is produced from nepetalactol or iridodial by the enzyme iridoid oxidase (IO). 7-Deoxyloganetic acid is a substrate for 7-deoxyloganetic acid glucosyltransferase (7-DLGT) which synthesizes 7-deoxyloganic acid. References Iridoids Carboxylic acids Cyclopentanes
7-Deoxyloganetic acid
[ "Chemistry" ]
107
[ "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
45,421,171
https://en.wikipedia.org/wiki/7-Deoxyloganic%20acid
7-Deoxyloganic acid is an iridoid monoterpene. 7-Deoxyloganic acid is produced from 7-deoxyloganetic acid by the enzyme 7-deoxyloganetic acid glucosyltransferase (7-DLGT). The metabolite is a substrate for the enzyme 7-deoxyloganic acid hydroxylase (7-DLH) which synthesizes loganic acid. References Iridoid glycosides Carboxylic acids Glucosides Cyclopentanes
7-Deoxyloganic acid
[ "Chemistry" ]
122
[ "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
45,426,505
https://en.wikipedia.org/wiki/Scholz%27s%20Star
Scholz's Star (WISE designation WISE 0720−0846 or fully WISE J072003.20−084651.2) is a dim binary stellar system from the Sun in the constellation Monoceros near the galactic plane. It was discovered in 2013 by astronomer Ralf-Dieter Scholz. In 2015, Eric Mamajek and collaborators reported that the system passed through the Solar System's Oort cloud roughly 70,000 years ago, and dubbed it Scholz's Star. Characteristics The primary is a red dwarf with a stellar classification of M and Jupiter masses. The secondary is probably a T5 brown dwarf with Jupiter masses. The system has 0.15 solar masses. The pair orbit at a distance of about with a period of roughly 4 years. The system has an apparent magnitude of 18.3, and is estimated to be between 3 and 10 billion years old. With a parallax of 166 mas (0.166 arcseconds), about 80 star systems are known to be closer to the Sun. It is a late discovery, as far as nearby stars go, because past efforts concentrated on high-proper-motion objects. Solar System flyby Estimates indicate that the WISE 0720−0846 system passed about from the Sun about 70,000 years ago. Ninety-eight percent of mathematical simulations of the star system's trajectory indicated that it passed through the Solar System's Oort cloud, or within of the Sun. Comets perturbed from the Oort cloud would require roughly two million years to get to the inner Solar System. At closest approach the system would have had an apparent magnitude of about 11.4, and would have been best viewed from high latitudes in the northern hemisphere. In 2018, research was published indicating that disturbance of the Oort cloud will have a greater effect than initial research had indicated. In a recent estimate, WISE J0720−0846AB passed within 68.7 ± 2.0 kAU of the Sun 80.5 ± 0.7 kyr ago. A later recalculation of the impact parameters using updated Solar System data showed that the perihelion distance during the encounter had a median value of 0.330 pc with a 90% probability of having come within 0.317–0.345 pc of the Sun; the associated time of perihelion passage was determined to be between 78.6–81.1 kyr ago with 90% confidence, with a most likely value of 79.9 kyr. A star is expected to pass through the Oort cloud every 100,000 years or so. An approach as close or closer than 52,000 AU is expected to occur about every 9 million years. In about 1.4 million years, Gliese 710 will come to a perihelion of between 8,800 and 13,700 AU. Naming The star was first discovered to be a nearby one by astronomer Ralf-Dieter Scholz, announced on arXiv in November 2013. Given the importance of the system having passed so close to the Solar System in prehistorical times, Eric Mamajek and collaborators dubbed the system Scholz's star in their paper discussing the star's velocity and past trajectory. See also List of nearest stars and brown dwarfs#Distant future and past encounters HIP 85605 Stars named after people References 201311?? Binary stars Brown dwarfs M-type main-sequence stars T-type brown dwarfs Monoceros Stars with proper names WISE objects J07200325-0846499 Oort cloud
Scholz's Star
[ "Astronomy" ]
735
[ "Astronomical hypotheses", "Oort cloud", "Monoceros", "Constellations" ]
45,426,846
https://en.wikipedia.org/wiki/Institute%20of%20Experimental%20Medicine%2C%20Academy%20of%20Sciences%20of%20the%20Czech%20Republic
The Institute of Experimental Medicine, Academy of Sciences of the Czech Republic (IEM) is focused on biomedical research, including cell biology, neuropathology, teratology, cancer research, molecular embryology, stem cells and nervous tissue regeneration. As a leading institution in this field in the Czech Republic, it was selected as an EU Center of Excellence (MEDIPRA). IEM is a member of the Network of European Neuroscience Institutes (ENI-NET). Departments Auditory Neuroscience Laboratory of Auditory Physiology and Pathology, Laboratory of Synaptic Physiology Genetic Ecotoxicology Laboratory of Molecular Epidemiology, Laboratory of Genetic Toxicology, Laboratory of Genomics Teratology Laboratory of Embryogenesis, Laboratory of Odontogenesis Molecular Biology of Cancer Laboratory of the Genetics of Cancer, Laboratory of DNA Repair Transplantation Immunology Laboratory of Eye Histochemistry and Pharmacology Neuroscience Laboratory of Diffusion Studies and Imaging Methods, Laboratory of Tissue Culture and Stem Cells, Laboratory of Biomaterials and Biophysical Methods Other departments Departments of Cellular Neurophysiology, Molecular Neurophysiology, Functional Organization of Biomembranes, Pharmacology, Tissue Engineering References External links Nanomedicine Neuroscience research centers in the Czech Republic Cancer organizations Czech Academy of Sciences 1975 establishments in Czechoslovakia Research institutes established in 1975 Medical research institutes in the Czech Republic
Institute of Experimental Medicine, Academy of Sciences of the Czech Republic
[ "Materials_science" ]
281
[ "Nanomedicine", "Nanotechnology" ]
45,428,442
https://en.wikipedia.org/wiki/Bernstein%E2%80%93Kushnirenko%20theorem
The Bernstein–Kushnirenko theorem (or Bernstein–Khovanskii–Kushnirenko (BKK) theorem), proven by David Bernstein and Anatoli Kushnirenko in 1975, is a theorem in algebra. It states that the number of non-zero complex solutions of a system of n Laurent polynomial equations in n variables is equal to the mixed volume of the Newton polytopes of the polynomials f_1, ..., f_n, assuming that all non-zero coefficients of the f_i are generic. A more precise statement is as follows: Statement Let A be a finite subset of Z^n. Consider the subspace L_A of the Laurent polynomial algebra C[x_1^(±1), ..., x_n^(±1)] consisting of Laurent polynomials whose exponents are in A. That is: L_A = { f = sum over a in A of c_a x^a, with complex coefficients c_a }, where for each a = (a_1, ..., a_n) in A we have used the shorthand notation x^a to denote the monomial x_1^(a_1) ... x_n^(a_n). Now take finite subsets A_1, ..., A_n of Z^n, with the corresponding subspaces of Laurent polynomials L_(A_1), ..., L_(A_n). Consider a generic system of equations from these subspaces, that is: f_1(x) = ... = f_n(x) = 0, where each f_i is a generic element in the (finite dimensional vector space) L_(A_i). The Bernstein–Kushnirenko theorem states that the number of solutions x in (C*)^n of such a system is equal to n! V(Δ_1, ..., Δ_n), where V denotes the Minkowski mixed volume and, for each i, Δ_i is the convex hull of the finite set of points A_i. Clearly, Δ_i is a convex lattice polytope; it can be interpreted as the Newton polytope of a generic element of the subspace L_(A_i). In particular, if all the sets A_i are the same, A_1 = ... = A_n = A, then the number of solutions of a generic system of Laurent polynomials from L_A is equal to n! vol(Δ), where Δ is the convex hull of A and vol is the usual n-dimensional Euclidean volume. Note that even though the volume of a lattice polytope is not necessarily an integer, it becomes an integer after multiplying by n!. Trivia Kushnirenko's name is also spelt Kouchnirenko. David Bernstein is a brother of Joseph Bernstein. Askold Khovanskii has found about 15 different proofs of this theorem. References See also Bézout's theorem for another upper bound on the number of common zeros of n polynomials in n indeterminates. Theorems in algebra Theorems in geometry
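To make the planar case concrete, the following sketch (not part of the original article) uses the fact that in two dimensions area(P + Q) = area(P) + 2 V(P, Q) + area(Q), so the generic root count n! V(Δ_1, Δ_2) equals area(Δ_1 + Δ_2) - area(Δ_1) - area(Δ_2). It relies only on scipy's convex hull routine for the areas; the example supports are chosen arbitrarily for illustration.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def area(points):
    """Area of the convex hull of a set of 2-D lattice points."""
    return ConvexHull(np.array(points, dtype=float)).volume   # .volume is area in 2-D

def bkk_count_2d(A1, A2):
    """Generic number of solutions in (C*)^2 for supports A1, A2.
    In the plane, n! * V(P, Q) = area(P + Q) - area(P) - area(Q)."""
    minkowski_sum = [(a[0] + b[0], a[1] + b[1]) for a, b in product(A1, A2)]
    return round(area(minkowski_sum) - area(A1) - area(A2))

# Bezout's theorem as a special case: generic degree-1 and degree-2 polynomials.
A1 = [(0, 0), (1, 0), (0, 1)]            # Newton polytope: standard simplex
A2 = [(0, 0), (2, 0), (0, 2)]            # Newton polytope: 2 * simplex
print(bkk_count_2d(A1, A2))              # -> 2

# Two generic bilinear polynomials a + b*x + c*y + d*x*y.
S = [(0, 0), (1, 0), (0, 1), (1, 1)]     # Newton polytope: unit square
print(bkk_count_2d(S, S))                # -> 2
```

The first pair of supports reproduces Bézout's count 1·2 = 2, and the pair of unit squares gives the familiar count of two intersection points for two generic bilinear curves.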
Bernstein–Kushnirenko theorem
[ "Mathematics" ]
413
[ "Theorems in algebra", "Geometry", "Theorems in geometry", "Mathematical problems", "Mathematical theorems", "Algebra" ]
45,428,589
https://en.wikipedia.org/wiki/Carbon%20fiber%20testing
Carbon fiber testing is a set of various tests that researchers use to characterize the properties of carbon fiber. The results of the testing are used to aid manufacturers' and developers' decisions in selecting and designing material composites and manufacturing processes, and to ensure safety and integrity. Safety-critical carbon fiber components, such as structural parts in machines, vehicles, aircraft or architectural elements, are subject to testing. Introduction Carbon fiber reinforced plastics and reinforced polymers are gaining importance as light-weight materials. There are various disciplines of material testing that especially apply to carbon fiber materials. Most common are destructive tests, such as stress, fatigue and micro-sectioning tests. There are also methods that allow non-destructive testing (NDT), so the material can still be used after testing. Common methods are ultrasonic, X-ray, HF eddy current, radio wave testing or thermography. Additionally, structural health monitoring (SHM) methods allow testing during application. Testing methods Destructive Testing Safety-critical carbon fiber parts, such as aircraft frames, need to be tested destructively (e.g. stress, fatigue) and non-destructively (e.g. fiber orientation, delamination and bonding). Three types of destructive testing are micro-sectioning, stress and fatigue tests. A form of fatigue testing for carbon fiber components is very high cycle fatigue (VHCF) testing. Common VHCF test methods are ultrasonic or resonance testing in tension, compression, or torsion. Typically, destructive tests are carried out to validate the mechanical properties, whereas NDT is used to monitor and control the manufacturing process of the CFRP parts. Non-Destructive Testing The aerospace industry relies on thermography testing to help detect defects in carbon fiber components. Ultrasonic testing of CFRP parts is the most popular form of NDT. Ultrasonic testing allows researchers to find anomalies in thin laminar composites, but it only works with parts that are no thicker than 50 mm. Radiographic testing utilizes short-wavelength electromagnetic radiation; the wavelength is so small that it can penetrate the CFRP where light cannot. X-ray testing can detect voids, porosity, inclusions, trans-laminar cracks, resin-to-fiber ratio, non-uniform fiber distribution and fiber orientation issues, such as fiber folds, wrinkles or weld lines. A limitation of X-ray testing is that a defect oriented perpendicular to the X-ray beam will not be detected. Thermography plays a major role in the aerospace industry; it is used to detect defects that could cause a carbon fiber component to fail with catastrophic consequences. Two types of thermography exist: active and passive. Both methods save money because the part being tested stays intact, and they are efficient because they can scan large areas at a time. As carbon fiber composites are highly individual in shape and material composition, novel NDT methods are an emerging and sought-after field of application. Applicable technologies are radio wave testing, high-frequency eddy current testing, thermography, shearography, air-coupled laser ultrasonics and terahertz scanning. Typical effects and defects The specifications for the integrity of structurally relevant parts depend on the individual manufacturer.
However, the typically relevant quality criteria of the fiber texture are fiber orientation, gaps, wrinkles, overlaps, distortion, undulation and uniformity, as well as defects in the matrix such as delamination, inclusions, cracks, curing defects, voids and debonding. Furthermore, basis weight and carbon fiber volume content are important properties. Generally, defects and effects in carbon fiber materials are classified according to their location as structural defects (carbon fiber related) and matrix defects (resin related). Carbon fiber related effects are tested with X-ray and high-frequency testing methods, whereas matrix effects are commonly tested with ultrasonic and thermographic methods. See also Carbon-fiber-reinforced polymer Carbon (fiber) Non-destructive testing References External links Video on 3D Testing of carbon fiber preforms Materials science Materials degradation Mechanical failure modes Materials testing Tests
Carbon fiber testing
[ "Physics", "Materials_science", "Technology", "Engineering" ]
829
[ "Structural engineering", "Mechanical failure modes", "Applied and interdisciplinary physics", "Technological failures", "Materials science", "Materials testing", "nan", "Materials degradation", "Mechanical failure" ]
42,145,994
https://en.wikipedia.org/wiki/C26H34O7
The molecular formula C26H34O7 (molar mass: 458.54 g/mol, exact mass: 458.2305 u) may refer to: Berkeleytrione Fumagillin Molecular formulas
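As a quick arithmetic check (not part of the original entry), the quoted molar mass can be reproduced from standard atomic weights:

```python
# Recompute the molar mass of C26H34O7 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 26, "H": 34, "O": 7}

molar_mass = sum(n * atomic_weight[el] for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")   # about 458.55, matching the quoted 458.54 g/mol to rounding
```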
C26H34O7
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
42,147,935
https://en.wikipedia.org/wiki/Kapitza%20number
The Kapitza number (Ka) is a dimensionless number named after the prominent Russian physicist Pyotr Kapitsa (Peter Kapitza), who provided the first extensive study of the ways in which a thin film of liquid flows down inclined surfaces. Expressed as the ratio of surface tension forces to inertial forces, the Kapitza number acts as an indicator of the hydrodynamic wave regime in falling liquid films. Liquid film behavior represents a subset of the more general class of free boundary problems, and is important in a wide range of engineering and technological applications such as evaporators, heat exchangers, absorbers, microreactors, small-scale electronics/microprocessor cooling schemes, air conditioning and gas turbine blade cooling. After World War II Kapitza was removed from all his positions, including director of his Institute for Physical Problems, for refusing to work on nuclear weapons. He was at his country house and devised experiments to work on there, including his experiments on falling films of liquid. Unlike most dimensionless numbers used in the study of fluid mechanics, the Kapitza number represents a material property, as it is formed by combining powers of the surface tension, density, gravitational acceleration and kinematic viscosity: Ka = σ / (ρ (g sin β)^(1/3) ν^(4/3)), where σ is the surface tension (SI units: N/m), g is gravitational acceleration (m/s2), ρ is density (kg/m3), β is inclination angle (rad), and ν is kinematic viscosity (m2/s). Notes References Fluid dynamics Flow regimes
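As a quick illustration of the definition above, the following sketch evaluates Ka for a water film on a vertical wall; the property values are typical room-temperature figures assumed for this example, not data from the article.

```python
import math

# Kapitza number: Ka = sigma / (rho * (g*sin(beta))**(1/3) * nu**(4/3))
sigma = 0.072       # surface tension of water, N/m (assumed ~25 degC value)
rho = 997.0         # density, kg/m^3
nu = 0.893e-6       # kinematic viscosity, m^2/s
g = 9.81            # gravitational acceleration, m/s^2
beta = math.pi / 2  # inclination angle, rad (vertical wall)

Ka = sigma / (rho * (g * math.sin(beta)) ** (1 / 3) * nu ** (4 / 3))
print(f"Ka = {Ka:.0f}")   # on the order of a few thousand for water
```

Because Ka depends only on fluid properties (and the inclination), it stays fixed for a given liquid and geometry regardless of the flow rate, which is what makes it a convenient material parameter for classifying falling-film wave regimes.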
Kapitza number
[ "Chemistry", "Engineering" ]
321
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]